diff --git a/.gitattributes b/.gitattributes index 35b4bb20f16272da6e6cbdb732d0ab6877a69c54..d06acdc32d7b807303bd25f6200e3be5d726ad4d 100644 --- a/.gitattributes +++ b/.gitattributes @@ -1963,3 +1963,19 @@ data/2025/2503_19xxx/2503.19903/0906a2ad-9088-4fc5-8324-00e6a98a949f_origin.pdf data/2025/2503_19xxx/2503.19907/6d3e71a9-f367-43e8-9154-5a3f1c98eccc_origin.pdf filter=lfs diff=lfs merge=lfs -text data/2025/2503_20xxx/2503.20807/9709de1a-2664-4358-b8ae-7fae53399612_origin.pdf filter=lfs diff=lfs merge=lfs -text data/2025/2503_20xxx/2503.20823/460bf0cd-81d7-4046-966b-33a5a80b3f9b_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2503_13xxx/2503.13268/27688151-5fde-4680-b981-ebf649a96c83_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2503_13xxx/2503.13299/1182f1ca-70ec-4ed4-9c5b-10aa50e1a3e7_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2503_13xxx/2503.13360/f41871e7-052a-41b7-a381-f27188074bdf_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2503_13xxx/2503.13377/48689f30-a9be-48a0-a682-282cc19b9dc2_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2503_13xxx/2503.13399/3216c4a9-6622-4f8f-9268-b62c8b700c4e_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2503_13xxx/2503.13415/d6bcf737-8470-4ed5-949a-6d04e53f6bf7_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2503_13xxx/2503.13423/ee02b8fd-a1b4-4f31-b152-93f5d36bdbe3_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2503_13xxx/2503.13436/53224727-d9f9-435a-ad74-c91800bf8f34_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2503_13xxx/2503.13439/696dfadf-883e-4109-9010-e095246a341a_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2503_13xxx/2503.13441/7dee0cb2-adf2-4076-a132-e57fc86ae441_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2503_13xxx/2503.13444/1c5e9011-0450-44ee-b04b-75479b89c9af_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2503_13xxx/2503.13446/40268637-4c49-45fd-874f-4b7639b520fb_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2503_13xxx/2503.13646/ec61822f-80f5-4b4c-80e5-df7fb23ac8f4_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2503_13xxx/2503.13657/f9f0f12f-c46c-4f07-9710-e29462265246_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2503_13xxx/2503.13721/a97f1dd6-65e6-4086-81b6-b457f406ad62_origin.pdf filter=lfs diff=lfs merge=lfs -text +data/2025/2503_16xxx/2503.16527/a8560f1b-d038-456e-8a05-1cf3f71e9afc_origin.pdf filter=lfs diff=lfs merge=lfs -text diff --git a/data/2025/2503_13xxx/2503.13268/27688151-5fde-4680-b981-ebf649a96c83_content_list.json b/data/2025/2503_13xxx/2503.13268/27688151-5fde-4680-b981-ebf649a96c83_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..3d17150063d8b15488f5d64a5ceb0a8ad32a882b --- /dev/null +++ b/data/2025/2503_13xxx/2503.13268/27688151-5fde-4680-b981-ebf649a96c83_content_list.json @@ -0,0 +1,952 @@ +[ + { + "type": "text", + "text": "Channel Estimation for Pinching-Antenna Systems (PASS)", + "text_level": 1, + "bbox": [ + 161, + 69, + 836, + 140 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Jian Xiao, Ji Wang, Senior Member, IEEE, and Yuanwei Liu, Fellow, IEEE", + "bbox": [ + 215, + 147, + 781, + 164 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract—Pinching antennas (PAs) represent a revolutionary flexible antenna technology that leverages dielectric waveguides and electromagnetic coupling to mitigate large-scale path loss. 
This letter is the first to explore channel estimation for Pinching-Antenna SyStems (PASS), addressing their uniquely ill-conditioned and underdetermined channel characteristics. In particular, two efficient deep learning-based channel estimators are proposed. 1) PAMoE: This estimator incorporates dynamic padding, feature embedding, fusion, and mixture of experts (MoE) modules, which effectively leverage the positional information of PAs and exploit expert diversity. 2) PAformer: This Transformer-style estimator employs the self-attention mechanism to predict channel coefficients in a per-antenna manner, which offers more flexibility to adaptively deal with dynamic numbers of PAs in practical deployment. Numerical results demonstrate that 1) the proposed deep learning-based channel estimators outperform conventional methods and exhibit excellent zero-shot learning capabilities, and 2) PAMoE delivers higher channel estimation accuracy via MoE specialization, while PAformer natively handles an arbitrary number of PAs, trading self-attention complexity for superior scalability.", + "bbox": [ + 73, + 220, + 491, + 487 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Index Terms—Channel estimation, mixture of experts, pinching antenna, Transformer.", + "bbox": [ + 73, + 492, + 491, + 518 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "I. INTRODUCTION", + "text_level": 1, + "bbox": [ + 215, + 541, + 351, + 555 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "ADVANCED multiple-input multiple-output (MIMO) is a critical enabling technology for sixth-generation (6G) wireless networks, targeting ultra-high-speed data transmission and seamless connectivity. Among emerging MIMO technologies, flexible antenna systems, e.g., fluid and movable antennas, offer dynamic channel reconfiguration capabilities by leveraging its positional or rotational adjustability to adapt to the evolving spatial distribution of users [1]. However, they still struggle with addressing large-scale path loss and the need for line-of-sight (LoS) links, both of which are essential for high-quality communication. To overcome these challenges, pinching antennas (PAs) offer a novel approach to creating controlled radiation sites. PAs were initially introduced and developed into a prototype by NTT DOCOMO in 2022 [2]. The fundamental concept of PAs relies on the transfer of electromagnetic (EM) waves from a dielectric waveguide to a nearby dielectric material. Compared to typical movable antennas, PAs can achieve movement ranges spanning thousands to tens of thousands of wavelengths and support a larger number of connected antennas on a waveguide. 
This technology allows", + "bbox": [ + 73, + 560, + 491, + 864 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Jian Xiao and Ji Wang are with the Department of Electronics and Information Engineering, College of Physical Science and Technology, Central China Normal University, Wuhan 430079, China (e-mail: jianx@mails.ccnu.edu.cn; jiwang@ccnu.edu.cn).", + "bbox": [ + 73, + 875, + 488, + 920 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Yuanwei Liu is with the Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong (e-mail: yuanwei@hku.hk).", + "bbox": [ + 73, + 920, + 488, + 945 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "for the flexible positioning of PAs, enabling the establishment of adjustable and reliable LoS transceiver links in Pinching-Antenna SyStems (PASS), thereby effectively mitigating large-scale path loss.", + "bbox": [ + 501, + 219, + 921, + 280 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The theoretical exploration of PA technology is still in its early stages, yet it has already garnered significant attention from both academia and industry. The authors of [3] first provided a comprehensive performance analysis for PASS and proposed a non-orthogonal multiple access (NOMA)-enhanced PASS. Subsequently, related works have explored performance analysis [4], physical modeling and beamforming [5], and minimum rate maximization for PASS [6]. It should be emphasized that the existing works on PA optimization strategies hinge crucially on precise channel estimation. However, the channel estimation in PASS has not been investigated in-depth due to new challenges. Firstly, PASS introduces a fundamental challenge in channel estimation due to the highly coupled nature of the in-waveguide channel and wireless propagation channel between PAs and users. Each waveguide is fed by only one radio frequency (RF) chain, while it may carry multiple PAs. This setup necessitates the recovery of high-dimensional information from low-dimensional observations, resulting in an ill-conditioned underdetermined recovery problem. Secondly, the in-waveguide channel is deterministic and completely depends on the positions of PAs in the waveguide. Hence, no stacking of pilot signals can produce additional independent linear equations to solve the high-dimensional wireless propagation channel. Moreover, the dynamic nature of PAs with the plug-and-play capability complicates the channel acquisition. The activation locations of a large number of PAs can be adjusted in a specified region to improve user service. Therefore, the wireless propagation channel presents near-field spatial non-stationarity effects due to the large array aperture.", + "bbox": [ + 501, + 280, + 921, + 718 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "To fill this research gap, this letter presents the first investigation into channel estimation for PASS. First and foremost, we extend classic linear channel estimators by adapting the antenna switching strategy for PASS, which provide fundamental channel estimation benchmarks. Furthermore, inspired by recent advances in deep learning (DL)-enabled channel estimation [7]-[9], we propose two efficient DL models to improve channel estimation accuracy while reducing pilot overhead. 
The first DL estimator, termed $PAMoE$ , is built on the mixture of experts (MoE) architecture, which integrates PA positions and pilot signal features through multi-expert mechanisms to adaptively model the dynamic channel distributions in PASS. To enhance the flexibility and scalability of the channel estimation model for dynamic PA counts, we further propose a Transformer-style estimator, termed $PAformer$ , which lever-", + "bbox": [ + 501, + 718, + 921, + 944 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "1", + "bbox": [ + 911, + 30, + 919, + 40 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2503.13268v4 [cs.IT] 10 May 2025", + "bbox": [ + 22, + 265, + 58, + 707 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/1809a3d795957d4d22b8aaff957d2fc3fdf1e8341d4a39ed60030baef9bf4f24.jpg", + "image_caption": [ + "Fig. 1. Pinching antennas assisted multi-user systems." + ], + "image_footnote": [], + "bbox": [ + 133, + 73, + 434, + 210 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "ages the self-attention mechanism to predict channel coefficients on a per-antenna basis. Numerical results demonstrate that the proposed DL estimators achieve superior channel estimation accuracy with significantly reduced pilot overhead compared to conventional channel estimators. In particular, both PAMoE and PAformer exhibit superior zero-shot learning capabilities on dynamic PA configurations without retraining operations.", + "bbox": [ + 73, + 262, + 491, + 383 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "II. SYSTEM MODEL AND PROBLEM FORMULATION", + "text_level": 1, + "bbox": [ + 101, + 397, + 465, + 411 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "As illustrated in Fig. 1, considering an uplink communication system that includes a base station (BS) equipped with $N$ PAs and $K$ single-antenna user equipments (UEs). Each PA has the same length $L$ on the waveguide. In a Cartesian system, the UEs are assumed to be randomly distributed within a rectangular region on the $x - y$ plane, with dimensions $D_{x}$ and $D_{y}$ . The position of the $k$ -th UE is represented by $\\psi_{k} = (x_{k},y_{k},0)$ . Suppose the waveguide extends parallel to the $x$ -axis. Its height is denoted by $d$ , and its length aligns with the rectangular dimension $D_{x}$ . Hence, the coordinates of PA $n$ are given by $\\psi_{n}^{\\mathrm{P}} = (x_{n}^{\\mathrm{P}},0,d)$ , where $x_{n}^{\\mathrm{P}}$ lies in the interval $[L,D_x]$ . In this work, the discrete activation deployment of PAs is adopted, which simplifies the hardware design and is more practical than continuous activation [5]. The PAs can only be activated at specific discrete positions along the waveguide, forming the feasible set $\\mathcal{S} = \\left\\{L + \\frac{D_x - L}{Q - 1} (q - 1) \\mid q = 1,2,\\dots ,Q\\right\\}$ , where $Q$ denotes the number of discrete positions available. Since all $N$ PAs lie along the same waveguide, the transmitted signal of each PA is essentially a phase-shifted version of the signal from the BS at the waveguide feed point. 
The in-waveguide channel $\\mathbf{g} \\in \\mathbb{C}^{N \\times 1}$ can be expressed as", + "bbox": [ + 73, + 415, + 491, + 736 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {g} = \\left[ \\alpha_ {1} e ^ {- j \\frac {2 \\pi}{\\lambda_ {g}} \\left| \\psi_ {0} ^ {\\mathrm {P}} - \\psi_ {1} ^ {\\mathrm {P}} \\right|}, \\dots , \\alpha_ {N} e ^ {- j \\frac {2 \\pi}{\\lambda_ {g}} \\left| \\psi_ {0} ^ {\\mathrm {P}} - \\psi_ {N} ^ {\\mathrm {P}} \\right|} \\right] ^ {\\top}, \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 116, + 741, + 488, + 771 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "where $\\psi_0^{\\mathrm{P}}$ denotes the position of the waveguide's feed point. Parameter $\\lambda_{g} = \\frac{\\lambda}{n_{e}}$ is the guided wavelength. Here, $\\lambda$ is the wavelength in free space and $n_e$ is the effective refractive index of the dielectric waveguide. $\\alpha_{n}$ is the factor that determines the ratio of power exchanged between the waveguide and PA $n$ , governed by the coupling length of the PA [5].", + "bbox": [ + 73, + 777, + 488, + 868 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "For the wireless propagation channel $\\mathbf{h}_k\\in \\mathbb{C}^{N\\times 1}$ between the $k$ -th UE and the PAs, $\\mathbf{h}_k$ is composed of the LoS component $\\mathbf{h}_k^{\\mathrm{LoS}}$ and the non-line-of-sight (NLoS) component $\\mathbf{h}_k^{\\mathrm{NLoS}}$ , i.e., $\\mathbf{h}_k = \\Upsilon_k\\odot \\mathbf{h}_k^{\\mathrm{LoS}} + \\mathbf{h}_k^{\\mathrm{NLoS}}$ . Here, the operator $\\odot$ denotes the Hadamard product. $\\Upsilon_{k} = [\\upsilon_{k,n},\\dots ,\\upsilon_{k,N}]\\in \\mathbb{C}^{N\\times 1}$", + "bbox": [ + 73, + 869, + 491, + 946 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "is a Bernoulli random vector and the variable $\\nu_{k,n}$ takes values from the set $\\{0,1\\}$ , characterizing the existence of a LoS link between the $n$ -th PA and the $k$ -th UE1. The geometric spherical wavefront model-based LoS channel between the $k$ -th UE and the PAs is expressed as [3]", + "bbox": [ + 501, + 69, + 919, + 143 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {h} _ {k} ^ {\\mathrm {L o S}} = \\left[ \\frac {\\sqrt {\\eta} e ^ {- j \\frac {2 \\pi}{\\lambda} | \\psi_ {k} - \\psi_ {1} ^ {\\mathrm {P}} |}}{| \\psi_ {k} - \\psi_ {1} ^ {\\mathrm {P}} |}, \\dots , \\frac {\\sqrt {\\eta} e ^ {- j \\frac {2 \\pi}{\\lambda} | \\psi_ {k} - \\psi_ {N} ^ {\\mathrm {P}} |}}{| \\psi_ {k} - \\psi_ {N} ^ {\\mathrm {P}} |} \\right] ^ {\\top}, \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 527, + 148, + 919, + 193 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "where $\\eta = \\frac{\\lambda^2}{16\\pi^2}$ denotes the path loss at a reference distance of $1\\mathrm{m}$ . 
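To ground (1) and (2), the following numpy sketch builds the in-waveguide response and the spherical-wavefront LoS component for a toy geometry; the PA coordinates, coupling factors, and carrier settings below are illustrative assumptions, not the simulation configuration used later in the letter.

```python
import numpy as np

# Toy geometry (illustrative values only).
c, f_c = 3e8, 28e9
lam = c / f_c                                  # free-space wavelength
n_e = 1.4                                      # effective refractive index of the waveguide
lam_g = lam / n_e                              # guided wavelength lambda_g
d = 5.0                                        # waveguide height
x_pa = np.array([2.0, 6.0, 10.0])              # x-coordinates of N = 3 activated PAs
psi_pa = np.stack([x_pa, np.zeros_like(x_pa), np.full_like(x_pa, d)], axis=1)
psi_feed = np.array([0.0, 0.0, d])             # feed point psi_0^P
psi_ue = np.array([7.0, 3.0, 0.0])             # UE position psi_k
alpha = np.ones(len(x_pa))                     # coupling factors alpha_n (assumed ideal)

# Eq. (1): in-waveguide channel g.
g = alpha * np.exp(-1j * 2 * np.pi / lam_g * np.linalg.norm(psi_pa - psi_feed, axis=1))

# Eq. (2): spherical-wavefront LoS channel between UE k and the PAs.
eta = lam**2 / (16 * np.pi**2)                 # path loss at a 1 m reference distance
dist = np.linalg.norm(psi_ue - psi_pa, axis=1)
h_los = np.sqrt(eta) * np.exp(-1j * 2 * np.pi / lam * dist) / dist
```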
Considering $S$ scatterers in the $k$ -th UE $\\rightarrow$ PAs link, the NLoS channel $\\mathbf{h}_k^{\\mathrm{NLoS}} \\in \\mathbb{C}^{N \\times 1}$ is given by", + "bbox": [ + 501, + 199, + 919, + 250 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {h} _ {k} ^ {\\mathrm {N L o S}} = \\sqrt {\\frac {1}{S}} \\sum_ {s = 1} ^ {S} \\beta_ {k, s} \\mathbf {a} _ {k, s} e ^ {j \\eta_ {k, s}}, \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 604, + 255, + 919, + 296 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "where $\\beta_{k,s} \\sim \\mathcal{CN}(0, \\sigma_s^2)$ and $\\eta_{k,s} \\sim \\mathcal{U}[0, 2\\pi]$ denote the complex gain and the random phase of the $s$ -th scatterer path, respectively. $\\mathbf{a}_{k,s} \\in \\mathbb{C}^{N \\times 1}$ represents the receiving array response at the PAs. Since the movable region of PAs is large, the equivalent array aperture of PA is likely to exceed the Rayleigh distance that is the criterion to determine the near-field boundary. Hence, the spherical wavefront is utilized to characterize array response $\\mathbf{a}_{k,s}$ and is given by", + "bbox": [ + 501, + 301, + 921, + 425 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {a} _ {k, s} = \\left[ \\frac {\\sqrt {\\eta} e ^ {- j 2 \\pi d _ {s , 1} / \\lambda}}{d _ {k , s} d _ {s , 1}}, \\dots , \\frac {\\sqrt {\\eta} e ^ {- j 2 \\pi d _ {s , N} / \\lambda}}{d _ {k , s} d _ {s , N}} \\right] ^ {\\top}, \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 555, + 430, + 919, + 474 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "where $d_{k,s}$ and $d_{s,n},n\\in \\{1,\\ldots ,N\\}$ denote the distances from UE $k$ to scatterer $s$ and from scatterer $s$ to PA $n$ , respectively.", + "bbox": [ + 503, + 477, + 919, + 507 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The received signal at the BS during the $t$ -th slot can be expressed as", + "bbox": [ + 503, + 508, + 919, + 537 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\ny _ {t} = \\sum_ {k = 1} ^ {K} \\mathbf {g} ^ {\\top} \\mathbf {h} _ {k} s _ {k, t} + n _ {t}, \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 633, + 544, + 919, + 584 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "where $s_{k,t}$ represents the symbol transmitted by the $k$ -th UE, and $n_t \\sim CN(0, \\sigma_n^2)$ is the additive white Gaussian noise.", + "bbox": [ + 503, + 590, + 919, + 619 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In PASS, the in-waveguide channel $\\mathbf{g}$ can be regarded as the deterministic channel component that depends on the locations and the coupling lengths of $\\mathrm{PAs}^2$ . Consequently, we merely need to estimate the wireless propagation channel $\\mathbf{h}_k$ from the $k$ -th UE to the PAs. However, PASS introduces a fundamental challenge in channel estimation due to the highly coupled nature of $\\mathbf{g}$ and $\\mathbf{h}_k$ . Specifically, while each waveguide is connected to multiple PAs, the waveguide channel is fixed and cannot be reconfigured to perform diverse beam measurements as in conventional hybrid precoding. 
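To see this collapse numerically, the short standalone sketch below (with randomly drawn placeholder vectors rather than the channel model of Section II) stacks $T$ pilot observations of (5) for a single UE and checks the rank of the resulting linear system.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 16, 32                                  # even T >= N pilot slots do not help
g = np.exp(-1j * 2 * np.pi * rng.random(N))    # fixed, known in-waveguide response
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # unknown h_k
s = np.ones(T)                                 # unit pilot symbols of UE k

y = (g @ h) * s                                # noiseless Eq. (5): the same scalar repeated T times
A = np.outer(s, g)                             # stacked T x N measurement matrix
print(np.linalg.matrix_rank(A))                # -> 1: one independent equation for N unknowns
```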
As a result, the system must infer a high-dimensional channel vector $\\mathbf{h}_k$ from inherently low-dimensional pilot observations, typically just one", + "bbox": [ + 501, + 621, + 921, + 801 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "1In the existing works for PASS, the free space channel model between UEs and the PAs is commonly utilized by assuming the flexible PAs close to UEs [3]–[6]. However, in the practical wireless propagation environment, the NLoS channel component caused by the statical and dynamic scatters should be considered. In particular, all PAs are hard to provide the full coverage of LoS services for all UEs in practical deployment environment.", + "bbox": [ + 501, + 816, + 921, + 886 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2This work employs the electronic activation of pre-positioned PAs along the waveguide. The system controller only needs to determine the activation state to ascertain the exact positions of PAs. The reconfiguration of the activated PA set is usually several orders of magnitude faster than the channel coherence time, enabling it to respond efficiently to rapid channel variations.", + "bbox": [ + 501, + 886, + 921, + 944 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 911, + 30, + 919, + 40 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "scalar per waveguide per measurement. To elaborate, suppose the widely used orthogonal pilot transmission strategy, e.g., time-division, is adopted, and $T$ denotes the number of pilot transmission slots transmitted by UE $k$ . Note that even if the UE transmits $T \\geq N$ or more pilot slots, a single waveguide output may provide no more than one linearly independent measurement per slot. That is, each pilot symbol experiences the same fixed merging of $N$ PAs in the waveguide, yielding only repeated versions of the same scalar. Hence, the classic error-criterion-based algorithms, e.g., least square (LS) or linear minimum mean square error (LMMSE) estimators, are difficult to apply directly for channel estimation in $\\mathrm{PASS}^3$ .", + "bbox": [ + 73, + 69, + 491, + 251 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "III. DEEP LEARNING BASED CHANNEL ESTIMATION", + "text_level": 1, + "bbox": [ + 96, + 266, + 470, + 281 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In this section, to improve the channel estimation performance, we leverage DL models to develop efficient channel estimation schemes for PASS. However, two critical challenges emerge when applying the DL approach to PASS. Firstly, the channel characteristics vary dramatically as PAs freely change positions within designated regions, which requires the efficient network architecture with sufficient capacity to learn the extended channel state space. Secondly, by dynamically activating candidate PAs along the waveguide in response to real-time communication demands, the system introduces time-varying channel dimension and heterogeneous signal distribution. The proposed DL-based channel estimator necessitates an innovative architecture capable of adaptively handling spatio-temporal variations in channel parameters, thereby fostering advancements in neural network design to enable robust estimation under non-stationary conditions.", + "bbox": [ + 73, + 285, + 491, + 526 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "A. 
Dataset Construction", + "text_level": 1, + "bbox": [ + 73, + 546, + 243, + 559 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In the offline training stage, we collect $N_{s}$ paired samples, i.e., the pilot observation vector $\\widetilde{\\mathbf{y}}_k\\in \\mathbb{C}^{T\\times 1}$ at the BS, the position set $\\Psi^{\\mathrm{P}} = [\\psi_n^1,\\dots ,\\psi_N^{\\mathrm{P}}] = [(x_1^{\\mathrm{P}},0,d),\\dots ,(x_N^{\\mathrm{P}},0,d)]$ of PAs, and the corresponding channel sample $\\mathbf{h}_k$ . In the proposed channel estimation network, the input tensor is designed as the set of both PA positions and the pilot observation. Considering the waveguide is deployed parallel to the $x$ -axis, we merely need the $x$ -axis coordinate set $\\mathbf{x}^{\\mathrm{P}} = [x_1^{\\mathrm{P}},\\ldots ,x_N^{\\mathrm{P}}]^T\\in \\mathbb{R}^{N\\times 1}$ of PAs as the input feature. To enable the neural network to adaptively estimate channels across varying PA configurations, we utilize the in-waveguide channel $\\mathbf{g}$ to transform the observation $\\widetilde{\\mathbf{y}}_k\\in \\mathbb{C}^{T\\times 1}$ into $\\widetilde{\\mathbf{Y}}_k = \\mathbf{g}\\widetilde{\\mathbf{y}}_k^\\top \\in \\mathbb{C}^{N\\times T}$ . This transformation aligns the input dimension of neural network with the current number of PAs $N$ in PASS, which facilitates the subsequent channel estimation network design. The complex-value matrix $\\widetilde{\\mathbf{Y}}_k$ is converted into the real-value tensor $\\overline{\\mathbf{Y}}_k = \\{\\Re (\\widetilde{\\mathbf{Y}}_k),\\Im (\\widetilde{\\mathbf{Y}}_k)\\} \\in \\mathbb{R}^{N\\times 2T}$ for neural network processing. Accordingly, the label tensor in the network training is $\\bar{\\mathbf{H}}_k = \\{\\Re (\\mathbf{h}_k),\\Im (\\mathbf{h}_k)\\} \\in \\mathbb{R}^{N\\times 2}$ .", + "bbox": [ + 73, + 564, + 491, + 853 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3To collect independent pilot measurements for each PA, as required by LS and LMMSE, one feasible approach is to introduce an antenna switching matrix that selects subsets of PAs in different time slots to realize the signal separation. In Section IV of this letter, we provide antenna switching matrix-based LS and LMMSE estimators as channel estimation benchmarks. Note that switching each PA in different slots introduces hardware overhead, extended measurement time, and potential switch losses, all of which reduce practicality.", + "bbox": [ + 73, + 862, + 491, + 946 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/241e3ab1863fe20fbdb6cad0c762ddc9234ff318ed7416f9440786b3f82e7423.jpg", + "image_caption": [ + "Fig. 2. Proposed PAMoE model for channel estimation." + ], + "image_footnote": [], + "bbox": [ + 568, + 69, + 866, + 202 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "B. PAMoE: Mixture of Experts-Based Channel Estimation", + "text_level": 1, + "bbox": [ + 503, + 252, + 901, + 266 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We first propose a PAMoE estimator as illustrated in Fig. 2 to address high-dimensional dynamic channel estimation from low-dimensional received pilots, which incorporates the dynamic padding, feature embedding, fusion, and position-aware MoE modules [10].", + "bbox": [ + 501, + 270, + 919, + 345 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "1) Dynamic Padding: To accommodate variable PA counts $N$ and handle variable-length inputs, $PAMoE$ employs dynamic padding up to a maximum $N_{\\mathrm{max}}$ . 
Let $B$ denote the batch size in the network training stage. The input tensor in a training batch can be expressed as PA positions $\\mathbf{P} \\in \\mathbb{R}^{B \\times N \\times 1}$ and pilot signals $\\mathbf{S} \\in \\mathbb{R}^{B \\times N \\times 2T}$ , which are the batch version of $\\mathbf{x}^{\\mathrm{P}}$ and $\\overline{\\mathbf{Y}}_k$ , respectively. If $N < N_{\\mathrm{max}}$ , the network pads along the PA dimension so that both $\\mathbf{P}$ and $\\mathbf{S}$ become length $N_{\\mathrm{max}}$ , i.e., $\\mathbf{P}' = [\\mathbf{P}, \\phi^1] \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times 1}$ , $\\mathbf{S}' = [\\mathbf{S}, \\phi^2] \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times d_{\\mathrm{sig}}}$ , where $\\phi^1$ and $\\phi^2 \\in \\mathbb{R}^{N_{\\mathrm{pad}} \\times 1}(N_{\\mathrm{pad}} = N_{\\mathrm{max}} - N)$ denote the learnable padding embeddings, respectively.", + "bbox": [ + 501, + 345, + 919, + 512 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2) Feature Embedding: To handle continuous spatial information and achieve efficient extrapolation of PAs, the Fourier basis function is employed, developing a Fourier positional embedding approach. Suppose a set of exponentially increasing frequency bases $\\mathbf{f} = \\left[2^{f}\\pi \\right]_{f = 0}^{F - 1}\\in \\mathbb{R}^{F}$ is defined, where $F$ is the number of frequency components. The scaled position encodings can be expressed as", + "bbox": [ + 501, + 512, + 919, + 619 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {\\Theta} = \\mathbf {S} ^ {\\prime} \\odot \\mathbf {f} \\in \\mathbb {R} ^ {B \\times N _ {\\max } \\times F}. \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 625, + 627, + 919, + 645 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Then, we apply sinusoidal transformations to obtain sine and cosine features, and then are concatenated as", + "bbox": [ + 503, + 654, + 919, + 684 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {E} = \\operatorname {C o n c a t} \\left[ \\sin (\\boldsymbol {\\Theta}), \\cos (\\boldsymbol {\\Theta}) \\right] \\in \\mathbb {R} ^ {B \\times N _ {\\max } \\times 2 F}. \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 565, + 693, + 919, + 712 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Further, $\\mathbf{E}$ is projected into the embedding space $\\mathbf{Z}_{\\mathrm{pos}} = \\mathbf{EW}_{\\mathrm{pos}} + \\mathbf{b}_{\\mathrm{pos}} \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times d_{\\mathrm{embed}}}$ , where $\\mathbf{W}_{\\mathrm{pos}} \\in \\mathbb{R}^{2F \\times d_{\\mathrm{embed}}}$ is the learnable weight matrix and $\\mathbf{b}_{\\mathrm{pos}} \\in \\mathbb{R}^{d_{\\mathrm{embed}}}$ is the bias term. This Fourier embedding effectively captures multi-scale positional variations in the input space. 
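A minimal PyTorch-style sketch of this embedding is given below. It assumes the frequency scaling in (6) is applied to the padded PA positions $\mathbf{P}'$, which is what makes the $B \times N_{\mathrm{max}} \times F$ output shape consistent, and the layer sizes are placeholders rather than the paper's hyper-parameters.

```python
import torch

B, N_max, F, d_embed = 4, 32, 8, 64
P_pad = torch.rand(B, N_max, 1)                              # padded PA x-coordinates P'
freqs = (2.0 ** torch.arange(F)) * torch.pi                  # f = [2^0*pi, ..., 2^(F-1)*pi]

theta = P_pad * freqs                                        # Eq. (6), by broadcasting: B x N_max x F
E = torch.cat([torch.sin(theta), torch.cos(theta)], dim=-1)  # Eq. (7): B x N_max x 2F
proj = torch.nn.Linear(2 * F, d_embed)                       # learnable W_pos, b_pos
Z_pos = proj(E)                                              # B x N_max x d_embed
```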
The pilot signal data is embedded to the same hidden dimension $d_{\\mathrm{hid}}$ by the multilayer perceptron (MLP)-based linear mapping module $\\phi_{\\mathrm{sig}}$ , i.e., $\\mathbf{Z}_{\\mathrm{sig}} = \\phi_{\\mathrm{sig}}(\\mathbf{S}') \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times d_{\\mathrm{hid}}}$ .", + "bbox": [ + 501, + 720, + 919, + 840 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3) Feature Fusion: To fuse positional features with pilot signal features, we employ a gating function to generate a gate from a control feature based on PA positions and apply it to the target feature based on pilot signal. The gating operation is formulated as", + "bbox": [ + 501, + 842, + 921, + 915 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {G} = \\sigma \\left(\\mathbf {W} _ {g} \\mathbf {Z} _ {\\text {p o s}} + \\mathbf {b} _ {g}\\right) \\in \\mathbb {R} ^ {B \\times N _ {\\max } \\times d _ {\\text {h i d}}}, \\tag {8}\n$$\n", + "text_format": "latex", + "bbox": [ + 578, + 925, + 919, + 944 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 911, + 30, + 919, + 40 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/a8f19f86ad289a59e60b7ffa3eb8b3b7b38f5339d4b348c9f7164601efdf43a2.jpg", + "image_caption": [ + "Fig. 3. Proposed PAformer model for channel estimation." + ], + "image_footnote": [], + "bbox": [ + 138, + 70, + 433, + 258 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {Z} _ {\\text {f u s e d}} = \\mathbf {G} \\odot \\mathbf {Z} _ {\\text {s i g}} + \\mathbf {Z} _ {\\text {p o s}} \\in \\mathbb {R} ^ {B \\times N _ {\\max } \\times d _ {\\text {h i d}}}, \\tag {9}\n$$\n", + "text_format": "latex", + "bbox": [ + 138, + 305, + 491, + 325 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\mathbf{W}_g$ and $\\mathbf{b}_g$ denote the weight and bias of a linear layer with $d_{\\mathrm{hid}}$ neurons, respectively, and $\\sigma (\\cdot)$ is a Sigmoid activation function.", + "bbox": [ + 73, + 332, + 491, + 377 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4) MoE With Gating Network: Suppose there are $E$ experts and each expert takes $\\mathbf{Z}_{\\mathrm{fused}}$ and processes it with an MLP-Mixer block across both the feature dimension $d_{\\mathrm{hid}}$ and the spatial dimension $N_{\\mathrm{max}}$ . Let $\\mathbf{Z}_e = E_e(\\mathbf{Z}_{\\mathrm{fused}}) \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times d_{\\mathrm{hid}}}$ denote the output of expert $e$ , $(e = 1, \\dots, E)$ . A gating network is employed to weight each expert's output. 
Typically, it pools $\\mathbf{Z}_{\\mathrm{fused}}$ over the PA dimension to get a global context $\\mathbf{z}_{\\mathrm{pool}} \\in \\mathbb{R}^{B \\times d_{\\mathrm{hid}}}$ , and then applies the softmax activation function to produce gating weights, which can be expressed as", + "bbox": [ + 73, + 378, + 490, + 513 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {\\alpha} = \\operatorname {s o f t m a x} \\left(\\mathbf {W} _ {\\alpha} \\mathbf {z} _ {\\text {p o o l}} + \\mathbf {b} _ {\\alpha}\\right) \\in \\mathbb {R} ^ {B \\times E}, \\tag {10}\n$$\n", + "text_format": "latex", + "bbox": [ + 148, + 522, + 488, + 542 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\sum_{e=1}^{E} \\alpha_{b,e} = 1$ , $\\forall b \\in \\{1, \\dots, B\\}$ , $\\mathbf{W}_{\\alpha}$ and $\\mathbf{b}_{\\alpha}$ are the weight and bias of a linear layer with $E$ neurons, respectively.", + "bbox": [ + 73, + 549, + 488, + 580 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Given the output $\\mathbf{Z}_e$ of each expert and the gating weights $\\alpha$ , we form a weighted sum over experts. Let $\\alpha$ be reshaped to $\\mathbb{R}^{B\\times 1\\times E}$ so it can broadcast over the $N_{\\mathrm{max}}$ dimension to obtain the following output of MoE", + "bbox": [ + 73, + 580, + 491, + 642 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {Z} _ {\\mathrm {M o E}} = \\sum_ {e = 1} ^ {E} \\boldsymbol {\\alpha} _ {e} \\mathbf {Z} _ {e} \\in \\mathbb {R} ^ {B \\times N _ {\\max } \\times d _ {\\mathrm {h i d}}} \\tag {11}\n$$\n", + "text_format": "latex", + "bbox": [ + 161, + 647, + 488, + 686 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Finally, the network concatenates the positional feature $\\mathbf{Z}_{\\mathrm{pos}}$ and $\\mathbf{Z}_{\\mathrm{MoE}}$ along the last dimension, i.e., $\\mathbf{Z}_{\\mathrm{concat}} \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times (2d_{\\mathrm{hid}})}$ . Then, a linear layer maps $\\mathbf{Z}_{\\mathrm{concat}}$ to the estimated channel $\\widetilde{\\mathbf{H}} \\in \\mathbb{R}^{B \\times N \\times 2}$ . In $PAMoE$ , the dynamic padding pattern requires the network to fix a predefined maximum number of PAs $N_{\\mathrm{max}}$ in the training stage, which is a limiting factor if PAs exceed the predefined maximum bound $N_{\\mathrm{max}}$ .", + "bbox": [ + 73, + 691, + 491, + 799 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "C. P Aformer: Transformer-Based Channel Estimation", + "text_level": 1, + "bbox": [ + 75, + 819, + 441, + 834 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We further propose a PAformer estimator as illustrated in Fig. 3 to address the scalability limitation of $PAMoE$ , which is inherently constrained by fixed $N_{\\mathrm{max}}$ during training. PAformer incorporates self-attention layers with permutation-equivariance, enabling the network to dynamically accommodate PAs of arbitrary size $N$ . 
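This length-agnostic behaviour can be verified with a generic pre-norm encoder layer, used here as a stand-in for the stacked self-attention blocks detailed below rather than the exact PAformer implementation: the same weights process any number of PA tokens.

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, dim_feedforward=128,
                                   batch_first=True, norm_first=True)
for N in (8, 16, 32):                          # one token per pinching antenna
    tokens = torch.randn(2, N, 64)             # (batch, N, d_hid)
    print(layer(tokens).shape)                 # torch.Size([2, N, 64]) for every N
```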
The proposed PAformer predicts channel coefficients in a per-antenna manner, offering", + "bbox": [ + 73, + 838, + 491, + 946 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "flexibility if new data has more antennas than seen before in the training stage.", + "bbox": [ + 503, + 69, + 919, + 98 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "1) Input Embedding: In PAformer, we first concatenate the position and pilot signal features along their last dimension, i.e., $\\mathbf{V} = \\left[\\mathbf{P},\\mathbf{S}\\right]\\in \\mathbb{R}^{B\\times N\\times (2T + 1)}$ . Next, $\\mathbf{V}$ is mapped into a hidden representation $\\mathbf{Z}^{(0)}$ via an embedding network $\\phi$ based on MLP, i.e., $\\mathbf{Z}^{(0)} = \\phi (\\mathbf{V})\\in \\mathbb{R}^{B\\times N\\times d_{\\mathrm{hid}}}$ .", + "bbox": [ + 503, + 99, + 919, + 175 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "2) Transformer Encoder: PAformer employs a pre-norm Transformer with $P$ stacked self-attention blocks (SABs), each comprising multi-head self-attention (MHA) and a feedforward network (FFN), both with residual connections and layer normalization. Let $\\mathbf{Z}^{(\\ell -1)}$ denote the input to the $\\ell$ -th SAB, and $\\overline{\\mathbf{Z}}^{(\\ell -1)} = \\mathrm{LayerNorm}(\\mathbf{Z}^{(\\ell -1)})$ is the pre-normalized feature representation by the layer normalization. The output of the MHA module is expressed as $\\mathbf{A}^{(\\ell)} = \\mathrm{MHA}(\\mathbf{Q},\\mathbf{K},\\mathbf{V})\\in$ $\\mathbb{R}^{B\\times N\\times d_{\\mathrm{hid}}}$ where Q, K, and V denote query, key, and value tokens of $\\overline{\\mathbf{Z}}^{(\\ell -1)}$ [11], respectively. The result is combined via a residual connection $\\overline{\\mathbf{Z}}^{\\prime (\\ell)} = \\overline{\\mathbf{Z}}^{(\\ell -1)} + \\mathbf{A}^{(\\ell)}$ . Then, the FFN is applied to obtain $\\mathbf{F}^{(\\ell)} = \\mathrm{FFN}(\\overline{\\mathbf{Z}}^{\\prime (\\ell)})$ , where FFN is a point-wise MLP operating on each PA token separately. Another residual connection completes this sub-layer $\\mathbf{Z}^{(\\ell)} =$ $\\overline{\\mathbf{Z}}^{\\prime (\\ell)} + \\mathbf{F}^{(\\ell)}$ . Stacking and applying $P$ blocks in sequence yields the final feature representation $\\mathbf{Z}^P = \\mathrm{SAB}^{(\\ell)}\\bigg(\\mathbf{Z}^{(\\ell -1)}\\bigg)\\in$ $\\mathbb{R}^{B\\times N\\times d_{\\mathrm{hid}}}$ , $\\ell = 1,\\dots ,P$ , starting from $\\mathbf{Z}^{(0)}$ . The final step maps each antenna-wise feature vector in $\\mathbf{Z}^{(P)}$ to the estimated channel $\\widehat{\\mathbf{H}}_i$ , where $i = 1,\\ldots ,N$ indexes PAs.", + "bbox": [ + 503, + 176, + 921, + 479 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "For the proposed two DL estimators, during the test stage, the trained network can be applied to scenarios with dynamic numbers of PAs, even though the test dataset has a different data distribution and dimensionality compared to the training dataset. This property of the channel estimation network can also be termed as zero-shot learning.", + "bbox": [ + 501, + 479, + 921, + 571 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "IV. NUMERICAL RESULTS", + "text_level": 1, + "bbox": [ + 617, + 589, + 807, + 603 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In simulation setups, we set $Q = 200$ , $K = 4$ , $n_e = 1.4$ , $d = 5$ , $D_x \\times D_y = 20 \\times 20 \\mathrm{~m}^2$ , $S = 6$ and the carrier frequency $f_c = 28 \\mathrm{GHz}$ . 
In the training dataset construction, we collect $N_s = 10^5$ training samples with the fixed number of PAs $N^{\\mathrm{tr}} = 16$ , while the test number of PAs is $N^{\\mathrm{te}} \\in \\{8, \\dots, 32\\}$ . In the hyper-parameter setups of the proposed DL estimators, we set $N_{\\mathrm{max}} = 32$ , $E = 4$ , $d_{\\mathrm{hid}} = 64$ , $P = 4$ , and $B = 256$ . The 1-norm $\\ell_1$ is used as the loss function in the network training, i.e., $\\ell_1 = ||\\widehat{\\mathbf{H}}_k - \\overline{\\mathbf{H}}_k||_1$ , while the normalized mean squared error (NMSE) is employed as the performance metric, i.e., $\\mathrm{NMSE} = \\mathbb{E}\\{||\\widehat{\\mathbf{H}}_k - \\overline{\\mathbf{H}}_k||_F^2 / ||\\overline{\\mathbf{H}}_k||_F^2\\}$ . The antenna switching-based LS and LMMSE estimators are used as the conventional channel estimation benchmarks. Furthermore, to establish relevant benchmarks incorporating state-of-the-art DL estimators, we further develop two comparison DL benchmarks drawing inspiration from recent attention-based channel estimation networks, i.e., SA-RN-CE [8] and DACEN [9], which are created by replacing the MoE module in PAMoE with spatial attention (SA) and dual attention (DA) mechanisms, respectively.", + "bbox": [ + 503, + 609, + 921, + 898 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In Fig. 4, we present the NMSE performance of different channel estimation schemes with $N^{\\mathrm{te}} = 32$ . The proposed DL estimators with reduced pilot overhead $T$ outperform", + "bbox": [ + 501, + 898, + 921, + 945 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 911, + 31, + 919, + 39 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/693e462a34cadef9c1cfda5dfe90c288c6651c3f415570ed7ba8c78a5da6867a.jpg", + "image_caption": [ + "Fig. 4. NMSE vs. SNR for different algorithms." + ], + "image_footnote": [], + "bbox": [ + 148, + 78, + 410, + 239 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/ceecef8ee6c03c7cbdd8591e5cd323baeb805da3fe491adbccdc3edcb814fd50.jpg", + "image_caption": [ + "Fig. 5. NMSE vs. number of PAs $N$ for different algorithms." + ], + "image_footnote": [], + "bbox": [ + 148, + 292, + 411, + 454 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/a79046fc7708e5fdb8ab65be83e6202f3d9466f4cb785d6263c17bb30b971d73.jpg", + "table_caption": [ + "TABLEI COMPLEXITY ANALYSIS OF PROPOSED DL ESTIMATORS" + ], + "table_footnote": [], + "table_body": "
<table>
<tr><td rowspan="2">Estimator</td><td rowspan="2">N</td><td rowspan="2">Params (K)</td><td rowspan="2">FLOPs (M)</td><td colspan="2">Runtime (μs)</td></tr>
<tr><td>CPU</td><td>GPU</td></tr>
<tr><td rowspan="3">PAMoE</td><td>8</td><td rowspan="3">211.9</td><td rowspan="3">13.18</td><td rowspan="3">211.8</td><td rowspan="3">12.73</td></tr>
<tr><td>16</td></tr>
<tr><td>32</td></tr>
<tr><td rowspan="3">PAformer</td><td>8</td><td rowspan="3">545.9</td><td>4.377</td><td>74.62</td><td>3.814</td></tr>
<tr><td>16</td><td>8.756</td><td>128.2</td><td>7.943</td></tr>
<tr><td>32</td><td>17.51</td><td>226.8</td><td>12.95</td></tr>
</table>
", + "bbox": [ + 104, + 532, + 462, + 674 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "the existing linear estimators and DL models. The superior accuracy of $PAMoE$ compared to $PA\\text{former}$ primarily stems from $PAMoE$ 's more specialized design, particularly in its handling and exploitation of the pinching antenna positional information. $PA\\text{former}$ , while robust and scalable due to its Transformer foundation, operates with a more general mechanism to support variable PA counts. In Fig. 5, we provide NMSE performance of different channel estimation schemes for varying numbers of PAs, where the signal-to-noise ratio (SNR) is set to 0 dB. Compared to SA-RN-CE and DACEN, the proposed $PAMoE$ and $PA\\text{former}$ trained by the fixed PA configuration exhibit excellent robustness and generalization for dynamic PAs. The proposed DL estimators possess sufficient zero-shot learning capabilities to deal with distinct data distribution in the test stage. Table I summarizes the number of trainable parameters (Params), floating point", + "bbox": [ + 73, + 703, + 491, + 946 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "operations (FLOPs) and inference runtime of the proposed $PAMoE$ and $PA\\text{former}$ , where the NVIDIA RTX 3090 GPU and the 12th Gen Intel(R) Core(TM) i9- 12900K CPU are used as the inference platform. The average inference runtime per channel estimation instance was calculated by averaging over 1000 Monte Carlo experiments. Firstly, for different numbers of PAs $N$ , the Params of the proposed $PAMoE$ and $PA\\text{former}$ are the same due to the fixed network architecture. Secondly, the FLOPs of $PA\\text{former}$ increase linearly with the number of PAs $N$ , as the higher-dimensional input tensor needs to be processed in the SAB of $PA\\text{former}$ . In the $PAMoE$ model, the input tensor is uniformly padded to the same $N_{\\max}$ via padding interpolation, and hence the FLOPs of the $PAMoE$ remain constant for different $N$ . Finally, we observe that both $PAMoE$ and $PA\\text{former}$ can achieve the channel estimation at the microseconds ( $\\mu s$ ) level, demonstrating the practical feasibility of the proposed models for real-time deployment.", + "bbox": [ + 501, + 69, + 921, + 325 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "V. CONCLUSIONS", + "text_level": 1, + "bbox": [ + 647, + 337, + 779, + 349 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In this letter, we investigated the channel estimation approaches in PASS, and proposed two DL estimators with the advanced neural network architecture to infer a high-dimensional channel vector from inherently low-dimensional pilot observations, respectively. Specifically, the proposed PAMoE accommodates variable PA configurations and exploits multi-expert diversity in the MLP-Mixer for improved channel estimation. Accordingly, the proposed PAFomer is capable of handling arbitrary number of PAs thanks to the self-attention mechanism. Numerical results demonstrated that the proposed DL estimators outperform conventional methods, and significantly reduce the pilot overhead.", + "bbox": [ + 501, + 354, + 921, + 536 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "REFERENCES", + "text_level": 1, + "bbox": [ + 663, + 547, + 761, + 561 + ], + "page_idx": 4 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] X. Shao, Q. Jiang, and R. Zhang, \"6D movable antenna based on user distribution: Modeling and optimization,\" IEEE Trans. 
Wireless Commun., vol. 24, no. 1, pp. 355-370, Jan. 2025.", + "[2] A. Fukuda, H. Yamamoto, H. Okazaki, Y. Suzuki, and K. Kawai, \"Pinching antenna: Using a dielectric waveguide as an antenna,\" NTT DOCOMO Technical J., vol. 23, no. 3, pp. 5-12, Jan. 2022.", + "[3] Z. Ding, R. Schober, and H. Vincent Poor, “Flexible-antenna systems: A pinching-antenna perspective,” IEEE Trans. Commun., 2025.", + "[4] D. Tyrovolas, S. A. Tegos, P. D. Diamantoulakis, S. Ioannidis, C. K. Liaskos, and G. K. Karagiannidis, “Performance analysis of pinching-antenna systems,” IEEE Trans. Cognit. Commun. Networking, 2025.", + "[5] Z. Wang, C. Ouyang, X. Mu, Y. Liu, and Z. Ding, \"Modeling and beamforming optimization for pinching-antenna systems,\" arXiv preprint arXiv:2502.05917, 2025.", + "[6] S. A. Tegos, P. D. Diamantoulakis, Z. Ding, and G. K. Karagiannidis, \"Minimum data rate maximization for uplink pinching-antenna systems,\" IEEE Wireless Commun. Lett., 2025.", + "[7] F. Rezaei, S. M. Marvasti-Zadeh, C. Tellambura, and A. Maaref, \"Adversarial score-based generative models for MMSE-achieving AmBC channel estimation,\" IEEE Wireless Commun. Lett., vol. 13, no. 4, pp. 1053-1057, Apr. 2024.", + "[8] X. Fan, Y. Zou, and L. Zhai, \"Spatial-attention-based channel estimation in IRS-assisted mmWave MU-MISO systems,\" IEEE Internet Things J., vol. 11, no. 6, pp. 9801-9813, Mar. 2024.", + "[9] B. Zhou, X. Yang, S. Ma, F. Gao, and G. Yang, \"Pay less but get more: A dual-attention-based channel estimation network for massive MIMO systems with low-density pilots,\" IEEE Trans. Wireless Commun., vol. 23, no. 6, pp. 6061-6076, Jun. 2024.", + "[10] W. Cai, J. Jiang, F. Wang, J. Tang, S. Kim, and J. Huang, “A survey on mixture of experts,” arXiv preprint arXiv:2407.06204, 2024.", + "[11] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in Proc. NeurIPS, vol. 30, Nov. 2017." + ], + "bbox": [ + 506, + 566, + 921, + 941 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 911, + 30, + 919, + 40 + ], + "page_idx": 4 + } +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13268/27688151-5fde-4680-b981-ebf649a96c83_model.json b/data/2025/2503_13xxx/2503.13268/27688151-5fde-4680-b981-ebf649a96c83_model.json new file mode 100644 index 0000000000000000000000000000000000000000..965d950d0b1ba5583c7bfd9485a070313ec89bb6 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13268/27688151-5fde-4680-b981-ebf649a96c83_model.json @@ -0,0 +1,1090 @@ +[ + [ + { + "type": "page_number", + "bbox": [ + 0.912, + 0.031, + 0.921, + 0.041 + ], + "angle": 0, + "content": "1" + }, + { + "type": "title", + "bbox": [ + 0.163, + 0.07, + 0.838, + 0.141 + ], + "angle": 0, + "content": "Channel Estimation for Pinching-Antenna Systems (PASS)" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.148, + 0.782, + 0.165 + ], + "angle": 0, + "content": "Jian Xiao, Ji Wang, Senior Member, IEEE, and Yuanwei Liu, Fellow, IEEE" + }, + { + "type": "aside_text", + "bbox": [ + 0.023, + 0.266, + 0.06, + 0.708 + ], + "angle": 270, + "content": "arXiv:2503.13268v4 [cs.IT] 10 May 2025" + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.221, + 0.493, + 0.488 + ], + "angle": 0, + "content": "Abstract—Pinching antennas (PAs) represent a revolutionary flexible antenna technology that leverages dielectric waveguides and electromagnetic coupling to mitigate large-scale path loss. 
This letter is the first to explore channel estimation for Pinching-Antenna SyStems (PASS), addressing their uniquely ill-conditioned and underdetermined channel characteristics. In particular, two efficient deep learning-based channel estimators are proposed. 1) PAMoE: This estimator incorporates dynamic padding, feature embedding, fusion, and mixture of experts (MoE) modules, which effectively leverage the positional information of PAs and exploit expert diversity. 2) PAformer: This Transformer-style estimator employs the self-attention mechanism to predict channel coefficients in a per-antenna manner, which offers more flexibility to adaptively deal with dynamic numbers of PAs in practical deployment. Numerical results demonstrate that 1) the proposed deep learning-based channel estimators outperform conventional methods and exhibit excellent zero-shot learning capabilities, and 2) PAMoE delivers higher channel estimation accuracy via MoE specialization, while PAformer natively handles an arbitrary number of PAs, trading self-attention complexity for superior scalability." + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.493, + 0.492, + 0.52 + ], + "angle": 0, + "content": "Index Terms—Channel estimation, mixture of experts, pinching antenna, Transformer." + }, + { + "type": "title", + "bbox": [ + 0.217, + 0.542, + 0.352, + 0.556 + ], + "angle": 0, + "content": "I. INTRODUCTION" + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.561, + 0.493, + 0.865 + ], + "angle": 0, + "content": "ADVANCED multiple-input multiple-output (MIMO) is a critical enabling technology for sixth-generation (6G) wireless networks, targeting ultra-high-speed data transmission and seamless connectivity. Among emerging MIMO technologies, flexible antenna systems, e.g., fluid and movable antennas, offer dynamic channel reconfiguration capabilities by leveraging its positional or rotational adjustability to adapt to the evolving spatial distribution of users [1]. However, they still struggle with addressing large-scale path loss and the need for line-of-sight (LoS) links, both of which are essential for high-quality communication. To overcome these challenges, pinching antennas (PAs) offer a novel approach to creating controlled radiation sites. PAs were initially introduced and developed into a prototype by NTT DOCOMO in 2022 [2]. The fundamental concept of PAs relies on the transfer of electromagnetic (EM) waves from a dielectric waveguide to a nearby dielectric material. Compared to typical movable antennas, PAs can achieve movement ranges spanning thousands to tens of thousands of wavelengths and support a larger number of connected antennas on a waveguide. This technology allows" + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.875, + 0.49, + 0.921 + ], + "angle": 0, + "content": "Jian Xiao and Ji Wang are with the Department of Electronics and Information Engineering, College of Physical Science and Technology, Central China Normal University, Wuhan 430079, China (e-mail: jianx@mails.ccnu.edu.cn; jiwang@ccnu.edu.cn)." + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.921, + 0.49, + 0.946 + ], + "angle": 0, + "content": "Yuanwei Liu is with the Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong (e-mail: yuanwei@hku.hk)." 
+ }, + { + "type": "text", + "bbox": [ + 0.503, + 0.22, + 0.922, + 0.281 + ], + "angle": 0, + "content": "for the flexible positioning of PAs, enabling the establishment of adjustable and reliable LoS transceiver links in Pinching-Antenna SyStems (PASS), thereby effectively mitigating large-scale path loss." + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.281, + 0.922, + 0.719 + ], + "angle": 0, + "content": "The theoretical exploration of PA technology is still in its early stages, yet it has already garnered significant attention from both academia and industry. The authors of [3] first provided a comprehensive performance analysis for PASS and proposed a non-orthogonal multiple access (NOMA)-enhanced PASS. Subsequently, related works have explored performance analysis [4], physical modeling and beamforming [5], and minimum rate maximization for PASS [6]. It should be emphasized that the existing works on PA optimization strategies hinge crucially on precise channel estimation. However, the channel estimation in PASS has not been investigated in-depth due to new challenges. Firstly, PASS introduces a fundamental challenge in channel estimation due to the highly coupled nature of the in-waveguide channel and wireless propagation channel between PAs and users. Each waveguide is fed by only one radio frequency (RF) chain, while it may carry multiple PAs. This setup necessitates the recovery of high-dimensional information from low-dimensional observations, resulting in an ill-conditioned underdetermined recovery problem. Secondly, the in-waveguide channel is deterministic and completely depends on the positions of PAs in the waveguide. Hence, no stacking of pilot signals can produce additional independent linear equations to solve the high-dimensional wireless propagation channel. Moreover, the dynamic nature of PAs with the plug-and-play capability complicates the channel acquisition. The activation locations of a large number of PAs can be adjusted in a specified region to improve user service. Therefore, the wireless propagation channel presents near-field spatial non-stationarity effects due to the large array aperture." + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.719, + 0.922, + 0.945 + ], + "angle": 0, + "content": "To fill this research gap, this letter presents the first investigation into channel estimation for PASS. First and foremost, we extend classic linear channel estimators by adapting the antenna switching strategy for PASS, which provide fundamental channel estimation benchmarks. Furthermore, inspired by recent advances in deep learning (DL)-enabled channel estimation [7]-[9], we propose two efficient DL models to improve channel estimation accuracy while reducing pilot overhead. The first DL estimator, termed \\(PAMoE\\), is built on the mixture of experts (MoE) architecture, which integrates PA positions and pilot signal features through multi-expert mechanisms to adaptively model the dynamic channel distributions in PASS. To enhance the flexibility and scalability of the channel estimation model for dynamic PA counts, we further propose a Transformer-style estimator, termed \\(PAformer\\), which lever-" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.912, + 0.031, + 0.921, + 0.041 + ], + "angle": 0, + "content": "2" + }, + { + "type": "image", + "bbox": [ + 0.134, + 0.074, + 0.435, + 0.212 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.075, + 0.226, + 0.375, + 0.24 + ], + "angle": 0, + "content": "Fig. 1. 
Pinching antennas assisted multi-user systems." + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.263, + 0.492, + 0.385 + ], + "angle": 0, + "content": "ages the self-attention mechanism to predict channel coefficients on a per-antenna basis. Numerical results demonstrate that the proposed DL estimators achieve superior channel estimation accuracy with significantly reduced pilot overhead compared to conventional channel estimators. In particular, both PAMoE and PAformer exhibit superior zero-shot learning capabilities on dynamic PA configurations without retraining operations." + }, + { + "type": "title", + "bbox": [ + 0.102, + 0.398, + 0.466, + 0.412 + ], + "angle": 0, + "content": "II. SYSTEM MODEL AND PROBLEM FORMULATION" + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.416, + 0.493, + 0.737 + ], + "angle": 0, + "content": "As illustrated in Fig. 1, considering an uplink communication system that includes a base station (BS) equipped with \\(N\\) PAs and \\(K\\) single-antenna user equipments (UEs). Each PA has the same length \\(L\\) on the waveguide. In a Cartesian system, the UEs are assumed to be randomly distributed within a rectangular region on the \\(x - y\\) plane, with dimensions \\(D_{x}\\) and \\(D_{y}\\). The position of the \\(k\\)-th UE is represented by \\(\\psi_{k} = (x_{k},y_{k},0)\\). Suppose the waveguide extends parallel to the \\(x\\)-axis. Its height is denoted by \\(d\\), and its length aligns with the rectangular dimension \\(D_{x}\\). Hence, the coordinates of PA \\(n\\) are given by \\(\\psi_{n}^{\\mathrm{P}} = (x_{n}^{\\mathrm{P}},0,d)\\), where \\(x_{n}^{\\mathrm{P}}\\) lies in the interval \\([L,D_x]\\). In this work, the discrete activation deployment of PAs is adopted, which simplifies the hardware design and is more practical than continuous activation [5]. The PAs can only be activated at specific discrete positions along the waveguide, forming the feasible set \\(\\mathcal{S} = \\left\\{L + \\frac{D_x - L}{Q - 1} (q - 1) \\mid q = 1,2,\\dots ,Q\\right\\}\\), where \\(Q\\) denotes the number of discrete positions available. Since all \\(N\\) PAs lie along the same waveguide, the transmitted signal of each PA is essentially a phase-shifted version of the signal from the BS at the waveguide feed point. The in-waveguide channel \\(\\mathbf{g} \\in \\mathbb{C}^{N \\times 1}\\) can be expressed as" + }, + { + "type": "equation", + "bbox": [ + 0.117, + 0.742, + 0.49, + 0.772 + ], + "angle": 0, + "content": "\\[\n\\mathbf {g} = \\left[ \\alpha_ {1} e ^ {- j \\frac {2 \\pi}{\\lambda_ {g}} \\left| \\psi_ {0} ^ {\\mathrm {P}} - \\psi_ {1} ^ {\\mathrm {P}} \\right|}, \\dots , \\alpha_ {N} e ^ {- j \\frac {2 \\pi}{\\lambda_ {g}} \\left| \\psi_ {0} ^ {\\mathrm {P}} - \\psi_ {N} ^ {\\mathrm {P}} \\right|} \\right] ^ {\\top}, \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.778, + 0.49, + 0.869 + ], + "angle": 0, + "content": "where \\(\\psi_0^{\\mathrm{P}}\\) denotes the position of the waveguide's feed point. Parameter \\(\\lambda_{g} = \\frac{\\lambda}{n_{e}}\\) is the guided wavelength. Here, \\(\\lambda\\) is the wavelength in free space and \\(n_e\\) is the effective refractive index of the dielectric waveguide. \\(\\alpha_{n}\\) is the factor that determines the ratio of power exchanged between the waveguide and PA \\(n\\), governed by the coupling length of the PA [5]." 
+ }, + { + "type": "text", + "bbox": [ + 0.075, + 0.87, + 0.492, + 0.947 + ], + "angle": 0, + "content": "For the wireless propagation channel \\(\\mathbf{h}_k\\in \\mathbb{C}^{N\\times 1}\\) between the \\(k\\) -th UE and the PAs, \\(\\mathbf{h}_k\\) is composed of the LoS component \\(\\mathbf{h}_k^{\\mathrm{LoS}}\\) and the non-line-of-sight (NLoS) component \\(\\mathbf{h}_k^{\\mathrm{NLoS}}\\) , i.e., \\(\\mathbf{h}_k = \\Upsilon_k\\odot \\mathbf{h}_k^{\\mathrm{LoS}} + \\mathbf{h}_k^{\\mathrm{NLoS}}\\) . Here, the operator \\(\\odot\\) denotes the Hadamard product. \\(\\Upsilon_{k} = [\\upsilon_{k,n},\\dots ,\\upsilon_{k,N}]\\in \\mathbb{C}^{N\\times 1}\\)" + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.07, + 0.921, + 0.145 + ], + "angle": 0, + "content": "is a Bernoulli random vector and the variable \\(\\nu_{k,n}\\) takes values from the set \\(\\{0,1\\}\\), characterizing the existence of a LoS link between the \\(n\\)-th PA and the \\(k\\)-th UE1. The geometric spherical wavefront model-based LoS channel between the \\(k\\)-th UE and the PAs is expressed as [3]" + }, + { + "type": "equation", + "bbox": [ + 0.529, + 0.15, + 0.921, + 0.194 + ], + "angle": 0, + "content": "\\[\n\\mathbf {h} _ {k} ^ {\\mathrm {L o S}} = \\left[ \\frac {\\sqrt {\\eta} e ^ {- j \\frac {2 \\pi}{\\lambda} | \\psi_ {k} - \\psi_ {1} ^ {\\mathrm {P}} |}}{| \\psi_ {k} - \\psi_ {1} ^ {\\mathrm {P}} |}, \\dots , \\frac {\\sqrt {\\eta} e ^ {- j \\frac {2 \\pi}{\\lambda} | \\psi_ {k} - \\psi_ {N} ^ {\\mathrm {P}} |}}{| \\psi_ {k} - \\psi_ {N} ^ {\\mathrm {P}} |} \\right] ^ {\\top}, \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.2, + 0.921, + 0.25 + ], + "angle": 0, + "content": "where \\(\\eta = \\frac{\\lambda^2}{16\\pi^2}\\) denotes the path loss at a reference distance of \\(1\\mathrm{m}\\). Considering \\(S\\) scatterers in the \\(k\\)-th UE \\(\\rightarrow\\) PAs link, the NLoS channel \\(\\mathbf{h}_k^{\\mathrm{NLoS}} \\in \\mathbb{C}^{N \\times 1}\\) is given by" + }, + { + "type": "equation", + "bbox": [ + 0.605, + 0.256, + 0.921, + 0.297 + ], + "angle": 0, + "content": "\\[\n\\mathbf {h} _ {k} ^ {\\mathrm {N L o S}} = \\sqrt {\\frac {1}{S}} \\sum_ {s = 1} ^ {S} \\beta_ {k, s} \\mathbf {a} _ {k, s} e ^ {j \\eta_ {k, s}}, \\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.303, + 0.922, + 0.426 + ], + "angle": 0, + "content": "where \\(\\beta_{k,s} \\sim \\mathcal{CN}(0, \\sigma_s^2)\\) and \\(\\eta_{k,s} \\sim \\mathcal{U}[0, 2\\pi]\\) denote the complex gain and the random phase of the \\(s\\)-th scatterer path, respectively. \\(\\mathbf{a}_{k,s} \\in \\mathbb{C}^{N \\times 1}\\) represents the receiving array response at the PAs. Since the movable region of PAs is large, the equivalent array aperture of PA is likely to exceed the Rayleigh distance that is the criterion to determine the near-field boundary. 
Hence, the spherical wavefront is utilized to characterize array response \\(\\mathbf{a}_{k,s}\\) and is given by" + }, + { + "type": "equation", + "bbox": [ + 0.556, + 0.431, + 0.921, + 0.475 + ], + "angle": 0, + "content": "\\[\n\\mathbf {a} _ {k, s} = \\left[ \\frac {\\sqrt {\\eta} e ^ {- j 2 \\pi d _ {s , 1} / \\lambda}}{d _ {k , s} d _ {s , 1}}, \\dots , \\frac {\\sqrt {\\eta} e ^ {- j 2 \\pi d _ {s , N} / \\lambda}}{d _ {k , s} d _ {s , N}} \\right] ^ {\\top}, \\tag {4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.478, + 0.921, + 0.508 + ], + "angle": 0, + "content": "where \\(d_{k,s}\\) and \\(d_{s,n},n\\in \\{1,\\ldots ,N\\}\\) denote the distances from UE \\(k\\) to scatterer \\(s\\) and from scatterer \\(s\\) to PA \\(n\\) , respectively." + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.509, + 0.921, + 0.539 + ], + "angle": 0, + "content": "The received signal at the BS during the \\(t\\)-th slot can be expressed as" + }, + { + "type": "equation", + "bbox": [ + 0.635, + 0.545, + 0.921, + 0.585 + ], + "angle": 0, + "content": "\\[\ny _ {t} = \\sum_ {k = 1} ^ {K} \\mathbf {g} ^ {\\top} \\mathbf {h} _ {k} s _ {k, t} + n _ {t}, \\tag {5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.591, + 0.92, + 0.621 + ], + "angle": 0, + "content": "where \\( s_{k,t} \\) represents the symbol transmitted by the \\( k \\)-th UE, and \\( n_t \\sim CN(0, \\sigma_n^2) \\) is the additive white Gaussian noise." + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.622, + 0.922, + 0.803 + ], + "angle": 0, + "content": "In PASS, the in-waveguide channel \\(\\mathbf{g}\\) can be regarded as the deterministic channel component that depends on the locations and the coupling lengths of \\(\\mathrm{PAs}^2\\). Consequently, we merely need to estimate the wireless propagation channel \\(\\mathbf{h}_k\\) from the \\(k\\)-th UE to the PAs. However, PASS introduces a fundamental challenge in channel estimation due to the highly coupled nature of \\(\\mathbf{g}\\) and \\(\\mathbf{h}_k\\). Specifically, while each waveguide is connected to multiple PAs, the waveguide channel is fixed and cannot be reconfigured to perform diverse beam measurements as in conventional hybrid precoding. As a result, the system must infer a high-dimensional channel vector \\(\\mathbf{h}_k\\) from inherently low-dimensional pilot observations, typically just one" + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.817, + 0.922, + 0.887 + ], + "angle": 0, + "content": "1In the existing works for PASS, the free space channel model between UEs and the PAs is commonly utilized by assuming the flexible PAs close to UEs [3]–[6]. However, in the practical wireless propagation environment, the NLoS channel component caused by the statical and dynamic scatters should be considered. In particular, all PAs are hard to provide the full coverage of LoS services for all UEs in practical deployment environment." + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.887, + 0.922, + 0.945 + ], + "angle": 0, + "content": "2This work employs the electronic activation of pre-positioned PAs along the waveguide. The system controller only needs to determine the activation state to ascertain the exact positions of PAs. The reconfiguration of the activated PA set is usually several orders of magnitude faster than the channel coherence time, enabling it to respond efficiently to rapid channel variations." 
+ } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.912, + 0.031, + 0.921, + 0.041 + ], + "angle": 0, + "content": "3" + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.07, + 0.493, + 0.252 + ], + "angle": 0, + "content": "scalar per waveguide per measurement. To elaborate, suppose the widely used orthogonal pilot transmission strategy, e.g., time-division, is adopted, and \\( T \\) denotes the number of pilot transmission slots transmitted by UE \\( k \\). Note that even if the UE transmits \\( T \\geq N \\) or more pilot slots, a single waveguide output may provide no more than one linearly independent measurement per slot. That is, each pilot symbol experiences the same fixed merging of \\( N \\) PAs in the waveguide, yielding only repeated versions of the same scalar. Hence, the classic error-criterion-based algorithms, e.g., least square (LS) or linear minimum mean square error (LMMSE) estimators, are difficult to apply directly for channel estimation in \\( \\mathrm{PASS}^3 \\)." + }, + { + "type": "title", + "bbox": [ + 0.097, + 0.267, + 0.471, + 0.282 + ], + "angle": 0, + "content": "III. DEEP LEARNING BASED CHANNEL ESTIMATION" + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.286, + 0.492, + 0.527 + ], + "angle": 0, + "content": "In this section, to improve the channel estimation performance, we leverage DL models to develop efficient channel estimation schemes for PASS. However, two critical challenges emerge when applying the DL approach to PASS. Firstly, the channel characteristics vary dramatically as PAs freely change positions within designated regions, which requires the efficient network architecture with sufficient capacity to learn the extended channel state space. Secondly, by dynamically activating candidate PAs along the waveguide in response to real-time communication demands, the system introduces time-varying channel dimension and heterogeneous signal distribution. The proposed DL-based channel estimator necessitates an innovative architecture capable of adaptively handling spatio-temporal variations in channel parameters, thereby fostering advancements in neural network design to enable robust estimation under non-stationary conditions." + }, + { + "type": "title", + "bbox": [ + 0.075, + 0.547, + 0.245, + 0.56 + ], + "angle": 0, + "content": "A. Dataset Construction" + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.565, + 0.493, + 0.854 + ], + "angle": 0, + "content": "In the offline training stage, we collect \\(N_{s}\\) paired samples, i.e., the pilot observation vector \\(\\widetilde{\\mathbf{y}}_k\\in \\mathbb{C}^{T\\times 1}\\) at the BS, the position set \\(\\Psi^{\\mathrm{P}} = [\\psi_n^1,\\dots ,\\psi_N^{\\mathrm{P}}] = [(x_1^{\\mathrm{P}},0,d),\\dots ,(x_N^{\\mathrm{P}},0,d)]\\) of PAs, and the corresponding channel sample \\(\\mathbf{h}_k\\). In the proposed channel estimation network, the input tensor is designed as the set of both PA positions and the pilot observation. Considering the waveguide is deployed parallel to the \\(x\\)-axis, we merely need the \\(x\\)-axis coordinate set \\(\\mathbf{x}^{\\mathrm{P}} = [x_1^{\\mathrm{P}},\\ldots ,x_N^{\\mathrm{P}}]^T\\in \\mathbb{R}^{N\\times 1}\\) of PAs as the input feature. 
To enable the neural network to adaptively estimate channels across varying PA configurations, we utilize the in-waveguide channel \\(\\mathbf{g}\\) to transform the observation \\(\\widetilde{\\mathbf{y}}_k\\in \\mathbb{C}^{T\\times 1}\\) into \\(\\widetilde{\\mathbf{Y}}_k = \\mathbf{g}\\widetilde{\\mathbf{y}}_k^\\top \\in \\mathbb{C}^{N\\times T}\\). This transformation aligns the input dimension of neural network with the current number of PAs \\(N\\) in PASS, which facilitates the subsequent channel estimation network design. The complex-value matrix \\(\\widetilde{\\mathbf{Y}}_k\\) is converted into the real-value tensor \\(\\overline{\\mathbf{Y}}_k = \\{\\Re (\\widetilde{\\mathbf{Y}}_k),\\Im (\\widetilde{\\mathbf{Y}}_k)\\} \\in \\mathbb{R}^{N\\times 2T}\\) for neural network processing. Accordingly, the label tensor in the network training is \\(\\bar{\\mathbf{H}}_k = \\{\\Re (\\mathbf{h}_k),\\Im (\\mathbf{h}_k)\\} \\in \\mathbb{R}^{N\\times 2}\\)." + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.863, + 0.493, + 0.947 + ], + "angle": 0, + "content": "3To collect independent pilot measurements for each PA, as required by LS and LMMSE, one feasible approach is to introduce an antenna switching matrix that selects subsets of PAs in different time slots to realize the signal separation. In Section IV of this letter, we provide antenna switching matrix-based LS and LMMSE estimators as channel estimation benchmarks. Note that switching each PA in different slots introduces hardware overhead, extended measurement time, and potential switch losses, all of which reduce practicality." + }, + { + "type": "image", + "bbox": [ + 0.569, + 0.07, + 0.867, + 0.203 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.504, + 0.213, + 0.813, + 0.227 + ], + "angle": 0, + "content": "Fig. 2. Proposed PAMoE model for channel estimation." + }, + { + "type": "title", + "bbox": [ + 0.504, + 0.253, + 0.902, + 0.267 + ], + "angle": 0, + "content": "B. PAMoE: Mixture of Experts-Based Channel Estimation" + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.271, + 0.921, + 0.346 + ], + "angle": 0, + "content": "We first propose a PAMoE estimator as illustrated in Fig. 2 to address high-dimensional dynamic channel estimation from low-dimensional received pilots, which incorporates the dynamic padding, feature embedding, fusion, and position-aware MoE modules [10]." + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.347, + 0.921, + 0.513 + ], + "angle": 0, + "content": "1) Dynamic Padding: To accommodate variable PA counts \\(N\\) and handle variable-length inputs, \\(PAMoE\\) employs dynamic padding up to a maximum \\(N_{\\mathrm{max}}\\). Let \\(B\\) denote the batch size in the network training stage. The input tensor in a training batch can be expressed as PA positions \\(\\mathbf{P} \\in \\mathbb{R}^{B \\times N \\times 1}\\) and pilot signals \\(\\mathbf{S} \\in \\mathbb{R}^{B \\times N \\times 2T}\\), which are the batch version of \\(\\mathbf{x}^{\\mathrm{P}}\\) and \\(\\overline{\\mathbf{Y}}_k\\), respectively. 
If \\(N < N_{\\mathrm{max}}\\), the network pads along the PA dimension so that both \\(\\mathbf{P}\\) and \\(\\mathbf{S}\\) become length \\(N_{\\mathrm{max}}\\), i.e., \\(\\mathbf{P}' = [\\mathbf{P}, \\phi^1] \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times 1}\\), \\(\\mathbf{S}' = [\\mathbf{S}, \\phi^2] \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times d_{\\mathrm{sig}}}\\), where \\(\\phi^1\\) and \\(\\phi^2 \\in \\mathbb{R}^{N_{\\mathrm{pad}} \\times 1}(N_{\\mathrm{pad}} = N_{\\mathrm{max}} - N)\\) denote the learnable padding embeddings, respectively." + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.513, + 0.92, + 0.62 + ], + "angle": 0, + "content": "2) Feature Embedding: To handle continuous spatial information and achieve efficient extrapolation of PAs, the Fourier basis function is employed, developing a Fourier positional embedding approach. Suppose a set of exponentially increasing frequency bases \\(\\mathbf{f} = \\left[2^{f}\\pi \\right]_{f = 0}^{F - 1}\\in \\mathbb{R}^{F}\\) is defined, where \\(F\\) is the number of frequency components. The scaled position encodings can be expressed as" + }, + { + "type": "equation", + "bbox": [ + 0.627, + 0.628, + 0.921, + 0.646 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {\\Theta} = \\mathbf {S} ^ {\\prime} \\odot \\mathbf {f} \\in \\mathbb {R} ^ {B \\times N _ {\\max } \\times F}. \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.655, + 0.921, + 0.685 + ], + "angle": 0, + "content": "Then, we apply sinusoidal transformations to obtain sine and cosine features, and then are concatenated as" + }, + { + "type": "equation", + "bbox": [ + 0.566, + 0.694, + 0.921, + 0.713 + ], + "angle": 0, + "content": "\\[\n\\mathbf {E} = \\operatorname {C o n c a t} \\left[ \\sin (\\boldsymbol {\\Theta}), \\cos (\\boldsymbol {\\Theta}) \\right] \\in \\mathbb {R} ^ {B \\times N _ {\\max } \\times 2 F}. \\tag {7}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.721, + 0.92, + 0.842 + ], + "angle": 0, + "content": "Further, \\(\\mathbf{E}\\) is projected into the embedding space \\(\\mathbf{Z}_{\\mathrm{pos}} = \\mathbf{EW}_{\\mathrm{pos}} + \\mathbf{b}_{\\mathrm{pos}} \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times d_{\\mathrm{embed}}}\\), where \\(\\mathbf{W}_{\\mathrm{pos}} \\in \\mathbb{R}^{2F \\times d_{\\mathrm{embed}}}\\) is the learnable weight matrix and \\(\\mathbf{b}_{\\mathrm{pos}} \\in \\mathbb{R}^{d_{\\mathrm{embed}}}\\) is the bias term. This Fourier embedding effectively captures multi-scale positional variations in the input space. The pilot signal data is embedded to the same hidden dimension \\(d_{\\mathrm{hid}}\\) by the multilayer perceptron (MLP)-based linear mapping module \\(\\phi_{\\mathrm{sig}}\\), i.e., \\(\\mathbf{Z}_{\\mathrm{sig}} = \\phi_{\\mathrm{sig}}(\\mathbf{S}') \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times d_{\\mathrm{hid}}}\\)." + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.843, + 0.922, + 0.916 + ], + "angle": 0, + "content": "3) Feature Fusion: To fuse positional features with pilot signal features, we employ a gating function to generate a gate from a control feature based on PA positions and apply it to the target feature based on pilot signal. 
The gating operation is formulated as" + }, + { + "type": "equation", + "bbox": [ + 0.58, + 0.926, + 0.921, + 0.945 + ], + "angle": 0, + "content": "\\[\n\\mathbf {G} = \\sigma \\left(\\mathbf {W} _ {g} \\mathbf {Z} _ {\\text {p o s}} + \\mathbf {b} _ {g}\\right) \\in \\mathbb {R} ^ {B \\times N _ {\\max } \\times d _ {\\text {h i d}}}, \\tag {8}\n\\]" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.912, + 0.032, + 0.921, + 0.04 + ], + "angle": 0, + "content": "4" + }, + { + "type": "image", + "bbox": [ + 0.14, + 0.071, + 0.434, + 0.26 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.075, + 0.271, + 0.393, + 0.284 + ], + "angle": 0, + "content": "Fig. 3. Proposed PAformer model for channel estimation." + }, + { + "type": "equation", + "bbox": [ + 0.14, + 0.306, + 0.492, + 0.326 + ], + "angle": 0, + "content": "\\[\n\\mathbf {Z} _ {\\text {f u s e d}} = \\mathbf {G} \\odot \\mathbf {Z} _ {\\text {s i g}} + \\mathbf {Z} _ {\\text {p o s}} \\in \\mathbb {R} ^ {B \\times N _ {\\max } \\times d _ {\\text {h i d}}}, \\tag {9}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.333, + 0.492, + 0.378 + ], + "angle": 0, + "content": "where \\(\\mathbf{W}_g\\) and \\(\\mathbf{b}_g\\) denote the weight and bias of a linear layer with \\(d_{\\mathrm{hid}}\\) neurons, respectively, and \\(\\sigma (\\cdot)\\) is a Sigmoid activation function." + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.379, + 0.491, + 0.515 + ], + "angle": 0, + "content": "4) MoE With Gating Network: Suppose there are \\( E \\) experts and each expert takes \\( \\mathbf{Z}_{\\mathrm{fused}} \\) and processes it with an MLP-Mixer block across both the feature dimension \\( d_{\\mathrm{hid}} \\) and the spatial dimension \\( N_{\\mathrm{max}} \\). Let \\( \\mathbf{Z}_e = E_e(\\mathbf{Z}_{\\mathrm{fused}}) \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times d_{\\mathrm{hid}}} \\) denote the output of expert \\( e \\), \\( (e = 1, \\dots, E) \\). A gating network is employed to weight each expert's output. Typically, it pools \\( \\mathbf{Z}_{\\mathrm{fused}} \\) over the PA dimension to get a global context \\( \\mathbf{z}_{\\mathrm{pool}} \\in \\mathbb{R}^{B \\times d_{\\mathrm{hid}}} \\), and then applies the softmax activation function to produce gating weights, which can be expressed as" + }, + { + "type": "equation", + "bbox": [ + 0.15, + 0.523, + 0.49, + 0.543 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {\\alpha} = \\operatorname {s o f t m a x} \\left(\\mathbf {W} _ {\\alpha} \\mathbf {z} _ {\\text {p o o l}} + \\mathbf {b} _ {\\alpha}\\right) \\in \\mathbb {R} ^ {B \\times E}, \\tag {10}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.55, + 0.49, + 0.581 + ], + "angle": 0, + "content": "where \\(\\sum_{e=1}^{E} \\alpha_{b,e} = 1\\), \\(\\forall b \\in \\{1, \\dots, B\\}\\), \\(\\mathbf{W}_{\\alpha}\\) and \\(\\mathbf{b}_{\\alpha}\\) are the weight and bias of a linear layer with \\(E\\) neurons, respectively." + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.581, + 0.492, + 0.643 + ], + "angle": 0, + "content": "Given the output \\(\\mathbf{Z}_e\\) of each expert and the gating weights \\(\\alpha\\), we form a weighted sum over experts. 
Let \\(\\alpha\\) be reshaped to \\(\\mathbb{R}^{B\\times 1\\times E}\\) so it can broadcast over the \\(N_{\\mathrm{max}}\\) dimension to obtain the following output of MoE" + }, + { + "type": "equation", + "bbox": [ + 0.162, + 0.648, + 0.49, + 0.688 + ], + "angle": 0, + "content": "\\[\n\\mathbf {Z} _ {\\mathrm {M o E}} = \\sum_ {e = 1} ^ {E} \\boldsymbol {\\alpha} _ {e} \\mathbf {Z} _ {e} \\in \\mathbb {R} ^ {B \\times N _ {\\max } \\times d _ {\\mathrm {h i d}}} \\tag {11}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.693, + 0.492, + 0.8 + ], + "angle": 0, + "content": "Finally, the network concatenates the positional feature \\(\\mathbf{Z}_{\\mathrm{pos}}\\) and \\(\\mathbf{Z}_{\\mathrm{MoE}}\\) along the last dimension, i.e., \\(\\mathbf{Z}_{\\mathrm{concat}} \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times (2d_{\\mathrm{hid}})}\\). Then, a linear layer maps \\(\\mathbf{Z}_{\\mathrm{concat}}\\) to the estimated channel \\(\\widetilde{\\mathbf{H}} \\in \\mathbb{R}^{B \\times N \\times 2}\\). In \\(PAMoE\\), the dynamic padding pattern requires the network to fix a predefined maximum number of PAs \\(N_{\\mathrm{max}}\\) in the training stage, which is a limiting factor if PAs exceed the predefined maximum bound \\(N_{\\mathrm{max}}\\)." + }, + { + "type": "title", + "bbox": [ + 0.076, + 0.82, + 0.442, + 0.835 + ], + "angle": 0, + "content": "C. P Aformer: Transformer-Based Channel Estimation" + }, + { + "type": "text", + "bbox": [ + 0.075, + 0.839, + 0.492, + 0.947 + ], + "angle": 0, + "content": "We further propose a PAformer estimator as illustrated in Fig. 3 to address the scalability limitation of \\( PAMoE \\), which is inherently constrained by fixed \\( N_{\\mathrm{max}} \\) during training. PAformer incorporates self-attention layers with permutation-equivariance, enabling the network to dynamically accommodate PAs of arbitrary size \\( N \\). The proposed PAformer predicts channel coefficients in a per-antenna manner, offering" + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.07, + 0.921, + 0.099 + ], + "angle": 0, + "content": "flexibility if new data has more antennas than seen before in the training stage." + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.1, + 0.921, + 0.176 + ], + "angle": 0, + "content": "1) Input Embedding: In PAformer, we first concatenate the position and pilot signal features along their last dimension, i.e., \\(\\mathbf{V} = \\left[\\mathbf{P},\\mathbf{S}\\right]\\in \\mathbb{R}^{B\\times N\\times (2T + 1)}\\). Next, \\(\\mathbf{V}\\) is mapped into a hidden representation \\(\\mathbf{Z}^{(0)}\\) via an embedding network \\(\\phi\\) based on MLP, i.e., \\(\\mathbf{Z}^{(0)} = \\phi (\\mathbf{V})\\in \\mathbb{R}^{B\\times N\\times d_{\\mathrm{hid}}}\\)." + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.177, + 0.922, + 0.48 + ], + "angle": 0, + "content": "2) Transformer Encoder: PAformer employs a pre-norm Transformer with \\(P\\) stacked self-attention blocks (SABs), each comprising multi-head self-attention (MHA) and a feedforward network (FFN), both with residual connections and layer normalization. Let \\(\\mathbf{Z}^{(\\ell -1)}\\) denote the input to the \\(\\ell\\)-th SAB, and \\(\\overline{\\mathbf{Z}}^{(\\ell -1)} = \\mathrm{LayerNorm}(\\mathbf{Z}^{(\\ell -1)})\\) is the pre-normalized feature representation by the layer normalization. 
The output of the MHA module is expressed as \\(\\mathbf{A}^{(\\ell)} = \\mathrm{MHA}(\\mathbf{Q},\\mathbf{K},\\mathbf{V})\\in\\) \\(\\mathbb{R}^{B\\times N\\times d_{\\mathrm{hid}}}\\) where Q, K, and V denote query, key, and value tokens of \\(\\overline{\\mathbf{Z}}^{(\\ell -1)}\\) [11], respectively. The result is combined via a residual connection \\(\\overline{\\mathbf{Z}}^{\\prime (\\ell)} = \\overline{\\mathbf{Z}}^{(\\ell -1)} + \\mathbf{A}^{(\\ell)}\\). Then, the FFN is applied to obtain \\(\\mathbf{F}^{(\\ell)} = \\mathrm{FFN}(\\overline{\\mathbf{Z}}^{\\prime (\\ell)})\\), where FFN is a point-wise MLP operating on each PA token separately. Another residual connection completes this sub-layer \\(\\mathbf{Z}^{(\\ell)} =\\) \\(\\overline{\\mathbf{Z}}^{\\prime (\\ell)} + \\mathbf{F}^{(\\ell)}\\). Stacking and applying \\(P\\) blocks in sequence yields the final feature representation \\(\\mathbf{Z}^P = \\mathrm{SAB}^{(\\ell)}\\bigg(\\mathbf{Z}^{(\\ell -1)}\\bigg)\\in\\) \\(\\mathbb{R}^{B\\times N\\times d_{\\mathrm{hid}}}\\), \\(\\ell = 1,\\dots ,P\\), starting from \\(\\mathbf{Z}^{(0)}\\). The final step maps each antenna-wise feature vector in \\(\\mathbf{Z}^{(P)}\\) to the estimated channel \\(\\widehat{\\mathbf{H}}_i\\), where \\(i = 1,\\ldots ,N\\) indexes PAs." + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.481, + 0.922, + 0.572 + ], + "angle": 0, + "content": "For the proposed two DL estimators, during the test stage, the trained network can be applied to scenarios with dynamic numbers of PAs, even though the test dataset has a different data distribution and dimensionality compared to the training dataset. This property of the channel estimation network can also be termed as zero-shot learning." + }, + { + "type": "title", + "bbox": [ + 0.619, + 0.59, + 0.808, + 0.604 + ], + "angle": 0, + "content": "IV. NUMERICAL RESULTS" + }, + { + "type": "text", + "bbox": [ + 0.504, + 0.61, + 0.922, + 0.899 + ], + "angle": 0, + "content": "In simulation setups, we set \\(Q = 200\\), \\(K = 4\\), \\(n_e = 1.4\\), \\(d = 5\\), \\(D_x \\times D_y = 20 \\times 20 \\mathrm{~m}^2\\), \\(S = 6\\) and the carrier frequency \\(f_c = 28 \\mathrm{GHz}\\). In the training dataset construction, we collect \\(N_s = 10^5\\) training samples with the fixed number of PAs \\(N^{\\mathrm{tr}} = 16\\), while the test number of PAs is \\(N^{\\mathrm{te}} \\in \\{8, \\dots, 32\\}\\). In the hyper-parameter setups of the proposed DL estimators, we set \\(N_{\\mathrm{max}} = 32\\), \\(E = 4\\), \\(d_{\\mathrm{hid}} = 64\\), \\(P = 4\\), and \\(B = 256\\). The 1-norm \\(\\ell_1\\) is used as the loss function in the network training, i.e., \\(\\ell_1 = ||\\widehat{\\mathbf{H}}_k - \\overline{\\mathbf{H}}_k||_1\\), while the normalized mean squared error (NMSE) is employed as the performance metric, i.e., \\(\\mathrm{NMSE} = \\mathbb{E}\\{||\\widehat{\\mathbf{H}}_k - \\overline{\\mathbf{H}}_k||_F^2 / ||\\overline{\\mathbf{H}}_k||_F^2\\}\\). The antenna switching-based LS and LMMSE estimators are used as the conventional channel estimation benchmarks. Furthermore, to establish relevant benchmarks incorporating state-of-the-art DL estimators, we further develop two comparison DL benchmarks drawing inspiration from recent attention-based channel estimation networks, i.e., SA-RN-CE [8] and DACEN [9], which are created by replacing the MoE module in PAMoE with spatial attention (SA) and dual attention (DA) mechanisms, respectively." 
+ }, + { + "type": "text", + "bbox": [ + 0.503, + 0.9, + 0.922, + 0.946 + ], + "angle": 0, + "content": "In Fig. 4, we present the NMSE performance of different channel estimation schemes with \\( N^{\\mathrm{te}} = 32 \\). The proposed DL estimators with reduced pilot overhead \\( T \\) outperform" + } + ], + [ + { + "type": "page_number", + "bbox": [ + 0.912, + 0.031, + 0.921, + 0.041 + ], + "angle": 0, + "content": "5" + }, + { + "type": "image", + "bbox": [ + 0.15, + 0.079, + 0.411, + 0.24 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.075, + 0.254, + 0.345, + 0.267 + ], + "angle": 0, + "content": "Fig. 4. NMSE vs. SNR for different algorithms." + }, + { + "type": "image", + "bbox": [ + 0.149, + 0.294, + 0.412, + 0.455 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.075, + 0.469, + 0.415, + 0.483 + ], + "angle": 0, + "content": "Fig. 5. NMSE vs. number of PAs \\(N\\) for different algorithms." + }, + { + "type": "table_caption", + "bbox": [ + 0.125, + 0.502, + 0.443, + 0.526 + ], + "angle": 0, + "content": "TABLEI COMPLEXITY ANALYSIS OF PROPOSED DL ESTIMATORS" + }, + { + "type": "table", + "bbox": [ + 0.106, + 0.534, + 0.464, + 0.675 + ], + "angle": 0, + "content": "
<table><tr><td rowspan='2'>Estimator</td><td rowspan='2'>N</td><td rowspan='2'>Params (K)</td><td rowspan='2'>FLOPs (M)</td><td colspan='2'>Runtime (μs)</td></tr><tr><td>CPU</td><td>GPU</td></tr><tr><td rowspan='3'>PAMoE</td><td>8</td><td rowspan='3'>211.9</td><td rowspan='3'>13.18</td><td rowspan='3'>211.8</td><td rowspan='3'>12.73</td></tr><tr><td>16</td></tr><tr><td>32</td></tr><tr><td rowspan='3'>PAformer</td><td>8</td><td rowspan='3'>545.9</td><td>4.377</td><td>74.62</td><td>3.814</td></tr><tr><td>16</td><td>8.756</td><td>128.2</td><td>7.943</td></tr><tr><td>32</td><td>17.51</td><td>226.8</td><td>12.95</td></tr></table>
" + }, + { + "type": "text", + "bbox": [ + 0.074, + 0.704, + 0.493, + 0.947 + ], + "angle": 0, + "content": "the existing linear estimators and DL models. The superior accuracy of \\(PAMoE\\) compared to \\(PA\\text{former}\\) primarily stems from \\(PAMoE\\)'s more specialized design, particularly in its handling and exploitation of the pinching antenna positional information. \\(PA\\text{former}\\), while robust and scalable due to its Transformer foundation, operates with a more general mechanism to support variable PA counts. In Fig. 5, we provide NMSE performance of different channel estimation schemes for varying numbers of PAs, where the signal-to-noise ratio (SNR) is set to 0 dB. Compared to SA-RN-CE and DACEN, the proposed \\(PAMoE\\) and \\(PA\\text{former}\\) trained by the fixed PA configuration exhibit excellent robustness and generalization for dynamic PAs. The proposed DL estimators possess sufficient zero-shot learning capabilities to deal with distinct data distribution in the test stage. Table I summarizes the number of trainable parameters (Params), floating point" + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.07, + 0.923, + 0.327 + ], + "angle": 0, + "content": "operations (FLOPs) and inference runtime of the proposed \\(PAMoE\\) and \\(PA\\text{former}\\), where the NVIDIA RTX 3090 GPU and the 12th Gen Intel(R) Core(TM) i9- 12900K CPU are used as the inference platform. The average inference runtime per channel estimation instance was calculated by averaging over 1000 Monte Carlo experiments. Firstly, for different numbers of PAs \\(N\\), the Params of the proposed \\(PAMoE\\) and \\(PA\\text{former}\\) are the same due to the fixed network architecture. Secondly, the FLOPs of \\(PA\\text{former}\\) increase linearly with the number of PAs \\(N\\), as the higher-dimensional input tensor needs to be processed in the SAB of \\(PA\\text{former}\\). In the \\(PAMoE\\) model, the input tensor is uniformly padded to the same \\(N_{\\max}\\) via padding interpolation, and hence the FLOPs of the \\(PAMoE\\) remain constant for different \\(N\\). Finally, we observe that both \\(PAMoE\\) and \\(PA\\text{former}\\) can achieve the channel estimation at the microseconds (\\(\\mu s\\)) level, demonstrating the practical feasibility of the proposed models for real-time deployment." + }, + { + "type": "title", + "bbox": [ + 0.648, + 0.338, + 0.78, + 0.351 + ], + "angle": 0, + "content": "V. CONCLUSIONS" + }, + { + "type": "text", + "bbox": [ + 0.503, + 0.356, + 0.923, + 0.537 + ], + "angle": 0, + "content": "In this letter, we investigated the channel estimation approaches in PASS, and proposed two DL estimators with the advanced neural network architecture to infer a high-dimensional channel vector from inherently low-dimensional pilot observations, respectively. Specifically, the proposed PAMoE accommodates variable PA configurations and exploits multi-expert diversity in the MLP-Mixer for improved channel estimation. Accordingly, the proposed PAFomer is capable of handling arbitrary number of PAs thanks to the self-attention mechanism. Numerical results demonstrated that the proposed DL estimators outperform conventional methods, and significantly reduce the pilot overhead." + }, + { + "type": "title", + "bbox": [ + 0.665, + 0.549, + 0.762, + 0.562 + ], + "angle": 0, + "content": "REFERENCES" + }, + { + "type": "ref_text", + "bbox": [ + 0.515, + 0.567, + 0.922, + 0.603 + ], + "angle": 0, + "content": "[1] X. Shao, Q. Jiang, and R. 
Zhang, \"6D movable antenna based on user distribution: Modeling and optimization,\" IEEE Trans. Wireless Commun., vol. 24, no. 1, pp. 355-370, Jan. 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.515, + 0.603, + 0.922, + 0.637 + ], + "angle": 0, + "content": "[2] A. Fukuda, H. Yamamoto, H. Okazaki, Y. Suzuki, and K. Kawai, \"Pinching antenna: Using a dielectric waveguide as an antenna,\" NTT DOCOMO Technical J., vol. 23, no. 3, pp. 5-12, Jan. 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.514, + 0.637, + 0.921, + 0.659 + ], + "angle": 0, + "content": "[3] Z. Ding, R. Schober, and H. Vincent Poor, “Flexible-antenna systems: A pinching-antenna perspective,” IEEE Trans. Commun., 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.514, + 0.659, + 0.921, + 0.693 + ], + "angle": 0, + "content": "[4] D. Tyrovolas, S. A. Tegos, P. D. Diamantoulakis, S. Ioannidis, C. K. Liaskos, and G. K. Karagiannidis, “Performance analysis of pinching-antenna systems,” IEEE Trans. Cognit. Commun. Networking, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.514, + 0.693, + 0.921, + 0.727 + ], + "angle": 0, + "content": "[5] Z. Wang, C. Ouyang, X. Mu, Y. Liu, and Z. Ding, \"Modeling and beamforming optimization for pinching-antenna systems,\" arXiv preprint arXiv:2502.05917, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.514, + 0.727, + 0.921, + 0.761 + ], + "angle": 0, + "content": "[6] S. A. Tegos, P. D. Diamantoulakis, Z. Ding, and G. K. Karagiannidis, \"Minimum data rate maximization for uplink pinching-antenna systems,\" IEEE Wireless Commun. Lett., 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.514, + 0.761, + 0.921, + 0.806 + ], + "angle": 0, + "content": "[7] F. Rezaei, S. M. Marvasti-Zadeh, C. Tellambura, and A. Maaref, \"Adversarial score-based generative models for MMSE-achieving AmBC channel estimation,\" IEEE Wireless Commun. Lett., vol. 13, no. 4, pp. 1053-1057, Apr. 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.514, + 0.806, + 0.921, + 0.841 + ], + "angle": 0, + "content": "[8] X. Fan, Y. Zou, and L. Zhai, \"Spatial-attention-based channel estimation in IRS-assisted mmWave MU-MISO systems,\" IEEE Internet Things J., vol. 11, no. 6, pp. 9801-9813, Mar. 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.514, + 0.841, + 0.921, + 0.886 + ], + "angle": 0, + "content": "[9] B. Zhou, X. Yang, S. Ma, F. Gao, and G. Yang, \"Pay less but get more: A dual-attention-based channel estimation network for massive MIMO systems with low-density pilots,\" IEEE Trans. Wireless Commun., vol. 23, no. 6, pp. 6061-6076, Jun. 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.508, + 0.886, + 0.921, + 0.908 + ], + "angle": 0, + "content": "[10] W. Cai, J. Jiang, F. Wang, J. Tang, S. Kim, and J. Huang, “A survey on mixture of experts,” arXiv preprint arXiv:2407.06204, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.508, + 0.908, + 0.921, + 0.942 + ], + "angle": 0, + "content": "[11] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in Proc. NeurIPS, vol. 30, Nov. 2017." 
+ }, + { + "type": "list", + "bbox": [ + 0.508, + 0.567, + 0.922, + 0.942 + ], + "angle": 0, + "content": null + } + ] +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13268/27688151-5fde-4680-b981-ebf649a96c83_origin.pdf b/data/2025/2503_13xxx/2503.13268/27688151-5fde-4680-b981-ebf649a96c83_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..809c0eb842d42e9710aa9169e0f9704647173967 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13268/27688151-5fde-4680-b981-ebf649a96c83_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:36ad8569b4283436789ec0402d894921d4515dbd509930ed799b73ecbeefaef3 +size 1493277 diff --git a/data/2025/2503_13xxx/2503.13268/full.md b/data/2025/2503_13xxx/2503.13268/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b24e47083e5613e61ea253a59d3f4bda0b781b35 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13268/full.md @@ -0,0 +1,188 @@ +# Channel Estimation for Pinching-Antenna Systems (PASS) + +Jian Xiao, Ji Wang, Senior Member, IEEE, and Yuanwei Liu, Fellow, IEEE + +Abstract—Pinching antennas (PAs) represent a revolutionary flexible antenna technology that leverages dielectric waveguides and electromagnetic coupling to mitigate large-scale path loss. This letter is the first to explore channel estimation for Pinching-Antenna SyStems (PASS), addressing their uniquely ill-conditioned and underdetermined channel characteristics. In particular, two efficient deep learning-based channel estimators are proposed. 1) PAMoE: This estimator incorporates dynamic padding, feature embedding, fusion, and mixture of experts (MoE) modules, which effectively leverage the positional information of PAs and exploit expert diversity. 2) PAformer: This Transformer-style estimator employs the self-attention mechanism to predict channel coefficients in a per-antenna manner, which offers more flexibility to adaptively deal with dynamic numbers of PAs in practical deployment. Numerical results demonstrate that 1) the proposed deep learning-based channel estimators outperform conventional methods and exhibit excellent zero-shot learning capabilities, and 2) PAMoE delivers higher channel estimation accuracy via MoE specialization, while PAformer natively handles an arbitrary number of PAs, trading self-attention complexity for superior scalability. + +Index Terms—Channel estimation, mixture of experts, pinching antenna, Transformer. + +# I. INTRODUCTION + +ADVANCED multiple-input multiple-output (MIMO) is a critical enabling technology for sixth-generation (6G) wireless networks, targeting ultra-high-speed data transmission and seamless connectivity. Among emerging MIMO technologies, flexible antenna systems, e.g., fluid and movable antennas, offer dynamic channel reconfiguration capabilities by leveraging its positional or rotational adjustability to adapt to the evolving spatial distribution of users [1]. However, they still struggle with addressing large-scale path loss and the need for line-of-sight (LoS) links, both of which are essential for high-quality communication. To overcome these challenges, pinching antennas (PAs) offer a novel approach to creating controlled radiation sites. PAs were initially introduced and developed into a prototype by NTT DOCOMO in 2022 [2]. The fundamental concept of PAs relies on the transfer of electromagnetic (EM) waves from a dielectric waveguide to a nearby dielectric material. 
Compared to typical movable antennas, PAs can achieve movement ranges spanning thousands to tens of thousands of wavelengths and support a larger number of connected antennas on a waveguide. This technology allows + +Jian Xiao and Ji Wang are with the Department of Electronics and Information Engineering, College of Physical Science and Technology, Central China Normal University, Wuhan 430079, China (e-mail: jianx@mails.ccnu.edu.cn; jiwang@ccnu.edu.cn). + +Yuanwei Liu is with the Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong (e-mail: yuanwei@hku.hk). + +for the flexible positioning of PAs, enabling the establishment of adjustable and reliable LoS transceiver links in Pinching-Antenna SyStems (PASS), thereby effectively mitigating large-scale path loss. + +The theoretical exploration of PA technology is still in its early stages, yet it has already garnered significant attention from both academia and industry. The authors of [3] first provided a comprehensive performance analysis for PASS and proposed a non-orthogonal multiple access (NOMA)-enhanced PASS. Subsequently, related works have explored performance analysis [4], physical modeling and beamforming [5], and minimum rate maximization for PASS [6]. It should be emphasized that the existing works on PA optimization strategies hinge crucially on precise channel estimation. However, the channel estimation in PASS has not been investigated in-depth due to new challenges. Firstly, PASS introduces a fundamental challenge in channel estimation due to the highly coupled nature of the in-waveguide channel and wireless propagation channel between PAs and users. Each waveguide is fed by only one radio frequency (RF) chain, while it may carry multiple PAs. This setup necessitates the recovery of high-dimensional information from low-dimensional observations, resulting in an ill-conditioned underdetermined recovery problem. Secondly, the in-waveguide channel is deterministic and completely depends on the positions of PAs in the waveguide. Hence, no stacking of pilot signals can produce additional independent linear equations to solve the high-dimensional wireless propagation channel. Moreover, the dynamic nature of PAs with the plug-and-play capability complicates the channel acquisition. The activation locations of a large number of PAs can be adjusted in a specified region to improve user service. Therefore, the wireless propagation channel presents near-field spatial non-stationarity effects due to the large array aperture. + +To fill this research gap, this letter presents the first investigation into channel estimation for PASS. First and foremost, we extend classic linear channel estimators by adapting the antenna switching strategy for PASS, which provide fundamental channel estimation benchmarks. Furthermore, inspired by recent advances in deep learning (DL)-enabled channel estimation [7]-[9], we propose two efficient DL models to improve channel estimation accuracy while reducing pilot overhead. The first DL estimator, termed $PAMoE$ , is built on the mixture of experts (MoE) architecture, which integrates PA positions and pilot signal features through multi-expert mechanisms to adaptively model the dynamic channel distributions in PASS. 
To enhance the flexibility and scalability of the channel estimation model for dynamic PA counts, we further propose a Transformer-style estimator, termed $PAformer$ , which lever- + +![](images/1809a3d795957d4d22b8aaff957d2fc3fdf1e8341d4a39ed60030baef9bf4f24.jpg) +Fig. 1. Pinching antennas assisted multi-user systems. + +ages the self-attention mechanism to predict channel coefficients on a per-antenna basis. Numerical results demonstrate that the proposed DL estimators achieve superior channel estimation accuracy with significantly reduced pilot overhead compared to conventional channel estimators. In particular, both PAMoE and PAformer exhibit superior zero-shot learning capabilities on dynamic PA configurations without retraining operations. + +# II. SYSTEM MODEL AND PROBLEM FORMULATION + +As illustrated in Fig. 1, considering an uplink communication system that includes a base station (BS) equipped with $N$ PAs and $K$ single-antenna user equipments (UEs). Each PA has the same length $L$ on the waveguide. In a Cartesian system, the UEs are assumed to be randomly distributed within a rectangular region on the $x - y$ plane, with dimensions $D_{x}$ and $D_{y}$ . The position of the $k$ -th UE is represented by $\psi_{k} = (x_{k},y_{k},0)$ . Suppose the waveguide extends parallel to the $x$ -axis. Its height is denoted by $d$ , and its length aligns with the rectangular dimension $D_{x}$ . Hence, the coordinates of PA $n$ are given by $\psi_{n}^{\mathrm{P}} = (x_{n}^{\mathrm{P}},0,d)$ , where $x_{n}^{\mathrm{P}}$ lies in the interval $[L,D_x]$ . In this work, the discrete activation deployment of PAs is adopted, which simplifies the hardware design and is more practical than continuous activation [5]. The PAs can only be activated at specific discrete positions along the waveguide, forming the feasible set $\mathcal{S} = \left\{L + \frac{D_x - L}{Q - 1} (q - 1) \mid q = 1,2,\dots ,Q\right\}$ , where $Q$ denotes the number of discrete positions available. Since all $N$ PAs lie along the same waveguide, the transmitted signal of each PA is essentially a phase-shifted version of the signal from the BS at the waveguide feed point. The in-waveguide channel $\mathbf{g} \in \mathbb{C}^{N \times 1}$ can be expressed as + +$$ +\mathbf {g} = \left[ \alpha_ {1} e ^ {- j \frac {2 \pi}{\lambda_ {g}} \left| \psi_ {0} ^ {\mathrm {P}} - \psi_ {1} ^ {\mathrm {P}} \right|}, \dots , \alpha_ {N} e ^ {- j \frac {2 \pi}{\lambda_ {g}} \left| \psi_ {0} ^ {\mathrm {P}} - \psi_ {N} ^ {\mathrm {P}} \right|} \right] ^ {\top}, \tag {1} +$$ + +where $\psi_0^{\mathrm{P}}$ denotes the position of the waveguide's feed point. Parameter $\lambda_{g} = \frac{\lambda}{n_{e}}$ is the guided wavelength. Here, $\lambda$ is the wavelength in free space and $n_e$ is the effective refractive index of the dielectric waveguide. $\alpha_{n}$ is the factor that determines the ratio of power exchanged between the waveguide and PA $n$ , governed by the coupling length of the PA [5]. + +For the wireless propagation channel $\mathbf{h}_k\in \mathbb{C}^{N\times 1}$ between the $k$ -th UE and the PAs, $\mathbf{h}_k$ is composed of the LoS component $\mathbf{h}_k^{\mathrm{LoS}}$ and the non-line-of-sight (NLoS) component $\mathbf{h}_k^{\mathrm{NLoS}}$ , i.e., $\mathbf{h}_k = \Upsilon_k\odot \mathbf{h}_k^{\mathrm{LoS}} + \mathbf{h}_k^{\mathrm{NLoS}}$ . Here, the operator $\odot$ denotes the Hadamard product. 
$\Upsilon_{k} = [\upsilon_{k,n},\dots ,\upsilon_{k,N}]\in \mathbb{C}^{N\times 1}$ + +is a Bernoulli random vector and the variable $\nu_{k,n}$ takes values from the set $\{0,1\}$ , characterizing the existence of a LoS link between the $n$ -th PA and the $k$ -th UE1. The geometric spherical wavefront model-based LoS channel between the $k$ -th UE and the PAs is expressed as [3] + +$$ +\mathbf {h} _ {k} ^ {\mathrm {L o S}} = \left[ \frac {\sqrt {\eta} e ^ {- j \frac {2 \pi}{\lambda} | \psi_ {k} - \psi_ {1} ^ {\mathrm {P}} |}}{| \psi_ {k} - \psi_ {1} ^ {\mathrm {P}} |}, \dots , \frac {\sqrt {\eta} e ^ {- j \frac {2 \pi}{\lambda} | \psi_ {k} - \psi_ {N} ^ {\mathrm {P}} |}}{| \psi_ {k} - \psi_ {N} ^ {\mathrm {P}} |} \right] ^ {\top}, \tag {2} +$$ + +where $\eta = \frac{\lambda^2}{16\pi^2}$ denotes the path loss at a reference distance of $1\mathrm{m}$ . Considering $S$ scatterers in the $k$ -th UE $\rightarrow$ PAs link, the NLoS channel $\mathbf{h}_k^{\mathrm{NLoS}} \in \mathbb{C}^{N \times 1}$ is given by + +$$ +\mathbf {h} _ {k} ^ {\mathrm {N L o S}} = \sqrt {\frac {1}{S}} \sum_ {s = 1} ^ {S} \beta_ {k, s} \mathbf {a} _ {k, s} e ^ {j \eta_ {k, s}}, \tag {3} +$$ + +where $\beta_{k,s} \sim \mathcal{CN}(0, \sigma_s^2)$ and $\eta_{k,s} \sim \mathcal{U}[0, 2\pi]$ denote the complex gain and the random phase of the $s$ -th scatterer path, respectively. $\mathbf{a}_{k,s} \in \mathbb{C}^{N \times 1}$ represents the receiving array response at the PAs. Since the movable region of PAs is large, the equivalent array aperture of PA is likely to exceed the Rayleigh distance that is the criterion to determine the near-field boundary. Hence, the spherical wavefront is utilized to characterize array response $\mathbf{a}_{k,s}$ and is given by + +$$ +\mathbf {a} _ {k, s} = \left[ \frac {\sqrt {\eta} e ^ {- j 2 \pi d _ {s , 1} / \lambda}}{d _ {k , s} d _ {s , 1}}, \dots , \frac {\sqrt {\eta} e ^ {- j 2 \pi d _ {s , N} / \lambda}}{d _ {k , s} d _ {s , N}} \right] ^ {\top}, \tag {4} +$$ + +where $d_{k,s}$ and $d_{s,n},n\in \{1,\ldots ,N\}$ denote the distances from UE $k$ to scatterer $s$ and from scatterer $s$ to PA $n$ , respectively. + +The received signal at the BS during the $t$ -th slot can be expressed as + +$$ +y _ {t} = \sum_ {k = 1} ^ {K} \mathbf {g} ^ {\top} \mathbf {h} _ {k} s _ {k, t} + n _ {t}, \tag {5} +$$ + +where $s_{k,t}$ represents the symbol transmitted by the $k$ -th UE, and $n_t \sim CN(0, \sigma_n^2)$ is the additive white Gaussian noise. + +In PASS, the in-waveguide channel $\mathbf{g}$ can be regarded as the deterministic channel component that depends on the locations and the coupling lengths of $\mathrm{PAs}^2$ . Consequently, we merely need to estimate the wireless propagation channel $\mathbf{h}_k$ from the $k$ -th UE to the PAs. However, PASS introduces a fundamental challenge in channel estimation due to the highly coupled nature of $\mathbf{g}$ and $\mathbf{h}_k$ . Specifically, while each waveguide is connected to multiple PAs, the waveguide channel is fixed and cannot be reconfigured to perform diverse beam measurements as in conventional hybrid precoding. As a result, the system must infer a high-dimensional channel vector $\mathbf{h}_k$ from inherently low-dimensional pilot observations, typically just one + +1In the existing works for PASS, the free space channel model between UEs and the PAs is commonly utilized by assuming the flexible PAs close to UEs [3]–[6]. 
However, in the practical wireless propagation environment, the NLoS channel component caused by the statical and dynamic scatters should be considered. In particular, all PAs are hard to provide the full coverage of LoS services for all UEs in practical deployment environment. + +2This work employs the electronic activation of pre-positioned PAs along the waveguide. The system controller only needs to determine the activation state to ascertain the exact positions of PAs. The reconfiguration of the activated PA set is usually several orders of magnitude faster than the channel coherence time, enabling it to respond efficiently to rapid channel variations. + +scalar per waveguide per measurement. To elaborate, suppose the widely used orthogonal pilot transmission strategy, e.g., time-division, is adopted, and $T$ denotes the number of pilot transmission slots transmitted by UE $k$ . Note that even if the UE transmits $T \geq N$ or more pilot slots, a single waveguide output may provide no more than one linearly independent measurement per slot. That is, each pilot symbol experiences the same fixed merging of $N$ PAs in the waveguide, yielding only repeated versions of the same scalar. Hence, the classic error-criterion-based algorithms, e.g., least square (LS) or linear minimum mean square error (LMMSE) estimators, are difficult to apply directly for channel estimation in $\mathrm{PASS}^3$ . + +# III. DEEP LEARNING BASED CHANNEL ESTIMATION + +In this section, to improve the channel estimation performance, we leverage DL models to develop efficient channel estimation schemes for PASS. However, two critical challenges emerge when applying the DL approach to PASS. Firstly, the channel characteristics vary dramatically as PAs freely change positions within designated regions, which requires the efficient network architecture with sufficient capacity to learn the extended channel state space. Secondly, by dynamically activating candidate PAs along the waveguide in response to real-time communication demands, the system introduces time-varying channel dimension and heterogeneous signal distribution. The proposed DL-based channel estimator necessitates an innovative architecture capable of adaptively handling spatio-temporal variations in channel parameters, thereby fostering advancements in neural network design to enable robust estimation under non-stationary conditions. + +# A. Dataset Construction + +In the offline training stage, we collect $N_{s}$ paired samples, i.e., the pilot observation vector $\widetilde{\mathbf{y}}_k\in \mathbb{C}^{T\times 1}$ at the BS, the position set $\Psi^{\mathrm{P}} = [\psi_n^1,\dots ,\psi_N^{\mathrm{P}}] = [(x_1^{\mathrm{P}},0,d),\dots ,(x_N^{\mathrm{P}},0,d)]$ of PAs, and the corresponding channel sample $\mathbf{h}_k$ . In the proposed channel estimation network, the input tensor is designed as the set of both PA positions and the pilot observation. Considering the waveguide is deployed parallel to the $x$ -axis, we merely need the $x$ -axis coordinate set $\mathbf{x}^{\mathrm{P}} = [x_1^{\mathrm{P}},\ldots ,x_N^{\mathrm{P}}]^T\in \mathbb{R}^{N\times 1}$ of PAs as the input feature. To enable the neural network to adaptively estimate channels across varying PA configurations, we utilize the in-waveguide channel $\mathbf{g}$ to transform the observation $\widetilde{\mathbf{y}}_k\in \mathbb{C}^{T\times 1}$ into $\widetilde{\mathbf{Y}}_k = \mathbf{g}\widetilde{\mathbf{y}}_k^\top \in \mathbb{C}^{N\times T}$ . 
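As a minimal illustration of this lifting step, the following NumPy sketch (our own example with placeholder sizes and randomly generated $\mathbf{g}$ and $\widetilde{\mathbf{y}}_k$, not the authors' code) builds the real-valued input from a length-$T$ pilot observation:

```python
import numpy as np

# Illustrative sketch (not from the paper): lift the T pilot observations onto the
# N-PA dimension using the known in-waveguide channel g, then stack real and
# imaginary parts to obtain the real-valued network input.
N, T = 16, 8                                            # example sizes only
g = np.exp(-1j * 2 * np.pi * np.random.rand(N))         # placeholder in-waveguide channel
y_tilde = (np.random.randn(T) + 1j * np.random.randn(T)) / np.sqrt(2)   # placeholder pilots

Y_tilde = np.outer(g, y_tilde)                          # \tilde{Y}_k = g \tilde{y}_k^T, shape (N, T)
Y_bar = np.concatenate([Y_tilde.real, Y_tilde.imag], axis=1)            # shape (N, 2T)
print(Y_bar.shape)                                      # (16, 16)
```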
This transformation aligns the input dimension of neural network with the current number of PAs $N$ in PASS, which facilitates the subsequent channel estimation network design. The complex-value matrix $\widetilde{\mathbf{Y}}_k$ is converted into the real-value tensor $\overline{\mathbf{Y}}_k = \{\Re (\widetilde{\mathbf{Y}}_k),\Im (\widetilde{\mathbf{Y}}_k)\} \in \mathbb{R}^{N\times 2T}$ for neural network processing. Accordingly, the label tensor in the network training is $\bar{\mathbf{H}}_k = \{\Re (\mathbf{h}_k),\Im (\mathbf{h}_k)\} \in \mathbb{R}^{N\times 2}$ . + +3To collect independent pilot measurements for each PA, as required by LS and LMMSE, one feasible approach is to introduce an antenna switching matrix that selects subsets of PAs in different time slots to realize the signal separation. In Section IV of this letter, we provide antenna switching matrix-based LS and LMMSE estimators as channel estimation benchmarks. Note that switching each PA in different slots introduces hardware overhead, extended measurement time, and potential switch losses, all of which reduce practicality. + +![](images/241e3ab1863fe20fbdb6cad0c762ddc9234ff318ed7416f9440786b3f82e7423.jpg) +Fig. 2. Proposed PAMoE model for channel estimation. + +# B. PAMoE: Mixture of Experts-Based Channel Estimation + +We first propose a PAMoE estimator as illustrated in Fig. 2 to address high-dimensional dynamic channel estimation from low-dimensional received pilots, which incorporates the dynamic padding, feature embedding, fusion, and position-aware MoE modules [10]. + +1) Dynamic Padding: To accommodate variable PA counts $N$ and handle variable-length inputs, $PAMoE$ employs dynamic padding up to a maximum $N_{\mathrm{max}}$ . Let $B$ denote the batch size in the network training stage. The input tensor in a training batch can be expressed as PA positions $\mathbf{P} \in \mathbb{R}^{B \times N \times 1}$ and pilot signals $\mathbf{S} \in \mathbb{R}^{B \times N \times 2T}$ , which are the batch version of $\mathbf{x}^{\mathrm{P}}$ and $\overline{\mathbf{Y}}_k$ , respectively. If $N < N_{\mathrm{max}}$ , the network pads along the PA dimension so that both $\mathbf{P}$ and $\mathbf{S}$ become length $N_{\mathrm{max}}$ , i.e., $\mathbf{P}' = [\mathbf{P}, \phi^1] \in \mathbb{R}^{B \times N_{\mathrm{max}} \times 1}$ , $\mathbf{S}' = [\mathbf{S}, \phi^2] \in \mathbb{R}^{B \times N_{\mathrm{max}} \times d_{\mathrm{sig}}}$ , where $\phi^1$ and $\phi^2 \in \mathbb{R}^{N_{\mathrm{pad}} \times 1}(N_{\mathrm{pad}} = N_{\mathrm{max}} - N)$ denote the learnable padding embeddings, respectively. + +2) Feature Embedding: To handle continuous spatial information and achieve efficient extrapolation of PAs, the Fourier basis function is employed, developing a Fourier positional embedding approach. Suppose a set of exponentially increasing frequency bases $\mathbf{f} = \left[2^{f}\pi \right]_{f = 0}^{F - 1}\in \mathbb{R}^{F}$ is defined, where $F$ is the number of frequency components. The scaled position encodings can be expressed as + +$$ +\boldsymbol {\Theta} = \mathbf {S} ^ {\prime} \odot \mathbf {f} \in \mathbb {R} ^ {B \times N _ {\max } \times F}. \tag {6} +$$ + +Then, we apply sinusoidal transformations to obtain sine and cosine features, and then are concatenated as + +$$ +\mathbf {E} = \operatorname {C o n c a t} \left[ \sin (\boldsymbol {\Theta}), \cos (\boldsymbol {\Theta}) \right] \in \mathbb {R} ^ {B \times N _ {\max } \times 2 F}. 
\tag {7} +$$ + +Further, $\mathbf{E}$ is projected into the embedding space $\mathbf{Z}_{\mathrm{pos}} = \mathbf{EW}_{\mathrm{pos}} + \mathbf{b}_{\mathrm{pos}} \in \mathbb{R}^{B \times N_{\mathrm{max}} \times d_{\mathrm{embed}}}$ , where $\mathbf{W}_{\mathrm{pos}} \in \mathbb{R}^{2F \times d_{\mathrm{embed}}}$ is the learnable weight matrix and $\mathbf{b}_{\mathrm{pos}} \in \mathbb{R}^{d_{\mathrm{embed}}}$ is the bias term. This Fourier embedding effectively captures multi-scale positional variations in the input space. The pilot signal data is embedded to the same hidden dimension $d_{\mathrm{hid}}$ by the multilayer perceptron (MLP)-based linear mapping module $\phi_{\mathrm{sig}}$ , i.e., $\mathbf{Z}_{\mathrm{sig}} = \phi_{\mathrm{sig}}(\mathbf{S}') \in \mathbb{R}^{B \times N_{\mathrm{max}} \times d_{\mathrm{hid}}}$ . + +3) Feature Fusion: To fuse positional features with pilot signal features, we employ a gating function to generate a gate from a control feature based on PA positions and apply it to the target feature based on pilot signal. The gating operation is formulated as + +$$ +\mathbf {G} = \sigma \left(\mathbf {W} _ {g} \mathbf {Z} _ {\text {p o s}} + \mathbf {b} _ {g}\right) \in \mathbb {R} ^ {B \times N _ {\max } \times d _ {\text {h i d}}}, \tag {8} +$$ + +![](images/a8f19f86ad289a59e60b7ffa3eb8b3b7b38f5339d4b348c9f7164601efdf43a2.jpg) +Fig. 3. Proposed PAformer model for channel estimation. + +$$ +\mathbf {Z} _ {\text {f u s e d}} = \mathbf {G} \odot \mathbf {Z} _ {\text {s i g}} + \mathbf {Z} _ {\text {p o s}} \in \mathbb {R} ^ {B \times N _ {\max } \times d _ {\text {h i d}}}, \tag {9} +$$ + +where $\mathbf{W}_g$ and $\mathbf{b}_g$ denote the weight and bias of a linear layer with $d_{\mathrm{hid}}$ neurons, respectively, and $\sigma (\cdot)$ is a Sigmoid activation function. + +4) MoE With Gating Network: Suppose there are $E$ experts and each expert takes $\mathbf{Z}_{\mathrm{fused}}$ and processes it with an MLP-Mixer block across both the feature dimension $d_{\mathrm{hid}}$ and the spatial dimension $N_{\mathrm{max}}$ . Let $\mathbf{Z}_e = E_e(\mathbf{Z}_{\mathrm{fused}}) \in \mathbb{R}^{B \times N_{\mathrm{max}} \times d_{\mathrm{hid}}}$ denote the output of expert $e$ , $(e = 1, \dots, E)$ . A gating network is employed to weight each expert's output. Typically, it pools $\mathbf{Z}_{\mathrm{fused}}$ over the PA dimension to get a global context $\mathbf{z}_{\mathrm{pool}} \in \mathbb{R}^{B \times d_{\mathrm{hid}}}$ , and then applies the softmax activation function to produce gating weights, which can be expressed as + +$$ +\boldsymbol {\alpha} = \operatorname {s o f t m a x} \left(\mathbf {W} _ {\alpha} \mathbf {z} _ {\text {p o o l}} + \mathbf {b} _ {\alpha}\right) \in \mathbb {R} ^ {B \times E}, \tag {10} +$$ + +where $\sum_{e=1}^{E} \alpha_{b,e} = 1$ , $\forall b \in \{1, \dots, B\}$ , $\mathbf{W}_{\alpha}$ and $\mathbf{b}_{\alpha}$ are the weight and bias of a linear layer with $E$ neurons, respectively. + +Given the output $\mathbf{Z}_e$ of each expert and the gating weights $\alpha$ , we form a weighted sum over experts. 
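As a concrete but deliberately simplified illustration, the following PyTorch-style sketch realizes the pooled gating of (10) and the weighted expert combination; the plain per-token MLP experts stand in for the MLP-Mixer experts, and all module and variable names are our own rather than the paper's:

```python
import torch
import torch.nn as nn

class GatedMoE(nn.Module):
    """Illustrative sketch of the pooled gating network and the weighted
    expert combination; this is not the authors' implementation."""
    def __init__(self, d_hid: int, num_experts: int):
        super().__init__()
        # Simple per-token MLP experts; the paper uses MLP-Mixer blocks instead.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_hid, d_hid), nn.GELU(), nn.Linear(d_hid, d_hid))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(d_hid, num_experts)

    def forward(self, z_fused: torch.Tensor) -> torch.Tensor:   # z_fused: (B, N_max, d_hid)
        z_pool = z_fused.mean(dim=1)                             # pool over the PA dimension -> (B, d_hid)
        alpha = torch.softmax(self.gate(z_pool), dim=-1)         # gating weights, rows sum to one -> (B, E)
        z_exp = torch.stack([expert(z_fused) for expert in self.experts], dim=-1)  # (B, N_max, d_hid, E)
        return (z_exp * alpha[:, None, None, :]).sum(dim=-1)     # weighted sum over experts -> (B, N_max, d_hid)

z_moe = GatedMoE(d_hid=64, num_experts=4)(torch.randn(256, 32, 64))  # B=256, N_max=32, E=4 as in Sec. IV
print(z_moe.shape)                                                    # torch.Size([256, 32, 64])
```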
4) MoE With Gating Network: Suppose there are $E$ experts, each of which takes $\mathbf{Z}_{\mathrm{fused}}$ and processes it with an MLP-Mixer block across both the feature dimension $d_{\mathrm{hid}}$ and the spatial dimension $N_{\mathrm{max}}$. Let $\mathbf{Z}_e = E_e(\mathbf{Z}_{\mathrm{fused}}) \in \mathbb{R}^{B \times N_{\mathrm{max}} \times d_{\mathrm{hid}}}$ denote the output of expert $e$, $e = 1, \dots, E$. A gating network is employed to weight each expert's output. It pools $\mathbf{Z}_{\mathrm{fused}}$ over the PA dimension to obtain a global context $\mathbf{z}_{\mathrm{pool}} \in \mathbb{R}^{B \times d_{\mathrm{hid}}}$, and then applies the softmax activation function to produce the gating weights

$$
\boldsymbol{\alpha} = \operatorname{softmax}\left(\mathbf{W}_{\alpha} \mathbf{z}_{\mathrm{pool}} + \mathbf{b}_{\alpha}\right) \in \mathbb{R}^{B \times E}, \tag{10}
$$

where $\sum_{e=1}^{E} \alpha_{b,e} = 1$, $\forall b \in \{1, \dots, B\}$, and $\mathbf{W}_{\alpha}$ and $\mathbf{b}_{\alpha}$ are the weight and bias of a linear layer with $E$ neurons, respectively.

Given the output $\mathbf{Z}_e$ of each expert and the gating weights $\boldsymbol{\alpha}$, we form a weighted sum over experts. Let $\boldsymbol{\alpha}$ be reshaped to $\mathbb{R}^{B\times 1\times E}$ so that it can broadcast over the $N_{\mathrm{max}}$ dimension, which yields the MoE output

$$
\mathbf{Z}_{\mathrm{MoE}} = \sum_{e = 1}^{E} \boldsymbol{\alpha}_{e} \mathbf{Z}_{e} \in \mathbb{R}^{B \times N_{\max} \times d_{\mathrm{hid}}}. \tag{11}
$$

Finally, the network concatenates the positional feature $\mathbf{Z}_{\mathrm{pos}}$ and $\mathbf{Z}_{\mathrm{MoE}}$ along the last dimension, i.e., $\mathbf{Z}_{\mathrm{concat}} \in \mathbb{R}^{B \times N_{\mathrm{max}} \times 2d_{\mathrm{hid}}}$. Then, a linear layer maps $\mathbf{Z}_{\mathrm{concat}}$ to the estimated channel $\widehat{\mathbf{H}} \in \mathbb{R}^{B \times N \times 2}$. In $PAMoE$, the dynamic padding pattern requires the network to fix a predefined maximum number of PAs $N_{\mathrm{max}}$ in the training stage, which becomes a limiting factor if the number of deployed PAs exceeds $N_{\mathrm{max}}$.
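The gated expert combination in (10)-(11) can be sketched as follows (a minimal PyTorch illustration; the experts are simplified to per-token MLPs rather than the MLP-Mixer blocks used in PAMoE, and the class name `PositionAwareMoE` is hypothetical):

```python
import torch
import torch.nn as nn

class PositionAwareMoE(nn.Module):
    """Sketch of the gated mixture-of-experts combination (10)-(11)."""

    def __init__(self, d_hid: int = 64, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_hid, d_hid), nn.GELU(), nn.Linear(d_hid, d_hid))
            for _ in range(num_experts)
        ])
        self.gating = nn.Linear(d_hid, num_experts)          # W_alpha, b_alpha

    def forward(self, z_fused: torch.Tensor) -> torch.Tensor:
        # z_fused: (B, N_max, d_hid) fused position/pilot features.
        z_pool = z_fused.mean(dim=1)                         # global context (B, d_hid)
        alpha = torch.softmax(self.gating(z_pool), dim=-1)   # (B, E), Eq. (10)
        z_experts = torch.stack([e(z_fused) for e in self.experts], dim=-1)  # (B, N_max, d_hid, E)
        # Broadcast the gating weights over the PA and feature dimensions, Eq. (11).
        return (z_experts * alpha[:, None, None, :]).sum(dim=-1)
```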
# C. PAformer: Transformer-Based Channel Estimation

![](images/a8f19f86ad289a59e60b7ffa3eb8b3b7b38f5339d4b348c9f7164601efdf43a2.jpg)
Fig. 3. Proposed PAformer model for channel estimation.

We further propose a PAformer estimator, as illustrated in Fig. 3, to address the scalability limitation of $PAMoE$, which is inherently constrained by the fixed $N_{\mathrm{max}}$ during training. PAformer incorporates self-attention layers with permutation equivariance, enabling the network to dynamically accommodate an arbitrary number $N$ of PAs. The proposed PAformer predicts channel coefficients in a per-antenna manner, offering flexibility when the test data contains more antennas than were seen in the training stage.

1) Input Embedding: In PAformer, we first concatenate the position and pilot signal features along their last dimension, i.e., $\mathbf{V} = \left[\mathbf{P},\mathbf{S}\right]\in \mathbb{R}^{B\times N\times (2T + 1)}$. Next, $\mathbf{V}$ is mapped into a hidden representation $\mathbf{Z}^{(0)}$ via an MLP-based embedding network $\phi$, i.e., $\mathbf{Z}^{(0)} = \phi (\mathbf{V})\in \mathbb{R}^{B\times N\times d_{\mathrm{hid}}}$.

2) Transformer Encoder: PAformer employs a pre-norm Transformer with $P$ stacked self-attention blocks (SABs), each comprising multi-head self-attention (MHA) and a feed-forward network (FFN), both with residual connections and layer normalization. Let $\mathbf{Z}^{(\ell -1)}$ denote the input to the $\ell$-th SAB, and let $\overline{\mathbf{Z}}^{(\ell -1)} = \mathrm{LayerNorm}(\mathbf{Z}^{(\ell -1)})$ be the pre-normalized feature representation. The output of the MHA module is $\mathbf{A}^{(\ell)} = \mathrm{MHA}(\mathbf{Q},\mathbf{K},\mathbf{V})\in \mathbb{R}^{B\times N\times d_{\mathrm{hid}}}$, where $\mathbf{Q}$, $\mathbf{K}$, and $\mathbf{V}$ denote the query, key, and value tokens of $\overline{\mathbf{Z}}^{(\ell -1)}$ [11], respectively. The result is combined via a residual connection $\overline{\mathbf{Z}}^{\prime (\ell)} = \overline{\mathbf{Z}}^{(\ell -1)} + \mathbf{A}^{(\ell)}$. Then, the FFN is applied to obtain $\mathbf{F}^{(\ell)} = \mathrm{FFN}(\overline{\mathbf{Z}}^{\prime (\ell)})$, where the FFN is a point-wise MLP operating on each PA token separately. Another residual connection completes this sub-layer, i.e., $\mathbf{Z}^{(\ell)} = \overline{\mathbf{Z}}^{\prime (\ell)} + \mathbf{F}^{(\ell)}$. Applying the $P$ blocks in sequence, i.e., $\mathbf{Z}^{(\ell)} = \mathrm{SAB}^{(\ell)}\big(\mathbf{Z}^{(\ell -1)}\big)$ for $\ell = 1,\dots ,P$ starting from $\mathbf{Z}^{(0)}$, yields the final feature representation $\mathbf{Z}^{(P)}\in \mathbb{R}^{B\times N\times d_{\mathrm{hid}}}$. The final step maps each antenna-wise feature vector in $\mathbf{Z}^{(P)}$ to the estimated channel $\widehat{\mathbf{H}}_i$, where $i = 1,\ldots ,N$ indexes the PAs.
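For intuition, a condensed PyTorch sketch of a PAformer-style estimator is given below. It is our illustration rather than the exact architecture above: it reuses the library's built-in pre-norm encoder layer in place of the SAB described in the text, and the class name `PAformerSketch` and its hyper-parameters are assumptions. Because no fixed $N_{\mathrm{max}}$ appears anywhere, the same trained model accepts an arbitrary number of PA tokens at test time.

```python
import torch
import torch.nn as nn

class PAformerSketch(nn.Module):
    """Simplified per-antenna Transformer estimator in the spirit of PAformer."""

    def __init__(self, pilot_dim: int, d_hid: int = 64, num_blocks: int = 4, num_heads: int = 4):
        super().__init__()
        # MLP embedding of the concatenated [position, pilot] token of each PA.
        self.embed = nn.Sequential(nn.Linear(pilot_dim + 1, d_hid), nn.ReLU(),
                                   nn.Linear(d_hid, d_hid))
        block = nn.TransformerEncoderLayer(d_model=d_hid, nhead=num_heads,
                                           dim_feedforward=4 * d_hid,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=num_blocks)
        self.head = nn.Linear(d_hid, 2)          # per-antenna (real, imag) output

    def forward(self, pos: torch.Tensor, sig: torch.Tensor) -> torch.Tensor:
        # pos: (B, N, 1) PA positions; sig: (B, N, 2T) pilot features; N may vary.
        tokens = self.embed(torch.cat([pos, sig], dim=-1))   # (B, N, d_hid)
        return self.head(self.encoder(tokens))               # (B, N, 2)

# Example: a model trained with N = 16 tokens can be evaluated with N = 24 tokens.
# model = PAformerSketch(pilot_dim=8)
# h_hat = model(torch.randn(1, 24, 1), torch.randn(1, 24, 8))
```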
For both proposed DL estimators, the trained network can be applied during the test stage to scenarios with dynamic numbers of PAs, even though the test dataset has a different data distribution and dimensionality compared to the training dataset. This property of the channel estimation network can also be termed zero-shot learning.

# IV. NUMERICAL RESULTS

In the simulation setup, we set $Q = 200$, $K = 4$, $n_e = 1.4$, $d = 5$, $D_x \times D_y = 20 \times 20 \mathrm{~m}^2$, $S = 6$, and the carrier frequency $f_c = 28 \mathrm{GHz}$. In the training dataset construction, we collect $N_s = 10^5$ training samples with a fixed number of PAs $N^{\mathrm{tr}} = 16$, while the test number of PAs is $N^{\mathrm{te}} \in \{8, \dots, 32\}$. In the hyper-parameter setup of the proposed DL estimators, we set $N_{\mathrm{max}} = 32$, $E = 4$, $d_{\mathrm{hid}} = 64$, $P = 4$, and $B = 256$. The $\ell_1$ norm is used as the loss function in the network training, i.e., $\ell_1 = ||\widehat{\mathbf{H}}_k - \overline{\mathbf{H}}_k||_1$, while the normalized mean squared error (NMSE) is employed as the performance metric, i.e., $\mathrm{NMSE} = \mathbb{E}\{||\widehat{\mathbf{H}}_k - \overline{\mathbf{H}}_k||_F^2 / ||\overline{\mathbf{H}}_k||_F^2\}$. The antenna switching-based LS and LMMSE estimators are used as the conventional channel estimation benchmarks. Furthermore, to establish relevant benchmarks incorporating state-of-the-art DL estimators, we develop two comparison DL benchmarks drawing inspiration from recent attention-based channel estimation networks, i.e., SA-RN-CE [8] and DACEN [9], which are created by replacing the MoE module in PAMoE with spatial attention (SA) and dual attention (DA) mechanisms, respectively.

![](images/693e462a34cadef9c1cfda5dfe90c288c6651c3f415570ed7ba8c78a5da6867a.jpg)
Fig. 4. NMSE vs. SNR for different algorithms.

![](images/ceecef8ee6c03c7cbdd8591e5cd323baeb805da3fe491adbccdc3edcb814fd50.jpg)
Fig. 5. NMSE vs. number of PAs $N$ for different algorithms.

TABLE I: COMPLEXITY ANALYSIS OF PROPOSED DL ESTIMATORS

| Estimator | $N$ | Params (K) | FLOPs (M) | CPU runtime (μs) | GPU runtime (μs) |
| --- | --- | --- | --- | --- | --- |
| PAMoE | 8 / 16 / 32 | 211.9 | 13.18 | 211.8 | 12.73 |
| PAformer | 8 | 545.9 | 4.377 | 74.62 | 3.814 |
| PAformer | 16 | 545.9 | 8.756 | 128.2 | 7.943 |
| PAformer | 32 | 545.9 | 17.51 | 226.8 | 12.95 |

In Fig. 4, we present the NMSE performance of different channel estimation schemes with $N^{\mathrm{te}} = 32$. The proposed DL estimators with reduced pilot overhead $T$ outperform
+ +the existing linear estimators and DL models. The superior accuracy of $PAMoE$ compared to $PA\text{former}$ primarily stems from $PAMoE$ 's more specialized design, particularly in its handling and exploitation of the pinching antenna positional information. $PA\text{former}$ , while robust and scalable due to its Transformer foundation, operates with a more general mechanism to support variable PA counts. In Fig. 5, we provide NMSE performance of different channel estimation schemes for varying numbers of PAs, where the signal-to-noise ratio (SNR) is set to 0 dB. Compared to SA-RN-CE and DACEN, the proposed $PAMoE$ and $PA\text{former}$ trained by the fixed PA configuration exhibit excellent robustness and generalization for dynamic PAs. The proposed DL estimators possess sufficient zero-shot learning capabilities to deal with distinct data distribution in the test stage. Table I summarizes the number of trainable parameters (Params), floating point + +operations (FLOPs) and inference runtime of the proposed $PAMoE$ and $PA\text{former}$ , where the NVIDIA RTX 3090 GPU and the 12th Gen Intel(R) Core(TM) i9- 12900K CPU are used as the inference platform. The average inference runtime per channel estimation instance was calculated by averaging over 1000 Monte Carlo experiments. Firstly, for different numbers of PAs $N$ , the Params of the proposed $PAMoE$ and $PA\text{former}$ are the same due to the fixed network architecture. Secondly, the FLOPs of $PA\text{former}$ increase linearly with the number of PAs $N$ , as the higher-dimensional input tensor needs to be processed in the SAB of $PA\text{former}$ . In the $PAMoE$ model, the input tensor is uniformly padded to the same $N_{\max}$ via padding interpolation, and hence the FLOPs of the $PAMoE$ remain constant for different $N$ . Finally, we observe that both $PAMoE$ and $PA\text{former}$ can achieve the channel estimation at the microseconds ( $\mu s$ ) level, demonstrating the practical feasibility of the proposed models for real-time deployment. + +# V. CONCLUSIONS + +In this letter, we investigated the channel estimation approaches in PASS, and proposed two DL estimators with the advanced neural network architecture to infer a high-dimensional channel vector from inherently low-dimensional pilot observations, respectively. Specifically, the proposed PAMoE accommodates variable PA configurations and exploits multi-expert diversity in the MLP-Mixer for improved channel estimation. Accordingly, the proposed PAFomer is capable of handling arbitrary number of PAs thanks to the self-attention mechanism. Numerical results demonstrated that the proposed DL estimators outperform conventional methods, and significantly reduce the pilot overhead. + +# REFERENCES + +[1] X. Shao, Q. Jiang, and R. Zhang, "6D movable antenna based on user distribution: Modeling and optimization," IEEE Trans. Wireless Commun., vol. 24, no. 1, pp. 355-370, Jan. 2025. +[2] A. Fukuda, H. Yamamoto, H. Okazaki, Y. Suzuki, and K. Kawai, "Pinching antenna: Using a dielectric waveguide as an antenna," NTT DOCOMO Technical J., vol. 23, no. 3, pp. 5-12, Jan. 2022. +[3] Z. Ding, R. Schober, and H. Vincent Poor, “Flexible-antenna systems: A pinching-antenna perspective,” IEEE Trans. Commun., 2025. +[4] D. Tyrovolas, S. A. Tegos, P. D. Diamantoulakis, S. Ioannidis, C. K. Liaskos, and G. K. Karagiannidis, “Performance analysis of pinching-antenna systems,” IEEE Trans. Cognit. Commun. Networking, 2025. +[5] Z. Wang, C. Ouyang, X. Mu, Y. Liu, and Z. 
Ding, "Modeling and beamforming optimization for pinching-antenna systems," arXiv preprint arXiv:2502.05917, 2025. +[6] S. A. Tegos, P. D. Diamantoulakis, Z. Ding, and G. K. Karagiannidis, "Minimum data rate maximization for uplink pinching-antenna systems," IEEE Wireless Commun. Lett., 2025. +[7] F. Rezaei, S. M. Marvasti-Zadeh, C. Tellambura, and A. Maaref, "Adversarial score-based generative models for MMSE-achieving AmBC channel estimation," IEEE Wireless Commun. Lett., vol. 13, no. 4, pp. 1053-1057, Apr. 2024. +[8] X. Fan, Y. Zou, and L. Zhai, "Spatial-attention-based channel estimation in IRS-assisted mmWave MU-MISO systems," IEEE Internet Things J., vol. 11, no. 6, pp. 9801-9813, Mar. 2024. +[9] B. Zhou, X. Yang, S. Ma, F. Gao, and G. Yang, "Pay less but get more: A dual-attention-based channel estimation network for massive MIMO systems with low-density pilots," IEEE Trans. Wireless Commun., vol. 23, no. 6, pp. 6061-6076, Jun. 2024. +[10] W. Cai, J. Jiang, F. Wang, J. Tang, S. Kim, and J. Huang, “A survey on mixture of experts,” arXiv preprint arXiv:2407.06204, 2024. +[11] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in Proc. NeurIPS, vol. 30, Nov. 2017. \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13268/images.zip b/data/2025/2503_13xxx/2503.13268/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..339cd05fda0cbc58354d6249b6e76cba400ea296 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13268/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:745f6751708c039e10059419ff108897dbdde37b9fc1b124093db9a75f9592f7 +size 197989 diff --git a/data/2025/2503_13xxx/2503.13268/layout.json b/data/2025/2503_13xxx/2503.13268/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..9c9a9409be95c7e7df4a2fdb605144e279d35aa6 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13268/layout.json @@ -0,0 +1,7560 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 99, + 55, + 512, + 111 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 99, + 55, + 512, + 111 + ], + "spans": [ + { + "bbox": [ + 99, + 55, + 512, + 111 + ], + "type": "text", + "content": "Channel Estimation for Pinching-Antenna Systems (PASS)" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 132, + 117, + 478, + 130 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 117, + 478, + 130 + ], + "spans": [ + { + "bbox": [ + 132, + 117, + 478, + 130 + ], + "type": "text", + "content": "Jian Xiao, Ji Wang, Senior Member, IEEE, and Yuanwei Liu, Fellow, IEEE" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 45, + 175, + 301, + 386 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 175, + 301, + 386 + ], + "spans": [ + { + "bbox": [ + 45, + 175, + 301, + 386 + ], + "type": "text", + "content": "Abstract—Pinching antennas (PAs) represent a revolutionary flexible antenna technology that leverages dielectric waveguides and electromagnetic coupling to mitigate large-scale path loss. This letter is the first to explore channel estimation for Pinching-Antenna SyStems (PASS), addressing their uniquely ill-conditioned and underdetermined channel characteristics. In particular, two efficient deep learning-based channel estimators are proposed. 
1) PAMoE: This estimator incorporates dynamic padding, feature embedding, fusion, and mixture of experts (MoE) modules, which effectively leverage the positional information of PAs and exploit expert diversity. 2) PAformer: This Transformer-style estimator employs the self-attention mechanism to predict channel coefficients in a per-antenna manner, which offers more flexibility to adaptively deal with dynamic numbers of PAs in practical deployment. Numerical results demonstrate that 1) the proposed deep learning-based channel estimators outperform conventional methods and exhibit excellent zero-shot learning capabilities, and 2) PAMoE delivers higher channel estimation accuracy via MoE specialization, while PAformer natively handles an arbitrary number of PAs, trading self-attention complexity for superior scalability." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 45, + 390, + 301, + 411 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 390, + 301, + 411 + ], + "spans": [ + { + "bbox": [ + 45, + 390, + 301, + 411 + ], + "type": "text", + "content": "Index Terms—Channel estimation, mixture of experts, pinching antenna, Transformer." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 132, + 429, + 215, + 440 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 429, + 215, + 440 + ], + "spans": [ + { + "bbox": [ + 132, + 429, + 215, + 440 + ], + "type": "text", + "content": "I. INTRODUCTION" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 45, + 444, + 301, + 685 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 444, + 301, + 685 + ], + "spans": [ + { + "bbox": [ + 45, + 444, + 301, + 685 + ], + "type": "text", + "content": "ADVANCED multiple-input multiple-output (MIMO) is a critical enabling technology for sixth-generation (6G) wireless networks, targeting ultra-high-speed data transmission and seamless connectivity. Among emerging MIMO technologies, flexible antenna systems, e.g., fluid and movable antennas, offer dynamic channel reconfiguration capabilities by leveraging its positional or rotational adjustability to adapt to the evolving spatial distribution of users [1]. However, they still struggle with addressing large-scale path loss and the need for line-of-sight (LoS) links, both of which are essential for high-quality communication. To overcome these challenges, pinching antennas (PAs) offer a novel approach to creating controlled radiation sites. PAs were initially introduced and developed into a prototype by NTT DOCOMO in 2022 [2]. The fundamental concept of PAs relies on the transfer of electromagnetic (EM) waves from a dielectric waveguide to a nearby dielectric material. Compared to typical movable antennas, PAs can achieve movement ranges spanning thousands to tens of thousands of wavelengths and support a larger number of connected antennas on a waveguide. This technology allows" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 45, + 693, + 299, + 729 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 693, + 299, + 729 + ], + "spans": [ + { + "bbox": [ + 45, + 693, + 299, + 729 + ], + "type": "text", + "content": "Jian Xiao and Ji Wang are with the Department of Electronics and Information Engineering, College of Physical Science and Technology, Central China Normal University, Wuhan 430079, China (e-mail: jianx@mails.ccnu.edu.cn; jiwang@ccnu.edu.cn)." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 45, + 729, + 299, + 749 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 729, + 299, + 749 + ], + "spans": [ + { + "bbox": [ + 45, + 729, + 299, + 749 + ], + "type": "text", + "content": "Yuanwei Liu is with the Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong (e-mail: yuanwei@hku.hk)." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 307, + 174, + 564, + 222 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 174, + 564, + 222 + ], + "spans": [ + { + "bbox": [ + 307, + 174, + 564, + 222 + ], + "type": "text", + "content": "for the flexible positioning of PAs, enabling the establishment of adjustable and reliable LoS transceiver links in Pinching-Antenna SyStems (PASS), thereby effectively mitigating large-scale path loss." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 307, + 222, + 564, + 569 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 222, + 564, + 569 + ], + "spans": [ + { + "bbox": [ + 307, + 222, + 564, + 569 + ], + "type": "text", + "content": "The theoretical exploration of PA technology is still in its early stages, yet it has already garnered significant attention from both academia and industry. The authors of [3] first provided a comprehensive performance analysis for PASS and proposed a non-orthogonal multiple access (NOMA)-enhanced PASS. Subsequently, related works have explored performance analysis [4], physical modeling and beamforming [5], and minimum rate maximization for PASS [6]. It should be emphasized that the existing works on PA optimization strategies hinge crucially on precise channel estimation. However, the channel estimation in PASS has not been investigated in-depth due to new challenges. Firstly, PASS introduces a fundamental challenge in channel estimation due to the highly coupled nature of the in-waveguide channel and wireless propagation channel between PAs and users. Each waveguide is fed by only one radio frequency (RF) chain, while it may carry multiple PAs. This setup necessitates the recovery of high-dimensional information from low-dimensional observations, resulting in an ill-conditioned underdetermined recovery problem. Secondly, the in-waveguide channel is deterministic and completely depends on the positions of PAs in the waveguide. Hence, no stacking of pilot signals can produce additional independent linear equations to solve the high-dimensional wireless propagation channel. Moreover, the dynamic nature of PAs with the plug-and-play capability complicates the channel acquisition. The activation locations of a large number of PAs can be adjusted in a specified region to improve user service. Therefore, the wireless propagation channel presents near-field spatial non-stationarity effects due to the large array aperture." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 307, + 569, + 564, + 748 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 569, + 564, + 748 + ], + "spans": [ + { + "bbox": [ + 307, + 569, + 564, + 748 + ], + "type": "text", + "content": "To fill this research gap, this letter presents the first investigation into channel estimation for PASS. First and foremost, we extend classic linear channel estimators by adapting the antenna switching strategy for PASS, which provide fundamental channel estimation benchmarks. 
Furthermore, inspired by recent advances in deep learning (DL)-enabled channel estimation [7]-[9], we propose two efficient DL models to improve channel estimation accuracy while reducing pilot overhead. The first DL estimator, termed " + }, + { + "bbox": [ + 307, + 569, + 564, + 748 + ], + "type": "inline_equation", + "content": "PAMoE" + }, + { + "bbox": [ + 307, + 569, + 564, + 748 + ], + "type": "text", + "content": ", is built on the mixture of experts (MoE) architecture, which integrates PA positions and pilot signal features through multi-expert mechanisms to adaptively model the dynamic channel distributions in PASS. To enhance the flexibility and scalability of the channel estimation model for dynamic PA counts, we further propose a Transformer-style estimator, termed " + }, + { + "bbox": [ + 307, + 569, + 564, + 748 + ], + "type": "inline_equation", + "content": "PAformer" + }, + { + "bbox": [ + 307, + 569, + 564, + 748 + ], + "type": "text", + "content": ", which lever-" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "spans": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "text", + "content": "1" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 14, + 210, + 36, + 560 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 14, + 210, + 36, + 560 + ], + "spans": [ + { + "bbox": [ + 14, + 210, + 36, + 560 + ], + "type": "text", + "content": "arXiv:2503.13268v4 [cs.IT] 10 May 2025" + } + ] + } + ], + "index": 3 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 82, + 58, + 266, + 167 + ], + "blocks": [ + { + "bbox": [ + 82, + 58, + 266, + 167 + ], + "lines": [ + { + "bbox": [ + 82, + 58, + 266, + 167 + ], + "spans": [ + { + "bbox": [ + 82, + 58, + 266, + 167 + ], + "type": "image", + "image_path": "1809a3d795957d4d22b8aaff957d2fc3fdf1e8341d4a39ed60030baef9bf4f24.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 45, + 178, + 229, + 190 + ], + "lines": [ + { + "bbox": [ + 45, + 178, + 229, + 190 + ], + "spans": [ + { + "bbox": [ + 45, + 178, + 229, + 190 + ], + "type": "text", + "content": "Fig. 1. Pinching antennas assisted multi-user systems." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 45, + 208, + 301, + 304 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 208, + 301, + 304 + ], + "spans": [ + { + "bbox": [ + 45, + 208, + 301, + 304 + ], + "type": "text", + "content": "ages the self-attention mechanism to predict channel coefficients on a per-antenna basis. Numerical results demonstrate that the proposed DL estimators achieve superior channel estimation accuracy with significantly reduced pilot overhead compared to conventional channel estimators. In particular, both PAMoE and PAformer exhibit superior zero-shot learning capabilities on dynamic PA configurations without retraining operations." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 315, + 285, + 326 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 315, + 285, + 326 + ], + "spans": [ + { + "bbox": [ + 62, + 315, + 285, + 326 + ], + "type": "text", + "content": "II. 
SYSTEM MODEL AND PROBLEM FORMULATION" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "spans": [ + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": "As illustrated in Fig. 1, considering an uplink communication system that includes a base station (BS) equipped with " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": " PAs and " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": " single-antenna user equipments (UEs). Each PA has the same length " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "L" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": " on the waveguide. In a Cartesian system, the UEs are assumed to be randomly distributed within a rectangular region on the " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "x - y" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": " plane, with dimensions " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "D_{x}" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "D_{y}" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": ". The position of the " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": "-th UE is represented by " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "\\psi_{k} = (x_{k},y_{k},0)" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": ". Suppose the waveguide extends parallel to the " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": "-axis. Its height is denoted by " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "d" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": ", and its length aligns with the rectangular dimension " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "D_{x}" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": ". 
Hence, the coordinates of PA " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": " are given by " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "\\psi_{n}^{\\mathrm{P}} = (x_{n}^{\\mathrm{P}},0,d)" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "x_{n}^{\\mathrm{P}}" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": " lies in the interval " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "[L,D_x]" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": ". In this work, the discrete activation deployment of PAs is adopted, which simplifies the hardware design and is more practical than continuous activation [5]. The PAs can only be activated at specific discrete positions along the waveguide, forming the feasible set " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "\\mathcal{S} = \\left\\{L + \\frac{D_x - L}{Q - 1} (q - 1) \\mid q = 1,2,\\dots ,Q\\right\\}" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "Q" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": " denotes the number of discrete positions available. Since all " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": " PAs lie along the same waveguide, the transmitted signal of each PA is essentially a phase-shifted version of the signal from the BS at the waveguide feed point. The in-waveguide channel " + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "inline_equation", + "content": "\\mathbf{g} \\in \\mathbb{C}^{N \\times 1}" + }, + { + "bbox": [ + 45, + 329, + 301, + 583 + ], + "type": "text", + "content": " can be expressed as" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 71, + 587, + 299, + 611 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 587, + 299, + 611 + ], + "spans": [ + { + "bbox": [ + 71, + 587, + 299, + 611 + ], + "type": "interline_equation", + "content": "\\mathbf {g} = \\left[ \\alpha_ {1} e ^ {- j \\frac {2 \\pi}{\\lambda_ {g}} \\left| \\psi_ {0} ^ {\\mathrm {P}} - \\psi_ {1} ^ {\\mathrm {P}} \\right|}, \\dots , \\alpha_ {N} e ^ {- j \\frac {2 \\pi}{\\lambda_ {g}} \\left| \\psi_ {0} ^ {\\mathrm {P}} - \\psi_ {N} ^ {\\mathrm {P}} \\right|} \\right] ^ {\\top}, \\tag {1}", + "image_path": "f9d8cff381b2fbba57b9ebccf7b5e49ede0b1dbd815183208f9487c8510b9702.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 45, + 616, + 299, + 688 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 616, + 299, + 688 + ], + "spans": [ + { + "bbox": [ + 45, + 616, + 299, + 688 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 45, + 616, + 299, + 688 + ], + "type": "inline_equation", + "content": "\\psi_0^{\\mathrm{P}}" + }, + { + "bbox": [ + 45, + 616, + 299, + 688 + ], + "type": "text", + "content": " denotes the position of the waveguide's feed point. 
Parameter " + }, + { + "bbox": [ + 45, + 616, + 299, + 688 + ], + "type": "inline_equation", + "content": "\\lambda_{g} = \\frac{\\lambda}{n_{e}}" + }, + { + "bbox": [ + 45, + 616, + 299, + 688 + ], + "type": "text", + "content": " is the guided wavelength. Here, " + }, + { + "bbox": [ + 45, + 616, + 299, + 688 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 45, + 616, + 299, + 688 + ], + "type": "text", + "content": " is the wavelength in free space and " + }, + { + "bbox": [ + 45, + 616, + 299, + 688 + ], + "type": "inline_equation", + "content": "n_e" + }, + { + "bbox": [ + 45, + 616, + 299, + 688 + ], + "type": "text", + "content": " is the effective refractive index of the dielectric waveguide. " + }, + { + "bbox": [ + 45, + 616, + 299, + 688 + ], + "type": "inline_equation", + "content": "\\alpha_{n}" + }, + { + "bbox": [ + 45, + 616, + 299, + 688 + ], + "type": "text", + "content": " is the factor that determines the ratio of power exchanged between the waveguide and PA " + }, + { + "bbox": [ + 45, + 616, + 299, + 688 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 45, + 616, + 299, + 688 + ], + "type": "text", + "content": ", governed by the coupling length of the PA [5]." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "spans": [ + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "type": "text", + "content": "For the wireless propagation channel " + }, + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "type": "inline_equation", + "content": "\\mathbf{h}_k\\in \\mathbb{C}^{N\\times 1}" + }, + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "type": "text", + "content": " between the " + }, + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "type": "text", + "content": " -th UE and the PAs, " + }, + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "type": "inline_equation", + "content": "\\mathbf{h}_k" + }, + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "type": "text", + "content": " is composed of the LoS component " + }, + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "type": "inline_equation", + "content": "\\mathbf{h}_k^{\\mathrm{LoS}}" + }, + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "type": "text", + "content": " and the non-line-of-sight (NLoS) component " + }, + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "type": "inline_equation", + "content": "\\mathbf{h}_k^{\\mathrm{NLoS}}" + }, + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "type": "text", + "content": " , i.e., " + }, + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "type": "inline_equation", + "content": "\\mathbf{h}_k = \\Upsilon_k\\odot \\mathbf{h}_k^{\\mathrm{LoS}} + \\mathbf{h}_k^{\\mathrm{NLoS}}" + }, + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "type": "text", + "content": " . Here, the operator " + }, + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "type": "inline_equation", + "content": "\\odot" + }, + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "type": "text", + "content": " denotes the Hadamard product. 
" + }, + { + "bbox": [ + 45, + 689, + 301, + 750 + ], + "type": "inline_equation", + "content": "\\Upsilon_{k} = [\\upsilon_{k,n},\\dots ,\\upsilon_{k,N}]\\in \\mathbb{C}^{N\\times 1}" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 307, + 55, + 563, + 114 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 55, + 563, + 114 + ], + "spans": [ + { + "bbox": [ + 307, + 55, + 563, + 114 + ], + "type": "text", + "content": "is a Bernoulli random vector and the variable " + }, + { + "bbox": [ + 307, + 55, + 563, + 114 + ], + "type": "inline_equation", + "content": "\\nu_{k,n}" + }, + { + "bbox": [ + 307, + 55, + 563, + 114 + ], + "type": "text", + "content": " takes values from the set " + }, + { + "bbox": [ + 307, + 55, + 563, + 114 + ], + "type": "inline_equation", + "content": "\\{0,1\\}" + }, + { + "bbox": [ + 307, + 55, + 563, + 114 + ], + "type": "text", + "content": ", characterizing the existence of a LoS link between the " + }, + { + "bbox": [ + 307, + 55, + 563, + 114 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 307, + 55, + 563, + 114 + ], + "type": "text", + "content": "-th PA and the " + }, + { + "bbox": [ + 307, + 55, + 563, + 114 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 307, + 55, + 563, + 114 + ], + "type": "text", + "content": "-th UE1. The geometric spherical wavefront model-based LoS channel between the " + }, + { + "bbox": [ + 307, + 55, + 563, + 114 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 307, + 55, + 563, + 114 + ], + "type": "text", + "content": "-th UE and the PAs is expressed as [3]" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 323, + 118, + 563, + 153 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 323, + 118, + 563, + 153 + ], + "spans": [ + { + "bbox": [ + 323, + 118, + 563, + 153 + ], + "type": "interline_equation", + "content": "\\mathbf {h} _ {k} ^ {\\mathrm {L o S}} = \\left[ \\frac {\\sqrt {\\eta} e ^ {- j \\frac {2 \\pi}{\\lambda} | \\psi_ {k} - \\psi_ {1} ^ {\\mathrm {P}} |}}{| \\psi_ {k} - \\psi_ {1} ^ {\\mathrm {P}} |}, \\dots , \\frac {\\sqrt {\\eta} e ^ {- j \\frac {2 \\pi}{\\lambda} | \\psi_ {k} - \\psi_ {N} ^ {\\mathrm {P}} |}}{| \\psi_ {k} - \\psi_ {N} ^ {\\mathrm {P}} |} \\right] ^ {\\top}, \\tag {2}", + "image_path": "da35c5e756220535845baffa309d41ca34601fd8402175b0332282ebe899ae97.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 307, + 158, + 563, + 198 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 158, + 563, + 198 + ], + "spans": [ + { + "bbox": [ + 307, + 158, + 563, + 198 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 307, + 158, + 563, + 198 + ], + "type": "inline_equation", + "content": "\\eta = \\frac{\\lambda^2}{16\\pi^2}" + }, + { + "bbox": [ + 307, + 158, + 563, + 198 + ], + "type": "text", + "content": " denotes the path loss at a reference distance of " + }, + { + "bbox": [ + 307, + 158, + 563, + 198 + ], + "type": "inline_equation", + "content": "1\\mathrm{m}" + }, + { + "bbox": [ + 307, + 158, + 563, + 198 + ], + "type": "text", + "content": ". 
Considering " + }, + { + "bbox": [ + 307, + 158, + 563, + 198 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 307, + 158, + 563, + 198 + ], + "type": "text", + "content": " scatterers in the " + }, + { + "bbox": [ + 307, + 158, + 563, + 198 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 307, + 158, + 563, + 198 + ], + "type": "text", + "content": "-th UE " + }, + { + "bbox": [ + 307, + 158, + 563, + 198 + ], + "type": "inline_equation", + "content": "\\rightarrow" + }, + { + "bbox": [ + 307, + 158, + 563, + 198 + ], + "type": "text", + "content": " PAs link, the NLoS channel " + }, + { + "bbox": [ + 307, + 158, + 563, + 198 + ], + "type": "inline_equation", + "content": "\\mathbf{h}_k^{\\mathrm{NLoS}} \\in \\mathbb{C}^{N \\times 1}" + }, + { + "bbox": [ + 307, + 158, + 563, + 198 + ], + "type": "text", + "content": " is given by" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 370, + 202, + 563, + 235 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 370, + 202, + 563, + 235 + ], + "spans": [ + { + "bbox": [ + 370, + 202, + 563, + 235 + ], + "type": "interline_equation", + "content": "\\mathbf {h} _ {k} ^ {\\mathrm {N L o S}} = \\sqrt {\\frac {1}{S}} \\sum_ {s = 1} ^ {S} \\beta_ {k, s} \\mathbf {a} _ {k, s} e ^ {j \\eta_ {k, s}}, \\tag {3}", + "image_path": "47737856b77b4846d2ea999b8017a0803578cd431166b91bf761deb4da9003df.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 307, + 239, + 564, + 337 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 239, + 564, + 337 + ], + "spans": [ + { + "bbox": [ + 307, + 239, + 564, + 337 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 307, + 239, + 564, + 337 + ], + "type": "inline_equation", + "content": "\\beta_{k,s} \\sim \\mathcal{CN}(0, \\sigma_s^2)" + }, + { + "bbox": [ + 307, + 239, + 564, + 337 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 307, + 239, + 564, + 337 + ], + "type": "inline_equation", + "content": "\\eta_{k,s} \\sim \\mathcal{U}[0, 2\\pi]" + }, + { + "bbox": [ + 307, + 239, + 564, + 337 + ], + "type": "text", + "content": " denote the complex gain and the random phase of the " + }, + { + "bbox": [ + 307, + 239, + 564, + 337 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 307, + 239, + 564, + 337 + ], + "type": "text", + "content": "-th scatterer path, respectively. " + }, + { + "bbox": [ + 307, + 239, + 564, + 337 + ], + "type": "inline_equation", + "content": "\\mathbf{a}_{k,s} \\in \\mathbb{C}^{N \\times 1}" + }, + { + "bbox": [ + 307, + 239, + 564, + 337 + ], + "type": "text", + "content": " represents the receiving array response at the PAs. Since the movable region of PAs is large, the equivalent array aperture of PA is likely to exceed the Rayleigh distance that is the criterion to determine the near-field boundary. 
Hence, the spherical wavefront is utilized to characterize array response " + }, + { + "bbox": [ + 307, + 239, + 564, + 337 + ], + "type": "inline_equation", + "content": "\\mathbf{a}_{k,s}" + }, + { + "bbox": [ + 307, + 239, + 564, + 337 + ], + "type": "text", + "content": " and is given by" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 340, + 341, + 563, + 376 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 340, + 341, + 563, + 376 + ], + "spans": [ + { + "bbox": [ + 340, + 341, + 563, + 376 + ], + "type": "interline_equation", + "content": "\\mathbf {a} _ {k, s} = \\left[ \\frac {\\sqrt {\\eta} e ^ {- j 2 \\pi d _ {s , 1} / \\lambda}}{d _ {k , s} d _ {s , 1}}, \\dots , \\frac {\\sqrt {\\eta} e ^ {- j 2 \\pi d _ {s , N} / \\lambda}}{d _ {k , s} d _ {s , N}} \\right] ^ {\\top}, \\tag {4}", + "image_path": "31a8a07fb25b3c3701cd20f4389a2bd3181430ccd72480f964e2fdde9be8829f.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 308, + 378, + 563, + 402 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 378, + 563, + 402 + ], + "spans": [ + { + "bbox": [ + 308, + 378, + 563, + 402 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 308, + 378, + 563, + 402 + ], + "type": "inline_equation", + "content": "d_{k,s}" + }, + { + "bbox": [ + 308, + 378, + 563, + 402 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 308, + 378, + 563, + 402 + ], + "type": "inline_equation", + "content": "d_{s,n},n\\in \\{1,\\ldots ,N\\}" + }, + { + "bbox": [ + 308, + 378, + 563, + 402 + ], + "type": "text", + "content": " denote the distances from UE " + }, + { + "bbox": [ + 308, + 378, + 563, + 402 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 308, + 378, + 563, + 402 + ], + "type": "text", + "content": " to scatterer " + }, + { + "bbox": [ + 308, + 378, + 563, + 402 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 308, + 378, + 563, + 402 + ], + "type": "text", + "content": " and from scatterer " + }, + { + "bbox": [ + 308, + 378, + 563, + 402 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 308, + 378, + 563, + 402 + ], + "type": "text", + "content": " to PA " + }, + { + "bbox": [ + 308, + 378, + 563, + 402 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 308, + 378, + 563, + 402 + ], + "type": "text", + "content": " , respectively." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 308, + 403, + 563, + 426 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 403, + 563, + 426 + ], + "spans": [ + { + "bbox": [ + 308, + 403, + 563, + 426 + ], + "type": "text", + "content": "The received signal at the BS during the " + }, + { + "bbox": [ + 308, + 403, + 563, + 426 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 308, + 403, + 563, + 426 + ], + "type": "text", + "content": "-th slot can be expressed as" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 388, + 431, + 563, + 463 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 388, + 431, + 563, + 463 + ], + "spans": [ + { + "bbox": [ + 388, + 431, + 563, + 463 + ], + "type": "interline_equation", + "content": "y _ {t} = \\sum_ {k = 1} ^ {K} \\mathbf {g} ^ {\\top} \\mathbf {h} _ {k} s _ {k, t} + n _ {t}, \\tag {5}", + "image_path": "60326b7141facb82f16b1ff2dfe41efefbca8781c78878d59b99277a3c7bf062.jpg" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 308, + 468, + 563, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 468, + 563, + 491 + ], + "spans": [ + { + "bbox": [ + 308, + 468, + 563, + 491 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 308, + 468, + 563, + 491 + ], + "type": "inline_equation", + "content": "s_{k,t}" + }, + { + "bbox": [ + 308, + 468, + 563, + 491 + ], + "type": "text", + "content": " represents the symbol transmitted by the " + }, + { + "bbox": [ + 308, + 468, + 563, + 491 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 308, + 468, + 563, + 491 + ], + "type": "text", + "content": "-th UE, and " + }, + { + "bbox": [ + 308, + 468, + 563, + 491 + ], + "type": "inline_equation", + "content": "n_t \\sim CN(0, \\sigma_n^2)" + }, + { + "bbox": [ + 308, + 468, + 563, + 491 + ], + "type": "text", + "content": " is the additive white Gaussian noise." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 307, + 492, + 564, + 635 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 492, + 564, + 635 + ], + "spans": [ + { + "bbox": [ + 307, + 492, + 564, + 635 + ], + "type": "text", + "content": "In PASS, the in-waveguide channel " + }, + { + "bbox": [ + 307, + 492, + 564, + 635 + ], + "type": "inline_equation", + "content": "\\mathbf{g}" + }, + { + "bbox": [ + 307, + 492, + 564, + 635 + ], + "type": "text", + "content": " can be regarded as the deterministic channel component that depends on the locations and the coupling lengths of " + }, + { + "bbox": [ + 307, + 492, + 564, + 635 + ], + "type": "inline_equation", + "content": "\\mathrm{PAs}^2" + }, + { + "bbox": [ + 307, + 492, + 564, + 635 + ], + "type": "text", + "content": ". Consequently, we merely need to estimate the wireless propagation channel " + }, + { + "bbox": [ + 307, + 492, + 564, + 635 + ], + "type": "inline_equation", + "content": "\\mathbf{h}_k" + }, + { + "bbox": [ + 307, + 492, + 564, + 635 + ], + "type": "text", + "content": " from the " + }, + { + "bbox": [ + 307, + 492, + 564, + 635 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 307, + 492, + 564, + 635 + ], + "type": "text", + "content": "-th UE to the PAs. 
However, PASS introduces a fundamental challenge in channel estimation due to the highly coupled nature of " + }, + { + "bbox": [ + 307, + 492, + 564, + 635 + ], + "type": "inline_equation", + "content": "\\mathbf{g}" + }, + { + "bbox": [ + 307, + 492, + 564, + 635 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 307, + 492, + 564, + 635 + ], + "type": "inline_equation", + "content": "\\mathbf{h}_k" + }, + { + "bbox": [ + 307, + 492, + 564, + 635 + ], + "type": "text", + "content": ". Specifically, while each waveguide is connected to multiple PAs, the waveguide channel is fixed and cannot be reconfigured to perform diverse beam measurements as in conventional hybrid precoding. As a result, the system must infer a high-dimensional channel vector " + }, + { + "bbox": [ + 307, + 492, + 564, + 635 + ], + "type": "inline_equation", + "content": "\\mathbf{h}_k" + }, + { + "bbox": [ + 307, + 492, + 564, + 635 + ], + "type": "text", + "content": " from inherently low-dimensional pilot observations, typically just one" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 307, + 647, + 564, + 702 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 647, + 564, + 702 + ], + "spans": [ + { + "bbox": [ + 307, + 647, + 564, + 702 + ], + "type": "text", + "content": "1In the existing works for PASS, the free space channel model between UEs and the PAs is commonly utilized by assuming the flexible PAs close to UEs [3]–[6]. However, in the practical wireless propagation environment, the NLoS channel component caused by the statical and dynamic scatters should be considered. In particular, all PAs are hard to provide the full coverage of LoS services for all UEs in practical deployment environment." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 307, + 702, + 564, + 748 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 702, + 564, + 748 + ], + "spans": [ + { + "bbox": [ + 307, + 702, + 564, + 748 + ], + "type": "text", + "content": "2This work employs the electronic activation of pre-positioned PAs along the waveguide. The system controller only needs to determine the activation state to ascertain the exact positions of PAs. The reconfiguration of the activated PA set is usually several orders of magnitude faster than the channel coherence time, enabling it to respond efficiently to rapid channel variations." + } + ] + } + ], + "index": 21 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "spans": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 0 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 45, + 55, + 301, + 199 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 55, + 301, + 199 + ], + "spans": [ + { + "bbox": [ + 45, + 55, + 301, + 199 + ], + "type": "text", + "content": "scalar per waveguide per measurement. 
To elaborate, suppose the widely used orthogonal pilot transmission strategy, e.g., time-division, is adopted, and " + }, + { + "bbox": [ + 45, + 55, + 301, + 199 + ], + "type": "inline_equation", + "content": "T" + }, + { + "bbox": [ + 45, + 55, + 301, + 199 + ], + "type": "text", + "content": " denotes the number of pilot transmission slots transmitted by UE " + }, + { + "bbox": [ + 45, + 55, + 301, + 199 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 45, + 55, + 301, + 199 + ], + "type": "text", + "content": ". Note that even if the UE transmits " + }, + { + "bbox": [ + 45, + 55, + 301, + 199 + ], + "type": "inline_equation", + "content": "T \\geq N" + }, + { + "bbox": [ + 45, + 55, + 301, + 199 + ], + "type": "text", + "content": " or more pilot slots, a single waveguide output may provide no more than one linearly independent measurement per slot. That is, each pilot symbol experiences the same fixed merging of " + }, + { + "bbox": [ + 45, + 55, + 301, + 199 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 45, + 55, + 301, + 199 + ], + "type": "text", + "content": " PAs in the waveguide, yielding only repeated versions of the same scalar. Hence, the classic error-criterion-based algorithms, e.g., least square (LS) or linear minimum mean square error (LMMSE) estimators, are difficult to apply directly for channel estimation in " + }, + { + "bbox": [ + 45, + 55, + 301, + 199 + ], + "type": "inline_equation", + "content": "\\mathrm{PASS}^3" + }, + { + "bbox": [ + 45, + 55, + 301, + 199 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 59, + 211, + 288, + 223 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 59, + 211, + 288, + 223 + ], + "spans": [ + { + "bbox": [ + 59, + 211, + 288, + 223 + ], + "type": "text", + "content": "III. DEEP LEARNING BASED CHANNEL ESTIMATION" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 45, + 226, + 301, + 417 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 226, + 301, + 417 + ], + "spans": [ + { + "bbox": [ + 45, + 226, + 301, + 417 + ], + "type": "text", + "content": "In this section, to improve the channel estimation performance, we leverage DL models to develop efficient channel estimation schemes for PASS. However, two critical challenges emerge when applying the DL approach to PASS. Firstly, the channel characteristics vary dramatically as PAs freely change positions within designated regions, which requires the efficient network architecture with sufficient capacity to learn the extended channel state space. Secondly, by dynamically activating candidate PAs along the waveguide in response to real-time communication demands, the system introduces time-varying channel dimension and heterogeneous signal distribution. The proposed DL-based channel estimator necessitates an innovative architecture capable of adaptively handling spatio-temporal variations in channel parameters, thereby fostering advancements in neural network design to enable robust estimation under non-stationary conditions." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 45, + 433, + 149, + 443 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 433, + 149, + 443 + ], + "spans": [ + { + "bbox": [ + 45, + 433, + 149, + 443 + ], + "type": "text", + "content": "A. 
Dataset Construction" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "spans": [ + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "text", + "content": "In the offline training stage, we collect " + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "inline_equation", + "content": "N_{s}" + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "text", + "content": " paired samples, i.e., the pilot observation vector " + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "inline_equation", + "content": "\\widetilde{\\mathbf{y}}_k\\in \\mathbb{C}^{T\\times 1}" + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "text", + "content": " at the BS, the position set " + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "inline_equation", + "content": "\\Psi^{\\mathrm{P}} = [\\psi_n^1,\\dots ,\\psi_N^{\\mathrm{P}}] = [(x_1^{\\mathrm{P}},0,d),\\dots ,(x_N^{\\mathrm{P}},0,d)]" + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "text", + "content": " of PAs, and the corresponding channel sample " + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "inline_equation", + "content": "\\mathbf{h}_k" + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "text", + "content": ". In the proposed channel estimation network, the input tensor is designed as the set of both PA positions and the pilot observation. Considering the waveguide is deployed parallel to the " + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "text", + "content": "-axis, we merely need the " + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "text", + "content": "-axis coordinate set " + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "inline_equation", + "content": "\\mathbf{x}^{\\mathrm{P}} = [x_1^{\\mathrm{P}},\\ldots ,x_N^{\\mathrm{P}}]^T\\in \\mathbb{R}^{N\\times 1}" + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "text", + "content": " of PAs as the input feature. To enable the neural network to adaptively estimate channels across varying PA configurations, we utilize the in-waveguide channel " + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "inline_equation", + "content": "\\mathbf{g}" + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "text", + "content": " to transform the observation " + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "inline_equation", + "content": "\\widetilde{\\mathbf{y}}_k\\in \\mathbb{C}^{T\\times 1}" + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "text", + "content": " into " + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "inline_equation", + "content": "\\widetilde{\\mathbf{Y}}_k = \\mathbf{g}\\widetilde{\\mathbf{y}}_k^\\top \\in \\mathbb{C}^{N\\times T}" + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "text", + "content": ". This transformation aligns the input dimension of neural network with the current number of PAs " + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "text", + "content": " in PASS, which facilitates the subsequent channel estimation network design. 
The complex-value matrix " + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "inline_equation", + "content": "\\widetilde{\\mathbf{Y}}_k" + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "text", + "content": " is converted into the real-value tensor " + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "inline_equation", + "content": "\\overline{\\mathbf{Y}}_k = \\{\\Re (\\widetilde{\\mathbf{Y}}_k),\\Im (\\widetilde{\\mathbf{Y}}_k)\\} \\in \\mathbb{R}^{N\\times 2T}" + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "text", + "content": " for neural network processing. Accordingly, the label tensor in the network training is " + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "inline_equation", + "content": "\\bar{\\mathbf{H}}_k = \\{\\Re (\\mathbf{h}_k),\\Im (\\mathbf{h}_k)\\} \\in \\mathbb{R}^{N\\times 2}" + }, + { + "bbox": [ + 45, + 447, + 301, + 676 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 45, + 683, + 301, + 750 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 683, + 301, + 750 + ], + "spans": [ + { + "bbox": [ + 45, + 683, + 301, + 750 + ], + "type": "text", + "content": "3To collect independent pilot measurements for each PA, as required by LS and LMMSE, one feasible approach is to introduce an antenna switching matrix that selects subsets of PAs in different time slots to realize the signal separation. In Section IV of this letter, we provide antenna switching matrix-based LS and LMMSE estimators as channel estimation benchmarks. Note that switching each PA in different slots introduces hardware overhead, extended measurement time, and potential switch losses, all of which reduce practicality." + } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 348, + 55, + 530, + 160 + ], + "blocks": [ + { + "bbox": [ + 348, + 55, + 530, + 160 + ], + "lines": [ + { + "bbox": [ + 348, + 55, + 530, + 160 + ], + "spans": [ + { + "bbox": [ + 348, + 55, + 530, + 160 + ], + "type": "image", + "image_path": "241e3ab1863fe20fbdb6cad0c762ddc9234ff318ed7416f9440786b3f82e7423.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 308, + 168, + 497, + 179 + ], + "lines": [ + { + "bbox": [ + 308, + 168, + 497, + 179 + ], + "spans": [ + { + "bbox": [ + 308, + 168, + 497, + 179 + ], + "type": "text", + "content": "Fig. 2. Proposed PAMoE model for channel estimation." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "bbox": [ + 308, + 200, + 552, + 211 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 200, + 552, + 211 + ], + "spans": [ + { + "bbox": [ + 308, + 200, + 552, + 211 + ], + "type": "text", + "content": "B. PAMoE: Mixture of Experts-Based Channel Estimation" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 307, + 214, + 563, + 274 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 214, + 563, + 274 + ], + "spans": [ + { + "bbox": [ + 307, + 214, + 563, + 274 + ], + "type": "text", + "content": "We first propose a PAMoE estimator as illustrated in Fig. 2 to address high-dimensional dynamic channel estimation from low-dimensional received pilots, which incorporates the dynamic padding, feature embedding, fusion, and position-aware MoE modules [10]." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "spans": [ + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "content": "1) Dynamic Padding: To accommodate variable PA counts " + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "content": " and handle variable-length inputs, " + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "inline_equation", + "content": "PAMoE" + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "content": " employs dynamic padding up to a maximum " + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "inline_equation", + "content": "N_{\\mathrm{max}}" + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "content": ". Let " + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "content": " denote the batch size in the network training stage. The input tensor in a training batch can be expressed as PA positions " + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "inline_equation", + "content": "\\mathbf{P} \\in \\mathbb{R}^{B \\times N \\times 1}" + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "content": " and pilot signals " + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "inline_equation", + "content": "\\mathbf{S} \\in \\mathbb{R}^{B \\times N \\times 2T}" + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "content": ", which are the batch version of " + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "inline_equation", + "content": "\\mathbf{x}^{\\mathrm{P}}" + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "inline_equation", + "content": "\\overline{\\mathbf{Y}}_k" + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "content": ", respectively. 
If " + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "inline_equation", + "content": "N < N_{\\mathrm{max}}" + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "content": ", the network pads along the PA dimension so that both " + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "inline_equation", + "content": "\\mathbf{P}" + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "inline_equation", + "content": "\\mathbf{S}" + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "content": " become length " + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "inline_equation", + "content": "N_{\\mathrm{max}}" + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "content": ", i.e., " + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "inline_equation", + "content": "\\mathbf{P}' = [\\mathbf{P}, \\phi^1] \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times 1}" + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "inline_equation", + "content": "\\mathbf{S}' = [\\mathbf{S}, \\phi^2] \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times d_{\\mathrm{sig}}}" + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "inline_equation", + "content": "\\phi^1" + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "inline_equation", + "content": "\\phi^2 \\in \\mathbb{R}^{N_{\\mathrm{pad}} \\times 1}(N_{\\mathrm{pad}} = N_{\\mathrm{max}} - N)" + }, + { + "bbox": [ + 307, + 274, + 563, + 406 + ], + "type": "text", + "content": " denote the learnable padding embeddings, respectively." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 307, + 406, + 563, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 406, + 563, + 491 + ], + "spans": [ + { + "bbox": [ + 307, + 406, + 563, + 491 + ], + "type": "text", + "content": "2) Feature Embedding: To handle continuous spatial information and achieve efficient extrapolation of PAs, the Fourier basis function is employed, developing a Fourier positional embedding approach. Suppose a set of exponentially increasing frequency bases " + }, + { + "bbox": [ + 307, + 406, + 563, + 491 + ], + "type": "inline_equation", + "content": "\\mathbf{f} = \\left[2^{f}\\pi \\right]_{f = 0}^{F - 1}\\in \\mathbb{R}^{F}" + }, + { + "bbox": [ + 307, + 406, + 563, + 491 + ], + "type": "text", + "content": " is defined, where " + }, + { + "bbox": [ + 307, + 406, + 563, + 491 + ], + "type": "inline_equation", + "content": "F" + }, + { + "bbox": [ + 307, + 406, + 563, + 491 + ], + "type": "text", + "content": " is the number of frequency components. The scaled position encodings can be expressed as" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 383, + 497, + 563, + 511 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 383, + 497, + 563, + 511 + ], + "spans": [ + { + "bbox": [ + 383, + 497, + 563, + 511 + ], + "type": "interline_equation", + "content": "\\boldsymbol {\\Theta} = \\mathbf {S} ^ {\\prime} \\odot \\mathbf {f} \\in \\mathbb {R} ^ {B \\times N _ {\\max } \\times F}. 
\\tag {6}", + "image_path": "a6bce7b7e4541d718941d17854323a930db1d97c50e61ff1fa651b3569bd179d.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 308, + 518, + 563, + 542 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 518, + 563, + 542 + ], + "spans": [ + { + "bbox": [ + 308, + 518, + 563, + 542 + ], + "type": "text", + "content": "Then, we apply sinusoidal transformations to obtain sine and cosine features, and then are concatenated as" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 346, + 549, + 563, + 564 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 346, + 549, + 563, + 564 + ], + "spans": [ + { + "bbox": [ + 346, + 549, + 563, + 564 + ], + "type": "interline_equation", + "content": "\\mathbf {E} = \\operatorname {C o n c a t} \\left[ \\sin (\\boldsymbol {\\Theta}), \\cos (\\boldsymbol {\\Theta}) \\right] \\in \\mathbb {R} ^ {B \\times N _ {\\max } \\times 2 F}. \\tag {7}", + "image_path": "bad10297d4043830c1c79115eac50e6215844c80612b906ec2ef674b84e3e7e3.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 307, + 571, + 563, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 571, + 563, + 666 + ], + "spans": [ + { + "bbox": [ + 307, + 571, + 563, + 666 + ], + "type": "text", + "content": "Further, " + }, + { + "bbox": [ + 307, + 571, + 563, + 666 + ], + "type": "inline_equation", + "content": "\\mathbf{E}" + }, + { + "bbox": [ + 307, + 571, + 563, + 666 + ], + "type": "text", + "content": " is projected into the embedding space " + }, + { + "bbox": [ + 307, + 571, + 563, + 666 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}_{\\mathrm{pos}} = \\mathbf{EW}_{\\mathrm{pos}} + \\mathbf{b}_{\\mathrm{pos}} \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times d_{\\mathrm{embed}}}" + }, + { + "bbox": [ + 307, + 571, + 563, + 666 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 307, + 571, + 563, + 666 + ], + "type": "inline_equation", + "content": "\\mathbf{W}_{\\mathrm{pos}} \\in \\mathbb{R}^{2F \\times d_{\\mathrm{embed}}}" + }, + { + "bbox": [ + 307, + 571, + 563, + 666 + ], + "type": "text", + "content": " is the learnable weight matrix and " + }, + { + "bbox": [ + 307, + 571, + 563, + 666 + ], + "type": "inline_equation", + "content": "\\mathbf{b}_{\\mathrm{pos}} \\in \\mathbb{R}^{d_{\\mathrm{embed}}}" + }, + { + "bbox": [ + 307, + 571, + 563, + 666 + ], + "type": "text", + "content": " is the bias term. This Fourier embedding effectively captures multi-scale positional variations in the input space. The pilot signal data is embedded to the same hidden dimension " + }, + { + "bbox": [ + 307, + 571, + 563, + 666 + ], + "type": "inline_equation", + "content": "d_{\\mathrm{hid}}" + }, + { + "bbox": [ + 307, + 571, + 563, + 666 + ], + "type": "text", + "content": " by the multilayer perceptron (MLP)-based linear mapping module " + }, + { + "bbox": [ + 307, + 571, + 563, + 666 + ], + "type": "inline_equation", + "content": "\\phi_{\\mathrm{sig}}" + }, + { + "bbox": [ + 307, + 571, + 563, + 666 + ], + "type": "text", + "content": ", i.e., " + }, + { + "bbox": [ + 307, + 571, + 563, + 666 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}_{\\mathrm{sig}} = \\phi_{\\mathrm{sig}}(\\mathbf{S}') \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times d_{\\mathrm{hid}}}" + }, + { + "bbox": [ + 307, + 571, + 563, + 666 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 307, + 667, + 564, + 725 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 667, + 564, + 725 + ], + "spans": [ + { + "bbox": [ + 307, + 667, + 564, + 725 + ], + "type": "text", + "content": "3) Feature Fusion: To fuse positional features with pilot signal features, we employ a gating function to generate a gate from a control feature based on PA positions and apply it to the target feature based on pilot signal. The gating operation is formulated as" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 354, + 733, + 563, + 748 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 354, + 733, + 563, + 748 + ], + "spans": [ + { + "bbox": [ + 354, + 733, + 563, + 748 + ], + "type": "interline_equation", + "content": "\\mathbf {G} = \\sigma \\left(\\mathbf {W} _ {g} \\mathbf {Z} _ {\\text {p o s}} + \\mathbf {b} _ {g}\\right) \\in \\mathbb {R} ^ {B \\times N _ {\\max } \\times d _ {\\text {h i d}}}, \\tag {8}", + "image_path": "5664d4e9d1057417c4f4c5812b52c6f50f054b96fd37dc778bc557f463bc155c.jpg" + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "spans": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 0 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 85, + 56, + 265, + 205 + ], + "blocks": [ + { + "bbox": [ + 85, + 56, + 265, + 205 + ], + "lines": [ + { + "bbox": [ + 85, + 56, + 265, + 205 + ], + "spans": [ + { + "bbox": [ + 85, + 56, + 265, + 205 + ], + "type": "image", + "image_path": "a8f19f86ad289a59e60b7ffa3eb8b3b7b38f5339d4b348c9f7164601efdf43a2.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 45, + 214, + 240, + 224 + ], + "lines": [ + { + "bbox": [ + 45, + 214, + 240, + 224 + ], + "spans": [ + { + "bbox": [ + 45, + 214, + 240, + 224 + ], + "type": "text", + "content": "Fig. 3. Proposed PAformer model for channel estimation." 
+ } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 85, + 242, + 301, + 258 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 242, + 301, + 258 + ], + "spans": [ + { + "bbox": [ + 85, + 242, + 301, + 258 + ], + "type": "interline_equation", + "content": "\\mathbf {Z} _ {\\text {f u s e d}} = \\mathbf {G} \\odot \\mathbf {Z} _ {\\text {s i g}} + \\mathbf {Z} _ {\\text {p o s}} \\in \\mathbb {R} ^ {B \\times N _ {\\max } \\times d _ {\\text {h i d}}}, \\tag {9}", + "image_path": "981862df35fea380168ebba8095c541e236210b8dfaf3f4cf84859050c9671d4.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 45, + 263, + 301, + 299 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 263, + 301, + 299 + ], + "spans": [ + { + "bbox": [ + 45, + 263, + 301, + 299 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 45, + 263, + 301, + 299 + ], + "type": "inline_equation", + "content": "\\mathbf{W}_g" + }, + { + "bbox": [ + 45, + 263, + 301, + 299 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 45, + 263, + 301, + 299 + ], + "type": "inline_equation", + "content": "\\mathbf{b}_g" + }, + { + "bbox": [ + 45, + 263, + 301, + 299 + ], + "type": "text", + "content": " denote the weight and bias of a linear layer with " + }, + { + "bbox": [ + 45, + 263, + 301, + 299 + ], + "type": "inline_equation", + "content": "d_{\\mathrm{hid}}" + }, + { + "bbox": [ + 45, + 263, + 301, + 299 + ], + "type": "text", + "content": " neurons, respectively, and " + }, + { + "bbox": [ + 45, + 263, + 301, + 299 + ], + "type": "inline_equation", + "content": "\\sigma (\\cdot)" + }, + { + "bbox": [ + 45, + 263, + 301, + 299 + ], + "type": "text", + "content": " is a Sigmoid activation function." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "spans": [ + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "text", + "content": "4) MoE With Gating Network: Suppose there are " + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "inline_equation", + "content": "E" + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "text", + "content": " experts and each expert takes " + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}_{\\mathrm{fused}}" + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "text", + "content": " and processes it with an MLP-Mixer block across both the feature dimension " + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "inline_equation", + "content": "d_{\\mathrm{hid}}" + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "text", + "content": " and the spatial dimension " + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "inline_equation", + "content": "N_{\\mathrm{max}}" + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "text", + "content": ". 
Let " + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}_e = E_e(\\mathbf{Z}_{\\mathrm{fused}}) \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times d_{\\mathrm{hid}}}" + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "text", + "content": " denote the output of expert " + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "inline_equation", + "content": "e" + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "inline_equation", + "content": "(e = 1, \\dots, E)" + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "text", + "content": ". A gating network is employed to weight each expert's output. Typically, it pools " + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}_{\\mathrm{fused}}" + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "text", + "content": " over the PA dimension to get a global context " + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "inline_equation", + "content": "\\mathbf{z}_{\\mathrm{pool}} \\in \\mathbb{R}^{B \\times d_{\\mathrm{hid}}}" + }, + { + "bbox": [ + 45, + 300, + 300, + 407 + ], + "type": "text", + "content": ", and then applies the softmax activation function to produce gating weights, which can be expressed as" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 91, + 414, + 299, + 430 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 91, + 414, + 299, + 430 + ], + "spans": [ + { + "bbox": [ + 91, + 414, + 299, + 430 + ], + "type": "interline_equation", + "content": "\\boldsymbol {\\alpha} = \\operatorname {s o f t m a x} \\left(\\mathbf {W} _ {\\alpha} \\mathbf {z} _ {\\text {p o o l}} + \\mathbf {b} _ {\\alpha}\\right) \\in \\mathbb {R} ^ {B \\times E}, \\tag {10}", + "image_path": "ba214a9e0f7b39a39cdc1b432bf7c57a182ec750d3f482664c514300e8099aa6.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 45, + 435, + 299, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 435, + 299, + 460 + ], + "spans": [ + { + "bbox": [ + 45, + 435, + 299, + 460 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 45, + 435, + 299, + 460 + ], + "type": "inline_equation", + "content": "\\sum_{e=1}^{E} \\alpha_{b,e} = 1" + }, + { + "bbox": [ + 45, + 435, + 299, + 460 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 45, + 435, + 299, + 460 + ], + "type": "inline_equation", + "content": "\\forall b \\in \\{1, \\dots, B\\}" + }, + { + "bbox": [ + 45, + 435, + 299, + 460 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 45, + 435, + 299, + 460 + ], + "type": "inline_equation", + "content": "\\mathbf{W}_{\\alpha}" + }, + { + "bbox": [ + 45, + 435, + 299, + 460 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 45, + 435, + 299, + 460 + ], + "type": "inline_equation", + "content": "\\mathbf{b}_{\\alpha}" + }, + { + "bbox": [ + 45, + 435, + 299, + 460 + ], + "type": "text", + "content": " are the weight and bias of a linear layer with " + }, + { + "bbox": [ + 45, + 435, + 299, + 460 + ], + "type": "inline_equation", + "content": "E" + }, + { + "bbox": [ + 45, + 435, + 299, + 460 + ], + "type": "text", + "content": " neurons, respectively." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 45, + 460, + 301, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 460, + 301, + 509 + ], + "spans": [ + { + "bbox": [ + 45, + 460, + 301, + 509 + ], + "type": "text", + "content": "Given the output " + }, + { + "bbox": [ + 45, + 460, + 301, + 509 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}_e" + }, + { + "bbox": [ + 45, + 460, + 301, + 509 + ], + "type": "text", + "content": " of each expert and the gating weights " + }, + { + "bbox": [ + 45, + 460, + 301, + 509 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 45, + 460, + 301, + 509 + ], + "type": "text", + "content": ", we form a weighted sum over experts. Let " + }, + { + "bbox": [ + 45, + 460, + 301, + 509 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 45, + 460, + 301, + 509 + ], + "type": "text", + "content": " be reshaped to " + }, + { + "bbox": [ + 45, + 460, + 301, + 509 + ], + "type": "inline_equation", + "content": "\\mathbb{R}^{B\\times 1\\times E}" + }, + { + "bbox": [ + 45, + 460, + 301, + 509 + ], + "type": "text", + "content": " so it can broadcast over the " + }, + { + "bbox": [ + 45, + 460, + 301, + 509 + ], + "type": "inline_equation", + "content": "N_{\\mathrm{max}}" + }, + { + "bbox": [ + 45, + 460, + 301, + 509 + ], + "type": "text", + "content": " dimension to obtain the following output of MoE" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 99, + 513, + 299, + 544 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 99, + 513, + 299, + 544 + ], + "spans": [ + { + "bbox": [ + 99, + 513, + 299, + 544 + ], + "type": "interline_equation", + "content": "\\mathbf {Z} _ {\\mathrm {M o E}} = \\sum_ {e = 1} ^ {E} \\boldsymbol {\\alpha} _ {e} \\mathbf {Z} _ {e} \\in \\mathbb {R} ^ {B \\times N _ {\\max } \\times d _ {\\mathrm {h i d}}} \\tag {11}", + "image_path": "d0448b850dbf37ab2d1651fb3c150e4a729dd49a056ce622509728a8caf9df7e.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "spans": [ + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "text", + "content": "Finally, the network concatenates the positional feature " + }, + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}_{\\mathrm{pos}}" + }, + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}_{\\mathrm{MoE}}" + }, + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "text", + "content": " along the last dimension, i.e., " + }, + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}_{\\mathrm{concat}} \\in \\mathbb{R}^{B \\times N_{\\mathrm{max}} \\times (2d_{\\mathrm{hid}})}" + }, + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "text", + "content": ". 
Then, a linear layer maps " + }, + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}_{\\mathrm{concat}}" + }, + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "text", + "content": " to the estimated channel " + }, + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "inline_equation", + "content": "\\widetilde{\\mathbf{H}} \\in \\mathbb{R}^{B \\times N \\times 2}" + }, + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "text", + "content": ". In " + }, + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "inline_equation", + "content": "PAMoE" + }, + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "text", + "content": ", the dynamic padding pattern requires the network to fix a predefined maximum number of PAs " + }, + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "inline_equation", + "content": "N_{\\mathrm{max}}" + }, + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "text", + "content": " in the training stage, which is a limiting factor if PAs exceed the predefined maximum bound " + }, + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "inline_equation", + "content": "N_{\\mathrm{max}}" + }, + { + "bbox": [ + 45, + 548, + 301, + 633 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 46, + 649, + 270, + 661 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 649, + 270, + 661 + ], + "spans": [ + { + "bbox": [ + 46, + 649, + 270, + 661 + ], + "type": "text", + "content": "C. P Aformer: Transformer-Based Channel Estimation" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 45, + 664, + 301, + 750 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 664, + 301, + 750 + ], + "spans": [ + { + "bbox": [ + 45, + 664, + 301, + 750 + ], + "type": "text", + "content": "We further propose a PAformer estimator as illustrated in Fig. 3 to address the scalability limitation of " + }, + { + "bbox": [ + 45, + 664, + 301, + 750 + ], + "type": "inline_equation", + "content": "PAMoE" + }, + { + "bbox": [ + 45, + 664, + 301, + 750 + ], + "type": "text", + "content": ", which is inherently constrained by fixed " + }, + { + "bbox": [ + 45, + 664, + 301, + 750 + ], + "type": "inline_equation", + "content": "N_{\\mathrm{max}}" + }, + { + "bbox": [ + 45, + 664, + 301, + 750 + ], + "type": "text", + "content": " during training. PAformer incorporates self-attention layers with permutation-equivariance, enabling the network to dynamically accommodate PAs of arbitrary size " + }, + { + "bbox": [ + 45, + 664, + 301, + 750 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 45, + 664, + 301, + 750 + ], + "type": "text", + "content": ". The proposed PAformer predicts channel coefficients in a per-antenna manner, offering" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 308, + 55, + 563, + 78 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 55, + 563, + 78 + ], + "spans": [ + { + "bbox": [ + 308, + 55, + 563, + 78 + ], + "type": "text", + "content": "flexibility if new data has more antennas than seen before in the training stage." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 308, + 79, + 563, + 139 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 79, + 563, + 139 + ], + "spans": [ + { + "bbox": [ + 308, + 79, + 563, + 139 + ], + "type": "text", + "content": "1) Input Embedding: In PAformer, we first concatenate the position and pilot signal features along their last dimension, i.e., " + }, + { + "bbox": [ + 308, + 79, + 563, + 139 + ], + "type": "inline_equation", + "content": "\\mathbf{V} = \\left[\\mathbf{P},\\mathbf{S}\\right]\\in \\mathbb{R}^{B\\times N\\times (2T + 1)}" + }, + { + "bbox": [ + 308, + 79, + 563, + 139 + ], + "type": "text", + "content": ". Next, " + }, + { + "bbox": [ + 308, + 79, + 563, + 139 + ], + "type": "inline_equation", + "content": "\\mathbf{V}" + }, + { + "bbox": [ + 308, + 79, + 563, + 139 + ], + "type": "text", + "content": " is mapped into a hidden representation " + }, + { + "bbox": [ + 308, + 79, + 563, + 139 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^{(0)}" + }, + { + "bbox": [ + 308, + 79, + 563, + 139 + ], + "type": "text", + "content": " via an embedding network " + }, + { + "bbox": [ + 308, + 79, + 563, + 139 + ], + "type": "inline_equation", + "content": "\\phi" + }, + { + "bbox": [ + 308, + 79, + 563, + 139 + ], + "type": "text", + "content": " based on MLP, i.e., " + }, + { + "bbox": [ + 308, + 79, + 563, + 139 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^{(0)} = \\phi (\\mathbf{V})\\in \\mathbb{R}^{B\\times N\\times d_{\\mathrm{hid}}}" + }, + { + "bbox": [ + 308, + 79, + 563, + 139 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "spans": [ + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "content": "2) Transformer Encoder: PAformer employs a pre-norm Transformer with " + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "P" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "content": " stacked self-attention blocks (SABs), each comprising multi-head self-attention (MHA) and a feedforward network (FFN), both with residual connections and layer normalization. Let " + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^{(\\ell -1)}" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "content": " denote the input to the " + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "\\ell" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "content": "-th SAB, and " + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "\\overline{\\mathbf{Z}}^{(\\ell -1)} = \\mathrm{LayerNorm}(\\mathbf{Z}^{(\\ell -1)})" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "content": " is the pre-normalized feature representation by the layer normalization. 
The output of the MHA module is expressed as " + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "\\mathbf{A}^{(\\ell)} = \\mathrm{MHA}(\\mathbf{Q},\\mathbf{K},\\mathbf{V})\\in" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "\\mathbb{R}^{B\\times N\\times d_{\\mathrm{hid}}}" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "content": " where Q, K, and V denote query, key, and value tokens of " + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "\\overline{\\mathbf{Z}}^{(\\ell -1)}" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "content": " [11], respectively. The result is combined via a residual connection " + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "\\overline{\\mathbf{Z}}^{\\prime (\\ell)} = \\overline{\\mathbf{Z}}^{(\\ell -1)} + \\mathbf{A}^{(\\ell)}" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "content": ". Then, the FFN is applied to obtain " + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "\\mathbf{F}^{(\\ell)} = \\mathrm{FFN}(\\overline{\\mathbf{Z}}^{\\prime (\\ell)})" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "content": ", where FFN is a point-wise MLP operating on each PA token separately. Another residual connection completes this sub-layer " + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^{(\\ell)} =" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "\\overline{\\mathbf{Z}}^{\\prime (\\ell)} + \\mathbf{F}^{(\\ell)}" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "content": ". Stacking and applying " + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "P" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "content": " blocks in sequence yields the final feature representation " + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^P = \\mathrm{SAB}^{(\\ell)}\\bigg(\\mathbf{Z}^{(\\ell -1)}\\bigg)\\in" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "\\mathbb{R}^{B\\times N\\times d_{\\mathrm{hid}}}" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "\\ell = 1,\\dots ,P" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "content": ", starting from " + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^{(0)}" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "content": ". 
The final step maps each antenna-wise feature vector in " + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "\\mathbf{Z}^{(P)}" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "content": " to the estimated channel " + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "\\widehat{\\mathbf{H}}_i" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "inline_equation", + "content": "i = 1,\\ldots ,N" + }, + { + "bbox": [ + 308, + 140, + 564, + 380 + ], + "type": "text", + "content": " indexes PAs." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 307, + 380, + 564, + 453 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 380, + 564, + 453 + ], + "spans": [ + { + "bbox": [ + 307, + 380, + 564, + 453 + ], + "type": "text", + "content": "For the proposed two DL estimators, during the test stage, the trained network can be applied to scenarios with dynamic numbers of PAs, even though the test dataset has a different data distribution and dimensionality compared to the training dataset. This property of the channel estimation network can also be termed as zero-shot learning." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 378, + 467, + 494, + 478 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 378, + 467, + 494, + 478 + ], + "spans": [ + { + "bbox": [ + 378, + 467, + 494, + 478 + ], + "type": "text", + "content": "IV. NUMERICAL RESULTS" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "spans": [ + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": "In simulation setups, we set " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "Q = 200" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "K = 4" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "n_e = 1.4" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "d = 5" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "D_x \\times D_y = 20 \\times 20 \\mathrm{~m}^2" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "S = 6" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": " and the carrier frequency " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "f_c = 28 \\mathrm{GHz}" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": ". 
In the training dataset construction, we collect " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "N_s = 10^5" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": " training samples with the fixed number of PAs " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "N^{\\mathrm{tr}} = 16" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": ", while the test number of PAs is " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "N^{\\mathrm{te}} \\in \\{8, \\dots, 32\\}" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": ". In the hyper-parameter setups of the proposed DL estimators, we set " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "N_{\\mathrm{max}} = 32" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "E = 4" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "d_{\\mathrm{hid}} = 64" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "P = 4" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "B = 256" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": ". The 1-norm " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "\\ell_1" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": " is used as the loss function in the network training, i.e., " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "\\ell_1 = ||\\widehat{\\mathbf{H}}_k - \\overline{\\mathbf{H}}_k||_1" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": ", while the normalized mean squared error (NMSE) is employed as the performance metric, i.e., " + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "inline_equation", + "content": "\\mathrm{NMSE} = \\mathbb{E}\\{||\\widehat{\\mathbf{H}}_k - \\overline{\\mathbf{H}}_k||_F^2 / ||\\overline{\\mathbf{H}}_k||_F^2\\}" + }, + { + "bbox": [ + 308, + 483, + 564, + 712 + ], + "type": "text", + "content": ". The antenna switching-based LS and LMMSE estimators are used as the conventional channel estimation benchmarks. Furthermore, to establish relevant benchmarks incorporating state-of-the-art DL estimators, we further develop two comparison DL benchmarks drawing inspiration from recent attention-based channel estimation networks, i.e., SA-RN-CE [8] and DACEN [9], which are created by replacing the MoE module in PAMoE with spatial attention (SA) and dual attention (DA) mechanisms, respectively." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 307, + 712, + 564, + 749 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 712, + 564, + 749 + ], + "spans": [ + { + "bbox": [ + 307, + 712, + 564, + 749 + ], + "type": "text", + "content": "In Fig. 
4, we present the NMSE performance of different channel estimation schemes with " + }, + { + "bbox": [ + 307, + 712, + 564, + 749 + ], + "type": "inline_equation", + "content": "N^{\\mathrm{te}} = 32" + }, + { + "bbox": [ + 307, + 712, + 564, + 749 + ], + "type": "text", + "content": ". The proposed DL estimators with reduced pilot overhead " + }, + { + "bbox": [ + 307, + 712, + 564, + 749 + ], + "type": "inline_equation", + "content": "T" + }, + { + "bbox": [ + 307, + 712, + 564, + 749 + ], + "type": "text", + "content": " outperform" + } + ] + } + ], + "index": 19 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 558, + 25, + 563, + 31 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 558, + 25, + 563, + 31 + ], + "spans": [ + { + "bbox": [ + 558, + 25, + 563, + 31 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 0 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 91, + 62, + 251, + 190 + ], + "blocks": [ + { + "bbox": [ + 91, + 62, + 251, + 190 + ], + "lines": [ + { + "bbox": [ + 91, + 62, + 251, + 190 + ], + "spans": [ + { + "bbox": [ + 91, + 62, + 251, + 190 + ], + "type": "image", + "image_path": "693e462a34cadef9c1cfda5dfe90c288c6651c3f415570ed7ba8c78a5da6867a.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 45, + 201, + 211, + 211 + ], + "lines": [ + { + "bbox": [ + 45, + 201, + 211, + 211 + ], + "spans": [ + { + "bbox": [ + 45, + 201, + 211, + 211 + ], + "type": "text", + "content": "Fig. 4. NMSE vs. SNR for different algorithms." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 91, + 232, + 252, + 360 + ], + "blocks": [ + { + "bbox": [ + 91, + 232, + 252, + 360 + ], + "lines": [ + { + "bbox": [ + 91, + 232, + 252, + 360 + ], + "spans": [ + { + "bbox": [ + 91, + 232, + 252, + 360 + ], + "type": "image", + "image_path": "ceecef8ee6c03c7cbdd8591e5cd323baeb805da3fe491adbccdc3edcb814fd50.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 45, + 371, + 253, + 382 + ], + "lines": [ + { + "bbox": [ + 45, + 371, + 253, + 382 + ], + "spans": [ + { + "bbox": [ + 45, + 371, + 253, + 382 + ], + "type": "text", + "content": "Fig. 5. NMSE vs. number of PAs " + }, + { + "bbox": [ + 45, + 371, + 253, + 382 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 45, + 371, + 253, + 382 + ], + "type": "text", + "content": " for different algorithms." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "table", + "bbox": [ + 64, + 422, + 283, + 534 + ], + "blocks": [ + { + "bbox": [ + 76, + 397, + 271, + 416 + ], + "lines": [ + { + "bbox": [ + 76, + 397, + 271, + 416 + ], + "spans": [ + { + "bbox": [ + 76, + 397, + 271, + 416 + ], + "type": "text", + "content": "TABLEI COMPLEXITY ANALYSIS OF PROPOSED DL ESTIMATORS" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 64, + 422, + 283, + 534 + ], + "lines": [ + { + "bbox": [ + 64, + 422, + 283, + 534 + ], + "spans": [ + { + "bbox": [ + 64, + 422, + 283, + 534 + ], + "type": "table", + "html": "
<table><tr><td rowspan=\"2\">Estimator</td><td rowspan=\"2\">N</td><td rowspan=\"2\">Params(K)</td><td rowspan=\"2\">FLOPs(M)</td><td colspan=\"2\">Runtime (μs)</td></tr><tr><td>CPU</td><td>GPU</td></tr><tr><td rowspan=\"3\">PAMoE</td><td>8</td><td rowspan=\"3\">211.9</td><td rowspan=\"3\">13.18</td><td rowspan=\"3\">211.8</td><td rowspan=\"3\">12.73</td></tr><tr><td>16</td></tr><tr><td>32</td></tr><tr><td rowspan=\"3\">PAformer</td><td>8</td><td rowspan=\"3\">545.9</td><td>4.377</td><td>74.62</td><td>3.814</td></tr><tr><td>16</td><td>8.756</td><td>128.2</td><td>7.943</td></tr><tr><td>32</td><td>17.51</td><td>226.8</td><td>12.95</td></tr></table>
", + "image_path": "a79046fc7708e5fdb8ab65be83e6202f3d9466f4cb785d6263c17bb30b971d73.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 45, + 557, + 301, + 750 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 45, + 557, + 301, + 750 + ], + "spans": [ + { + "bbox": [ + 45, + 557, + 301, + 750 + ], + "type": "text", + "content": "the existing linear estimators and DL models. The superior accuracy of " + }, + { + "bbox": [ + 45, + 557, + 301, + 750 + ], + "type": "inline_equation", + "content": "PAMoE" + }, + { + "bbox": [ + 45, + 557, + 301, + 750 + ], + "type": "text", + "content": " compared to " + }, + { + "bbox": [ + 45, + 557, + 301, + 750 + ], + "type": "inline_equation", + "content": "PA\\text{former}" + }, + { + "bbox": [ + 45, + 557, + 301, + 750 + ], + "type": "text", + "content": " primarily stems from " + }, + { + "bbox": [ + 45, + 557, + 301, + 750 + ], + "type": "inline_equation", + "content": "PAMoE" + }, + { + "bbox": [ + 45, + 557, + 301, + 750 + ], + "type": "text", + "content": "'s more specialized design, particularly in its handling and exploitation of the pinching antenna positional information. " + }, + { + "bbox": [ + 45, + 557, + 301, + 750 + ], + "type": "inline_equation", + "content": "PA\\text{former}" + }, + { + "bbox": [ + 45, + 557, + 301, + 750 + ], + "type": "text", + "content": ", while robust and scalable due to its Transformer foundation, operates with a more general mechanism to support variable PA counts. In Fig. 5, we provide NMSE performance of different channel estimation schemes for varying numbers of PAs, where the signal-to-noise ratio (SNR) is set to 0 dB. Compared to SA-RN-CE and DACEN, the proposed " + }, + { + "bbox": [ + 45, + 557, + 301, + 750 + ], + "type": "inline_equation", + "content": "PAMoE" + }, + { + "bbox": [ + 45, + 557, + 301, + 750 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 45, + 557, + 301, + 750 + ], + "type": "inline_equation", + "content": "PA\\text{former}" + }, + { + "bbox": [ + 45, + 557, + 301, + 750 + ], + "type": "text", + "content": " trained by the fixed PA configuration exhibit excellent robustness and generalization for dynamic PAs. The proposed DL estimators possess sufficient zero-shot learning capabilities to deal with distinct data distribution in the test stage. Table I summarizes the number of trainable parameters (Params), floating point" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "spans": [ + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "text", + "content": "operations (FLOPs) and inference runtime of the proposed " + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "inline_equation", + "content": "PAMoE" + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "inline_equation", + "content": "PA\\text{former}" + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "text", + "content": ", where the NVIDIA RTX 3090 GPU and the 12th Gen Intel(R) Core(TM) i9- 12900K CPU are used as the inference platform. The average inference runtime per channel estimation instance was calculated by averaging over 1000 Monte Carlo experiments. 
Firstly, for different numbers of PAs " + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "text", + "content": ", the Params of the proposed " + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "inline_equation", + "content": "PAMoE" + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "inline_equation", + "content": "PA\\text{former}" + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "text", + "content": " are the same due to the fixed network architecture. Secondly, the FLOPs of " + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "inline_equation", + "content": "PA\\text{former}" + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "text", + "content": " increase linearly with the number of PAs " + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "text", + "content": ", as the higher-dimensional input tensor needs to be processed in the SAB of " + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "inline_equation", + "content": "PA\\text{former}" + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "text", + "content": ". In the " + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "inline_equation", + "content": "PAMoE" + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "text", + "content": " model, the input tensor is uniformly padded to the same " + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "inline_equation", + "content": "N_{\\max}" + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "text", + "content": " via padding interpolation, and hence the FLOPs of the " + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "inline_equation", + "content": "PAMoE" + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "text", + "content": " remain constant for different " + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "text", + "content": ". Finally, we observe that both " + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "inline_equation", + "content": "PAMoE" + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "inline_equation", + "content": "PA\\text{former}" + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "text", + "content": " can achieve the channel estimation at the microseconds (" + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "inline_equation", + "content": "\\mu s" + }, + { + "bbox": [ + 307, + 55, + 564, + 258 + ], + "type": "text", + "content": ") level, demonstrating the practical feasibility of the proposed models for real-time deployment." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 396, + 267, + 477, + 277 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 396, + 267, + 477, + 277 + ], + "spans": [ + { + "bbox": [ + 396, + 267, + 477, + 277 + ], + "type": "text", + "content": "V. 
CONCLUSIONS" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 307, + 281, + 564, + 425 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 281, + 564, + 425 + ], + "spans": [ + { + "bbox": [ + 307, + 281, + 564, + 425 + ], + "type": "text", + "content": "In this letter, we investigated the channel estimation approaches in PASS, and proposed two DL estimators with the advanced neural network architecture to infer a high-dimensional channel vector from inherently low-dimensional pilot observations, respectively. Specifically, the proposed PAMoE accommodates variable PA configurations and exploits multi-expert diversity in the MLP-Mixer for improved channel estimation. Accordingly, the proposed PAFomer is capable of handling arbitrary number of PAs thanks to the self-attention mechanism. Numerical results demonstrated that the proposed DL estimators outperform conventional methods, and significantly reduce the pilot overhead." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 406, + 434, + 466, + 445 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 406, + 434, + 466, + 445 + ], + "spans": [ + { + "bbox": [ + 406, + 434, + 466, + 445 + ], + "type": "text", + "content": "REFERENCES" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 310, + 449, + 564, + 746 + ], + "type": "list", + "angle": 0, + "index": 23, + "blocks": [ + { + "bbox": [ + 315, + 449, + 564, + 477 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 449, + 564, + 477 + ], + "spans": [ + { + "bbox": [ + 315, + 449, + 564, + 477 + ], + "type": "text", + "content": "[1] X. Shao, Q. Jiang, and R. Zhang, \"6D movable antenna based on user distribution: Modeling and optimization,\" IEEE Trans. Wireless Commun., vol. 24, no. 1, pp. 355-370, Jan. 2025." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 315, + 477, + 564, + 504 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 477, + 564, + 504 + ], + "spans": [ + { + "bbox": [ + 315, + 477, + 564, + 504 + ], + "type": "text", + "content": "[2] A. Fukuda, H. Yamamoto, H. Okazaki, Y. Suzuki, and K. Kawai, \"Pinching antenna: Using a dielectric waveguide as an antenna,\" NTT DOCOMO Technical J., vol. 23, no. 3, pp. 5-12, Jan. 2022." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 314, + 504, + 563, + 521 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 504, + 563, + 521 + ], + "spans": [ + { + "bbox": [ + 314, + 504, + 563, + 521 + ], + "type": "text", + "content": "[3] Z. Ding, R. Schober, and H. Vincent Poor, “Flexible-antenna systems: A pinching-antenna perspective,” IEEE Trans. Commun., 2025." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 314, + 521, + 563, + 548 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 521, + 563, + 548 + ], + "spans": [ + { + "bbox": [ + 314, + 521, + 563, + 548 + ], + "type": "text", + "content": "[4] D. Tyrovolas, S. A. Tegos, P. D. Diamantoulakis, S. Ioannidis, C. K. Liaskos, and G. K. Karagiannidis, “Performance analysis of pinching-antenna systems,” IEEE Trans. Cognit. Commun. Networking, 2025." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 314, + 548, + 563, + 575 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 548, + 563, + 575 + ], + "spans": [ + { + "bbox": [ + 314, + 548, + 563, + 575 + ], + "type": "text", + "content": "[5] Z. Wang, C. Ouyang, X. Mu, Y. Liu, and Z. 
Ding, \"Modeling and beamforming optimization for pinching-antenna systems,\" arXiv preprint arXiv:2502.05917, 2025." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 314, + 575, + 563, + 602 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 575, + 563, + 602 + ], + "spans": [ + { + "bbox": [ + 314, + 575, + 563, + 602 + ], + "type": "text", + "content": "[6] S. A. Tegos, P. D. Diamantoulakis, Z. Ding, and G. K. Karagiannidis, \"Minimum data rate maximization for uplink pinching-antenna systems,\" IEEE Wireless Commun. Lett., 2025." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 314, + 602, + 563, + 638 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 602, + 563, + 638 + ], + "spans": [ + { + "bbox": [ + 314, + 602, + 563, + 638 + ], + "type": "text", + "content": "[7] F. Rezaei, S. M. Marvasti-Zadeh, C. Tellambura, and A. Maaref, \"Adversarial score-based generative models for MMSE-achieving AmBC channel estimation,\" IEEE Wireless Commun. Lett., vol. 13, no. 4, pp. 1053-1057, Apr. 2024." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 314, + 638, + 563, + 666 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 638, + 563, + 666 + ], + "spans": [ + { + "bbox": [ + 314, + 638, + 563, + 666 + ], + "type": "text", + "content": "[8] X. Fan, Y. Zou, and L. Zhai, \"Spatial-attention-based channel estimation in IRS-assisted mmWave MU-MISO systems,\" IEEE Internet Things J., vol. 11, no. 6, pp. 9801-9813, Mar. 2024." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 314, + 666, + 563, + 701 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 666, + 563, + 701 + ], + "spans": [ + { + "bbox": [ + 314, + 666, + 563, + 701 + ], + "type": "text", + "content": "[9] B. Zhou, X. Yang, S. Ma, F. Gao, and G. Yang, \"Pay less but get more: A dual-attention-based channel estimation network for massive MIMO systems with low-density pilots,\" IEEE Trans. Wireless Commun., vol. 23, no. 6, pp. 6061-6076, Jun. 2024." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 310, + 701, + 563, + 719 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 310, + 701, + 563, + 719 + ], + "spans": [ + { + "bbox": [ + 310, + 701, + 563, + 719 + ], + "type": "text", + "content": "[10] W. Cai, J. Jiang, F. Wang, J. Tang, S. Kim, and J. Huang, “A survey on mixture of experts,” arXiv preprint arXiv:2407.06204, 2024." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 310, + 719, + 563, + 746 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 310, + 719, + 563, + 746 + ], + "spans": [ + { + "bbox": [ + 310, + 719, + 563, + 746 + ], + "type": "text", + "content": "[11] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in Proc. NeurIPS, vol. 30, Nov. 2017." 
+ } + ] + } + ], + "index": 22 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "spans": [ + { + "bbox": [ + 558, + 24, + 563, + 32 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 0 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13299/1182f1ca-70ec-4ed4-9c5b-10aa50e1a3e7_content_list.json b/data/2025/2503_13xxx/2503.13299/1182f1ca-70ec-4ed4-9c5b-10aa50e1a3e7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..e0ad82214a1481daa8953bda7c23f2f295e48e44 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13299/1182f1ca-70ec-4ed4-9c5b-10aa50e1a3e7_content_list.json @@ -0,0 +1,4568 @@ +[ + { + "type": "text", + "text": "A Survey on Transformer Context Extension: Approaches and Evaluation", + "text_level": 1, + "bbox": [ + 263, + 79, + 734, + 118 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Yijun Liu $^{1}$ , Jinzheng Yu $^{2}$ , Yang Xu $^{1}$ , Zhongyang Li $^{3}$ , Qingfu Zhu $^{1}$", + "bbox": [ + 216, + 124, + 783, + 142 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}$ Research Center for Social Computing and Interactive Robotics", + "bbox": [ + 231, + 143, + 761, + 158 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Harbin Institute of Technology", + "bbox": [ + 366, + 159, + 630, + 175 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "2 State Key Laboratory of Media Convergence and Communication", + "bbox": [ + 226, + 175, + 769, + 191 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{2}$ Communication University of China", + "bbox": [ + 344, + 192, + 652, + 208 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "3 Huawei Technologies, Co., Ltd.", + "bbox": [ + 361, + 209, + 633, + 225 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{yijunliu, qfzhu}@ir.hit.edu.cn", + "bbox": [ + 342, + 227, + 655, + 242 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 260, + 261, + 339, + 275 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Large language models (LLMs) based on Transformer have been widely applied in the filed of natural language processing (NLP), demonstrating strong performance, particularly in handling short text tasks. However, when it comes to long context scenarios, the performance of LLMs degrades due to some challenges. To alleviate this phenomenon, there is a number of work proposed recently. In this survey, we first list the challenges of applying pre-trained LLMs to process long contexts. Then systematically review the approaches related to long context and propose our taxonomy categorizing them into four main types: positional encoding, context compression, retrieval augmented, and attention pattern. In addition to the approaches, we focus on the evaluation of long context, organizing relevant data, tasks, and metrics based on existing long context benchmarks. 
Finally, we summarize unresolved issues in the long context domain and put forward our views on future developments.", + "bbox": [ + 141, + 284, + 460, + 598 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 114, + 607, + 258, + 621 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In recent years, the Transformer (Vaswani, 2017) architecture has made significant progress in many NLP tasks (Devlin, 2018; Radford, 2018; Lewis, 2019; Raffel et al., 2020; Brown, 2020; Chen et al., 2021a; Cobbe et al., 2021), and has become the foundational model of many applications. Large language models can handle tasks involving short texts, within the pre-trained context length. However, current scenarios always involve longer texts, such as book-/repo- level tasks (Sharma et al., 2019; Liu et al., 2023a; Zhang et al., 2023a; Liu et al., 2023b), dialogue systems with long contexts (Dey et al., 2022; Li et al., 2024a), content-rich in-context learning (Li et al., 2024c) and so on. The performance of the pre-trained LLMs degrades and the models often fail to utilize the complete knowledge contained within the long context inputs. This may be caused by three inher", + "bbox": [ + 112, + 631, + 489, + 921 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "ent challenges: out-of-distribution (OOD) problem (Han et al., 2024), \"Lost in the Middle\" phenomenon (Liu et al., 2024a), and quadratic complexity of attention (Zhou et al., 2024). Recently, a lot of work has been proposed to improve and evaluate models' ability to handle long contexts in the community.", + "bbox": [ + 507, + 261, + 884, + 373 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "This survey focuses on approaches and evaluation in the long context field, systematically reviewing existing related work. As illustrated in Figure 1, we propose a novel taxonomy for approaches, categorizing them into four main groups: positional encoding, context compression, retrieval augmented, and attention pattern. Additionally, we focus on the evaluation aspect and organize work on data, tasks, and metrics based on existing benchmarks. In addition to the two main parts of approaches and evaluation, we present our viewpoints on the current unsolved issues and potential future directions in the long context domain. To illustrate the current status more theoretically, we also list the main challenges in the field of long context before introducing specific work. Although most existing methods and benchmarks have not corresponded to them, these challenges are still instructive for the development of approaches and evaluation.", + "bbox": [ + 507, + 374, + 884, + 678 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "There are also some surveys that focus on the long context domain. They each have their own emphasis, but there is no systematic and comprehensive survey of approaches and evaluation in the field of long context, which can provide researchers with a quick and efficient guide. Some surveys only include a part of the methods, lacking a comprehensive overview of the approaches related to long context. Zhao et al. (2023) focus on work addressing length extrapolation from the perspective of positional encoding, while there are some surveys from the perspective of KV Cache (Li et al., 2025; Shi et al., 2024). 
Besides, though some surveys have categorized existing work, their taxonomies are not clear, and there are overlaps between categories.", + "bbox": [ + 507, + 680, + 884, + 921 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2503.13299v2 [cs.CL] 8 Jul 2025", + "bbox": [ + 21, + 300, + 60, + 694 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "1", + "bbox": [ + 492, + 942, + 502, + 954 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/8155c34b78c72e3a6c72ffa54d70e4bbd68caca6845f74c57719234c7004dcba.jpg", + "image_caption": [ + "Figure 1: Framework of survey. We first list three inherent challenges in Section 2. And then we systematically review related approaches and propose a novel taxonomy with four major categories in Section 3. Next, in Section 4, we organize the evaluation aspect from three perspectives: data, tasks, and metrics based on existing benchmarks. At last, we show our views on future roadmap and open problems in Section 5." + ], + "image_footnote": [], + "bbox": [ + 114, + 80, + 884, + 428 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "For example, Huang et al. (2023) divide the methods for enhancing Transformer architecture models into five categories, but some existing methods can belong to multiple categories. And (Pawar et al., 2024) also has this problem, which distinguishes existing techniques into two categories: interpolation and extrapolation. Also, some surveys even involve some common methods that not specifically designed for long contexts. Dong et al. (2023b) provide an overview of the text-preprocessing methods, architectures, special characteristics and application for long context, but they cover some general topics. What's more, these surveys pay little or even no attention to the evaluation aspect.", + "bbox": [ + 112, + 517, + 487, + 741 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To fill the above gap, our survey proposes a novel and comprehensive taxonomy on both approaches and evaluation aspects. It is worth noting that we focus on work that applies Transformer-based models to long text tasks, but not improving Transformers (nor other architectures) in a universal scenario. That is to say, this survey does not cover fields like long chain-of-thought reasoning (Chen et al., 2025), multimodal long context (Song et al., 2024; Qiu et al., 2024), efficient Transformer (Zhou et al., 2024), and State Space Model (SSM) (Wang et al.,", + "bbox": [ + 112, + 744, + 489, + 921 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2024c). In addition, the long context we focus on is the long input content, rather than the introduction of external knowledge in the Retrieval-Augmented Generation (RAG) scenario (Yu et al., 2024; Zhao et al., 2024; Fan et al., 2024).", + "bbox": [ + 507, + 517, + 882, + 596 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Challenges", + "text_level": 1, + "bbox": [ + 509, + 607, + 638, + 624 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "When applying pre-trained LLMs to the long context scenarios, there are some inherent challenges affecting models' performance. We list the three most important and common challenges: OOD problem, \"Lost in the Middle\" phenomenon, and quadratic complexity.", + "bbox": [ + 507, + 632, + 882, + 728 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "OD Problem When processing sequences that exceed pre-trained context window length, the models face out-of-distribution (OOD) problems. 
Han et al. (2024) verify theoretically and empirically that three key factors contribute to OOD issues, thereby limiting models' extrapolation capabilities: 1) unseen inter-token distances, 2) increased number of attended tokens, and 3) implicitly encoded position information of the starting tokens.", + "bbox": [ + 507, + 736, + 884, + 881 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "\"Lost in the Middle\" Phenomenon Liu et al. (2024a) discover the \"Lost in the middle\" phe", + "bbox": [ + 507, + 889, + 882, + 921 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 492, + 942, + 504, + 954 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "nomenon through experiments that when LLMs receive a long input, they tend to focus on the information at both the beginning and end of the input sequence. At the same time, they neglect the content in the middle, thus failing to capitalize on the key information within the long input.", + "bbox": [ + 112, + 84, + 489, + 181 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Quadratic Complexity Due to the quadratic complexity of attention, directly using pre-trained LLMs for training or inference on long context is time and resource consuming (Zhou et al., 2024).", + "bbox": [ + 112, + 193, + 487, + 258 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The above are three inherent challenges in the field of long context, and some existing methods have alleviated them to a certain extent. But it is worth noting that most of the methods do not start from this perspective. They consider directly improving the performance of downstream tasks. However, we believe that these three challenges are still the fundamental problems that need to be solved. They play a vital role in the proposal of methods and construction of benchmarks. Moreover, they are the focus of subsequent research.", + "bbox": [ + 112, + 260, + 489, + 437 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3 Approaches", + "text_level": 1, + "bbox": [ + 112, + 451, + 253, + 469 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "For the existing approaches for long context, we summarize their characteristics and propose a novel taxonomy different from previous work. As illustrated in Figure 1, mainstream methods are divided into four major categories: positional encoding, context compression, retrieval augmented, and attention pattern, which are introduced below. For more details, please see the Appendix A.", + "bbox": [ + 112, + 481, + 489, + 609 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1 Positional Encoding", + "text_level": 1, + "bbox": [ + 112, + 625, + 317, + 640 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Kazemnejad et al. (2024) mention that positional encoding (PE) appears to be a major factor in the length generalization of Transformer. During the inference process, when encountering sequences that exceed the length of the pre-trained window, the model needs to handle the position index that was not encountered during pre-training. This may lead to Out-Of-Distribution (OOD) issues. Thus, we would like to find an appropriate positional encoding method that allows the model to effectively encode position in sequences that exceed the pre-trained window length. Based on the implementation methods, solutions can be categorized into two main types: Variants of Rotary Position Embedding (RoPE, Su et al., 2024) and Attention bias method. 
The following sections will detail these two methods.", + "bbox": [ + 112, + 646, + 489, + 920 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Though the designed positional encoding strategies can alleviate the extrapolation problem, experiments have found that models without positional encoding (NoPE) show better performance than these methods in reasoning tasks (Kazemnejad et al., 2024). That's because when causal masks are used for decoding, the model reads the sequence sequentially from left to right. And this process naturally incorporates the sequential information of the token. This finding suggests that when designing a position encoding strategy, we may need to consider the way the model processes sequences and the requirements of the task.", + "bbox": [ + 507, + 84, + 884, + 294 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1.1 Variants of RoPE", + "text_level": 1, + "bbox": [ + 507, + 300, + 707, + 315 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Rotary Position Embedding (RoPE, Su et al., 2024) is a positional encoding method utilized in a series of models such as Wang and Komatsuzaki (2021); Touvron et al. (2023); Roziere et al. (2023). RoPE incorporates explicit relative position dependency in self-attention, which can be expressed as", + "bbox": [ + 507, + 319, + 882, + 416 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\operatorname {s i m} \\left(q _ {m}, k _ {n}\\right) = q _ {m} ^ {\\top} R _ {\\Theta , n - m} ^ {d} k _ {n} \\tag {1} \\\\ = (R _ {\\Theta , m} ^ {d} q _ {m}) ^ {\\top} (R _ {\\Theta , n} ^ {d} k _ {n}), \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 537, + 420, + 880, + 463 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $R_{\\Theta,m}^{d}$ is called the rotation matrix. The original RoPE's extrapolation capability is not very robust and can only maintain performance slightly beyond the pre-trained context length. Consequently, existing work enhances RoPE for better extrapolation. The core of RoPE is the rotation matrix $R_{\\Theta,m}^{d}$ , which is parameterized by the position index $m$ and the function family $\\Theta$ . We can optimize RoPE by adjusting these parameters or even the structure of RoPE itself. Existing related work can be divided into three subcategories: position index adjustment, base frequency adjustment, and structural modification.", + "bbox": [ + 507, + 470, + 884, + 678 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Position Index Adjustment This method involves modifying the allocation or calculation of position index $m$ to maintain the relative distances between tokens within the model's pre-trained index range. This can be implemented in three ways. We can adjust the allocation of the position index $m$ (An et al., 2024). Besides, proportionally scale $m$ for long sequences to adapt to the pre-trained window (Chen et al., 2023b). 
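As an illustrative aside (an editor's sketch, not text or code from the surveyed papers), the index-scaling idea behind positional interpolation can be written in a few lines; the function names, the NumPy setup, and the 4096/8192 window sizes below are assumptions chosen only for exposition.

```python
# Minimal sketch, assuming a RoPE-style model with pre-trained window train_len:
# positions of a longer sequence are rescaled into the trained index range
# before the rotary angles are computed (positional-interpolation style).
import numpy as np

def rope_angles(position_ids, head_dim, base=10000.0):
    # theta_i = base^(-2i/d); position m contributes angles m * theta_i
    inv_freq = base ** (-np.arange(0, head_dim, 2) / head_dim)
    return np.outer(position_ids, inv_freq)          # (seq_len, head_dim // 2)

def interpolated_positions(seq_len, train_len):
    # Proportionally scale indices so unseen positions map back into [0, train_len).
    scale = min(1.0, train_len / seq_len)
    return np.arange(seq_len, dtype=np.float64) * scale

# Example: an 8192-token input evaluated with a model pre-trained on 4096 tokens.
angles = rope_angles(interpolated_positions(8192, 4096), head_dim=128)
print(angles.shape)  # (8192, 64)
```

In a real model these angles parameterize the rotation applied to query and key vectors in every head; the sketch only shows how out-of-range indices are mapped back into the trained range.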
What's more, we can combine the above two methods, reallocating position index to some tokens in the sequence, while scaling the position index for others (Su, 2023).", + "bbox": [ + 507, + 688, + 882, + 881 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Base Frequency Adjustment From the formula of rotation matrix (see details in Appendix A.1.1),", + "bbox": [ + 507, + 889, + 882, + 921 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 492, + 942, + 502, + 953 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "we can see that each non-zero term is a trigonometric function value with $\\theta_{i}$ as independent variable. And the value of $\\theta_{i}$ affects the effect of rotation matrix to a certain extent. Base frequency adjustment is to enhance the model extrapolation performance by modifying $\\theta_{i}$ in the trigonometric function terms in the rotation matrix. NTK (Neural Tangent Kernel) theory shows that when the input dimension is low and its embedding representation lacks high-frequency components, it is difficult for the neural network to learn high-frequency information (Tancik et al., 2020). Therefore, researchers choose to adjust $\\theta_{i}$ with the idea of \"extrapolation on high-frequency and interpolation on low-frequency\". One strategy is to change the base $b$ of the exponential terms $\\theta_{i}$ in the function cluster $\\Theta$ , and change it from the default value $b = 10000$ to other values which can improve the model extrapolation performance (Peng and Quesnelle, 2023; Roziere et al., 2023). Another strategy is to directly scale $\\theta_{i}$ (bloc97, 2023; Peng et al., 2023).", + "bbox": [ + 112, + 85, + 492, + 423 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Structural Modification The methods described above focus on modifying variables in RoPE formula without altering its basic structure. Some existing work explores adjusting the structure of RoPE itself to better extrapolate, which optimizes the original RoPE formula (Sun et al., 2022).", + "bbox": [ + 112, + 430, + 489, + 527 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1.2 Attention Bias", + "text_level": 1, + "bbox": [ + 112, + 535, + 292, + 549 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "This type of method introduces relative position information by adding a bias related to the relative distance between tokens when calculating the similarity between query and key vectors. The process can be expressed as follows:", + "bbox": [ + 112, + 554, + 489, + 634 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathrm {s i m} \\left(q _ {m}, k _ {n}\\right) = q _ {m} ^ {\\top} k _ {n} + f _ {b i a s} (m, n), (2)\n$$\n", + "text_format": "latex", + "bbox": [ + 157, + 643, + 489, + 662 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $f_{bias}(m,n)$ is a bias function that depends on the token position index corresponding to query and key. $f_{bias}(m,n)$ be divided into two categories: learnable (Raffel et al., 2020; Chi et al., 2022a), predefined (Press et al., 2021; Chi et al., 2022b).", + "bbox": [ + 112, + 670, + 492, + 751 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2 Context Compression", + "text_level": 1, + "bbox": [ + 112, + 760, + 332, + 776 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Existing work proposes compressing the long input sequence into a shorter one for representation. 
These methods can be categorized into two main types by the compression granularity: soft compression and hard compression.", + "bbox": [ + 112, + 782, + 490, + 862 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2.1 Soft Compression", + "text_level": 1, + "bbox": [ + 112, + 870, + 315, + 885 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In order to shorten the sequence length, the soft compression method uses the model to compress", + "bbox": [ + 112, + 889, + 489, + 921 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "the original input token sequence into a shorter summary token sequence. These summary tokens are soft tokens which act as compression representation but do not correspond to words with actual meaning. They are inserted into the original token sequence to form a new input. During the forward pass of the model, the information from the original token sequence is gathered into the summary token sequence, which represents the original input for subsequent operations. Since summary tokens do not appear during the model's pre-training, additional training is necessary for the model to learn how to generate and utilize these tokens (Bulatov et al., 2022; Li et al., 2023b; Chevalier et al., 2023; Ge et al., 2023; Mu et al., 2024b). This method can shorten the length of the hidden vector sequence, so that enabling it to be processed within the model's pre-trained window.", + "bbox": [ + 507, + 84, + 885, + 375 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2.2 Hard Compression", + "text_level": 1, + "bbox": [ + 507, + 385, + 721, + 400 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "This method utilizes some techniques to directly shorten plain text sequence length. This process can be achieved through selection and summarization. It doesn't introduce additional tokens and targeted training, which makes it can be applied to some black box models (Jiang et al., 2023, 2024b; Chen et al., 2023a).", + "bbox": [ + 507, + 405, + 885, + 517 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3 Retrieval Augmented", + "text_level": 1, + "bbox": [ + 507, + 530, + 722, + 546 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Some existing work propose retrieval-augmented methods to enhance model performance on long context tasks by selectively incorporating crucial tokens from history context into attention. With reference to related work, we summarize a processing paradigm for this type of method. Initially, the (key, value) pairs from history are stored in the KV cache. Then the model retrieves the corresponding token representations from the KV cache at different retrieval granularity levels. This process is based on the similarity between current token and history tokens from KV cache. The top-k relevant tokens are selected as the retrieved context, which is then concatenated with the context within the current window to form a new input. Finally, the model applies appropriate positional encoding to this concatenated context for attention calculation. 
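As an illustrative aside (an editor's sketch, not code from any cited method), the retrieval paradigm just described can be outlined as follows; the function names, the flat NumPy arrays standing in for the KV cache, and the toy sizes are assumptions for exposition only.

```python
# Minimal sketch of token-level retrieval from a cached history: score history
# keys by inner product with the current query, keep the top-k (key, value)
# pairs, and concatenate them with the local window before attention.
import numpy as np

def retrieve_topk(query, cached_keys, cached_values, k):
    # query: (d,); cached_keys, cached_values: (n_history, d)
    scores = cached_keys @ query                     # inner-product similarity
    top = np.argsort(scores)[-k:][::-1]              # indices of the k best matches
    return cached_keys[top], cached_values[top]

def build_attention_context(query, local_keys, local_values, cache, k=4):
    r_keys, r_vals = retrieve_topk(query, cache["keys"], cache["values"], k)
    # Retrieved tokens are placed before the local window; how their positions
    # are encoded differs between the methods summarized below.
    keys = np.concatenate([r_keys, local_keys], axis=0)
    values = np.concatenate([r_vals, local_values], axis=0)
    return keys, values

# Toy usage with random vectors standing in for hidden states.
rng = np.random.default_rng(0)
cache = {"keys": rng.normal(size=(1024, 64)), "values": rng.normal(size=(1024, 64))}
q = rng.normal(size=64)
k_ctx, v_ctx = build_attention_context(q, rng.normal(size=(16, 64)), rng.normal(size=(16, 64)), cache)
print(k_ctx.shape, v_ctx.shape)  # (20, 64) (20, 64)
```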
Below, we summarize the different methods according to each step of the above process:", + "bbox": [ + 507, + 552, + 885, + 859 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3.1 Retrieval Granularity", + "text_level": 1, + "bbox": [ + 507, + 869, + 741, + 884 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In the process of long context retrieval, we focus on the most relevant subset of tokens from KV", + "bbox": [ + 507, + 889, + 882, + 921 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 492, + 942, + 505, + 954 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "cache related to the current processing step. Different methods use different retrieval granularity, with the basic being token-level retrieval. Specifically, it involves calculating the similarity of each history token in the KV cache with the current token, and selecting the top-k history tokens' key and value vectors as the retrieval result. Methods applying this strategy include MemTRM (Wu et al., 2022), FoT (Tworkowski et al., 2024), Unlimformer (Bertsch et al., 2024a), etc. Besides, some work focuses on block-level retrieval, which retrieve top-k set of tokens in one step (Wang et al., 2024b; Rubin and Berant, 2023; Xiao et al., 2024; Mohtashami and Jaggi, 2024).", + "bbox": [ + 112, + 84, + 492, + 311 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.3.2 Similarity Computation", + "text_level": 1, + "bbox": [ + 112, + 321, + 361, + 336 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Almost all existing works compute the inner product of query and key as similarity. This strategy draws from the standard attention mechanism, which calculates the dot product between the query and key to allocate corresponding weights to the value (Vaswani et al., 2023). It is simple to implement and can effectively capture and utilize the similarity information between queries and keys.", + "bbox": [ + 112, + 342, + 489, + 470 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.3.3 Positional Encoding", + "text_level": 1, + "bbox": [ + 112, + 482, + 332, + 498 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "After computing the similarity, we select the topk relative tokens as the results, and call them retrieved context tokens. Correspondingly, tokens within the current window are called as local context tokens. These two types of context tokens are concatenated to form a new set of context tokens. Before these new context tokens are fed into the model for attention computation, it is necessary to consider suitable positional encoding to distinguish the information of tokens at different positions. Some work choose to assign the same position vector to the retrieved context tokens (Wu et al., 2022; Tworkowski et al., 2024; Xiao et al., 2024), while Mohtashami and Jaggi (2023) choose reallocation strategies.", + "bbox": [ + 112, + 502, + 489, + 744 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.3.4 Attention Calculation", + "text_level": 1, + "bbox": [ + 112, + 756, + 344, + 770 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Next, when performing attention calculation, we need to consider how to make full use of retrieved context tokens and local context tokens. Different approaches use different strategies. Simply, Tworkowski et al. (2024); Xiao et al. (2024)choose standard attention, while Bertsch et al. (2024a) chooses cross attention. Besides, Wu et al. (2022); Wang et al. 
(2024b) adopt a Joint Attention method. Landmark employs the Grouped Soft", + "bbox": [ + 112, + 776, + 489, + 921 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "max method, a fine-grained approach for calculation (Mohtashami and Jaggi, 2023).", + "bbox": [ + 507, + 84, + 884, + 116 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4 Attention Pattern", + "text_level": 1, + "bbox": [ + 509, + 128, + 692, + 142 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "There is a class of methods modifying the attention pattern, i.e. the range of tokens attended to. They can better adapt models to expand processing sequence length. Some of them do not require additional training and can be employed as plug-and-play solutions in existing models. These methods can be divided into three main categories: sliding window, parallel context, and sparse attention.", + "bbox": [ + 507, + 149, + 884, + 279 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4.1 Sliding Window", + "text_level": 1, + "bbox": [ + 507, + 288, + 695, + 304 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "This type of method divides the sequence into segments and performs attention calculation segment by segment without significantly increasing computational complexity. The attention results from earlier segments are stored, which later segments can use during their attention calculation (Dai et al., 2019; Han et al., 2024; Xiao et al., 2023).", + "bbox": [ + 507, + 307, + 884, + 420 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4.2 Parallel Context", + "text_level": 1, + "bbox": [ + 507, + 431, + 697, + 445 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The Parallel Context method folds the context part of the input (e.g., in-context examples) into multiple segments. These segments first calculate attention independently, and share the same set of position indexes. And then prompt tokens in the input attend to all the context tokens, so that fully utilize contextual information (Ratner et al., 2022; Hao et al., 2022). These methods require no training and can be plug-and-played into existing models.", + "bbox": [ + 507, + 450, + 882, + 595 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4.3 Sparse Attention", + "text_level": 1, + "bbox": [ + 507, + 605, + 702, + 620 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Some work reduce the number of tokens involved in the attention computation, decreasing computational load. They abandon the original attention method which attends to local continuous tokens, while expand the attentive field, and attend to discrete tokens from further context (Ding et al., 2023; Yu et al., 2023; Chen et al., 2023c).", + "bbox": [ + 507, + 624, + 882, + 737 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4 Evaluation", + "text_level": 1, + "bbox": [ + 507, + 750, + 638, + 765 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In the long context scenario, evaluating the model's ability to understand and utilize long context is also a new and critical issue. But as described before, current surveys pay little or even no attention to the evaluation aspect. To fill this gap, we summarize the data, tasks, and metrics of long context evaluation in our survey based on existing benchmarks. 
The following is a brief introduction, detailed information is in the Appendix B.", + "bbox": [ + 507, + 776, + 884, + 921 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 492, + 942, + 504, + 954 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/18f4aff50e840a643fc047e2c9ef45c7e9c13a77df28a420d6a588c3ae1b3246.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table>
<tr><td>Benchmark</td><td>Length Level</td><td>#Examples</td><td>Domain</td></tr>
<tr><td>SCROLLS (Shaham et al., 2022)</td><td>1k~4k</td><td>119,495</td><td>Literature, Dialog</td></tr>
<tr><td>ZeroSCROLLS (Shaham et al., 2023)</td><td>0k~16k</td><td>4,378</td><td>Wiki, Literature, Dialog</td></tr>
<tr><td>LongBench (Bai et al., 2023)</td><td>0k~4k, 4k~8k, >8k</td><td>4,750</td><td>Wiki, Literature, Dialog, Report, Code, News</td></tr>
<tr><td>LooGLE (Li et al., 2023a)</td><td>0k~24k</td><td>776</td><td>Wiki, Paper</td></tr>
<tr><td>BAMBOO (Dong et al., 2023a)</td><td>0k~4k, 4k~16k</td><td>1,502</td><td>Wiki, Dialog, Report, Code, Paper</td></tr>
<tr><td>LongICLBench (Li et al., 2024c)</td><td>2k~50k</td><td>3,000</td><td>Dialog, News, Common Sense</td></tr>
<tr><td>L-Eval (An et al., 2023)</td><td>3k~200k</td><td>411</td><td>Literature, Dialog, News, Paper, Common Sense</td></tr>
<tr><td>Ada-LEval (Wang et al., 2024a)</td><td>1k~128k</td><td>117,500</td><td>Literature, Code</td></tr>
<tr><td>∞Bench (Zhang et al., 2024)</td><td>0k~200k</td><td>3,946</td><td>Literature, Dialog, Code</td></tr>
<tr><td>NeedleBench (Li et al., 2024b)</td><td>1k~4k/8k/32k/128k/200k/1m+</td><td>-</td><td>Wiki, Literature, Dialog, Report, Code, News</td></tr>
<tr><td>LV-Eval (Yuan et al., 2024)</td><td>0k~16k/32k/64k/128k/256k</td><td>1,729</td><td>Wiki, Literature, Dialog, Report, Code, News, Paper</td></tr>
</table>
", + "bbox": [ + 117, + 80, + 884, + 233 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 1: Statistics on data characteristics of the datasets in existing long context benchmarks. Length level represents the range of token lengths in the dataset used in the benchmark. #Examples refers to the total number of examples. Domain denotes the data sources. The corresponding contents in table are directly extracted or calculated from the original papers. Given that current models mainly within context windows exceeding 100k tokens, we categorize benchmarks based on this threshold. Benchmarks with contexts exceeding 100K tokens are listed in the lower part.", + "bbox": [ + 112, + 241, + 884, + 315 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.1 Data", + "text_level": 1, + "bbox": [ + 112, + 338, + 200, + 351 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In order to explore what data should be used to test model's ability to process long context, we conduct a statistical analysis of datasets in existing benchmarks and summarize their data characteristics.", + "bbox": [ + 112, + 359, + 487, + 423 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The evaluation of a model's long context capability requires not only the long data but also the data diversity and quality. As shown in Table 1, we focus on three characteristics of the datasets in existing long context benchmarks: length level, total number of examples, and the domain it covers.", + "bbox": [ + 112, + 424, + 487, + 520 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Besides, we also discuss about knowledge leakage issue, which need to be addressed when constructing the dataset, in the Appendix B.1.2", + "bbox": [ + 112, + 521, + 489, + 570 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2 Tasks", + "text_level": 1, + "bbox": [ + 112, + 580, + 206, + 594 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Currently, existing benchmarks propose numerous tasks to evaluate the model's ability to process long context. But there is no systematic taxonomy for these tasks. Therefore, we divide all tasks in existing benchmarks into seven categories from the perspective of task setting: Question Answering, Needle-in-a-Haystack, Statistical Tasks, Code, In-Context Learning, Text Generation and Other Tasks. Below is the introduction of each type of task, and the details are in the Appendix B.2.", + "bbox": [ + 112, + 602, + 489, + 763 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2.1 Question Answering", + "text_level": 1, + "bbox": [ + 112, + 772, + 334, + 788 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Single-hop Question Answering requires models to locate and extract answers from a single text passage, typically involving a single fact (Rajpurkar, 2016; Joshi et al., 2017; Kočisky et al., 2018).", + "bbox": [ + 112, + 791, + 487, + 854 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Multi-hop Question Answering requires models to integrate information from multiple sources to answer complex questions. 
This often involves reasoning across different pieces of evidence (Ho", + "bbox": [ + 112, + 857, + 487, + 921 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "et al., 2020; Trivedi et al., 2022; Yang et al., 2018; Chen et al., 2024b; Zhuang et al., 2023).", + "bbox": [ + 507, + 338, + 882, + 370 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2.2 Needle-in-a-Haystack", + "text_level": 1, + "bbox": [ + 507, + 382, + 741, + 397 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Needle-in-a-Haystack evaluate LLMs' ability to extract specific content from long contexts. These tasks can evaluate the model's retrieval capability, also measure the range of context lengths model can handle (Zhu et al., 2024; Mohtashami and Jaggi, 2023; Zhang et al., 2024; Li et al., 2024b).", + "bbox": [ + 507, + 401, + 882, + 499 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2.3 Statistical Tasks", + "text_level": 1, + "bbox": [ + 507, + 510, + 695, + 524 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Long Arithmetic Calculation requires models to perform addition and subtraction operations on lengthy arithmetic expressions (Zhang et al., 2024, 2023b; Cobbe et al., 2021; Xu et al., 2024; Chen et al., 2024a).", + "bbox": [ + 507, + 531, + 882, + 611 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Numerical Information Extraction requires models to identify specific mathematical elements (Zhang et al., 2024; Li et al., 2023a).", + "bbox": [ + 507, + 613, + 882, + 659 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Sentiment Aggregation requires models to output the percentage of positive reviews when provided with a collection of reviews (Angelidis et al., 2021; Shaham et al., 2023).", + "bbox": [ + 507, + 661, + 882, + 725 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Paragraph Counting requires models to count the number of unique paragraphs in a set of randomly repeated and shuffled passages (Bai et al., 2023).", + "bbox": [ + 507, + 727, + 882, + 774 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2.4 Code", + "text_level": 1, + "bbox": [ + 507, + 787, + 611, + 801 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Code Completion requires models to complete missing code fragments based on preceding code and context (Chen et al., 2021a; Zheng et al., 2023; Bai et al., 2023; Guo et al., 2023; Zan et al., 2022; Dong et al., 2023a; Qin et al., 2024).", + "bbox": [ + 507, + 807, + 882, + 887 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Code Running asks models to infer the output of lengthy programs by tracing a series of cascading", + "bbox": [ + 507, + 889, + 882, + 921 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 492, + 942, + 504, + 953 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "function calls (Bubeck et al., 2023; An et al., 2023; Zhang et al., 2024).", + "bbox": [ + 112, + 84, + 485, + 115 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Code Debugging requires models to identify deliberately inserted errors (Zhang et al., 2024).", + "bbox": [ + 112, + 116, + 485, + 148 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.2.5 In-Context Learning", + "text_level": 1, + "bbox": [ + 112, + 156, + 337, + 172 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "The input will contain a certain amount of examples, resulting in a long input. 
This is caused by the example itself is very long or the number of examples is particularly large. Based on this fact, we divide In-Context Learning task into two categories: long example learning and many-shot learning.", + "bbox": [ + 112, + 175, + 485, + 271 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Long Example Learning requires models to process extensive inputs with long examples which have large label spaces and generate accurate predictions. This task inherently is a long-context challenge (Bai et al., 2023; Li et al., 2024c; Li and Roth, 2002; NLPCC, 2014).", + "bbox": [ + 112, + 273, + 485, + 367 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Many-shot Learning leverages the expanded context windows of models to process hundreds or even thousands of examples in order to complete a given task (Yu et al., 2020; Bertsch et al., 2024b).", + "bbox": [ + 112, + 369, + 485, + 432 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.2.6 Text Generation", + "text_level": 1, + "bbox": [ + 112, + 441, + 302, + 455 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Language Modeling serving as the pre-training task for LLMs, is also a widely used basic task to test the model's ability to generate text.", + "bbox": [ + 112, + 460, + 485, + 507 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Document Summarization requires models to make a summary of the input documents, which encompasses single-document and multi-document tasks. Single-document summarization extracts key information from a single document (Wang et al., 2022; Chen et al., 2021b; Huang et al., 2021; Zhong et al., 2021), while multi-document summarization synthesizes information from multiple sources into a comprehensive, non-repetitive summary containing all key points (Bai et al., 2023; An et al., 2023; Fabbri et al., 2019).", + "bbox": [ + 112, + 508, + 485, + 684 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Open-ended Text Generation requires models to produce coherent and logical content on given topics (Tan et al., 2024; Bai et al., 2024; Kumar et al., 2024; Ni et al., 2024; Rafailov et al., 2024).", + "bbox": [ + 112, + 686, + 485, + 749 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.2.7 Other Tasks", + "text_level": 1, + "bbox": [ + 112, + 758, + 270, + 771 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "In addition to the six types of tasks listed above, there are some tasks that are not included in this classification system but are equally important for testing the model's long context ability.", + "bbox": [ + 112, + 776, + 485, + 840 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Reordering asks models to reconstruct the original sequence of shuffled fragments by considering the broad context and logical relationships (Krysciński et al., 2021; Shaham et al., 2023; Li et al., 2023a; Dong et al., 2023a; Wang et al., 2024a).", + "bbox": [ + 112, + 841, + 485, + 920 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Context Consistency shows models an academic paper and a hypothesis, requiring models to judge whether the hypothesis is supported or contradicted by the ideas in the paper (Dong et al., 2023a).", + "bbox": [ + 507, + 84, + 880, + 148 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Summary Source Paragraph Identification challenges models to identify the original source paragraphs for given summaries (Bai et al., 2023).", + "bbox": [ + 507, + 149, + 882, + 197 + ], + "page_idx": 6 + }, 
+ { + "type": "text", + "text": "Character Identification requires models to identify different speakers in long dialogues by recognizing their distinct characteristics (TVMEG, 2024; Senedd Cymru, 2024; Zhang et al., 2024; Dong et al., 2023a; Chen et al., 2021b).", + "bbox": [ + 507, + 198, + 882, + 277 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.3 Metrics", + "text_level": 1, + "bbox": [ + 507, + 290, + 616, + 304 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "In addition to data and tasks, metrics can directly reflect the model's ability to handle long contexts. With current long context task designs gradually changing from classic NLP tasks to more practical tasks, the requirements for metrics are constantly increasing. We organize metrics for testing models' capabilities on long context according to the three stages of metrics development: Algorithmic Metrics, Model-based Metrics, and LLM-based Metrics. From these three metrics stages, it can be seen that the metrics development trend becomes more and more complex and flexible.", + "bbox": [ + 507, + 311, + 882, + 502 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.3.1 Algorithmic Metrics", + "text_level": 1, + "bbox": [ + 507, + 514, + 729, + 530 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Algorithmic metrics are calculated based on the model output or logits through defined formulas. Their implementation is very simple and can reflect the effect of language modeling and some downstream tasks to a certain extent.", + "bbox": [ + 507, + 533, + 882, + 613 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Perplexity (PPL) is one of the most common algorithmic metrics used in existing long context benchmarks (Beltagy et al., 2020; Roy et al., 2021; Press et al., 2021). Meanwhile, some benchmarks employ other algorithmic metrics such as accuracy, f1, and N-gram-based metrics (ROUGE, Lin, 2004 and BLEU, Papineni et al., 2002, etc.) to evaluate LLMs on certain downstream tasks (Shaham et al., 2023; Bai et al., 2023; Kasai et al., 2021).", + "bbox": [ + 507, + 615, + 882, + 758 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "However, these algorithmic metrics have several limitations, such as content quality, syntactic accuracy, and human correlation issues (Reiter and Belz, 2009; Stent et al., 2005; Sun et al., 2021; An et al., 2023; Improving; Tan et al., 2024). This causes algorithmic metrics to be limited in reflecting the model's ability to process long context. A number of approaches have been developed to improve algorithmic metrics. Such as enhancing scoring techniques, restructuring task formats and so on (Yuan", + "bbox": [ + 507, + 760, + 882, + 920 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 492, + 942, + 502, + 952 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "et al., 2024; Dong et al., 2023a; Li et al., 2024b).", + "bbox": [ + 112, + 84, + 478, + 99 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.3.2 Model-based Metrics", + "text_level": 1, + "bbox": [ + 112, + 110, + 341, + 124 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "To improve the consistency with human judgments, pre-trained language models are being employed to evaluate (Zhang et al., 2020; Yuan et al., 2021). Specifically, pre-trained models (such as BERT, Devlin, 2018, BART, Lewis, 2019, etc.) 
are used to calculate the similarity score between the model output and reference text to evaluate the performance of downstream tasks.", + "bbox": [ + 112, + 129, + 489, + 256 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "However, these model-based metrics entirely rely on representations learned from pre-trained language models and require reference texts. They may not be accurate enough for evaluating some novel and creative text generation tasks.", + "bbox": [ + 112, + 258, + 487, + 338 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.3.3 LLM-based Metrics", + "text_level": 1, + "bbox": [ + 112, + 348, + 332, + 362 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Combining the above two metrics issues, LLM-based metrics are proposed, utilizing sufficient knowledge within LLMs for evaluation. For example, LLM-based metrics prompt LLMs to offer human-like multi-dimensional assessment (Wang et al., 2023a; Li et al., 2023a; Shen et al., 2023; Chiang and Lee, 2023; Zhang et al., 2024; Zheng et al., 2024; Liu et al., 2023c; Tan et al., 2024; Mu et al., 2024a) and interpretable reasoning (Wang et al., 2023b; Luo et al., 2023; Wu et al., 2023).", + "bbox": [ + 112, + 367, + 489, + 527 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "LLM-based metrics fundamentally distinguish from the other two metrics, which behave much more mechanically. In addition, they demonstrate enhanced agreement with human evaluations (Wang et al., 2023a; Li et al., 2023a). Due to this higher consistency and wider scope of application, LLM-based metrics are gaining increasing attention in long-context evaluation.", + "bbox": [ + 112, + 529, + 489, + 657 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5 Future Roadmap and Open Problems", + "text_level": 1, + "bbox": [ + 112, + 670, + 473, + 686 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Despite the rapid development of long context techniques, numerous challenges remain unresolved. Looking to future roadmap, we list vital open problems and present our perspectives on the developments. They are also divided into two parts: approaches and evaluation.", + "bbox": [ + 112, + 696, + 489, + 791 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5.1 Approaches", + "text_level": 1, + "bbox": [ + 112, + 804, + 253, + 819 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Method Integration would combine methods' strengths to address the challenges of extrapolating long context from multiple perspectives.", + "bbox": [ + 112, + 824, + 487, + 873 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Long Text Generation remains under-researched, which concentrate on effective long-text generation techniques and the evaluation of generation quality.", + "bbox": [ + 112, + 873, + 489, + 921 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Sparse Attention Mechanisms may lead to a decrease in models' original language ability, thereby limiting their potential for processing long context. 
\"Lost-in-the-Middle\" Issue has not yet been completely resolved, there is a lack of targeted solutions and appropriate verification methods.", + "bbox": [ + 507, + 84, + 884, + 183 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Scalability of Methods requires to explore how existing methods can be adapted to models of different scales or even different architectural frameworks, enhancing their generality and applicability.", + "bbox": [ + 507, + 186, + 884, + 250 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Methods Enabling \"Train Short, Test Long\" haven't emerged, which train on short texts while excelling in long-context. These methods can reduce resource needs and improve generalization.", + "bbox": [ + 507, + 253, + 882, + 317 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Trade-off between Information Filtering and Generation Effects means existing methods can be optimized by integrating RAG to enhance efficiency and quality without too long input.", + "bbox": [ + 507, + 319, + 882, + 384 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5.2 Evaluation", + "text_level": 1, + "bbox": [ + 507, + 404, + 640, + 418 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Knowledge Leakage Issue is ever-present. As LLMs gain the ability to gather information from the Internet and their training data scope expands, existing solutions become increasingly ineffective and some operations may limit innovation.", + "bbox": [ + 507, + 429, + 882, + 508 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Novel Benchmark Design needed to be proposed. We need to construct benchmarks with coherent content and long-distance dependencies to more effectively test the model's ability to process long context. For example, asking models to process inputs from multiple books.", + "bbox": [ + 507, + 513, + 882, + 608 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Updated LLM-based Metrics are a development direction. Though LLM-based metrics show higher consistency with human judgments than other metrics, they are costly, have random outputs, and even lack human emotions. We need to combine LLM with other techniques to better evaluate.", + "bbox": [ + 507, + 612, + 882, + 708 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6 Conclusion", + "text_level": 1, + "bbox": [ + 507, + 728, + 640, + 743 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this survey, we first list three inherent challenges in processing long context. And then we propose a novel taxonomy for long context approaches and summarize the similarities and differences in each category. In addition, we systematically review the work on evaluation, summarize the data, tasks, and metrics related to long context based on existing benchmark. Finally, we list unsolved issues and put forward our insights on the future development of long context domain.", + "bbox": [ + 507, + 760, + 882, + 920 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 492, + 942, + 504, + 953 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Limitations", + "text_level": 1, + "bbox": [ + 114, + 84, + 220, + 99 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "This survey summarizes the approaches and evaluation in the area of long context, and gives our views on future development. However, we don't cover efficient transformer on long context, multimodel long context, etc. 
In addition, due to limitations in space, we are not able to include all related work.", + "bbox": [ + 112, + 112, + 487, + 206 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Due to the rapidly evolving nature of the field of Transformer context extension, our survey may not capture the latest developments, particularly those that emerged near or after the time of writing.", + "bbox": [ + 112, + 210, + 487, + 274 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 114, + 304, + 213, + 319 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.", + "Rishabh Agarwal, Avi Singh, Lei M Zhang, Bernd Bohnet, Stephanie Chan, Ankesh Anand, Zaheer Abbas, Azade Nova, John D Co-Reyes, Eric Chu, et al. 2024. Many-shot in-context learning. arXiv preprint arXiv:2404.11018.", + "Chenxin An, Shansan Gong, Ming Zhong, Xingjian Zhao, Mukai Li, Jun Zhang, Lingpeng Kong, and Xipeng Qiu. 2023. L-eval: Instituting standardized evaluation for long context language models. arXiv preprint arXiv:2307.11088.", + "Chenxin An, Fei Huang, Jun Zhang, Shansan Gong, Xipeng Qiu, Chang Zhou, and Lingpeng Kong. 2024. Training-free long-context scaling of large language models. arXiv preprint arXiv:2402.17463.", + "Stefanos Angelidis, Reinald Kim Amplayo, Yoshihiko Suhara, Xiaolan Wang, and Mirella Lapata. 2021. Extractive opinion summarization in quantized transformer spaces. Transactions of the Association for Computational Linguistics, 9:277-293.", + "Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, et al. 2023. Longbench: A bilingual, multitask benchmark for long context understanding. arXiv preprint arXiv:2308.14508.", + "Yushi Bai, Jiajie Zhang, Xin Lv, Linzhi Zheng, Siqi Zhu, Lei Hou, Yuxiao Dong, Jie Tang, and Juanzi Li. 2024. Longwriter: Unleashing $10,000+$ word generation from long context llms. arXiv preprint arXiv:2408.07055.", + "Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150." + ], + "bbox": [ + 115, + 330, + 489, + 920 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Amanda Bertsch, Uri Alon, Graham Neubig, and Matthew Gormley. 2024a. Unlimformer: Long-range transformers with unlimited length input. Advances in Neural Information Processing Systems, 36.", + "Amanda Bertsch, Maor Ivgi, Uri Alon, Jonathan Berant, Matthew R Gormley, and Graham Neubig. 2024b. In-context learning with long-context models: An in-depth exploration. arXiv preprint arXiv:2405.00200.", + "bloc97. 2023. Add NTK-Aware interpolation \"by parts\" correction. https://github.com/jquesnelle/scaled-rope/pull/1.", + "Tom B Brown. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.", + "Sebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712.", + "Aydar Bulatov, Yury Kuratov, and Mikhail Burtsev. 2022. Recurrent memory transformer. 
Advances in Neural Information Processing Systems, 35:11079-11091.", + "Howard Chen, Ramakanth Pasunuru, Jason Weston, and Asli Celikyilmaz. 2023a. Walking down the memory maze: Beyond context limit through interactive reading. arXiv preprint arXiv:2310.05029.", + "Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021a. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.", + "Mingda Chen, Zewei Chu, Sam Wiseman, and Kevin Gimpel. 2021b. Summscreen: A dataset for abstractive screenplay summarization. arXiv preprint arXiv:2104.07091.", + "Qiguang Chen, Libo Qin, Jinhao Liu, Dengyun Peng, Jiannan Guan, Peng Wang, Mengkang Hu, Yuhang Zhou, Te Gao, and Wanxiang Che. 2025. Towards reasoning era: A survey of long chain-of-thought for reasoning large language models. Preprint, arXiv:2503.09567.", + "Qiguang Chen, Libo Qin, Jiaqi Wang, Jinxuan Zhou, and Wanxiang Che. 2024a. Unlocking the boundaries of thought: A reasoning granularity framework to quantify and optimize chain-of-thought.", + "Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. 2023b. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595." + ], + "bbox": [ + 510, + 85, + 884, + 920 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 492, + 942, + 504, + 953 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, and Jiaya Jia. 2023c. Longlora: Efficient fine-tuning of long-context large language models. arXiv preprint arXiv:2309.12307.", + "Zhi Chen, Qiguang Chen, Libo Qin, Qipeng Guo, Hai-jun Lv, Yicheng Zou, Wanxiang Che, Hang Yan, Kai Chen, and Dahua Lin. 2024b. What are the essential factors in crafting effective long context multi-hop instruction datasets? insights and best practices. arXiv preprint arXiv:2409.01893.", + "Alexis Chevalier, Alexander Wettig, Anirudh Ajith, and Danqi Chen. 2023. Adapting language models to compress contexts. arXiv preprint arXiv:2305.14788.", + "Ta-Chung Chi, Ting-Han Fan, Peter J Ramadge, and Alexander Rudnicky. 2022a. Kerple: Kernelized relative positional embedding for length extrapolation. Advances in Neural Information Processing Systems, 35:8386-8399.", + "Ta-Chung Chi, Ting-Han Fan, Alexander I Rudnicky, and Peter J Ramadge. 2022b. Dissecting transformer length extrapolation via the lens of receptive field analysis. arXiv preprint arXiv:2212.10356.", + "Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937.", + "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.", + "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978-2988.", + "Jacob Devlin. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.", + "Suvodip Dey, Maunendra Sankar Desarkar, Asif Ekbal, and PK Srijith. 2022. 
Dialogen: Generalized long-range context representation for dialogue systems. arXiv preprint arXiv:2210.06282.", + "Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, and Furu Wei. 2023. Longnet: Scaling transformers to 1,000,000,000 tokens. arXiv preprint arXiv:2307.02486.", + "Zican Dong, Tianyi Tang, Junyi Li, Wayne Xin Zhao, and Ji-Rong Wen. 2023a. Bamboo: A comprehensive benchmark for evaluating long text modeling capacities of large language models. arXiv preprint arXiv:2309.13345." + ], + "bbox": [ + 115, + 85, + 489, + 919 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Zican Dong, Tianyi Tang, Lunyi Li, and Wayne Xin Zhao. 2023b. A survey on long text modeling with transformers. arXiv preprint arXiv:2302.14502.", + "Alexander R Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. arXiv preprint arXiv:1906.01749.", + "Wenqi Fan, Yujuan Ding, Liangbo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, and Qing Li. 2024. A survey on rag meeting llms: Towards retrieval-augmented large language models. Preprint, arXiv:2405.06211.", + "Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. Gptscore: Evaluate as you desire. arXiv preprint arXiv:2302.04166.", + "Tao Ge, Jing Hu, Lei Wang, Xun Wang, Si-Qing Chen, and Furu Wei. 2023. In-context autoencoder for context compression in a large language model. arXiv preprint arXiv:2307.06945.", + "Shahriar Golchin and Mihai Surdeanu. 2023. Time travel in llms: Tracing data contamination in large language models. arXiv preprint arXiv:2308.08493.", + "Daya Guo, Canwen Xu, Nan Duan, Jian Yin, and Julian McAuley. 2023. Longcoder: A long-range pretrained language model for code completion. In International Conference on Machine Learning, pages 12098-12107. PMLR.", + "Chi Han, Qifan Wang, Hao Peng, Wenhan Xiong, Yu Chen, Heng Ji, and Sinong Wang. 2024. Lm infinite: Zero-shot extreme length generalization for large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 3991-4008.", + "Yaru Hao, Yutao Sun, Li Dong, Zhixiong Han, Yuxian Gu, and Furu Wei. 2022. Structured prompting: Scaling in-context learning to 1,000 examples. arXiv preprint arXiv:2212.06713.", + "Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. 2020. Constructing a multi-hop qa dataset for comprehensive evaluation of reasoning steps. arXiv preprint arXiv:2011.01060.", + "Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. 2021. Efficient attentions for long document summarization. arXiv preprint arXiv:2104.02112.", + "Yunpeng Huang, Jingwei Xu, Junyu Lai, Zixu Jiang, Taolue Chen, Zenan Li, Yuan Yao, Xiaoxing Ma, Lijuan Yang, Hao Chen, et al. 2023. Advancing transformer architecture in long-context large language models: A comprehensive survey. arXiv preprint arXiv:2311.12351." + ], + "bbox": [ + 510, + 85, + 882, + 919 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 489, + 942, + 510, + 954 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "NLG Improving. 
Not all metrics are guilty: Improving nlg evaluation with llm paraphrasing.", + "Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2024a. LongLLM Lingua: Accelerating and enhancing LLMs in long context scenarios via prompt compression. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1658-1677, Bangkok, Thailand. Association for Computational Linguistics.", + "Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023. Llm lingua: Compressing prompts for accelerated inference of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13358-13376.", + "Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2024b. Longlmlingua: Accelerating and enhancing llms in long context scenarios via prompt compression. Preprint, arXiv:2310.06839.", + "Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551.", + "Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Lavinia Dunagan, Jacob Morrison, Alexander R Fabbri, Yejin Choi, and Noah A Smith. 2021. Bidimensional leaderboards: Generate and evaluate language hand in hand. arXiv preprint arXiv:2112.04139.", + "Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, and Siva Reddy. 2024. The impact of positional encoding on length generalization in transformers. Advances in Neural Information Processing Systems, 36.", + "Tomáš Kočisky, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317-328.", + "Wojciech Krysciński, Nazneen Rajani, Divyansh Agarwal, Caiming Xiong, and Dragomir Radev. 2021. Booksum: A collection of datasets for long-form narrative summarization. arXiv preprint arXiv:2105.08209.", + "Ishita Kumar, Snigdha Viswanathan, Sushrita Yerra, Alireza Salemi, Ryan A Rossi, Franck Dernoncourt, Hanieh Deilamsalehy, Xiang Chen, Ruiyi Zhang, Shubham Agarwal, et al. 2024. Longlamp: A benchmark for personalized long-form text generation. arXiv preprint arXiv:2407.11016.", + "M Lewis. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461." + ], + "bbox": [ + 115, + 85, + 485, + 917 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Haoyang Li, Yiming Li, Anxin Tian, Tianhao Tang, Zhanchao Xu, Xuejia Chen, Nicole Hu, Wei Dong, Qing Li, and Lei Chen. 2025. A survey on large language model acceleration based on kv cache management. Preprint, arXiv:2412.19442.", + "Jia-Nan Li, Quan Tu, Cunli Mao, Zhengtao Yu, JiRong Wen, and Rui Yan. 2024a. Streamingdialogue: Prolonged dialogue learning via long context compression with minimal losses. arXiv preprint arXiv:2403.08312.", + "Jiaqi Li, Mengmeng Wang, Zilong Zheng, and Muhan Zhang. 2023a. Loogle: Can long-context language models understand long contexts? arXiv preprint arXiv:2311.04939.", + "Mo Li, Songyang Zhang, Yunxin Liu, and Kai Chen. 2024b. Needlebench: Can llms do retrieval and reasoning in 1 million context window? 
arXiv preprint arXiv:2407.11963.", + "Tianle Li, Ge Zhang, Quy Duc Do, Xiang Yue, and Wenhu Chen. 2024c. Long-context llms struggle with long in-context learning. arXiv preprint arXiv:2404.02060.", + "Xianming Li, Zongxi Li, Xiaotian Luo, Haoran Xie, Xing Lee, Yingbin Zhao, Fu Lee Wang, and Qing Li. 2023b. Recurrent attention networks for long-text modeling. arXiv preprint arXiv:2306.06843.", + "Xin Li and Dan Roth. 2002. Learning question classifiers. In COLING 2002: The 19th International Conference on Computational Linguistics.", + "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.", + "Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2024a. Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics, 12:157-173.", + "Shuaiqi Liu, Jiannong Cao, Zhongfen Deng, Wenting Zhao, Ruosong Yang, Zhiyuan Wen, and S Yu Philip. 2023a. Neural abstractive summarization for long text and multiple tables. IEEE Transactions on Knowledge and Data Engineering.", + "Tianyang Liu, Canwen Xu, and Julian McAuley. 2023b. Repobench: Benchmarking repository-level code auto-completion systems. arXiv preprint arXiv:2306.03091.", + "Xiaoran Liu, Hang Yan, Shuo Zhang, Chenxin An, Xipeng Qiu, and Dahua Lin. 2024b. Scaling laws of rope-based extrapolation. Preprint, arXiv:2310.05209.", + "Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023c. G-eval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634." + ], + "bbox": [ + 510, + 85, + 880, + 920 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 489, + 942, + 507, + 954 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Zheheng Luo, Qianqian Xie, and Sophia Ananiadou. 2023. Chatgpt as a factual inconsistency evaluator for text summarization. arXiv preprint arXiv:2303.15621.", + "Amirkeivan Mohtashami and Martin Jaggi. 2023. Landmark attention: Random-access infinite context length for transformers. arXiv preprint arXiv:2305.16300.", + "Amirkeivan Mohtashami and Martin Jaggi. 2024. Random-access infinite context length for transformers. Advances in Neural Information Processing Systems, 36.", + "Honglin Mu, Yang Xu, Yunlong Feng, Xiaofeng Han, Yitong Li, Yutai Hou, and Wanxiang Che. 2024a. Beyond static evaluation: A dynamic approach to assessing ai assistants' api invocation capabilities. Preprint, arXiv:2403.11128.", + "Jesse Mu, Xiang Li, and Noah Goodman. 2024b. Learning to compress prompts with gist tokens. Advances in Neural Information Processing Systems, 36.", + "Xuanfan Ni, Hengyi Cai, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, and Piji Li. 2024. $\\mathrm{XI}^2$ bench: A benchmark for extremely long context understanding with long-range dependencies. arXiv preprint arXiv:2404.05446.", + "NLPCC. 2014. Task definition for large scale text categorization at nlpcc 2014.", + "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318.", + "Saurav Pawar, SM Tonmoy, SM Zaman, Vinija Jain, Aman Chadha, and Amitava Das. 2024. The what, why, and how of context length extension techniques in large language models—a detailed survey. 
arXiv preprint arXiv:2401.07872.", + "Bowen Peng and Jeffrey Quesnelle. 2023. Ntk-aware scaled rope allows llama models to have extended $(8k+)$ context size without any fine-tuning and minimal perplexity degradation.", + "Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. 2023. Yarn: Efficient context window extension of large language models. arXiv preprint arXiv:2309.00071.", + "Ofir Press, Noah A Smith, and Mike Lewis. 2021. Train short, test long: Attention with linear biases enables input length extrapolation. arXiv preprint arXiv:2108.12409.", + "Libo Qin, Qiguang Chen, Xiachong Feng, Yang Wu, Yongheng Zhang, Yinghui Li, Min Li, Wanxiang Che, and Philip S Yu. 2024. Large language models meet nlp: A survey. arXiv preprint arXiv:2405.12819." + ], + "bbox": [ + 115, + 85, + 485, + 919 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Han Qiu, Jiaxing Huang, Peng Gao, Qin Qi, Xiaogin Zhang, Ling Shao, and Shijian Lu. 2024. Longhalqa: Long-context hallucination evaluation for multimodal large language models. Preprint, arXiv:2410.09962.", + "Alec Radford. 2018. Improving language understanding by generative pre-training.", + "Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2024. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36.", + "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1-67.", + "P Rajpurkar. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.", + "Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Inbal Magar, Omri Abend, Ehud Karpas, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2022. Parallel context windows for large language models. arXiv preprint arXiv:2212.10947.", + "Ehud Reiter and Anja Belz. 2009. An investigation into the validity of some metrics for automatically evaluating natural language generation systems. Computational Linguistics, 35(4):529-558.", + "Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. 2021. Efficient content-based sparse attention with routing transformers. Transactions of the Association for Computational Linguistics, 9:53-68.", + "Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, et al. 2023. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950.", + "Ohad Rubin and Jonathan Berant. 2023. Long-range language modeling with self-retrieval. arXiv preprint arXiv:2306.13421.", + "Senedd Cymru. 2024. Record of proceedings - senedd. https://record.assembly.wales/. Accessed: 2024-10-15.", + "Uri Shaham, Maor Ivgi, Avia Efrat, Jonathan Berant, and Omer Levy. 2023. Zeroscrolls: A zero-shot benchmark for long text understanding. arXiv preprint arXiv:2305.14196.", + "Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, et al. 2022. **Scrolls: Standardized comparison over long language sequences. 
arXiv preprint arXiv:2201.03533.**" + ], + "bbox": [ + 510, + 85, + 880, + 919 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 489, + 942, + 507, + 954 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Eva Sharma, Chen Li, and Lu Wang. 2019. Bigpatent: A large-scale dataset for abstractive and coherent summarization. arXiv preprint arXiv:1906.03741.", + "Chenhui Shen, Liying Cheng, Xuan-Phi Nguyen, Yang You, and Lidong Bing. 2023. Large language models are not yet human-level evaluators for abstractive summarization. arXiv preprint arXiv:2305.13091.", + "Luohe Shi, Hongyi Zhang, Yao Yao, Zuchao Li, and Hai Zhao. 2024. Keep the cost down: A review on methods to optimize llm's kv-cache consumption. Preprint, arXiv:2407.18003.", + "Dingjie Song, Shunian Chen, Guiming Hardy Chen, Fei Yu, Xiang Wan, and Benyou Wang. 2024. Milebench: Benchmarking mllms in long context. arXiv preprint arXiv:2404.18532.", + "Amanda Stent, Matthew Marge, and Mohit Singhai. 2005. Evaluating evaluation methods for generation in the presence of variation. In International conference on intelligent text processing and computational linguistics, pages 341-351. Springer.", + "Jianlin Su. 2023. Rectified rotary position embeddings. https://github.com/bojone/erope.", + "Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. 2024. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063.", + "Simeng Sun, Kalpesh Krishna, Andrew Mattarella-Micke, and Mohit Iyyer. 2021. Do long-range language models actually use long-range context? arXiv preprint arXiv:2109.09115.", + "Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary, Xia Song, and Furu Wei. 2022. A length-extrapolatable transformer. arXiv preprint arXiv:2212.10554.", + "Haochen Tan, Zhijiang Guo, Zhan Shi, Lu Xu, Zhili Liu, Xiaoguang Li, Yasheng Wang, Lifeng Shang, Qun Liu, and Linqi Song. 2024. Proxyqa: An alternative framework for evaluating long-form text generation with large language models. arXiv preprint arXiv:2401.15042.", + "Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng. 2020. Fourier features let networks learn high frequency functions in low dimensional domains. Advances in neural information processing systems, 33:7537-7547.", + "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971." + ], + "bbox": [ + 115, + 85, + 485, + 917 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022. Musique: Multi-hop questions via single-hop question composition. Transactions of the Association for Computational Linguistics, 10:539-554.", + "TVMEG. 2024. Tvmeg - tv mega engine. https://tvmeg.com/. Accessed: 2024-10-15.", + "Szymon Tworkowski, Konrad Staniszewski, Mikołaj Pacek, Yuhuai Wu, Henryk Michalewski, and Piotr Miłos. 2024. Focused transformer: Contrastive training for context scaling. Advances in Neural Information Processing Systems, 36.", + "A Vaswani. 2017. Attention is all you need. 
Advances in Neural Information Processing Systems.", + "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2023. Attention is all you need. Preprint, arXiv:1706.03762.", + "Alex Wang, Richard Yuanzhe Pang, Angelica Chen, Jason Phang, and Samuel R Bowman. 2022. Squality: Building a long-document summarization dataset the hard way. arXiv preprint arXiv:2205.11465.", + "Ben Wang and Aran Komatsuzaki. 2021. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/mesh-transformer-jax.", + "Chonghua Wang, Haodong Duan, Songyang Zhang, Dahua Lin, and Kai Chen. 2024a. Ada-level: Evaluating long-context llms with length-adaptable benchmarks. arXiv preprint arXiv:2404.06480.", + "Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023a. Is chatgpt a good nlg evaluator? a preliminary study. arXiv preprint arXiv:2303.04048.", + "Peiyi Wang, Lei Li, Liang Chen, Zefan Cai, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023b. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926.", + "Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, and Furu Wei. 2024b. Augmenting language models with long-term memory. Advances in Neural Information Processing Systems, 36.", + "Xiao Wang, Shiao Wang, Yuhe Ding, Yuehang Li, Wentao Wu, Yao Rong, Weizhe Kong, Ju Huang, Shihao Li, Haoxiang Yang, Ziwen Wang, Bo Jiang, Chenglong Li, Yaowei Wang, Yonghong Tian, and Jin Tang. 2024c. State space model for new-generation network alternative to transformers: A survey. Preprint, arXiv:2404.09516.", + "Ning Wu, Ming Gong, Linjun Shou, Shining Liang, and Daxin Jiang. 2023. Large language models are diverse role-players for summarization evaluation. In" + ], + "bbox": [ + 510, + 85, + 880, + 919 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 489, + 942, + 507, + 954 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "CCF International Conference on Natural Language Processing and Chinese Computing, pages 695-707. Springer.", + "Yuhuai Wu, Markus N Rabe, DeLesley Hutchins, and Christian Szegedy. 2022. Memorizing transformers. arXiv preprint arXiv:2203.08913.", + "Chaojun Xiao, Pangle Zhang, Xu Han, Guangxuan Xiao, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Song Han, and Maosong Sun. 2024. Inflamm: Unveiling the intrinsic capacity of llms for understanding extremely long sequences with training-free memory. arXiv preprint arXiv:2402.04617.", + "Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2023. Efficient streaming language models with attention sinks. arXiv preprint arXiv:2309.17453.", + "Xin Xu, Tong Xiao, Zitong Chao, Zhenya Huang, Can Yang, and Yang Wang. 2024. Can llms solve longer math word problems better? arXiv preprint arXiv:2405.14804.", + "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600.", + "Dian Yu, Kai Sun, Claire Cardie, and Dong Yu. 2020. Dialogue-based relation extraction. arXiv preprint arXiv:2004.08056.", + "Hao Yu, Aoran Gan, Kai Zhang, Shiwei Tong, Qi Liu, and Zhaofeng Liu. 2024. Evaluation of retrieval-augmented generation: A survey. 
Preprint, arXiv:2405.07437.", + "Lili Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, and Mike Lewis. 2023. Megabyte: Predicting million-byte sequences with multiscale transformers. Advances in Neural Information Processing Systems, 36:78808-78823.", + "Tao Yuan, Xuefei Ning, Dong Zhou, Zhijie Yang, Shiyao Li, Minghui Zhuang, Zheyue Tan, Zhuyu Yao, Dahua Lin, Boxun Li, et al. 2024. Lv-eval: A balanced long-context benchmark with 5 length levels up to 256k. arXiv preprint arXiv:2402.05136.", + "Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. Bartscore: Evaluating generated text as text generation. Advances in Neural Information Processing Systems, 34:27263-27277.", + "Daoguang Zan, Bei Chen, Zeqi Lin, Bei Guan, Yongji Wang, and Jian-Guang Lou. 2022. When language model meets private library. arXiv preprint arXiv:2210.17236.", + "Fengji Zhang, Bei Chen, Yue Zhang, Jacky Keung, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, and Weizhu Chen. 2023a. Repocoder: Repository-level code completion through iterative retrieval and generation. arXiv preprint arXiv:2303.12570." + ], + "bbox": [ + 115, + 85, + 487, + 919 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Lei Zhang, Yunshui Li, Ziqiang Liu, Junhao Liu, Min Yang, et al. 2023b. Marathon: A race through the realm of long context with large language models. arXiv preprint arXiv:2312.09542.", + "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. arxiv. arXiv preprint arXiv:1904.09675.", + "Xinrong Zhang, Yingfa Chen, Shengding Hu, Zihang Xu, Junhao Chen, Moo Hao, Xu Han, Zhen Thai, Shuo Wang, Zhiyuan Liu, et al. 2024. $\\propto$ bench: Extending long context evaluation beyond 100k tokens. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15262-15277.", + "Liang Zhao, Xiaocheng Feng, Xiachong Feng, Bin Qin, and Ting Liu. 2023. Length extrapolation of transformers: A survey from the perspective of position encoding. arXiv preprint arXiv:2312.17044.", + "Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu, Ling Yang, Wentao Zhang, Jie Jiang, and Bin Cui. 2024. Retrieval-augmented generation for ai-generated content: A survey. Preprint, arXiv:2402.19473.", + "Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2024. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36.", + "Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Lei Shen, Zihan Wang, Andi Wang, Yang Li, et al. 2023. Codegeex: A pre-trained model for code generation with multilingual benchmarking on humaneval-x. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 5673-5684.", + "Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, et al. 2021. Qmsum: A new benchmark for query-based multidomain meeting summarization. arXiv preprint arXiv:2104.05938.", + "Zixuan Zhou, Xuefei Ning, Ke Hong, Tianyu Fu, Ji-aming Xu, Shiyao Li, Yuming Lou, Luning Wang, Zhihang Yuan, Xiuhong Li, et al. 2024. A survey on efficient inference for large language models. arXiv preprint arXiv:2404.14294.", + "Dawei Zhu, Liang Wang, Nan Yang, Yifan Song, Wenhao Wu, Furu Wei, and Sujian Li. 
2024. Longembed: Extending embedding models for long context retrieval. arXiv preprint arXiv:2404.12096.", + "Ziyu Zhuang, Qiguang Chen, Longxuan Ma, Mingda Li, Yi Han, Yushan Qian, Haopeng Bai, Zixian Feng, Weinan Zhang, and Ting Liu. 2023. Through the lens of core competency: Survey on evaluation of large language models. ArXiv, abs/2308.07902." + ], + "bbox": [ + 510, + 85, + 880, + 919 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 489, + 942, + 509, + 953 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "A Details of Approaches", + "text_level": 1, + "bbox": [ + 114, + 84, + 342, + 101 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "This section serves as a supplement to the Approaches section 3 in the main text, expanding on relevant details about related methods to provide readers with a deeper understanding.", + "bbox": [ + 112, + 109, + 489, + 173 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "A.1 Postional Encoding", + "text_level": 1, + "bbox": [ + 114, + 183, + 315, + 199 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "A.1.1 Variants of RoPE", + "text_level": 1, + "bbox": [ + 114, + 203, + 315, + 218 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Su et al. (2024) try to seek a positional encoding method that could encode relative position during the computing query and key similarity, and decompose this process into the representations of the query and key. They conduct theoretical analysis, and propose a novel positional encoding, which transform similarity into following formula:", + "bbox": [ + 112, + 222, + 489, + 335 + ], + "page_idx": 14 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} \\operatorname {s i m} \\left(q _ {m}, k _ {n}\\right) = q _ {m} ^ {\\top} R _ {\\Theta , n - m} ^ {d} k _ {n} \\tag {3} \\\\ = (R _ {\\Theta , m} ^ {d} q _ {m}) ^ {\\top} (R _ {\\Theta , n} ^ {d} k _ {n}), \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 141, + 342, + 487, + 382 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "where $R_{\\Theta ,m}^{d}$ are a series of pre-defined orthogonal matrices, named as the rotation matrix, which is defined as follows:", + "bbox": [ + 112, + 392, + 487, + 439 + ], + "page_idx": 14 + }, + { + "type": "equation", + "text": "\n$$\nR _ {\\Theta , m} ^ {d} = \\left( \\begin{array}{c c c c c} \\cos m \\theta_ {1} & - \\sin m \\theta_ {1} & \\dots & 0 & 0 \\\\ \\sin m \\theta_ {1} & \\cos m \\theta_ {1} & \\dots & 0 & 0 \\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\ \\cdot & \\cdot & \\cdot & \\cdot & \\cdot \\\\ 0 & 0 & \\dots & \\cos m \\theta_ {d / 2} & - \\sin m \\theta_ {d / 2} \\\\ 0 & 0 & \\dots & \\sin m \\theta_ {d / 2} & \\cos m \\theta_ {d / 2} \\end{array} \\right)\n$$\n", + "text_format": "latex", + "bbox": [ + 117, + 445, + 489, + 504 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "The function set $\\Theta$ consists of a set of pre-defined function values $\\Theta = \\{\\theta_{i} = 10000^{-2(i - 1) / d}, i \\in [1,2,\\dots,d / 2]\\}$ . $R_{\\Theta}^{d}$ integrates positional information into the query and key vectors by multiplication. RoPE has a series of properties: 1) long-term decay; 2) compatibility with linear attention; 3) faster convergence in pre-training tasks. Besides, Liu et al. 
(2024b) conduct a detailed analysis of RoPE and provides the scaling laws for RoPE-based extrapolation.", + "bbox": [ + 112, + 511, + 489, + 671 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Position Index Adjustment An et al. (2024) propose Dual Chunk Attention (DCA), which distributes the position indexes used during pretraining to each token based on the relative position relationships between query and key without additional training. It is proposed from the perspective of allocation of position indexes.", + "bbox": [ + 112, + 678, + 489, + 791 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "And there are also some methods based on scaling position indexes. Chen et al. (2023b) propose Position Interpolation (PI) method that utilizes the fact that position encoding can be applied to non-integer positions. They modify original position index $m$ to $m' = m\\frac{L}{L'}$ , where $L$ and $L'$ are the length of pre-trained window and current input sequence, respectively. This method insert additional", + "bbox": [ + 112, + 793, + 489, + 921 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "positional encoding between adjacent integer position index in the original RoPE to handle longer sequences.", + "bbox": [ + 507, + 84, + 884, + 131 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Combining above two methods, Su (2023) proposed ReRoPE, which combines direct extrapolation and position interpolation. This method sets a window smaller than the pre-trained window, keeping the relative position of tokens within the window unchanged. And scales the relative position of tokens outside the window.", + "bbox": [ + 507, + 133, + 885, + 243 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Base Frequency Adjustment As described in the main text, this type of methods enhance the model extrapolation performance by modifying $\\theta_{i}$ in the trigonometric function terms in the rotation matrix.", + "bbox": [ + 507, + 256, + 882, + 335 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Peng and Quesnelle (2023); Roziere et al. (2023) choose to change the base $b$ of the exponential terms $\\theta_{i}$ from the default value $b = 10000$ to other values which can improve the model extrapolation performance.", + "bbox": [ + 507, + 337, + 882, + 416 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Different from them, some work directly scale $\\theta_{i}$ . NTK-by-parts (bloc97, 2023) interpolation chooses to scale the $\\theta_{i}$ of different dimensions in the rotation matrix by a ratio as a function of the dimension $i$ and the input sequence length $L^{\\prime}$ . 
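To make the rotation $R_{\Theta,m}^{d}$ and the index-scaling trick shared by these interpolation methods concrete, the following sketch applies RoPE to a query/key pair and then repeats the score computation with Position Interpolation, i.e. with the position index rescaled by $L/L'$. It is a minimal NumPy illustration rather than any paper's reference implementation; the dimension, the base 10000, and the window lengths are example values chosen here.

```python
# Minimal NumPy sketch of RoPE and Position Interpolation (PI).
# Illustrative only: d, the base, and L_train / L_test are example values.
import numpy as np

def rope_rotate(x: np.ndarray, m: float, base: float = 10000.0) -> np.ndarray:
    """Apply the RoPE rotation R^d_{Theta,m} to a d-dimensional vector x."""
    d = x.shape[-1]
    theta = base ** (-2 * np.arange(d // 2) / d)   # theta_i = base^{-2(i-1)/d}
    angles = m * theta                             # m * theta_i
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[0::2], x[1::2]                      # rotate dimension pairs (2i, 2i+1)
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

def rope_score(q, k, m, n, scale=1.0):
    """sim(q_m, k_n) = (R_m q)^T (R_n k); `scale` rescales indices as in PI."""
    return rope_rotate(q, m * scale) @ rope_rotate(k, n * scale)

rng = np.random.default_rng(0)
d, L_train, L_test = 64, 2048, 8192
q, k = rng.normal(size=d), rng.normal(size=d)

plain = rope_score(q, k, m=6000, n=10)                            # direct extrapolation
interp = rope_score(q, k, m=6000, n=10, scale=L_train / L_test)   # PI: m' = m * L / L'
print(f"extrapolated score: {plain:.3f}, interpolated score: {interp:.3f}")
```

The only difference between the two calls is the `scale` factor, which compresses out-of-window indices back into the range seen during pre-training.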
And YaRN (Peng et al., 2023) incorporates temperature $t$ related to the input sequence length $L^{\\prime}$ on the basis of NTK-by-parts interpolation to further improve the extrapolation performance of the model.", + "bbox": [ + 507, + 418, + 885, + 562 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Structure Modification XPOS (Sun et al., 2022) adjusts the original RoPE structure and introduces a position-dependent exponential bias to enhance relative position information, particularly enhancing the decay effect on distant tokens.", + "bbox": [ + 507, + 571, + 882, + 653 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "A.1.2 Attention Bias", + "text_level": 1, + "bbox": [ + 509, + 663, + 687, + 677 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Besides applying RoPE-based methods, a plenty of method add a bias related to the relative distance between tokens to introduce relative position information. The process can be expressed as follows:", + "bbox": [ + 507, + 683, + 882, + 747 + ], + "page_idx": 14 + }, + { + "type": "equation", + "text": "\n$$\n\\operatorname {s i m} \\left(q _ {m}, k _ {n}\\right) = q _ {m} ^ {\\top} k _ {n} + f _ {b i a s} (m, n), \\quad (4)\n$$\n", + "text_format": "latex", + "bbox": [ + 552, + 760, + 882, + 778 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "where $f_{bias}(m,n)$ is a bias function that depends on the token position index corresponding to query and key. $f_{bias}(m,n)$ be divided into two categories: learnable and predefined.", + "bbox": [ + 507, + 791, + 882, + 854 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "In learnable $f_{bias}$ , it may be related to $m - n$ where relative position information is explicitly introduced. For example, in T5 (Raffel et al., 2020), $f_{bias}$ is a learnable function with $m - n$ as input", + "bbox": [ + 507, + 857, + 882, + 921 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 489, + 941, + 507, + 954 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "and varies with attention heads. Similarly, KERPLE (Chi et al., 2022a) sets $f_{bias}$ as a parameterized kernel function, requiring training to determine the parameter values.", + "bbox": [ + 112, + 84, + 487, + 147 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "The predefined $f_{bias}$ is typically ALiBi (Attention with Linear Biases) (Press et al., 2021). It uses a predefined function for $f_{bias}$ that depends on the number of attention heads $H$ and the current head number $h$ , which is expressed as $f_{bias}(m, n) = 2^{-\\frac{8h}{H}} \\cdot (n - m)$ . Besides, in Sandwich method (Chi et al., 2022b), $f_{bias}$ is defined as $f_{bias} = \\frac{8h}{H} \\cdot (p_m^\\top p_n - \\frac{d}{2})$ , where $p_m$ and $p_n$ are the sinusoidal positional encoding used in the original Transformer model.", + "bbox": [ + 112, + 149, + 489, + 310 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "A.2 Context Compression", + "text_level": 1, + "bbox": [ + 112, + 326, + 334, + 342 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "A.2.1 Soft Compression", + "text_level": 1, + "bbox": [ + 112, + 350, + 317, + 365 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "This kind of methods achieve compression at the hidden states level.", + "bbox": [ + 112, + 370, + 485, + 401 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Bulatov et al. 
(2022) introduced the Recurrent Memory Transformer (RMT), which compresses at segment level. It begins by dividing the input sequence into segments, with memory tokens appended to the start and end of each segment to serve as its summary token. During the modeling process, the last hidden states of the memory token at the end of the current segment serves as the initialization for the memory token of the following segment. Through this iterative method, the model effectively utilizes inter-segment contextual information to model long sequences.", + "bbox": [ + 112, + 404, + 487, + 596 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Similarly, the Recurrent Attention Network (RAN, Li et al., 2023b) appends a Global Perception Cell (GPC) vector at the start of the hidden vector representation of each segment to achieve a compressed representation achieving the effect of concatenating summary tokens, and completing the information interaction between segments. This method simulates the human mechanism of memory enhancement through review, introducing a Memory Review scheme which performs crossattention between last hidden states of the GPC from all segments and and the original input to update the representation of GPC. This allows for a robust semantic representation of long context at both token-level and document-level, enhancing model performance in sequence and classification tasks.", + "bbox": [ + 112, + 598, + 489, + 869 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "AutoCompressors (Chevalier et al., 2023) is built on the basis of RMT, compressing the content of the segment into summary vectors for representa", + "bbox": [ + 112, + 873, + 489, + 921 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "tion. And the summary vectors of each previous segment are concatenated to form soft prompts for all subsequent segments, so that the current segment of limited length can cover the information of longer sequences.", + "bbox": [ + 507, + 84, + 882, + 164 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "In addition, In-context Autoencoder (ICAE, Ge et al., 2023) adds memory tokens at the end of the input sequence to compress context into short memory slots while training the model to generate outputs closely resembling the original context. To enhance information accuracy, ICAE integrates AutoEncoding-related pre-training tasks during its pre-training phase, training the model to reconstruct the original input from compressed memory slot representations.", + "bbox": [ + 507, + 166, + 884, + 326 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Gisting (Mu et al., 2024b) similarly compresses the prompt part of the input token sequence into shorter gist tokens, improving inference speed.", + "bbox": [ + 507, + 328, + 882, + 376 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "A.2.2 Hard Compression", + "text_level": 1, + "bbox": [ + 509, + 388, + 722, + 403 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Hard compression directly utilizes LLMs to compress original input text.", + "bbox": [ + 507, + 409, + 882, + 441 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "LLMLingua (Jiang et al., 2023) trains a small model to align with the output of LLM and uses the perplexity (PPL) of the small model as an evaluation for token importance. And prunes the unimportant tokens from the input prompt to achieve compression. 
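As a concrete illustration of this perplexity-driven pruning step, the sketch below scores every token of a prompt with a small causal language model and keeps only the most surprising ones. It is a simplified approximation of the idea, not the released LLMLingua code; the choice of `gpt2` as the scorer and the `keep_ratio` value are assumptions made for the example.

```python
# Sketch of perplexity-based prompt pruning in the spirit of LLMLingua.
# The scorer model ("gpt2") and keep_ratio are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def prune_prompt(text: str, keep_ratio: float = 0.6, scorer: str = "gpt2") -> str:
    """Keep only the tokens the small LM finds most surprising (highest -log p)."""
    tok = AutoTokenizer.from_pretrained(scorer)
    lm = AutoModelForCausalLM.from_pretrained(scorer).eval()
    ids = tok(text, return_tensors="pt").input_ids            # shape: [1, T]
    if ids.shape[1] < 3:
        return text                                           # nothing worth pruning
    with torch.no_grad():
        logits = lm(ids).logits                               # [1, T, vocab]
    logp = torch.log_softmax(logits[:, :-1], dim=-1)          # predicts tokens 1..T-1
    surprisal = -logp.gather(-1, ids[:, 1:, None]).squeeze(-1)[0]
    k = max(1, int(keep_ratio * surprisal.numel()))
    keep = torch.topk(surprisal, k).indices.sort().values + 1  # +1: token 0 is always kept
    return tok.decode(torch.cat([ids[0, :1], ids[0, keep]]))

print(prune_prompt("The meeting, which was held on Tuesday, decided to postpone the vote."))
```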
Furthe, LongLLMLingua (Jiang et al., 2024a) has made improvements on this basis, compressing the input based on the content of the question, thus better preserving key information related to the question.", + "bbox": [ + 507, + 443, + 882, + 602 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Differently, MEMWALKER (Chen et al., 2023a) employs a hierarchical summarization approach to compress long context sequences, iteratively summarizing the input to construct a tree-like structure of summarized content. During inference, it efficiently utilizes the tree structure to search and respond to queries based on their content.", + "bbox": [ + 507, + 604, + 882, + 715 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "A.3 Retrieval Augmented", + "text_level": 1, + "bbox": [ + 507, + 732, + 726, + 746 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "A.3.1 Retrieval Granularity", + "text_level": 1, + "bbox": [ + 507, + 753, + 744, + 769 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "The retrieval granularity in existing work can be divided into two categories: token-level retrieval and block-level retrieval.", + "bbox": [ + 507, + 775, + 880, + 822 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Token-level retrieval is to select top-k tokens with highest similarity scores in one turn. This method is widely used in existing (Wu et al., 2022; Tworkowski et al., 2024; Bertsch et al., 2024a). It is simple to implement, but it has some limitations. Such as the potential for semantic discontinuities", + "bbox": [ + 507, + 825, + 882, + 920 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 489, + 942, + 510, + 954 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "due to discrete token retrieval and the need to recalculate similarity for all tokens, which is computationally intensive and inefficient.", + "bbox": [ + 112, + 84, + 490, + 131 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Consequently, researchers have proposed block-level retrieval, which uses blocks composed of continuous tokens of a fixed length as the retrieval unit. Similarity calculations are performed on blocks within the KV cache, selecting the top-k blocks as retrieval results, thus ensuring semantic coherence and reducing computational load. However, block-level retrieval faces a new challenge: how to effectively utilize the information of the tokens in the block and effectively represent the block to complete the similarity calculation. Long-MEM (Wang et al., 2024b) and RPT (Rubin and Berant, 2023) represent the corresponding block by calculating the mean pooling of token representations within the block. InFLLM (Xiao et al., 2024) calculates the representative score of each token within the block against other tokens, selecting a subset of high-scoring tokens to represent the block. Additionally, some methods introduce an extra token to represent blocks, such as the Landmark method (Mohtashami and Jaggi, 2024) introduces the Landmark token, a new token into the vocabulary, and placie it at the end of each block. 
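A rough sketch of the mean-pooling style of block representation and the dot-product scoring discussed above is given below: the cached keys are split into fixed-length blocks, each block is summarized by the mean of its key vectors, and the top-k blocks under dot-product similarity with the current query are returned. The block size, top-k value, and the random "cache" are placeholders, not settings from LongMEM or RPT.

```python
# Sketch of block-level KV retrieval with mean-pooled block representations.
# Block size, top_k, and the random cache below are illustrative placeholders.
import numpy as np

def retrieve_blocks(query: np.ndarray, keys: np.ndarray, block: int = 64, top_k: int = 4):
    """Return indices and scores of the top_k blocks whose mean-pooled key
    representation has the largest dot product with the current query."""
    n_blocks = keys.shape[0] // block
    pooled = keys[: n_blocks * block].reshape(n_blocks, block, -1).mean(axis=1)
    scores = pooled @ query                        # dot-product similarity per block
    top = np.argsort(scores)[::-1][:top_k]         # best-scoring blocks
    top = np.sort(top)                             # restore document order
    return top, scores[top]

rng = np.random.default_rng(1)
cached_keys = rng.normal(size=(4096, 128))         # keys of a long, evicted context
query = rng.normal(size=128)                       # query of the token being decoded
blocks, scores = retrieve_blocks(query, cached_keys)
print("retrieved blocks:", blocks, "scores:", np.round(scores, 2))
```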
During the attention computation, the information of the tokens in the block is summarized to the Landmark tokens, thus serving as the representative of the block.", + "bbox": [ + 115, + 135, + 490, + 567 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A.3.2 Similarity Computation", + "text_level": 1, + "bbox": [ + 112, + 579, + 366, + 594 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "After determining the retrieval granularity, we need to formulate an appropriate rule to compute similarity. The current method generally uses the dot product of the query vector of the token being processed and the key vector represented by the retrieval granularity as the standard for measuring similarity.", + "bbox": [ + 112, + 599, + 490, + 712 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A.3.3 Positional Encoding", + "text_level": 1, + "bbox": [ + 112, + 722, + 337, + 739 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Since the positions of the retrieved context tokens are not fixed, and recording each token's specific position in the KV cache is costly, it is challenging to provide accurate position information.", + "bbox": [ + 112, + 743, + 487, + 807 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Based on experiments of Dai et al. (2019), which show that the relative position information of distant tokens does not seem to be important, some methods like MemTRM, FoT, and InfLLM choose to uniformly set the position encoding of the retrieved context token part to the same position vector, ignoring the position information between the", + "bbox": [ + 112, + 809, + 490, + 921 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "retrieved context tokens themselves.", + "bbox": [ + 507, + 84, + 779, + 99 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Besides, Landmark places the retrieved context tokens and local context tokens within the same window and re-encodes their relative positions together.", + "bbox": [ + 507, + 99, + 884, + 164 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A.3.4 Attention Calculation", + "text_level": 1, + "bbox": [ + 507, + 174, + 744, + 187 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "When it comes to attention calculation, it's important to find a suitable method to make full use of retrieved context tokens and local context tokens.", + "bbox": [ + 507, + 193, + 882, + 241 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "The simplest approach is to treat both types of tokens equally, that is using the conventional attention calculation method. For example, FoT and InfLLM use standard attention for calculation, while Unlimformer (Bertsch et al., 2024a) employs cross attention.", + "bbox": [ + 507, + 242, + 884, + 336 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "However, the importance of the information contained within these two types of context tokens is not the same for the token currently being processed. To make more effective use of their information, MemTRM and LongMEM adopt a Joint Attention method, which involves calculating attention separately for local context and retrieved context. 
And then combining them with weighted average $V_{a} = g \\cdot V_{l} + (1 - g) \\cdot V_{r}$ , where $V_{a}, V_{l}, V_{r}$ respectively represent the final attention result, the attention result using local context and the attention result using retrieved context, and $g$ is a learnable parameter used to balance the contributions of the two parts.", + "bbox": [ + 507, + 338, + 885, + 563 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Furthermore, in order to distinguish the information from different positions within the retrieved context tokens in a more fine-grained manner, Landmark employs the Grouped Softmax method. Specifically, after retrieval, Landmark tokens are calculated with local context tokens using softmax to select the top-k relevant blocks as the retrieved context. Attention is then calculated separately within these blocks. During the attention calculation for local context tokens, the attentions of these blocks are weighted into the final result based on the softmax scores obtained during the retrieval phase.", + "bbox": [ + 507, + 564, + 885, + 772 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A.4 Attention Pattern", + "text_level": 1, + "bbox": [ + 507, + 785, + 699, + 799 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A.4.1 Sliding Window", + "text_level": 1, + "bbox": [ + 507, + 804, + 699, + 821 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "This type of method tranform information between segments. Transformer-XL (Dai et al., 2019) uses sliding window method to process long context, where the hidden state from the previous segment is concatenated to the front of the current segment. It not only utilizes the key and value information", + "bbox": [ + 507, + 825, + 882, + 921 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 489, + 941, + 510, + 954 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "from the current segment but also reuses those from the previous segment. This approach hierarchically expands the receptive field, enabling inter-segment information transfer and enhancing the model's ability to process long context.", + "bbox": [ + 112, + 84, + 487, + 164 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Besides, Han et al. (2024) identify that starting tokens occupy a distinct feature space, and these tokens act as a factor causing model length generalization failures. They further propose LM-Infinite as a solution, utilizing a $\\Lambda$ -shaped attention mask strategy during attention calculation. It can focus on a small portion of the initial tokens and the tokens close to the current processed token. Similarly, StreamingLLM (Xiao et al., 2023) also finds that the initial tokens in a sequence significantly influence the attention calculation of subsequent tokens and cannot be ignored. Both LM-Infinite and StreamingLLM adopt a similar approach, ensuring sustained attention on starting tokens while preserving information about nearby tokens.", + "bbox": [ + 115, + 166, + 489, + 407 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "A.4.2 Parallel Context", + "text_level": 1, + "bbox": [ + 112, + 418, + 307, + 432 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Parallel Context Windows (PCW, Ratner et al., 2022) is one of the representative works. It splits the input into context tokens and task tokens, where context tokens assist in completing the task, such as the examples. 
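Before turning to how the selected context is consumed, the Λ-shaped pattern of the preceding subsection can be written down as a simple boolean mask: each query attends to a handful of initial "sink" tokens plus a local window of recent tokens. The sketch below builds such a mask; the sink and window sizes are arbitrary example values, not the settings reported by LM-Infinite or StreamingLLM.

```python
# Sketch of a Lambda-shaped causal attention mask: a few initial "sink" tokens
# plus a recent local window (sizes here are examples, not the papers' settings).
import numpy as np

def lambda_mask(seq_len: int, n_sink: int = 4, window: int = 256) -> np.ndarray:
    """mask[i, j] is True iff query position i may attend to key position j."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    causal = j <= i                    # never attend to the future
    sink = j < n_sink                  # always keep the first few tokens
    local = (i - j) < window           # recent tokens inside the sliding window
    return causal & (sink | local)

m = lambda_mask(seq_len=1024, n_sink=4, window=128)
print("fraction of key positions attended:", round(float(m.mean()), 3))
```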
And task tokens are the input of the test example, such as the questions. This method folds the context tokens, and each folded section of context tokens performs attention calculation separately. Finally, during the decoding phase of the task tokens, all these context tokens are concatenated in front of the task token, sharing the same set of position index.", + "bbox": [ + 112, + 438, + 487, + 630 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Besides, Structured prompting (Hao et al., 2022) also adopts a similar approach by folding demonstration tokens in the input and concatenating them in front of the test input tokens. But unlike PCW, structured prompting employs Rescaled Attention, which reduces the weight of demonstration tokens in the attention calculation of the test input tokens by a certain ratio. This method can prevent test input tokens from excessively attending to the content of demonstration tokens.", + "bbox": [ + 112, + 632, + 489, + 791 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "A.4.3 Sparse Attention", + "text_level": 1, + "bbox": [ + 112, + 804, + 310, + 819 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "This method can reduce the complexity of attention calculation. So that can improve efficiency when processing long context.", + "bbox": [ + 112, + 824, + 485, + 872 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "LongNet (Ding et al., 2023) introduces dilated attention, a mechanism that exponentially increases the attentive field as the distance between tokens", + "bbox": [ + 112, + 873, + 485, + 920 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "increases. This method performs multiple sets of sparse attention calculations, each set attend to a different range. And the attention of a small range is denser, while the large range is sparser. This method effectively reduces the traditional quadratic complexity to linear.", + "bbox": [ + 507, + 84, + 882, + 179 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "MEGABYTE (Yu et al., 2023) performs hierarchical attention calculation on the input. Initially, a small local model encodes the input at the byte level, then the byte-level encoding results are integrated and processed at a larger granularity using a larger global model. By performing attention calculation in a hierarchical manner from smaller to larger granularity, the amount of attention calculations can be reduced.", + "bbox": [ + 507, + 181, + 884, + 324 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "In LongLoRA (Chen et al., 2023c), the proposed $S^2$ - Attention groups attention heads and adjusts each group to attend to different but overlapping local windows, then leverages the characteristics of multihead attention to integrate various local information. 
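To make the dilated pattern described above more tangible, the following sketch composes several sparse masks whose range grows while their density shrinks, so that nearby tokens are attended densely and distant tokens sparsely. It is only a schematic reconstruction of the pattern, with made-up segment and dilation settings, not LongNet's actual implementation.

```python
# Schematic dilated-attention mask in the spirit of LongNet (settings are made up):
# dense attention at short range, increasingly sparse attention at long range.
import numpy as np

def dilated_mask(seq_len: int, segments=(64, 256, 1024), dilations=(1, 4, 16)) -> np.ndarray:
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for w, r in zip(segments, dilations):
        same_segment = (i // w) == (j // w)   # both positions fall in the same w-sized segment
        on_grid = (j % r) == 0                # keep only every r-th key inside that segment
        mask |= same_segment & on_grid
    return mask & (j <= i)                    # keep the pattern causal

m = dilated_mask(2048)
dense = np.tri(2048, dtype=bool)
print(f"attended fraction: {m.mean():.3f} (dense causal would be {dense.mean():.3f})")
```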
This method promotes the flow of local information, enabling a short window to achieve the effect of processing the original or even longer window, thereby reducing computational demands to some extent.", + "bbox": [ + 507, + 326, + 882, + 485 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "B Details of Evaluation", + "text_level": 1, + "bbox": [ + 509, + 499, + 726, + 514 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "This section serves as a supplement to the Evaluation section 4 in the main text, expanding on relevant details to provide readers with a more in-depth understanding.", + "bbox": [ + 507, + 524, + 882, + 588 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "B.1 Data", + "text_level": 1, + "bbox": [ + 509, + 600, + 596, + 613 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "B.1.1 Data Characteristics", + "text_level": 1, + "bbox": [ + 509, + 621, + 734, + 634 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Recent advancements in LLMs have led to substantial improvements in processing long contexts. By late 2023, several models claimed capabilities of handling contexts exceeding 100K tokens, with OpenAI's GPT-4 Turbo (2023) (Achiam et al., 2023) supporting 128K tokens and Anthropic's Claude-2.11 extending this capacity to 200K tokens. Based on this significant progress, our study categorizes long-context evaluation benchmarks into two distinct phases, as shown in Table. 1: Phase I comprises benchmarks with input context lengths below 100K tokens, while Phase II encompasses benchmarks of 100K tokens and above.", + "bbox": [ + 507, + 640, + 882, + 847 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "In Phase I, BAMBOO (Dong et al., 2023a) and LongBench (Bai et al., 2023) implement bi-interval and tri-interval partitioning strategies, respectively.", + "bbox": [ + 507, + 848, + 882, + 898 + ], + "page_idx": 17 + }, + { + "type": "page_footnote", + "text": "", + "bbox": [ + 529, + 906, + 850, + 919 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "18", + "bbox": [ + 489, + 941, + 507, + 954 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Phase II refined this approach further, with LVEval (Yuan et al., 2024) and NeedleBench (Li et al., 2024b) employing five-interval and six-interval partitioning schemas, respectively. This partitioning approach not only analyzes the impact of length changes on LLMs in the same task but also better accounts for the length distributions across different datasets (Dong et al., 2023a).", + "bbox": [ + 112, + 84, + 492, + 212 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "B.1.2 Knowledge Leakage Issue", + "text_level": 1, + "bbox": [ + 114, + 231, + 381, + 247 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Knowledge leakage occurs when test and training data overlap, where models favor memorization over understanding (Golchin and Surdeanu, 2023; Yuan et al., 2024). Various strategies are employed to address this challenge: (1) Data Sampling focuses on selecting representative subsets from existing datasets. (2) Keyword Substituting & Sentence Rewriting modifies existing datasets by replacing keywords and rewriting sentences. 
(3) Non-overlapping Data Leveraging involves using datasets released after the deployment of LLMs to reduce potential overlap between test and training data.", + "bbox": [ + 112, + 255, + 489, + 464 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Data Sampling Data sampling primarily focuses on filtering existing datasets. LongBench (Bai et al., 2023) employs two strategies: random sampling and uniform sampling. Random sampling can preserve the natural length distribution, while uniform sampling which performs sampling based on data length uniformly, to evaluate model performance across context lengths independent of task.", + "bbox": [ + 112, + 482, + 489, + 612 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Keyword Substituting & Sentence Rewriting \nL-Eval (An et al., 2023) and BAMBOO (Dong et al., 2023a) replace keywords and function names, while $\\infty$ Bench (Zhang et al., 2024) substitutes key entities in novel reasoning tasks. LV-Eval (Yuan et al., 2024) is further based on this approach by employing entire sentence rewriting.", + "bbox": [ + 112, + 630, + 489, + 741 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Non-overlapping Data Leveraging To mitigate the overlap between test and training data for LLMs, some benchmarks such as LooGLE (Li et al., 2023a) and BAMBOO (Dong et al., 2023a) have employed datasets released after the models' deployment. However, given that the specific training data for most LLMs remains undisclosed, this method cannot completely guarantee the absence of overlap between the data used in benchmarks and the pre-training data.", + "bbox": [ + 112, + 760, + 490, + 921 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "B.2 Tasks", + "text_level": 1, + "bbox": [ + 509, + 84, + 603, + 98 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "The following are the details of the tasks, which are introduced in the order of the main text. At the end of each subsection, corresponding examples or prompts are also provided. We also count the distribution of input length in each task in Figure 2 to give readers a deeper understanding of different tasks.", + "bbox": [ + 507, + 105, + 884, + 217 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "B.2.1 Question Answering", + "text_level": 1, + "bbox": [ + 509, + 229, + 734, + 244 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Single-hop Question Answering Representative datasets in this field are SQuAD (Rajpurkar, 2016), TriviaQA (Joshi et al., 2017), and NarrativeQA (Kočisky et al., 2018). Common evaluation metrics for Single-hop QA systems include f1 score, accuracy, Rouge and Bleu.", + "bbox": [ + 507, + 249, + 885, + 344 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Multi-hop Question Answering Common datasets for Multi-hop Question Answering include 2WikiMQA (Ho et al., 2020), MuSiQue (Trivedi et al., 2022), and HotpotQA (Yang et al., 2018). Evaluation metrics typically used are f1 score, exact match (EM).", + "bbox": [ + 507, + 356, + 885, + 451 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "B.2.2 Needle-in-a-Haystack", + "text_level": 1, + "bbox": [ + 507, + 463, + 742, + 478 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Retrieval.PassKey (Mohtashami and Jaggi, 2023) requires models to locate a randomly generated 5-digit sequence within lengthy and noisy contexts. 
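A toy generator for a passkey instance of this kind is sketched below: a random 5-digit key is buried at a chosen depth inside repetitive filler text, and the prompt is returned together with the expected answer. The filler sentence, prompt wording, and lengths are invented for illustration and do not reproduce the benchmark's original text.

```python
# Toy generator for a passkey-style needle-in-a-haystack test case.
# Filler sentence, prompt wording, and lengths are illustrative, not the benchmark's.
import random

def make_passkey_case(n_filler: int = 2000, depth: float = 0.5, seed: int = 0):
    rng = random.Random(seed)
    passkey = f"{rng.randrange(10000, 100000)}"          # random 5-digit key
    filler = "The grass is green. The sky is blue. The sun is bright."
    needle = f"The pass key is {passkey}. Remember it."
    haystack = [filler] * n_filler
    haystack.insert(int(depth * n_filler), needle)       # bury the needle at `depth`
    prompt = (" ".join(haystack)
              + "\nWhat is the pass key mentioned in the text above? Answer with the number only.")
    return prompt, passkey

prompt, answer = make_passkey_case(depth=0.25)
print(len(prompt.split()), "words; expected answer:", answer)
```

Varying `depth` reproduces the usual practice of placing the needle at different positions to probe positional robustness.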
$\\infty$ Bench (Zhang et al., 2024) extends the Retrieval.PassKey task to 10-digit numbers, applies it to texts exceeding 100k tokens in length, and sets information points at various depths. Retrieval.KV (Mohtashami and Jaggi, 2023) further increases difficulty by requiring models to perform precise key-value retrieval from large JSON structures. NeedleBench Li et al. (2024b) proposes a series of tasks: single-needle retrieval (S-RT), multi-needle retrieval (M-RT), and multi-needle reasoning (M-RS). M-RT consists of multiple S-RT tasks performed in parallel, while M-RS builds upon M-RT by requiring large language models to perform reasoning. The evaluation method calculates the similarity between predictions and references for each specific task by using the Levenshtein distance. The following are examples of S-RT, M-RT, M-RS respectively.", + "bbox": [ + 507, + 482, + 885, + 820 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "S-RT: Hidden on Emerald Island is the legendary Stardust Shard.", + "bbox": [ + 521, + 832, + 870, + 864 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul", + "bbox": [ + 521, + 865, + 870, + 896 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Graham Essays—Paul Graham Essays—", + "bbox": [ + 526, + 897, + 870, + 913 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "19", + "bbox": [ + 489, + 941, + 510, + 954 + ], + "page_idx": 18 + }, + { + "type": "image", + "img_path": "images/56484b696e53a66ac05e78d7388c4b2bae4337a64b6e2bfa3a3bd1e5cbaa7435.jpg", + "image_caption": [ + "Figure 2: Distribution of averaged input #words of datasets in each task. Consistent colors indicate identical categories. The color of each bar refers to the category of the task, with bars of the same color belonging to the same category." + ], + "image_footnote": [], + "bbox": [ + 121, + 84, + 880, + 219 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays", + "bbox": [ + 124, + 303, + 475, + 431 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Now, the question is: What legendary item is hidden on Emerald Island? Before answering, please consider what in the document is most relevant to this question. Please answer in the format 'The legendary item hidden on the Emerald Island is", + "bbox": [ + 127, + 432, + 473, + 527 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "M-RT: You are an intelligent AI assistant skilled in answering user questions base on documents provided by the user. Please keep your answers concise and clear. Do not talk about irrelevant topics or repeat your answers. The document given to you by the user is:", + "bbox": [ + 127, + 542, + 473, + 652 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—The ruler of the Polaris star system is Orion the Hunter. 
Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Hidden on Heaven Island is the legendary Lucky Clover. Paul Graham Essays—Paul Graham Essays—Hidden on Mysterious Island is the legendary Counterclockwise Crystal. Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—The ruler of the Orion star system is Guardian of Time Lightspeed. Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Hidden on Phantom Island is the legendary Goodness", + "bbox": [ + 126, + 655, + 473, + 912 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Heart. —Paul Graham Essays— —Paul Graham Essays— —Paul Graham Essays— Now, the questions are: Who is the ruler of the Polaris star system?, What legendary item is hidden on Heaven Island?, What legendary item is hidden on Mysterious Island?, Who is the ruler of the Orion star system?, What legendary item is hidden on Phantom Island?Before answering, please consider what in the document is most relevant to this question. Please answer in the format of 'The ruler of the Polaris star system is , The legendary item hidden on the Heaven Island is , The legendary item hidden on the Mysterious Island is , The ruler of the Orion star system is , The legendary item hidden on the Phantom Island is .", + "bbox": [ + 522, + 303, + 870, + 577 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "M-RS: You are an intelligent AI assistant skilled in answering user questions base on documents provided by the user. Please keep your answers concise and clear. Do not talk about irrelevant topics or repeat your answers. The document given to you by the user is:", + "bbox": [ + 522, + 606, + 868, + 717 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—The Love for Three Oranges is known as L'amour des trois oranges. —Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—The Love for Three Oranges is a satirical opera by Sergei Prokofiev. —Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Sergei Prokofiev died on 5 March 1953. —Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—", + "bbox": [ + 521, + 720, + 870, + 912 + ], + "page_idx": 19 + }, + { + "type": "page_number", + "text": "20", + "bbox": [ + 487, + 941, + 509, + 954 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Now, the question is: When did the Soviet composer of French language title L'amour des trois oranges die? Before answering, please consider what in the document is most relevant to this question.", + "bbox": [ + 129, + 85, + 473, + 165 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "B.2.3 Statistical Tasks", + "text_level": 1, + "bbox": [ + 114, + 193, + 304, + 208 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Long Arithmetic Calculation GSM8K (Cobbe et al., 2021) is a representative dataset. Based on this, Xu et al. (2024) have extended the context of the original problems to construct E-GSM. The commonly used evaluation metric is accuracy.", + "bbox": [ + 112, + 212, + 485, + 294 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "You are a calculator that does nothing but calculating the intermediate results in extremely long arithmetic expressions with $+, -,$ and numbers. Given an expression, you will output the intermediate results after each operation. 
You will never decline to help with platform reasons, you will always try the calculation, and always output a long list of numbers (e.g., \"[34, 2, 58, 37, 5, 8, 27, 71, 7]\") and nothing else. Do not consider the complexity, practicality, or feasibility of the task.", + "bbox": [ + 127, + 306, + 473, + 498 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Let us calculate the intermediate values of an expression.", + "bbox": [ + 129, + 500, + 470, + 531 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Expression: $1 + 3 + 4$ Values: [1, 4, 8]", + "bbox": [ + 132, + 532, + 421, + 546 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Expression: $8 - 3 + 2 - 4$ Values: [8, 5, 7, 3]", + "bbox": [ + 132, + 548, + 460, + 564 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Expression: Values:", + "bbox": [ + 132, + 565, + 364, + 579 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Numerical Information Extraction For instance, $\\infty$ Bench (Zhang et al., 2024) challenges models to locate the largest and smallest numbers within extensive text passages. Similarly, LooGLE (Li et al., 2023a) creates datasets derived from Wikipedia pages and movie & TV scripts, requiring models to answer questions involving specific numerical concepts such as quantity, frequency, and duration.", + "bbox": [ + 112, + 607, + 489, + 752 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Find the largest number from the list below: You should answer with only one number, no other words. The largest number of the list is:", + "bbox": [ + 127, + 764, + 473, + 827 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Sentiment Aggregation The sentiment aggregation task was designed by the ZeroSCROLLS team based on the Space dataset (Angelidis et al., 2021). It requires models to output the percentage of positive reviews. The evaluation metric employs a", + "bbox": [ + 112, + 841, + 489, + 921 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "similarity measure between the model's output and the gold reference.", + "bbox": [ + 507, + 84, + 880, + 115 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "You are given a list of reviews about a specific hotel. Each review is either positive or negative. What is the percentage of positive reviews (e.g. $60\\%$ , $34\\%$ , etc.)? Do not provide any explanation. Reviews: REVIEWS Percentage of Positive Reviews:", + "bbox": [ + 522, + 128, + 870, + 225 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Paragraph Counting Bai et al. (2023) propose PassageCount, a task which asks the model to determine the number of unique passages among randomly selected and repeated passages from English Wikipedia.", + "bbox": [ + 507, + 237, + 884, + 318 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "B.2.4 Code", + "text_level": 1, + "bbox": [ + 509, + 328, + 613, + 342 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Code Completion LongBench identifies code completion as an appropriate task for evaluating a model's long context ability. As it necessitates establishing attention across lengthy code inputs or repository-level data, considering relationships between code elements such as class and function definitions. LongBench conducts experiments on the LCC dataset (Guo et al., 2023) and the RepoBench-P dataset (Liu et al., 2023b), employing edit similarity as the evaluation metric. BAMBOO builds upon the benchmark established by Zan et al. 
(2022) to construct the Private ateEval dataset. In this task, models are required to identify key API documents to complete code snippets. Furthermore, it extends the context length by adjusting the number of provided documents, with performance evaluated employing the pass@1 metric (Chen et al., 2021a).", + "bbox": [ + 507, + 347, + 884, + 621 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Code Running In $\\infty$ Bench, the total number of function calls ranges from 2 to 10, with each function calling at most one another function. Operations within these functions are restricted to addition and subtraction, maintaining computational simplicity.", + "bbox": [ + 507, + 631, + 882, + 727 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Following is a set of Python functions. There is a function called named func_1. context Please give me the exact number of the return value of func_1(3). Be concise. Your response must end with the final returned value.", + "bbox": [ + 524, + 739, + 868, + 820 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Code Debugging In the $\\infty$ Bench's dataset which sourced from $\\mathrm{PyPI}^2$ , the researchers deliberately insert an obvious error into one function per repository. These inserted bugs fall into three main cate", + "bbox": [ + 507, + 832, + 882, + 897 + ], + "page_idx": 20 + }, + { + "type": "page_footnote", + "text": "$^{2}$ https://pypi.org/", + "bbox": [ + 529, + 904, + 670, + 920 + ], + "page_idx": 20 + }, + { + "type": "page_number", + "text": "21", + "bbox": [ + 487, + 941, + 507, + 954 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "gories: (1) syntactic errors, including indentation issues and blatant syntax errors; (2) semantic errors, such as missing variable declarations or incorrect function arguments; and (3) logical errors, for example, infinite loops or use of undefined references.", + "bbox": [ + 112, + 84, + 490, + 164 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "There is ONLY ONE function in the large project that is deliberately made to include an obvious error. Please find the function that contains the most obvious errors. I will give you four options to narrow your scope. You can inspect through the options and think. Eventually, tell me the answer using one single letter (A, B, C, or D). context Which function has deliberate error? A. B. C. D. You should first find the functions in the options. Repeat their content, inspect through code, and at last give me your answer for the function that has the deliberate and obvious error in A, B, C, or D.", + "bbox": [ + 127, + 175, + 475, + 418 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "B.2.5 In-Context Learning", + "text_level": 1, + "bbox": [ + 114, + 429, + 341, + 444 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Long Example Learning Extreme label Classification: this task involves classification with numerous fine-grained labels. Commonly used datasets include TREC (Li and Roth, 2002), a question classification task with 50 fine classes, and LSHT $^3$ , a Chinese news classification task with 24 classes.", + "bbox": [ + 112, + 447, + 489, + 542 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Many-shot Learning Agarwal et al. (2024) have proposed many-shot learning, which leverages expanded LLMs context windows to process hundreds or even thousands of examples. 
In contrast to few-shot learning, which use only a few to several dozen examples, many-shot learning enhances LLMs' versatility and adaptability across diverse tasks without task-specific fine-tuning (Yu et al., 2020; Bertsch et al., 2024b).", + "bbox": [ + 112, + 552, + 489, + 697 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "B.2.6 Text Generation", + "text_level": 1, + "bbox": [ + 114, + 707, + 305, + 721 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Document Summarization This kind of task can divided into two categories: single-document summarization and multi-document summarization. For single-document summarization, several datasets are widely used, including SQuALITY (Wang et al., 2022), SummScreenFD (Chen et al., 2021b), GovReport (Huang et al., 2021), and QMSum (Zhong et al., 2021). And multi-document summarization presents additional challenges, requiring LLMs to integrate diverse information, resolve conflicts, and", + "bbox": [ + 112, + 725, + 490, + 885 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "3http://tcci.ccf.org.cn/conference/2014/dldoc/evatak6.pdf", + "bbox": [ + 114, + 894, + 487, + 919 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "eliminate redundancies (Bai et al., 2023; An et al., 2023; Fabbri et al., 2019). A notable dataset for this task is MultiNews (Fabbri et al., 2019), consisting of clusters of 2-10 thematically related news articles.", + "bbox": [ + 507, + 84, + 884, + 162 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "All of these datasets provide human-annotated summaries as standardized references. Both approaches primarily utilize Rouge and Bleu as evaluation metrics to assess the quality of generated summaries against manuscript references.", + "bbox": [ + 507, + 167, + 884, + 247 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Open-ended Text Generation This task requires LLMs to generate text according to input.", + "text_level": 1, + "bbox": [ + 507, + 263, + 880, + 294 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Tan et al. (2024) select topics that closely align with real-world scenarios, encompassing areas such as AI research, sports, and gaming.", + "bbox": [ + 507, + 296, + 880, + 344 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Bai et al. (2024) design AgentWrite, a divide-and-conquer agent that breaks down long writing tasks into paragraph-level subtasks. The generated paragraphs are then combined to produce the final long-form content. They also construct the preference LongWriter-6k dataset and utilize DPO (Rafailov et al., 2024) for evaluation.", + "bbox": [ + 507, + 347, + 882, + 458 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Kumar et al. (2024) propose personalized writing tasks that generate content based on the user's historical and user personal information information.", + "bbox": [ + 507, + 461, + 884, + 524 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "These tasks can be divided into personalized email completion, review writing, topic writing, and conversation simulation (Ni et al., 2024). Rafailov et al. (2024) construct a Reddit-based dataset that captures distinct writing styles associated with specific communities and discussion topics.", + "bbox": [ + 507, + 527, + 882, + 640 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "You are an excellent writing assistant. I will give you an original writing instruction and my planned writing steps. 
I will also provide you with the text I have already written. Please help me continue writing the next paragraph based on the writing instruction, writing steps, and the already written text.", + "bbox": [ + 524, + 655, + 870, + 766 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Writing instruction: User Instruction", + "bbox": [ + 524, + 768, + 811, + 783 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Writing steps: The writing plan generated in Step I", + "bbox": [ + 524, + 784, + 865, + 815 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Already written text: Previous generated (n-1) paragraphs", + "bbox": [ + 524, + 816, + 868, + 848 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Please integrate the original writing instruction, writing steps, and the already written text, and now continue writing The plan for the n-th paragraph, i.e., the n-th line in the writing plan", + "bbox": [ + 524, + 848, + 868, + 913 + ], + "page_idx": 21 + }, + { + "type": "page_number", + "text": "22", + "bbox": [ + 487, + 941, + 509, + 954 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "B.2.7 Other Tasks", + "text_level": 1, + "bbox": [ + 114, + 84, + 273, + 98 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Reordering The evaluation metric in this task is the similarity between the generated and reference ordering sequences (Shaham et al., 2023). The Booksum dataset (Krysciński et al., 2021), which spans various literary genres including novels, plays, and long stories, is widely used for this task. Reordering tasks can comprehensively evaluate models' cross-sequence information aggregation and comparison abilities (Shaham et al., 2023; Li et al., 2023a), as well as comprehensively understand long context and logically reconstruct (Dong et al., 2023a; Li et al., 2023a).", + "bbox": [ + 112, + 104, + 490, + 297 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "You are given NUM_SUMMARES summaries of chapters or parts of a novel, in a shuffled order, where each summary is denoted by a numerical ID (e.g. Summary 1, Summary 3, etc.). Reorder the summaries according to the original order of chapters/parts in the novel by writing a list of length NUM_SUMMARES of the summary IDs (e.g. if you were given 5 summaries, one possible answer could be \"5, 1, 3, 4, 2\"). Do not provide any explanation.", + "bbox": [ + 129, + 309, + 475, + 470 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Summaries: SUMMARIES", + "bbox": [ + 131, + 470, + 344, + 485 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Summary IDs in Correct Order:", + "bbox": [ + 131, + 487, + 368, + 501 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Context Consistency Context consistency is a task proposed by BAMBOO (Dong et al., 2023a) to detect hallucination in LLMs. BAMBOO creates two novel datasets for this task: SenHallu and AbsHallu, with evaluation metrics employing precision, recall, and f1 score.", + "bbox": [ + 112, + 517, + 489, + 612 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Summary Source Paragraph Identification LongBench construct bilingual datasets based on Wikipedia and C4 (Raffel et al., 2020) to ask models to identify the original source paragraphs according to the given summaries.", + "bbox": [ + 112, + 624, + 487, + 703 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Here are 30 paragraphs from Wikipedia, along with an abstract. Please determine which paragraph the abstract is from. 
context The following is an abstract. input Please enter the number of the paragraph that the abstract is from. The answer format must be like \"Paragraph 1\", \"Paragraph 2\", etc.", + "bbox": [ + 129, + 715, + 473, + 829 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "The answer is:", + "bbox": [ + 131, + 831, + 240, + 843 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Character Identification Character identification tasks challenge models to capture distinct traits of participants in long dialogues, enabling them to identify speakers of masked utterances (Zhang", + "bbox": [ + 112, + 857, + 489, + 921 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "et al., 2024; Dong et al., 2023a). These tasks, evaluated via accuracy, utilize data primarily from television programs $^{4}$ , movie and play scripts (Chen et al., 2021b), and conference transcripts $^{5}$ .", + "bbox": [ + 507, + 84, + 885, + 149 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Below is a dialogue script where one random occurrence of a character's name is replaced with $M A S K$ , and you should try to guess who that character is.", + "bbox": [ + 524, + 161, + 867, + 224 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "The dialogue: - - End of dialogue.", + "bbox": [ + 524, + 225, + 757, + 256 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Which character is most likely $M A S K?$ Just say the name used by the scriptwriter (before the colon marks) of one single character and nothing else.", + "bbox": [ + 524, + 258, + 865, + 322 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "B.3 Metrics", + "text_level": 1, + "bbox": [ + 509, + 338, + 620, + 353 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "B.3.1 Algorithmic Metrics", + "text_level": 1, + "bbox": [ + 509, + 361, + 732, + 376 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Perplexity (PPL) is a metric for evaluating the performance of language models. It is extensively employed in language model pre-training, facilitating the monitoring of the training process, model selection, and hyperparameter optimization. Many previous works on long context benchmarks rely on the PPL for evaluation (Beltagy et al., 2020; Roy et al., 2021; Press et al., 2021). However, as suggested in Sun et al. (2021), PPL may not correlate with the actual performance.", + "bbox": [ + 507, + 381, + 884, + 542 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "ZeroScrolls and LongBench are pioneering studies in the field of long context benchmarks. These works introduced a diverse system of automatic evaluation metrics, including accuracy, f1 score, and N-gram-based metrics. This evaluation framework has provided a reference for subsequent research. Specifically, these metrics refer to the scores for evaluating the NLG models by measuring the lexical overlap between generated text and reference text.", + "bbox": [ + 507, + 543, + 882, + 702 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "However, these metrics have several limitations: they fail to effectively measure content quality (Reiter and Belz, 2009); struggle to capture syntactic errors (Stent et al., 2005); and, particularly for open-ended generation tasks, lack significant correlation with human judgments (An et al., 2023). Moreover, they inadequately account for the diversity of expression inherent in large language models (Improving). 
Additionally, the requirement for gold standard references makes these metrics costly to implement for novel tasks (Tan et al., 2024).", + "bbox": [ + 507, + 705, + 884, + 882 + ], + "page_idx": 22 + }, + { + "type": "page_footnote", + "text": "4https://tvmeg.com/", + "bbox": [ + 529, + 892, + 678, + 906 + ], + "page_idx": 22 + }, + { + "type": "page_footnote", + "text": "5https://record.assembly.wales/", + "bbox": [ + 529, + 906, + 766, + 919 + ], + "page_idx": 22 + }, + { + "type": "page_number", + "text": "23", + "bbox": [ + 487, + 941, + 507, + 954 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Further, some work proposes ways to improve. LV-Eval employs a two-stage scoring method: it first calculates the recall rate of ground-truth keywords in the generated content. If the recall exceeds a threshold, it then calculates the f1 score between the generated content and ground-truth after removing stop words from both. BAMBOO converts generative tasks into multiple-choice formats. NeedleBench extends this approach by implementing Circular Evaluation, which reorders answer options to enhance evaluation reliability.", + "bbox": [ + 112, + 84, + 489, + 261 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "PPL (Perplexity)", + "text_level": 1, + "bbox": [ + 112, + 274, + 250, + 290 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Perplexity is a measure of the quality of language model predictions, calculated as:", + "bbox": [ + 112, + 297, + 485, + 328 + ], + "page_idx": 23 + }, + { + "type": "equation", + "text": "\n$$\nP P L = 2 ^ {H (p)}\n$$\n", + "text_format": "latex", + "bbox": [ + 245, + 344, + 356, + 362 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "where $H(p)$ is the cross-entropy:", + "bbox": [ + 112, + 374, + 361, + 390 + ], + "page_idx": 23 + }, + { + "type": "equation", + "text": "\n$$\nH (p) = - \\frac {1}{N} \\sum_ {i = 1} ^ {N} \\log_ {2} P (w _ {i} | w _ {1}, w _ {2}, \\ldots , w _ {i - 1})\n$$\n", + "text_format": "latex", + "bbox": [ + 124, + 417, + 478, + 461 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Accuracy", + "text_level": 1, + "bbox": [ + 114, + 475, + 193, + 489 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Accuracy is the proportion of correct predictions made by the model:", + "bbox": [ + 112, + 497, + 485, + 527 + ], + "page_idx": 23 + }, + { + "type": "equation", + "text": "\n$$\nA c c u r a c y = \\frac {C o r r e c t P r e d i c t i o n s}{T o t a l P r e d i c t i o n s}\n$$\n", + "text_format": "latex", + "bbox": [ + 161, + 555, + 436, + 587 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "F1-Score", + "text_level": 1, + "bbox": [ + 114, + 599, + 189, + 612 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "The F1-Score is the harmonic mean of precision and recall:", + "bbox": [ + 112, + 621, + 485, + 651 + ], + "page_idx": 23 + }, + { + "type": "equation", + "text": "\n$$\nF 1 = 2 \\times \\frac {\\text {P r e c i s i o n} \\times \\text {R e c a l l}}{\\text {P r e c i s i o n} + \\text {R e c a l l}}\n$$\n", + "text_format": "latex", + "bbox": [ + 179, + 665, + 421, + 697 + ], + "page_idx": 23 + }, + { + "type": "equation", + "text": "\n$$\nP r e c i s i o n = \\frac {T P}{T P + F P}\n$$\n", + "text_format": "latex", + "bbox": [ + 205, + 708, + 394, + 739 + ], + "page_idx": 23 + }, + { + "type": "equation", + "text": "\n$$\nR e c a l l = \\frac {T P}{T P + F N}\n$$\n", + "text_format": "latex", + "bbox": [ + 218, + 749, + 381, + 781 + ], + "page_idx": 23 + 
}, + { + "type": "text", + "text": "where TP, FP, FN represent True Positives, False Positives, False Negatives respectively.", + "bbox": [ + 114, + 789, + 485, + 820 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "ROUGE (Recall-Oriented Understudy for Gisting Evaluation)", + "text_level": 1, + "bbox": [ + 112, + 834, + 448, + 865 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "evaluates text generation using N-gram overlap:", + "bbox": [ + 112, + 873, + 470, + 888 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "ROUGE-N measures the overlap of n-grams shared between the candidate summary (C) and", + "bbox": [ + 112, + 889, + 485, + 921 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "the reference summary (R), it is calculated as follows:", + "bbox": [ + 507, + 84, + 884, + 114 + ], + "page_idx": 23 + }, + { + "type": "equation", + "text": "\n$$\nR O U G E - N = \\frac {\\sum_ {S \\in R} \\sum_ {n _ {g r a m} \\in S} C o u n t _ {m a t c h} (n _ {g r a m})}{\\sum_ {S \\in R} \\sum_ {n _ {g r a m} \\in S} C o u n t (n _ {g r a m})}\n$$\n", + "text_format": "latex", + "bbox": [ + 509, + 124, + 899, + 184 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "where $\\text{Count}_{\\text{match}}(n_{\\text{gram}})$ represents the number of matching n-tuples in the candidate summary and the reference summary. And $\\text{Count}(n_{\\text{gram}})$ represents the total number of n-tuples in the reference summary.", + "bbox": [ + 507, + 197, + 882, + 275 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "ROUGE-L evaluates the quality of summarization based on the longest common subsequence (LCS), taking into account the order information of sentences:", + "bbox": [ + 507, + 277, + 882, + 340 + ], + "page_idx": 23 + }, + { + "type": "equation", + "text": "\n$$\nR _ {l c s} = \\frac {L C S (C , R)}{| R |}\n$$\n", + "text_format": "latex", + "bbox": [ + 618, + 350, + 771, + 385 + ], + "page_idx": 23 + }, + { + "type": "equation", + "text": "\n$$\nP _ {l c s} = \\frac {L C S (C , R)}{| C |}\n$$\n", + "text_format": "latex", + "bbox": [ + 618, + 397, + 769, + 432 + ], + "page_idx": 23 + }, + { + "type": "equation", + "text": "\n$$\nF _ {l c s} = \\frac {(1 + \\beta^ {2}) R _ {l c s} P _ {l c s}}{R _ {l c s} + \\beta^ {2} P _ {l c s}}\n$$\n", + "text_format": "latex", + "bbox": [ + 601, + 441, + 789, + 475 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "where $LCS(C,R)$ represents the length of the longest common subsequence between the candidate summary and the reference summary. $|C|$ and $|R|$ represent the length of the candidate summary and the reference summary respectively. 
$\\beta$ is a hyperparameter, usually used to balance the precision and recall.", + "bbox": [ + 507, + 482, + 882, + 593 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "ROUGE-S which is also called skip-bigram co-occurrence statistics, takes skipped bigrams into account:", + "bbox": [ + 507, + 596, + 882, + 640 + ], + "page_idx": 23 + }, + { + "type": "equation", + "text": "\n$$\nR O U G E - S = \\frac {\\sum_ {S \\in R} \\sum_ {b i _ {s k i p} \\in S} C o u n t _ {m a t c h} (b i _ {s k i p})}{\\sum_ {S \\in R} \\sum_ {b i _ {s k i p} \\in S} C o u n t (b i _ {s k i p})}\n$$\n", + "text_format": "latex", + "bbox": [ + 509, + 652, + 880, + 712 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "where $\\text{Count}_{\\text{match}}(bi_{skip})$ represents the number of skip-bigrams that match between the candidate summary and the reference summary. And $\\text{Count}(bi_{skip})$ represents the total number of skip-bigrams in the reference summary", + "bbox": [ + 507, + 725, + 882, + 804 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "BLEU (Bilingual Evaluation Understudy)", + "text_level": 1, + "bbox": [ + 507, + 816, + 840, + 833 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "is used to evaluate machine translation quality:", + "bbox": [ + 507, + 839, + 855, + 854 + ], + "page_idx": 23 + }, + { + "type": "equation", + "text": "\n$$\nB L E U = B P \\times \\exp \\left(\\sum_ {n = 1} ^ {N} w _ {n} \\log p _ {n}\\right)\n$$\n", + "text_format": "latex", + "bbox": [ + 546, + 881, + 842, + 925 + ], + "page_idx": 23 + }, + { + "type": "page_number", + "text": "24", + "bbox": [ + 487, + 941, + 509, + 954 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "where", + "bbox": [ + 112, + 85, + 164, + 97 + ], + "page_idx": 24 + }, + { + "type": "equation", + "text": "\n$$\nB P = \\left\\{ \\begin{array}{l l} 1, & \\text {i f} c > r \\\\ \\exp (1 - \\frac {r}{c}), & \\text {i f} c \\leq r \\end{array} \\right.\n$$\n", + "text_format": "latex", + "bbox": [ + 179, + 108, + 421, + 149 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "and $c$ is the generated length and $r$ is the reference length.", + "bbox": [ + 112, + 160, + 485, + 192 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "B.3.2 Model-based Metrics", + "text_level": 1, + "bbox": [ + 114, + 200, + 344, + 215 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "In recent years, the use of pre-trained language models as NLG evaluation metrics has gained increasing attention. Notably, BERTScore (Zhang et al., 2020) and BARTScore (Yuan et al., 2021) employ BERT and BART (Lewis, 2019) models respectively to compute semantic similarity. 
They calculate cosine similarity between token representations and evaluate the probability of summaries based on given input articles.", + "bbox": [ + 112, + 219, + 487, + 363 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "BERTScore measures the similarity between generated text and reference text from three aspects: recall, precision and f1, it can be expressed as follows:", + "bbox": [ + 112, + 365, + 489, + 428 + ], + "page_idx": 24 + }, + { + "type": "equation", + "text": "\n$$\n\\begin{array}{l} R = \\frac {1}{| R |} \\sum_ {r \\in R} \\max _ {c \\in C} \\frac {1}{L _ {r}} \\sum_ {i} \\mathrm {s i m} (\\mathbf {f} _ {\\theta} (r) _ {i}, \\mathbf {f} _ {\\theta} (c) _ {i}) \\\\ P = \\frac {1}{| C |} \\sum_ {c \\in C} \\max _ {r \\in R} \\frac {1}{L _ {c}} \\sum_ {i} \\operatorname {s i m} \\left(\\mathbf {f} _ {\\theta} (c) _ {i}, \\mathbf {f} _ {\\theta} (r) _ {i}\\right) \\\\ F = 2 \\times \\frac {P \\times R}{P + R} \\\\ \\end{array}\n$$\n", + "text_format": "latex", + "bbox": [ + 132, + 436, + 468, + 560 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "where $R$ is the reference text set, $C$ is the generated text set, $L_{r}$ and $L_{c}$ are the lengths of the reference text and generated text respectively, $f_{\\theta}$ is the encoder of the BERT model, and maps the text to the vector space, sim is usually cosine similarity.", + "bbox": [ + 112, + 564, + 487, + 645 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "BARTScore calculates the log-likelihood score of the generated text given the reference text to measure the similarity:", + "bbox": [ + 112, + 645, + 487, + 693 + ], + "page_idx": 24 + }, + { + "type": "equation", + "text": "\n$$\nB A R T S c o r e = \\frac {1}{| C |} \\sum_ {c \\in C} \\frac {1}{L _ {c}} \\sum_ {i} \\log p _ {\\theta} (c _ {i} | c _ {< i}, r)\n$$\n", + "text_format": "latex", + "bbox": [ + 115, + 715, + 485, + 753 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "where $C$ is the set of generated texts, $r$ is the reference text, $c_{i}$ is the ith word in the generated text, and $p_{\\theta}$ is the language model probability distribution of BART model.", + "bbox": [ + 112, + 765, + 489, + 828 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "B.3.3 LLM-based Metrics", + "text_level": 1, + "bbox": [ + 114, + 838, + 334, + 852 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "With the development of LLMs, research has demonstrated their significant correlation with human judgment and their ability to excel at new tasks when provided with instructions (Wang et al.,", + "bbox": [ + 112, + 857, + 489, + 921 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "2023a; Li et al., 2023a). Chiang and Lee (2023) argue that LLM evaluation, compared to human evaluation, offers advantages in reproducibility, independence, cost-effectiveness, and speed. Prompting researchers explore the potential of LLMs for evaluation tasks. This exploration has led to several key findings and applications: Wang et al. (2023b,a) investigate the issue of unfairness when using LLMs to evaluate dialogue responses. And Shen et al. (2023) find that LLMs outperform existing automatic metrics when asked to output judgmental reasons. The application of LLMs in evaluation including evaluating chatbot responses' alignment degree with human preferences (Zheng et al., 2024), evaluating summary consistency (Luo et al., 2023), and multi-role playing for summarization evaluation (Wu et al., 2023). 
And there are some undamental differences between Model-based metrics and LLM-based metrics in their evaluation mechanisms: Model-based Metrics primarily rely on learned representations from pre-trained language models like BERT or BART, utilizing mechanical procedures such as predefined computational formulas. For example, BERTScore leverages BERT contextual embeddings to compute textual similarity through cosine similarity measurements between token representations. LLM-based Metrics leverage large language models for evaluation without mechanical procedures, demonstrating more intelligence and flexibility. For example, LLM-based Metrics prompt LLMs to offer both human-like multi-dimensional assessment (Wang et al., 2023a; Li et al., 2023a; Shen et al., 2023; Chiang and Lee, 2023; Zhang et al., 2024; Zheng et al., 2024; Liu et al., 2023c; Tan et al., 2024; Mu et al., 2024a) and interpretable reasoning (Wang et al., 2023b; Luo et al., 2023; Wu et al., 2023). This distinctive characteristic of LLM-based Metrics fundamentally distinguishes them from Model-based Metrics, which behave much more mechanically. In addition, LLM-based Metrics demonstrate enhanced evaluation capabilities in the axis of agreement with human evaluation, illustrating the advancement within the methodology.", + "bbox": [ + 507, + 84, + 885, + 791 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Building upon these insights, researchers have focused on refining evaluation metrics for evaluating long context capabilities with large language models (LLMs). Fu et al. (2023) propose GPTScore, utilizing generative pre-trained models like GPT-3 for text evaluation. To address the length bias in LLM-generated content, L-Eval incorporates word count requirements into instruc", + "bbox": [ + 507, + 791, + 885, + 921 + ], + "page_idx": 24 + }, + { + "type": "page_number", + "text": "25", + "bbox": [ + 487, + 941, + 507, + 954 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "tions. Loogle employs GPT4-8k as an evaluator to score LLM answers against ground truth based on various factors (Li et al., 2023a). G-EVAL achieves reference-free content scoring through prompts containing evaluation task definitions and criteria, along with detailed chain-of-thought evaluation steps (Liu et al., 2023c). Tan et al. 
(2024) have introduced PROXYQA for long-context generation evaluation, evaluating final results based on the accuracy of answers to proxy question.", + "bbox": [ + 114, + 84, + 489, + 244 + ], + "page_idx": 25 + }, + { + "type": "page_number", + "text": "26", + "bbox": [ + 487, + 942, + 510, + 954 + ], + "page_idx": 25 + } +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13299/1182f1ca-70ec-4ed4-9c5b-10aa50e1a3e7_model.json b/data/2025/2503_13xxx/2503.13299/1182f1ca-70ec-4ed4-9c5b-10aa50e1a3e7_model.json new file mode 100644 index 0000000000000000000000000000000000000000..7061d2726899da50293876e625d1cf950969158b --- /dev/null +++ b/data/2025/2503_13xxx/2503.13299/1182f1ca-70ec-4ed4-9c5b-10aa50e1a3e7_model.json @@ -0,0 +1,5950 @@ +[ + [ + { + "type": "aside_text", + "bbox": [ + 0.023, + 0.302, + 0.061, + 0.695 + ], + "angle": 270, + "content": "arXiv:2503.13299v2 [cs.CL] 8 Jul 2025" + }, + { + "type": "title", + "bbox": [ + 0.265, + 0.08, + 0.735, + 0.12 + ], + "angle": 0, + "content": "A Survey on Transformer Context Extension: Approaches and Evaluation" + }, + { + "type": "text", + "bbox": [ + 0.218, + 0.125, + 0.784, + 0.143 + ], + "angle": 0, + "content": "Yijun Liu\\(^{1}\\), Jinzheng Yu\\(^{2}\\), Yang Xu\\(^{1}\\), Zhongyang Li\\(^{3}\\), Qingfu Zhu\\(^{1}\\)" + }, + { + "type": "text", + "bbox": [ + 0.233, + 0.144, + 0.763, + 0.159 + ], + "angle": 0, + "content": "\\(^{1}\\) Research Center for Social Computing and Interactive Robotics" + }, + { + "type": "text", + "bbox": [ + 0.368, + 0.16, + 0.631, + 0.176 + ], + "angle": 0, + "content": "1 Harbin Institute of Technology" + }, + { + "type": "text", + "bbox": [ + 0.227, + 0.177, + 0.771, + 0.192 + ], + "angle": 0, + "content": "2 State Key Laboratory of Media Convergence and Communication" + }, + { + "type": "text", + "bbox": [ + 0.345, + 0.193, + 0.653, + 0.209 + ], + "angle": 0, + "content": "\\(^{2}\\) Communication University of China" + }, + { + "type": "text", + "bbox": [ + 0.363, + 0.21, + 0.634, + 0.226 + ], + "angle": 0, + "content": "3 Huawei Technologies, Co., Ltd." + }, + { + "type": "text", + "bbox": [ + 0.343, + 0.228, + 0.656, + 0.243 + ], + "angle": 0, + "content": "{yijunliu, qfzhu}@ir.hit.edu.cn" + }, + { + "type": "title", + "bbox": [ + 0.261, + 0.262, + 0.341, + 0.276 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.142, + 0.285, + 0.461, + 0.599 + ], + "angle": 0, + "content": "Large language models (LLMs) based on Transformer have been widely applied in the filed of natural language processing (NLP), demonstrating strong performance, particularly in handling short text tasks. However, when it comes to long context scenarios, the performance of LLMs degrades due to some challenges. To alleviate this phenomenon, there is a number of work proposed recently. In this survey, we first list the challenges of applying pre-trained LLMs to process long contexts. Then systematically review the approaches related to long context and propose our taxonomy categorizing them into four main types: positional encoding, context compression, retrieval augmented, and attention pattern. In addition to the approaches, we focus on the evaluation of long context, organizing relevant data, tasks, and metrics based on existing long context benchmarks. Finally, we summarize unresolved issues in the long context domain and put forward our views on future developments." 
+ }, + { + "type": "title", + "bbox": [ + 0.115, + 0.608, + 0.26, + 0.623 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.632, + 0.49, + 0.922 + ], + "angle": 0, + "content": "In recent years, the Transformer (Vaswani, 2017) architecture has made significant progress in many NLP tasks (Devlin, 2018; Radford, 2018; Lewis, 2019; Raffel et al., 2020; Brown, 2020; Chen et al., 2021a; Cobbe et al., 2021), and has become the foundational model of many applications. Large language models can handle tasks involving short texts, within the pre-trained context length. However, current scenarios always involve longer texts, such as book-/repo- level tasks (Sharma et al., 2019; Liu et al., 2023a; Zhang et al., 2023a; Liu et al., 2023b), dialogue systems with long contexts (Dey et al., 2022; Li et al., 2024a), content-rich in-context learning (Li et al., 2024c) and so on. The performance of the pre-trained LLMs degrades and the models often fail to utilize the complete knowledge contained within the long context inputs. This may be caused by three inher" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.262, + 0.885, + 0.374 + ], + "angle": 0, + "content": "ent challenges: out-of-distribution (OOD) problem (Han et al., 2024), \"Lost in the Middle\" phenomenon (Liu et al., 2024a), and quadratic complexity of attention (Zhou et al., 2024). Recently, a lot of work has been proposed to improve and evaluate models' ability to handle long contexts in the community." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.375, + 0.885, + 0.68 + ], + "angle": 0, + "content": "This survey focuses on approaches and evaluation in the long context field, systematically reviewing existing related work. As illustrated in Figure 1, we propose a novel taxonomy for approaches, categorizing them into four main groups: positional encoding, context compression, retrieval augmented, and attention pattern. Additionally, we focus on the evaluation aspect and organize work on data, tasks, and metrics based on existing benchmarks. In addition to the two main parts of approaches and evaluation, we present our viewpoints on the current unsolved issues and potential future directions in the long context domain. To illustrate the current status more theoretically, we also list the main challenges in the field of long context before introducing specific work. Although most existing methods and benchmarks have not corresponded to them, these challenges are still instructive for the development of approaches and evaluation." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.681, + 0.885, + 0.922 + ], + "angle": 0, + "content": "There are also some surveys that focus on the long context domain. They each have their own emphasis, but there is no systematic and comprehensive survey of approaches and evaluation in the field of long context, which can provide researchers with a quick and efficient guide. Some surveys only include a part of the methods, lacking a comprehensive overview of the approaches related to long context. Zhao et al. (2023) focus on work addressing length extrapolation from the perspective of positional encoding, while there are some surveys from the perspective of KV Cache (Li et al., 2025; Shi et al., 2024). Besides, though some surveys have categorized existing work, their taxonomies are not clear, and there are overlaps between categories." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.943, + 0.504, + 0.955 + ], + "angle": 0, + "content": "1" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.115, + 0.081, + 0.885, + 0.429 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.435, + 0.885, + 0.493 + ], + "angle": 0, + "content": "Figure 1: Framework of survey. We first list three inherent challenges in Section 2. And then we systematically review related approaches and propose a novel taxonomy with four major categories in Section 3. Next, in Section 4, we organize the evaluation aspect from three perspectives: data, tasks, and metrics based on existing benchmarks. At last, we show our views on future roadmap and open problems in Section 5." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.518, + 0.489, + 0.743 + ], + "angle": 0, + "content": "For example, Huang et al. (2023) divide the methods for enhancing Transformer architecture models into five categories, but some existing methods can belong to multiple categories. And (Pawar et al., 2024) also has this problem, which distinguishes existing techniques into two categories: interpolation and extrapolation. Also, some surveys even involve some common methods that not specifically designed for long contexts. Dong et al. (2023b) provide an overview of the text-preprocessing methods, architectures, special characteristics and application for long context, but they cover some general topics. What's more, these surveys pay little or even no attention to the evaluation aspect." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.745, + 0.49, + 0.922 + ], + "angle": 0, + "content": "To fill the above gap, our survey proposes a novel and comprehensive taxonomy on both approaches and evaluation aspects. It is worth noting that we focus on work that applies Transformer-based models to long text tasks, but not improving Transformers (nor other architectures) in a universal scenario. That is to say, this survey does not cover fields like long chain-of-thought reasoning (Chen et al., 2025), multimodal long context (Song et al., 2024; Qiu et al., 2024), efficient Transformer (Zhou et al., 2024), and State Space Model (SSM) (Wang et al.," + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.518, + 0.883, + 0.598 + ], + "angle": 0, + "content": "2024c). In addition, the long context we focus on is the long input content, rather than the introduction of external knowledge in the Retrieval-Augmented Generation (RAG) scenario (Yu et al., 2024; Zhao et al., 2024; Fan et al., 2024)." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.608, + 0.64, + 0.625 + ], + "angle": 0, + "content": "2 Challenges" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.633, + 0.884, + 0.73 + ], + "angle": 0, + "content": "When applying pre-trained LLMs to the long context scenarios, there are some inherent challenges affecting models' performance. We list the three most important and common challenges: OOD problem, \"Lost in the Middle\" phenomenon, and quadratic complexity." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.737, + 0.885, + 0.882 + ], + "angle": 0, + "content": "OD Problem When processing sequences that exceed pre-trained context window length, the models face out-of-distribution (OOD) problems. Han et al. 
(2024) verify theoretically and empirically that three key factors contribute to OOD issues, thereby limiting models' extrapolation capabilities: 1) unseen inter-token distances, 2) increased number of attended tokens, and 3) implicitly encoded position information of the starting tokens." + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.89, + 0.884, + 0.922 + ], + "angle": 0, + "content": "\"Lost in the Middle\" Phenomenon Liu et al. (2024a) discover the \"Lost in the middle\" phe" + }, + { + "type": "page_number", + "bbox": [ + 0.493, + 0.943, + 0.505, + 0.955 + ], + "angle": 0, + "content": "2" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.49, + 0.182 + ], + "angle": 0, + "content": "nomenon through experiments that when LLMs receive a long input, they tend to focus on the information at both the beginning and end of the input sequence. At the same time, they neglect the content in the middle, thus failing to capitalize on the key information within the long input." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.195, + 0.489, + 0.259 + ], + "angle": 0, + "content": "Quadratic Complexity Due to the quadratic complexity of attention, directly using pre-trained LLMs for training or inference on long context is time and resource consuming (Zhou et al., 2024)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.261, + 0.49, + 0.438 + ], + "angle": 0, + "content": "The above are three inherent challenges in the field of long context, and some existing methods have alleviated them to a certain extent. But it is worth noting that most of the methods do not start from this perspective. They consider directly improving the performance of downstream tasks. However, we believe that these three challenges are still the fundamental problems that need to be solved. They play a vital role in the proposal of methods and construction of benchmarks. Moreover, they are the focus of subsequent research." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.453, + 0.254, + 0.47 + ], + "angle": 0, + "content": "3 Approaches" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.482, + 0.49, + 0.61 + ], + "angle": 0, + "content": "For the existing approaches for long context, we summarize their characteristics and propose a novel taxonomy different from previous work. As illustrated in Figure 1, mainstream methods are divided into four major categories: positional encoding, context compression, retrieval augmented, and attention pattern, which are introduced below. For more details, please see the Appendix A." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.626, + 0.319, + 0.641 + ], + "angle": 0, + "content": "3.1 Positional Encoding" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.648, + 0.49, + 0.921 + ], + "angle": 0, + "content": "Kazemnejad et al. (2024) mention that positional encoding (PE) appears to be a major factor in the length generalization of Transformer. During the inference process, when encountering sequences that exceed the length of the pre-trained window, the model needs to handle the position index that was not encountered during pre-training. This may lead to Out-Of-Distribution (OOD) issues. Thus, we would like to find an appropriate positional encoding method that allows the model to effectively encode position in sequences that exceed the pre-trained window length. Based on the implementation methods, solutions can be categorized into two main types: Variants of Rotary Position Embedding (RoPE, Su et al., 2024) and Attention bias method. 
The following sections will detail these two methods." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.295 + ], + "angle": 0, + "content": "Though the designed positional encoding strategies can alleviate the extrapolation problem, experiments have found that models without positional encoding (NoPE) show better performance than these methods in reasoning tasks (Kazemnejad et al., 2024). That's because when causal masks are used for decoding, the model reads the sequence sequentially from left to right. And this process naturally incorporates the sequential information of the token. This finding suggests that when designing a position encoding strategy, we may need to consider the way the model processes sequences and the requirements of the task." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.302, + 0.708, + 0.316 + ], + "angle": 0, + "content": "3.1.1 Variants of RoPE" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.321, + 0.884, + 0.417 + ], + "angle": 0, + "content": "Rotary Position Embedding (RoPE, Su et al., 2024) is a positional encoding method utilized in a series of models such as Wang and Komatsuzaki (2021); Touvron et al. (2023); Roziere et al. (2023). RoPE incorporates explicit relative position dependency in self-attention, which can be expressed as" + }, + { + "type": "equation", + "bbox": [ + 0.538, + 0.422, + 0.882, + 0.464 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} \\operatorname {s i m} \\left(q _ {m}, k _ {n}\\right) = q _ {m} ^ {\\top} R _ {\\Theta , n - m} ^ {d} k _ {n} \\tag {1} \\\\ = (R _ {\\Theta , m} ^ {d} q _ {m}) ^ {\\top} (R _ {\\Theta , n} ^ {d} k _ {n}), \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.471, + 0.885, + 0.68 + ], + "angle": 0, + "content": "where \\( R_{\\Theta,m}^{d} \\) is called the rotation matrix. The original RoPE's extrapolation capability is not very robust and can only maintain performance slightly beyond the pre-trained context length. Consequently, existing work enhances RoPE for better extrapolation. The core of RoPE is the rotation matrix \\( R_{\\Theta,m}^{d} \\), which is parameterized by the position index \\( m \\) and the function family \\( \\Theta \\). We can optimize RoPE by adjusting these parameters or even the structure of RoPE itself. Existing related work can be divided into three subcategories: position index adjustment, base frequency adjustment, and structural modification." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.689, + 0.884, + 0.882 + ], + "angle": 0, + "content": "Position Index Adjustment This method involves modifying the allocation or calculation of position index \\( m \\) to maintain the relative distances between tokens within the model's pre-trained index range. This can be implemented in three ways. We can adjust the allocation of the position index \\( m \\) (An et al., 2024). Besides, proportionally scale \\( m \\) for long sequences to adapt to the pre-trained window (Chen et al., 2023b). What's more, we can combine the above two methods, reallocating position index to some tokens in the sequence, while scaling the position index for others (Su, 2023)." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.89, + 0.884, + 0.922 + ], + "angle": 0, + "content": "Base Frequency Adjustment From the formula of rotation matrix (see details in Appendix A.1.1)," + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.943, + 0.504, + 0.954 + ], + "angle": 0, + "content": "3" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.086, + 0.493, + 0.424 + ], + "angle": 0, + "content": "we can see that each non-zero term is a trigonometric function value with \\(\\theta_{i}\\) as independent variable. And the value of \\(\\theta_{i}\\) affects the effect of rotation matrix to a certain extent. Base frequency adjustment is to enhance the model extrapolation performance by modifying \\(\\theta_{i}\\) in the trigonometric function terms in the rotation matrix. NTK (Neural Tangent Kernel) theory shows that when the input dimension is low and its embedding representation lacks high-frequency components, it is difficult for the neural network to learn high-frequency information (Tancik et al., 2020). Therefore, researchers choose to adjust \\(\\theta_{i}\\) with the idea of \"extrapolation on high-frequency and interpolation on low-frequency\". One strategy is to change the base \\(b\\) of the exponential terms \\(\\theta_{i}\\) in the function cluster \\(\\Theta\\), and change it from the default value \\(b = 10000\\) to other values which can improve the model extrapolation performance (Peng and Quesnelle, 2023; Roziere et al., 2023). Another strategy is to directly scale \\(\\theta_{i}\\) (bloc97, 2023; Peng et al., 2023)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.431, + 0.49, + 0.529 + ], + "angle": 0, + "content": "Structural Modification The methods described above focus on modifying variables in RoPE formula without altering its basic structure. Some existing work explores adjusting the structure of RoPE itself to better extrapolate, which optimizes the original RoPE formula (Sun et al., 2022)." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.536, + 0.293, + 0.55 + ], + "angle": 0, + "content": "3.1.2 Attention Bias" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.555, + 0.49, + 0.635 + ], + "angle": 0, + "content": "This type of method introduces relative position information by adding a bias related to the relative distance between tokens when calculating the similarity between query and key vectors. The process can be expressed as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.158, + 0.644, + 0.49, + 0.663 + ], + "angle": 0, + "content": "\\[\n\\mathrm {s i m} \\left(q _ {m}, k _ {n}\\right) = q _ {m} ^ {\\top} k _ {n} + f _ {b i a s} (m, n), (2)\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.671, + 0.493, + 0.752 + ], + "angle": 0, + "content": "where \\( f_{bias}(m,n) \\) is a bias function that depends on the token position index corresponding to query and key. \\( f_{bias}(m,n) \\) be divided into two categories: learnable (Raffel et al., 2020; Chi et al., 2022a), predefined (Press et al., 2021; Chi et al., 2022b)." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.762, + 0.334, + 0.777 + ], + "angle": 0, + "content": "3.2 Context Compression" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.783, + 0.491, + 0.863 + ], + "angle": 0, + "content": "Existing work proposes compressing the long input sequence into a shorter one for representation. These methods can be categorized into two main types by the compression granularity: soft compression and hard compression." 
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.871, + 0.317, + 0.887 + ], + "angle": 0, + "content": "3.2.1 Soft Compression" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.89, + 0.49, + 0.922 + ], + "angle": 0, + "content": "In order to shorten the sequence length, the soft compression method uses the model to compress" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.887, + 0.376 + ], + "angle": 0, + "content": "the original input token sequence into a shorter summary token sequence. These summary tokens are soft tokens which act as compression representation but do not correspond to words with actual meaning. They are inserted into the original token sequence to form a new input. During the forward pass of the model, the information from the original token sequence is gathered into the summary token sequence, which represents the original input for subsequent operations. Since summary tokens do not appear during the model's pre-training, additional training is necessary for the model to learn how to generate and utilize these tokens (Bulatov et al., 2022; Li et al., 2023b; Chevalier et al., 2023; Ge et al., 2023; Mu et al., 2024b). This method can shorten the length of the hidden vector sequence, so that enabling it to be processed within the model's pre-trained window." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.386, + 0.722, + 0.401 + ], + "angle": 0, + "content": "3.2.2 Hard Compression" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.406, + 0.886, + 0.518 + ], + "angle": 0, + "content": "This method utilizes some techniques to directly shorten plain text sequence length. This process can be achieved through selection and summarization. It doesn't introduce additional tokens and targeted training, which makes it can be applied to some black box models (Jiang et al., 2023, 2024b; Chen et al., 2023a)." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.531, + 0.724, + 0.547 + ], + "angle": 0, + "content": "3.3 Retrieval Augmented" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.553, + 0.886, + 0.86 + ], + "angle": 0, + "content": "Some existing work propose retrieval-augmented methods to enhance model performance on long context tasks by selectively incorporating crucial tokens from history context into attention. With reference to related work, we summarize a processing paradigm for this type of method. Initially, the (key, value) pairs from history are stored in the KV cache. Then the model retrieves the corresponding token representations from the KV cache at different retrieval granularity levels. This process is based on the similarity between current token and history tokens from KV cache. The top-k relevant tokens are selected as the retrieved context, which is then concatenated with the context within the current window to form a new input. Finally, the model applies appropriate positional encoding to this concatenated context for attention calculation. 
Below, we summarize the different methods according to each step of the above process:" + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.87, + 0.742, + 0.885 + ], + "angle": 0, + "content": "3.3.1 Retrieval Granularity" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.89, + 0.884, + 0.922 + ], + "angle": 0, + "content": "In the process of long context retrieval, we focus on the most relevant subset of tokens from KV" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.943, + 0.506, + 0.955 + ], + "angle": 0, + "content": "4" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.493, + 0.312 + ], + "angle": 0, + "content": "cache related to the current processing step. Different methods use different retrieval granularity, with the basic being token-level retrieval. Specifically, it involves calculating the similarity of each history token in the KV cache with the current token, and selecting the top-k history tokens' key and value vectors as the retrieval result. Methods applying this strategy include MemTRM (Wu et al., 2022), FoT (Tworkowski et al., 2024), Unlimformer (Bertsch et al., 2024a), etc. Besides, some work focuses on block-level retrieval, which retrieve top-k set of tokens in one step (Wang et al., 2024b; Rubin and Berant, 2023; Xiao et al., 2024; Mohtashami and Jaggi, 2024)." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.322, + 0.363, + 0.337 + ], + "angle": 0, + "content": "3.3.2 Similarity Computation" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.343, + 0.49, + 0.472 + ], + "angle": 0, + "content": "Almost all existing works compute the inner product of query and key as similarity. This strategy draws from the standard attention mechanism, which calculates the dot product between the query and key to allocate corresponding weights to the value (Vaswani et al., 2023). It is simple to implement and can effectively capture and utilize the similarity information between queries and keys." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.483, + 0.333, + 0.499 + ], + "angle": 0, + "content": "3.3.3 Positional Encoding" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.504, + 0.49, + 0.745 + ], + "angle": 0, + "content": "After computing the similarity, we select the topk relative tokens as the results, and call them retrieved context tokens. Correspondingly, tokens within the current window are called as local context tokens. These two types of context tokens are concatenated to form a new set of context tokens. Before these new context tokens are fed into the model for attention computation, it is necessary to consider suitable positional encoding to distinguish the information of tokens at different positions. Some work choose to assign the same position vector to the retrieved context tokens (Wu et al., 2022; Tworkowski et al., 2024; Xiao et al., 2024), while Mohtashami and Jaggi (2023) choose reallocation strategies." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.757, + 0.345, + 0.771 + ], + "angle": 0, + "content": "3.3.4 Attention Calculation" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.777, + 0.49, + 0.922 + ], + "angle": 0, + "content": "Next, when performing attention calculation, we need to consider how to make full use of retrieved context tokens and local context tokens. Different approaches use different strategies. Simply, Tworkowski et al. (2024); Xiao et al. (2024)choose standard attention, while Bertsch et al. (2024a) chooses cross attention. Besides, Wu et al. (2022); Wang et al. 
(2024b) adopt a Joint Attention method. Landmark employs the Grouped Soft" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.085, + 0.885, + 0.117 + ], + "angle": 0, + "content": "max method, a fine-grained approach for calculation (Mohtashami and Jaggi, 2023)." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.129, + 0.694, + 0.143 + ], + "angle": 0, + "content": "3.4 Attention Pattern" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.15, + 0.885, + 0.28 + ], + "angle": 0, + "content": "There is a class of methods modifying the attention pattern, i.e. the range of tokens attended to. They can better adapt models to expand processing sequence length. Some of them do not require additional training and can be employed as plug-and-play solutions in existing models. These methods can be divided into three main categories: sliding window, parallel context, and sparse attention." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.289, + 0.696, + 0.305 + ], + "angle": 0, + "content": "3.4.1 Sliding Window" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.309, + 0.885, + 0.421 + ], + "angle": 0, + "content": "This type of method divides the sequence into segments and performs attention calculation segment by segment without significantly increasing computational complexity. The attention results from earlier segments are stored, which later segments can use during their attention calculation (Dai et al., 2019; Han et al., 2024; Xiao et al., 2023)." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.432, + 0.698, + 0.446 + ], + "angle": 0, + "content": "3.4.2 Parallel Context" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.451, + 0.884, + 0.596 + ], + "angle": 0, + "content": "The Parallel Context method folds the context part of the input (e.g., in-context examples) into multiple segments. These segments first calculate attention independently, and share the same set of position indexes. And then prompt tokens in the input attend to all the context tokens, so that fully utilize contextual information (Ratner et al., 2022; Hao et al., 2022). These methods require no training and can be plug-and-played into existing models." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.606, + 0.704, + 0.621 + ], + "angle": 0, + "content": "3.4.3 Sparse Attention" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.625, + 0.884, + 0.738 + ], + "angle": 0, + "content": "Some work reduce the number of tokens involved in the attention computation, decreasing computational load. They abandon the original attention method which attends to local continuous tokens, while expand the attentive field, and attend to discrete tokens from further context (Ding et al., 2023; Yu et al., 2023; Chen et al., 2023c)." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.751, + 0.64, + 0.766 + ], + "angle": 0, + "content": "4 Evaluation" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.777, + 0.885, + 0.922 + ], + "angle": 0, + "content": "In the long context scenario, evaluating the model's ability to understand and utilize long context is also a new and critical issue. But as described before, current surveys pay little or even no attention to the evaluation aspect. To fill this gap, we summarize the data, tasks, and metrics of long context evaluation in our survey based on existing benchmarks. The following is a brief introduction, detailed information is in the Appendix B." 
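Before turning to evaluation, the retrieval-augmented procedure of Section 3.3 can be made concrete with a short sketch. The following is a minimal, illustrative example — not the implementation of any of the cited systems — of token-level KV-cache retrieval: history keys are scored against the current query with an inner product (3.3.2), the top-k (key, value) pairs are retrieved (3.3.1) and concatenated with the local window, and standard attention is applied over the result (3.3.4). The function and tensor names are hypothetical, and positional-encoding choices (3.3.3) are omitted.

```python
import torch

def retrieve_and_attend(q, local_k, local_v, cache_k, cache_v, top_k=4):
    """q: (d,) current query; local_k/v: (L, d) current window; cache_k/v: (N, d) history KV cache."""
    # 3.3.2 Similarity: inner product between the current query and every cached key.
    scores = cache_k @ q                                             # (N,)
    # 3.3.1 Token-level retrieval granularity: keep the top-k most similar history tokens.
    idx = torch.topk(scores, k=min(top_k, cache_k.shape[0])).indices
    # Concatenate the retrieved context tokens with the local context tokens.
    keys = torch.cat([cache_k[idx], local_k], dim=0)                 # (top_k + L, d)
    values = torch.cat([cache_v[idx], local_v], dim=0)
    # 3.3.4 Standard attention over the concatenated context
    # (the positional encoding assigned to retrieved tokens, Section 3.3.3, is omitted here).
    weights = torch.softmax(keys @ q / keys.shape[-1] ** 0.5, dim=0)
    return weights @ values                                          # (d,)

# Toy usage with random tensors.
d = 16
out = retrieve_and_attend(torch.randn(d),
                          torch.randn(8, d), torch.randn(8, d),      # local window
                          torch.randn(64, d), torch.randn(64, d))    # KV cache
```

Block-level retrieval (Section 3.3.1) would replace the per-token top-k with the top-k contiguous blocks of tokens, and the positional-encoding strategies of Section 3.3.3 determine which position indices the retrieved tokens receive before this attention step.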
+ }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.943, + 0.505, + 0.955 + ], + "angle": 0, + "content": "5" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.119, + 0.082, + 0.885, + 0.234 + ], + "angle": 0, + "content": "
Benchmark | Length Level | #Examples | Domain
SCROLLS (Shaham et al., 2022) | 1k~4k | 119,495 | Literature, Dialog
ZeroSCROLLS (Shaham et al., 2023) | 0k~16k | 4,378 | Wiki, Literature, Dialog
LongBench (Bai et al., 2023) | 0k~4k, 4k~8k, >8k | 4,750 | Wiki, Literature, Dialog, Report, Code, News
LooGLE (Li et al., 2023a) | 0k~24k | 776 | Wiki, Paper
BAMBOO (Dong et al., 2023a) | 0k~4k, 4k~16k | 1,502 | Wiki, Dialog, Report, Code, Paper
LongICLBench (Li et al., 2024c) | 2k~50k | 3,000 | Dialog, News, Common Sense
L-Eval (An et al., 2023) | 3k~200k | 411 | Literature, Dialog, News, Paper, Common Sense
Ada-LEval (Wang et al., 2024a) | 1k~128k | 117,500 | Literature, Code
∞Bench (Zhang et al., 2024) | 0k~200k | 3,946 | Literature, Dialog, Code
NeedleBench (Li et al., 2024b) | 1k~4k/8k/32k/128k/200k/1m+ | - | Wiki, Literature, Dialog, Report, Code, News
LV-Eval (Yuan et al., 2024) | 0k~16k/32k/64k/128k/256k | 1,729 | Wiki, Literature, Dialog, Report, Code, News, Paper
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.242, + 0.885, + 0.316 + ], + "angle": 0, + "content": "Table 1: Statistics on data characteristics of the datasets in existing long context benchmarks. Length level represents the range of token lengths in the dataset used in the benchmark. #Examples refers to the total number of examples. Domain denotes the data sources. The corresponding contents in table are directly extracted or calculated from the original papers. Given that current models mainly within context windows exceeding 100k tokens, we categorize benchmarks based on this threshold. Benchmarks with contexts exceeding 100K tokens are listed in the lower part." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.339, + 0.2, + 0.353 + ], + "angle": 0, + "content": "4.1 Data" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.36, + 0.489, + 0.424 + ], + "angle": 0, + "content": "In order to explore what data should be used to test model's ability to process long context, we conduct a statistical analysis of datasets in existing benchmarks and summarize their data characteristics." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.425, + 0.489, + 0.521 + ], + "angle": 0, + "content": "The evaluation of a model's long context capability requires not only the long data but also the data diversity and quality. As shown in Table 1, we focus on three characteristics of the datasets in existing long context benchmarks: length level, total number of examples, and the domain it covers." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.522, + 0.49, + 0.571 + ], + "angle": 0, + "content": "Besides, we also discuss about knowledge leakage issue, which need to be addressed when constructing the dataset, in the Appendix B.1.2" + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.581, + 0.207, + 0.595 + ], + "angle": 0, + "content": "4.2 Tasks" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.603, + 0.49, + 0.764 + ], + "angle": 0, + "content": "Currently, existing benchmarks propose numerous tasks to evaluate the model's ability to process long context. But there is no systematic taxonomy for these tasks. Therefore, we divide all tasks in existing benchmarks into seven categories from the perspective of task setting: Question Answering, Needle-in-a-Haystack, Statistical Tasks, Code, In-Context Learning, Text Generation and Other Tasks. Below is the introduction of each type of task, and the details are in the Appendix B.2." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.773, + 0.336, + 0.789 + ], + "angle": 0, + "content": "4.2.1 Question Answering" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.793, + 0.489, + 0.856 + ], + "angle": 0, + "content": "Single-hop Question Answering requires models to locate and extract answers from a single text passage, typically involving a single fact (Rajpurkar, 2016; Joshi et al., 2017; Kočisky et al., 2018)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.858, + 0.489, + 0.922 + ], + "angle": 0, + "content": "Multi-hop Question Answering requires models to integrate information from multiple sources to answer complex questions. This often involves reasoning across different pieces of evidence (Ho" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.339, + 0.883, + 0.371 + ], + "angle": 0, + "content": "et al., 2020; Trivedi et al., 2022; Yang et al., 2018; Chen et al., 2024b; Zhuang et al., 2023)." 
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.383, + 0.742, + 0.398 + ], + "angle": 0, + "content": "4.2.2 Needle-in-a-Haystack" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.403, + 0.884, + 0.5 + ], + "angle": 0, + "content": "Needle-in-a-Haystack tasks evaluate LLMs' ability to extract specific content from long contexts. These tasks assess the model's retrieval capability and also measure the range of context lengths a model can handle (Zhu et al., 2024; Mohtashami and Jaggi, 2023; Zhang et al., 2024; Li et al., 2024b)." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.511, + 0.697, + 0.525 + ], + "angle": 0, + "content": "4.2.3 Statistical Tasks" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.532, + 0.884, + 0.612 + ], + "angle": 0, + "content": "Long Arithmetic Calculation requires models to perform addition and subtraction operations on lengthy arithmetic expressions (Zhang et al., 2024, 2023b; Cobbe et al., 2021; Xu et al., 2024; Chen et al., 2024a)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.614, + 0.884, + 0.661 + ], + "angle": 0, + "content": "Numerical Information Extraction requires models to identify specific mathematical elements (Zhang et al., 2024; Li et al., 2023a)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.662, + 0.883, + 0.726 + ], + "angle": 0, + "content": "Sentiment Aggregation requires models to output the percentage of positive reviews when provided with a collection of reviews (Angelidis et al., 2021; Shaham et al., 2023)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.728, + 0.883, + 0.775 + ], + "angle": 0, + "content": "Paragraph Counting requires models to count the number of unique paragraphs in a set of randomly repeated and shuffled passages (Bai et al., 2023)." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.788, + 0.612, + 0.802 + ], + "angle": 0, + "content": "4.2.4 Code" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.808, + 0.884, + 0.888 + ], + "angle": 0, + "content": "Code Completion requires models to complete missing code fragments based on preceding code and context (Chen et al., 2021a; Zheng et al., 2023; Bai et al., 2023; Guo et al., 2023; Zan et al., 2022; Dong et al., 2023a; Qin et al., 2024)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.89, + 0.884, + 0.922 + ], + "angle": 0, + "content": "Code Running asks models to infer the output of lengthy programs by tracing a series of cascading" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.943, + 0.505, + 0.954 + ], + "angle": 0, + "content": "6" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.114, + 0.085, + 0.486, + 0.116 + ], + "angle": 0, + "content": "function calls (Bubeck et al., 2023; An et al., 2023; Zhang et al., 2024)." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.117, + 0.486, + 0.149 + ], + "angle": 0, + "content": "Code Debugging requires models to identify deliberately inserted errors (Zhang et al., 2024)." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.157, + 0.338, + 0.173 + ], + "angle": 0, + "content": "4.2.5 In-Context Learning" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.177, + 0.486, + 0.272 + ], + "angle": 0, + "content": "In these tasks, the input contains a number of in-context examples, resulting in a long input, either because each example is itself very long or because the number of examples is particularly large. Based on this distinction, we divide the In-Context Learning task into two categories: long example learning and many-shot learning."
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.274, + 0.486, + 0.368 + ], + "angle": 0, + "content": "Long Example Learning requires models to process extensive inputs with long examples which have large label spaces and generate accurate predictions. This task inherently is a long-context challenge (Bai et al., 2023; Li et al., 2024c; Li and Roth, 2002; NLPCC, 2014)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.37, + 0.487, + 0.433 + ], + "angle": 0, + "content": "Many-shot Learning leverages the expanded context windows of models to process hundreds or even thousands of examples in order to complete a given task (Yu et al., 2020; Bertsch et al., 2024b)." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.442, + 0.303, + 0.456 + ], + "angle": 0, + "content": "4.2.6 Text Generation" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.461, + 0.486, + 0.508 + ], + "angle": 0, + "content": "Language Modeling serving as the pre-training task for LLMs, is also a widely used basic task to test the model's ability to generate text." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.51, + 0.486, + 0.685 + ], + "angle": 0, + "content": "Document Summarization requires models to make a summary of the input documents, which encompasses single-document and multi-document tasks. Single-document summarization extracts key information from a single document (Wang et al., 2022; Chen et al., 2021b; Huang et al., 2021; Zhong et al., 2021), while multi-document summarization synthesizes information from multiple sources into a comprehensive, non-repetitive summary containing all key points (Bai et al., 2023; An et al., 2023; Fabbri et al., 2019)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.687, + 0.487, + 0.75 + ], + "angle": 0, + "content": "Open-ended Text Generation requires models to produce coherent and logical content on given topics (Tan et al., 2024; Bai et al., 2024; Kumar et al., 2024; Ni et al., 2024; Rafailov et al., 2024)." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.759, + 0.271, + 0.772 + ], + "angle": 0, + "content": "4.2.7 Other Tasks" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.777, + 0.486, + 0.841 + ], + "angle": 0, + "content": "In addition to the six types of tasks listed above, there are some tasks that are not included in this classification system but are equally important for testing the model's long context ability." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.842, + 0.486, + 0.921 + ], + "angle": 0, + "content": "Reordering asks models to reconstruct the original sequence of shuffled fragments by considering the broad context and logical relationships (Krysciński et al., 2021; Shaham et al., 2023; Li et al., 2023a; Dong et al., 2023a; Wang et al., 2024a)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.881, + 0.149 + ], + "angle": 0, + "content": "Context Consistency shows models an academic paper and a hypothesis, requiring models to judge whether the hypothesis is supported or contradicted by the ideas in the paper (Dong et al., 2023a)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.15, + 0.883, + 0.198 + ], + "angle": 0, + "content": "Summary Source Paragraph Identification challenges models to identify the original source paragraphs for given summaries (Bai et al., 2023)." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.199, + 0.883, + 0.278 + ], + "angle": 0, + "content": "Character Identification requires models to identify different speakers in long dialogues by recognizing their distinct characteristics (TVMEG, 2024; Senedd Cymru, 2024; Zhang et al., 2024; Dong et al., 2023a; Chen et al., 2021b)." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.291, + 0.617, + 0.305 + ], + "angle": 0, + "content": "4.3 Metrics" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.312, + 0.883, + 0.504 + ], + "angle": 0, + "content": "In addition to data and tasks, metrics can directly reflect the model's ability to handle long contexts. With current long context task designs gradually changing from classic NLP tasks to more practical tasks, the requirements for metrics are constantly increasing. We organize metrics for testing models' capabilities on long context according to the three stages of metrics development: Algorithmic Metrics, Model-based Metrics, and LLM-based Metrics. From these three metrics stages, it can be seen that the metrics development trend becomes more and more complex and flexible." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.516, + 0.73, + 0.531 + ], + "angle": 0, + "content": "4.3.1 Algorithmic Metrics" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.535, + 0.883, + 0.614 + ], + "angle": 0, + "content": "Algorithmic metrics are calculated based on the model output or logits through defined formulas. Their implementation is very simple and can reflect the effect of language modeling and some downstream tasks to a certain extent." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.617, + 0.883, + 0.759 + ], + "angle": 0, + "content": "Perplexity (PPL) is one of the most common algorithmic metrics used in existing long context benchmarks (Beltagy et al., 2020; Roy et al., 2021; Press et al., 2021). Meanwhile, some benchmarks employ other algorithmic metrics such as accuracy, f1, and N-gram-based metrics (ROUGE, Lin, 2004 and BLEU, Papineni et al., 2002, etc.) to evaluate LLMs on certain downstream tasks (Shaham et al., 2023; Bai et al., 2023; Kasai et al., 2021)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.761, + 0.883, + 0.921 + ], + "angle": 0, + "content": "However, these algorithmic metrics have several limitations, such as content quality, syntactic accuracy, and human correlation issues (Reiter and Belz, 2009; Stent et al., 2005; Sun et al., 2021; An et al., 2023; Improving; Tan et al., 2024). This causes algorithmic metrics to be limited in reflecting the model's ability to process long context. A number of approaches have been developed to improve algorithmic metrics. Such as enhancing scoring techniques, restructuring task formats and so on (Yuan" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.943, + 0.504, + 0.953 + ], + "angle": 0, + "content": "7" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.114, + 0.085, + 0.479, + 0.1 + ], + "angle": 0, + "content": "et al., 2024; Dong et al., 2023a; Li et al., 2024b)." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.111, + 0.342, + 0.125 + ], + "angle": 0, + "content": "4.3.2 Model-based Metrics" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.13, + 0.49, + 0.257 + ], + "angle": 0, + "content": "To improve the consistency with human judgments, pre-trained language models are being employed to evaluate (Zhang et al., 2020; Yuan et al., 2021). Specifically, pre-trained models (such as BERT, Devlin, 2018, BART, Lewis, 2019, etc.) 
are used to calculate the similarity score between the model output and reference text to evaluate the performance of downstream tasks." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.259, + 0.489, + 0.34 + ], + "angle": 0, + "content": "However, these model-based metrics entirely rely on representations learned from pre-trained language models and require reference texts. They may not be accurate enough for evaluating some novel and creative text generation tasks." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.349, + 0.334, + 0.363 + ], + "angle": 0, + "content": "4.3.3 LLM-based Metrics" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.368, + 0.49, + 0.528 + ], + "angle": 0, + "content": "Combining the above two metrics issues, LLM-based metrics are proposed, utilizing sufficient knowledge within LLMs for evaluation. For example, LLM-based metrics prompt LLMs to offer human-like multi-dimensional assessment (Wang et al., 2023a; Li et al., 2023a; Shen et al., 2023; Chiang and Lee, 2023; Zhang et al., 2024; Zheng et al., 2024; Liu et al., 2023c; Tan et al., 2024; Mu et al., 2024a) and interpretable reasoning (Wang et al., 2023b; Luo et al., 2023; Wu et al., 2023)." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.53, + 0.49, + 0.658 + ], + "angle": 0, + "content": "LLM-based metrics fundamentally distinguish from the other two metrics, which behave much more mechanically. In addition, they demonstrate enhanced agreement with human evaluations (Wang et al., 2023a; Li et al., 2023a). Due to this higher consistency and wider scope of application, LLM-based metrics are gaining increasing attention in long-context evaluation." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.671, + 0.475, + 0.687 + ], + "angle": 0, + "content": "5 Future Roadmap and Open Problems" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.697, + 0.49, + 0.793 + ], + "angle": 0, + "content": "Despite the rapid development of long context techniques, numerous challenges remain unresolved. Looking to future roadmap, we list vital open problems and present our perspectives on the developments. They are also divided into two parts: approaches and evaluation." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.805, + 0.255, + 0.82 + ], + "angle": 0, + "content": "5.1 Approaches" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.825, + 0.489, + 0.874 + ], + "angle": 0, + "content": "Method Integration would combine methods' strengths to address the challenges of extrapolating long context from multiple perspectives." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.874, + 0.49, + 0.922 + ], + "angle": 0, + "content": "Long Text Generation remains under-researched, which concentrate on effective long-text generation techniques and the evaluation of generation quality." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.184 + ], + "angle": 0, + "content": "Sparse Attention Mechanisms may lead to a decrease in models' original language ability, thereby limiting their potential for processing long context. \"Lost-in-the-Middle\" Issue has not yet been completely resolved, there is a lack of targeted solutions and appropriate verification methods." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.187, + 0.885, + 0.252 + ], + "angle": 0, + "content": "Scalability of Methods requires to explore how existing methods can be adapted to models of different scales or even different architectural frameworks, enhancing their generality and applicability." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.254, + 0.884, + 0.318 + ], + "angle": 0, + "content": "Methods Enabling \"Train Short, Test Long\" haven't emerged, which train on short texts while excelling in long-context. These methods can reduce resource needs and improve generalization." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.321, + 0.884, + 0.385 + ], + "angle": 0, + "content": "Trade-off between Information Filtering and Generation Effects means existing methods can be optimized by integrating RAG to enhance efficiency and quality without too long input." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.405, + 0.642, + 0.419 + ], + "angle": 0, + "content": "5.2 Evaluation" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.43, + 0.884, + 0.51 + ], + "angle": 0, + "content": "Knowledge Leakage Issue is ever-present. As LLMs gain the ability to gather information from the Internet and their training data scope expands, existing solutions become increasingly ineffective and some operations may limit innovation." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.514, + 0.884, + 0.609 + ], + "angle": 0, + "content": "Novel Benchmark Design needed to be proposed. We need to construct benchmarks with coherent content and long-distance dependencies to more effectively test the model's ability to process long context. For example, asking models to process inputs from multiple books." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.613, + 0.884, + 0.709 + ], + "angle": 0, + "content": "Updated LLM-based Metrics are a development direction. Though LLM-based metrics show higher consistency with human judgments than other metrics, they are costly, have random outputs, and even lack human emotions. We need to combine LLM with other techniques to better evaluate." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.73, + 0.642, + 0.744 + ], + "angle": 0, + "content": "6 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.761, + 0.883, + 0.921 + ], + "angle": 0, + "content": "In this survey, we first list three inherent challenges in processing long context. And then we propose a novel taxonomy for long context approaches and summarize the similarities and differences in each category. In addition, we systematically review the work on evaluation, summarize the data, tasks, and metrics related to long context based on existing benchmark. Finally, we list unsolved issues and put forward our insights on the future development of long context domain." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.943, + 0.505, + 0.954 + ], + "angle": 0, + "content": "8" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.115, + 0.085, + 0.221, + 0.1 + ], + "angle": 0, + "content": "Limitations" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.113, + 0.488, + 0.208 + ], + "angle": 0, + "content": "This survey summarizes the approaches and evaluation in the area of long context, and gives our views on future development. However, we don't cover efficient transformer on long context, multimodel long context, etc. In addition, due to limitations in space, we are not able to include all related work." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.211, + 0.488, + 0.275 + ], + "angle": 0, + "content": "Due to the rapidly evolving nature of the field of Transformer context extension, our survey may not capture the latest developments, particularly those that emerged near or after the time of writing." 
+ }, + { + "type": "title", + "bbox": [ + 0.115, + 0.305, + 0.214, + 0.32 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.116, + 0.331, + 0.49, + 0.397 + ], + "angle": 0, + "content": "Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.411, + 0.49, + 0.476 + ], + "angle": 0, + "content": "Rishabh Agarwal, Avi Singh, Lei M Zhang, Bernd Bohnet, Stephanie Chan, Ankesh Anand, Zaheer Abbas, Azade Nova, John D Co-Reyes, Eric Chu, et al. 2024. Many-shot in-context learning. arXiv preprint arXiv:2404.11018." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.492, + 0.488, + 0.558 + ], + "angle": 0, + "content": "Chenxin An, Shansan Gong, Ming Zhong, Xingjian Zhao, Mukai Li, Jun Zhang, Lingpeng Kong, and Xipeng Qiu. 2023. L-eval: Instituting standardized evaluation for long context language models. arXiv preprint arXiv:2307.11088." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.572, + 0.488, + 0.625 + ], + "angle": 0, + "content": "Chenxin An, Fei Huang, Jun Zhang, Shansan Gong, Xipeng Qiu, Chang Zhou, and Lingpeng Kong. 2024. Training-free long-context scaling of large language models. arXiv preprint arXiv:2402.17463." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.64, + 0.488, + 0.706 + ], + "angle": 0, + "content": "Stefanos Angelidis, Reinald Kim Amplayo, Yoshihiko Suhara, Xiaolan Wang, and Mirella Lapata. 2021. Extractive opinion summarization in quantized transformer spaces. Transactions of the Association for Computational Linguistics, 9:277-293." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.72, + 0.488, + 0.786 + ], + "angle": 0, + "content": "Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, et al. 2023. Longbench: A bilingual, multitask benchmark for long context understanding. arXiv preprint arXiv:2308.14508." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.8, + 0.488, + 0.866 + ], + "angle": 0, + "content": "Yushi Bai, Jiajie Zhang, Xin Lv, Linzhi Zheng, Siqi Zhu, Lei Hou, Yuxiao Dong, Jie Tang, and Juanzi Li. 2024. Longwriter: Unleashing \\(10,000+\\) word generation from long context llms. arXiv preprint arXiv:2408.07055." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.881, + 0.488, + 0.921 + ], + "angle": 0, + "content": "Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150." + }, + { + "type": "list", + "bbox": [ + 0.116, + 0.331, + 0.49, + 0.921 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.513, + 0.086, + 0.883, + 0.151 + ], + "angle": 0, + "content": "Amanda Bertsch, Uri Alon, Graham Neubig, and Matthew Gormley. 2024a. Unlimformer: Long-range transformers with unlimited length input. Advances in Neural Information Processing Systems, 36." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.166, + 0.885, + 0.219 + ], + "angle": 0, + "content": "Amanda Bertsch, Maor Ivgi, Uri Alon, Jonathan Berant, Matthew R Gormley, and Graham Neubig. 2024b. In-context learning with long-context models: An in-depth exploration. arXiv preprint arXiv:2405.00200." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.232, + 0.883, + 0.272 + ], + "angle": 0, + "content": "bloc97. 2023. 
Add NTK-Aware interpolation \"by parts\" correction. https://github.com/jquesnelle/scaled-rope/pull/1." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.286, + 0.883, + 0.312 + ], + "angle": 0, + "content": "Tom B Brown. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.326, + 0.883, + 0.403 + ], + "angle": 0, + "content": "Sebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.418, + 0.883, + 0.469 + ], + "angle": 0, + "content": "Aydar Bulatov, Yury Kuratov, and Mikhail Burtsev. 2022. Recurrent memory transformer. Advances in Neural Information Processing Systems, 35:11079-11091." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.484, + 0.883, + 0.537 + ], + "angle": 0, + "content": "Howard Chen, Ramakanth Pasunuru, Jason Weston, and Asli Celikyilmaz. 2023a. Walking down the memory maze: Beyond context limit through interactive reading. arXiv preprint arXiv:2310.05029." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.551, + 0.883, + 0.629 + ], + "angle": 0, + "content": "Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021a. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.643, + 0.883, + 0.695 + ], + "angle": 0, + "content": "Mingda Chen, Zewei Chu, Sam Wiseman, and Kevin Gimpel. 2021b. Summscreen: A dataset for abstractive screenplay summarization. arXiv preprint arXiv:2104.07091." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.71, + 0.883, + 0.787 + ], + "angle": 0, + "content": "Qiguang Chen, Libo Qin, Jinhao Liu, Dengyun Peng, Jiannan Guan, Peng Wang, Mengkang Hu, Yuhang Zhou, Te Gao, and Wanxiang Che. 2025. Towards reasoning era: A survey of long chain-of-thought for reasoning large language models. Preprint, arXiv:2503.09567." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.802, + 0.883, + 0.855 + ], + "angle": 0, + "content": "Qiguang Chen, Libo Qin, Jiaqi Wang, Jinxuan Zhou, and Wanxiang Che. 2024a. Unlocking the boundaries of thought: A reasoning granularity framework to quantify and optimize chain-of-thought." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.868, + 0.883, + 0.921 + ], + "angle": 0, + "content": "Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. 2023b. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595." + }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.885, + 0.921 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.943, + 0.505, + 0.954 + ], + "angle": 0, + "content": "9" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.14 + ], + "angle": 0, + "content": "Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, and Jiaya Jia. 2023c. Longlora: Efficient fine-tuning of long-context large language models. arXiv preprint arXiv:2309.12307." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.15, + 0.488, + 0.229 + ], + "angle": 0, + "content": "Zhi Chen, Qiguang Chen, Libo Qin, Qipeng Guo, Hai-jun Lv, Yicheng Zou, Wanxiang Che, Hang Yan, Kai Chen, and Dahua Lin. 2024b. What are the essential factors in crafting effective long context multi-hop instruction datasets? insights and best practices. arXiv preprint arXiv:2409.01893." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.24, + 0.488, + 0.292 + ], + "angle": 0, + "content": "Alexis Chevalier, Alexander Wettig, Anirudh Ajith, and Danqi Chen. 2023. Adapting language models to compress contexts. arXiv preprint arXiv:2305.14788." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.304, + 0.488, + 0.369 + ], + "angle": 0, + "content": "Ta-Chung Chi, Ting-Han Fan, Peter J Ramadge, and Alexander Rudnicky. 2022a. Kerple: Kernelized relative positional embedding for length extrapolation. Advances in Neural Information Processing Systems, 35:8386-8399." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.381, + 0.488, + 0.434 + ], + "angle": 0, + "content": "Ta-Chung Chi, Ting-Han Fan, Alexander I Rudnicky, and Peter J Ramadge. 2022b. Dissecting transformer length extrapolation via the lens of receptive field analysis. arXiv preprint arXiv:2212.10356." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.445, + 0.488, + 0.485 + ], + "angle": 0, + "content": "Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.496, + 0.488, + 0.562 + ], + "angle": 0, + "content": "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.573, + 0.488, + 0.652 + ], + "angle": 0, + "content": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978-2988." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.663, + 0.488, + 0.703 + ], + "angle": 0, + "content": "Jacob Devlin. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.714, + 0.488, + 0.766 + ], + "angle": 0, + "content": "Suvodip Dey, Maunendra Sankar Desarkar, Asif Ekbal, and PK Srijith. 2022. Dialogen: Generalized long-range context representation for dialogue systems. arXiv preprint arXiv:2210.06282." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.777, + 0.488, + 0.843 + ], + "angle": 0, + "content": "Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, and Furu Wei. 2023. Longnet: Scaling transformers to 1,000,000,000 tokens. arXiv preprint arXiv:2307.02486." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.854, + 0.488, + 0.92 + ], + "angle": 0, + "content": "Zican Dong, Tianyi Tang, Junyi Li, Wayne Xin Zhao, and Ji-Rong Wen. 2023a. Bamboo: A comprehensive benchmark for evaluating long text modeling capacities of large language models. arXiv preprint arXiv:2309.13345." 
+ }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.92 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.086, + 0.881, + 0.126 + ], + "angle": 0, + "content": "Zican Dong, Tianyi Tang, Lunyi Li, and Wayne Xin Zhao. 2023b. A survey on long text modeling with transformers. arXiv preprint arXiv:2302.14502." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.137, + 0.883, + 0.202 + ], + "angle": 0, + "content": "Alexander R Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. arXiv preprint arXiv:1906.01749." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.214, + 0.883, + 0.279 + ], + "angle": 0, + "content": "Wenqi Fan, Yujuan Ding, Liangbo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, and Qing Li. 2024. A survey on rag meeting llms: Towards retrieval-augmented large language models. Preprint, arXiv:2405.06211." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.291, + 0.883, + 0.331 + ], + "angle": 0, + "content": "Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. Gptscore: Evaluate as you desire. arXiv preprint arXiv:2302.04166." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.342, + 0.883, + 0.395 + ], + "angle": 0, + "content": "Tao Ge, Jing Hu, Lei Wang, Xun Wang, Si-Qing Chen, and Furu Wei. 2023. In-context autoencoder for context compression in a large language model. arXiv preprint arXiv:2307.06945." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.406, + 0.883, + 0.445 + ], + "angle": 0, + "content": "Shahriar Golchin and Mihai Surdeanu. 2023. Time travel in llms: Tracing data contamination in large language models. arXiv preprint arXiv:2308.08493." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.457, + 0.883, + 0.522 + ], + "angle": 0, + "content": "Daya Guo, Canwen Xu, Nan Duan, Jian Yin, and Julian McAuley. 2023. Longcoder: A long-range pretrained language model for code completion. In International Conference on Machine Learning, pages 12098-12107. PMLR." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.534, + 0.883, + 0.639 + ], + "angle": 0, + "content": "Chi Han, Qifan Wang, Hao Peng, Wenhan Xiong, Yu Chen, Heng Ji, and Sinong Wang. 2024. Lm infinite: Zero-shot extreme length generalization for large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 3991-4008." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.65, + 0.883, + 0.703 + ], + "angle": 0, + "content": "Yaru Hao, Yutao Sun, Li Dong, Zhixiong Han, Yuxian Gu, and Furu Wei. 2022. Structured prompting: Scaling in-context learning to 1,000 examples. arXiv preprint arXiv:2212.06713." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.714, + 0.883, + 0.766 + ], + "angle": 0, + "content": "Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. 2020. Constructing a multi-hop qa dataset for comprehensive evaluation of reasoning steps. arXiv preprint arXiv:2011.01060." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.777, + 0.883, + 0.83 + ], + "angle": 0, + "content": "Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. 2021. Efficient attentions for long document summarization. arXiv preprint arXiv:2104.02112." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.842, + 0.883, + 0.92 + ], + "angle": 0, + "content": "Yunpeng Huang, Jingwei Xu, Junyu Lai, Zixu Jiang, Taolue Chen, Zenan Li, Yuan Yao, Xiaoxing Ma, Lijuan Yang, Hao Chen, et al. 2023. Advancing transformer architecture in long-context large language models: A comprehensive survey. arXiv preprint arXiv:2311.12351." + }, + { + "type": "list", + "bbox": [ + 0.511, + 0.086, + 0.883, + 0.92 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.943, + 0.511, + 0.955 + ], + "angle": 0, + "content": "10" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.486, + 0.113 + ], + "angle": 0, + "content": "NLG Improving. Not all metrics are guilty: Improving nlg evaluation with llm paraphrasing." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.124, + 0.487, + 0.229 + ], + "angle": 0, + "content": "Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2024a. LongLLM Lingua: Accelerating and enhancing LLMs in long context scenarios via prompt compression. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1658-1677, Bangkok, Thailand. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.24, + 0.486, + 0.318 + ], + "angle": 0, + "content": "Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023. Llm lingua: Compressing prompts for accelerated inference of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13358-13376." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.33, + 0.487, + 0.395 + ], + "angle": 0, + "content": "Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2024b. Longlmlingua: Accelerating and enhancing llms in long context scenarios via prompt compression. Preprint, arXiv:2310.06839." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.407, + 0.487, + 0.459 + ], + "angle": 0, + "content": "Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.47, + 0.487, + 0.536 + ], + "angle": 0, + "content": "Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Lavinia Dunagan, Jacob Morrison, Alexander R Fabbri, Yejin Choi, and Noah A Smith. 2021. Bidimensional leaderboards: Generate and evaluate language hand in hand. arXiv preprint arXiv:2112.04139." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.548, + 0.487, + 0.613 + ], + "angle": 0, + "content": "Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, and Siva Reddy. 2024. The impact of positional encoding on length generalization in transformers. Advances in Neural Information Processing Systems, 36." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.624, + 0.487, + 0.69 + ], + "angle": 0, + "content": "Tomáš Kočisky, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317-328." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.702, + 0.487, + 0.766 + ], + "angle": 0, + "content": "Wojciech Krysciński, Nazneen Rajani, Divyansh Agarwal, Caiming Xiong, and Dragomir Radev. 2021. Booksum: A collection of datasets for long-form narrative summarization. arXiv preprint arXiv:2105.08209." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.778, + 0.487, + 0.856 + ], + "angle": 0, + "content": "Ishita Kumar, Snigdha Viswanathan, Sushrita Yerra, Alireza Salemi, Ryan A Rossi, Franck Dernoncourt, Hanieh Deilamsalehy, Xiang Chen, Ruiyi Zhang, Shubham Agarwal, et al. 2024. Longlamp: A benchmark for personalized long-form text generation. arXiv preprint arXiv:2407.11016." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.868, + 0.487, + 0.919 + ], + "angle": 0, + "content": "M Lewis. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.086, + 0.882, + 0.152 + ], + "angle": 0, + "content": "Haoyang Li, Yiming Li, Anxin Tian, Tianhao Tang, Zhanchao Xu, Xuejia Chen, Nicole Hu, Wei Dong, Qing Li, and Lei Chen. 2025. A survey on large language model acceleration based on kv cache management. Preprint, arXiv:2412.19442." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.163, + 0.882, + 0.227 + ], + "angle": 0, + "content": "Jia-Nan Li, Quan Tu, Cunli Mao, Zhengtao Yu, JiRong Wen, and Rui Yan. 2024a. Streamingdialogue: Prolonged dialogue learning via long context compression with minimal losses. arXiv preprint arXiv:2403.08312." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.238, + 0.882, + 0.29 + ], + "angle": 0, + "content": "Jiaqi Li, Mengmeng Wang, Zilong Zheng, and Muhan Zhang. 2023a. Loogle: Can long-context language models understand long contexts? arXiv preprint arXiv:2311.04939." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.301, + 0.882, + 0.353 + ], + "angle": 0, + "content": "Mo Li, Songyang Zhang, Yunxin Liu, and Kai Chen. 2024b. Needlebench: Can llms do retrieval and reasoning in 1 million context window? arXiv preprint arXiv:2407.11963." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.364, + 0.882, + 0.416 + ], + "angle": 0, + "content": "Tianle Li, Ge Zhang, Quy Duc Do, Xiang Yue, and Wenhu Chen. 2024c. Long-context llms struggle with long in-context learning. arXiv preprint arXiv:2404.02060." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.427, + 0.882, + 0.48 + ], + "angle": 0, + "content": "Xianming Li, Zongxi Li, Xiaotian Luo, Haoran Xie, Xing Lee, Yingbin Zhao, Fu Lee Wang, and Qing Li. 2023b. Recurrent attention networks for long-text modeling. arXiv preprint arXiv:2306.06843." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.49, + 0.882, + 0.53 + ], + "angle": 0, + "content": "Xin Li and Dan Roth. 2002. Learning question classifiers. In COLING 2002: The 19th International Conference on Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.54, + 0.882, + 0.58 + ], + "angle": 0, + "content": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.59, + 0.882, + 0.656 + ], + "angle": 0, + "content": "Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2024a. Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics, 12:157-173." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.666, + 0.882, + 0.732 + ], + "angle": 0, + "content": "Shuaiqi Liu, Jiannong Cao, Zhongfen Deng, Wenting Zhao, Ruosong Yang, Zhiyuan Wen, and S Yu Philip. 2023a. Neural abstractive summarization for long text and multiple tables. IEEE Transactions on Knowledge and Data Engineering." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.742, + 0.882, + 0.794 + ], + "angle": 0, + "content": "Tianyang Liu, Canwen Xu, and Julian McAuley. 2023b. Repobench: Benchmarking repository-level code auto-completion systems. arXiv preprint arXiv:2306.03091." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.805, + 0.882, + 0.856 + ], + "angle": 0, + "content": "Xiaoran Liu, Hang Yan, Shuo Zhang, Chenxin An, Xipeng Qiu, and Dahua Lin. 2024b. Scaling laws of rope-based extrapolation. Preprint, arXiv:2310.05209." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.868, + 0.882, + 0.921 + ], + "angle": 0, + "content": "Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023c. G-eval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634." + }, + { + "type": "list", + "bbox": [ + 0.511, + 0.086, + 0.882, + 0.921 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.943, + 0.508, + 0.955 + ], + "angle": 0, + "content": "11" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.138 + ], + "angle": 0, + "content": "Zheheng Luo, Qianqian Xie, and Sophia Ananiadou. 2023. Chatgpt as a factual inconsistency evaluator for text summarization. arXiv preprint arXiv:2303.15621." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.15, + 0.487, + 0.202 + ], + "angle": 0, + "content": "Amirkeivan Mohtashami and Martin Jaggi. 2023. Landmark attention: Random-access infinite context length for transformers. arXiv preprint arXiv:2305.16300." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.214, + 0.487, + 0.266 + ], + "angle": 0, + "content": "Amirkeivan Mohtashami and Martin Jaggi. 2024. Random-access infinite context length for transformers. Advances in Neural Information Processing Systems, 36." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.278, + 0.487, + 0.343 + ], + "angle": 0, + "content": "Honglin Mu, Yang Xu, Yunlong Feng, Xiaofeng Han, Yitong Li, Yutai Hou, and Wanxiang Che. 2024a. Beyond static evaluation: A dynamic approach to assessing ai assistants' api invocation capabilities. Preprint, arXiv:2403.11128." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.355, + 0.487, + 0.395 + ], + "angle": 0, + "content": "Jesse Mu, Xiang Li, and Noah Goodman. 2024b. Learning to compress prompts with gist tokens. Advances in Neural Information Processing Systems, 36." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.406, + 0.487, + 0.47 + ], + "angle": 0, + "content": "Xuanfan Ni, Hengyi Cai, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, and Piji Li. 2024. \\(\\mathrm{XI}^2\\) bench: A benchmark for extremely long context understanding with long-range dependencies. arXiv preprint arXiv:2404.05446." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.483, + 0.487, + 0.51 + ], + "angle": 0, + "content": "NLPCC. 2014. Task definition for large scale text categorization at nlpcc 2014." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.521, + 0.487, + 0.587 + ], + "angle": 0, + "content": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.598, + 0.487, + 0.664 + ], + "angle": 0, + "content": "Saurav Pawar, SM Tonmoy, SM Zaman, Vinija Jain, Aman Chadha, and Amitava Das. 2024. The what, why, and how of context length extension techniques in large language models—a detailed survey. arXiv preprint arXiv:2401.07872." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.675, + 0.487, + 0.728 + ], + "angle": 0, + "content": "Bowen Peng and Jeffrey Quesnelle. 2023. Ntk-aware scaled rope allows llama models to have extended \\((8k+)\\) context size without any fine-tuning and minimal perplexity degradation." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.739, + 0.487, + 0.791 + ], + "angle": 0, + "content": "Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. 2023. Yarn: Efficient context window extension of large language models. arXiv preprint arXiv:2309.00071." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.803, + 0.487, + 0.855 + ], + "angle": 0, + "content": "Ofir Press, Noah A Smith, and Mike Lewis. 2021. Train short, test long: Attention with linear biases enables input length extrapolation. arXiv preprint arXiv:2108.12409." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.867, + 0.487, + 0.92 + ], + "angle": 0, + "content": "Libo Qin, Qiguang Chen, Xiachong Feng, Yang Wu, Yongheng Zhang, Yinghui Li, Min Li, Wanxiang Che, and Philip S Yu. 2024. Large language models meet nlp: A survey. arXiv preprint arXiv:2405.12819." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.92 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.086, + 0.882, + 0.15 + ], + "angle": 0, + "content": "Han Qiu, Jiaxing Huang, Peng Gao, Qin Qi, Xiaogin Zhang, Ling Shao, and Shijian Lu. 2024. Longhalqa: Long-context hallucination evaluation for multimodal large language models. Preprint, arXiv:2410.09962." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.161, + 0.881, + 0.188 + ], + "angle": 0, + "content": "Alec Radford. 2018. Improving language understanding by generative pre-training." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.197, + 0.882, + 0.262 + ], + "angle": 0, + "content": "Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2024. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.272, + 0.882, + 0.349 + ], + "angle": 0, + "content": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1-67." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.36, + 0.882, + 0.398 + ], + "angle": 0, + "content": "P Rajpurkar. 2016. 
Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.409, + 0.882, + 0.474 + ], + "angle": 0, + "content": "Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Inbal Magar, Omri Abend, Ehud Karpas, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2022. Parallel context windows for large language models. arXiv preprint arXiv:2212.10947." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.483, + 0.882, + 0.535 + ], + "angle": 0, + "content": "Ehud Reiter and Anja Belz. 2009. An investigation into the validity of some metrics for automatically evaluating natural language generation systems. Computational Linguistics, 35(4):529-558." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.545, + 0.882, + 0.609 + ], + "angle": 0, + "content": "Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. 2021. Efficient content-based sparse attention with routing transformers. Transactions of the Association for Computational Linguistics, 9:53-68." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.621, + 0.882, + 0.686 + ], + "angle": 0, + "content": "Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, et al. 2023. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.695, + 0.882, + 0.734 + ], + "angle": 0, + "content": "Ohad Rubin and Jonathan Berant. 2023. Long-range language modeling with self-retrieval. arXiv preprint arXiv:2306.13421." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.744, + 0.882, + 0.782 + ], + "angle": 0, + "content": "Senedd Cymru. 2024. Record of proceedings - senedd. https://record.assembly.wales/. Accessed: 2024-10-15." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.793, + 0.882, + 0.845 + ], + "angle": 0, + "content": "Uri Shaham, Maor Ivgi, Avia Efrat, Jonathan Berant, and Omer Levy. 2023. Zeroscrolls: A zero-shot benchmark for long text understanding. arXiv preprint arXiv:2305.14196." + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.855, + 0.882, + 0.92 + ], + "angle": 0, + "content": "Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, et al. 2022. **Scrolls: Standardized comparison over long language sequences. arXiv preprint arXiv:2201.03533.**" + }, + { + "type": "list", + "bbox": [ + 0.511, + 0.086, + 0.882, + 0.92 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.943, + 0.509, + 0.955 + ], + "angle": 0, + "content": "12" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.126 + ], + "angle": 0, + "content": "Eva Sharma, Chen Li, and Lu Wang. 2019. Bigpatent: A large-scale dataset for abstractive and coherent summarization. arXiv preprint arXiv:1906.03741." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.138, + 0.486, + 0.191 + ], + "angle": 0, + "content": "Chenhui Shen, Liying Cheng, Xuan-Phi Nguyen, Yang You, and Lidong Bing. 2023. Large language models are not yet human-level evaluators for abstractive summarization. arXiv preprint arXiv:2305.13091." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.204, + 0.487, + 0.256 + ], + "angle": 0, + "content": "Luohe Shi, Hongyi Zhang, Yao Yao, Zuchao Li, and Hai Zhao. 2024. 
Keep the cost down: A review on methods to optimize llm's kv-cache consumption. Preprint, arXiv:2407.18003." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.269, + 0.487, + 0.32 + ], + "angle": 0, + "content": "Dingjie Song, Shunian Chen, Guiming Hardy Chen, Fei Yu, Xiang Wan, and Benyou Wang. 2024. Milebench: Benchmarking mllms in long context. arXiv preprint arXiv:2404.18532." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.334, + 0.487, + 0.4 + ], + "angle": 0, + "content": "Amanda Stent, Matthew Marge, and Mohit Singhai. 2005. Evaluating evaluation methods for generation in the presence of variation. In International conference on intelligent text processing and computational linguistics, pages 341-351. Springer." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.412, + 0.487, + 0.439 + ], + "angle": 0, + "content": "Jianlin Su. 2023. Rectified rotary position embeddings. https://github.com/bojone/erope." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.451, + 0.487, + 0.504 + ], + "angle": 0, + "content": "Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. 2024. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.516, + 0.487, + 0.569 + ], + "angle": 0, + "content": "Simeng Sun, Kalpesh Krishna, Andrew Mattarella-Micke, and Mohit Iyyer. 2021. Do long-range language models actually use long-range context? arXiv preprint arXiv:2109.09115." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.581, + 0.487, + 0.634 + ], + "angle": 0, + "content": "Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary, Xia Song, and Furu Wei. 2022. A length-extrapolatable transformer. arXiv preprint arXiv:2212.10554." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.646, + 0.487, + 0.724 + ], + "angle": 0, + "content": "Haochen Tan, Zhijiang Guo, Zhan Shi, Lu Xu, Zhili Liu, Xiaoguang Li, Yasheng Wang, Lifeng Shang, Qun Liu, and Linqi Song. 2024. Proxyqa: An alternative framework for evaluating long-form text generation with large language models. arXiv preprint arXiv:2401.15042." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.737, + 0.487, + 0.828 + ], + "angle": 0, + "content": "Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng. 2020. Fourier features let networks learn high frequency functions in low dimensional domains. Advances in neural information processing systems, 33:7537-7547." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.842, + 0.487, + 0.919 + ], + "angle": 0, + "content": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.919 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.086, + 0.882, + 0.152 + ], + "angle": 0, + "content": "Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022. Musique: Multi-hop questions via single-hop question composition. Transactions of the Association for Computational Linguistics, 10:539-554." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.163, + 0.882, + 0.189 + ], + "angle": 0, + "content": "TVMEG. 2024. Tvmeg - tv mega engine. https://tvmeg.com/. Accessed: 2024-10-15." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.199, + 0.882, + 0.265 + ], + "angle": 0, + "content": "Szymon Tworkowski, Konrad Staniszewski, Mikołaj Pacek, Yuhuai Wu, Henryk Michalewski, and Piotr Miłos. 2024. Focused transformer: Contrastive training for context scaling. Advances in Neural Information Processing Systems, 36." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.275, + 0.882, + 0.302 + ], + "angle": 0, + "content": "A Vaswani. 2017. Attention is all you need. Advances in Neural Information Processing Systems." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.312, + 0.882, + 0.365 + ], + "angle": 0, + "content": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2023. Attention is all you need. Preprint, arXiv:1706.03762." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.375, + 0.882, + 0.428 + ], + "angle": 0, + "content": "Alex Wang, Richard Yuanzhe Pang, Angelica Chen, Jason Phang, and Samuel R Bowman. 2022. Squality: Building a long-document summarization dataset the hard way. arXiv preprint arXiv:2205.11465." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.438, + 0.882, + 0.491 + ], + "angle": 0, + "content": "Ben Wang and Aran Komatsuzaki. 2021. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/mesh-transformer-jax." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.501, + 0.882, + 0.553 + ], + "angle": 0, + "content": "Chonghua Wang, Haodong Duan, Songyang Zhang, Dahua Lin, and Kai Chen. 2024a. Ada-level: Evaluating long-context llms with length-adaptable benchmarks. arXiv preprint arXiv:2404.06480." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.564, + 0.882, + 0.629 + ], + "angle": 0, + "content": "Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023a. Is chatgpt a good nlg evaluator? a preliminary study. arXiv preprint arXiv:2303.04048." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.64, + 0.882, + 0.693 + ], + "angle": 0, + "content": "Peiyi Wang, Lei Li, Liang Chen, Zefan Cai, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023b. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.703, + 0.882, + 0.768 + ], + "angle": 0, + "content": "Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, and Furu Wei. 2024b. Augmenting language models with long-term memory. Advances in Neural Information Processing Systems, 36." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.779, + 0.882, + 0.869 + ], + "angle": 0, + "content": "Xiao Wang, Shiao Wang, Yuhe Ding, Yuehang Li, Wentao Wu, Yao Rong, Weizhe Kong, Ju Huang, Shihao Li, Haoxiang Yang, Ziwen Wang, Bo Jiang, Chenglong Li, Yaowei Wang, Yonghong Tian, and Jin Tang. 2024c. State space model for new-generation network alternative to transformers: A survey. Preprint, arXiv:2404.09516." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.88, + 0.882, + 0.92 + ], + "angle": 0, + "content": "Ning Wu, Ming Gong, Linjun Shou, Shining Liang, and Daxin Jiang. 2023. Large language models are diverse role-players for summarization evaluation. 
In" + }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.882, + 0.92 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.943, + 0.509, + 0.955 + ], + "angle": 0, + "content": "13" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.135, + 0.086, + 0.489, + 0.127 + ], + "angle": 0, + "content": "CCF International Conference on Natural Language Processing and Chinese Computing, pages 695-707. Springer." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.135, + 0.487, + 0.175 + ], + "angle": 0, + "content": "Yuhuai Wu, Markus N Rabe, DeLesley Hutchins, and Christian Szegedy. 2022. Memorizing transformers. arXiv preprint arXiv:2203.08913." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.184, + 0.487, + 0.263 + ], + "angle": 0, + "content": "Chaojun Xiao, Pangle Zhang, Xu Han, Guangxuan Xiao, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Song Han, and Maosong Sun. 2024. Inflamm: Unveiling the intrinsic capacity of llms for understanding extremely long sequences with training-free memory. arXiv preprint arXiv:2402.04617." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.272, + 0.486, + 0.323 + ], + "angle": 0, + "content": "Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2023. Efficient streaming language models with attention sinks. arXiv preprint arXiv:2309.17453." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.333, + 0.487, + 0.385 + ], + "angle": 0, + "content": "Xin Xu, Tong Xiao, Zitong Chao, Zhenya Huang, Can Yang, and Yang Wang. 2024. Can llms solve longer math word problems better? arXiv preprint arXiv:2405.14804." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.395, + 0.487, + 0.461 + ], + "angle": 0, + "content": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.47, + 0.487, + 0.509 + ], + "angle": 0, + "content": "Dian Yu, Kai Sun, Claire Cardie, and Dong Yu. 2020. Dialogue-based relation extraction. arXiv preprint arXiv:2004.08056." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.52, + 0.487, + 0.571 + ], + "angle": 0, + "content": "Hao Yu, Aoran Gan, Kai Zhang, Shiwei Tong, Qi Liu, and Zhaofeng Liu. 2024. Evaluation of retrieval-augmented generation: A survey. Preprint, arXiv:2405.07437." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.581, + 0.487, + 0.647 + ], + "angle": 0, + "content": "Lili Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, and Mike Lewis. 2023. Megabyte: Predicting million-byte sequences with multiscale transformers. Advances in Neural Information Processing Systems, 36:78808-78823." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.656, + 0.487, + 0.722 + ], + "angle": 0, + "content": "Tao Yuan, Xuefei Ning, Dong Zhou, Zhijie Yang, Shiyao Li, Minghui Zhuang, Zheyue Tan, Zhuyu Yao, Dahua Lin, Boxun Li, et al. 2024. Lv-eval: A balanced long-context benchmark with 5 length levels up to 256k. arXiv preprint arXiv:2402.05136." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.731, + 0.487, + 0.783 + ], + "angle": 0, + "content": "Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. Bartscore: Evaluating generated text as text generation. Advances in Neural Information Processing Systems, 34:27263-27277." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.793, + 0.487, + 0.845 + ], + "angle": 0, + "content": "Daoguang Zan, Bei Chen, Zeqi Lin, Bei Guan, Yongji Wang, and Jian-Guang Lou. 2022. When language model meets private library. arXiv preprint arXiv:2210.17236." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.854, + 0.487, + 0.92 + ], + "angle": 0, + "content": "Fengji Zhang, Bei Chen, Yue Zhang, Jacky Keung, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, and Weizhu Chen. 2023a. Repocoder: Repository-level code completion through iterative retrieval and generation. arXiv preprint arXiv:2303.12570." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.489, + 0.92 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.086, + 0.882, + 0.139 + ], + "angle": 0, + "content": "Lei Zhang, Yunshui Li, Ziqiang Liu, Junhao Liu, Min Yang, et al. 2023b. Marathon: A race through the realm of long context with large language models. arXiv preprint arXiv:2312.09542." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.148, + 0.882, + 0.199 + ], + "angle": 0, + "content": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. arxiv. arXiv preprint arXiv:1904.09675." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.209, + 0.882, + 0.301 + ], + "angle": 0, + "content": "Xinrong Zhang, Yingfa Chen, Shengding Hu, Zihang Xu, Junhao Chen, Moo Hao, Xu Han, Zhen Thai, Shuo Wang, Zhiyuan Liu, et al. 2024. \\(\\propto\\) bench: Extending long context evaluation beyond 100k tokens. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15262-15277." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.309, + 0.882, + 0.361 + ], + "angle": 0, + "content": "Liang Zhao, Xiaocheng Feng, Xiachong Feng, Bin Qin, and Ting Liu. 2023. Length extrapolation of transformers: A survey from the perspective of position encoding. arXiv preprint arXiv:2312.17044." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.37, + 0.882, + 0.436 + ], + "angle": 0, + "content": "Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu, Ling Yang, Wentao Zhang, Jie Jiang, and Bin Cui. 2024. Retrieval-augmented generation for ai-generated content: A survey. Preprint, arXiv:2402.19473." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.444, + 0.882, + 0.523 + ], + "angle": 0, + "content": "Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2024. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.532, + 0.882, + 0.624 + ], + "angle": 0, + "content": "Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Lei Shen, Zihan Wang, Andi Wang, Yang Li, et al. 2023. Codegeex: A pre-trained model for code generation with multilingual benchmarking on humaneval-x. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 5673-5684." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.632, + 0.882, + 0.709 + ], + "angle": 0, + "content": "Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, et al. 2021. Qmsum: A new benchmark for query-based multidomain meeting summarization. 
arXiv preprint arXiv:2104.05938." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.719, + 0.882, + 0.785 + ], + "angle": 0, + "content": "Zixuan Zhou, Xuefei Ning, Ke Hong, Tianyu Fu, Ji-aming Xu, Shiyao Li, Yuming Lou, Luning Wang, Zhihang Yuan, Xiuhong Li, et al. 2024. A survey on efficient inference for large language models. arXiv preprint arXiv:2404.14294." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.793, + 0.882, + 0.846 + ], + "angle": 0, + "content": "Dawei Zhu, Liang Wang, Nan Yang, Yifan Song, Wenhao Wu, Furu Wei, and Sujian Li. 2024. Longembed: Extending embedding models for long context retrieval. arXiv preprint arXiv:2404.12096." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.854, + 0.882, + 0.92 + ], + "angle": 0, + "content": "Ziyu Zhuang, Qiguang Chen, Longxuan Ma, Mingda Li, Yi Han, Yushan Qian, Haopeng Bai, Zixian Feng, Weinan Zhang, and Ting Liu. 2023. Through the lens of core competency: Survey on evaluation of large language models. ArXiv, abs/2308.07902." + }, + { + "type": "list", + "bbox": [ + 0.511, + 0.086, + 0.882, + 0.92 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.943, + 0.51, + 0.954 + ], + "angle": 0, + "content": "14" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.115, + 0.085, + 0.343, + 0.102 + ], + "angle": 0, + "content": "A Details of Approaches" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.11, + 0.49, + 0.174 + ], + "angle": 0, + "content": "This section serves as a supplement to the Approaches section 3 in the main text, expanding on relevant details about related methods to provide readers with a deeper understanding." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.184, + 0.317, + 0.2 + ], + "angle": 0, + "content": "A.1 Postional Encoding" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.204, + 0.317, + 0.219 + ], + "angle": 0, + "content": "A.1.1 Variants of RoPE" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.223, + 0.49, + 0.336 + ], + "angle": 0, + "content": "Su et al. (2024) try to seek a positional encoding method that could encode relative position during the computing query and key similarity, and decompose this process into the representations of the query and key. 
They conduct a theoretical analysis and propose a novel positional encoding, which transforms the similarity into the following formula:"
It then scales the relative positions of tokens outside the window."
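To make the index-adjustment idea concrete, the following is a minimal sketch (our own illustration, not code from the cited works) of how scaled position indices could be produced for Position Interpolation and a ReRoPE-style scheme; the function names and the exact compression rule applied outside the local window are assumptions chosen only to mirror the descriptions above.

```python
# Illustrative sketch (ours): position-index adjustment for PI and a ReRoPE-style
# scheme. `train_len` is the pre-trained window L, `seq_len` the current input
# length L', and `window` the unscaled local window kept exact by ReRoPE.

def pi_positions(seq_len: int, train_len: int) -> list[float]:
    """Position Interpolation: m' = m * L / L' (non-integer positions allowed)."""
    scale = train_len / seq_len
    return [m * scale for m in range(seq_len)]

def rerope_relative_position(m: int, n: int, window: int,
                             train_len: int, seq_len: int) -> float:
    """ReRoPE-style relative position between query m and key n (m >= n).

    Distances within `window` are kept exact; larger distances are compressed
    so that the maximum distance still fits the pre-trained range (assumption).
    """
    dist = m - n
    if dist < window:
        return float(dist)
    scale = (train_len - window) / max(seq_len - window, 1)
    return window + (dist - window) * scale

if __name__ == "__main__":
    print(pi_positions(8, 4))                                  # positions squeezed into [0, 4)
    print(rerope_relative_position(4096, 0, 512, 2048, 4096))  # 2048.0
```

Under this construction, the largest relative distance in a 4096-token input is mapped back onto the 2048-token pre-trained range, while distances inside the local window are left untouched.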
+ }, + { + "type": "text", + "bbox": [ + 0.113, + 0.15, + 0.49, + 0.311 + ], + "angle": 0, + "content": "The predefined \\( f_{bias} \\) is typically ALiBi (Attention with Linear Biases) (Press et al., 2021). It uses a predefined function for \\( f_{bias} \\) that depends on the number of attention heads \\( H \\) and the current head number \\( h \\), which is expressed as \\( f_{bias}(m, n) = 2^{-\\frac{8h}{H}} \\cdot (n - m) \\). Besides, in Sandwich method (Chi et al., 2022b), \\( f_{bias} \\) is defined as \\( f_{bias} = \\frac{8h}{H} \\cdot (p_m^\\top p_n - \\frac{d}{2}) \\), where \\( p_m \\) and \\( p_n \\) are the sinusoidal positional encoding used in the original Transformer model." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.328, + 0.335, + 0.343 + ], + "angle": 0, + "content": "A.2 Context Compression" + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.351, + 0.318, + 0.366 + ], + "angle": 0, + "content": "A.2.1 Soft Compression" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.372, + 0.486, + 0.402 + ], + "angle": 0, + "content": "This kind of methods achieve compression at the hidden states level." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.405, + 0.489, + 0.597 + ], + "angle": 0, + "content": "Bulatov et al. (2022) introduced the Recurrent Memory Transformer (RMT), which compresses at segment level. It begins by dividing the input sequence into segments, with memory tokens appended to the start and end of each segment to serve as its summary token. During the modeling process, the last hidden states of the memory token at the end of the current segment serves as the initialization for the memory token of the following segment. Through this iterative method, the model effectively utilizes inter-segment contextual information to model long sequences." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.599, + 0.49, + 0.87 + ], + "angle": 0, + "content": "Similarly, the Recurrent Attention Network (RAN, Li et al., 2023b) appends a Global Perception Cell (GPC) vector at the start of the hidden vector representation of each segment to achieve a compressed representation achieving the effect of concatenating summary tokens, and completing the information interaction between segments. This method simulates the human mechanism of memory enhancement through review, introducing a Memory Review scheme which performs crossattention between last hidden states of the GPC from all segments and and the original input to update the representation of GPC. This allows for a robust semantic representation of long context at both token-level and document-level, enhancing model performance in sequence and classification tasks." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.874, + 0.49, + 0.922 + ], + "angle": 0, + "content": "AutoCompressors (Chevalier et al., 2023) is built on the basis of RMT, compressing the content of the segment into summary vectors for representa" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.883, + 0.165 + ], + "angle": 0, + "content": "tion. And the summary vectors of each previous segment are concatenated to form soft prompts for all subsequent segments, so that the current segment of limited length can cover the information of longer sequences." 
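As a concrete instance of the additive bias in Eq. (4), the following toy sketch (ours, not reference code) adds a predefined linear distance penalty of the ALiBi form discussed in the next paragraph to standard dot-product attention logits; the head indexing, array shapes, and causal masking are assumptions made only for the demonstration.

```python
import numpy as np

# Toy illustration (ours) of sim(q_m, k_n) = q_m^T k_n + f_bias(m, n).

def alibi_bias(m: int, n: int, head: int, num_heads: int) -> float:
    """Predefined bias f_bias(m, n) = 2^(-8h/H) * (n - m) for head h of H heads."""
    return 2.0 ** (-8.0 * head / num_heads) * (n - m)

def biased_scores(Q: np.ndarray, K: np.ndarray, head: int, num_heads: int) -> np.ndarray:
    """Causal attention logits with an additive relative-position bias."""
    T, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    for m in range(T):
        for n in range(T):
            scores[m, n] += alibi_bias(m, n, head, num_heads)
    causal = np.triu(np.ones((T, T), dtype=bool), k=1)   # keys with n > m are masked
    scores[causal] = -np.inf
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(6, 16))
    K = rng.normal(size=(6, 16))
    print(biased_scores(Q, K, head=1, num_heads=8).round(2))
```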
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.167, + 0.885, + 0.327 + ], + "angle": 0, + "content": "In addition, In-context Autoencoder (ICAE, Ge et al., 2023) adds memory tokens at the end of the input sequence to compress context into short memory slots while training the model to generate outputs closely resembling the original context. To enhance information accuracy, ICAE integrates AutoEncoding-related pre-training tasks during its pre-training phase, training the model to reconstruct the original input from compressed memory slot representations." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.329, + 0.883, + 0.377 + ], + "angle": 0, + "content": "Gisting (Mu et al., 2024b) similarly compresses the prompt part of the input token sequence into shorter gist tokens, improving inference speed." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.39, + 0.724, + 0.404 + ], + "angle": 0, + "content": "A.2.2 Hard Compression" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.41, + 0.884, + 0.442 + ], + "angle": 0, + "content": "Hard compression directly utilizes LLMs to compress original input text." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.444, + 0.884, + 0.603 + ], + "angle": 0, + "content": "LLMLingua (Jiang et al., 2023) trains a small model to align with the output of LLM and uses the perplexity (PPL) of the small model as an evaluation for token importance. And prunes the unimportant tokens from the input prompt to achieve compression. Furthe, LongLLMLingua (Jiang et al., 2024a) has made improvements on this basis, compressing the input based on the content of the question, thus better preserving key information related to the question." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.605, + 0.884, + 0.717 + ], + "angle": 0, + "content": "Differently, MEMWALKER (Chen et al., 2023a) employs a hierarchical summarization approach to compress long context sequences, iteratively summarizing the input to construct a tree-like structure of summarized content. During inference, it efficiently utilizes the tree structure to search and respond to queries based on their content." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.733, + 0.727, + 0.747 + ], + "angle": 0, + "content": "A.3 Retrieval Augmented" + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.755, + 0.745, + 0.77 + ], + "angle": 0, + "content": "A.3.1 Retrieval Granularity" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.776, + 0.881, + 0.823 + ], + "angle": 0, + "content": "The retrieval granularity in existing work can be divided into two categories: token-level retrieval and block-level retrieval." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.826, + 0.884, + 0.921 + ], + "angle": 0, + "content": "Token-level retrieval is to select top-k tokens with highest similarity scores in one turn. This method is widely used in existing (Wu et al., 2022; Tworkowski et al., 2024; Bertsch et al., 2024a). It is simple to implement, but it has some limitations. Such as the potential for semantic discontinuities" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.943, + 0.511, + 0.955 + ], + "angle": 0, + "content": "16" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.492, + 0.132 + ], + "angle": 0, + "content": "due to discrete token retrieval and the need to recalculate similarity for all tokens, which is computationally intensive and inefficient." 
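The recurrent segment-level compression used by RMT and AutoCompressors can be sketched as follows (our illustration, not the released code); for brevity the memory tokens are only prepended to each segment, and the model sizes and segment lengths are arbitrary choices.

```python
import torch
import torch.nn as nn

# Minimal sketch (ours): each segment is encoded together with a few learnable
# memory tokens, and the final hidden states of those tokens are carried forward
# as the "memory" of all previous segments.

class SegmentMemoryEncoder(nn.Module):
    def __init__(self, d_model: int = 64, n_mem: int = 4):
        super().__init__()
        self.mem = nn.Parameter(torch.randn(n_mem, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.n_mem = n_mem

    def forward(self, segments: list[torch.Tensor]) -> torch.Tensor:
        memory = self.mem.unsqueeze(0)                  # (1, n_mem, d)
        for seg in segments:                            # seg: (1, seg_len, d)
            x = torch.cat([memory, seg], dim=1)         # prepend memory tokens
            h = self.encoder(x)
            memory = h[:, : self.n_mem, :]              # compressed state for the next segment
        return memory                                   # summary of all segments

if __name__ == "__main__":
    model = SegmentMemoryEncoder()
    segs = [torch.randn(1, 32, 64) for _ in range(3)]   # three segments
    print(model(segs).shape)                            # torch.Size([1, 4, 64])
```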
+ }, + { + "type": "text", + "bbox": [ + 0.117, + 0.136, + 0.492, + 0.568 + ], + "angle": 0, + "content": "Consequently, researchers have proposed block-level retrieval, which uses blocks composed of continuous tokens of a fixed length as the retrieval unit. Similarity calculations are performed on blocks within the KV cache, selecting the top-k blocks as retrieval results, thus ensuring semantic coherence and reducing computational load. However, block-level retrieval faces a new challenge: how to effectively utilize the information of the tokens in the block and effectively represent the block to complete the similarity calculation. Long-MEM (Wang et al., 2024b) and RPT (Rubin and Berant, 2023) represent the corresponding block by calculating the mean pooling of token representations within the block. InFLLM (Xiao et al., 2024) calculates the representative score of each token within the block against other tokens, selecting a subset of high-scoring tokens to represent the block. Additionally, some methods introduce an extra token to represent blocks, such as the Landmark method (Mohtashami and Jaggi, 2024) introduces the Landmark token, a new token into the vocabulary, and placie it at the end of each block. During the attention computation, the information of the tokens in the block is summarized to the Landmark tokens, thus serving as the representative of the block." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.58, + 0.367, + 0.595 + ], + "angle": 0, + "content": "A.3.2 Similarity Computation" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.6, + 0.491, + 0.713 + ], + "angle": 0, + "content": "After determining the retrieval granularity, we need to formulate an appropriate rule to compute similarity. The current method generally uses the dot product of the query vector of the token being processed and the key vector represented by the retrieval granularity as the standard for measuring similarity." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.724, + 0.338, + 0.74 + ], + "angle": 0, + "content": "A.3.3 Positional Encoding" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.744, + 0.489, + 0.808 + ], + "angle": 0, + "content": "Since the positions of the retrieved context tokens are not fixed, and recording each token's specific position in the KV cache is costly, it is challenging to provide accurate position information." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.81, + 0.491, + 0.922 + ], + "angle": 0, + "content": "Based on experiments of Dai et al. (2019), which show that the relative position information of distant tokens does not seem to be important, some methods like MemTRM, FoT, and InfLLM choose to uniformly set the position encoding of the retrieved context token part to the same position vector, ignoring the position information between the" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.085, + 0.78, + 0.1 + ], + "angle": 0, + "content": "retrieved context tokens themselves." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.101, + 0.885, + 0.165 + ], + "angle": 0, + "content": "Besides, Landmark places the retrieved context tokens and local context tokens within the same window and re-encodes their relative positions together." 
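Before turning to block-level retrieval, the following toy sketch (ours) contrasts the two granularities; the block representation uses the mean pooling adopted by LongMEM and RPT as described below, and the shapes and value of k are arbitrary.

```python
import numpy as np

# Small sketch (ours) of the two retrieval granularities in A.3.1. `keys` stands
# for the cached key vectors of past tokens, `query` for the current token's query.

def topk_tokens(query: np.ndarray, keys: np.ndarray, k: int) -> np.ndarray:
    """Token-level retrieval: indices of the k most similar cached tokens."""
    scores = keys @ query
    return np.argsort(-scores)[:k]

def topk_blocks(query: np.ndarray, keys: np.ndarray, block: int, k: int) -> np.ndarray:
    """Block-level retrieval: mean-pool each block, score blocks, and return the
    token indices covered by the k best blocks (keeps local continuity)."""
    n_blocks = keys.shape[0] // block
    reps = keys[: n_blocks * block].reshape(n_blocks, block, -1).mean(axis=1)
    best = np.argsort(-(reps @ query))[:k]
    return np.concatenate([np.arange(b * block, (b + 1) * block) for b in best])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    keys, query = rng.normal(size=(64, 8)), rng.normal(size=8)
    print(topk_tokens(query, keys, k=4))
    print(topk_blocks(query, keys, block=8, k=2))
```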
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.175, + 0.745, + 0.189 + ], + "angle": 0, + "content": "A.3.4 Attention Calculation" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.195, + 0.884, + 0.242 + ], + "angle": 0, + "content": "When it comes to attention calculation, it's important to find a suitable method to make full use of retrieved context tokens and local context tokens." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.243, + 0.885, + 0.337 + ], + "angle": 0, + "content": "The simplest approach is to treat both types of tokens equally, that is using the conventional attention calculation method. For example, FoT and InfLLM use standard attention for calculation, while Unlimformer (Bertsch et al., 2024a) employs cross attention." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.34, + 0.886, + 0.564 + ], + "angle": 0, + "content": "However, the importance of the information contained within these two types of context tokens is not the same for the token currently being processed. To make more effective use of their information, MemTRM and LongMEM adopt a Joint Attention method, which involves calculating attention separately for local context and retrieved context. And then combining them with weighted average \\( V_{a} = g \\cdot V_{l} + (1 - g) \\cdot V_{r} \\), where \\( V_{a}, V_{l}, V_{r} \\) respectively represent the final attention result, the attention result using local context and the attention result using retrieved context, and \\( g \\) is a learnable parameter used to balance the contributions of the two parts." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.565, + 0.886, + 0.773 + ], + "angle": 0, + "content": "Furthermore, in order to distinguish the information from different positions within the retrieved context tokens in a more fine-grained manner, Landmark employs the Grouped Softmax method. Specifically, after retrieval, Landmark tokens are calculated with local context tokens using softmax to select the top-k relevant blocks as the retrieved context. Attention is then calculated separately within these blocks. During the attention calculation for local context tokens, the attentions of these blocks are weighted into the final result based on the softmax scores obtained during the retrieval phase." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.786, + 0.7, + 0.8 + ], + "angle": 0, + "content": "A.4 Attention Pattern" + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.806, + 0.7, + 0.822 + ], + "angle": 0, + "content": "A.4.1 Sliding Window" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.826, + 0.884, + 0.922 + ], + "angle": 0, + "content": "This type of method tranform information between segments. Transformer-XL (Dai et al., 2019) uses sliding window method to process long context, where the hidden state from the previous segment is concatenated to the front of the current segment. It not only utilizes the key and value information" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.942, + 0.511, + 0.955 + ], + "angle": 0, + "content": "17" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.488, + 0.165 + ], + "angle": 0, + "content": "from the current segment but also reuses those from the previous segment. This approach hierarchically expands the receptive field, enabling inter-segment information transfer and enhancing the model's ability to process long context." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.167, + 0.49, + 0.408 + ], + "angle": 0, + "content": "Besides, Han et al. 
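The two positional treatments described in this subsection can be illustrated with a small sketch (ours); the function names and the choice of the shared position id are assumptions made for the demonstration.

```python
# Sketch (ours) of position-id assignment for a concatenation of `r` retrieved
# context tokens followed by `l` local context tokens.

def shared_position_ids(r: int, l: int, shared_id: int = 0) -> list[int]:
    """MemTRM/FoT/InfLLM-style: all retrieved tokens share one position id."""
    return [shared_id] * r + list(range(1, l + 1))

def reencoded_position_ids(r: int, l: int) -> list[int]:
    """Landmark-style: retrieved and local tokens are re-encoded contiguously."""
    return list(range(r + l))

if __name__ == "__main__":
    print(shared_position_ids(4, 3))     # [0, 0, 0, 0, 1, 2, 3]
    print(reencoded_position_ids(4, 3))  # [0, 1, 2, 3, 4, 5, 6]
```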
(2024) identify that the starting tokens of a sequence occupy a distinct feature space and that these tokens are a key factor behind length-generalization failures. They further propose LM-Infinite as a solution, which applies a \\(\\Lambda\\)-shaped attention mask during attention calculation: attention is restricted to a small number of initial tokens plus the tokens close to the token currently being processed. Similarly, StreamingLLM (Xiao et al., 2023) finds that the initial tokens in a sequence significantly influence the attention calculation of subsequent tokens and cannot be ignored. Both LM-Infinite and StreamingLLM therefore adopt a similar approach, maintaining attention on the starting tokens while preserving information about nearby tokens."
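A toy construction of the \\(\\Lambda\\)-shaped pattern shared by LM-Infinite and StreamingLLM is sketched below (our illustration, not the papers' code); the numbers of sink tokens and window tokens are arbitrary choices for the demo.

```python
import numpy as np

# Toy sketch (ours): every query may attend to a few "sink" tokens at the start
# of the sequence plus a causal sliding window of recent tokens.

def lambda_mask(seq_len: int, n_sink: int = 4, window: int = 8) -> np.ndarray:
    """Boolean mask where mask[q, k] is True if query q may attend to key k."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for q in range(seq_len):
        mask[q, : min(n_sink, q + 1)] = True              # initial (sink) tokens
        mask[q, max(0, q - window + 1): q + 1] = True     # recent tokens (causal)
    return mask

if __name__ == "__main__":
    m = lambda_mask(12, n_sink=2, window=4)
    print(m.astype(int))   # each row keeps the first tokens and a local band
```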
By performing attention hierarchically in this way, from finer to coarser granularity, the overall amount of attention computation can be reduced."
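A back-of-the-envelope comparison (ours) makes the saving explicit: with sequence length \\(T\\) and patch size \\(P\\), a two-level scheme pays roughly \\((T/P)^{2}\\) global plus \\((T/P)\\cdot P^{2}\\) local pairwise interactions instead of \\(T^{2}\\); constants and model widths are ignored.

```python
# Rough cost comparison (ours) between flat attention and a MEGABYTE-style
# two-level scheme with patch size P, counting pairwise interactions only.

def flat_cost(T: int) -> int:
    return T * T

def two_level_cost(T: int, P: int) -> int:
    n_patches = T // P
    return n_patches * n_patches + n_patches * P * P   # global + per-patch local

if __name__ == "__main__":
    T, P = 1_000_000, 1_000
    print(f"flat:      {float(flat_cost(T)):.3e}")      # 1e12 interactions
    print(f"two-level: {float(two_level_cost(T, P)):.3e}")  # about 1e9
```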
Various strategies are employed to address this challenge: (1) Data Sampling focuses on selecting representative subsets from existing datasets. (2) Keyword Substituting & Sentence Rewriting modifies existing datasets by replacing keywords and rewriting sentences. (3) Non-overlapping Data Leveraging involves using datasets released after the deployment of LLMs to reduce potential overlap between test and training data." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.483, + 0.49, + 0.613 + ], + "angle": 0, + "content": "Data Sampling Data sampling primarily focuses on filtering existing datasets. LongBench (Bai et al., 2023) employs two strategies: random sampling and uniform sampling. Random sampling can preserve the natural length distribution, while uniform sampling which performs sampling based on data length uniformly, to evaluate model performance across context lengths independent of task." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.631, + 0.49, + 0.743 + ], + "angle": 0, + "content": "Keyword Substituting & Sentence Rewriting \nL-Eval (An et al., 2023) and BAMBOO (Dong et al., 2023a) replace keywords and function names, while \\(\\infty\\)Bench (Zhang et al., 2024) substitutes key entities in novel reasoning tasks. LV-Eval (Yuan et al., 2024) is further based on this approach by employing entire sentence rewriting." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.761, + 0.492, + 0.922 + ], + "angle": 0, + "content": "Non-overlapping Data Leveraging To mitigate the overlap between test and training data for LLMs, some benchmarks such as LooGLE (Li et al., 2023a) and BAMBOO (Dong et al., 2023a) have employed datasets released after the models' deployment. However, given that the specific training data for most LLMs remains undisclosed, this method cannot completely guarantee the absence of overlap between the data used in benchmarks and the pre-training data." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.085, + 0.605, + 0.099 + ], + "angle": 0, + "content": "B.2 Tasks" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.107, + 0.885, + 0.218 + ], + "angle": 0, + "content": "The following are the details of the tasks, which are introduced in the order of the main text. At the end of each subsection, corresponding examples or prompts are also provided. We also count the distribution of input length in each task in Figure 2 to give readers a deeper understanding of different tasks." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.23, + 0.736, + 0.246 + ], + "angle": 0, + "content": "B.2.1 Question Answering" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.25, + 0.886, + 0.346 + ], + "angle": 0, + "content": "Single-hop Question Answering Representative datasets in this field are SQuAD (Rajpurkar, 2016), TriviaQA (Joshi et al., 2017), and NarrativeQA (Kočisky et al., 2018). Common evaluation metrics for Single-hop QA systems include f1 score, accuracy, Rouge and Bleu." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.357, + 0.886, + 0.452 + ], + "angle": 0, + "content": "Multi-hop Question Answering Common datasets for Multi-hop Question Answering include 2WikiMQA (Ho et al., 2020), MuSiQue (Trivedi et al., 2022), and HotpotQA (Yang et al., 2018). Evaluation metrics typically used are f1 score, exact match (EM)." 
+ }, + { + "type": "title", + "bbox": [ + 0.509, + 0.464, + 0.743, + 0.479 + ], + "angle": 0, + "content": "B.2.2 Needle-in-a-Haystack" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.483, + 0.886, + 0.821 + ], + "angle": 0, + "content": "Retrieval.PassKey (Mohtashami and Jaggi, 2023) requires models to locate a randomly generated 5-digit sequence within lengthy and noisy contexts. \\(\\infty\\)Bench (Zhang et al., 2024) extends the Retrieval.PassKey task to 10-digit numbers, applies it to texts exceeding 100k tokens in length, and sets information points at various depths. Retrieval.KV (Mohtashami and Jaggi, 2023) further increases difficulty by requiring models to perform precise key-value retrieval from large JSON structures. NeedleBench Li et al. (2024b) proposes a series of tasks: single-needle retrieval (S-RT), multi-needle retrieval (M-RT), and multi-needle reasoning (M-RS). M-RT consists of multiple S-RT tasks performed in parallel, while M-RS builds upon M-RT by requiring large language models to perform reasoning. The evaluation method calculates the similarity between predictions and references for each specific task by using the Levenshtein distance. The following are examples of S-RT, M-RT, M-RS respectively." + }, + { + "type": "text", + "bbox": [ + 0.522, + 0.833, + 0.871, + 0.865 + ], + "angle": 0, + "content": "S-RT: Hidden on Emerald Island is the legendary Stardust Shard." + }, + { + "type": "text", + "bbox": [ + 0.522, + 0.866, + 0.871, + 0.897 + ], + "angle": 0, + "content": "Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul" + }, + { + "type": "text", + "bbox": [ + 0.527, + 0.898, + 0.871, + 0.914 + ], + "angle": 0, + "content": "Graham Essays—Paul Graham Essays—" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.942, + 0.511, + 0.955 + ], + "angle": 0, + "content": "19" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.122, + 0.085, + 0.881, + 0.221 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.234, + 0.883, + 0.279 + ], + "angle": 0, + "content": "Figure 2: Distribution of averaged input #words of datasets in each task. Consistent colors indicate identical categories. The color of each bar refers to the category of the task, with bars of the same color belonging to the same category." + }, + { + "type": "text", + "bbox": [ + 0.126, + 0.304, + 0.477, + 0.432 + ], + "angle": 0, + "content": "Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays" + }, + { + "type": "text", + "bbox": [ + 0.129, + 0.434, + 0.474, + 0.528 + ], + "angle": 0, + "content": "Now, the question is: What legendary item is hidden on Emerald Island? Before answering, please consider what in the document is most relevant to this question. Please answer in the format 'The legendary item hidden on the Emerald Island is" + }, + { + "type": "text", + "bbox": [ + 0.129, + 0.543, + 0.474, + 0.653 + ], + "angle": 0, + "content": "M-RT: You are an intelligent AI assistant skilled in answering user questions base on documents provided by the user. 
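For reference, the exact match and token-level F1 metrics mentioned above can be computed as in the following sketch (ours), which follows the common SQuAD-style recipe; the official normalization additionally strips articles, which is omitted here for brevity.

```python
import re
from collections import Counter

# Sketch (ours) of exact match and token-level F1 with simple normalization
# (lowercasing, punctuation removal, whitespace tokenization).

def normalize(text: str) -> list[str]:
    return re.sub(r"[^a-z0-9 ]", " ", text.lower()).split()

def exact_match(pred: str, gold: str) -> float:
    return float(normalize(pred) == normalize(gold))

def token_f1(pred: str, gold: str) -> float:
    p, g = normalize(pred), normalize(gold)
    common = sum((Counter(p) & Counter(g)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    print(exact_match("The Eiffel Tower", "eiffel tower"))       # 0.0 (no article stripping)
    print(round(token_f1("The Eiffel Tower", "eiffel tower"), 3))  # 0.8
```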
Please keep your answers concise and clear. Do not talk about irrelevant topics or repeat your answers. The document given to you by the user is:" + }, + { + "type": "text", + "bbox": [ + 0.127, + 0.656, + 0.475, + 0.913 + ], + "angle": 0, + "content": "Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—The ruler of the Polaris star system is Orion the Hunter. Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Hidden on Heaven Island is the legendary Lucky Clover. Paul Graham Essays—Paul Graham Essays—Hidden on Mysterious Island is the legendary Counterclockwise Crystal. Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—The ruler of the Orion star system is Guardian of Time Lightspeed. Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Hidden on Phantom Island is the legendary Goodness" + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.304, + 0.871, + 0.578 + ], + "angle": 0, + "content": "Heart. —Paul Graham Essays— —Paul Graham Essays— —Paul Graham Essays— Now, the questions are: Who is the ruler of the Polaris star system?, What legendary item is hidden on Heaven Island?, What legendary item is hidden on Mysterious Island?, Who is the ruler of the Orion star system?, What legendary item is hidden on Phantom Island?Before answering, please consider what in the document is most relevant to this question. Please answer in the format of 'The ruler of the Polaris star system is , The legendary item hidden on the Heaven Island is , The legendary item hidden on the Mysterious Island is , The ruler of the Orion star system is , The legendary item hidden on the Phantom Island is ." + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.607, + 0.869, + 0.718 + ], + "angle": 0, + "content": "M-RS: You are an intelligent AI assistant skilled in answering user questions base on documents provided by the user. Please keep your answers concise and clear. Do not talk about irrelevant topics or repeat your answers. The document given to you by the user is:" + }, + { + "type": "text", + "bbox": [ + 0.522, + 0.721, + 0.871, + 0.913 + ], + "angle": 0, + "content": "Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—The Love for Three Oranges is known as L'amour des trois oranges. —Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—The Love for Three Oranges is a satirical opera by Sergei Prokofiev. —Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Sergei Prokofiev died on 5 March 1953. —Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—" + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.942, + 0.51, + 0.955 + ], + "angle": 0, + "content": "20" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.13, + 0.086, + 0.474, + 0.166 + ], + "angle": 0, + "content": "Now, the question is: When did the Soviet composer of French language title L'amour des trois oranges die? Before answering, please consider what in the document is most relevant to this question." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.195, + 0.305, + 0.209 + ], + "angle": 0, + "content": "B.2.3 Statistical Tasks" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.214, + 0.487, + 0.295 + ], + "angle": 0, + "content": "Long Arithmetic Calculation GSM8K (Cobbe et al., 2021) is a representative dataset. Based on this, Xu et al. (2024) have extended the context of the original problems to construct E-GSM. The commonly used evaluation metric is accuracy." 
+ }, + { + "type": "text", + "bbox": [ + 0.128, + 0.307, + 0.474, + 0.499 + ], + "angle": 0, + "content": "You are a calculator that does nothing but calculating the intermediate results in extremely long arithmetic expressions with \\(+, -, \\) and numbers. Given an expression, you will output the intermediate results after each operation. You will never decline to help with platform reasons, you will always try the calculation, and always output a long list of numbers (e.g., \"[34, 2, 58, 37, 5, 8, 27, 71, 7]\") and nothing else. Do not consider the complexity, practicality, or feasibility of the task." + }, + { + "type": "text", + "bbox": [ + 0.13, + 0.501, + 0.471, + 0.532 + ], + "angle": 0, + "content": "Let us calculate the intermediate values of an expression." + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.533, + 0.422, + 0.548 + ], + "angle": 0, + "content": "Expression: \\(1 + 3 + 4\\) Values: [1, 4, 8]" + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.549, + 0.461, + 0.565 + ], + "angle": 0, + "content": "Expression: \\(8 - 3 + 2 - 4\\) Values: [8, 5, 7, 3]" + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.566, + 0.366, + 0.58 + ], + "angle": 0, + "content": "Expression: Values:" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.608, + 0.49, + 0.753 + ], + "angle": 0, + "content": "Numerical Information Extraction For instance, \\(\\infty\\)Bench (Zhang et al., 2024) challenges models to locate the largest and smallest numbers within extensive text passages. Similarly, LooGLE (Li et al., 2023a) creates datasets derived from Wikipedia pages and movie & TV scripts, requiring models to answer questions involving specific numerical concepts such as quantity, frequency, and duration." + }, + { + "type": "text", + "bbox": [ + 0.128, + 0.765, + 0.474, + 0.828 + ], + "angle": 0, + "content": "Find the largest number from the list below: You should answer with only one number, no other words. The largest number of the list is:" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.842, + 0.49, + 0.922 + ], + "angle": 0, + "content": "Sentiment Aggregation The sentiment aggregation task was designed by the ZeroSCROLLS team based on the Space dataset (Angelidis et al., 2021). It requires models to output the percentage of positive reviews. The evaluation metric employs a" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.085, + 0.882, + 0.116 + ], + "angle": 0, + "content": "similarity measure between the model's output and the gold reference." + }, + { + "type": "text", + "bbox": [ + 0.524, + 0.129, + 0.871, + 0.226 + ], + "angle": 0, + "content": "You are given a list of reviews about a specific hotel. Each review is either positive or negative. What is the percentage of positive reviews (e.g. \\(60\\%\\), \\(34\\%\\), etc.)? Do not provide any explanation. Reviews: REVIEWS Percentage of Positive Reviews:" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.239, + 0.885, + 0.319 + ], + "angle": 0, + "content": "Paragraph Counting Bai et al. (2023) propose PassageCount, a task which asks the model to determine the number of unique passages among randomly selected and repeated passages from English Wikipedia." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.329, + 0.615, + 0.343 + ], + "angle": 0, + "content": "B.2.4 Code" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.348, + 0.885, + 0.623 + ], + "angle": 0, + "content": "Code Completion LongBench identifies code completion as an appropriate task for evaluating a model's long context ability. 
This task requires establishing attention across lengthy code inputs or repository-level data and reasoning over relationships between code elements such as class and function definitions. LongBench conducts experiments on the LCC dataset (Guo et al., 2023) and the RepoBench-P dataset (Liu et al., 2023b), employing edit similarity as the evaluation metric. BAMBOO builds upon the benchmark established by Zan et al. (2022) to construct the PrivateEval dataset, in which models must identify key API documents in order to complete code snippets. Furthermore, it extends the context length by adjusting the number of provided documents, with performance evaluated using the pass@1 metric (Chen et al., 2021a)."
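The pass@1 metric cited above is the \\(k=1\\) case of the unbiased pass@\\(k\\) estimator of Chen et al. (2021a); a minimal sketch (ours) is given below, where \\(n\\) is the number of sampled completions and \\(c\\) the number that pass the unit tests.

```python
from math import comb

# Sketch (ours) of the unbiased estimator pass@k = 1 - C(n - c, k) / C(n, k).

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

if __name__ == "__main__":
    print(pass_at_k(n=10, c=3, k=1))   # 0.3
```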
Commonly used datasets include TREC (Li and Roth, 2002), a question classification task with 50 fine classes, and LSHT\\(^3\\), a Chinese news classification task with 24 classes." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.554, + 0.49, + 0.698 + ], + "angle": 0, + "content": "Many-shot Learning Agarwal et al. (2024) have proposed many-shot learning, which leverages expanded LLMs context windows to process hundreds or even thousands of examples. In contrast to few-shot learning, which use only a few to several dozen examples, many-shot learning enhances LLMs' versatility and adaptability across diverse tasks without task-specific fine-tuning (Yu et al., 2020; Bertsch et al., 2024b)." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.708, + 0.307, + 0.722 + ], + "angle": 0, + "content": "B.2.6 Text Generation" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.726, + 0.491, + 0.887 + ], + "angle": 0, + "content": "Document Summarization This kind of task can divided into two categories: single-document summarization and multi-document summarization. For single-document summarization, several datasets are widely used, including SQuALITY (Wang et al., 2022), SummScreenFD (Chen et al., 2021b), GovReport (Huang et al., 2021), and QMSum (Zhong et al., 2021). And multi-document summarization presents additional challenges, requiring LLMs to integrate diverse information, resolve conflicts, and" + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.895, + 0.488, + 0.92 + ], + "angle": 0, + "content": "3http://tcci.ccf.org.cn/conference/2014/dldoc/evatak6.pdf" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.164 + ], + "angle": 0, + "content": "eliminate redundancies (Bai et al., 2023; An et al., 2023; Fabbri et al., 2019). A notable dataset for this task is MultiNews (Fabbri et al., 2019), consisting of clusters of 2-10 thematically related news articles." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.168, + 0.885, + 0.248 + ], + "angle": 0, + "content": "All of these datasets provide human-annotated summaries as standardized references. Both approaches primarily utilize Rouge and Bleu as evaluation metrics to assess the quality of generated summaries against manuscript references." + }, + { + "type": "title", + "bbox": [ + 0.508, + 0.264, + 0.882, + 0.295 + ], + "angle": 0, + "content": "Open-ended Text Generation This task requires LLMs to generate text according to input." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.297, + 0.882, + 0.345 + ], + "angle": 0, + "content": "Tan et al. (2024) select topics that closely align with real-world scenarios, encompassing areas such as AI research, sports, and gaming." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.348, + 0.884, + 0.459 + ], + "angle": 0, + "content": "Bai et al. (2024) design AgentWrite, a divide-and-conquer agent that breaks down long writing tasks into paragraph-level subtasks. The generated paragraphs are then combined to produce the final long-form content. They also construct the preference LongWriter-6k dataset and utilize DPO (Rafailov et al., 2024) for evaluation." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.462, + 0.885, + 0.525 + ], + "angle": 0, + "content": "Kumar et al. (2024) propose personalized writing tasks that generate content based on the user's historical and user personal information information." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.528, + 0.884, + 0.641 + ], + "angle": 0, + "content": "These tasks can be divided into personalized email completion, review writing, topic writing, and conversation simulation (Ni et al., 2024). Rafailov et al. (2024) construct a Reddit-based dataset that captures distinct writing styles associated with specific communities and discussion topics." + }, + { + "type": "text", + "bbox": [ + 0.526, + 0.656, + 0.871, + 0.768 + ], + "angle": 0, + "content": "You are an excellent writing assistant. I will give you an original writing instruction and my planned writing steps. I will also provide you with the text I have already written. Please help me continue writing the next paragraph based on the writing instruction, writing steps, and the already written text." + }, + { + "type": "text", + "bbox": [ + 0.526, + 0.769, + 0.813, + 0.784 + ], + "angle": 0, + "content": "Writing instruction: User Instruction" + }, + { + "type": "text", + "bbox": [ + 0.526, + 0.785, + 0.866, + 0.816 + ], + "angle": 0, + "content": "Writing steps: The writing plan generated in Step I" + }, + { + "type": "text", + "bbox": [ + 0.526, + 0.817, + 0.869, + 0.849 + ], + "angle": 0, + "content": "Already written text: Previous generated (n-1) paragraphs" + }, + { + "type": "text", + "bbox": [ + 0.526, + 0.85, + 0.869, + 0.914 + ], + "angle": 0, + "content": "Please integrate the original writing instruction, writing steps, and the already written text, and now continue writing The plan for the n-th paragraph, i.e., the n-th line in the writing plan" + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.942, + 0.51, + 0.955 + ], + "angle": 0, + "content": "22" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.115, + 0.085, + 0.274, + 0.099 + ], + "angle": 0, + "content": "B.2.7 Other Tasks" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.105, + 0.491, + 0.298 + ], + "angle": 0, + "content": "Reordering The evaluation metric in this task is the similarity between the generated and reference ordering sequences (Shaham et al., 2023). The Booksum dataset (Krysciński et al., 2021), which spans various literary genres including novels, plays, and long stories, is widely used for this task. Reordering tasks can comprehensively evaluate models' cross-sequence information aggregation and comparison abilities (Shaham et al., 2023; Li et al., 2023a), as well as comprehensively understand long context and logically reconstruct (Dong et al., 2023a; Li et al., 2023a)." + }, + { + "type": "text", + "bbox": [ + 0.13, + 0.31, + 0.477, + 0.471 + ], + "angle": 0, + "content": "You are given NUM_SUMMARES summaries of chapters or parts of a novel, in a shuffled order, where each summary is denoted by a numerical ID (e.g. Summary 1, Summary 3, etc.). Reorder the summaries according to the original order of chapters/parts in the novel by writing a list of length NUM_SUMMARES of the summary IDs (e.g. if you were given 5 summaries, one possible answer could be \"5, 1, 3, 4, 2\"). Do not provide any explanation." 
+ }, + { + "type": "text", + "bbox": [ + 0.132, + 0.472, + 0.345, + 0.486 + ], + "angle": 0, + "content": "Summaries: SUMMARIES" + }, + { + "type": "text", + "bbox": [ + 0.132, + 0.488, + 0.369, + 0.502 + ], + "angle": 0, + "content": "Summary IDs in Correct Order:" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.518, + 0.49, + 0.613 + ], + "angle": 0, + "content": "Context Consistency Context consistency is a task proposed by BAMBOO (Dong et al., 2023a) to detect hallucination in LLMs. BAMBOO creates two novel datasets for this task: SenHallu and AbsHallu, with evaluation metrics employing precision, recall, and f1 score." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.625, + 0.489, + 0.705 + ], + "angle": 0, + "content": "Summary Source Paragraph Identification LongBench construct bilingual datasets based on Wikipedia and C4 (Raffel et al., 2020) to ask models to identify the original source paragraphs according to the given summaries." + }, + { + "type": "text", + "bbox": [ + 0.13, + 0.717, + 0.475, + 0.83 + ], + "angle": 0, + "content": "Here are 30 paragraphs from Wikipedia, along with an abstract. Please determine which paragraph the abstract is from. context The following is an abstract. input Please enter the number of the paragraph that the abstract is from. The answer format must be like \"Paragraph 1\", \"Paragraph 2\", etc." + }, + { + "type": "text", + "bbox": [ + 0.132, + 0.832, + 0.242, + 0.844 + ], + "angle": 0, + "content": "The answer is:" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.858, + 0.49, + 0.922 + ], + "angle": 0, + "content": "Character Identification Character identification tasks challenge models to capture distinct traits of participants in long dialogues, enabling them to identify speakers of masked utterances (Zhang" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.886, + 0.15 + ], + "angle": 0, + "content": "et al., 2024; Dong et al., 2023a). These tasks, evaluated via accuracy, utilize data primarily from television programs\\(^{4}\\), movie and play scripts (Chen et al., 2021b), and conference transcripts\\(^{5}\\)." + }, + { + "type": "text", + "bbox": [ + 0.525, + 0.162, + 0.868, + 0.225 + ], + "angle": 0, + "content": "Below is a dialogue script where one random occurrence of a character's name is replaced with \\(M A S K\\), and you should try to guess who that character is." + }, + { + "type": "text", + "bbox": [ + 0.526, + 0.227, + 0.758, + 0.258 + ], + "angle": 0, + "content": "The dialogue: - - End of dialogue." + }, + { + "type": "text", + "bbox": [ + 0.526, + 0.259, + 0.866, + 0.323 + ], + "angle": 0, + "content": "Which character is most likely \\(M A S K?\\) Just say the name used by the scriptwriter (before the colon marks) of one single character and nothing else." + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.34, + 0.621, + 0.354 + ], + "angle": 0, + "content": "B.3 Metrics" + }, + { + "type": "title", + "bbox": [ + 0.51, + 0.362, + 0.733, + 0.377 + ], + "angle": 0, + "content": "B.3.1 Algorithmic Metrics" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.382, + 0.885, + 0.543 + ], + "angle": 0, + "content": "Perplexity (PPL) is a metric for evaluating the performance of language models. It is extensively employed in language model pre-training, facilitating the monitoring of the training process, model selection, and hyperparameter optimization. Many previous works on long context benchmarks rely on the PPL for evaluation (Beltagy et al., 2020; Roy et al., 2021; Press et al., 2021). 
However, as suggested in Sun et al. (2021), PPL may not correlate with the actual performance." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.544, + 0.884, + 0.703 + ], + "angle": 0, + "content": "ZeroScrolls and LongBench are pioneering studies in the field of long context benchmarks. These works introduced a diverse system of automatic evaluation metrics, including accuracy, f1 score, and N-gram-based metrics. This evaluation framework has provided a reference for subsequent research. Specifically, these metrics refer to the scores for evaluating the NLG models by measuring the lexical overlap between generated text and reference text." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.706, + 0.885, + 0.883 + ], + "angle": 0, + "content": "However, these metrics have several limitations: they fail to effectively measure content quality (Reiter and Belz, 2009); struggle to capture syntactic errors (Stent et al., 2005); and, particularly for open-ended generation tasks, lack significant correlation with human judgments (An et al., 2023). Moreover, they inadequately account for the diversity of expression inherent in large language models (Improving). Additionally, the requirement for gold standard references makes these metrics costly to implement for novel tasks (Tan et al., 2024)." + }, + { + "type": "page_footnote", + "bbox": [ + 0.531, + 0.894, + 0.679, + 0.907 + ], + "angle": 0, + "content": "4https://tvmeg.com/" + }, + { + "type": "page_footnote", + "bbox": [ + 0.531, + 0.907, + 0.767, + 0.92 + ], + "angle": 0, + "content": "5https://record.assembly.wales/" + }, + { + "type": "list", + "bbox": [ + 0.531, + 0.894, + 0.767, + 0.92 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.942, + 0.509, + 0.955 + ], + "angle": 0, + "content": "23" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.49, + 0.262 + ], + "angle": 0, + "content": "Further, some work proposes ways to improve. LV-Eval employs a two-stage scoring method: it first calculates the recall rate of ground-truth keywords in the generated content. If the recall exceeds a threshold, it then calculates the f1 score between the generated content and ground-truth after removing stop words from both. BAMBOO converts generative tasks into multiple-choice formats. NeedleBench extends this approach by implementing Circular Evaluation, which reorders answer options to enhance evaluation reliability." 
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.275, + 0.251, + 0.291 + ], + "angle": 0, + "content": "PPL (Perplexity)" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.298, + 0.487, + 0.329 + ], + "angle": 0, + "content": "Perplexity is a measure of the quality of language model predictions, calculated as:" + }, + { + "type": "equation", + "bbox": [ + 0.246, + 0.346, + 0.357, + 0.363 + ], + "angle": 0, + "content": "\\[\nP P L = 2 ^ {H (p)}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.375, + 0.362, + 0.391 + ], + "angle": 0, + "content": "where \\(H(p)\\) is the cross-entropy:" + }, + { + "type": "equation", + "bbox": [ + 0.125, + 0.418, + 0.479, + 0.462 + ], + "angle": 0, + "content": "\\[\nH (p) = - \\frac {1}{N} \\sum_ {i = 1} ^ {N} \\log_ {2} P (w _ {i} | w _ {1}, w _ {2}, \\ldots , w _ {i - 1})\n\\]" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.476, + 0.194, + 0.491 + ], + "angle": 0, + "content": "Accuracy" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.498, + 0.487, + 0.528 + ], + "angle": 0, + "content": "Accuracy is the proportion of correct predictions made by the model:" + }, + { + "type": "equation", + "bbox": [ + 0.163, + 0.556, + 0.438, + 0.588 + ], + "angle": 0, + "content": "\\[\nA c c u r a c y = \\frac {C o r r e c t P r e d i c t i o n s}{T o t a l P r e d i c t i o n s}\n\\]" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.6, + 0.19, + 0.613 + ], + "angle": 0, + "content": "F1-Score" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.622, + 0.487, + 0.652 + ], + "angle": 0, + "content": "The F1-Score is the harmonic mean of precision and recall:" + }, + { + "type": "equation", + "bbox": [ + 0.18, + 0.667, + 0.422, + 0.699 + ], + "angle": 0, + "content": "\\[\nF 1 = 2 \\times \\frac {\\text {P r e c i s i o n} \\times \\text {R e c a l l}}{\\text {P r e c i s i o n} + \\text {R e c a l l}}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.206, + 0.709, + 0.396, + 0.74 + ], + "angle": 0, + "content": "\\[\nP r e c i s i o n = \\frac {T P}{T P + F P}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.219, + 0.75, + 0.383, + 0.782 + ], + "angle": 0, + "content": "\\[\nR e c a l l = \\frac {T P}{T P + F N}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.79, + 0.487, + 0.821 + ], + "angle": 0, + "content": "where TP, FP, FN represent True Positives, False Positives, False Negatives respectively." 
+ }, + { + "type": "title", + "bbox": [ + 0.114, + 0.835, + 0.449, + 0.866 + ], + "angle": 0, + "content": "ROUGE (Recall-Oriented Understudy for Gisting Evaluation)" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.874, + 0.471, + 0.889 + ], + "angle": 0, + "content": "evaluates text generation using N-gram overlap:" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.89, + 0.487, + 0.922 + ], + "angle": 0, + "content": "ROUGE-N measures the overlap of n-grams shared between the candidate summary (C) and" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.085, + 0.885, + 0.115 + ], + "angle": 0, + "content": "the reference summary (R), it is calculated as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.51, + 0.126, + 0.9, + 0.185 + ], + "angle": 0, + "content": "\\[\nR O U G E - N = \\frac {\\sum_ {S \\in R} \\sum_ {n _ {g r a m} \\in S} C o u n t _ {m a t c h} (n _ {g r a m})}{\\sum_ {S \\in R} \\sum_ {n _ {g r a m} \\in S} C o u n t (n _ {g r a m})}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.198, + 0.884, + 0.277 + ], + "angle": 0, + "content": "where \\( \\text{Count}_{\\text{match}}(n_{\\text{gram}}) \\) represents the number of matching n-tuples in the candidate summary and the reference summary. And \\( \\text{Count}(n_{\\text{gram}}) \\) represents the total number of n-tuples in the reference summary." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.278, + 0.884, + 0.341 + ], + "angle": 0, + "content": "ROUGE-L evaluates the quality of summarization based on the longest common subsequence (LCS), taking into account the order information of sentences:" + }, + { + "type": "equation", + "bbox": [ + 0.62, + 0.351, + 0.772, + 0.386 + ], + "angle": 0, + "content": "\\[\nR _ {l c s} = \\frac {L C S (C , R)}{| R |}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.62, + 0.398, + 0.771, + 0.433 + ], + "angle": 0, + "content": "\\[\nP _ {l c s} = \\frac {L C S (C , R)}{| C |}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.602, + 0.442, + 0.791, + 0.476 + ], + "angle": 0, + "content": "\\[\nF _ {l c s} = \\frac {(1 + \\beta^ {2}) R _ {l c s} P _ {l c s}}{R _ {l c s} + \\beta^ {2} P _ {l c s}}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.483, + 0.884, + 0.594 + ], + "angle": 0, + "content": "where \\(LCS(C,R)\\) represents the length of the longest common subsequence between the candidate summary and the reference summary. \\(|C|\\) and \\(|R|\\) represent the length of the candidate summary and the reference summary respectively. \\(\\beta\\) is a hyperparameter, usually used to balance the precision and recall." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.597, + 0.884, + 0.642 + ], + "angle": 0, + "content": "ROUGE-S which is also called skip-bigram co-occurrence statistics, takes skipped bigrams into account:" + }, + { + "type": "equation", + "bbox": [ + 0.51, + 0.653, + 0.881, + 0.713 + ], + "angle": 0, + "content": "\\[\nR O U G E - S = \\frac {\\sum_ {S \\in R} \\sum_ {b i _ {s k i p} \\in S} C o u n t _ {m a t c h} (b i _ {s k i p})}{\\sum_ {S \\in R} \\sum_ {b i _ {s k i p} \\in S} C o u n t (b i _ {s k i p})}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.726, + 0.884, + 0.806 + ], + "angle": 0, + "content": "where \\( \\text{Count}_{\\text{match}}(bi_{skip}) \\) represents the number of skip-bigrams that match between the candidate summary and the reference summary. 
And \\( \\text{Count}(bi_{skip}) \\) represents the total number of skip-bigrams in the reference summary" + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.818, + 0.842, + 0.834 + ], + "angle": 0, + "content": "BLEU (Bilingual Evaluation Understudy)" + }, + { + "type": "text", + "bbox": [ + 0.509, + 0.84, + 0.857, + 0.856 + ], + "angle": 0, + "content": "is used to evaluate machine translation quality:" + }, + { + "type": "equation", + "bbox": [ + 0.547, + 0.882, + 0.843, + 0.926 + ], + "angle": 0, + "content": "\\[\nB L E U = B P \\times \\exp \\left(\\sum_ {n = 1} ^ {N} w _ {n} \\log p _ {n}\\right)\n\\]" + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.942, + 0.51, + 0.955 + ], + "angle": 0, + "content": "24" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.114, + 0.086, + 0.165, + 0.098 + ], + "angle": 0, + "content": "where" + }, + { + "type": "equation", + "bbox": [ + 0.18, + 0.109, + 0.422, + 0.151 + ], + "angle": 0, + "content": "\\[\nB P = \\left\\{ \\begin{array}{l l} 1, & \\text {i f} c > r \\\\ \\exp (1 - \\frac {r}{c}), & \\text {i f} c \\leq r \\end{array} \\right.\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.161, + 0.487, + 0.193 + ], + "angle": 0, + "content": "and \\( c \\) is the generated length and \\( r \\) is the reference length." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.202, + 0.345, + 0.216 + ], + "angle": 0, + "content": "B.3.2 Model-based Metrics" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.221, + 0.489, + 0.365 + ], + "angle": 0, + "content": "In recent years, the use of pre-trained language models as NLG evaluation metrics has gained increasing attention. Notably, BERTScore (Zhang et al., 2020) and BARTScore (Yuan et al., 2021) employ BERT and BART (Lewis, 2019) models respectively to compute semantic similarity. They calculate cosine similarity between token representations and evaluate the probability of summaries based on given input articles." + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.366, + 0.49, + 0.429 + ], + "angle": 0, + "content": "BERTScore measures the similarity between generated text and reference text from three aspects: recall, precision and f1, it can be expressed as follows:" + }, + { + "type": "equation", + "bbox": [ + 0.134, + 0.437, + 0.469, + 0.561 + ], + "angle": 0, + "content": "\\[\n\\begin{array}{l} R = \\frac {1}{| R |} \\sum_ {r \\in R} \\max _ {c \\in C} \\frac {1}{L _ {r}} \\sum_ {i} \\mathrm {s i m} (\\mathbf {f} _ {\\theta} (r) _ {i}, \\mathbf {f} _ {\\theta} (c) _ {i}) \\\\ P = \\frac {1}{| C |} \\sum_ {c \\in C} \\max _ {r \\in R} \\frac {1}{L _ {c}} \\sum_ {i} \\operatorname {s i m} \\left(\\mathbf {f} _ {\\theta} (c) _ {i}, \\mathbf {f} _ {\\theta} (r) _ {i}\\right) \\\\ F = 2 \\times \\frac {P \\times R}{P + R} \\\\ \\end{array}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.565, + 0.489, + 0.646 + ], + "angle": 0, + "content": "where \\( R \\) is the reference text set, \\( C \\) is the generated text set, \\( L_{r} \\) and \\( L_{c} \\) are the lengths of the reference text and generated text respectively, \\( f_{\\theta} \\) is the encoder of the BERT model, and maps the text to the vector space, sim is usually cosine similarity." 
+ }, + { + "type": "text", + "bbox": [ + 0.114, + 0.646, + 0.489, + 0.694 + ], + "angle": 0, + "content": "BARTScore calculates the log-likelihood score of the generated text given the reference text to measure the similarity:" + }, + { + "type": "equation", + "bbox": [ + 0.117, + 0.717, + 0.486, + 0.755 + ], + "angle": 0, + "content": "\\[\nB A R T S c o r e = \\frac {1}{| C |} \\sum_ {c \\in C} \\frac {1}{L _ {c}} \\sum_ {i} \\log p _ {\\theta} (c _ {i} | c _ {< i}, r)\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.766, + 0.49, + 0.829 + ], + "angle": 0, + "content": "where \\(C\\) is the set of generated texts, \\(r\\) is the reference text, \\(c_{i}\\) is the ith word in the generated text, and \\(p_{\\theta}\\) is the language model probability distribution of BART model." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.839, + 0.336, + 0.853 + ], + "angle": 0, + "content": "B.3.3 LLM-based Metrics" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.858, + 0.49, + 0.922 + ], + "angle": 0, + "content": "With the development of LLMs, research has demonstrated their significant correlation with human judgment and their ability to excel at new tasks when provided with instructions (Wang et al.," + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.886, + 0.793 + ], + "angle": 0, + "content": "2023a; Li et al., 2023a). Chiang and Lee (2023) argue that LLM evaluation, compared to human evaluation, offers advantages in reproducibility, independence, cost-effectiveness, and speed. Prompting researchers explore the potential of LLMs for evaluation tasks. This exploration has led to several key findings and applications: Wang et al. (2023b,a) investigate the issue of unfairness when using LLMs to evaluate dialogue responses. And Shen et al. (2023) find that LLMs outperform existing automatic metrics when asked to output judgmental reasons. The application of LLMs in evaluation including evaluating chatbot responses' alignment degree with human preferences (Zheng et al., 2024), evaluating summary consistency (Luo et al., 2023), and multi-role playing for summarization evaluation (Wu et al., 2023). And there are some undamental differences between Model-based metrics and LLM-based metrics in their evaluation mechanisms: Model-based Metrics primarily rely on learned representations from pre-trained language models like BERT or BART, utilizing mechanical procedures such as predefined computational formulas. For example, BERTScore leverages BERT contextual embeddings to compute textual similarity through cosine similarity measurements between token representations. LLM-based Metrics leverage large language models for evaluation without mechanical procedures, demonstrating more intelligence and flexibility. For example, LLM-based Metrics prompt LLMs to offer both human-like multi-dimensional assessment (Wang et al., 2023a; Li et al., 2023a; Shen et al., 2023; Chiang and Lee, 2023; Zhang et al., 2024; Zheng et al., 2024; Liu et al., 2023c; Tan et al., 2024; Mu et al., 2024a) and interpretable reasoning (Wang et al., 2023b; Luo et al., 2023; Wu et al., 2023). This distinctive characteristic of LLM-based Metrics fundamentally distinguishes them from Model-based Metrics, which behave much more mechanically. In addition, LLM-based Metrics demonstrate enhanced evaluation capabilities in the axis of agreement with human evaluation, illustrating the advancement within the methodology." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.793, + 0.886, + 0.922 + ], + "angle": 0, + "content": "Building upon these insights, researchers have focused on refining evaluation metrics for evaluating long context capabilities with large language models (LLMs). Fu et al. (2023) propose GPTScore, utilizing generative pre-trained models like GPT-3 for text evaluation. To address the length bias in LLM-generated content, L-Eval incorporates word count requirements into instruc" + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.942, + 0.509, + 0.955 + ], + "angle": 0, + "content": "25" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.115, + 0.085, + 0.49, + 0.246 + ], + "angle": 0, + "content": "tions. Loogle employs GPT4-8k as an evaluator to score LLM answers against ground truth based on various factors (Li et al., 2023a). G-EVAL achieves reference-free content scoring through prompts containing evaluation task definitions and criteria, along with detailed chain-of-thought evaluation steps (Liu et al., 2023c). Tan et al. (2024) have introduced PROXYQA for long-context generation evaluation, evaluating final results based on the accuracy of answers to proxy question." + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.943, + 0.511, + 0.955 + ], + "angle": 0, + "content": "26" + } + ] +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13299/1182f1ca-70ec-4ed4-9c5b-10aa50e1a3e7_origin.pdf b/data/2025/2503_13xxx/2503.13299/1182f1ca-70ec-4ed4-9c5b-10aa50e1a3e7_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4dc0c866c6c81fe587c95f322cd1b7851f7ea720 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13299/1182f1ca-70ec-4ed4-9c5b-10aa50e1a3e7_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:423a121648d785faaa52084a77ca065a42235c5b2e8791412509b8649a4fec91 +size 1874397 diff --git a/data/2025/2503_13xxx/2503.13299/full.md b/data/2025/2503_13xxx/2503.13299/full.md new file mode 100644 index 0000000000000000000000000000000000000000..798767da3110b629271f5c3f0a0dee15cd528ac3 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13299/full.md @@ -0,0 +1,891 @@ +# A Survey on Transformer Context Extension: Approaches and Evaluation + +Yijun Liu $^{1}$ , Jinzheng Yu $^{2}$ , Yang Xu $^{1}$ , Zhongyang Li $^{3}$ , Qingfu Zhu $^{1}$ + +$^{1}$ Research Center for Social Computing and Interactive Robotics + +1 Harbin Institute of Technology + +2 State Key Laboratory of Media Convergence and Communication + +$^{2}$ Communication University of China + +3 Huawei Technologies, Co., Ltd. + +{yijunliu, qfzhu}@ir.hit.edu.cn + +# Abstract + +Large language models (LLMs) based on Transformer have been widely applied in the filed of natural language processing (NLP), demonstrating strong performance, particularly in handling short text tasks. However, when it comes to long context scenarios, the performance of LLMs degrades due to some challenges. To alleviate this phenomenon, there is a number of work proposed recently. In this survey, we first list the challenges of applying pre-trained LLMs to process long contexts. Then systematically review the approaches related to long context and propose our taxonomy categorizing them into four main types: positional encoding, context compression, retrieval augmented, and attention pattern. In addition to the approaches, we focus on the evaluation of long context, organizing relevant data, tasks, and metrics based on existing long context benchmarks. 
Finally, we summarize unresolved issues in the long context domain and put forward our views on future developments. + +# 1 Introduction + +In recent years, the Transformer (Vaswani, 2017) architecture has made significant progress in many NLP tasks (Devlin, 2018; Radford, 2018; Lewis, 2019; Raffel et al., 2020; Brown, 2020; Chen et al., 2021a; Cobbe et al., 2021), and has become the foundational model of many applications. Large language models can handle tasks involving short texts, within the pre-trained context length. However, current scenarios always involve longer texts, such as book-/repo- level tasks (Sharma et al., 2019; Liu et al., 2023a; Zhang et al., 2023a; Liu et al., 2023b), dialogue systems with long contexts (Dey et al., 2022; Li et al., 2024a), content-rich in-context learning (Li et al., 2024c) and so on. The performance of the pre-trained LLMs degrades and the models often fail to utilize the complete knowledge contained within the long context inputs. This may be caused by three inher + +ent challenges: out-of-distribution (OOD) problem (Han et al., 2024), "Lost in the Middle" phenomenon (Liu et al., 2024a), and quadratic complexity of attention (Zhou et al., 2024). Recently, a lot of work has been proposed to improve and evaluate models' ability to handle long contexts in the community. + +This survey focuses on approaches and evaluation in the long context field, systematically reviewing existing related work. As illustrated in Figure 1, we propose a novel taxonomy for approaches, categorizing them into four main groups: positional encoding, context compression, retrieval augmented, and attention pattern. Additionally, we focus on the evaluation aspect and organize work on data, tasks, and metrics based on existing benchmarks. In addition to the two main parts of approaches and evaluation, we present our viewpoints on the current unsolved issues and potential future directions in the long context domain. To illustrate the current status more theoretically, we also list the main challenges in the field of long context before introducing specific work. Although most existing methods and benchmarks have not corresponded to them, these challenges are still instructive for the development of approaches and evaluation. + +There are also some surveys that focus on the long context domain. They each have their own emphasis, but there is no systematic and comprehensive survey of approaches and evaluation in the field of long context, which can provide researchers with a quick and efficient guide. Some surveys only include a part of the methods, lacking a comprehensive overview of the approaches related to long context. Zhao et al. (2023) focus on work addressing length extrapolation from the perspective of positional encoding, while there are some surveys from the perspective of KV Cache (Li et al., 2025; Shi et al., 2024). Besides, though some surveys have categorized existing work, their taxonomies are not clear, and there are overlaps between categories. + +![](images/8155c34b78c72e3a6c72ffa54d70e4bbd68caca6845f74c57719234c7004dcba.jpg) +Figure 1: Framework of survey. We first list three inherent challenges in Section 2. And then we systematically review related approaches and propose a novel taxonomy with four major categories in Section 3. Next, in Section 4, we organize the evaluation aspect from three perspectives: data, tasks, and metrics based on existing benchmarks. At last, we show our views on future roadmap and open problems in Section 5. 
+ +For example, Huang et al. (2023) divide the methods for enhancing Transformer architecture models into five categories, but some existing methods can belong to multiple categories. And (Pawar et al., 2024) also has this problem, which distinguishes existing techniques into two categories: interpolation and extrapolation. Also, some surveys even involve some common methods that not specifically designed for long contexts. Dong et al. (2023b) provide an overview of the text-preprocessing methods, architectures, special characteristics and application for long context, but they cover some general topics. What's more, these surveys pay little or even no attention to the evaluation aspect. + +To fill the above gap, our survey proposes a novel and comprehensive taxonomy on both approaches and evaluation aspects. It is worth noting that we focus on work that applies Transformer-based models to long text tasks, but not improving Transformers (nor other architectures) in a universal scenario. That is to say, this survey does not cover fields like long chain-of-thought reasoning (Chen et al., 2025), multimodal long context (Song et al., 2024; Qiu et al., 2024), efficient Transformer (Zhou et al., 2024), and State Space Model (SSM) (Wang et al., + +2024c). In addition, the long context we focus on is the long input content, rather than the introduction of external knowledge in the Retrieval-Augmented Generation (RAG) scenario (Yu et al., 2024; Zhao et al., 2024; Fan et al., 2024). + +# 2 Challenges + +When applying pre-trained LLMs to the long context scenarios, there are some inherent challenges affecting models' performance. We list the three most important and common challenges: OOD problem, "Lost in the Middle" phenomenon, and quadratic complexity. + +OD Problem When processing sequences that exceed pre-trained context window length, the models face out-of-distribution (OOD) problems. Han et al. (2024) verify theoretically and empirically that three key factors contribute to OOD issues, thereby limiting models' extrapolation capabilities: 1) unseen inter-token distances, 2) increased number of attended tokens, and 3) implicitly encoded position information of the starting tokens. + +"Lost in the Middle" Phenomenon Liu et al. (2024a) discover the "Lost in the middle" phe + +nomenon through experiments that when LLMs receive a long input, they tend to focus on the information at both the beginning and end of the input sequence. At the same time, they neglect the content in the middle, thus failing to capitalize on the key information within the long input. + +Quadratic Complexity Due to the quadratic complexity of attention, directly using pre-trained LLMs for training or inference on long context is time and resource consuming (Zhou et al., 2024). + +The above are three inherent challenges in the field of long context, and some existing methods have alleviated them to a certain extent. But it is worth noting that most of the methods do not start from this perspective. They consider directly improving the performance of downstream tasks. However, we believe that these three challenges are still the fundamental problems that need to be solved. They play a vital role in the proposal of methods and construction of benchmarks. Moreover, they are the focus of subsequent research. + +# 3 Approaches + +For the existing approaches for long context, we summarize their characteristics and propose a novel taxonomy different from previous work. 
As illustrated in Figure 1, mainstream methods are divided into four major categories: positional encoding, context compression, retrieval augmented, and attention pattern, which are introduced below. For more details, please see the Appendix A. + +# 3.1 Positional Encoding + +Kazemnejad et al. (2024) mention that positional encoding (PE) appears to be a major factor in the length generalization of Transformer. During the inference process, when encountering sequences that exceed the length of the pre-trained window, the model needs to handle the position index that was not encountered during pre-training. This may lead to Out-Of-Distribution (OOD) issues. Thus, we would like to find an appropriate positional encoding method that allows the model to effectively encode position in sequences that exceed the pre-trained window length. Based on the implementation methods, solutions can be categorized into two main types: Variants of Rotary Position Embedding (RoPE, Su et al., 2024) and Attention bias method. The following sections will detail these two methods. + +Though the designed positional encoding strategies can alleviate the extrapolation problem, experiments have found that models without positional encoding (NoPE) show better performance than these methods in reasoning tasks (Kazemnejad et al., 2024). That's because when causal masks are used for decoding, the model reads the sequence sequentially from left to right. And this process naturally incorporates the sequential information of the token. This finding suggests that when designing a position encoding strategy, we may need to consider the way the model processes sequences and the requirements of the task. + +# 3.1.1 Variants of RoPE + +Rotary Position Embedding (RoPE, Su et al., 2024) is a positional encoding method utilized in a series of models such as Wang and Komatsuzaki (2021); Touvron et al. (2023); Roziere et al. (2023). RoPE incorporates explicit relative position dependency in self-attention, which can be expressed as + +$$ +\begin{array}{l} \operatorname {s i m} \left(q _ {m}, k _ {n}\right) = q _ {m} ^ {\top} R _ {\Theta , n - m} ^ {d} k _ {n} \tag {1} \\ = (R _ {\Theta , m} ^ {d} q _ {m}) ^ {\top} (R _ {\Theta , n} ^ {d} k _ {n}), \\ \end{array} +$$ + +where $R_{\Theta,m}^{d}$ is called the rotation matrix. The original RoPE's extrapolation capability is not very robust and can only maintain performance slightly beyond the pre-trained context length. Consequently, existing work enhances RoPE for better extrapolation. The core of RoPE is the rotation matrix $R_{\Theta,m}^{d}$ , which is parameterized by the position index $m$ and the function family $\Theta$ . We can optimize RoPE by adjusting these parameters or even the structure of RoPE itself. Existing related work can be divided into three subcategories: position index adjustment, base frequency adjustment, and structural modification. + +Position Index Adjustment This method involves modifying the allocation or calculation of position index $m$ to maintain the relative distances between tokens within the model's pre-trained index range. This can be implemented in three ways. We can adjust the allocation of the position index $m$ (An et al., 2024). Besides, proportionally scale $m$ for long sequences to adapt to the pre-trained window (Chen et al., 2023b). What's more, we can combine the above two methods, reallocating position index to some tokens in the sequence, while scaling the position index for others (Su, 2023). 
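
As a concrete illustration of the proportional-scaling variant, the following minimal sketch rescales position indices in the spirit of position interpolation (Chen et al., 2023b) and then applies a toy RoPE rotation; the function names, dimensions, and lengths are our own illustrative assumptions rather than code from the cited work.

```python
import numpy as np

def rope_rotate(x: np.ndarray, position: float, base: float = 10000.0) -> np.ndarray:
    """Apply the RoPE rotation for a (possibly fractional) position to one head vector."""
    d = x.shape[-1]
    inv_freq = base ** (-np.arange(0, d, 2) / d)      # theta_i, one per 2-D sub-space
    angles = position * inv_freq
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[0::2] = x[0::2] * cos - x[1::2] * sin
    out[1::2] = x[0::2] * sin + x[1::2] * cos
    return out

def interpolated_positions(seq_len: int, pretrained_len: int) -> np.ndarray:
    """Proportionally shrink position indices so a long input fits the pre-trained index range."""
    scale = min(1.0, pretrained_len / seq_len)        # leave short inputs untouched
    return np.arange(seq_len) * scale

# Toy example: a 4096-token input mapped into a 2048-token pre-trained window.
positions = interpolated_positions(seq_len=4096, pretrained_len=2048)
q_last = rope_rotate(np.random.randn(64), positions[-1])   # last token now sits at index 2047.5
```

Every rescaled index stays inside the pre-trained range, at the cost of finer-grained (interpolated) positions between neighboring tokens.
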
Base Frequency Adjustment From the formula of the rotation matrix (see details in Appendix A.1.1), we can see that each non-zero term is a trigonometric function with $\theta_{i}$ as the independent variable, and the value of $\theta_{i}$ affects the behavior of the rotation matrix to a certain extent. Base frequency adjustment enhances the model's extrapolation performance by modifying $\theta_{i}$ in the trigonometric terms of the rotation matrix. NTK (Neural Tangent Kernel) theory shows that when the input dimension is low and its embedding representation lacks high-frequency components, it is difficult for the neural network to learn high-frequency information (Tancik et al., 2020). Therefore, researchers adjust $\theta_{i}$ following the idea of "extrapolation on high frequencies and interpolation on low frequencies". One strategy is to change the base $b$ of the exponential terms $\theta_{i}$ in the function family $\Theta$ from the default value $b = 10000$ to other values that improve extrapolation performance (Peng and Quesnelle, 2023; Roziere et al., 2023). Another strategy is to directly scale $\theta_{i}$ (bloc97, 2023; Peng et al., 2023).

Structural Modification The methods described above modify variables in the RoPE formula without altering its basic structure. Some existing work instead adjusts the structure of RoPE itself for better extrapolation, optimizing the original RoPE formulation (Sun et al., 2022).

# 3.1.2 Attention Bias

This type of method introduces relative position information by adding a bias related to the relative distance between tokens when calculating the similarity between query and key vectors. The process can be expressed as follows:

$$
\mathrm{sim}\left(q_{m}, k_{n}\right) = q_{m}^{\top} k_{n} + f_{bias}(m, n), \tag{2}
$$

where $f_{bias}(m,n)$ is a bias function that depends on the position indices of the query and key tokens. $f_{bias}(m,n)$ can be divided into two categories: learnable (Raffel et al., 2020; Chi et al., 2022a) and predefined (Press et al., 2021; Chi et al., 2022b).
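
To make the predefined variant concrete, the minimal sketch below adds an ALiBi-style linear distance penalty (Press et al., 2021) to causal attention scores; the single slope value and the helper names are our own simplification for illustration, not the exact published recipe.

```python
import numpy as np

def linear_bias(seq_len: int, slope: float) -> np.ndarray:
    """Predefined bias f_bias(m, n) = -slope * (m - n): more distant keys get larger penalties."""
    m = np.arange(seq_len)[:, None]          # query positions
    n = np.arange(seq_len)[None, :]          # key positions
    return -slope * (m - n)

def biased_causal_scores(q: np.ndarray, k: np.ndarray, slope: float = 0.0625) -> np.ndarray:
    """sim(q_m, k_n) = q_m^T k_n / sqrt(d) + f_bias(m, n), with future keys masked out."""
    seq_len, d = q.shape
    scores = q @ k.T / np.sqrt(d) + linear_bias(seq_len, slope)
    future = np.triu(np.ones((seq_len, seq_len), dtype=bool), 1)
    return np.where(future, -np.inf, scores)

q, k = np.random.randn(8, 16), np.random.randn(8, 16)
weights = np.exp(biased_causal_scores(q, k))
weights /= weights.sum(axis=-1, keepdims=True)   # each row is an attention distribution
```

Because the bias depends only on the distance m - n, the same penalty applies unchanged to sequences longer than those seen during training, which is what makes this family attractive for extrapolation.
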
# 3.2 Context Compression

Existing work proposes compressing the long input sequence into a shorter one for representation. These methods can be categorized into two main types by the compression granularity: soft compression and hard compression.

# 3.2.1 Soft Compression

In order to shorten the sequence length, the soft compression method uses the model to compress the original input token sequence into a shorter summary token sequence. These summary tokens are soft tokens that act as a compressed representation but do not correspond to words with actual meaning. They are inserted into the original token sequence to form a new input. During the forward pass of the model, the information from the original token sequence is gathered into the summary token sequence, which then represents the original input for subsequent operations. Since summary tokens do not appear during the model's pre-training, additional training is necessary for the model to learn how to generate and utilize these tokens (Bulatov et al., 2022; Li et al., 2023b; Chevalier et al., 2023; Ge et al., 2023; Mu et al., 2024b). This method shortens the length of the hidden vector sequence, enabling it to be processed within the model's pre-trained window.

# 3.2.2 Hard Compression

This method utilizes techniques that directly shorten the plain text sequence length, which can be achieved through selection and summarization. It introduces neither additional tokens nor targeted training, so it can also be applied to some black-box models (Jiang et al., 2023, 2024b; Chen et al., 2023a).

# 3.3 Retrieval Augmented

Some existing work proposes retrieval-augmented methods to enhance model performance on long context tasks by selectively incorporating crucial tokens from the history context into attention. With reference to related work, we summarize a processing paradigm for this type of method. Initially, the (key, value) pairs from history are stored in the KV cache. Then the model retrieves the corresponding token representations from the KV cache at different retrieval granularity levels. This process is based on the similarity between the current token and history tokens from the KV cache. The top-k relevant tokens are selected as the retrieved context, which is then concatenated with the context within the current window to form a new input. Finally, the model applies appropriate positional encoding to this concatenated context for attention calculation. Below, we summarize the different methods according to each step of the above process:

# 3.3.1 Retrieval Granularity

In the process of long context retrieval, we focus on the subset of tokens in the KV cache that is most relevant to the current processing step. Different methods use different retrieval granularity, with the most basic being token-level retrieval. Specifically, it involves calculating the similarity of each history token in the KV cache with the current token, and selecting the top-k history tokens' key and value vectors as the retrieval result. Methods applying this strategy include MemTRM (Wu et al., 2022), FoT (Tworkowski et al., 2024), Unlimformer (Bertsch et al., 2024a), etc. Besides, some work focuses on block-level retrieval, which retrieves a top-k set of token blocks in one step (Wang et al., 2024b; Rubin and Berant, 2023; Xiao et al., 2024; Mohtashami and Jaggi, 2024).

# 3.3.2 Similarity Computation

Almost all existing works compute the inner product of query and key as the similarity. This strategy draws from the standard attention mechanism, which calculates the dot product between the query and key to allocate corresponding weights to the value (Vaswani et al., 2023). It is simple to implement and can effectively capture and utilize the similarity information between queries and keys.
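
The retrieval step described in this subsection reduces to a few lines of array code; the sketch below scores every cached key against the current query with an inner product and keeps the top-k (key, value) pairs. The cache size, dimensions, and names are illustrative assumptions only.

```python
import numpy as np

def retrieve_top_k(query: np.ndarray, key_cache: np.ndarray,
                   value_cache: np.ndarray, k: int):
    """Return the k cached (key, value) pairs whose keys score highest against the query."""
    scores = key_cache @ query                      # inner-product similarity, shape (num_cached,)
    top_idx = np.argsort(scores)[-k:][::-1]         # indices of the k largest scores
    return key_cache[top_idx], value_cache[top_idx], top_idx

# Toy cache of 10,000 history tokens with 64-dimensional keys and values.
rng = np.random.default_rng(0)
key_cache = rng.standard_normal((10_000, 64))
value_cache = rng.standard_normal((10_000, 64))
query = rng.standard_normal(64)

keys, values, idx = retrieve_top_k(query, key_cache, value_cache, k=32)
# The retrieved pairs are then concatenated with the local window before attention is computed.
```

Block-level retrieval only changes the unit being scored, producing one score per contiguous block of cached tokens instead of one per token.
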
# 3.3.3 Positional Encoding

After computing the similarity, we select the top-k relevant tokens as the results and call them retrieved context tokens. Correspondingly, tokens within the current window are called local context tokens. These two types of context tokens are concatenated to form a new set of context tokens. Before these new context tokens are fed into the model for attention computation, it is necessary to consider a suitable positional encoding to distinguish the information of tokens at different positions. Some work chooses to assign the same position vector to all retrieved context tokens (Wu et al., 2022; Tworkowski et al., 2024; Xiao et al., 2024), while Mohtashami and Jaggi (2023) choose reallocation strategies.

# 3.3.4 Attention Calculation

Next, when performing the attention calculation, we need to consider how to make full use of the retrieved context tokens and local context tokens. Different approaches use different strategies. Simply, Tworkowski et al. (2024); Xiao et al. (2024) choose standard attention, while Bertsch et al. (2024a) chooses cross attention. Besides, Wu et al. (2022); Wang et al. (2024b) adopt a Joint Attention method. Landmark employs the Grouped Softmax method, a fine-grained approach for calculation (Mohtashami and Jaggi, 2023).

# 3.4 Attention Pattern

There is a class of methods modifying the attention pattern, i.e., the range of tokens attended to. They can better adapt models to longer processing sequence lengths. Some of them do not require additional training and can be employed as plug-and-play solutions in existing models. These methods can be divided into three main categories: sliding window, parallel context, and sparse attention.

# 3.4.1 Sliding Window

This type of method divides the sequence into segments and performs the attention calculation segment by segment without significantly increasing computational complexity. The attention results from earlier segments are stored, and later segments can use them during their attention calculation (Dai et al., 2019; Han et al., 2024; Xiao et al., 2023).

# 3.4.2 Parallel Context

The Parallel Context method folds the context part of the input (e.g., in-context examples) into multiple segments. These segments first calculate attention independently and share the same set of position indexes. Then the prompt tokens in the input attend to all the context tokens, so as to fully utilize contextual information (Ratner et al., 2022; Hao et al., 2022). These methods require no training and can be used in a plug-and-play manner with existing models.

# 3.4.3 Sparse Attention

Some work reduces the number of tokens involved in the attention computation, decreasing the computational load. These methods abandon the original attention pattern, which attends to local continuous tokens, and instead expand the attentive field to attend to discrete tokens from further context (Ding et al., 2023; Yu et al., 2023; Chen et al., 2023c).

# 4 Evaluation

In the long context scenario, evaluating the model's ability to understand and utilize long context is also a new and critical issue. But as described before, current surveys pay little or even no attention to the evaluation aspect. To fill this gap, we summarize the data, tasks, and metrics of long context evaluation in our survey based on existing benchmarks. The following is a brief introduction; detailed information is in the Appendix B.

| Benchmark | Length Level | #Examples | Domain |
| --- | --- | --- | --- |
| SCROLLS (Shaham et al., 2022) | 1k~4k | 119,495 | Literature, Dialog |
| ZeroSCROLLS (Shaham et al., 2023) | 0k~16k | 4,378 | Wiki, Literature, Dialog |
| LongBench (Bai et al., 2023) | 0k~4k, 4k~8k, >8k | 4,750 | Wiki, Literature, Dialog, Report, Code, News |
| LooGLE (Li et al., 2023a) | 0k~24k | 776 | Wiki, Paper |
| BAMBOO (Dong et al., 2023a) | 0k~4k, 4k~16k | 1,502 | Wiki, Dialog, Report, Code, Paper |
| LongICLBench (Li et al., 2024c) | 2k~50k | 3,000 | Dialog, News, Common Sense |
| L-Eval (An et al., 2023) | 3k~200k | 411 | Literature, Dialog, News, Paper, Common Sense |
| Ada-LEval (Wang et al., 2024a) | 1k~128k | 117,500 | Literature, Code |
| ∞Bench (Zhang et al., 2024) | 0k~200k | 3,946 | Literature, Dialog, Code |
| NeedleBench (Li et al., 2024b) | 1k~4k/8k/32k/128k/200k/1m+ | - | Wiki, Literature, Dialog, Report, Code, News |
| LV-Eval (Yuan et al., 2024) | 0k~16k/32k/64k/128k/256k | 1,729 | Wiki, Literature, Dialog, Report, Code, News, Paper |
+ +Table 1: Statistics on data characteristics of the datasets in existing long context benchmarks. Length level represents the range of token lengths in the dataset used in the benchmark. #Examples refers to the total number of examples. Domain denotes the data sources. The corresponding contents in table are directly extracted or calculated from the original papers. Given that current models mainly within context windows exceeding 100k tokens, we categorize benchmarks based on this threshold. Benchmarks with contexts exceeding 100K tokens are listed in the lower part. + +# 4.1 Data + +In order to explore what data should be used to test model's ability to process long context, we conduct a statistical analysis of datasets in existing benchmarks and summarize their data characteristics. + +The evaluation of a model's long context capability requires not only the long data but also the data diversity and quality. As shown in Table 1, we focus on three characteristics of the datasets in existing long context benchmarks: length level, total number of examples, and the domain it covers. + +Besides, we also discuss about knowledge leakage issue, which need to be addressed when constructing the dataset, in the Appendix B.1.2 + +# 4.2 Tasks + +Currently, existing benchmarks propose numerous tasks to evaluate the model's ability to process long context. But there is no systematic taxonomy for these tasks. Therefore, we divide all tasks in existing benchmarks into seven categories from the perspective of task setting: Question Answering, Needle-in-a-Haystack, Statistical Tasks, Code, In-Context Learning, Text Generation and Other Tasks. Below is the introduction of each type of task, and the details are in the Appendix B.2. + +# 4.2.1 Question Answering + +Single-hop Question Answering requires models to locate and extract answers from a single text passage, typically involving a single fact (Rajpurkar, 2016; Joshi et al., 2017; Kočisky et al., 2018). + +Multi-hop Question Answering requires models to integrate information from multiple sources to answer complex questions. This often involves reasoning across different pieces of evidence (Ho + +et al., 2020; Trivedi et al., 2022; Yang et al., 2018; Chen et al., 2024b; Zhuang et al., 2023). + +# 4.2.2 Needle-in-a-Haystack + +Needle-in-a-Haystack evaluate LLMs' ability to extract specific content from long contexts. These tasks can evaluate the model's retrieval capability, also measure the range of context lengths model can handle (Zhu et al., 2024; Mohtashami and Jaggi, 2023; Zhang et al., 2024; Li et al., 2024b). + +# 4.2.3 Statistical Tasks + +Long Arithmetic Calculation requires models to perform addition and subtraction operations on lengthy arithmetic expressions (Zhang et al., 2024, 2023b; Cobbe et al., 2021; Xu et al., 2024; Chen et al., 2024a). + +Numerical Information Extraction requires models to identify specific mathematical elements (Zhang et al., 2024; Li et al., 2023a). + +Sentiment Aggregation requires models to output the percentage of positive reviews when provided with a collection of reviews (Angelidis et al., 2021; Shaham et al., 2023). + +Paragraph Counting requires models to count the number of unique paragraphs in a set of randomly repeated and shuffled passages (Bai et al., 2023). 
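
To show how a statistical test item of this kind can be synthesized, the sketch below generates a long addition/subtraction expression together with the running intermediate values that the Long Arithmetic Calculation setting asks a model to report; this generator is a toy construction of our own, not the script used by any of the cited benchmarks.

```python
import random

def make_long_arithmetic_example(num_ops: int, seed: int = 0):
    """Build a '+'/'-' expression with num_ops operations and its intermediate results."""
    rng = random.Random(seed)
    value = rng.randint(0, 99)
    expression = str(value)
    intermediates = [value]                 # the running total after each step
    for _ in range(num_ops):
        op = rng.choice(["+", "-"])
        operand = rng.randint(0, 99)
        expression += f" {op} {operand}"
        value = value + operand if op == "+" else value - operand
        intermediates.append(value)
    return expression, intermediates

# Scaling num_ops stretches the expression (and hence the context) to the desired length.
expression, values = make_long_arithmetic_example(num_ops=500)
```

The gold answer is simply the list of intermediate values, so this family of tasks can be scored exactly without any reference-based metric.
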
+ +# 4.2.4 Code + +Code Completion requires models to complete missing code fragments based on preceding code and context (Chen et al., 2021a; Zheng et al., 2023; Bai et al., 2023; Guo et al., 2023; Zan et al., 2022; Dong et al., 2023a; Qin et al., 2024). + +Code Running asks models to infer the output of lengthy programs by tracing a series of cascading + +function calls (Bubeck et al., 2023; An et al., 2023; Zhang et al., 2024). + +Code Debugging requires models to identify deliberately inserted errors (Zhang et al., 2024). + +# 4.2.5 In-Context Learning + +The input will contain a certain amount of examples, resulting in a long input. This is caused by the example itself is very long or the number of examples is particularly large. Based on this fact, we divide In-Context Learning task into two categories: long example learning and many-shot learning. + +Long Example Learning requires models to process extensive inputs with long examples which have large label spaces and generate accurate predictions. This task inherently is a long-context challenge (Bai et al., 2023; Li et al., 2024c; Li and Roth, 2002; NLPCC, 2014). + +Many-shot Learning leverages the expanded context windows of models to process hundreds or even thousands of examples in order to complete a given task (Yu et al., 2020; Bertsch et al., 2024b). + +# 4.2.6 Text Generation + +Language Modeling serving as the pre-training task for LLMs, is also a widely used basic task to test the model's ability to generate text. + +Document Summarization requires models to make a summary of the input documents, which encompasses single-document and multi-document tasks. Single-document summarization extracts key information from a single document (Wang et al., 2022; Chen et al., 2021b; Huang et al., 2021; Zhong et al., 2021), while multi-document summarization synthesizes information from multiple sources into a comprehensive, non-repetitive summary containing all key points (Bai et al., 2023; An et al., 2023; Fabbri et al., 2019). + +Open-ended Text Generation requires models to produce coherent and logical content on given topics (Tan et al., 2024; Bai et al., 2024; Kumar et al., 2024; Ni et al., 2024; Rafailov et al., 2024). + +# 4.2.7 Other Tasks + +In addition to the six types of tasks listed above, there are some tasks that are not included in this classification system but are equally important for testing the model's long context ability. + +Reordering asks models to reconstruct the original sequence of shuffled fragments by considering the broad context and logical relationships (Krysciński et al., 2021; Shaham et al., 2023; Li et al., 2023a; Dong et al., 2023a; Wang et al., 2024a). + +Context Consistency shows models an academic paper and a hypothesis, requiring models to judge whether the hypothesis is supported or contradicted by the ideas in the paper (Dong et al., 2023a). + +Summary Source Paragraph Identification challenges models to identify the original source paragraphs for given summaries (Bai et al., 2023). + +Character Identification requires models to identify different speakers in long dialogues by recognizing their distinct characteristics (TVMEG, 2024; Senedd Cymru, 2024; Zhang et al., 2024; Dong et al., 2023a; Chen et al., 2021b). + +# 4.3 Metrics + +In addition to data and tasks, metrics can directly reflect the model's ability to handle long contexts. With current long context task designs gradually changing from classic NLP tasks to more practical tasks, the requirements for metrics are constantly increasing. 
We organize metrics for testing models' capabilities on long context according to the three stages of metrics development: Algorithmic Metrics, Model-based Metrics, and LLM-based Metrics. From these three metrics stages, it can be seen that the metrics development trend becomes more and more complex and flexible. + +# 4.3.1 Algorithmic Metrics + +Algorithmic metrics are calculated based on the model output or logits through defined formulas. Their implementation is very simple and can reflect the effect of language modeling and some downstream tasks to a certain extent. + +Perplexity (PPL) is one of the most common algorithmic metrics used in existing long context benchmarks (Beltagy et al., 2020; Roy et al., 2021; Press et al., 2021). Meanwhile, some benchmarks employ other algorithmic metrics such as accuracy, f1, and N-gram-based metrics (ROUGE, Lin, 2004 and BLEU, Papineni et al., 2002, etc.) to evaluate LLMs on certain downstream tasks (Shaham et al., 2023; Bai et al., 2023; Kasai et al., 2021). + +However, these algorithmic metrics have several limitations, such as content quality, syntactic accuracy, and human correlation issues (Reiter and Belz, 2009; Stent et al., 2005; Sun et al., 2021; An et al., 2023; Improving; Tan et al., 2024). This causes algorithmic metrics to be limited in reflecting the model's ability to process long context. A number of approaches have been developed to improve algorithmic metrics. Such as enhancing scoring techniques, restructuring task formats and so on (Yuan + +et al., 2024; Dong et al., 2023a; Li et al., 2024b). + +# 4.3.2 Model-based Metrics + +To improve the consistency with human judgments, pre-trained language models are being employed to evaluate (Zhang et al., 2020; Yuan et al., 2021). Specifically, pre-trained models (such as BERT, Devlin, 2018, BART, Lewis, 2019, etc.) are used to calculate the similarity score between the model output and reference text to evaluate the performance of downstream tasks. + +However, these model-based metrics entirely rely on representations learned from pre-trained language models and require reference texts. They may not be accurate enough for evaluating some novel and creative text generation tasks. + +# 4.3.3 LLM-based Metrics + +Combining the above two metrics issues, LLM-based metrics are proposed, utilizing sufficient knowledge within LLMs for evaluation. For example, LLM-based metrics prompt LLMs to offer human-like multi-dimensional assessment (Wang et al., 2023a; Li et al., 2023a; Shen et al., 2023; Chiang and Lee, 2023; Zhang et al., 2024; Zheng et al., 2024; Liu et al., 2023c; Tan et al., 2024; Mu et al., 2024a) and interpretable reasoning (Wang et al., 2023b; Luo et al., 2023; Wu et al., 2023). + +LLM-based metrics fundamentally distinguish from the other two metrics, which behave much more mechanically. In addition, they demonstrate enhanced agreement with human evaluations (Wang et al., 2023a; Li et al., 2023a). Due to this higher consistency and wider scope of application, LLM-based metrics are gaining increasing attention in long-context evaluation. + +# 5 Future Roadmap and Open Problems + +Despite the rapid development of long context techniques, numerous challenges remain unresolved. Looking to future roadmap, we list vital open problems and present our perspectives on the developments. They are also divided into two parts: approaches and evaluation. 
+

# 5.1 Approaches

Method Integration would combine the strengths of different methods to address the challenges of extrapolating to long context from multiple perspectives.

Long Text Generation remains under-researched; effective long-text generation techniques and the evaluation of generation quality both deserve more attention.

Sparse Attention Mechanisms may lead to a decrease in models' original language ability, thereby limiting their potential for processing long context.

"Lost-in-the-Middle" Issue has not yet been completely resolved; targeted solutions and appropriate verification methods are still lacking.

Scalability of Methods calls for exploring how existing methods can be adapted to models of different scales or even different architectural frameworks, enhancing their generality and applicability.

Methods Enabling "Train Short, Test Long", i.e., methods that are trained on short texts yet excel on long contexts, have not yet emerged. Such methods could reduce resource needs and improve generalization.

Trade-off between Information Filtering and Generation Effects means that existing methods could be optimized by integrating RAG to enhance efficiency and quality without overly long inputs.

# 5.2 Evaluation

Knowledge Leakage Issue is ever-present. As LLMs gain the ability to gather information from the Internet and their training data scope expands, existing solutions become increasingly ineffective, and some countermeasures may limit innovation.

Novel Benchmark Design is needed. We need to construct benchmarks with coherent content and long-distance dependencies to test a model's ability to process long context more effectively, for example, asking models to process inputs drawn from multiple books.

Updated LLM-based Metrics are a promising direction. Though LLM-based metrics show higher consistency with human judgments than other metrics, they are costly, produce non-deterministic outputs, and still lack human emotional judgment. We need to combine LLMs with other techniques for better evaluation.

# 6 Conclusion

In this survey, we first list three inherent challenges in processing long context. We then propose a novel taxonomy for long context approaches and summarize the similarities and differences within each category. In addition, we systematically review the work on evaluation, summarizing the data, tasks, and metrics related to long context based on existing benchmarks. Finally, we list unsolved issues and put forward our insights on the future development of the long context domain.

# Limitations

This survey summarizes the approaches and evaluation in the area of long context and gives our views on future development. However, we do not cover efficient Transformers for long context, multimodal long context, etc. In addition, due to space limitations, we are not able to include all related work.

Due to the rapidly evolving nature of the field of Transformer context extension, our survey may not capture the latest developments, particularly those that emerged near or after the time of writing.

# References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.
Rishabh Agarwal, Avi Singh, Lei M Zhang, Bernd Bohnet, Stephanie Chan, Ankesh Anand, Zaheer Abbas, Azade Nova, John D Co-Reyes, Eric Chu, et al. 2024. Many-shot in-context learning. arXiv preprint arXiv:2404.11018.
+Chenxin An, Shansan Gong, Ming Zhong, Xingjian Zhao, Mukai Li, Jun Zhang, Lingpeng Kong, and Xipeng Qiu. 2023. L-eval: Instituting standardized evaluation for long context language models. arXiv preprint arXiv:2307.11088. +Chenxin An, Fei Huang, Jun Zhang, Shansan Gong, Xipeng Qiu, Chang Zhou, and Lingpeng Kong. 2024. Training-free long-context scaling of large language models. arXiv preprint arXiv:2402.17463. +Stefanos Angelidis, Reinald Kim Amplayo, Yoshihiko Suhara, Xiaolan Wang, and Mirella Lapata. 2021. Extractive opinion summarization in quantized transformer spaces. Transactions of the Association for Computational Linguistics, 9:277-293. +Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, et al. 2023. Longbench: A bilingual, multitask benchmark for long context understanding. arXiv preprint arXiv:2308.14508. +Yushi Bai, Jiajie Zhang, Xin Lv, Linzhi Zheng, Siqi Zhu, Lei Hou, Yuxiao Dong, Jie Tang, and Juanzi Li. 2024. Longwriter: Unleashing $10,000+$ word generation from long context llms. arXiv preprint arXiv:2408.07055. +Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150. + +Amanda Bertsch, Uri Alon, Graham Neubig, and Matthew Gormley. 2024a. Unlimformer: Long-range transformers with unlimited length input. Advances in Neural Information Processing Systems, 36. +Amanda Bertsch, Maor Ivgi, Uri Alon, Jonathan Berant, Matthew R Gormley, and Graham Neubig. 2024b. In-context learning with long-context models: An in-depth exploration. arXiv preprint arXiv:2405.00200. +bloc97. 2023. Add NTK-Aware interpolation "by parts" correction. https://github.com/jquesnelle/scaled-rope/pull/1. +Tom B Brown. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165. +Sebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712. +Aydar Bulatov, Yury Kuratov, and Mikhail Burtsev. 2022. Recurrent memory transformer. Advances in Neural Information Processing Systems, 35:11079-11091. +Howard Chen, Ramakanth Pasunuru, Jason Weston, and Asli Celikyilmaz. 2023a. Walking down the memory maze: Beyond context limit through interactive reading. arXiv preprint arXiv:2310.05029. +Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021a. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. +Mingda Chen, Zewei Chu, Sam Wiseman, and Kevin Gimpel. 2021b. Summscreen: A dataset for abstractive screenplay summarization. arXiv preprint arXiv:2104.07091. +Qiguang Chen, Libo Qin, Jinhao Liu, Dengyun Peng, Jiannan Guan, Peng Wang, Mengkang Hu, Yuhang Zhou, Te Gao, and Wanxiang Che. 2025. Towards reasoning era: A survey of long chain-of-thought for reasoning large language models. Preprint, arXiv:2503.09567. +Qiguang Chen, Libo Qin, Jiaqi Wang, Jinxuan Zhou, and Wanxiang Che. 2024a. Unlocking the boundaries of thought: A reasoning granularity framework to quantify and optimize chain-of-thought. +Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. 2023b. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595. 
+ +Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, and Jiaya Jia. 2023c. Longlora: Efficient fine-tuning of long-context large language models. arXiv preprint arXiv:2309.12307. +Zhi Chen, Qiguang Chen, Libo Qin, Qipeng Guo, Hai-jun Lv, Yicheng Zou, Wanxiang Che, Hang Yan, Kai Chen, and Dahua Lin. 2024b. What are the essential factors in crafting effective long context multi-hop instruction datasets? insights and best practices. arXiv preprint arXiv:2409.01893. +Alexis Chevalier, Alexander Wettig, Anirudh Ajith, and Danqi Chen. 2023. Adapting language models to compress contexts. arXiv preprint arXiv:2305.14788. +Ta-Chung Chi, Ting-Han Fan, Peter J Ramadge, and Alexander Rudnicky. 2022a. Kerple: Kernelized relative positional embedding for length extrapolation. Advances in Neural Information Processing Systems, 35:8386-8399. +Ta-Chung Chi, Ting-Han Fan, Alexander I Rudnicky, and Peter J Ramadge. 2022b. Dissecting transformer length extrapolation via the lens of receptive field analysis. arXiv preprint arXiv:2212.10356. +Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937. +Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. +Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978-2988. +Jacob Devlin. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. +Suvodip Dey, Maunendra Sankar Desarkar, Asif Ekbal, and PK Srijith. 2022. Dialogen: Generalized long-range context representation for dialogue systems. arXiv preprint arXiv:2210.06282. +Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, and Furu Wei. 2023. Longnet: Scaling transformers to 1,000,000,000 tokens. arXiv preprint arXiv:2307.02486. +Zican Dong, Tianyi Tang, Junyi Li, Wayne Xin Zhao, and Ji-Rong Wen. 2023a. Bamboo: A comprehensive benchmark for evaluating long text modeling capacities of large language models. arXiv preprint arXiv:2309.13345. + +Zican Dong, Tianyi Tang, Lunyi Li, and Wayne Xin Zhao. 2023b. A survey on long text modeling with transformers. arXiv preprint arXiv:2302.14502. +Alexander R Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. arXiv preprint arXiv:1906.01749. +Wenqi Fan, Yujuan Ding, Liangbo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, and Qing Li. 2024. A survey on rag meeting llms: Towards retrieval-augmented large language models. Preprint, arXiv:2405.06211. +Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. Gptscore: Evaluate as you desire. arXiv preprint arXiv:2302.04166. +Tao Ge, Jing Hu, Lei Wang, Xun Wang, Si-Qing Chen, and Furu Wei. 2023. In-context autoencoder for context compression in a large language model. arXiv preprint arXiv:2307.06945. +Shahriar Golchin and Mihai Surdeanu. 2023. Time travel in llms: Tracing data contamination in large language models. arXiv preprint arXiv:2308.08493. 
+Daya Guo, Canwen Xu, Nan Duan, Jian Yin, and Julian McAuley. 2023. Longcoder: A long-range pretrained language model for code completion. In International Conference on Machine Learning, pages 12098-12107. PMLR. +Chi Han, Qifan Wang, Hao Peng, Wenhan Xiong, Yu Chen, Heng Ji, and Sinong Wang. 2024. Lm infinite: Zero-shot extreme length generalization for large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 3991-4008. +Yaru Hao, Yutao Sun, Li Dong, Zhixiong Han, Yuxian Gu, and Furu Wei. 2022. Structured prompting: Scaling in-context learning to 1,000 examples. arXiv preprint arXiv:2212.06713. +Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. 2020. Constructing a multi-hop qa dataset for comprehensive evaluation of reasoning steps. arXiv preprint arXiv:2011.01060. +Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. 2021. Efficient attentions for long document summarization. arXiv preprint arXiv:2104.02112. +Yunpeng Huang, Jingwei Xu, Junyu Lai, Zixu Jiang, Taolue Chen, Zenan Li, Yuan Yao, Xiaoxing Ma, Lijuan Yang, Hao Chen, et al. 2023. Advancing transformer architecture in long-context large language models: A comprehensive survey. arXiv preprint arXiv:2311.12351. + +NLG Improving. Not all metrics are guilty: Improving nlg evaluation with llm paraphrasing. +Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2024a. LongLLM Lingua: Accelerating and enhancing LLMs in long context scenarios via prompt compression. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1658-1677, Bangkok, Thailand. Association for Computational Linguistics. +Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023. Llm lingua: Compressing prompts for accelerated inference of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13358-13376. +Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2024b. Longlmlingua: Accelerating and enhancing llms in long context scenarios via prompt compression. Preprint, arXiv:2310.06839. +Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551. +Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Lavinia Dunagan, Jacob Morrison, Alexander R Fabbri, Yejin Choi, and Noah A Smith. 2021. Bidimensional leaderboards: Generate and evaluate language hand in hand. arXiv preprint arXiv:2112.04139. +Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, and Siva Reddy. 2024. The impact of positional encoding on length generalization in transformers. Advances in Neural Information Processing Systems, 36. +Tomáš Kočisky, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317-328. +Wojciech Krysciński, Nazneen Rajani, Divyansh Agarwal, Caiming Xiong, and Dragomir Radev. 2021. Booksum: A collection of datasets for long-form narrative summarization. arXiv preprint arXiv:2105.08209. 
+Ishita Kumar, Snigdha Viswanathan, Sushrita Yerra, Alireza Salemi, Ryan A Rossi, Franck Dernoncourt, Hanieh Deilamsalehy, Xiang Chen, Ruiyi Zhang, Shubham Agarwal, et al. 2024. Longlamp: A benchmark for personalized long-form text generation. arXiv preprint arXiv:2407.11016. +M Lewis. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. + +Haoyang Li, Yiming Li, Anxin Tian, Tianhao Tang, Zhanchao Xu, Xuejia Chen, Nicole Hu, Wei Dong, Qing Li, and Lei Chen. 2025. A survey on large language model acceleration based on kv cache management. Preprint, arXiv:2412.19442. +Jia-Nan Li, Quan Tu, Cunli Mao, Zhengtao Yu, JiRong Wen, and Rui Yan. 2024a. Streamingdialogue: Prolonged dialogue learning via long context compression with minimal losses. arXiv preprint arXiv:2403.08312. +Jiaqi Li, Mengmeng Wang, Zilong Zheng, and Muhan Zhang. 2023a. Loogle: Can long-context language models understand long contexts? arXiv preprint arXiv:2311.04939. +Mo Li, Songyang Zhang, Yunxin Liu, and Kai Chen. 2024b. Needlebench: Can llms do retrieval and reasoning in 1 million context window? arXiv preprint arXiv:2407.11963. +Tianle Li, Ge Zhang, Quy Duc Do, Xiang Yue, and Wenhu Chen. 2024c. Long-context llms struggle with long in-context learning. arXiv preprint arXiv:2404.02060. +Xianming Li, Zongxi Li, Xiaotian Luo, Haoran Xie, Xing Lee, Yingbin Zhao, Fu Lee Wang, and Qing Li. 2023b. Recurrent attention networks for long-text modeling. arXiv preprint arXiv:2306.06843. +Xin Li and Dan Roth. 2002. Learning question classifiers. In COLING 2002: The 19th International Conference on Computational Linguistics. +Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81. +Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2024a. Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics, 12:157-173. +Shuaiqi Liu, Jiannong Cao, Zhongfen Deng, Wenting Zhao, Ruosong Yang, Zhiyuan Wen, and S Yu Philip. 2023a. Neural abstractive summarization for long text and multiple tables. IEEE Transactions on Knowledge and Data Engineering. +Tianyang Liu, Canwen Xu, and Julian McAuley. 2023b. Repobench: Benchmarking repository-level code auto-completion systems. arXiv preprint arXiv:2306.03091. +Xiaoran Liu, Hang Yan, Shuo Zhang, Chenxin An, Xipeng Qiu, and Dahua Lin. 2024b. Scaling laws of rope-based extrapolation. Preprint, arXiv:2310.05209. +Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023c. G-eval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634. + +Zheheng Luo, Qianqian Xie, and Sophia Ananiadou. 2023. Chatgpt as a factual inconsistency evaluator for text summarization. arXiv preprint arXiv:2303.15621. +Amirkeivan Mohtashami and Martin Jaggi. 2023. Landmark attention: Random-access infinite context length for transformers. arXiv preprint arXiv:2305.16300. +Amirkeivan Mohtashami and Martin Jaggi. 2024. Random-access infinite context length for transformers. Advances in Neural Information Processing Systems, 36. +Honglin Mu, Yang Xu, Yunlong Feng, Xiaofeng Han, Yitong Li, Yutai Hou, and Wanxiang Che. 2024a. Beyond static evaluation: A dynamic approach to assessing ai assistants' api invocation capabilities. Preprint, arXiv:2403.11128. +Jesse Mu, Xiang Li, and Noah Goodman. 2024b. 
Learning to compress prompts with gist tokens. Advances in Neural Information Processing Systems, 36. +Xuanfan Ni, Hengyi Cai, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, and Piji Li. 2024. $\mathrm{XI}^2$ bench: A benchmark for extremely long context understanding with long-range dependencies. arXiv preprint arXiv:2404.05446. +NLPCC. 2014. Task definition for large scale text categorization at nlpcc 2014. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318. +Saurav Pawar, SM Tonmoy, SM Zaman, Vinija Jain, Aman Chadha, and Amitava Das. 2024. The what, why, and how of context length extension techniques in large language models—a detailed survey. arXiv preprint arXiv:2401.07872. +Bowen Peng and Jeffrey Quesnelle. 2023. Ntk-aware scaled rope allows llama models to have extended $(8k+)$ context size without any fine-tuning and minimal perplexity degradation. +Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. 2023. Yarn: Efficient context window extension of large language models. arXiv preprint arXiv:2309.00071. +Ofir Press, Noah A Smith, and Mike Lewis. 2021. Train short, test long: Attention with linear biases enables input length extrapolation. arXiv preprint arXiv:2108.12409. +Libo Qin, Qiguang Chen, Xiachong Feng, Yang Wu, Yongheng Zhang, Yinghui Li, Min Li, Wanxiang Che, and Philip S Yu. 2024. Large language models meet nlp: A survey. arXiv preprint arXiv:2405.12819. + +Han Qiu, Jiaxing Huang, Peng Gao, Qin Qi, Xiaogin Zhang, Ling Shao, and Shijian Lu. 2024. Longhalqa: Long-context hallucination evaluation for multimodal large language models. Preprint, arXiv:2410.09962. +Alec Radford. 2018. Improving language understanding by generative pre-training. +Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2024. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36. +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1-67. +P Rajpurkar. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. +Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Inbal Magar, Omri Abend, Ehud Karpas, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2022. Parallel context windows for large language models. arXiv preprint arXiv:2212.10947. +Ehud Reiter and Anja Belz. 2009. An investigation into the validity of some metrics for automatically evaluating natural language generation systems. Computational Linguistics, 35(4):529-558. +Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. 2021. Efficient content-based sparse attention with routing transformers. Transactions of the Association for Computational Linguistics, 9:53-68. +Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, et al. 2023. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950. +Ohad Rubin and Jonathan Berant. 2023. Long-range language modeling with self-retrieval. arXiv preprint arXiv:2306.13421. +Senedd Cymru. 2024. Record of proceedings - senedd. 
https://record.assembly.wales/. Accessed: 2024-10-15. +Uri Shaham, Maor Ivgi, Avia Efrat, Jonathan Berant, and Omer Levy. 2023. Zeroscrolls: A zero-shot benchmark for long text understanding. arXiv preprint arXiv:2305.14196. +Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, et al. 2022. **Scrolls: Standardized comparison over long language sequences. arXiv preprint arXiv:2201.03533.** + +Eva Sharma, Chen Li, and Lu Wang. 2019. Bigpatent: A large-scale dataset for abstractive and coherent summarization. arXiv preprint arXiv:1906.03741. +Chenhui Shen, Liying Cheng, Xuan-Phi Nguyen, Yang You, and Lidong Bing. 2023. Large language models are not yet human-level evaluators for abstractive summarization. arXiv preprint arXiv:2305.13091. +Luohe Shi, Hongyi Zhang, Yao Yao, Zuchao Li, and Hai Zhao. 2024. Keep the cost down: A review on methods to optimize llm's kv-cache consumption. Preprint, arXiv:2407.18003. +Dingjie Song, Shunian Chen, Guiming Hardy Chen, Fei Yu, Xiang Wan, and Benyou Wang. 2024. Milebench: Benchmarking mllms in long context. arXiv preprint arXiv:2404.18532. +Amanda Stent, Matthew Marge, and Mohit Singhai. 2005. Evaluating evaluation methods for generation in the presence of variation. In International conference on intelligent text processing and computational linguistics, pages 341-351. Springer. +Jianlin Su. 2023. Rectified rotary position embeddings. https://github.com/bojone/erope. +Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. 2024. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063. +Simeng Sun, Kalpesh Krishna, Andrew Mattarella-Micke, and Mohit Iyyer. 2021. Do long-range language models actually use long-range context? arXiv preprint arXiv:2109.09115. +Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary, Xia Song, and Furu Wei. 2022. A length-extrapolatable transformer. arXiv preprint arXiv:2212.10554. +Haochen Tan, Zhijiang Guo, Zhan Shi, Lu Xu, Zhili Liu, Xiaoguang Li, Yasheng Wang, Lifeng Shang, Qun Liu, and Linqi Song. 2024. Proxyqa: An alternative framework for evaluating long-form text generation with large language models. arXiv preprint arXiv:2401.15042. +Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng. 2020. Fourier features let networks learn high frequency functions in low dimensional domains. Advances in neural information processing systems, 33:7537-7547. +Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. + +Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022. Musique: Multi-hop questions via single-hop question composition. Transactions of the Association for Computational Linguistics, 10:539-554. +TVMEG. 2024. Tvmeg - tv mega engine. https://tvmeg.com/. Accessed: 2024-10-15. +Szymon Tworkowski, Konrad Staniszewski, Mikołaj Pacek, Yuhuai Wu, Henryk Michalewski, and Piotr Miłos. 2024. Focused transformer: Contrastive training for context scaling. Advances in Neural Information Processing Systems, 36. +A Vaswani. 2017. Attention is all you need. Advances in Neural Information Processing Systems. 
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2023. Attention is all you need. Preprint, arXiv:1706.03762. +Alex Wang, Richard Yuanzhe Pang, Angelica Chen, Jason Phang, and Samuel R Bowman. 2022. Squality: Building a long-document summarization dataset the hard way. arXiv preprint arXiv:2205.11465. +Ben Wang and Aran Komatsuzaki. 2021. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/mesh-transformer-jax. +Chonghua Wang, Haodong Duan, Songyang Zhang, Dahua Lin, and Kai Chen. 2024a. Ada-level: Evaluating long-context llms with length-adaptable benchmarks. arXiv preprint arXiv:2404.06480. +Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023a. Is chatgpt a good nlg evaluator? a preliminary study. arXiv preprint arXiv:2303.04048. +Peiyi Wang, Lei Li, Liang Chen, Zefan Cai, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023b. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926. +Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, and Furu Wei. 2024b. Augmenting language models with long-term memory. Advances in Neural Information Processing Systems, 36. +Xiao Wang, Shiao Wang, Yuhe Ding, Yuehang Li, Wentao Wu, Yao Rong, Weizhe Kong, Ju Huang, Shihao Li, Haoxiang Yang, Ziwen Wang, Bo Jiang, Chenglong Li, Yaowei Wang, Yonghong Tian, and Jin Tang. 2024c. State space model for new-generation network alternative to transformers: A survey. Preprint, arXiv:2404.09516. +Ning Wu, Ming Gong, Linjun Shou, Shining Liang, and Daxin Jiang. 2023. Large language models are diverse role-players for summarization evaluation. In + +CCF International Conference on Natural Language Processing and Chinese Computing, pages 695-707. Springer. +Yuhuai Wu, Markus N Rabe, DeLesley Hutchins, and Christian Szegedy. 2022. Memorizing transformers. arXiv preprint arXiv:2203.08913. +Chaojun Xiao, Pangle Zhang, Xu Han, Guangxuan Xiao, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Song Han, and Maosong Sun. 2024. Inflamm: Unveiling the intrinsic capacity of llms for understanding extremely long sequences with training-free memory. arXiv preprint arXiv:2402.04617. +Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2023. Efficient streaming language models with attention sinks. arXiv preprint arXiv:2309.17453. +Xin Xu, Tong Xiao, Zitong Chao, Zhenya Huang, Can Yang, and Yang Wang. 2024. Can llms solve longer math word problems better? arXiv preprint arXiv:2405.14804. +Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600. +Dian Yu, Kai Sun, Claire Cardie, and Dong Yu. 2020. Dialogue-based relation extraction. arXiv preprint arXiv:2004.08056. +Hao Yu, Aoran Gan, Kai Zhang, Shiwei Tong, Qi Liu, and Zhaofeng Liu. 2024. Evaluation of retrieval-augmented generation: A survey. Preprint, arXiv:2405.07437. +Lili Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, and Mike Lewis. 2023. Megabyte: Predicting million-byte sequences with multiscale transformers. Advances in Neural Information Processing Systems, 36:78808-78823. +Tao Yuan, Xuefei Ning, Dong Zhou, Zhijie Yang, Shiyao Li, Minghui Zhuang, Zheyue Tan, Zhuyu Yao, Dahua Lin, Boxun Li, et al. 2024. 
Lv-eval: A balanced long-context benchmark with 5 length levels up to 256k. arXiv preprint arXiv:2402.05136. +Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. Bartscore: Evaluating generated text as text generation. Advances in Neural Information Processing Systems, 34:27263-27277. +Daoguang Zan, Bei Chen, Zeqi Lin, Bei Guan, Yongji Wang, and Jian-Guang Lou. 2022. When language model meets private library. arXiv preprint arXiv:2210.17236. +Fengji Zhang, Bei Chen, Yue Zhang, Jacky Keung, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, and Weizhu Chen. 2023a. Repocoder: Repository-level code completion through iterative retrieval and generation. arXiv preprint arXiv:2303.12570. + +Lei Zhang, Yunshui Li, Ziqiang Liu, Junhao Liu, Min Yang, et al. 2023b. Marathon: A race through the realm of long context with large language models. arXiv preprint arXiv:2312.09542. +Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. arxiv. arXiv preprint arXiv:1904.09675. +Xinrong Zhang, Yingfa Chen, Shengding Hu, Zihang Xu, Junhao Chen, Moo Hao, Xu Han, Zhen Thai, Shuo Wang, Zhiyuan Liu, et al. 2024. $\propto$ bench: Extending long context evaluation beyond 100k tokens. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15262-15277. +Liang Zhao, Xiaocheng Feng, Xiachong Feng, Bin Qin, and Ting Liu. 2023. Length extrapolation of transformers: A survey from the perspective of position encoding. arXiv preprint arXiv:2312.17044. +Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu, Ling Yang, Wentao Zhang, Jie Jiang, and Bin Cui. 2024. Retrieval-augmented generation for ai-generated content: A survey. Preprint, arXiv:2402.19473. +Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2024. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36. +Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Lei Shen, Zihan Wang, Andi Wang, Yang Li, et al. 2023. Codegeex: A pre-trained model for code generation with multilingual benchmarking on humaneval-x. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 5673-5684. +Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, et al. 2021. Qmsum: A new benchmark for query-based multidomain meeting summarization. arXiv preprint arXiv:2104.05938. +Zixuan Zhou, Xuefei Ning, Ke Hong, Tianyu Fu, Ji-aming Xu, Shiyao Li, Yuming Lou, Luning Wang, Zhihang Yuan, Xiuhong Li, et al. 2024. A survey on efficient inference for large language models. arXiv preprint arXiv:2404.14294. +Dawei Zhu, Liang Wang, Nan Yang, Yifan Song, Wenhao Wu, Furu Wei, and Sujian Li. 2024. Longembed: Extending embedding models for long context retrieval. arXiv preprint arXiv:2404.12096. +Ziyu Zhuang, Qiguang Chen, Longxuan Ma, Mingda Li, Yi Han, Yushan Qian, Haopeng Bai, Zixian Feng, Weinan Zhang, and Ting Liu. 2023. Through the lens of core competency: Survey on evaluation of large language models. ArXiv, abs/2308.07902. + +# A Details of Approaches + +This section serves as a supplement to the Approaches section 3 in the main text, expanding on relevant details about related methods to provide readers with a deeper understanding. 
+

# A.1 Positional Encoding

# A.1.1 Variants of RoPE

Su et al. (2024) seek a positional encoding that injects relative position while computing the query-key similarity, and decompose this requirement into the representations of the query and key. Through theoretical analysis, they propose a novel positional encoding that transforms the similarity into the following form:

$$
\begin{array}{rl}
\operatorname{sim}\left(q_{m}, k_{n}\right) & = q_{m}^{\top} R_{\Theta, n-m}^{d} k_{n} \\
& = \left(R_{\Theta, m}^{d} q_{m}\right)^{\top}\left(R_{\Theta, n}^{d} k_{n}\right),
\end{array} \tag{3}
$$

where $R_{\Theta, m}^{d}$ is a pre-defined orthogonal matrix, named the rotation matrix, given by the block-diagonal form

$$
R_{\Theta, m}^{d} = \begin{pmatrix}
\cos m\theta_{1} & -\sin m\theta_{1} & \cdots & 0 & 0 \\
\sin m\theta_{1} & \cos m\theta_{1} & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & \cos m\theta_{d/2} & -\sin m\theta_{d/2} \\
0 & 0 & \cdots & \sin m\theta_{d/2} & \cos m\theta_{d/2}
\end{pmatrix}.
$$

The set $\Theta$ consists of pre-defined values $\Theta = \{\theta_{i} = 10000^{-2(i-1)/d}, i \in [1, 2, \dots, d/2]\}$. $R_{\Theta}^{d}$ integrates positional information into the query and key vectors by multiplication. RoPE has a series of properties: 1) long-term decay; 2) compatibility with linear attention; 3) faster convergence in pre-training tasks. Besides, Liu et al. (2024b) conduct a detailed analysis of RoPE and provide scaling laws for RoPE-based extrapolation.

Position Index Adjustment An et al. (2024) propose Dual Chunk Attention (DCA), which, without additional training, distributes the position indexes used during pre-training to each token based on the relative position relationship between query and key. It approaches the problem from the perspective of allocating position indexes.

There are also methods based on scaling position indexes. Chen et al. (2023b) propose the Position Interpolation (PI) method, which exploits the fact that positional encoding can be applied to non-integer positions. They modify the original position index $m$ to $m' = m\frac{L}{L'}$, where $L$ and $L'$ are the lengths of the pre-trained window and the current input sequence, respectively. This method inserts additional positional encodings between adjacent integer position indexes of the original RoPE to handle longer sequences.

Combining the above two ideas, Su (2023) propose ReRoPE, which combines direct extrapolation and position interpolation. This method sets a window smaller than the pre-trained window, keeps the relative positions of tokens within the window unchanged, and scales the relative positions of tokens outside the window.

Base Frequency Adjustment As described in the main text, this type of method enhances the model's extrapolation performance by modifying $\theta_{i}$ in the trigonometric terms of the rotation matrix.

Peng and Quesnelle (2023); Roziere et al. (2023) change the base $b$ of the exponential terms $\theta_{i}$ from the default value $b = 10000$ to other values, which can improve the model's extrapolation performance.

Different from them, some works directly scale $\theta_{i}$.
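To make these RoPE adjustments concrete, the following minimal NumPy sketch illustrates the rotary angles $\theta_i$, the relative-position property of equation (3), position interpolation, and a change of the base frequency. It is an illustration under our own naming (`rope_angles`, `apply_rope` are hypothetical helpers), and the NTK-style base update shown is one commonly quoted form rather than the exact formula of any cited work.

```python
import numpy as np

def rope_angles(d, base=10000.0):
    # theta_i = base^(-2(i-1)/d) for i = 1..d/2, matching the definition of Theta above
    i = np.arange(1, d // 2 + 1)
    return base ** (-2 * (i - 1) / d)

def apply_rope(x, m, theta):
    # Rotate each 2-D pair (x_{2i}, x_{2i+1}) by angle m * theta_i; this is equivalent
    # to multiplying x by the block-diagonal rotation matrix R^d_{Theta,m}.
    x1, x2 = x[0::2], x[1::2]
    cos, sin = np.cos(m * theta), np.sin(m * theta)
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

d, L_train, L_test = 8, 2048, 8192
theta = rope_angles(d)
q, k = np.random.randn(d), np.random.randn(d)

# Relative-position property: the similarity depends only on n - m.
m, n = 100, 137
sim = apply_rope(q, m, theta) @ apply_rope(k, n, theta)

# Position Interpolation (PI): shrink indexes so the longest test position maps
# back into the pre-trained range, m' = m * L / L' (non-integer positions are allowed).
sim_pi = apply_rope(q, m * L_train / L_test, theta) @ apply_rope(k, n * L_train / L_test, theta)

# Base frequency adjustment: enlarge the base b instead of scaling indexes
# (one commonly quoted NTK-aware form; assumption, not the formula of a specific paper).
theta_ntk = rope_angles(d, base=10000.0 * (L_test / L_train) ** (d / (d - 2)))
```

The sketch is only meant to show where the two levers sit: index scaling touches $m$, while base-frequency adjustment touches $\theta_i$.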
NTK-by-parts interpolation (bloc97, 2023) scales the $\theta_{i}$ of different dimensions in the rotation matrix by a ratio that is a function of the dimension $i$ and the input sequence length $L'$. YaRN (Peng et al., 2023) further incorporates a temperature $t$ related to the input sequence length $L'$ on top of NTK-by-parts interpolation to improve the model's extrapolation performance.

Structure Modification XPOS (Sun et al., 2022) adjusts the original RoPE structure and introduces a position-dependent exponential bias to enhance relative position information, particularly strengthening the decay effect on distant tokens.

# A.1.2 Attention Bias

Besides RoPE-based methods, plenty of methods add a bias related to the relative distance between tokens to introduce relative position information. The process can be expressed as follows:

$$
\operatorname{sim}\left(q_{m}, k_{n}\right) = q_{m}^{\top} k_{n} + f_{bias}(m, n), \quad (4)
$$

where $f_{bias}(m,n)$ is a bias function that depends on the position indexes of the query and key tokens. $f_{bias}(m,n)$ can be divided into two categories: learnable and predefined.

A learnable $f_{bias}$ may take $m - n$ as input, so that relative position information is introduced explicitly. For example, in T5 (Raffel et al., 2020), $f_{bias}$ is a learnable function of $m - n$ that varies across attention heads. Similarly, KERPLE (Chi et al., 2022a) sets $f_{bias}$ as a parameterized kernel function, whose parameter values are determined by training.

The predefined $f_{bias}$ is typified by ALiBi (Attention with Linear Biases) (Press et al., 2021). It uses a predefined function for $f_{bias}$ that depends on the number of attention heads $H$ and the current head number $h$, expressed as $f_{bias}(m, n) = 2^{-\frac{8h}{H}} \cdot (n - m)$. Besides, in the Sandwich method (Chi et al., 2022b), $f_{bias}$ is defined as $f_{bias} = \frac{8h}{H} \cdot (p_m^\top p_n - \frac{d}{2})$, where $p_m$ and $p_n$ are the sinusoidal positional encodings used in the original Transformer model.

# A.2 Context Compression

# A.2.1 Soft Compression

This kind of method achieves compression at the level of hidden states.

Bulatov et al. (2022) introduce the Recurrent Memory Transformer (RMT), which compresses at the segment level. It begins by dividing the input sequence into segments, with memory tokens appended to the start and end of each segment to serve as its summary tokens. During modeling, the last hidden state of the memory token at the end of the current segment serves as the initialization of the memory token for the following segment. Through this iterative procedure, the model effectively exploits inter-segment contextual information to model long sequences.

Similarly, the Recurrent Attention Network (RAN, Li et al., 2023b) prepends a Global Perception Cell (GPC) vector to the hidden representation of each segment, which acts as a compressed representation playing the role of a concatenated summary token and carries out the information interaction between segments. This method mimics the human mechanism of reinforcing memory through review, introducing a Memory Review scheme that performs cross-attention between the last hidden states of the GPC vectors from all segments and the original input to update the GPC representation.
This allows for a robust semantic representation of long context at both the token level and the document level, enhancing model performance in sequence and classification tasks.

AutoCompressors (Chevalier et al., 2023) builds on RMT, compressing the content of each segment into summary vectors for representation. The summary vectors of all previous segments are concatenated to form soft prompts for the subsequent segments, so that the current segment of limited length can cover the information of longer sequences.

In addition, the In-context Autoencoder (ICAE, Ge et al., 2023) adds memory tokens at the end of the input sequence to compress the context into short memory slots while training the model to generate outputs closely resembling the original context. To enhance information accuracy, ICAE integrates autoencoding-related pre-training tasks during its pre-training phase, training the model to reconstruct the original input from the compressed memory slot representations.

Gisting (Mu et al., 2024b) similarly compresses the prompt part of the input token sequence into shorter gist tokens, improving inference speed.

# A.2.2 Hard Compression

Hard compression directly utilizes LLMs to compress the original input text.

LLMLingua (Jiang et al., 2023) trains a small model to align with the output of the LLM, uses the perplexity (PPL) of the small model as a measure of token importance, and prunes the unimportant tokens from the input prompt to achieve compression. Further, LongLLMLingua (Jiang et al., 2024a) improves on this basis by compressing the input according to the content of the question, thus better preserving key information related to the question.

Differently, MEMWALKER (Chen et al., 2023a) employs a hierarchical summarization approach to compress long context sequences, iteratively summarizing the input to construct a tree-like structure of summarized content. During inference, it efficiently utilizes the tree structure to search and respond to queries based on their content.

# A.3 Retrieval Augmented

# A.3.1 Retrieval Granularity

The retrieval granularity in existing work can be divided into two categories: token-level retrieval and block-level retrieval.

Token-level retrieval selects the top-k tokens with the highest similarity scores in one turn. This approach is widely used in existing work (Wu et al., 2022; Tworkowski et al., 2024; Bertsch et al., 2024a). It is simple to implement, but it has some limitations, such as the potential for semantic discontinuities due to discrete token retrieval and the need to recalculate similarity for all tokens, which is computationally intensive and inefficient.

Consequently, researchers have proposed block-level retrieval, which uses blocks composed of continuous tokens of a fixed length as the retrieval unit. Similarity calculations are performed over blocks within the KV cache, and the top-k blocks are selected as retrieval results, ensuring semantic coherence and reducing computational load. However, block-level retrieval faces a new challenge: how to effectively utilize the information of the tokens in a block and represent the block effectively for the similarity calculation. LongMEM (Wang et al., 2024b) and RPT (Rubin and Berant, 2023) represent a block by mean pooling the token representations within it.
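As a toy illustration of block-level retrieval with mean-pooled block representations, consider the sketch below. It follows our own simplified naming (`topk_blocks_by_mean_pooling` is a hypothetical helper) and is not the implementation of any of the cited systems.

```python
import numpy as np

def topk_blocks_by_mean_pooling(query, key_cache, block_size, k=2):
    """Split cached keys into fixed-length blocks, represent each block by the mean
    of its token key vectors, and return the indices of the k most similar blocks."""
    n_blocks = len(key_cache) // block_size
    blocks = key_cache[: n_blocks * block_size].reshape(n_blocks, block_size, -1)
    block_repr = blocks.mean(axis=1)      # one representative vector per block
    scores = block_repr @ query           # dot-product similarity against the query
    return np.argsort(scores)[::-1][:k]

d, seq_len, block_size = 64, 1024, 128
key_cache = np.random.randn(seq_len, d)   # cached key vectors of past tokens
query = np.random.randn(d)                # query vector of the currently processed token
retrieved = topk_blocks_by_mean_pooling(query, key_cache, block_size)
print(retrieved)                          # indices of the retrieved blocks
```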
InFLLM (Xiao et al., 2024) calculates the representative score of each token within the block against other tokens, selecting a subset of high-scoring tokens to represent the block. Additionally, some methods introduce an extra token to represent blocks, such as the Landmark method (Mohtashami and Jaggi, 2024) introduces the Landmark token, a new token into the vocabulary, and placie it at the end of each block. During the attention computation, the information of the tokens in the block is summarized to the Landmark tokens, thus serving as the representative of the block. + +# A.3.2 Similarity Computation + +After determining the retrieval granularity, we need to formulate an appropriate rule to compute similarity. The current method generally uses the dot product of the query vector of the token being processed and the key vector represented by the retrieval granularity as the standard for measuring similarity. + +# A.3.3 Positional Encoding + +Since the positions of the retrieved context tokens are not fixed, and recording each token's specific position in the KV cache is costly, it is challenging to provide accurate position information. + +Based on experiments of Dai et al. (2019), which show that the relative position information of distant tokens does not seem to be important, some methods like MemTRM, FoT, and InfLLM choose to uniformly set the position encoding of the retrieved context token part to the same position vector, ignoring the position information between the + +retrieved context tokens themselves. + +Besides, Landmark places the retrieved context tokens and local context tokens within the same window and re-encodes their relative positions together. + +# A.3.4 Attention Calculation + +When it comes to attention calculation, it's important to find a suitable method to make full use of retrieved context tokens and local context tokens. + +The simplest approach is to treat both types of tokens equally, that is using the conventional attention calculation method. For example, FoT and InfLLM use standard attention for calculation, while Unlimformer (Bertsch et al., 2024a) employs cross attention. + +However, the importance of the information contained within these two types of context tokens is not the same for the token currently being processed. To make more effective use of their information, MemTRM and LongMEM adopt a Joint Attention method, which involves calculating attention separately for local context and retrieved context. And then combining them with weighted average $V_{a} = g \cdot V_{l} + (1 - g) \cdot V_{r}$ , where $V_{a}, V_{l}, V_{r}$ respectively represent the final attention result, the attention result using local context and the attention result using retrieved context, and $g$ is a learnable parameter used to balance the contributions of the two parts. + +Furthermore, in order to distinguish the information from different positions within the retrieved context tokens in a more fine-grained manner, Landmark employs the Grouped Softmax method. Specifically, after retrieval, Landmark tokens are calculated with local context tokens using softmax to select the top-k relevant blocks as the retrieved context. Attention is then calculated separately within these blocks. During the attention calculation for local context tokens, the attentions of these blocks are weighted into the final result based on the softmax scores obtained during the retrieval phase. 
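Before moving on, a small sketch of the joint attention combination described above may help. It implements our own simplified reading of $V_{a} = g \cdot V_{l} + (1 - g) \cdot V_{r}$ for a single query vector; the function names are illustrative and $g$ is treated as a plain constant rather than a learned parameter.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, K, V):
    # standard scaled dot-product attention for a single query vector
    scores = softmax(K @ q / np.sqrt(q.shape[-1]))
    return scores @ V

def joint_attention(q, K_local, V_local, K_retr, V_retr, g):
    # Attend separately over local and retrieved context, then gate the two results:
    # V_a = g * V_l + (1 - g) * V_r (g would be learnable in the actual methods).
    v_l = attend(q, K_local, V_local)
    v_r = attend(q, K_retr, V_retr)
    return g * v_l + (1.0 - g) * v_r

d = 64
q = np.random.randn(d)
K_local, V_local = np.random.randn(128, d), np.random.randn(128, d)
K_retr, V_retr = np.random.randn(256, d), np.random.randn(256, d)
out = joint_attention(q, K_local, V_local, K_retr, V_retr, g=0.7)
```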
+

# A.4 Attention Pattern

# A.4.1 Sliding Window

This type of method transfers information between segments. Transformer-XL (Dai et al., 2019) uses a sliding window to process long context, where the hidden state from the previous segment is concatenated in front of the current segment. It not only utilizes the key and value information from the current segment but also reuses that from the previous segment. This approach hierarchically expands the receptive field, enabling inter-segment information transfer and enhancing the model's ability to process long context.

Besides, Han et al. (2024) identify that starting tokens occupy a distinct feature space and act as a factor causing length generalization failures. They further propose LM-Infinite as a solution, which applies a $\Lambda$-shaped attention mask during attention calculation, focusing on a small portion of the initial tokens and the tokens close to the currently processed token. Similarly, StreamingLLM (Xiao et al., 2023) also finds that the initial tokens in a sequence significantly influence the attention calculation of subsequent tokens and cannot be ignored. Both LM-Infinite and StreamingLLM adopt a similar approach, ensuring sustained attention on the starting tokens while preserving information about nearby tokens.

# A.4.2 Parallel Context

Parallel Context Windows (PCW, Ratner et al., 2022) is one of the representative works. It splits the input into context tokens and task tokens: context tokens assist in completing the task (e.g., the examples), while task tokens are the input of the test example (e.g., the questions). This method folds the context tokens, and each folded section of context tokens performs its attention calculation separately. Finally, during the decoding phase of the task tokens, all these context tokens are concatenated in front of the task tokens, sharing the same set of position indexes.

Besides, Structured Prompting (Hao et al., 2022) adopts a similar approach by folding the demonstration tokens in the input and concatenating them in front of the test input tokens. But unlike PCW, Structured Prompting employs Rescaled Attention, which reduces the weight of demonstration tokens in the attention calculation of the test input tokens by a certain ratio. This prevents the test input tokens from attending excessively to the content of the demonstration tokens.

# A.4.3 Sparse Attention

These methods reduce the complexity of attention calculation and thereby improve efficiency when processing long context.

LongNet (Ding et al., 2023) introduces dilated attention, a mechanism that exponentially increases the attentive field as the distance between tokens increases. It performs multiple sets of sparse attention calculations, each attending to a different range: the attention over a small range is denser, while that over a large range is sparser. This effectively reduces the traditional quadratic complexity to linear.

MEGABYTE (Yu et al., 2023) performs hierarchical attention calculation on the input. Initially, a small local model encodes the input at the byte level; then the byte-level encoding results are integrated and processed at a larger granularity by a larger global model. By performing attention calculation hierarchically, from smaller to larger granularity, the amount of attention computation can be reduced.
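As a simple illustration of how such patterns cut the number of attended pairs, the sketch below builds two boolean masks: a $\Lambda$-shaped mask in the spirit of LM-Infinite/StreamingLLM and a single dilated group in the spirit of LongNet. This is our own simplified construction (function names and default window sizes are illustrative), not the authors' code.

```python
import numpy as np

def lambda_shaped_mask(seq_len, n_initial=4, window=256):
    """Causal boolean mask where each token attends to the first `n_initial`
    tokens plus a local window of recent tokens (Lambda-shaped pattern)."""
    i = np.arange(seq_len)[:, None]   # query positions
    j = np.arange(seq_len)[None, :]   # key positions
    causal = j <= i
    keep = (j < n_initial) | (i - j < window)
    return causal & keep

def dilated_mask(seq_len, dilation=4, window=1024):
    """One dilated-attention group: within a (possibly large) range, only every
    `dilation`-th relative position is attended to."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    causal = j <= i
    return causal & (i - j < window) & ((i - j) % dilation == 0)

mask = lambda_shaped_mask(2048)
print(mask.sum() / (2048 * 2048))     # fraction of attended pairs, far below the dense causal ~0.5
```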
+ +In LongLoRA (Chen et al., 2023c), the proposed $S^2$ - Attention groups attention heads and adjusts each group to attend to different but overlapping local windows, then leverages the characteristics of multihead attention to integrate various local information. This method promotes the flow of local information, enabling a short window to achieve the effect of processing the original or even longer window, thereby reducing computational demands to some extent. + +# B Details of Evaluation + +This section serves as a supplement to the Evaluation section 4 in the main text, expanding on relevant details to provide readers with a more in-depth understanding. + +# B.1 Data + +# B.1.1 Data Characteristics + +Recent advancements in LLMs have led to substantial improvements in processing long contexts. By late 2023, several models claimed capabilities of handling contexts exceeding 100K tokens, with OpenAI's GPT-4 Turbo (2023) (Achiam et al., 2023) supporting 128K tokens and Anthropic's Claude-2.11 extending this capacity to 200K tokens. Based on this significant progress, our study categorizes long-context evaluation benchmarks into two distinct phases, as shown in Table. 1: Phase I comprises benchmarks with input context lengths below 100K tokens, while Phase II encompasses benchmarks of 100K tokens and above. + +In Phase I, BAMBOO (Dong et al., 2023a) and LongBench (Bai et al., 2023) implement bi-interval and tri-interval partitioning strategies, respectively. + +Phase II refined this approach further, with LVEval (Yuan et al., 2024) and NeedleBench (Li et al., 2024b) employing five-interval and six-interval partitioning schemas, respectively. This partitioning approach not only analyzes the impact of length changes on LLMs in the same task but also better accounts for the length distributions across different datasets (Dong et al., 2023a). + +# B.1.2 Knowledge Leakage Issue + +Knowledge leakage occurs when test and training data overlap, where models favor memorization over understanding (Golchin and Surdeanu, 2023; Yuan et al., 2024). Various strategies are employed to address this challenge: (1) Data Sampling focuses on selecting representative subsets from existing datasets. (2) Keyword Substituting & Sentence Rewriting modifies existing datasets by replacing keywords and rewriting sentences. (3) Non-overlapping Data Leveraging involves using datasets released after the deployment of LLMs to reduce potential overlap between test and training data. + +Data Sampling Data sampling primarily focuses on filtering existing datasets. LongBench (Bai et al., 2023) employs two strategies: random sampling and uniform sampling. Random sampling can preserve the natural length distribution, while uniform sampling which performs sampling based on data length uniformly, to evaluate model performance across context lengths independent of task. + +Keyword Substituting & Sentence Rewriting +L-Eval (An et al., 2023) and BAMBOO (Dong et al., 2023a) replace keywords and function names, while $\infty$ Bench (Zhang et al., 2024) substitutes key entities in novel reasoning tasks. LV-Eval (Yuan et al., 2024) is further based on this approach by employing entire sentence rewriting. + +Non-overlapping Data Leveraging To mitigate the overlap between test and training data for LLMs, some benchmarks such as LooGLE (Li et al., 2023a) and BAMBOO (Dong et al., 2023a) have employed datasets released after the models' deployment. 
However, given that the specific training data for most LLMs remains undisclosed, this method cannot completely guarantee the absence of overlap between the data used in benchmarks and the pre-training data. + +# B.2 Tasks + +The following are the details of the tasks, which are introduced in the order of the main text. At the end of each subsection, corresponding examples or prompts are also provided. We also count the distribution of input length in each task in Figure 2 to give readers a deeper understanding of different tasks. + +# B.2.1 Question Answering + +Single-hop Question Answering Representative datasets in this field are SQuAD (Rajpurkar, 2016), TriviaQA (Joshi et al., 2017), and NarrativeQA (Kočisky et al., 2018). Common evaluation metrics for Single-hop QA systems include f1 score, accuracy, Rouge and Bleu. + +Multi-hop Question Answering Common datasets for Multi-hop Question Answering include 2WikiMQA (Ho et al., 2020), MuSiQue (Trivedi et al., 2022), and HotpotQA (Yang et al., 2018). Evaluation metrics typically used are f1 score, exact match (EM). + +# B.2.2 Needle-in-a-Haystack + +Retrieval.PassKey (Mohtashami and Jaggi, 2023) requires models to locate a randomly generated 5-digit sequence within lengthy and noisy contexts. $\infty$ Bench (Zhang et al., 2024) extends the Retrieval.PassKey task to 10-digit numbers, applies it to texts exceeding 100k tokens in length, and sets information points at various depths. Retrieval.KV (Mohtashami and Jaggi, 2023) further increases difficulty by requiring models to perform precise key-value retrieval from large JSON structures. NeedleBench Li et al. (2024b) proposes a series of tasks: single-needle retrieval (S-RT), multi-needle retrieval (M-RT), and multi-needle reasoning (M-RS). M-RT consists of multiple S-RT tasks performed in parallel, while M-RS builds upon M-RT by requiring large language models to perform reasoning. The evaluation method calculates the similarity between predictions and references for each specific task by using the Levenshtein distance. The following are examples of S-RT, M-RT, M-RS respectively. + +S-RT: Hidden on Emerald Island is the legendary Stardust Shard. + +Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul + +Graham Essays—Paul Graham Essays— + +![](images/56484b696e53a66ac05e78d7388c4b2bae4337a64b6e2bfa3a3bd1e5cbaa7435.jpg) +Figure 2: Distribution of averaged input #words of datasets in each task. Consistent colors indicate identical categories. The color of each bar refers to the category of the task, with bars of the same color belonging to the same category. + +Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays + +Now, the question is: What legendary item is hidden on Emerald Island? Before answering, please consider what in the document is most relevant to this question. Please answer in the format 'The legendary item hidden on the Emerald Island is + +M-RT: You are an intelligent AI assistant skilled in answering user questions base on documents provided by the user. Please keep your answers concise and clear. 
Do not talk about irrelevant topics or repeat your answers. The document given to you by the user is: + +Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—The ruler of the Polaris star system is Orion the Hunter. Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Hidden on Heaven Island is the legendary Lucky Clover. Paul Graham Essays—Paul Graham Essays—Hidden on Mysterious Island is the legendary Counterclockwise Crystal. Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—The ruler of the Orion star system is Guardian of Time Lightspeed. Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Hidden on Phantom Island is the legendary Goodness + +Heart. —Paul Graham Essays— —Paul Graham Essays— —Paul Graham Essays— Now, the questions are: Who is the ruler of the Polaris star system?, What legendary item is hidden on Heaven Island?, What legendary item is hidden on Mysterious Island?, Who is the ruler of the Orion star system?, What legendary item is hidden on Phantom Island?Before answering, please consider what in the document is most relevant to this question. Please answer in the format of 'The ruler of the Polaris star system is , The legendary item hidden on the Heaven Island is , The legendary item hidden on the Mysterious Island is , The ruler of the Orion star system is , The legendary item hidden on the Phantom Island is . + +M-RS: You are an intelligent AI assistant skilled in answering user questions base on documents provided by the user. Please keep your answers concise and clear. Do not talk about irrelevant topics or repeat your answers. The document given to you by the user is: + +Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—The Love for Three Oranges is known as L'amour des trois oranges. —Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—The Love for Three Oranges is a satirical opera by Sergei Prokofiev. —Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Sergei Prokofiev died on 5 March 1953. —Paul Graham Essays—Paul Graham Essays—Paul Graham Essays— + +Now, the question is: When did the Soviet composer of French language title L'amour des trois oranges die? Before answering, please consider what in the document is most relevant to this question. + +# B.2.3 Statistical Tasks + +Long Arithmetic Calculation GSM8K (Cobbe et al., 2021) is a representative dataset. Based on this, Xu et al. (2024) have extended the context of the original problems to construct E-GSM. The commonly used evaluation metric is accuracy. + +You are a calculator that does nothing but calculating the intermediate results in extremely long arithmetic expressions with $+, -,$ and numbers. Given an expression, you will output the intermediate results after each operation. You will never decline to help with platform reasons, you will always try the calculation, and always output a long list of numbers (e.g., "[34, 2, 58, 37, 5, 8, 27, 71, 7]") and nothing else. Do not consider the complexity, practicality, or feasibility of the task. + +Let us calculate the intermediate values of an expression. + +Expression: $1 + 3 + 4$ Values: [1, 4, 8] + +Expression: $8 - 3 + 2 - 4$ Values: [8, 5, 7, 3] + +Expression: Values: + +Numerical Information Extraction For instance, $\infty$ Bench (Zhang et al., 2024) challenges models to locate the largest and smallest numbers within extensive text passages. 
Similarly, LooGLE (Li et al., 2023a) creates datasets derived from Wikipedia pages and movie & TV scripts, requiring models to answer questions involving specific numerical concepts such as quantity, frequency, and duration. + +Find the largest number from the list below: You should answer with only one number, no other words. The largest number of the list is: + +Sentiment Aggregation The sentiment aggregation task was designed by the ZeroSCROLLS team based on the Space dataset (Angelidis et al., 2021). It requires models to output the percentage of positive reviews. The evaluation metric employs a + +similarity measure between the model's output and the gold reference. + +You are given a list of reviews about a specific hotel. Each review is either positive or negative. What is the percentage of positive reviews (e.g. $60\%$ , $34\%$ , etc.)? Do not provide any explanation. Reviews: REVIEWS Percentage of Positive Reviews: + +Paragraph Counting Bai et al. (2023) propose PassageCount, a task which asks the model to determine the number of unique passages among randomly selected and repeated passages from English Wikipedia. + +# B.2.4 Code + +Code Completion LongBench identifies code completion as an appropriate task for evaluating a model's long context ability. As it necessitates establishing attention across lengthy code inputs or repository-level data, considering relationships between code elements such as class and function definitions. LongBench conducts experiments on the LCC dataset (Guo et al., 2023) and the RepoBench-P dataset (Liu et al., 2023b), employing edit similarity as the evaluation metric. BAMBOO builds upon the benchmark established by Zan et al. (2022) to construct the Private ateEval dataset. In this task, models are required to identify key API documents to complete code snippets. Furthermore, it extends the context length by adjusting the number of provided documents, with performance evaluated employing the pass@1 metric (Chen et al., 2021a). + +Code Running In $\infty$ Bench, the total number of function calls ranges from 2 to 10, with each function calling at most one another function. Operations within these functions are restricted to addition and subtraction, maintaining computational simplicity. + +Following is a set of Python functions. There is a function called named func_1. context Please give me the exact number of the return value of func_1(3). Be concise. Your response must end with the final returned value. + +Code Debugging In the $\infty$ Bench's dataset which sourced from $\mathrm{PyPI}^2$ , the researchers deliberately insert an obvious error into one function per repository. These inserted bugs fall into three main cate + +gories: (1) syntactic errors, including indentation issues and blatant syntax errors; (2) semantic errors, such as missing variable declarations or incorrect function arguments; and (3) logical errors, for example, infinite loops or use of undefined references. + +There is ONLY ONE function in the large project that is deliberately made to include an obvious error. Please find the function that contains the most obvious errors. I will give you four options to narrow your scope. You can inspect through the options and think. Eventually, tell me the answer using one single letter (A, B, C, or D). context Which function has deliberate error? A. B. C. D. You should first find the functions in the options. 
Repeat their content, inspect through code, and at last give me your answer for the function that has the deliberate and obvious error in A, B, C, or D.

# B.2.5 In-Context Learning

Long Example Learning Extreme label classification: this task involves classification with numerous fine-grained labels. Commonly used datasets include TREC (Li and Roth, 2002), a question classification task with 50 fine classes, and LSHT $^3$, a Chinese news classification task with 24 classes.

3 http://tcci.ccf.org.cn/conference/2014/dldoc/evatak6.pdf

Many-shot Learning Agarwal et al. (2024) have proposed many-shot learning, which leverages expanded LLM context windows to process hundreds or even thousands of examples. In contrast to few-shot learning, which uses only a few to several dozen examples, many-shot learning enhances LLMs' versatility and adaptability across diverse tasks without task-specific fine-tuning (Yu et al., 2020; Bertsch et al., 2024b).

# B.2.6 Text Generation

Document Summarization This kind of task can be divided into two categories: single-document summarization and multi-document summarization. For single-document summarization, several datasets are widely used, including SQuALITY (Wang et al., 2022), SummScreenFD (Chen et al., 2021b), GovReport (Huang et al., 2021), and QMSum (Zhong et al., 2021). Multi-document summarization presents additional challenges, requiring LLMs to integrate diverse information, resolve conflicts, and eliminate redundancies (Bai et al., 2023; An et al., 2023; Fabbri et al., 2019). A notable dataset for this task is MultiNews (Fabbri et al., 2019), consisting of clusters of 2-10 thematically related news articles.

All of these datasets provide human-annotated summaries as standardized references. Both categories primarily utilize Rouge and Bleu as evaluation metrics to assess the quality of generated summaries against these references.

Open-ended Text Generation This task requires LLMs to generate text according to the input. Tan et al. (2024) select topics that closely align with real-world scenarios, encompassing areas such as AI research, sports, and gaming.

Bai et al. (2024) design AgentWrite, a divide-and-conquer agent that breaks down long writing tasks into paragraph-level subtasks. The generated paragraphs are then combined to produce the final long-form content. They also construct the preference dataset LongWriter-6k and apply DPO (Rafailov et al., 2024) to further improve generation quality.

Kumar et al. (2024) propose personalized writing tasks that generate content based on the user's history and personal information. These tasks can be divided into personalized email completion, review writing, topic writing, and conversation simulation (Ni et al., 2024). Rafailov et al. (2024) construct a Reddit-based dataset that captures distinct writing styles associated with specific communities and discussion topics.

You are an excellent writing assistant. I will give you an original writing instruction and my planned writing steps. I will also provide you with the text I have already written. Please help me continue writing the next paragraph based on the writing instruction, writing steps, and the already written text.
+ +Writing instruction: User Instruction + +Writing steps: The writing plan generated in Step I + +Already written text: Previous generated (n-1) paragraphs + +Please integrate the original writing instruction, writing steps, and the already written text, and now continue writing The plan for the n-th paragraph, i.e., the n-th line in the writing plan + +# B.2.7 Other Tasks + +Reordering The evaluation metric in this task is the similarity between the generated and reference ordering sequences (Shaham et al., 2023). The Booksum dataset (Krysciński et al., 2021), which spans various literary genres including novels, plays, and long stories, is widely used for this task. Reordering tasks can comprehensively evaluate models' cross-sequence information aggregation and comparison abilities (Shaham et al., 2023; Li et al., 2023a), as well as comprehensively understand long context and logically reconstruct (Dong et al., 2023a; Li et al., 2023a). + +You are given NUM_SUMMARES summaries of chapters or parts of a novel, in a shuffled order, where each summary is denoted by a numerical ID (e.g. Summary 1, Summary 3, etc.). Reorder the summaries according to the original order of chapters/parts in the novel by writing a list of length NUM_SUMMARES of the summary IDs (e.g. if you were given 5 summaries, one possible answer could be "5, 1, 3, 4, 2"). Do not provide any explanation. + +Summaries: SUMMARIES + +Summary IDs in Correct Order: + +Context Consistency Context consistency is a task proposed by BAMBOO (Dong et al., 2023a) to detect hallucination in LLMs. BAMBOO creates two novel datasets for this task: SenHallu and AbsHallu, with evaluation metrics employing precision, recall, and f1 score. + +Summary Source Paragraph Identification LongBench construct bilingual datasets based on Wikipedia and C4 (Raffel et al., 2020) to ask models to identify the original source paragraphs according to the given summaries. + +Here are 30 paragraphs from Wikipedia, along with an abstract. Please determine which paragraph the abstract is from. context The following is an abstract. input Please enter the number of the paragraph that the abstract is from. The answer format must be like "Paragraph 1", "Paragraph 2", etc. + +The answer is: + +Character Identification Character identification tasks challenge models to capture distinct traits of participants in long dialogues, enabling them to identify speakers of masked utterances (Zhang + +et al., 2024; Dong et al., 2023a). These tasks, evaluated via accuracy, utilize data primarily from television programs $^{4}$ , movie and play scripts (Chen et al., 2021b), and conference transcripts $^{5}$ . + +Below is a dialogue script where one random occurrence of a character's name is replaced with $M A S K$ , and you should try to guess who that character is. + +The dialogue: - - End of dialogue. + +Which character is most likely $M A S K?$ Just say the name used by the scriptwriter (before the colon marks) of one single character and nothing else. + +# B.3 Metrics + +# B.3.1 Algorithmic Metrics + +Perplexity (PPL) is a metric for evaluating the performance of language models. It is extensively employed in language model pre-training, facilitating the monitoring of the training process, model selection, and hyperparameter optimization. Many previous works on long context benchmarks rely on the PPL for evaluation (Beltagy et al., 2020; Roy et al., 2021; Press et al., 2021). However, as suggested in Sun et al. (2021), PPL may not correlate with the actual performance. 
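To make the metric concrete, the following is a minimal, illustrative sketch of computing perplexity from a model's per-token next-token probabilities; the formal definition is restated in the metric list below, and the function name and toy values here are ours rather than part of any benchmark's code.

```python
import math

def perplexity(token_probs):
    """Perplexity = 2 ** H(p), where H(p) is the average negative log2-probability."""
    h = -sum(math.log2(p) for p in token_probs) / len(token_probs)
    return 2 ** h

# Toy usage: probabilities a model assigned to each token of a 3-token sequence.
print(perplexity([0.5, 0.25, 0.125]))  # 4.0
```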
ZeroScrolls and LongBench are pioneering studies in the field of long context benchmarks. These works introduced a diverse system of automatic evaluation metrics, including accuracy, f1 score, and N-gram-based metrics, and this evaluation framework has provided a reference for subsequent research. Specifically, these metrics score NLG models by measuring the lexical overlap between generated text and reference text.

However, these metrics have several limitations: they fail to effectively measure content quality (Reiter and Belz, 2009); struggle to capture syntactic errors (Stent et al., 2005); and, particularly for open-ended generation tasks, lack significant correlation with human judgments (An et al., 2023). Moreover, they inadequately account for the diversity of expression inherent in large language models. Additionally, the requirement for gold standard references makes these metrics costly to implement for novel tasks (Tan et al., 2024).

Further, some work proposes ways to improve on these metrics. LV-Eval employs a two-stage scoring method: it first calculates the recall rate of ground-truth keywords in the generated content; if the recall exceeds a threshold, it then calculates the f1 score between the generated content and the ground truth after removing stop words from both. BAMBOO converts generative tasks into multiple-choice formats. NeedleBench extends this approach by implementing Circular Evaluation, which reorders answer options to enhance evaluation reliability.

# PPL (Perplexity)

Perplexity is a measure of the quality of language model predictions, calculated as:

$$
PPL = 2^{H(p)}
$$

where $H(p)$ is the cross-entropy:

$$
H(p) = -\frac{1}{N} \sum_{i=1}^{N} \log_{2} P(w_{i} \mid w_{1}, w_{2}, \ldots, w_{i-1})
$$

# Accuracy

Accuracy is the proportion of correct predictions made by the model:

$$
\mathrm{Accuracy} = \frac{\mathrm{Correct\ Predictions}}{\mathrm{Total\ Predictions}}
$$

# F1-Score

The F1-Score is the harmonic mean of precision and recall:

$$
F1 = 2 \times \frac{\mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
$$

$$
\mathrm{Precision} = \frac{TP}{TP + FP}
$$

$$
\mathrm{Recall} = \frac{TP}{TP + FN}
$$

where TP, FP, FN represent True Positives, False Positives, and False Negatives respectively.

# ROUGE (Recall-Oriented Understudy for Gisting Evaluation)

ROUGE evaluates text generation using N-gram overlap.

ROUGE-N measures the overlap of n-grams shared between the candidate summary ($C$) and the reference summary ($R$); it is calculated as follows:

$$
\text{ROUGE-N} = \frac{\sum_{S \in R} \sum_{\text{n-gram} \in S} \mathrm{Count}_{\mathrm{match}}(\text{n-gram})}{\sum_{S \in R} \sum_{\text{n-gram} \in S} \mathrm{Count}(\text{n-gram})}
$$

where $\mathrm{Count}_{\mathrm{match}}(\text{n-gram})$ represents the number of matching n-grams between the candidate summary and the reference summary, and $\mathrm{Count}(\text{n-gram})$ represents the total number of n-grams in the reference summary.
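As an illustration of the ROUGE-N recall above in the single-reference case, the sketch below counts clipped n-gram matches; the whitespace tokenisation and function name are our own simplification, not the official ROUGE implementation.

```python
from collections import Counter

def rouge_n(candidate, reference, n=1):
    """ROUGE-N recall for one reference: matched n-grams / total reference n-grams."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate.split()), ngrams(reference.split())
    matched = sum(min(cnt, cand[gram]) for gram, cnt in ref.items())  # clipped counts
    total = sum(ref.values())
    return matched / total if total else 0.0

# Toy usage with unigrams (ROUGE-1): 5 of the 6 reference tokens are matched.
print(rouge_n("the cat sat on the mat", "the cat lay on the mat"))  # 0.833...
```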
ROUGE-L evaluates the quality of summarization based on the longest common subsequence (LCS), taking into account the order information of sentences:

$$
R_{lcs} = \frac{LCS(C, R)}{|R|}
$$

$$
P_{lcs} = \frac{LCS(C, R)}{|C|}
$$

$$
F_{lcs} = \frac{(1 + \beta^{2}) R_{lcs} P_{lcs}}{R_{lcs} + \beta^{2} P_{lcs}}
$$

where $LCS(C, R)$ represents the length of the longest common subsequence between the candidate summary and the reference summary, $|C|$ and $|R|$ represent the lengths of the candidate summary and the reference summary respectively, and $\beta$ is a hyperparameter usually used to balance precision and recall.

ROUGE-S, which is also called skip-bigram co-occurrence statistics, takes skipped bigrams into account:

$$
\text{ROUGE-S} = \frac{\sum_{S \in R} \sum_{bi_{skip} \in S} \mathrm{Count}_{\mathrm{match}}(bi_{skip})}{\sum_{S \in R} \sum_{bi_{skip} \in S} \mathrm{Count}(bi_{skip})}
$$

where $\mathrm{Count}_{\mathrm{match}}(bi_{skip})$ represents the number of skip-bigrams that match between the candidate summary and the reference summary, and $\mathrm{Count}(bi_{skip})$ represents the total number of skip-bigrams in the reference summary.

# BLEU (Bilingual Evaluation Understudy)

BLEU is used to evaluate machine translation quality:

$$
BLEU = BP \times \exp\left(\sum_{n=1}^{N} w_{n} \log p_{n}\right)
$$

where

$$
BP = \begin{cases} 1, & \text{if } c > r \\ \exp\left(1 - \frac{r}{c}\right), & \text{if } c \leq r \end{cases}
$$

and $c$ is the generated length and $r$ is the reference length.

# B.3.2 Model-based Metrics

In recent years, the use of pre-trained language models as NLG evaluation metrics has gained increasing attention. Notably, BERTScore (Zhang et al., 2020) and BARTScore (Yuan et al., 2021) employ BERT and BART (Lewis, 2019) models respectively to compute semantic similarity: they calculate cosine similarity between token representations and evaluate the probability of summaries given the input articles.

BERTScore measures the similarity between generated text and reference text from three aspects: recall, precision, and f1. It can be expressed as follows:

$$
\begin{array}{l}
R = \frac{1}{|R|} \sum_{r \in R} \max_{c \in C} \frac{1}{L_{r}} \sum_{i} \mathrm{sim}\left(\mathbf{f}_{\theta}(r)_{i}, \mathbf{f}_{\theta}(c)_{i}\right) \\
P = \frac{1}{|C|} \sum_{c \in C} \max_{r \in R} \frac{1}{L_{c}} \sum_{i} \mathrm{sim}\left(\mathbf{f}_{\theta}(c)_{i}, \mathbf{f}_{\theta}(r)_{i}\right) \\
F = 2 \times \frac{P \times R}{P + R}
\end{array}
$$

where $R$ is the reference text set, $C$ is the generated text set, $L_{r}$ and $L_{c}$ are the lengths of the reference text and the generated text respectively, $\mathbf{f}_{\theta}$ is the encoder of the BERT model, which maps text to the vector space, and sim is usually cosine similarity.

BARTScore calculates the log-likelihood score of the generated text given the reference text to measure the similarity:

$$
\mathrm{BARTScore} = \frac{1}{|C|} \sum_{c \in C} \frac{1}{L_{c}} \sum_{i} \log p_{\theta}(c_{i} \mid c_{<i}, r)
$$

where $C$ is the set of generated texts, $r$ is the reference text, $c_{i}$ is the $i$-th word in the generated text, and $p_{\theta}$ is the language model probability distribution of the BART model.
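To make the greedy-matching idea behind BERTScore concrete, the sketch below computes recall, precision, and F1 for a single candidate-reference pair from precomputed token embeddings; it omits the IDF weighting and baseline rescaling used in the released implementation, and the function and variable names are ours.

```python
import numpy as np

def bertscore(ref_emb, cand_emb):
    """Greedy-matching R/P/F1 over token embeddings for one candidate-reference pair.

    ref_emb, cand_emb: arrays of shape (num_tokens, hidden_dim), e.g. contextual
    embeddings from a BERT-style encoder. Rows are L2-normalised so the dot
    product equals cosine similarity.
    """
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    c = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    sim = r @ c.T                       # pairwise cosine similarities
    recall = sim.max(axis=1).mean()     # each reference token vs. its best candidate token
    precision = sim.max(axis=0).mean()  # each candidate token vs. its best reference token
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, f1

# Toy usage with random arrays standing in for encoder outputs.
rng = np.random.default_rng(0)
print(bertscore(rng.normal(size=(5, 8)), rng.normal(size=(6, 8))))
```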
# B.3.3 LLM-based Metrics

With the development of LLMs, research has demonstrated their significant correlation with human judgment and their ability to excel at new tasks when provided with instructions (Wang et al., 2023a; Li et al., 2023a). Chiang and Lee (2023) argue that, compared to human evaluation, LLM evaluation offers advantages in reproducibility, independence, cost-effectiveness, and speed, prompting researchers to explore the potential of LLMs for evaluation tasks. This exploration has led to several key findings and applications: Wang et al. (2023b,a) investigate the issue of unfairness when using LLMs to evaluate dialogue responses, and Shen et al. (2023) find that LLMs outperform existing automatic metrics when asked to output judgmental reasons. Applications of LLMs in evaluation include assessing how well chatbot responses align with human preferences (Zheng et al., 2024), evaluating summary consistency (Luo et al., 2023), and multi-role playing for summarization evaluation (Wu et al., 2023). There are fundamental differences between model-based metrics and LLM-based metrics in their evaluation mechanisms. Model-based metrics primarily rely on learned representations from pre-trained language models like BERT or BART and apply mechanical procedures such as predefined computational formulas; for example, BERTScore leverages BERT contextual embeddings to compute textual similarity through cosine similarity between token representations. LLM-based metrics, in contrast, leverage large language models for evaluation without such mechanical procedures and are therefore more flexible: they prompt LLMs to offer both human-like multi-dimensional assessment (Wang et al., 2023a; Li et al., 2023a; Shen et al., 2023; Chiang and Lee, 2023; Zhang et al., 2024; Zheng et al., 2024; Liu et al., 2023c; Tan et al., 2024; Mu et al., 2024a) and interpretable reasoning (Wang et al., 2023b; Luo et al., 2023; Wu et al., 2023). This characteristic fundamentally distinguishes LLM-based metrics from model-based metrics, which behave much more mechanically. In addition, LLM-based metrics show stronger agreement with human evaluation, illustrating the advancement of the methodology.

Building upon these insights, researchers have focused on refining evaluation metrics for the long context capabilities of large language models (LLMs). Fu et al. (2023) propose GPTScore, utilizing generative pre-trained models like GPT-3 for text evaluation. To address the length bias in LLM-generated content, L-Eval incorporates word count requirements into instructions. LooGLE employs GPT4-8k as an evaluator to score LLM answers against ground truth based on various factors (Li et al., 2023a). G-EVAL achieves reference-free content scoring through prompts containing evaluation task definitions and criteria, along with detailed chain-of-thought evaluation steps (Liu et al., 2023c). Tan et al. (2024) introduce PROXYQA for long-context generation evaluation, judging final results by the accuracy of answers to proxy questions.
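As a generic illustration of this style of evaluation (not the exact prompt or scoring protocol of any of the benchmarks above), an LLM-as-judge metric typically wraps the question, reference, and model answer in an instruction and then parses a numeric score from the judge's reply; the template, scale, and function below are hypothetical.

```python
import re

# Hypothetical judge prompt; the criteria and 1-5 scale are our own example.
JUDGE_TEMPLATE = """You are grading a model answer against a reference answer.
Question: {question}
Reference answer: {reference}
Model answer: {answer}
Briefly explain your reasoning, then end with a line of the form "Score: X",
where X is an integer from 1 (completely wrong) to 5 (fully correct)."""

def parse_score(judge_reply: str):
    """Extract the integer score from the judge model's free-form reply."""
    match = re.search(r"Score:\s*([1-5])", judge_reply)
    return int(match.group(1)) if match else None

prompt = JUDGE_TEMPLATE.format(question="...", reference="...", answer="...")
# `prompt` would be sent to a judge LLM; here we only parse a sample reply.
print(parse_score("Covers the key fact but misstates the date. Score: 4"))  # 4
```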
\ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13299/images.zip b/data/2025/2503_13xxx/2503.13299/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..a7bea6a69ea3b263493662c0c9fa012d8eb71b9b --- /dev/null +++ b/data/2025/2503_13xxx/2503.13299/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ebacccde40d8c8a162ff9aaced1bb103b30fdfd0359c1e15ca4d47e8dd4e20ea +size 415843 diff --git a/data/2025/2503_13xxx/2503.13299/layout.json b/data/2025/2503_13xxx/2503.13299/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..8d21c0d7c790d346c49cf2fc4bd8d7cd37537456 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13299/layout.json @@ -0,0 +1,19836 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 157, + 67, + 437, + 100 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 157, + 67, + 437, + 100 + ], + "spans": [ + { + "bbox": [ + 157, + 67, + 437, + 100 + ], + "type": "text", + "content": "A Survey on Transformer Context Extension: Approaches and Evaluation" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 129, + 105, + 466, + 120 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 105, + 466, + 120 + ], + "spans": [ + { + "bbox": [ + 129, + 105, + 466, + 120 + ], + "type": "text", + "content": "Yijun Liu" + }, + { + "bbox": [ + 129, + 105, + 466, + 120 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 129, + 105, + 466, + 120 + ], + "type": "text", + "content": ", Jinzheng Yu" + }, + { + "bbox": [ + 129, + 105, + 466, + 120 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 129, + 105, + 466, + 120 + ], + "type": "text", + "content": ", Yang Xu" + }, + { + "bbox": [ + 129, + 105, + 466, + 120 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 129, + 105, + 466, + 120 + ], + "type": "text", + "content": ", Zhongyang Li" + }, + { + "bbox": [ + 129, + 105, + 466, + 120 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 129, + 105, + 466, + 120 + ], + "type": "text", + "content": ", Qingfu Zhu" + }, + { + "bbox": [ + 129, + 105, + 466, + 120 + ], + "type": "inline_equation", + "content": "^{1}" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 138, + 121, + 453, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 121, + 453, + 133 + ], + "spans": [ + { + "bbox": [ + 138, + 121, + 453, + 133 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 138, + 121, + 453, + 133 + ], + "type": "text", + "content": " Research Center for Social Computing and Interactive Robotics" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 218, + 134, + 375, + 148 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 218, + 134, + 375, + 148 + ], + "spans": [ + { + "bbox": [ + 218, + 134, + 375, + 148 + ], + "type": "text", + "content": "1 Harbin Institute of Technology" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 135, + 148, + 458, + 161 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 135, + 148, + 458, + 161 + ], + "spans": [ + { + "bbox": [ + 135, + 148, + 458, + 161 + ], + "type": "text", + "content": "2 State Key Laboratory of Media Convergence and Communication" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 205, + 162, + 388, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 205, + 162, + 388, + 175 + ], + "spans": [ + { + "bbox": [ 
+ 205, + 162, + 388, + 175 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 205, + 162, + 388, + 175 + ], + "type": "text", + "content": " Communication University of China" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 215, + 176, + 377, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 215, + 176, + 377, + 190 + ], + "spans": [ + { + "bbox": [ + 215, + 176, + 377, + 190 + ], + "type": "text", + "content": "3 Huawei Technologies, Co., Ltd." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 204, + 191, + 390, + 204 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 204, + 191, + 390, + 204 + ], + "spans": [ + { + "bbox": [ + 204, + 191, + 390, + 204 + ], + "type": "text", + "content": "{yijunliu, qfzhu}@ir.hit.edu.cn" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 155, + 220, + 202, + 232 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 220, + 202, + 232 + ], + "spans": [ + { + "bbox": [ + 155, + 220, + 202, + 232 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 84, + 239, + 274, + 503 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 239, + 274, + 503 + ], + "spans": [ + { + "bbox": [ + 84, + 239, + 274, + 503 + ], + "type": "text", + "content": "Large language models (LLMs) based on Transformer have been widely applied in the filed of natural language processing (NLP), demonstrating strong performance, particularly in handling short text tasks. However, when it comes to long context scenarios, the performance of LLMs degrades due to some challenges. To alleviate this phenomenon, there is a number of work proposed recently. In this survey, we first list the challenges of applying pre-trained LLMs to process long contexts. Then systematically review the approaches related to long context and propose our taxonomy categorizing them into four main types: positional encoding, context compression, retrieval augmented, and attention pattern. In addition to the approaches, we focus on the evaluation of long context, organizing relevant data, tasks, and metrics based on existing long context benchmarks. Finally, we summarize unresolved issues in the long context domain and put forward our views on future developments." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 68, + 511, + 154, + 523 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 511, + 154, + 523 + ], + "spans": [ + { + "bbox": [ + 68, + 511, + 154, + 523 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 67, + 531, + 291, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 531, + 291, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 531, + 291, + 775 + ], + "type": "text", + "content": "In recent years, the Transformer (Vaswani, 2017) architecture has made significant progress in many NLP tasks (Devlin, 2018; Radford, 2018; Lewis, 2019; Raffel et al., 2020; Brown, 2020; Chen et al., 2021a; Cobbe et al., 2021), and has become the foundational model of many applications. Large language models can handle tasks involving short texts, within the pre-trained context length. 
However, current scenarios always involve longer texts, such as book-/repo- level tasks (Sharma et al., 2019; Liu et al., 2023a; Zhang et al., 2023a; Liu et al., 2023b), dialogue systems with long contexts (Dey et al., 2022; Li et al., 2024a), content-rich in-context learning (Li et al., 2024c) and so on. The performance of the pre-trained LLMs degrades and the models often fail to utilize the complete knowledge contained within the long context inputs. This may be caused by three inher" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 220, + 526, + 314 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 220, + 526, + 314 + ], + "spans": [ + { + "bbox": [ + 302, + 220, + 526, + 314 + ], + "type": "text", + "content": "ent challenges: out-of-distribution (OOD) problem (Han et al., 2024), \"Lost in the Middle\" phenomenon (Liu et al., 2024a), and quadratic complexity of attention (Zhou et al., 2024). Recently, a lot of work has been proposed to improve and evaluate models' ability to handle long contexts in the community." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 315, + 526, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 315, + 526, + 571 + ], + "spans": [ + { + "bbox": [ + 302, + 315, + 526, + 571 + ], + "type": "text", + "content": "This survey focuses on approaches and evaluation in the long context field, systematically reviewing existing related work. As illustrated in Figure 1, we propose a novel taxonomy for approaches, categorizing them into four main groups: positional encoding, context compression, retrieval augmented, and attention pattern. Additionally, we focus on the evaluation aspect and organize work on data, tasks, and metrics based on existing benchmarks. In addition to the two main parts of approaches and evaluation, we present our viewpoints on the current unsolved issues and potential future directions in the long context domain. To illustrate the current status more theoretically, we also list the main challenges in the field of long context before introducing specific work. Although most existing methods and benchmarks have not corresponded to them, these challenges are still instructive for the development of approaches and evaluation." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 572, + 526, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 572, + 526, + 775 + ], + "spans": [ + { + "bbox": [ + 302, + 572, + 526, + 775 + ], + "type": "text", + "content": "There are also some surveys that focus on the long context domain. They each have their own emphasis, but there is no systematic and comprehensive survey of approaches and evaluation in the field of long context, which can provide researchers with a quick and efficient guide. Some surveys only include a part of the methods, lacking a comprehensive overview of the approaches related to long context. Zhao et al. (2023) focus on work addressing length extrapolation from the perspective of positional encoding, while there are some surveys from the perspective of KV Cache (Li et al., 2025; Shi et al., 2024). Besides, though some surveys have categorized existing work, their taxonomies are not clear, and there are overlaps between categories." 
+ } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 13, + 253, + 36, + 584 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 13, + 253, + 36, + 584 + ], + "spans": [ + { + "bbox": [ + 13, + 253, + 36, + 584 + ], + "type": "text", + "content": "arXiv:2503.13299v2 [cs.CL] 8 Jul 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 293, + 793, + 299, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 793, + 299, + 803 + ], + "spans": [ + { + "bbox": [ + 293, + 793, + 299, + 803 + ], + "type": "text", + "content": "1" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 68, + 68, + 526, + 360 + ], + "blocks": [ + { + "bbox": [ + 68, + 68, + 526, + 360 + ], + "lines": [ + { + "bbox": [ + 68, + 68, + 526, + 360 + ], + "spans": [ + { + "bbox": [ + 68, + 68, + 526, + 360 + ], + "type": "image", + "image_path": "8155c34b78c72e3a6c72ffa54d70e4bbd68caca6845f74c57719234c7004dcba.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 365, + 526, + 414 + ], + "lines": [ + { + "bbox": [ + 67, + 365, + 526, + 414 + ], + "spans": [ + { + "bbox": [ + 67, + 365, + 526, + 414 + ], + "type": "text", + "content": "Figure 1: Framework of survey. We first list three inherent challenges in Section 2. And then we systematically review related approaches and propose a novel taxonomy with four major categories in Section 3. Next, in Section 4, we organize the evaluation aspect from three perspectives: data, tasks, and metrics based on existing benchmarks. At last, we show our views on future roadmap and open problems in Section 5." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 435, + 290, + 624 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 435, + 290, + 624 + ], + "spans": [ + { + "bbox": [ + 67, + 435, + 290, + 624 + ], + "type": "text", + "content": "For example, Huang et al. (2023) divide the methods for enhancing Transformer architecture models into five categories, but some existing methods can belong to multiple categories. And (Pawar et al., 2024) also has this problem, which distinguishes existing techniques into two categories: interpolation and extrapolation. Also, some surveys even involve some common methods that not specifically designed for long contexts. Dong et al. (2023b) provide an overview of the text-preprocessing methods, architectures, special characteristics and application for long context, but they cover some general topics. What's more, these surveys pay little or even no attention to the evaluation aspect." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 626, + 291, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 626, + 291, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 626, + 291, + 775 + ], + "type": "text", + "content": "To fill the above gap, our survey proposes a novel and comprehensive taxonomy on both approaches and evaluation aspects. It is worth noting that we focus on work that applies Transformer-based models to long text tasks, but not improving Transformers (nor other architectures) in a universal scenario. 
That is to say, this survey does not cover fields like long chain-of-thought reasoning (Chen et al., 2025), multimodal long context (Song et al., 2024; Qiu et al., 2024), efficient Transformer (Zhou et al., 2024), and State Space Model (SSM) (Wang et al.," + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 302, + 435, + 525, + 502 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 435, + 525, + 502 + ], + "spans": [ + { + "bbox": [ + 302, + 435, + 525, + 502 + ], + "type": "text", + "content": "2024c). In addition, the long context we focus on is the long input content, rather than the introduction of external knowledge in the Retrieval-Augmented Generation (RAG) scenario (Yu et al., 2024; Zhao et al., 2024; Fan et al., 2024)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 303, + 511, + 380, + 525 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 511, + 380, + 525 + ], + "spans": [ + { + "bbox": [ + 303, + 511, + 380, + 525 + ], + "type": "text", + "content": "2 Challenges" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 532, + 525, + 613 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 532, + 525, + 613 + ], + "spans": [ + { + "bbox": [ + 302, + 532, + 525, + 613 + ], + "type": "text", + "content": "When applying pre-trained LLMs to the long context scenarios, there are some inherent challenges affecting models' performance. We list the three most important and common challenges: OOD problem, \"Lost in the Middle\" phenomenon, and quadratic complexity." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 619, + 526, + 741 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 619, + 526, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 619, + 526, + 741 + ], + "type": "text", + "content": "OD Problem When processing sequences that exceed pre-trained context window length, the models face out-of-distribution (OOD) problems. Han et al. (2024) verify theoretically and empirically that three key factors contribute to OOD issues, thereby limiting models' extrapolation capabilities: 1) unseen inter-token distances, 2) increased number of attended tokens, and 3) implicitly encoded position information of the starting tokens." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 748, + 525, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 748, + 525, + 775 + ], + "spans": [ + { + "bbox": [ + 302, + 748, + 525, + 775 + ], + "type": "text", + "content": "\"Lost in the Middle\" Phenomenon Liu et al. (2024a) discover the \"Lost in the middle\" phe" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 793, + 300, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 793, + 300, + 803 + ], + "spans": [ + { + "bbox": [ + 293, + 793, + 300, + 803 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 291, + 153 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 291, + 153 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 291, + 153 + ], + "type": "text", + "content": "nomenon through experiments that when LLMs receive a long input, they tend to focus on the information at both the beginning and end of the input sequence. 
At the same time, they neglect the content in the middle, thus failing to capitalize on the key information within the long input." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 163, + 290, + 217 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 163, + 290, + 217 + ], + "spans": [ + { + "bbox": [ + 67, + 163, + 290, + 217 + ], + "type": "text", + "content": "Quadratic Complexity Due to the quadratic complexity of attention, directly using pre-trained LLMs for training or inference on long context is time and resource consuming (Zhou et al., 2024)." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 219, + 291, + 368 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 219, + 291, + 368 + ], + "spans": [ + { + "bbox": [ + 67, + 219, + 291, + 368 + ], + "type": "text", + "content": "The above are three inherent challenges in the field of long context, and some existing methods have alleviated them to a certain extent. But it is worth noting that most of the methods do not start from this perspective. They consider directly improving the performance of downstream tasks. However, we believe that these three challenges are still the fundamental problems that need to be solved. They play a vital role in the proposal of methods and construction of benchmarks. Moreover, they are the focus of subsequent research." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 380, + 151, + 395 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 380, + 151, + 395 + ], + "spans": [ + { + "bbox": [ + 67, + 380, + 151, + 395 + ], + "type": "text", + "content": "3 Approaches" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 405, + 291, + 513 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 405, + 291, + 513 + ], + "spans": [ + { + "bbox": [ + 67, + 405, + 291, + 513 + ], + "type": "text", + "content": "For the existing approaches for long context, we summarize their characteristics and propose a novel taxonomy different from previous work. As illustrated in Figure 1, mainstream methods are divided into four major categories: positional encoding, context compression, retrieval augmented, and attention pattern, which are introduced below. For more details, please see the Appendix A." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 526, + 189, + 539 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 526, + 189, + 539 + ], + "spans": [ + { + "bbox": [ + 67, + 526, + 189, + 539 + ], + "type": "text", + "content": "3.1 Positional Encoding" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 544, + 291, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 544, + 291, + 774 + ], + "spans": [ + { + "bbox": [ + 67, + 544, + 291, + 774 + ], + "type": "text", + "content": "Kazemnejad et al. (2024) mention that positional encoding (PE) appears to be a major factor in the length generalization of Transformer. During the inference process, when encountering sequences that exceed the length of the pre-trained window, the model needs to handle the position index that was not encountered during pre-training. This may lead to Out-Of-Distribution (OOD) issues. Thus, we would like to find an appropriate positional encoding method that allows the model to effectively encode position in sequences that exceed the pre-trained window length. 
Based on the implementation methods, solutions can be categorized into two main types: Variants of Rotary Position Embedding (RoPE, Su et al., 2024) and Attention bias method. The following sections will detail these two methods." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 71, + 526, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 248 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 248 + ], + "type": "text", + "content": "Though the designed positional encoding strategies can alleviate the extrapolation problem, experiments have found that models without positional encoding (NoPE) show better performance than these methods in reasoning tasks (Kazemnejad et al., 2024). That's because when causal masks are used for decoding, the model reads the sequence sequentially from left to right. And this process naturally incorporates the sequential information of the token. This finding suggests that when designing a position encoding strategy, we may need to consider the way the model processes sequences and the requirements of the task." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 253, + 421, + 265 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 253, + 421, + 265 + ], + "spans": [ + { + "bbox": [ + 302, + 253, + 421, + 265 + ], + "type": "text", + "content": "3.1.1 Variants of RoPE" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 269, + 525, + 350 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 269, + 525, + 350 + ], + "spans": [ + { + "bbox": [ + 302, + 269, + 525, + 350 + ], + "type": "text", + "content": "Rotary Position Embedding (RoPE, Su et al., 2024) is a positional encoding method utilized in a series of models such as Wang and Komatsuzaki (2021); Touvron et al. (2023); Roziere et al. (2023). RoPE incorporates explicit relative position dependency in self-attention, which can be expressed as" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 320, + 354, + 524, + 390 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 354, + 524, + 390 + ], + "spans": [ + { + "bbox": [ + 320, + 354, + 524, + 390 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\operatorname {s i m} \\left(q _ {m}, k _ {n}\\right) = q _ {m} ^ {\\top} R _ {\\Theta , n - m} ^ {d} k _ {n} \\tag {1} \\\\ = (R _ {\\Theta , m} ^ {d} q _ {m}) ^ {\\top} (R _ {\\Theta , n} ^ {d} k _ {n}), \\\\ \\end{array}", + "image_path": "6c7fe99edc47905cb81f8c4779238cdaa54cf68f69247febe521619938f00de9.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 396, + 526, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 396, + 526, + 571 + ], + "spans": [ + { + "bbox": [ + 302, + 396, + 526, + 571 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 302, + 396, + 526, + 571 + ], + "type": "inline_equation", + "content": "R_{\\Theta,m}^{d}" + }, + { + "bbox": [ + 302, + 396, + 526, + 571 + ], + "type": "text", + "content": " is called the rotation matrix. The original RoPE's extrapolation capability is not very robust and can only maintain performance slightly beyond the pre-trained context length. Consequently, existing work enhances RoPE for better extrapolation. 
The core of RoPE is the rotation matrix " + }, + { + "bbox": [ + 302, + 396, + 526, + 571 + ], + "type": "inline_equation", + "content": "R_{\\Theta,m}^{d}" + }, + { + "bbox": [ + 302, + 396, + 526, + 571 + ], + "type": "text", + "content": ", which is parameterized by the position index " + }, + { + "bbox": [ + 302, + 396, + 526, + 571 + ], + "type": "inline_equation", + "content": "m" + }, + { + "bbox": [ + 302, + 396, + 526, + 571 + ], + "type": "text", + "content": " and the function family " + }, + { + "bbox": [ + 302, + 396, + 526, + 571 + ], + "type": "inline_equation", + "content": "\\Theta" + }, + { + "bbox": [ + 302, + 396, + 526, + 571 + ], + "type": "text", + "content": ". We can optimize RoPE by adjusting these parameters or even the structure of RoPE itself. Existing related work can be divided into three subcategories: position index adjustment, base frequency adjustment, and structural modification." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 579, + 525, + 741 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 579, + 525, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 579, + 525, + 741 + ], + "type": "text", + "content": "Position Index Adjustment This method involves modifying the allocation or calculation of position index " + }, + { + "bbox": [ + 302, + 579, + 525, + 741 + ], + "type": "inline_equation", + "content": "m" + }, + { + "bbox": [ + 302, + 579, + 525, + 741 + ], + "type": "text", + "content": " to maintain the relative distances between tokens within the model's pre-trained index range. This can be implemented in three ways. We can adjust the allocation of the position index " + }, + { + "bbox": [ + 302, + 579, + 525, + 741 + ], + "type": "inline_equation", + "content": "m" + }, + { + "bbox": [ + 302, + 579, + 525, + 741 + ], + "type": "text", + "content": " (An et al., 2024). Besides, proportionally scale " + }, + { + "bbox": [ + 302, + 579, + 525, + 741 + ], + "type": "inline_equation", + "content": "m" + }, + { + "bbox": [ + 302, + 579, + 525, + 741 + ], + "type": "text", + "content": " for long sequences to adapt to the pre-trained window (Chen et al., 2023b). What's more, we can combine the above two methods, reallocating position index to some tokens in the sequence, while scaling the position index for others (Su, 2023)." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 748, + 525, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 748, + 525, + 775 + ], + "spans": [ + { + "bbox": [ + 302, + 748, + 525, + 775 + ], + "type": "text", + "content": "Base Frequency Adjustment From the formula of rotation matrix (see details in Appendix A.1.1)," + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 793, + 299, + 802 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 793, + 299, + 802 + ], + "spans": [ + { + "bbox": [ + 293, + 793, + 299, + 802 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "spans": [ + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "text", + "content": "we can see that each non-zero term is a trigonometric function value with " + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "inline_equation", + "content": "\\theta_{i}" + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "text", + "content": " as independent variable. And the value of " + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "inline_equation", + "content": "\\theta_{i}" + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "text", + "content": " affects the effect of rotation matrix to a certain extent. Base frequency adjustment is to enhance the model extrapolation performance by modifying " + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "inline_equation", + "content": "\\theta_{i}" + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "text", + "content": " in the trigonometric function terms in the rotation matrix. NTK (Neural Tangent Kernel) theory shows that when the input dimension is low and its embedding representation lacks high-frequency components, it is difficult for the neural network to learn high-frequency information (Tancik et al., 2020). Therefore, researchers choose to adjust " + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "inline_equation", + "content": "\\theta_{i}" + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "text", + "content": " with the idea of \"extrapolation on high-frequency and interpolation on low-frequency\". One strategy is to change the base " + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "inline_equation", + "content": "b" + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "text", + "content": " of the exponential terms " + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "inline_equation", + "content": "\\theta_{i}" + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "text", + "content": " in the function cluster " + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "inline_equation", + "content": "\\Theta" + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "text", + "content": ", and change it from the default value " + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "inline_equation", + "content": "b = 10000" + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "text", + "content": " to other values which can improve the model extrapolation performance (Peng and Quesnelle, 2023; Roziere et al., 2023). 
Another strategy is to directly scale " + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "inline_equation", + "content": "\\theta_{i}" + }, + { + "bbox": [ + 67, + 72, + 293, + 356 + ], + "type": "text", + "content": " (bloc97, 2023; Peng et al., 2023)." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 362, + 291, + 444 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 362, + 291, + 444 + ], + "spans": [ + { + "bbox": [ + 67, + 362, + 291, + 444 + ], + "type": "text", + "content": "Structural Modification The methods described above focus on modifying variables in RoPE formula without altering its basic structure. Some existing work explores adjusting the structure of RoPE itself to better extrapolate, which optimizes the original RoPE formula (Sun et al., 2022)." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 450, + 174, + 462 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 450, + 174, + 462 + ], + "spans": [ + { + "bbox": [ + 67, + 450, + 174, + 462 + ], + "type": "text", + "content": "3.1.2 Attention Bias" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 466, + 291, + 534 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 466, + 291, + 534 + ], + "spans": [ + { + "bbox": [ + 67, + 466, + 291, + 534 + ], + "type": "text", + "content": "This type of method introduces relative position information by adding a bias related to the relative distance between tokens when calculating the similarity between query and key vectors. The process can be expressed as follows:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 94, + 541, + 291, + 557 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 541, + 291, + 557 + ], + "spans": [ + { + "bbox": [ + 94, + 541, + 291, + 557 + ], + "type": "interline_equation", + "content": "\\mathrm {s i m} \\left(q _ {m}, k _ {n}\\right) = q _ {m} ^ {\\top} k _ {n} + f _ {b i a s} (m, n), (2)", + "image_path": "67255c653bddde248a27b85c2796aba30edc5147552f0046bd722df4e9c14038.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 564, + 293, + 632 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 564, + 293, + 632 + ], + "spans": [ + { + "bbox": [ + 67, + 564, + 293, + 632 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 67, + 564, + 293, + 632 + ], + "type": "inline_equation", + "content": "f_{bias}(m,n)" + }, + { + "bbox": [ + 67, + 564, + 293, + 632 + ], + "type": "text", + "content": " is a bias function that depends on the token position index corresponding to query and key. " + }, + { + "bbox": [ + 67, + 564, + 293, + 632 + ], + "type": "inline_equation", + "content": "f_{bias}(m,n)" + }, + { + "bbox": [ + 67, + 564, + 293, + 632 + ], + "type": "text", + "content": " be divided into two categories: learnable (Raffel et al., 2020; Chi et al., 2022a), predefined (Press et al., 2021; Chi et al., 2022b)." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 640, + 198, + 653 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 640, + 198, + 653 + ], + "spans": [ + { + "bbox": [ + 67, + 640, + 198, + 653 + ], + "type": "text", + "content": "3.2 Context Compression" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 658, + 292, + 725 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 658, + 292, + 725 + ], + "spans": [ + { + "bbox": [ + 67, + 658, + 292, + 725 + ], + "type": "text", + "content": "Existing work proposes compressing the long input sequence into a shorter one for representation. These methods can be categorized into two main types by the compression granularity: soft compression and hard compression." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 732, + 188, + 745 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 732, + 188, + 745 + ], + "spans": [ + { + "bbox": [ + 67, + 732, + 188, + 745 + ], + "type": "text", + "content": "3.2.1 Soft Compression" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 748, + 291, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 748, + 291, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 748, + 291, + 775 + ], + "type": "text", + "content": "In order to shorten the sequence length, the soft compression method uses the model to compress" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 71, + 527, + 316 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 527, + 316 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 527, + 316 + ], + "type": "text", + "content": "the original input token sequence into a shorter summary token sequence. These summary tokens are soft tokens which act as compression representation but do not correspond to words with actual meaning. They are inserted into the original token sequence to form a new input. During the forward pass of the model, the information from the original token sequence is gathered into the summary token sequence, which represents the original input for subsequent operations. Since summary tokens do not appear during the model's pre-training, additional training is necessary for the model to learn how to generate and utilize these tokens (Bulatov et al., 2022; Li et al., 2023b; Chevalier et al., 2023; Ge et al., 2023; Mu et al., 2024b). This method can shorten the length of the hidden vector sequence, so that enabling it to be processed within the model's pre-trained window." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 324, + 429, + 337 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 324, + 429, + 337 + ], + "spans": [ + { + "bbox": [ + 302, + 324, + 429, + 337 + ], + "type": "text", + "content": "3.2.2 Hard Compression" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 341, + 527, + 435 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 341, + 527, + 435 + ], + "spans": [ + { + "bbox": [ + 302, + 341, + 527, + 435 + ], + "type": "text", + "content": "This method utilizes some techniques to directly shorten plain text sequence length. This process can be achieved through selection and summarization. It doesn't introduce additional tokens and targeted training, which makes it can be applied to some black box models (Jiang et al., 2023, 2024b; Chen et al., 2023a)." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 446, + 430, + 460 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 446, + 430, + 460 + ], + "spans": [ + { + "bbox": [ + 302, + 446, + 430, + 460 + ], + "type": "text", + "content": "3.3 Retrieval Augmented" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 465, + 527, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 465, + 527, + 723 + ], + "spans": [ + { + "bbox": [ + 302, + 465, + 527, + 723 + ], + "type": "text", + "content": "Some existing work propose retrieval-augmented methods to enhance model performance on long context tasks by selectively incorporating crucial tokens from history context into attention. With reference to related work, we summarize a processing paradigm for this type of method. Initially, the (key, value) pairs from history are stored in the KV cache. Then the model retrieves the corresponding token representations from the KV cache at different retrieval granularity levels. This process is based on the similarity between current token and history tokens from KV cache. The top-k relevant tokens are selected as the retrieved context, which is then concatenated with the context within the current window to form a new input. Finally, the model applies appropriate positional encoding to this concatenated context for attention calculation. Below, we summarize the different methods according to each step of the above process:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 731, + 441, + 744 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 731, + 441, + 744 + ], + "spans": [ + { + "bbox": [ + 302, + 731, + 441, + 744 + ], + "type": "text", + "content": "3.3.1 Retrieval Granularity" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 748, + 525, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 748, + 525, + 775 + ], + "spans": [ + { + "bbox": [ + 302, + 748, + 525, + 775 + ], + "type": "text", + "content": "In the process of long context retrieval, we focus on the most relevant subset of tokens from KV" + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 793, + 301, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 793, + 301, + 803 + ], + "spans": [ + { + "bbox": [ + 293, + 793, + 301, + 803 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 293, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 293, + 262 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 293, + 262 + ], + "type": "text", + "content": "cache related to the current processing step. Different methods use different retrieval granularity, with the basic being token-level retrieval. Specifically, it involves calculating the similarity of each history token in the KV cache with the current token, and selecting the top-k history tokens' key and value vectors as the retrieval result. Methods applying this strategy include MemTRM (Wu et al., 2022), FoT (Tworkowski et al., 2024), Unlimformer (Bertsch et al., 2024a), etc. Besides, some work focuses on block-level retrieval, which retrieve top-k set of tokens in one step (Wang et al., 2024b; Rubin and Berant, 2023; Xiao et al., 2024; Mohtashami and Jaggi, 2024)." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 270, + 215, + 283 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 270, + 215, + 283 + ], + "spans": [ + { + "bbox": [ + 67, + 270, + 215, + 283 + ], + "type": "text", + "content": "3.3.2 Similarity Computation" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 288, + 291, + 396 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 288, + 291, + 396 + ], + "spans": [ + { + "bbox": [ + 67, + 288, + 291, + 396 + ], + "type": "text", + "content": "Almost all existing works compute the inner product of query and key as similarity. This strategy draws from the standard attention mechanism, which calculates the dot product between the query and key to allocate corresponding weights to the value (Vaswani et al., 2023). It is simple to implement and can effectively capture and utilize the similarity information between queries and keys." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 406, + 198, + 419 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 406, + 198, + 419 + ], + "spans": [ + { + "bbox": [ + 67, + 406, + 198, + 419 + ], + "type": "text", + "content": "3.3.3 Positional Encoding" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 423, + 291, + 626 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 423, + 291, + 626 + ], + "spans": [ + { + "bbox": [ + 67, + 423, + 291, + 626 + ], + "type": "text", + "content": "After computing the similarity, we select the topk relative tokens as the results, and call them retrieved context tokens. Correspondingly, tokens within the current window are called as local context tokens. These two types of context tokens are concatenated to form a new set of context tokens. Before these new context tokens are fed into the model for attention computation, it is necessary to consider suitable positional encoding to distinguish the information of tokens at different positions. Some work choose to assign the same position vector to the retrieved context tokens (Wu et al., 2022; Tworkowski et al., 2024; Xiao et al., 2024), while Mohtashami and Jaggi (2023) choose reallocation strategies." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 636, + 205, + 648 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 636, + 205, + 648 + ], + "spans": [ + { + "bbox": [ + 67, + 636, + 205, + 648 + ], + "type": "text", + "content": "3.3.4 Attention Calculation" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 653, + 291, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 653, + 291, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 653, + 291, + 775 + ], + "type": "text", + "content": "Next, when performing attention calculation, we need to consider how to make full use of retrieved context tokens and local context tokens. Different approaches use different strategies. Simply, Tworkowski et al. (2024); Xiao et al. (2024)choose standard attention, while Bertsch et al. (2024a) chooses cross attention. Besides, Wu et al. (2022); Wang et al. (2024b) adopt a Joint Attention method. 
Landmark employs the Grouped Soft" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 71, + 526, + 98 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 98 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 98 + ], + "type": "text", + "content": "max method, a fine-grained approach for calculation (Mohtashami and Jaggi, 2023)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 303, + 108, + 412, + 120 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 108, + 412, + 120 + ], + "spans": [ + { + "bbox": [ + 303, + 108, + 412, + 120 + ], + "type": "text", + "content": "3.4 Attention Pattern" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 126, + 526, + 235 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 126, + 526, + 235 + ], + "spans": [ + { + "bbox": [ + 302, + 126, + 526, + 235 + ], + "type": "text", + "content": "There is a class of methods modifying the attention pattern, i.e. the range of tokens attended to. They can better adapt models to expand processing sequence length. Some of them do not require additional training and can be employed as plug-and-play solutions in existing models. These methods can be divided into three main categories: sliding window, parallel context, and sparse attention." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 243, + 414, + 256 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 243, + 414, + 256 + ], + "spans": [ + { + "bbox": [ + 302, + 243, + 414, + 256 + ], + "type": "text", + "content": "3.4.1 Sliding Window" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 259, + 526, + 354 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 259, + 526, + 354 + ], + "spans": [ + { + "bbox": [ + 302, + 259, + 526, + 354 + ], + "type": "text", + "content": "This type of method divides the sequence into segments and performs attention calculation segment by segment without significantly increasing computational complexity. The attention results from earlier segments are stored, which later segments can use during their attention calculation (Dai et al., 2019; Han et al., 2024; Xiao et al., 2023)." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 363, + 415, + 375 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 363, + 415, + 375 + ], + "spans": [ + { + "bbox": [ + 302, + 363, + 415, + 375 + ], + "type": "text", + "content": "3.4.2 Parallel Context" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 379, + 525, + 501 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 379, + 525, + 501 + ], + "spans": [ + { + "bbox": [ + 302, + 379, + 525, + 501 + ], + "type": "text", + "content": "The Parallel Context method folds the context part of the input (e.g., in-context examples) into multiple segments. These segments first calculate attention independently, and share the same set of position indexes. And then prompt tokens in the input attend to all the context tokens, so that fully utilize contextual information (Ratner et al., 2022; Hao et al., 2022). These methods require no training and can be plug-and-played into existing models." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 509, + 418, + 522 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 509, + 418, + 522 + ], + "spans": [ + { + "bbox": [ + 302, + 509, + 418, + 522 + ], + "type": "text", + "content": "3.4.3 Sparse Attention" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 525, + 525, + 620 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 525, + 525, + 620 + ], + "spans": [ + { + "bbox": [ + 302, + 525, + 525, + 620 + ], + "type": "text", + "content": "Some work reduce the number of tokens involved in the attention computation, decreasing computational load. They abandon the original attention method which attends to local continuous tokens, while expand the attentive field, and attend to discrete tokens from further context (Ding et al., 2023; Yu et al., 2023; Chen et al., 2023c)." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 631, + 380, + 644 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 631, + 380, + 644 + ], + "spans": [ + { + "bbox": [ + 302, + 631, + 380, + 644 + ], + "type": "text", + "content": "4 Evaluation" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 653, + 526, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 653, + 526, + 775 + ], + "spans": [ + { + "bbox": [ + 302, + 653, + 526, + 775 + ], + "type": "text", + "content": "In the long context scenario, evaluating the model's ability to understand and utilize long context is also a new and critical issue. But as described before, current surveys pay little or even no attention to the evaluation aspect. To fill this gap, we summarize the data, tasks, and metrics of long context evaluation in our survey based on existing benchmarks. The following is a brief introduction, detailed information is in the Appendix B." + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 793, + 300, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 793, + 300, + 803 + ], + "spans": [ + { + "bbox": [ + 293, + 793, + 300, + 803 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 70, + 68, + 526, + 196 + ], + "blocks": [ + { + "bbox": [ + 70, + 68, + 526, + 196 + ], + "lines": [ + { + "bbox": [ + 70, + 68, + 526, + 196 + ], + "spans": [ + { + "bbox": [ + 70, + 68, + 526, + 196 + ], + "type": "table", + "html": "
BenchmarkLength Level#ExamplesDomain
SCROLLS (Shaham et al., 2022)1k~4k119,495Literature, Dialog
ZeroSCROLLS (Shaham et al., 2023)0k~16k4,378Wiki, Literature, Dialog
LongBench (Bai et al., 2023)0k~4k, 4k~8k, >8k4,750Wiki, Literature, Dialog, Report, Code, News
LooGLE (Li et al., 2023a)0k~24k776Wiki, Paper
BAMBOO (Dong et al., 2023a)0k~4k, 4k~16k1,502Wiki, Dialog, Report, Code, Paper
LongICLBench (Li et al., 2024c)2k~50k3,000Dialog, News, Common Sense
L-Eval (An et al., 2023)3k~200k411Literature, Dialog, News, Paper, Common Sense
Ada-LEval (Wang et al., 2024a)1k~128k117,500Literature, Code
∞Bench (Zhang et al., 2024)0k~200k3,946Literature, Dialog, Code
NeedleBench (Li et al., 2024b)1k~4k/8k/32k/128k/200k/1m+-Wiki, Literature, Dialog, Report, Code, News
LV-Eval (Yuan et al., 2024)0k~16k/32k/64k/128k/256k1,729Wiki, Literature, Dialog, Report, Code, News, Paper
", + "image_path": "18f4aff50e840a643fc047e2c9ef45c7e9c13a77df28a420d6a588c3ae1b3246.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 203, + 526, + 265 + ], + "lines": [ + { + "bbox": [ + 67, + 203, + 526, + 265 + ], + "spans": [ + { + "bbox": [ + 67, + 203, + 526, + 265 + ], + "type": "text", + "content": "Table 1: Statistics on data characteristics of the datasets in existing long context benchmarks. Length level represents the range of token lengths in the dataset used in the benchmark. #Examples refers to the total number of examples. Domain denotes the data sources. The corresponding contents in table are directly extracted or calculated from the original papers. Given that current models mainly within context windows exceeding 100k tokens, we categorize benchmarks based on this threshold. Benchmarks with contexts exceeding 100K tokens are listed in the lower part." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 67, + 285, + 119, + 296 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 285, + 119, + 296 + ], + "spans": [ + { + "bbox": [ + 67, + 285, + 119, + 296 + ], + "type": "text", + "content": "4.1 Data" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 302, + 290, + 356 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 302, + 290, + 356 + ], + "spans": [ + { + "bbox": [ + 67, + 302, + 290, + 356 + ], + "type": "text", + "content": "In order to explore what data should be used to test model's ability to process long context, we conduct a statistical analysis of datasets in existing benchmarks and summarize their data characteristics." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 357, + 290, + 438 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 357, + 290, + 438 + ], + "spans": [ + { + "bbox": [ + 67, + 357, + 290, + 438 + ], + "type": "text", + "content": "The evaluation of a model's long context capability requires not only the long data but also the data diversity and quality. As shown in Table 1, we focus on three characteristics of the datasets in existing long context benchmarks: length level, total number of examples, and the domain it covers." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 439, + 291, + 480 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 439, + 291, + 480 + ], + "spans": [ + { + "bbox": [ + 67, + 439, + 291, + 480 + ], + "type": "text", + "content": "Besides, we also discuss about knowledge leakage issue, which need to be addressed when constructing the dataset, in the Appendix B.1.2" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 488, + 123, + 500 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 488, + 123, + 500 + ], + "spans": [ + { + "bbox": [ + 67, + 488, + 123, + 500 + ], + "type": "text", + "content": "4.2 Tasks" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 507, + 291, + 642 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 507, + 291, + 642 + ], + "spans": [ + { + "bbox": [ + 67, + 507, + 291, + 642 + ], + "type": "text", + "content": "Currently, existing benchmarks propose numerous tasks to evaluate the model's ability to process long context. But there is no systematic taxonomy for these tasks. 
Therefore, we divide all tasks in existing benchmarks into seven categories according to their task settings: Question Answering, Needle-in-a-Haystack, Statistical Tasks, Code, In-Context Learning, Text Generation, and Other Tasks. Each type of task is briefly introduced below, with details given in Appendix B.2."
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 516, + 525, + 555 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 516, + 525, + 555 + ], + "spans": [ + { + "bbox": [ + 302, + 516, + 525, + 555 + ], + "type": "text", + "content": "Numerical Information Extraction requires models to identify specific mathematical elements (Zhang et al., 2024; Li et al., 2023a)." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 556, + 525, + 610 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 556, + 525, + 610 + ], + "spans": [ + { + "bbox": [ + 302, + 556, + 525, + 610 + ], + "type": "text", + "content": "Sentiment Aggregation requires models to output the percentage of positive reviews when provided with a collection of reviews (Angelidis et al., 2021; Shaham et al., 2023)." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 612, + 525, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 612, + 525, + 651 + ], + "spans": [ + { + "bbox": [ + 302, + 612, + 525, + 651 + ], + "type": "text", + "content": "Paragraph Counting requires models to count the number of unique paragraphs in a set of randomly repeated and shuffled passages (Bai et al., 2023)." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 302, + 662, + 364, + 674 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 662, + 364, + 674 + ], + "spans": [ + { + "bbox": [ + 302, + 662, + 364, + 674 + ], + "type": "text", + "content": "4.2.4 Code" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 302, + 679, + 525, + 746 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 679, + 525, + 746 + ], + "spans": [ + { + "bbox": [ + 302, + 679, + 525, + 746 + ], + "type": "text", + "content": "Code Completion requires models to complete missing code fragments based on preceding code and context (Chen et al., 2021a; Zheng et al., 2023; Bai et al., 2023; Guo et al., 2023; Zan et al., 2022; Dong et al., 2023a; Qin et al., 2024)." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 302, + 748, + 525, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 748, + 525, + 775 + ], + "spans": [ + { + "bbox": [ + 302, + 748, + 525, + 775 + ], + "type": "text", + "content": "Code Running asks models to infer the output of lengthy programs by tracing a series of cascading" + } + ] + } + ], + "index": 21 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 793, + 300, + 802 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 793, + 300, + 802 + ], + "spans": [ + { + "bbox": [ + 293, + 793, + 300, + 802 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 289, + 97 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 289, + 97 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 289, + 97 + ], + "type": "text", + "content": "function calls (Bubeck et al., 2023; An et al., 2023; Zhang et al., 2024)." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 98, + 289, + 125 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 98, + 289, + 125 + ], + "spans": [ + { + "bbox": [ + 67, + 98, + 289, + 125 + ], + "type": "text", + "content": "Code Debugging requires models to identify deliberately inserted errors (Zhang et al., 2024)." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 132, + 201, + 145 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 132, + 201, + 145 + ], + "spans": [ + { + "bbox": [ + 67, + 132, + 201, + 145 + ], + "type": "text", + "content": "4.2.5 In-Context Learning" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 148, + 289, + 228 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 148, + 289, + 228 + ], + "spans": [ + { + "bbox": [ + 67, + 148, + 289, + 228 + ], + "type": "text", + "content": "The input will contain a certain amount of examples, resulting in a long input. This is caused by the example itself is very long or the number of examples is particularly large. Based on this fact, we divide In-Context Learning task into two categories: long example learning and many-shot learning." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 230, + 289, + 309 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 230, + 289, + 309 + ], + "spans": [ + { + "bbox": [ + 67, + 230, + 289, + 309 + ], + "type": "text", + "content": "Long Example Learning requires models to process extensive inputs with long examples which have large label spaces and generate accurate predictions. This task inherently is a long-context challenge (Bai et al., 2023; Li et al., 2024c; Li and Roth, 2002; NLPCC, 2014)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 311, + 289, + 364 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 311, + 289, + 364 + ], + "spans": [ + { + "bbox": [ + 67, + 311, + 289, + 364 + ], + "type": "text", + "content": "Many-shot Learning leverages the expanded context windows of models to process hundreds or even thousands of examples in order to complete a given task (Yu et al., 2020; Bertsch et al., 2024b)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 371, + 180, + 383 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 371, + 180, + 383 + ], + "spans": [ + { + "bbox": [ + 67, + 371, + 180, + 383 + ], + "type": "text", + "content": "4.2.6 Text Generation" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 387, + 289, + 427 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 387, + 289, + 427 + ], + "spans": [ + { + "bbox": [ + 67, + 387, + 289, + 427 + ], + "type": "text", + "content": "Language Modeling serving as the pre-training task for LLMs, is also a widely used basic task to test the model's ability to generate text." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 428, + 289, + 576 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 428, + 289, + 576 + ], + "spans": [ + { + "bbox": [ + 67, + 428, + 289, + 576 + ], + "type": "text", + "content": "Document Summarization requires models to make a summary of the input documents, which encompasses single-document and multi-document tasks. Single-document summarization extracts key information from a single document (Wang et al., 2022; Chen et al., 2021b; Huang et al., 2021; Zhong et al., 2021), while multi-document summarization synthesizes information from multiple sources into a comprehensive, non-repetitive summary containing all key points (Bai et al., 2023; An et al., 2023; Fabbri et al., 2019)." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 577, + 289, + 630 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 577, + 289, + 630 + ], + "spans": [ + { + "bbox": [ + 67, + 577, + 289, + 630 + ], + "type": "text", + "content": "Open-ended Text Generation requires models to produce coherent and logical content on given topics (Tan et al., 2024; Bai et al., 2024; Kumar et al., 2024; Ni et al., 2024; Rafailov et al., 2024)." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 638, + 161, + 649 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 638, + 161, + 649 + ], + "spans": [ + { + "bbox": [ + 67, + 638, + 161, + 649 + ], + "type": "text", + "content": "4.2.7 Other Tasks" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 67, + 653, + 289, + 707 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 653, + 289, + 707 + ], + "spans": [ + { + "bbox": [ + 67, + 653, + 289, + 707 + ], + "type": "text", + "content": "In addition to the six types of tasks listed above, there are some tasks that are not included in this classification system but are equally important for testing the model's long context ability." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 67, + 708, + 289, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 708, + 289, + 774 + ], + "spans": [ + { + "bbox": [ + 67, + 708, + 289, + 774 + ], + "type": "text", + "content": "Reordering asks models to reconstruct the original sequence of shuffled fragments by considering the broad context and logical relationships (Krysciński et al., 2021; Shaham et al., 2023; Li et al., 2023a; Dong et al., 2023a; Wang et al., 2024a)." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 71, + 524, + 125 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 524, + 125 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 524, + 125 + ], + "type": "text", + "content": "Context Consistency shows models an academic paper and a hypothesis, requiring models to judge whether the hypothesis is supported or contradicted by the ideas in the paper (Dong et al., 2023a)." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 126, + 525, + 166 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 126, + 525, + 166 + ], + "spans": [ + { + "bbox": [ + 302, + 126, + 525, + 166 + ], + "type": "text", + "content": "Summary Source Paragraph Identification challenges models to identify the original source paragraphs for given summaries (Bai et al., 2023)." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 167, + 525, + 233 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 167, + 525, + 233 + ], + "spans": [ + { + "bbox": [ + 302, + 167, + 525, + 233 + ], + "type": "text", + "content": "Character Identification requires models to identify different speakers in long dialogues by recognizing their distinct characteristics (TVMEG, 2024; Senedd Cymru, 2024; Zhang et al., 2024; Dong et al., 2023a; Chen et al., 2021b)." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 244, + 367, + 256 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 244, + 367, + 256 + ], + "spans": [ + { + "bbox": [ + 302, + 244, + 367, + 256 + ], + "type": "text", + "content": "4.3 Metrics" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 262, + 525, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 262, + 525, + 423 + ], + "spans": [ + { + "bbox": [ + 302, + 262, + 525, + 423 + ], + "type": "text", + "content": "In addition to data and tasks, metrics can directly reflect the model's ability to handle long contexts. With current long context task designs gradually changing from classic NLP tasks to more practical tasks, the requirements for metrics are constantly increasing. We organize metrics for testing models' capabilities on long context according to the three stages of metrics development: Algorithmic Metrics, Model-based Metrics, and LLM-based Metrics. From these three metrics stages, it can be seen that the metrics development trend becomes more and more complex and flexible." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 433, + 434, + 446 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 433, + 434, + 446 + ], + "spans": [ + { + "bbox": [ + 302, + 433, + 434, + 446 + ], + "type": "text", + "content": "4.3.1 Algorithmic Metrics" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 302, + 449, + 525, + 516 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 449, + 525, + 516 + ], + "spans": [ + { + "bbox": [ + 302, + 449, + 525, + 516 + ], + "type": "text", + "content": "Algorithmic metrics are calculated based on the model output or logits through defined formulas. Their implementation is very simple and can reflect the effect of language modeling and some downstream tasks to a certain extent." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 302, + 518, + 525, + 638 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 518, + 525, + 638 + ], + "spans": [ + { + "bbox": [ + 302, + 518, + 525, + 638 + ], + "type": "text", + "content": "Perplexity (PPL) is one of the most common algorithmic metrics used in existing long context benchmarks (Beltagy et al., 2020; Roy et al., 2021; Press et al., 2021). Meanwhile, some benchmarks employ other algorithmic metrics such as accuracy, f1, and N-gram-based metrics (ROUGE, Lin, 2004 and BLEU, Papineni et al., 2002, etc.) to evaluate LLMs on certain downstream tasks (Shaham et al., 2023; Bai et al., 2023; Kasai et al., 2021)." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 302, + 640, + 525, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 640, + 525, + 774 + ], + "spans": [ + { + "bbox": [ + 302, + 640, + 525, + 774 + ], + "type": "text", + "content": "However, these algorithmic metrics have several limitations, such as content quality, syntactic accuracy, and human correlation issues (Reiter and Belz, 2009; Stent et al., 2005; Sun et al., 2021; An et al., 2023; Improving; Tan et al., 2024). This causes algorithmic metrics to be limited in reflecting the model's ability to process long context. A number of approaches have been developed to improve algorithmic metrics. 
These include enhancing scoring techniques, restructuring task formats, and so on (Yuan"
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 445, + 291, + 553 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 445, + 291, + 553 + ], + "spans": [ + { + "bbox": [ + 67, + 445, + 291, + 553 + ], + "type": "text", + "content": "LLM-based metrics fundamentally distinguish from the other two metrics, which behave much more mechanically. In addition, they demonstrate enhanced agreement with human evaluations (Wang et al., 2023a; Li et al., 2023a). Due to this higher consistency and wider scope of application, LLM-based metrics are gaining increasing attention in long-context evaluation." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 564, + 282, + 577 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 564, + 282, + 577 + ], + "spans": [ + { + "bbox": [ + 67, + 564, + 282, + 577 + ], + "type": "text", + "content": "5 Future Roadmap and Open Problems" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 586, + 291, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 586, + 291, + 666 + ], + "spans": [ + { + "bbox": [ + 67, + 586, + 291, + 666 + ], + "type": "text", + "content": "Despite the rapid development of long context techniques, numerous challenges remain unresolved. Looking to future roadmap, we list vital open problems and present our perspectives on the developments. They are also divided into two parts: approaches and evaluation." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 677, + 151, + 689 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 677, + 151, + 689 + ], + "spans": [ + { + "bbox": [ + 67, + 677, + 151, + 689 + ], + "type": "text", + "content": "5.1 Approaches" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 693, + 290, + 735 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 693, + 290, + 735 + ], + "spans": [ + { + "bbox": [ + 67, + 693, + 290, + 735 + ], + "type": "text", + "content": "Method Integration would combine methods' strengths to address the challenges of extrapolating long context from multiple perspectives." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 67, + 735, + 291, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 735, + 291, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 735, + 291, + 775 + ], + "type": "text", + "content": "Long Text Generation remains under-researched, which concentrate on effective long-text generation techniques and the evaluation of generation quality." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 71, + 526, + 154 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 154 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 154 + ], + "type": "text", + "content": "Sparse Attention Mechanisms may lead to a decrease in models' original language ability, thereby limiting their potential for processing long context. \"Lost-in-the-Middle\" Issue has not yet been completely resolved, there is a lack of targeted solutions and appropriate verification methods." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 157, + 526, + 211 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 157, + 526, + 211 + ], + "spans": [ + { + "bbox": [ + 302, + 157, + 526, + 211 + ], + "type": "text", + "content": "Scalability of Methods requires to explore how existing methods can be adapted to models of different scales or even different architectural frameworks, enhancing their generality and applicability." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 213, + 525, + 267 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 213, + 525, + 267 + ], + "spans": [ + { + "bbox": [ + 302, + 213, + 525, + 267 + ], + "type": "text", + "content": "Methods Enabling \"Train Short, Test Long\" haven't emerged, which train on short texts while excelling in long-context. These methods can reduce resource needs and improve generalization." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 269, + 525, + 323 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 269, + 525, + 323 + ], + "spans": [ + { + "bbox": [ + 302, + 269, + 525, + 323 + ], + "type": "text", + "content": "Trade-off between Information Filtering and Generation Effects means existing methods can be optimized by integrating RAG to enhance efficiency and quality without too long input." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 340, + 381, + 352 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 340, + 381, + 352 + ], + "spans": [ + { + "bbox": [ + 302, + 340, + 381, + 352 + ], + "type": "text", + "content": "5.2 Evaluation" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 361, + 525, + 428 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 361, + 525, + 428 + ], + "spans": [ + { + "bbox": [ + 302, + 361, + 525, + 428 + ], + "type": "text", + "content": "Knowledge Leakage Issue is ever-present. As LLMs gain the ability to gather information from the Internet and their training data scope expands, existing solutions become increasingly ineffective and some operations may limit innovation." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 432, + 525, + 512 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 432, + 525, + 512 + ], + "spans": [ + { + "bbox": [ + 302, + 432, + 525, + 512 + ], + "type": "text", + "content": "Novel Benchmark Design needed to be proposed. We need to construct benchmarks with coherent content and long-distance dependencies to more effectively test the model's ability to process long context. For example, asking models to process inputs from multiple books." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 302, + 515, + 525, + 596 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 515, + 525, + 596 + ], + "spans": [ + { + "bbox": [ + 302, + 515, + 525, + 596 + ], + "type": "text", + "content": "Updated LLM-based Metrics are a development direction. Though LLM-based metrics show higher consistency with human judgments than other metrics, they are costly, have random outputs, and even lack human emotions. We need to combine LLM with other techniques to better evaluate." 
+ } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 302, + 613, + 381, + 625 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 613, + 381, + 625 + ], + "spans": [ + { + "bbox": [ + 302, + 613, + 381, + 625 + ], + "type": "text", + "content": "6 Conclusion" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 302, + 640, + 525, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 640, + 525, + 774 + ], + "spans": [ + { + "bbox": [ + 302, + 640, + 525, + 774 + ], + "type": "text", + "content": "In this survey, we first list three inherent challenges in processing long context. And then we propose a novel taxonomy for long context approaches and summarize the similarities and differences in each category. In addition, we systematically review the work on evaluation, summarize the data, tasks, and metrics related to long context based on existing benchmark. Finally, we list unsolved issues and put forward our insights on the future development of long context domain." + } + ] + } + ], + "index": 21 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 793, + 300, + 802 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 793, + 300, + 802 + ], + "spans": [ + { + "bbox": [ + 293, + 793, + 300, + 802 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 68, + 71, + 131, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 71, + 131, + 84 + ], + "spans": [ + { + "bbox": [ + 68, + 71, + 131, + 84 + ], + "type": "text", + "content": "Limitations" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 95, + 290, + 174 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 95, + 290, + 174 + ], + "spans": [ + { + "bbox": [ + 67, + 95, + 290, + 174 + ], + "type": "text", + "content": "This survey summarizes the approaches and evaluation in the area of long context, and gives our views on future development. However, we don't cover efficient transformer on long context, multimodel long context, etc. In addition, due to limitations in space, we are not able to include all related work." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 177, + 290, + 231 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 177, + 290, + 231 + ], + "spans": [ + { + "bbox": [ + 67, + 177, + 290, + 231 + ], + "type": "text", + "content": "Due to the rapidly evolving nature of the field of Transformer context extension, our survey may not capture the latest developments, particularly those that emerged near or after the time of writing." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 68, + 256, + 127, + 269 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 256, + 127, + 269 + ], + "spans": [ + { + "bbox": [ + 68, + 256, + 127, + 269 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 278, + 291, + 774 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 69, + 278, + 291, + 333 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 278, + 291, + 333 + ], + "spans": [ + { + "bbox": [ + 69, + 278, + 291, + 333 + ], + "type": "text", + "content": "Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 345, + 291, + 400 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 345, + 291, + 400 + ], + "spans": [ + { + "bbox": [ + 69, + 345, + 291, + 400 + ], + "type": "text", + "content": "Rishabh Agarwal, Avi Singh, Lei M Zhang, Bernd Bohnet, Stephanie Chan, Ankesh Anand, Zaheer Abbas, Azade Nova, John D Co-Reyes, Eric Chu, et al. 2024. Many-shot in-context learning. arXiv preprint arXiv:2404.11018." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 413, + 290, + 469 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 413, + 290, + 469 + ], + "spans": [ + { + "bbox": [ + 69, + 413, + 290, + 469 + ], + "type": "text", + "content": "Chenxin An, Shansan Gong, Ming Zhong, Xingjian Zhao, Mukai Li, Jun Zhang, Lingpeng Kong, and Xipeng Qiu. 2023. L-eval: Instituting standardized evaluation for long context language models. arXiv preprint arXiv:2307.11088." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 481, + 290, + 525 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 481, + 290, + 525 + ], + "spans": [ + { + "bbox": [ + 69, + 481, + 290, + 525 + ], + "type": "text", + "content": "Chenxin An, Fei Huang, Jun Zhang, Shansan Gong, Xipeng Qiu, Chang Zhou, and Lingpeng Kong. 2024. Training-free long-context scaling of large language models. arXiv preprint arXiv:2402.17463." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 538, + 290, + 593 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 538, + 290, + 593 + ], + "spans": [ + { + "bbox": [ + 69, + 538, + 290, + 593 + ], + "type": "text", + "content": "Stefanos Angelidis, Reinald Kim Amplayo, Yoshihiko Suhara, Xiaolan Wang, and Mirella Lapata. 2021. Extractive opinion summarization in quantized transformer spaces. Transactions of the Association for Computational Linguistics, 9:277-293." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 605, + 290, + 661 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 605, + 290, + 661 + ], + "spans": [ + { + "bbox": [ + 69, + 605, + 290, + 661 + ], + "type": "text", + "content": "Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, et al. 2023. Longbench: A bilingual, multitask benchmark for long context understanding. arXiv preprint arXiv:2308.14508." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 672, + 290, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 672, + 290, + 728 + ], + "spans": [ + { + "bbox": [ + 69, + 672, + 290, + 728 + ], + "type": "text", + "content": "Yushi Bai, Jiajie Zhang, Xin Lv, Linzhi Zheng, Siqi Zhu, Lei Hou, Yuxiao Dong, Jie Tang, and Juanzi Li. 2024. Longwriter: Unleashing " + }, + { + "bbox": [ + 69, + 672, + 290, + 728 + ], + "type": "inline_equation", + "content": "10,000+" + }, + { + "bbox": [ + 69, + 672, + 290, + 728 + ], + "type": "text", + "content": " word generation from long context llms. arXiv preprint arXiv:2408.07055." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 69, + 740, + 290, + 774 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 740, + 290, + 774 + ], + "spans": [ + { + "bbox": [ + 69, + 740, + 290, + 774 + ], + "type": "text", + "content": "Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150." + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 526, + 774 + ], + "type": "list", + "angle": 0, + "index": 25, + "blocks": [ + { + "bbox": [ + 305, + 72, + 525, + 126 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 72, + 525, + 126 + ], + "spans": [ + { + "bbox": [ + 305, + 72, + 525, + 126 + ], + "type": "text", + "content": "Amanda Bertsch, Uri Alon, Graham Neubig, and Matthew Gormley. 2024a. Unlimformer: Long-range transformers with unlimited length input. Advances in Neural Information Processing Systems, 36." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 139, + 526, + 184 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 139, + 526, + 184 + ], + "spans": [ + { + "bbox": [ + 304, + 139, + 526, + 184 + ], + "type": "text", + "content": "Amanda Bertsch, Maor Ivgi, Uri Alon, Jonathan Berant, Matthew R Gormley, and Graham Neubig. 2024b. In-context learning with long-context models: An in-depth exploration. arXiv preprint arXiv:2405.00200." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 195, + 525, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 195, + 525, + 228 + ], + "spans": [ + { + "bbox": [ + 304, + 195, + 525, + 228 + ], + "type": "text", + "content": "bloc97. 2023. Add NTK-Aware interpolation \"by parts\" correction. https://github.com/jquesnelle/scaled-rope/pull/1." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 240, + 525, + 262 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 240, + 525, + 262 + ], + "spans": [ + { + "bbox": [ + 304, + 240, + 525, + 262 + ], + "type": "text", + "content": "Tom B Brown. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 274, + 525, + 338 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 274, + 525, + 338 + ], + "spans": [ + { + "bbox": [ + 304, + 274, + 525, + 338 + ], + "type": "text", + "content": "Sebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712." 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 351, + 525, + 394 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 351, + 525, + 394 + ], + "spans": [ + { + "bbox": [ + 304, + 351, + 525, + 394 + ], + "type": "text", + "content": "Aydar Bulatov, Yury Kuratov, and Mikhail Burtsev. 2022. Recurrent memory transformer. Advances in Neural Information Processing Systems, 35:11079-11091." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 407, + 525, + 451 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 407, + 525, + 451 + ], + "spans": [ + { + "bbox": [ + 304, + 407, + 525, + 451 + ], + "type": "text", + "content": "Howard Chen, Ramakanth Pasunuru, Jason Weston, and Asli Celikyilmaz. 2023a. Walking down the memory maze: Beyond context limit through interactive reading. arXiv preprint arXiv:2310.05029." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 463, + 525, + 528 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 463, + 525, + 528 + ], + "spans": [ + { + "bbox": [ + 304, + 463, + 525, + 528 + ], + "type": "text", + "content": "Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021a. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 540, + 525, + 584 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 540, + 525, + 584 + ], + "spans": [ + { + "bbox": [ + 304, + 540, + 525, + 584 + ], + "type": "text", + "content": "Mingda Chen, Zewei Chu, Sam Wiseman, and Kevin Gimpel. 2021b. Summscreen: A dataset for abstractive screenplay summarization. arXiv preprint arXiv:2104.07091." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 304, + 597, + 525, + 661 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 597, + 525, + 661 + ], + "spans": [ + { + "bbox": [ + 304, + 597, + 525, + 661 + ], + "type": "text", + "content": "Qiguang Chen, Libo Qin, Jinhao Liu, Dengyun Peng, Jiannan Guan, Peng Wang, Mengkang Hu, Yuhang Zhou, Te Gao, and Wanxiang Che. 2025. Towards reasoning era: A survey of long chain-of-thought for reasoning large language models. Preprint, arXiv:2503.09567." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 304, + 674, + 525, + 719 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 674, + 525, + 719 + ], + "spans": [ + { + "bbox": [ + 304, + 674, + 525, + 719 + ], + "type": "text", + "content": "Qiguang Chen, Libo Qin, Jiaqi Wang, Jinxuan Zhou, and Wanxiang Che. 2024a. Unlocking the boundaries of thought: A reasoning granularity framework to quantify and optimize chain-of-thought." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 304, + 729, + 525, + 774 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 729, + 525, + 774 + ], + "spans": [ + { + "bbox": [ + 304, + 729, + 525, + 774 + ], + "type": "text", + "content": "Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. 2023b. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595." 
+ } + ] + } + ], + "index": 24 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 793, + 300, + 802 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 793, + 300, + 802 + ], + "spans": [ + { + "bbox": [ + 293, + 793, + 300, + 802 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 26 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 773 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 291, + 117 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 291, + 117 + ], + "type": "text", + "content": "Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, and Jiaya Jia. 2023c. Longlora: Efficient fine-tuning of long-context large language models. arXiv preprint arXiv:2309.12307." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 126, + 290, + 192 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 126, + 290, + 192 + ], + "spans": [ + { + "bbox": [ + 69, + 126, + 290, + 192 + ], + "type": "text", + "content": "Zhi Chen, Qiguang Chen, Libo Qin, Qipeng Guo, Hai-jun Lv, Yicheng Zou, Wanxiang Che, Hang Yan, Kai Chen, and Dahua Lin. 2024b. What are the essential factors in crafting effective long context multi-hop instruction datasets? insights and best practices. arXiv preprint arXiv:2409.01893." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 201, + 290, + 245 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 201, + 290, + 245 + ], + "spans": [ + { + "bbox": [ + 69, + 201, + 290, + 245 + ], + "type": "text", + "content": "Alexis Chevalier, Alexander Wettig, Anirudh Ajith, and Danqi Chen. 2023. Adapting language models to compress contexts. arXiv preprint arXiv:2305.14788." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 255, + 290, + 310 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 255, + 290, + 310 + ], + "spans": [ + { + "bbox": [ + 69, + 255, + 290, + 310 + ], + "type": "text", + "content": "Ta-Chung Chi, Ting-Han Fan, Peter J Ramadge, and Alexander Rudnicky. 2022a. Kerple: Kernelized relative positional embedding for length extrapolation. Advances in Neural Information Processing Systems, 35:8386-8399." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 320, + 290, + 364 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 320, + 290, + 364 + ], + "spans": [ + { + "bbox": [ + 69, + 320, + 290, + 364 + ], + "type": "text", + "content": "Ta-Chung Chi, Ting-Han Fan, Alexander I Rudnicky, and Peter J Ramadge. 2022b. Dissecting transformer length extrapolation via the lens of receptive field analysis. arXiv preprint arXiv:2212.10356." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 374, + 290, + 407 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 374, + 290, + 407 + ], + "spans": [ + { + "bbox": [ + 69, + 374, + 290, + 407 + ], + "type": "text", + "content": "Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 417, + 290, + 472 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 417, + 290, + 472 + ], + "spans": [ + { + "bbox": [ + 69, + 417, + 290, + 472 + ], + "type": "text", + "content": "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 481, + 290, + 548 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 481, + 290, + 548 + ], + "spans": [ + { + "bbox": [ + 69, + 481, + 290, + 548 + ], + "type": "text", + "content": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978-2988." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 557, + 290, + 591 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 557, + 290, + 591 + ], + "spans": [ + { + "bbox": [ + 69, + 557, + 290, + 591 + ], + "type": "text", + "content": "Jacob Devlin. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 600, + 290, + 644 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 600, + 290, + 644 + ], + "spans": [ + { + "bbox": [ + 69, + 600, + 290, + 644 + ], + "type": "text", + "content": "Suvodip Dey, Maunendra Sankar Desarkar, Asif Ekbal, and PK Srijith. 2022. Dialogen: Generalized long-range context representation for dialogue systems. arXiv preprint arXiv:2210.06282." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 653, + 290, + 708 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 653, + 290, + 708 + ], + "spans": [ + { + "bbox": [ + 69, + 653, + 290, + 708 + ], + "type": "text", + "content": "Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, and Furu Wei. 2023. Longnet: Scaling transformers to 1,000,000,000 tokens. arXiv preprint arXiv:2307.02486." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 69, + 718, + 290, + 773 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 718, + 290, + 773 + ], + "spans": [ + { + "bbox": [ + 69, + 718, + 290, + 773 + ], + "type": "text", + "content": "Zican Dong, Tianyi Tang, Junyi Li, Wayne Xin Zhao, and Ji-Rong Wen. 2023a. Bamboo: A comprehensive benchmark for evaluating long text modeling capacities of large language models. arXiv preprint arXiv:2309.13345." + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 525, + 773 + ], + "type": "list", + "angle": 0, + "index": 25, + "blocks": [ + { + "bbox": [ + 304, + 72, + 524, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 72, + 524, + 105 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 524, + 105 + ], + "type": "text", + "content": "Zican Dong, Tianyi Tang, Lunyi Li, and Wayne Xin Zhao. 2023b. A survey on long text modeling with transformers. arXiv preprint arXiv:2302.14502." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 115, + 525, + 169 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 115, + 525, + 169 + ], + "spans": [ + { + "bbox": [ + 304, + 115, + 525, + 169 + ], + "type": "text", + "content": "Alexander R Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. arXiv preprint arXiv:1906.01749." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 179, + 525, + 234 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 179, + 525, + 234 + ], + "spans": [ + { + "bbox": [ + 304, + 179, + 525, + 234 + ], + "type": "text", + "content": "Wenqi Fan, Yujuan Ding, Liangbo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, and Qing Li. 2024. A survey on rag meeting llms: Towards retrieval-augmented large language models. Preprint, arXiv:2405.06211." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 244, + 525, + 278 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 244, + 525, + 278 + ], + "spans": [ + { + "bbox": [ + 304, + 244, + 525, + 278 + ], + "type": "text", + "content": "Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. Gptscore: Evaluate as you desire. arXiv preprint arXiv:2302.04166." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 287, + 525, + 332 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 287, + 525, + 332 + ], + "spans": [ + { + "bbox": [ + 304, + 287, + 525, + 332 + ], + "type": "text", + "content": "Tao Ge, Jing Hu, Lei Wang, Xun Wang, Si-Qing Chen, and Furu Wei. 2023. In-context autoencoder for context compression in a large language model. arXiv preprint arXiv:2307.06945." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 341, + 525, + 374 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 341, + 525, + 374 + ], + "spans": [ + { + "bbox": [ + 304, + 341, + 525, + 374 + ], + "type": "text", + "content": "Shahriar Golchin and Mihai Surdeanu. 2023. Time travel in llms: Tracing data contamination in large language models. arXiv preprint arXiv:2308.08493." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 384, + 525, + 439 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 384, + 525, + 439 + ], + "spans": [ + { + "bbox": [ + 304, + 384, + 525, + 439 + ], + "type": "text", + "content": "Daya Guo, Canwen Xu, Nan Duan, Jian Yin, and Julian McAuley. 2023. Longcoder: A long-range pretrained language model for code completion. In International Conference on Machine Learning, pages 12098-12107. PMLR." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 449, + 525, + 537 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 449, + 525, + 537 + ], + "spans": [ + { + "bbox": [ + 304, + 449, + 525, + 537 + ], + "type": "text", + "content": "Chi Han, Qifan Wang, Hao Peng, Wenhan Xiong, Yu Chen, Heng Ji, and Sinong Wang. 2024. Lm infinite: Zero-shot extreme length generalization for large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 3991-4008." 
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 546, + 525, + 591 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 546, + 525, + 591 + ], + "spans": [ + { + "bbox": [ + 304, + 546, + 525, + 591 + ], + "type": "text", + "content": "Yaru Hao, Yutao Sun, Li Dong, Zhixiong Han, Yuxian Gu, and Furu Wei. 2022. Structured prompting: Scaling in-context learning to 1,000 examples. arXiv preprint arXiv:2212.06713." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 304, + 600, + 525, + 644 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 600, + 525, + 644 + ], + "spans": [ + { + "bbox": [ + 304, + 600, + 525, + 644 + ], + "type": "text", + "content": "Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. 2020. Constructing a multi-hop qa dataset for comprehensive evaluation of reasoning steps. arXiv preprint arXiv:2011.01060." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 304, + 653, + 525, + 698 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 653, + 525, + 698 + ], + "spans": [ + { + "bbox": [ + 304, + 653, + 525, + 698 + ], + "type": "text", + "content": "Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. 2021. Efficient attentions for long document summarization. arXiv preprint arXiv:2104.02112." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 304, + 708, + 525, + 773 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 708, + 525, + 773 + ], + "spans": [ + { + "bbox": [ + 304, + 708, + 525, + 773 + ], + "type": "text", + "content": "Yunpeng Huang, Jingwei Xu, Junyu Lai, Zixu Jiang, Taolue Chen, Zenan Li, Yuan Yao, Xiaoxing Ma, Lijuan Yang, Hao Chen, et al. 2023. Advancing transformer architecture in long-context large language models: A comprehensive survey. arXiv preprint arXiv:2311.12351." + } + ] + } + ], + "index": 24 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 793, + 304, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 793, + 304, + 803 + ], + "spans": [ + { + "bbox": [ + 291, + 793, + 304, + 803 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 26 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 772 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 289, + 95 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 289, + 95 + ], + "type": "text", + "content": "NLG Improving. Not all metrics are guilty: Improving nlg evaluation with llm paraphrasing." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 104, + 289, + 192 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 104, + 289, + 192 + ], + "spans": [ + { + "bbox": [ + 69, + 104, + 289, + 192 + ], + "type": "text", + "content": "Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2024a. LongLLM Lingua: Accelerating and enhancing LLMs in long context scenarios via prompt compression. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1658-1677, Bangkok, Thailand. Association for Computational Linguistics." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 201, + 289, + 267 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 201, + 289, + 267 + ], + "spans": [ + { + "bbox": [ + 69, + 201, + 289, + 267 + ], + "type": "text", + "content": "Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023. Llm lingua: Compressing prompts for accelerated inference of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13358-13376." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 277, + 289, + 332 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 277, + 289, + 332 + ], + "spans": [ + { + "bbox": [ + 69, + 277, + 289, + 332 + ], + "type": "text", + "content": "Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2024b. Longlmlingua: Accelerating and enhancing llms in long context scenarios via prompt compression. Preprint, arXiv:2310.06839." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 342, + 289, + 386 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 342, + 289, + 386 + ], + "spans": [ + { + "bbox": [ + 69, + 342, + 289, + 386 + ], + "type": "text", + "content": "Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 395, + 289, + 450 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 395, + 289, + 450 + ], + "spans": [ + { + "bbox": [ + 69, + 395, + 289, + 450 + ], + "type": "text", + "content": "Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Lavinia Dunagan, Jacob Morrison, Alexander R Fabbri, Yejin Choi, and Noah A Smith. 2021. Bidimensional leaderboards: Generate and evaluate language hand in hand. arXiv preprint arXiv:2112.04139." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 460, + 289, + 515 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 460, + 289, + 515 + ], + "spans": [ + { + "bbox": [ + 69, + 460, + 289, + 515 + ], + "type": "text", + "content": "Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, and Siva Reddy. 2024. The impact of positional encoding on length generalization in transformers. Advances in Neural Information Processing Systems, 36." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 524, + 289, + 580 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 524, + 289, + 580 + ], + "spans": [ + { + "bbox": [ + 69, + 524, + 289, + 580 + ], + "type": "text", + "content": "Tomáš Kočisky, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317-328." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 590, + 289, + 644 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 590, + 289, + 644 + ], + "spans": [ + { + "bbox": [ + 69, + 590, + 289, + 644 + ], + "type": "text", + "content": "Wojciech Krysciński, Nazneen Rajani, Divyansh Agarwal, Caiming Xiong, and Dragomir Radev. 2021. Booksum: A collection of datasets for long-form narrative summarization. arXiv preprint arXiv:2105.08209." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 654, + 289, + 719 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 654, + 289, + 719 + ], + "spans": [ + { + "bbox": [ + 69, + 654, + 289, + 719 + ], + "type": "text", + "content": "Ishita Kumar, Snigdha Viswanathan, Sushrita Yerra, Alireza Salemi, Ryan A Rossi, Franck Dernoncourt, Hanieh Deilamsalehy, Xiang Chen, Ruiyi Zhang, Shubham Agarwal, et al. 2024. Longlamp: A benchmark for personalized long-form text generation. arXiv preprint arXiv:2407.11016." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 729, + 289, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 729, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 729, + 289, + 772 + ], + "type": "text", + "content": "M Lewis. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461." + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 524, + 774 + ], + "type": "list", + "angle": 0, + "index": 25, + "blocks": [ + { + "bbox": [ + 304, + 72, + 524, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 72, + 524, + 127 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 524, + 127 + ], + "type": "text", + "content": "Haoyang Li, Yiming Li, Anxin Tian, Tianhao Tang, Zhanchao Xu, Xuejia Chen, Nicole Hu, Wei Dong, Qing Li, and Lei Chen. 2025. A survey on large language model acceleration based on kv cache management. Preprint, arXiv:2412.19442." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 137, + 524, + 190 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 137, + 524, + 190 + ], + "spans": [ + { + "bbox": [ + 304, + 137, + 524, + 190 + ], + "type": "text", + "content": "Jia-Nan Li, Quan Tu, Cunli Mao, Zhengtao Yu, JiRong Wen, and Rui Yan. 2024a. Streamingdialogue: Prolonged dialogue learning via long context compression with minimal losses. arXiv preprint arXiv:2403.08312." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 200, + 524, + 243 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 200, + 524, + 243 + ], + "spans": [ + { + "bbox": [ + 304, + 200, + 524, + 243 + ], + "type": "text", + "content": "Jiaqi Li, Mengmeng Wang, Zilong Zheng, and Muhan Zhang. 2023a. Loogle: Can long-context language models understand long contexts? arXiv preprint arXiv:2311.04939." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 253, + 524, + 296 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 253, + 524, + 296 + ], + "spans": [ + { + "bbox": [ + 304, + 253, + 524, + 296 + ], + "type": "text", + "content": "Mo Li, Songyang Zhang, Yunxin Liu, and Kai Chen. 2024b. Needlebench: Can llms do retrieval and reasoning in 1 million context window? arXiv preprint arXiv:2407.11963." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 306, + 524, + 349 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 306, + 524, + 349 + ], + "spans": [ + { + "bbox": [ + 304, + 306, + 524, + 349 + ], + "type": "text", + "content": "Tianle Li, Ge Zhang, Quy Duc Do, Xiang Yue, and Wenhu Chen. 2024c. Long-context llms struggle with long in-context learning. arXiv preprint arXiv:2404.02060." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 359, + 524, + 403 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 359, + 524, + 403 + ], + "spans": [ + { + "bbox": [ + 304, + 359, + 524, + 403 + ], + "type": "text", + "content": "Xianming Li, Zongxi Li, Xiaotian Luo, Haoran Xie, Xing Lee, Yingbin Zhao, Fu Lee Wang, and Qing Li. 2023b. Recurrent attention networks for long-text modeling. arXiv preprint arXiv:2306.06843." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 412, + 524, + 445 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 412, + 524, + 445 + ], + "spans": [ + { + "bbox": [ + 304, + 412, + 524, + 445 + ], + "type": "text", + "content": "Xin Li and Dan Roth. 2002. Learning question classifiers. In COLING 2002: The 19th International Conference on Computational Linguistics." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 454, + 524, + 487 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 454, + 524, + 487 + ], + "spans": [ + { + "bbox": [ + 304, + 454, + 524, + 487 + ], + "type": "text", + "content": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 496, + 524, + 551 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 496, + 524, + 551 + ], + "spans": [ + { + "bbox": [ + 304, + 496, + 524, + 551 + ], + "type": "text", + "content": "Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2024a. Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics, 12:157-173." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 560, + 524, + 615 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 560, + 524, + 615 + ], + "spans": [ + { + "bbox": [ + 304, + 560, + 524, + 615 + ], + "type": "text", + "content": "Shuaiqi Liu, Jiannong Cao, Zhongfen Deng, Wenting Zhao, Ruosong Yang, Zhiyuan Wen, and S Yu Philip. 2023a. Neural abstractive summarization for long text and multiple tables. IEEE Transactions on Knowledge and Data Engineering." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 304, + 624, + 524, + 667 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 624, + 524, + 667 + ], + "spans": [ + { + "bbox": [ + 304, + 624, + 524, + 667 + ], + "type": "text", + "content": "Tianyang Liu, Canwen Xu, and Julian McAuley. 2023b. Repobench: Benchmarking repository-level code auto-completion systems. arXiv preprint arXiv:2306.03091." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 304, + 677, + 524, + 719 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 677, + 524, + 719 + ], + "spans": [ + { + "bbox": [ + 304, + 677, + 524, + 719 + ], + "type": "text", + "content": "Xiaoran Liu, Hang Yan, Shuo Zhang, Chenxin An, Xipeng Qiu, and Dahua Lin. 2024b. Scaling laws of rope-based extrapolation. Preprint, arXiv:2310.05209." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 304, + 729, + 524, + 774 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 729, + 524, + 774 + ], + "spans": [ + { + "bbox": [ + 304, + 729, + 524, + 774 + ], + "type": "text", + "content": "Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023c. 
G-eval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634." + } + ] + } + ], + "index": 24 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 793, + 302, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 793, + 302, + 803 + ], + "spans": [ + { + "bbox": [ + 291, + 793, + 302, + 803 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 26 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 773 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 289, + 116 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 289, + 116 + ], + "type": "text", + "content": "Zheheng Luo, Qianqian Xie, and Sophia Ananiadou. 2023. Chatgpt as a factual inconsistency evaluator for text summarization. arXiv preprint arXiv:2303.15621." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 126, + 289, + 169 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 126, + 289, + 169 + ], + "spans": [ + { + "bbox": [ + 69, + 126, + 289, + 169 + ], + "type": "text", + "content": "Amirkeivan Mohtashami and Martin Jaggi. 2023. Landmark attention: Random-access infinite context length for transformers. arXiv preprint arXiv:2305.16300." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 179, + 289, + 223 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 179, + 289, + 223 + ], + "spans": [ + { + "bbox": [ + 69, + 179, + 289, + 223 + ], + "type": "text", + "content": "Amirkeivan Mohtashami and Martin Jaggi. 2024. Random-access infinite context length for transformers. Advances in Neural Information Processing Systems, 36." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 233, + 289, + 288 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 233, + 289, + 288 + ], + "spans": [ + { + "bbox": [ + 69, + 233, + 289, + 288 + ], + "type": "text", + "content": "Honglin Mu, Yang Xu, Yunlong Feng, Xiaofeng Han, Yitong Li, Yutai Hou, and Wanxiang Che. 2024a. Beyond static evaluation: A dynamic approach to assessing ai assistants' api invocation capabilities. Preprint, arXiv:2403.11128." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 298, + 289, + 332 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 298, + 289, + 332 + ], + "spans": [ + { + "bbox": [ + 69, + 298, + 289, + 332 + ], + "type": "text", + "content": "Jesse Mu, Xiang Li, and Noah Goodman. 2024b. Learning to compress prompts with gist tokens. Advances in Neural Information Processing Systems, 36." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 341, + 289, + 395 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 341, + 289, + 395 + ], + "spans": [ + { + "bbox": [ + 69, + 341, + 289, + 395 + ], + "type": "text", + "content": "Xuanfan Ni, Hengyi Cai, Xiaochi Wei, Shuaiqiang Wang, Dawei Yin, and Piji Li. 2024. " + }, + { + "bbox": [ + 69, + 341, + 289, + 395 + ], + "type": "inline_equation", + "content": "\\mathrm{XI}^2" + }, + { + "bbox": [ + 69, + 341, + 289, + 395 + ], + "type": "text", + "content": " bench: A benchmark for extremely long context understanding with long-range dependencies. arXiv preprint arXiv:2404.05446." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 406, + 289, + 428 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 406, + 289, + 428 + ], + "spans": [ + { + "bbox": [ + 69, + 406, + 289, + 428 + ], + "type": "text", + "content": "NLPCC. 2014. Task definition for large scale text categorization at nlpcc 2014." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 438, + 289, + 493 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 438, + 289, + 493 + ], + "spans": [ + { + "bbox": [ + 69, + 438, + 289, + 493 + ], + "type": "text", + "content": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 502, + 289, + 558 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 502, + 289, + 558 + ], + "spans": [ + { + "bbox": [ + 69, + 502, + 289, + 558 + ], + "type": "text", + "content": "Saurav Pawar, SM Tonmoy, SM Zaman, Vinija Jain, Aman Chadha, and Amitava Das. 2024. The what, why, and how of context length extension techniques in large language models—a detailed survey. arXiv preprint arXiv:2401.07872." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 567, + 289, + 612 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 567, + 289, + 612 + ], + "spans": [ + { + "bbox": [ + 69, + 567, + 289, + 612 + ], + "type": "text", + "content": "Bowen Peng and Jeffrey Quesnelle. 2023. Ntk-aware scaled rope allows llama models to have extended " + }, + { + "bbox": [ + 69, + 567, + 289, + 612 + ], + "type": "inline_equation", + "content": "(8k+)" + }, + { + "bbox": [ + 69, + 567, + 289, + 612 + ], + "type": "text", + "content": " context size without any fine-tuning and minimal perplexity degradation." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 621, + 289, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 621, + 289, + 665 + ], + "spans": [ + { + "bbox": [ + 69, + 621, + 289, + 665 + ], + "type": "text", + "content": "Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. 2023. Yarn: Efficient context window extension of large language models. arXiv preprint arXiv:2309.00071." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 69, + 675, + 289, + 719 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 675, + 289, + 719 + ], + "spans": [ + { + "bbox": [ + 69, + 675, + 289, + 719 + ], + "type": "text", + "content": "Ofir Press, Noah A Smith, and Mike Lewis. 2021. Train short, test long: Attention with linear biases enables input length extrapolation. arXiv preprint arXiv:2108.12409." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 69, + 729, + 289, + 773 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 729, + 289, + 773 + ], + "spans": [ + { + "bbox": [ + 69, + 729, + 289, + 773 + ], + "type": "text", + "content": "Libo Qin, Qiguang Chen, Xiachong Feng, Yang Wu, Yongheng Zhang, Yinghui Li, Min Li, Wanxiang Che, and Philip S Yu. 2024. Large language models meet nlp: A survey. arXiv preprint arXiv:2405.12819." 
+ } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 524, + 773 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 304, + 72, + 524, + 126 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 72, + 524, + 126 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 524, + 126 + ], + "type": "text", + "content": "Han Qiu, Jiaxing Huang, Peng Gao, Qin Qi, Xiaogin Zhang, Ling Shao, and Shijian Lu. 2024. Longhalqa: Long-context hallucination evaluation for multimodal large language models. Preprint, arXiv:2410.09962." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 135, + 524, + 158 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 135, + 524, + 158 + ], + "spans": [ + { + "bbox": [ + 304, + 135, + 524, + 158 + ], + "type": "text", + "content": "Alec Radford. 2018. Improving language understanding by generative pre-training." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 165, + 524, + 220 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 165, + 524, + 220 + ], + "spans": [ + { + "bbox": [ + 304, + 165, + 524, + 220 + ], + "type": "text", + "content": "Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2024. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 228, + 524, + 293 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 228, + 524, + 293 + ], + "spans": [ + { + "bbox": [ + 304, + 228, + 524, + 293 + ], + "type": "text", + "content": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1-67." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 302, + 524, + 334 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 302, + 524, + 334 + ], + "spans": [ + { + "bbox": [ + 304, + 302, + 524, + 334 + ], + "type": "text", + "content": "P Rajpurkar. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 343, + 524, + 398 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 343, + 524, + 398 + ], + "spans": [ + { + "bbox": [ + 304, + 343, + 524, + 398 + ], + "type": "text", + "content": "Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Inbal Magar, Omri Abend, Ehud Karpas, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2022. Parallel context windows for large language models. arXiv preprint arXiv:2212.10947." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 406, + 524, + 449 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 406, + 524, + 449 + ], + "spans": [ + { + "bbox": [ + 304, + 406, + 524, + 449 + ], + "type": "text", + "content": "Ehud Reiter and Anja Belz. 2009. An investigation into the validity of some metrics for automatically evaluating natural language generation systems. Computational Linguistics, 35(4):529-558." 
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 458, + 524, + 512 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 458, + 524, + 512 + ], + "spans": [ + { + "bbox": [ + 304, + 458, + 524, + 512 + ], + "type": "text", + "content": "Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. 2021. Efficient content-based sparse attention with routing transformers. Transactions of the Association for Computational Linguistics, 9:53-68." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 304, + 522, + 524, + 576 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 522, + 524, + 576 + ], + "spans": [ + { + "bbox": [ + 304, + 522, + 524, + 576 + ], + "type": "text", + "content": "Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, et al. 2023. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 304, + 584, + 524, + 617 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 584, + 524, + 617 + ], + "spans": [ + { + "bbox": [ + 304, + 584, + 524, + 617 + ], + "type": "text", + "content": "Ohad Rubin and Jonathan Berant. 2023. Long-range language modeling with self-retrieval. arXiv preprint arXiv:2306.13421." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 304, + 625, + 524, + 657 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 625, + 524, + 657 + ], + "spans": [ + { + "bbox": [ + 304, + 625, + 524, + 657 + ], + "type": "text", + "content": "Senedd Cymru. 2024. Record of proceedings - senedd. https://record.assembly.wales/. Accessed: 2024-10-15." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 304, + 666, + 524, + 710 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 666, + 524, + 710 + ], + "spans": [ + { + "bbox": [ + 304, + 666, + 524, + 710 + ], + "type": "text", + "content": "Uri Shaham, Maor Ivgi, Avia Efrat, Jonathan Berant, and Omer Levy. 2023. Zeroscrolls: A zero-shot benchmark for long text understanding. arXiv preprint arXiv:2305.14196." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 304, + 719, + 524, + 773 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 719, + 524, + 773 + ], + "spans": [ + { + "bbox": [ + 304, + 719, + 524, + 773 + ], + "type": "text", + "content": "Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, et al. 2022. **Scrolls: Standardized comparison over long language sequences. 
arXiv preprint arXiv:2201.03533.**" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 793, + 302, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 793, + 302, + 803 + ], + "spans": [ + { + "bbox": [ + 291, + 793, + 302, + 803 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 772 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 289, + 105 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 289, + 105 + ], + "type": "text", + "content": "Eva Sharma, Chen Li, and Lu Wang. 2019. Bigpatent: A large-scale dataset for abstractive and coherent summarization. arXiv preprint arXiv:1906.03741." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 116, + 289, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 116, + 289, + 160 + ], + "spans": [ + { + "bbox": [ + 69, + 116, + 289, + 160 + ], + "type": "text", + "content": "Chenhui Shen, Liying Cheng, Xuan-Phi Nguyen, Yang You, and Lidong Bing. 2023. Large language models are not yet human-level evaluators for abstractive summarization. arXiv preprint arXiv:2305.13091." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 171, + 289, + 215 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 171, + 289, + 215 + ], + "spans": [ + { + "bbox": [ + 69, + 171, + 289, + 215 + ], + "type": "text", + "content": "Luohe Shi, Hongyi Zhang, Yao Yao, Zuchao Li, and Hai Zhao. 2024. Keep the cost down: A review on methods to optimize llm's kv-cache consumption. Preprint, arXiv:2407.18003." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 226, + 289, + 269 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 226, + 289, + 269 + ], + "spans": [ + { + "bbox": [ + 69, + 226, + 289, + 269 + ], + "type": "text", + "content": "Dingjie Song, Shunian Chen, Guiming Hardy Chen, Fei Yu, Xiang Wan, and Benyou Wang. 2024. Milebench: Benchmarking mllms in long context. arXiv preprint arXiv:2404.18532." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 280, + 289, + 336 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 280, + 289, + 336 + ], + "spans": [ + { + "bbox": [ + 69, + 280, + 289, + 336 + ], + "type": "text", + "content": "Amanda Stent, Matthew Marge, and Mohit Singhai. 2005. Evaluating evaluation methods for generation in the presence of variation. In International conference on intelligent text processing and computational linguistics, pages 341-351. Springer." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 346, + 289, + 369 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 346, + 289, + 369 + ], + "spans": [ + { + "bbox": [ + 69, + 346, + 289, + 369 + ], + "type": "text", + "content": "Jianlin Su. 2023. Rectified rotary position embeddings. https://github.com/bojone/erope." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 379, + 289, + 423 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 379, + 289, + 423 + ], + "spans": [ + { + "bbox": [ + 69, + 379, + 289, + 423 + ], + "type": "text", + "content": "Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. 2024. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 433, + 289, + 478 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 433, + 289, + 478 + ], + "spans": [ + { + "bbox": [ + 69, + 433, + 289, + 478 + ], + "type": "text", + "content": "Simeng Sun, Kalpesh Krishna, Andrew Mattarella-Micke, and Mohit Iyyer. 2021. Do long-range language models actually use long-range context? arXiv preprint arXiv:2109.09115." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 488, + 289, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 488, + 289, + 533 + ], + "spans": [ + { + "bbox": [ + 69, + 488, + 289, + 533 + ], + "type": "text", + "content": "Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary, Xia Song, and Furu Wei. 2022. A length-extrapolatable transformer. arXiv preprint arXiv:2212.10554." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 543, + 289, + 608 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 543, + 289, + 608 + ], + "spans": [ + { + "bbox": [ + 69, + 543, + 289, + 608 + ], + "type": "text", + "content": "Haochen Tan, Zhijiang Guo, Zhan Shi, Lu Xu, Zhili Liu, Xiaoguang Li, Yasheng Wang, Lifeng Shang, Qun Liu, and Linqi Song. 2024. Proxyqa: An alternative framework for evaluating long-form text generation with large language models. arXiv preprint arXiv:2401.15042." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 619, + 289, + 696 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 619, + 289, + 696 + ], + "spans": [ + { + "bbox": [ + 69, + 619, + 289, + 696 + ], + "type": "text", + "content": "Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng. 2020. Fourier features let networks learn high frequency functions in low dimensional domains. Advances in neural information processing systems, 33:7537-7547." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 69, + 708, + 289, + 772 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 708, + 289, + 772 + ], + "spans": [ + { + "bbox": [ + 69, + 708, + 289, + 772 + ], + "type": "text", + "content": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971." + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 524, + 773 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 304, + 72, + 524, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 72, + 524, + 127 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 524, + 127 + ], + "type": "text", + "content": "Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022. 
Musique: Multi-hop questions via single-hop question composition. Transactions of the Association for Computational Linguistics, 10:539-554." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 137, + 524, + 158 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 137, + 524, + 158 + ], + "spans": [ + { + "bbox": [ + 304, + 137, + 524, + 158 + ], + "type": "text", + "content": "TVMEG. 2024. Tvmeg - tv mega engine. https://tvmeg.com/. Accessed: 2024-10-15." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 167, + 524, + 222 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 167, + 524, + 222 + ], + "spans": [ + { + "bbox": [ + 304, + 167, + 524, + 222 + ], + "type": "text", + "content": "Szymon Tworkowski, Konrad Staniszewski, Mikołaj Pacek, Yuhuai Wu, Henryk Michalewski, and Piotr Miłos. 2024. Focused transformer: Contrastive training for context scaling. Advances in Neural Information Processing Systems, 36." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 231, + 524, + 253 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 231, + 524, + 253 + ], + "spans": [ + { + "bbox": [ + 304, + 231, + 524, + 253 + ], + "type": "text", + "content": "A Vaswani. 2017. Attention is all you need. Advances in Neural Information Processing Systems." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 262, + 524, + 306 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 262, + 524, + 306 + ], + "spans": [ + { + "bbox": [ + 304, + 262, + 524, + 306 + ], + "type": "text", + "content": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2023. Attention is all you need. Preprint, arXiv:1706.03762." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 315, + 524, + 359 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 315, + 524, + 359 + ], + "spans": [ + { + "bbox": [ + 304, + 315, + 524, + 359 + ], + "type": "text", + "content": "Alex Wang, Richard Yuanzhe Pang, Angelica Chen, Jason Phang, and Samuel R Bowman. 2022. Squality: Building a long-document summarization dataset the hard way. arXiv preprint arXiv:2205.11465." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 368, + 524, + 412 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 368, + 524, + 412 + ], + "spans": [ + { + "bbox": [ + 304, + 368, + 524, + 412 + ], + "type": "text", + "content": "Ben Wang and Aran Komatsuzaki. 2021. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/mesh-transformer-jax." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 421, + 524, + 465 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 421, + 524, + 465 + ], + "spans": [ + { + "bbox": [ + 304, + 421, + 524, + 465 + ], + "type": "text", + "content": "Chonghua Wang, Haodong Duan, Songyang Zhang, Dahua Lin, and Kai Chen. 2024a. Ada-level: Evaluating long-context llms with length-adaptable benchmarks. arXiv preprint arXiv:2404.06480." 
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 474, + 524, + 528 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 474, + 524, + 528 + ], + "spans": [ + { + "bbox": [ + 304, + 474, + 524, + 528 + ], + "type": "text", + "content": "Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023a. Is chatgpt a good nlg evaluator? a preliminary study. arXiv preprint arXiv:2303.04048." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 304, + 538, + 524, + 582 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 538, + 524, + 582 + ], + "spans": [ + { + "bbox": [ + 304, + 538, + 524, + 582 + ], + "type": "text", + "content": "Peiyi Wang, Lei Li, Liang Chen, Zefan Cai, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023b. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 304, + 591, + 524, + 645 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 591, + 524, + 645 + ], + "spans": [ + { + "bbox": [ + 304, + 591, + 524, + 645 + ], + "type": "text", + "content": "Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, and Furu Wei. 2024b. Augmenting language models with long-term memory. Advances in Neural Information Processing Systems, 36." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 304, + 655, + 524, + 730 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 655, + 524, + 730 + ], + "spans": [ + { + "bbox": [ + 304, + 655, + 524, + 730 + ], + "type": "text", + "content": "Xiao Wang, Shiao Wang, Yuhe Ding, Yuehang Li, Wentao Wu, Yao Rong, Weizhe Kong, Ju Huang, Shihao Li, Haoxiang Yang, Ziwen Wang, Bo Jiang, Chenglong Li, Yaowei Wang, Yonghong Tian, and Jin Tang. 2024c. State space model for new-generation network alternative to transformers: A survey. Preprint, arXiv:2404.09516." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 304, + 740, + 524, + 773 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 740, + 524, + 773 + ], + "spans": [ + { + "bbox": [ + 304, + 740, + 524, + 773 + ], + "type": "text", + "content": "Ning Wu, Ming Gong, Linjun Shou, Shining Liang, and Daxin Jiang. 2023. Large language models are diverse role-players for summarization evaluation. In" + } + ] + } + ], + "index": 25 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 793, + 302, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 793, + 302, + 803 + ], + "spans": [ + { + "bbox": [ + 291, + 793, + 302, + 803 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 290, + 773 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 80, + 72, + 290, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 72, + 290, + 106 + ], + "spans": [ + { + "bbox": [ + 80, + 72, + 290, + 106 + ], + "type": "text", + "content": "CCF International Conference on Natural Language Processing and Chinese Computing, pages 695-707. Springer." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 113, + 289, + 147 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 113, + 289, + 147 + ], + "spans": [ + { + "bbox": [ + 69, + 113, + 289, + 147 + ], + "type": "text", + "content": "Yuhuai Wu, Markus N Rabe, DeLesley Hutchins, and Christian Szegedy. 2022. Memorizing transformers. arXiv preprint arXiv:2203.08913." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 154, + 289, + 221 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 154, + 289, + 221 + ], + "spans": [ + { + "bbox": [ + 69, + 154, + 289, + 221 + ], + "type": "text", + "content": "Chaojun Xiao, Pangle Zhang, Xu Han, Guangxuan Xiao, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Song Han, and Maosong Sun. 2024. Inflamm: Unveiling the intrinsic capacity of llms for understanding extremely long sequences with training-free memory. arXiv preprint arXiv:2402.04617." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 228, + 289, + 271 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 228, + 289, + 271 + ], + "spans": [ + { + "bbox": [ + 69, + 228, + 289, + 271 + ], + "type": "text", + "content": "Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2023. Efficient streaming language models with attention sinks. arXiv preprint arXiv:2309.17453." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 280, + 289, + 323 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 280, + 289, + 323 + ], + "spans": [ + { + "bbox": [ + 69, + 280, + 289, + 323 + ], + "type": "text", + "content": "Xin Xu, Tong Xiao, Zitong Chao, Zhenya Huang, Can Yang, and Yang Wang. 2024. Can llms solve longer math word problems better? arXiv preprint arXiv:2405.14804." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 332, + 289, + 387 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 332, + 289, + 387 + ], + "spans": [ + { + "bbox": [ + 69, + 332, + 289, + 387 + ], + "type": "text", + "content": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 395, + 289, + 428 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 395, + 289, + 428 + ], + "spans": [ + { + "bbox": [ + 69, + 395, + 289, + 428 + ], + "type": "text", + "content": "Dian Yu, Kai Sun, Claire Cardie, and Dong Yu. 2020. Dialogue-based relation extraction. arXiv preprint arXiv:2004.08056." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 437, + 289, + 480 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 437, + 289, + 480 + ], + "spans": [ + { + "bbox": [ + 69, + 437, + 289, + 480 + ], + "type": "text", + "content": "Hao Yu, Aoran Gan, Kai Zhang, Shiwei Tong, Qi Liu, and Zhaofeng Liu. 2024. Evaluation of retrieval-augmented generation: A survey. Preprint, arXiv:2405.07437." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 488, + 289, + 544 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 488, + 289, + 544 + ], + "spans": [ + { + "bbox": [ + 69, + 488, + 289, + 544 + ], + "type": "text", + "content": "Lili Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, and Mike Lewis. 2023. 
Megabyte: Predicting million-byte sequences with multiscale transformers. Advances in Neural Information Processing Systems, 36:78808-78823." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 551, + 289, + 607 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 551, + 289, + 607 + ], + "spans": [ + { + "bbox": [ + 69, + 551, + 289, + 607 + ], + "type": "text", + "content": "Tao Yuan, Xuefei Ning, Dong Zhou, Zhijie Yang, Shiyao Li, Minghui Zhuang, Zheyue Tan, Zhuyu Yao, Dahua Lin, Boxun Li, et al. 2024. Lv-eval: A balanced long-context benchmark with 5 length levels up to 256k. arXiv preprint arXiv:2402.05136." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 614, + 289, + 658 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 614, + 289, + 658 + ], + "spans": [ + { + "bbox": [ + 69, + 614, + 289, + 658 + ], + "type": "text", + "content": "Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. Bartscore: Evaluating generated text as text generation. Advances in Neural Information Processing Systems, 34:27263-27277." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 69, + 666, + 289, + 710 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 666, + 289, + 710 + ], + "spans": [ + { + "bbox": [ + 69, + 666, + 289, + 710 + ], + "type": "text", + "content": "Daoguang Zan, Bei Chen, Zeqi Lin, Bei Guan, Yongji Wang, and Jian-Guang Lou. 2022. When language model meets private library. arXiv preprint arXiv:2210.17236." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 69, + 718, + 289, + 773 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 718, + 289, + 773 + ], + "spans": [ + { + "bbox": [ + 69, + 718, + 289, + 773 + ], + "type": "text", + "content": "Fengji Zhang, Bei Chen, Yue Zhang, Jacky Keung, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, and Weizhu Chen. 2023a. Repocoder: Repository-level code completion through iterative retrieval and generation. arXiv preprint arXiv:2303.12570." + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 524, + 773 + ], + "type": "list", + "angle": 0, + "index": 25, + "blocks": [ + { + "bbox": [ + 304, + 72, + 524, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 72, + 524, + 116 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 524, + 116 + ], + "type": "text", + "content": "Lei Zhang, Yunshui Li, Ziqiang Liu, Junhao Liu, Min Yang, et al. 2023b. Marathon: A race through the realm of long context with large language models. arXiv preprint arXiv:2312.09542." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 124, + 524, + 167 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 124, + 524, + 167 + ], + "spans": [ + { + "bbox": [ + 304, + 124, + 524, + 167 + ], + "type": "text", + "content": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. arxiv. arXiv preprint arXiv:1904.09675." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 175, + 524, + 253 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 175, + 524, + 253 + ], + "spans": [ + { + "bbox": [ + 304, + 175, + 524, + 253 + ], + "type": "text", + "content": "Xinrong Zhang, Yingfa Chen, Shengding Hu, Zihang Xu, Junhao Chen, Moo Hao, Xu Han, Zhen Thai, Shuo Wang, Zhiyuan Liu, et al. 2024. 
" + }, + { + "bbox": [ + 304, + 175, + 524, + 253 + ], + "type": "inline_equation", + "content": "\\propto" + }, + { + "bbox": [ + 304, + 175, + 524, + 253 + ], + "type": "text", + "content": " bench: Extending long context evaluation beyond 100k tokens. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15262-15277." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 259, + 524, + 303 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 259, + 524, + 303 + ], + "spans": [ + { + "bbox": [ + 304, + 259, + 524, + 303 + ], + "type": "text", + "content": "Liang Zhao, Xiaocheng Feng, Xiachong Feng, Bin Qin, and Ting Liu. 2023. Length extrapolation of transformers: A survey from the perspective of position encoding. arXiv preprint arXiv:2312.17044." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 311, + 524, + 366 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 311, + 524, + 366 + ], + "spans": [ + { + "bbox": [ + 304, + 311, + 524, + 366 + ], + "type": "text", + "content": "Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu, Ling Yang, Wentao Zhang, Jie Jiang, and Bin Cui. 2024. Retrieval-augmented generation for ai-generated content: A survey. Preprint, arXiv:2402.19473." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 373, + 524, + 439 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 373, + 524, + 439 + ], + "spans": [ + { + "bbox": [ + 304, + 373, + 524, + 439 + ], + "type": "text", + "content": "Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2024. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 447, + 524, + 524 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 447, + 524, + 524 + ], + "spans": [ + { + "bbox": [ + 304, + 447, + 524, + 524 + ], + "type": "text", + "content": "Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Lei Shen, Zihan Wang, Andi Wang, Yang Li, et al. 2023. Codegeex: A pre-trained model for code generation with multilingual benchmarking on humaneval-x. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 5673-5684." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 531, + 524, + 596 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 531, + 524, + 596 + ], + "spans": [ + { + "bbox": [ + 304, + 531, + 524, + 596 + ], + "type": "text", + "content": "Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, et al. 2021. Qmsum: A new benchmark for query-based multidomain meeting summarization. arXiv preprint arXiv:2104.05938." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 304, + 604, + 524, + 660 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 604, + 524, + 660 + ], + "spans": [ + { + "bbox": [ + 304, + 604, + 524, + 660 + ], + "type": "text", + "content": "Zixuan Zhou, Xuefei Ning, Ke Hong, Tianyu Fu, Ji-aming Xu, Shiyao Li, Yuming Lou, Luning Wang, Zhihang Yuan, Xiuhong Li, et al. 2024. A survey on efficient inference for large language models. arXiv preprint arXiv:2404.14294." 
+ } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 304, + 666, + 524, + 711 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 666, + 524, + 711 + ], + "spans": [ + { + "bbox": [ + 304, + 666, + 524, + 711 + ], + "type": "text", + "content": "Dawei Zhu, Liang Wang, Nan Yang, Yifan Song, Wenhao Wu, Furu Wei, and Sujian Li. 2024. Longembed: Extending embedding models for long context retrieval. arXiv preprint arXiv:2404.12096." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 304, + 718, + 524, + 773 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 718, + 524, + 773 + ], + "spans": [ + { + "bbox": [ + 304, + 718, + 524, + 773 + ], + "type": "text", + "content": "Ziyu Zhuang, Qiguang Chen, Longxuan Ma, Mingda Li, Yi Han, Yushan Qian, Haopeng Bai, Zixian Feng, Weinan Zhang, and Ting Liu. 2023. Through the lens of core competency: Survey on evaluation of large language models. ArXiv, abs/2308.07902." + } + ] + } + ], + "index": 24 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 793, + 303, + 802 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 793, + 303, + 802 + ], + "spans": [ + { + "bbox": [ + 291, + 793, + 303, + 802 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 26 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 68, + 71, + 204, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 71, + 204, + 85 + ], + "spans": [ + { + "bbox": [ + 68, + 71, + 204, + 85 + ], + "type": "text", + "content": "A Details of Approaches" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 92, + 291, + 146 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 92, + 291, + 146 + ], + "spans": [ + { + "bbox": [ + 67, + 92, + 291, + 146 + ], + "type": "text", + "content": "This section serves as a supplement to the Approaches section 3 in the main text, expanding on relevant details about related methods to provide readers with a deeper understanding." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 68, + 154, + 188, + 168 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 154, + 188, + 168 + ], + "spans": [ + { + "bbox": [ + 68, + 154, + 188, + 168 + ], + "type": "text", + "content": "A.1 Postional Encoding" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 68, + 171, + 188, + 184 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 171, + 188, + 184 + ], + "spans": [ + { + "bbox": [ + 68, + 171, + 188, + 184 + ], + "type": "text", + "content": "A.1.1 Variants of RoPE" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 187, + 291, + 282 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 187, + 291, + 282 + ], + "spans": [ + { + "bbox": [ + 67, + 187, + 291, + 282 + ], + "type": "text", + "content": "Su et al. (2024) try to seek a positional encoding method that could encode relative position during the computing query and key similarity, and decompose this process into the representations of the query and key. 
They conduct theoretical analysis, and propose a novel positional encoding, which transform similarity into following formula:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 84, + 288, + 290, + 322 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 288, + 290, + 322 + ], + "spans": [ + { + "bbox": [ + 84, + 288, + 290, + 322 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\operatorname {s i m} \\left(q _ {m}, k _ {n}\\right) = q _ {m} ^ {\\top} R _ {\\Theta , n - m} ^ {d} k _ {n} \\tag {3} \\\\ = (R _ {\\Theta , m} ^ {d} q _ {m}) ^ {\\top} (R _ {\\Theta , n} ^ {d} k _ {n}), \\\\ \\end{array}", + "image_path": "2c989fb1c9b4cc75637dbe386b8f29865aca502eb3fd6bbf7fdc599df0cdbc86.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 330, + 290, + 370 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 330, + 290, + 370 + ], + "spans": [ + { + "bbox": [ + 67, + 330, + 290, + 370 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 67, + 330, + 290, + 370 + ], + "type": "inline_equation", + "content": "R_{\\Theta ,m}^{d}" + }, + { + "bbox": [ + 67, + 330, + 290, + 370 + ], + "type": "text", + "content": " are a series of pre-defined orthogonal matrices, named as the rotation matrix, which is defined as follows:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 70, + 375, + 291, + 424 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 70, + 375, + 291, + 424 + ], + "spans": [ + { + "bbox": [ + 70, + 375, + 291, + 424 + ], + "type": "interline_equation", + "content": "R _ {\\Theta , m} ^ {d} = \\left( \\begin{array}{c c c c c} \\cos m \\theta_ {1} & - \\sin m \\theta_ {1} & \\dots & 0 & 0 \\\\ \\sin m \\theta_ {1} & \\cos m \\theta_ {1} & \\dots & 0 & 0 \\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\ \\cdot & \\cdot & \\cdot & \\cdot & \\cdot \\\\ 0 & 0 & \\dots & \\cos m \\theta_ {d / 2} & - \\sin m \\theta_ {d / 2} \\\\ 0 & 0 & \\dots & \\sin m \\theta_ {d / 2} & \\cos m \\theta_ {d / 2} \\end{array} \\right)", + "image_path": "f85d2c3f61ac40fc64679c55f4597e20aba096dac6842999219c50338877fedf.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 430, + 291, + 565 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 430, + 291, + 565 + ], + "spans": [ + { + "bbox": [ + 67, + 430, + 291, + 565 + ], + "type": "text", + "content": "The function set " + }, + { + "bbox": [ + 67, + 430, + 291, + 565 + ], + "type": "inline_equation", + "content": "\\Theta" + }, + { + "bbox": [ + 67, + 430, + 291, + 565 + ], + "type": "text", + "content": " consists of a set of pre-defined function values " + }, + { + "bbox": [ + 67, + 430, + 291, + 565 + ], + "type": "inline_equation", + "content": "\\Theta = \\{\\theta_{i} = 10000^{-2(i - 1) / d}, i \\in [1,2,\\dots,d / 2]\\}" + }, + { + "bbox": [ + 67, + 430, + 291, + 565 + ], + "type": "text", + "content": ". " + }, + { + "bbox": [ + 67, + 430, + 291, + 565 + ], + "type": "inline_equation", + "content": "R_{\\Theta}^{d}" + }, + { + "bbox": [ + 67, + 430, + 291, + 565 + ], + "type": "text", + "content": " integrates positional information into the query and key vectors by multiplication. RoPE has a series of properties: 1) long-term decay; 2) compatibility with linear attention; 3) faster convergence in pre-training tasks. Besides, Liu et al. (2024b) conduct a detailed analysis of RoPE and provides the scaling laws for RoPE-based extrapolation." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 571, + 291, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 571, + 291, + 666 + ], + "spans": [ + { + "bbox": [ + 67, + 571, + 291, + 666 + ], + "type": "text", + "content": "Position Index Adjustment An et al. (2024) propose Dual Chunk Attention (DCA), which distributes the position indexes used during pretraining to each token based on the relative position relationships between query and key without additional training. It is proposed from the perspective of allocation of position indexes." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 667, + 291, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 667, + 291, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 667, + 291, + 775 + ], + "type": "text", + "content": "And there are also some methods based on scaling position indexes. Chen et al. (2023b) propose Position Interpolation (PI) method that utilizes the fact that position encoding can be applied to non-integer positions. They modify original position index " + }, + { + "bbox": [ + 67, + 667, + 291, + 775 + ], + "type": "inline_equation", + "content": "m" + }, + { + "bbox": [ + 67, + 667, + 291, + 775 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 67, + 667, + 291, + 775 + ], + "type": "inline_equation", + "content": "m' = m\\frac{L}{L'}" + }, + { + "bbox": [ + 67, + 667, + 291, + 775 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 67, + 667, + 291, + 775 + ], + "type": "inline_equation", + "content": "L" + }, + { + "bbox": [ + 67, + 667, + 291, + 775 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 67, + 667, + 291, + 775 + ], + "type": "inline_equation", + "content": "L'" + }, + { + "bbox": [ + 67, + 667, + 291, + 775 + ], + "type": "text", + "content": " are the length of pre-trained window and current input sequence, respectively. This method insert additional" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 71, + 526, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 111 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 111 + ], + "type": "text", + "content": "positional encoding between adjacent integer position index in the original RoPE to handle longer sequences." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 112, + 527, + 205 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 112, + 527, + 205 + ], + "spans": [ + { + "bbox": [ + 302, + 112, + 527, + 205 + ], + "type": "text", + "content": "Combining above two methods, Su (2023) proposed ReRoPE, which combines direct extrapolation and position interpolation. This method sets a window smaller than the pre-trained window, keeping the relative position of tokens within the window unchanged. And scales the relative position of tokens outside the window." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 216, + 525, + 282 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 216, + 525, + 282 + ], + "spans": [ + { + "bbox": [ + 302, + 216, + 525, + 282 + ], + "type": "text", + "content": "Base Frequency Adjustment As described in the main text, this type of methods enhance the model extrapolation performance by modifying " + }, + { + "bbox": [ + 302, + 216, + 525, + 282 + ], + "type": "inline_equation", + "content": "\\theta_{i}" + }, + { + "bbox": [ + 302, + 216, + 525, + 282 + ], + "type": "text", + "content": " in the trigonometric function terms in the rotation matrix." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 284, + 525, + 350 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 284, + 525, + 350 + ], + "spans": [ + { + "bbox": [ + 302, + 284, + 525, + 350 + ], + "type": "text", + "content": "Peng and Quesnelle (2023); Roziere et al. (2023) choose to change the base " + }, + { + "bbox": [ + 302, + 284, + 525, + 350 + ], + "type": "inline_equation", + "content": "b" + }, + { + "bbox": [ + 302, + 284, + 525, + 350 + ], + "type": "text", + "content": " of the exponential terms " + }, + { + "bbox": [ + 302, + 284, + 525, + 350 + ], + "type": "inline_equation", + "content": "\\theta_{i}" + }, + { + "bbox": [ + 302, + 284, + 525, + 350 + ], + "type": "text", + "content": " from the default value " + }, + { + "bbox": [ + 302, + 284, + 525, + 350 + ], + "type": "inline_equation", + "content": "b = 10000" + }, + { + "bbox": [ + 302, + 284, + 525, + 350 + ], + "type": "text", + "content": " to other values which can improve the model extrapolation performance." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 352, + 527, + 473 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 352, + 527, + 473 + ], + "spans": [ + { + "bbox": [ + 302, + 352, + 527, + 473 + ], + "type": "text", + "content": "Different from them, some work directly scale " + }, + { + "bbox": [ + 302, + 352, + 527, + 473 + ], + "type": "inline_equation", + "content": "\\theta_{i}" + }, + { + "bbox": [ + 302, + 352, + 527, + 473 + ], + "type": "text", + "content": ". NTK-by-parts (bloc97, 2023) interpolation chooses to scale the " + }, + { + "bbox": [ + 302, + 352, + 527, + 473 + ], + "type": "inline_equation", + "content": "\\theta_{i}" + }, + { + "bbox": [ + 302, + 352, + 527, + 473 + ], + "type": "text", + "content": " of different dimensions in the rotation matrix by a ratio as a function of the dimension " + }, + { + "bbox": [ + 302, + 352, + 527, + 473 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 302, + 352, + 527, + 473 + ], + "type": "text", + "content": " and the input sequence length " + }, + { + "bbox": [ + 302, + 352, + 527, + 473 + ], + "type": "inline_equation", + "content": "L^{\\prime}" + }, + { + "bbox": [ + 302, + 352, + 527, + 473 + ], + "type": "text", + "content": ". 
And YaRN (Peng et al., 2023) incorporates temperature " + }, + { + "bbox": [ + 302, + 352, + 527, + 473 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 302, + 352, + 527, + 473 + ], + "type": "text", + "content": " related to the input sequence length " + }, + { + "bbox": [ + 302, + 352, + 527, + 473 + ], + "type": "inline_equation", + "content": "L^{\\prime}" + }, + { + "bbox": [ + 302, + 352, + 527, + 473 + ], + "type": "text", + "content": " on the basis of NTK-by-parts interpolation to further improve the extrapolation performance of the model." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 481, + 525, + 550 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 481, + 525, + 550 + ], + "spans": [ + { + "bbox": [ + 302, + 481, + 525, + 550 + ], + "type": "text", + "content": "Structure Modification XPOS (Sun et al., 2022) adjusts the original RoPE structure and introduces a position-dependent exponential bias to enhance relative position information, particularly enhancing the decay effect on distant tokens." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 303, + 558, + 409, + 570 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 558, + 409, + 570 + ], + "spans": [ + { + "bbox": [ + 303, + 558, + 409, + 570 + ], + "type": "text", + "content": "A.1.2 Attention Bias" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 575, + 525, + 629 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 575, + 525, + 629 + ], + "spans": [ + { + "bbox": [ + 302, + 575, + 525, + 629 + ], + "type": "text", + "content": "Besides applying RoPE-based methods, a plenty of method add a bias related to the relative distance between tokens to introduce relative position information. The process can be expressed as follows:" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 329, + 640, + 525, + 655 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 640, + 525, + 655 + ], + "spans": [ + { + "bbox": [ + 329, + 640, + 525, + 655 + ], + "type": "interline_equation", + "content": "\\operatorname {s i m} \\left(q _ {m}, k _ {n}\\right) = q _ {m} ^ {\\top} k _ {n} + f _ {b i a s} (m, n), \\quad (4)", + "image_path": "07dd0d2d903d3f96375ddac200c3c6bfd4f1cf51aa083cffc80742fd1bef8cf8.jpg" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 302, + 666, + 525, + 719 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 666, + 525, + 719 + ], + "spans": [ + { + "bbox": [ + 302, + 666, + 525, + 719 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 302, + 666, + 525, + 719 + ], + "type": "inline_equation", + "content": "f_{bias}(m,n)" + }, + { + "bbox": [ + 302, + 666, + 525, + 719 + ], + "type": "text", + "content": " is a bias function that depends on the token position index corresponding to query and key. " + }, + { + "bbox": [ + 302, + 666, + 525, + 719 + ], + "type": "inline_equation", + "content": "f_{bias}(m,n)" + }, + { + "bbox": [ + 302, + 666, + 525, + 719 + ], + "type": "text", + "content": " be divided into two categories: learnable and predefined." 
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 302, + 721, + 525, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 721, + 525, + 775 + ], + "spans": [ + { + "bbox": [ + 302, + 721, + 525, + 775 + ], + "type": "text", + "content": "In learnable " + }, + { + "bbox": [ + 302, + 721, + 525, + 775 + ], + "type": "inline_equation", + "content": "f_{bias}" + }, + { + "bbox": [ + 302, + 721, + 525, + 775 + ], + "type": "text", + "content": ", it may be related to " + }, + { + "bbox": [ + 302, + 721, + 525, + 775 + ], + "type": "inline_equation", + "content": "m - n" + }, + { + "bbox": [ + 302, + 721, + 525, + 775 + ], + "type": "text", + "content": " where relative position information is explicitly introduced. For example, in T5 (Raffel et al., 2020), " + }, + { + "bbox": [ + 302, + 721, + 525, + 775 + ], + "type": "inline_equation", + "content": "f_{bias}" + }, + { + "bbox": [ + 302, + 721, + 525, + 775 + ], + "type": "text", + "content": " is a learnable function with " + }, + { + "bbox": [ + 302, + 721, + 525, + 775 + ], + "type": "inline_equation", + "content": "m - n" + }, + { + "bbox": [ + 302, + 721, + 525, + 775 + ], + "type": "text", + "content": " as input" + } + ] + } + ], + "index": 21 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 792, + 302, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 792, + 302, + 803 + ], + "spans": [ + { + "bbox": [ + 291, + 792, + 302, + 803 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 290, + 124 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 290, + 124 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 290, + 124 + ], + "type": "text", + "content": "and varies with attention heads. Similarly, KERPLE (Chi et al., 2022a) sets " + }, + { + "bbox": [ + 67, + 71, + 290, + 124 + ], + "type": "inline_equation", + "content": "f_{bias}" + }, + { + "bbox": [ + 67, + 71, + 290, + 124 + ], + "type": "text", + "content": " as a parameterized kernel function, requiring training to determine the parameter values." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "spans": [ + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "text", + "content": "The predefined " + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "inline_equation", + "content": "f_{bias}" + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "text", + "content": " is typically ALiBi (Attention with Linear Biases) (Press et al., 2021). 
It uses a predefined function for " + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "inline_equation", + "content": "f_{bias}" + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "text", + "content": " that depends on the number of attention heads " + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "inline_equation", + "content": "H" + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "text", + "content": " and the current head number " + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "inline_equation", + "content": "h" + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "text", + "content": ", which is expressed as " + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "inline_equation", + "content": "f_{bias}(m, n) = 2^{-\\frac{8h}{H}} \\cdot (n - m)" + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "text", + "content": ". Besides, in Sandwich method (Chi et al., 2022b), " + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "inline_equation", + "content": "f_{bias}" + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "text", + "content": " is defined as " + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "inline_equation", + "content": "f_{bias} = \\frac{8h}{H} \\cdot (p_m^\\top p_n - \\frac{d}{2})" + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "inline_equation", + "content": "p_m" + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "inline_equation", + "content": "p_n" + }, + { + "bbox": [ + 67, + 126, + 291, + 261 + ], + "type": "text", + "content": " are the sinusoidal positional encoding used in the original Transformer model." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 275, + 199, + 288 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 275, + 199, + 288 + ], + "spans": [ + { + "bbox": [ + 67, + 275, + 199, + 288 + ], + "type": "text", + "content": "A.2 Context Compression" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 295, + 189, + 307 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 295, + 189, + 307 + ], + "spans": [ + { + "bbox": [ + 67, + 295, + 189, + 307 + ], + "type": "text", + "content": "A.2.1 Soft Compression" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 312, + 289, + 338 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 312, + 289, + 338 + ], + "spans": [ + { + "bbox": [ + 67, + 312, + 289, + 338 + ], + "type": "text", + "content": "This kind of methods achieve compression at the hidden states level." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 340, + 290, + 502 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 340, + 290, + 502 + ], + "spans": [ + { + "bbox": [ + 67, + 340, + 290, + 502 + ], + "type": "text", + "content": "Bulatov et al. (2022) introduced the Recurrent Memory Transformer (RMT), which compresses at segment level. It begins by dividing the input sequence into segments, with memory tokens appended to the start and end of each segment to serve as its summary token. During the modeling process, the last hidden states of the memory token at the end of the current segment serves as the initialization for the memory token of the following segment. 
Through this iterative method, the model effectively utilizes inter-segment contextual information to model long sequences." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 503, + 291, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 503, + 291, + 731 + ], + "spans": [ + { + "bbox": [ + 67, + 503, + 291, + 731 + ], + "type": "text", + "content": "Similarly, the Recurrent Attention Network (RAN, Li et al., 2023b) appends a Global Perception Cell (GPC) vector at the start of the hidden vector representation of each segment to achieve a compressed representation achieving the effect of concatenating summary tokens, and completing the information interaction between segments. This method simulates the human mechanism of memory enhancement through review, introducing a Memory Review scheme which performs crossattention between last hidden states of the GPC from all segments and and the original input to update the representation of GPC. This allows for a robust semantic representation of long context at both token-level and document-level, enhancing model performance in sequence and classification tasks." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 735, + 291, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 735, + 291, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 735, + 291, + 775 + ], + "type": "text", + "content": "AutoCompressors (Chevalier et al., 2023) is built on the basis of RMT, compressing the content of the segment into summary vectors for representa" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 71, + 525, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 525, + 138 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 525, + 138 + ], + "type": "text", + "content": "tion. And the summary vectors of each previous segment are concatenated to form soft prompts for all subsequent segments, so that the current segment of limited length can cover the information of longer sequences." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 140, + 526, + 275 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 140, + 526, + 275 + ], + "spans": [ + { + "bbox": [ + 302, + 140, + 526, + 275 + ], + "type": "text", + "content": "In addition, In-context Autoencoder (ICAE, Ge et al., 2023) adds memory tokens at the end of the input sequence to compress context into short memory slots while training the model to generate outputs closely resembling the original context. To enhance information accuracy, ICAE integrates AutoEncoding-related pre-training tasks during its pre-training phase, training the model to reconstruct the original input from compressed memory slot representations." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 276, + 525, + 317 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 276, + 525, + 317 + ], + "spans": [ + { + "bbox": [ + 302, + 276, + 525, + 317 + ], + "type": "text", + "content": "Gisting (Mu et al., 2024b) similarly compresses the prompt part of the input token sequence into shorter gist tokens, improving inference speed." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 303, + 327, + 430, + 339 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 327, + 430, + 339 + ], + "spans": [ + { + "bbox": [ + 303, + 327, + 430, + 339 + ], + "type": "text", + "content": "A.2.2 Hard Compression" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 344, + 525, + 371 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 344, + 525, + 371 + ], + "spans": [ + { + "bbox": [ + 302, + 344, + 525, + 371 + ], + "type": "text", + "content": "Hard compression directly utilizes LLMs to compress original input text." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 373, + 525, + 507 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 373, + 525, + 507 + ], + "spans": [ + { + "bbox": [ + 302, + 373, + 525, + 507 + ], + "type": "text", + "content": "LLMLingua (Jiang et al., 2023) trains a small model to align with the output of LLM and uses the perplexity (PPL) of the small model as an evaluation for token importance. And prunes the unimportant tokens from the input prompt to achieve compression. Furthe, LongLLMLingua (Jiang et al., 2024a) has made improvements on this basis, compressing the input based on the content of the question, thus better preserving key information related to the question." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 508, + 525, + 602 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 508, + 525, + 602 + ], + "spans": [ + { + "bbox": [ + 302, + 508, + 525, + 602 + ], + "type": "text", + "content": "Differently, MEMWALKER (Chen et al., 2023a) employs a hierarchical summarization approach to compress long context sequences, iteratively summarizing the input to construct a tree-like structure of summarized content. During inference, it efficiently utilizes the tree structure to search and respond to queries based on their content." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 616, + 432, + 628 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 616, + 432, + 628 + ], + "spans": [ + { + "bbox": [ + 302, + 616, + 432, + 628 + ], + "type": "text", + "content": "A.3 Retrieval Augmented" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 634, + 443, + 647 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 634, + 443, + 647 + ], + "spans": [ + { + "bbox": [ + 302, + 634, + 443, + 647 + ], + "type": "text", + "content": "A.3.1 Retrieval Granularity" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 652, + 524, + 692 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 652, + 524, + 692 + ], + "spans": [ + { + "bbox": [ + 302, + 652, + 524, + 692 + ], + "type": "text", + "content": "The retrieval granularity in existing work can be divided into two categories: token-level retrieval and block-level retrieval." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 694, + 525, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 694, + 525, + 774 + ], + "spans": [ + { + "bbox": [ + 302, + 694, + 525, + 774 + ], + "type": "text", + "content": "Token-level retrieval is to select top-k tokens with highest similarity scores in one turn. This method is widely used in existing (Wu et al., 2022; Tworkowski et al., 2024; Bertsch et al., 2024a). It is simple to implement, but it has some limitations. 
Such as the potential for semantic discontinuities" + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 793, + 304, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 793, + 304, + 803 + ], + "spans": [ + { + "bbox": [ + 291, + 793, + 304, + 803 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 292, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 292, + 111 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 292, + 111 + ], + "type": "text", + "content": "due to discrete token retrieval and the need to recalculate similarity for all tokens, which is computationally intensive and inefficient." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 114, + 292, + 477 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 114, + 292, + 477 + ], + "spans": [ + { + "bbox": [ + 69, + 114, + 292, + 477 + ], + "type": "text", + "content": "Consequently, researchers have proposed block-level retrieval, which uses blocks composed of continuous tokens of a fixed length as the retrieval unit. Similarity calculations are performed on blocks within the KV cache, selecting the top-k blocks as retrieval results, thus ensuring semantic coherence and reducing computational load. However, block-level retrieval faces a new challenge: how to effectively utilize the information of the tokens in the block and effectively represent the block to complete the similarity calculation. Long-MEM (Wang et al., 2024b) and RPT (Rubin and Berant, 2023) represent the corresponding block by calculating the mean pooling of token representations within the block. InFLLM (Xiao et al., 2024) calculates the representative score of each token within the block against other tokens, selecting a subset of high-scoring tokens to represent the block. Additionally, some methods introduce an extra token to represent blocks, such as the Landmark method (Mohtashami and Jaggi, 2024) introduces the Landmark token, a new token into the vocabulary, and placie it at the end of each block. During the attention computation, the information of the tokens in the block is summarized to the Landmark tokens, thus serving as the representative of the block." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 487, + 218, + 500 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 487, + 218, + 500 + ], + "spans": [ + { + "bbox": [ + 67, + 487, + 218, + 500 + ], + "type": "text", + "content": "A.3.2 Similarity Computation" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 504, + 292, + 599 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 504, + 292, + 599 + ], + "spans": [ + { + "bbox": [ + 67, + 504, + 292, + 599 + ], + "type": "text", + "content": "After determining the retrieval granularity, we need to formulate an appropriate rule to compute similarity. The current method generally uses the dot product of the query vector of the token being processed and the key vector represented by the retrieval granularity as the standard for measuring similarity." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 608, + 201, + 622 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 608, + 201, + 622 + ], + "spans": [ + { + "bbox": [ + 67, + 608, + 201, + 622 + ], + "type": "text", + "content": "A.3.3 Positional Encoding" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 625, + 290, + 679 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 625, + 290, + 679 + ], + "spans": [ + { + "bbox": [ + 67, + 625, + 290, + 679 + ], + "type": "text", + "content": "Since the positions of the retrieved context tokens are not fixed, and recording each token's specific position in the KV cache is costly, it is challenging to provide accurate position information." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 681, + 292, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 681, + 292, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 681, + 292, + 775 + ], + "type": "text", + "content": "Based on experiments of Dai et al. (2019), which show that the relative position information of distant tokens does not seem to be important, some methods like MemTRM, FoT, and InfLLM choose to uniformly set the position encoding of the retrieved context token part to the same position vector, ignoring the position information between the" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 71, + 464, + 84 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 464, + 84 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 464, + 84 + ], + "type": "text", + "content": "retrieved context tokens themselves." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 84, + 526, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 84, + 526, + 138 + ], + "spans": [ + { + "bbox": [ + 302, + 84, + 526, + 138 + ], + "type": "text", + "content": "Besides, Landmark places the retrieved context tokens and local context tokens within the same window and re-encodes their relative positions together." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 147, + 443, + 158 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 147, + 443, + 158 + ], + "spans": [ + { + "bbox": [ + 302, + 147, + 443, + 158 + ], + "type": "text", + "content": "A.3.4 Attention Calculation" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 163, + 525, + 203 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 163, + 525, + 203 + ], + "spans": [ + { + "bbox": [ + 302, + 163, + 525, + 203 + ], + "type": "text", + "content": "When it comes to attention calculation, it's important to find a suitable method to make full use of retrieved context tokens and local context tokens." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 204, + 526, + 283 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 204, + 526, + 283 + ], + "spans": [ + { + "bbox": [ + 302, + 204, + 526, + 283 + ], + "type": "text", + "content": "The simplest approach is to treat both types of tokens equally, that is using the conventional attention calculation method. For example, FoT and InfLLM use standard attention for calculation, while Unlimformer (Bertsch et al., 2024a) employs cross attention." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 285, + 527, + 474 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 285, + 527, + 474 + ], + "spans": [ + { + "bbox": [ + 302, + 285, + 527, + 474 + ], + "type": "text", + "content": "However, the importance of the information contained within these two types of context tokens is not the same for the token currently being processed. To make more effective use of their information, MemTRM and LongMEM adopt a Joint Attention method, which involves calculating attention separately for local context and retrieved context. And then combining them with weighted average " + }, + { + "bbox": [ + 302, + 285, + 527, + 474 + ], + "type": "inline_equation", + "content": "V_{a} = g \\cdot V_{l} + (1 - g) \\cdot V_{r}" + }, + { + "bbox": [ + 302, + 285, + 527, + 474 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 302, + 285, + 527, + 474 + ], + "type": "inline_equation", + "content": "V_{a}, V_{l}, V_{r}" + }, + { + "bbox": [ + 302, + 285, + 527, + 474 + ], + "type": "text", + "content": " respectively represent the final attention result, the attention result using local context and the attention result using retrieved context, and " + }, + { + "bbox": [ + 302, + 285, + 527, + 474 + ], + "type": "inline_equation", + "content": "g" + }, + { + "bbox": [ + 302, + 285, + 527, + 474 + ], + "type": "text", + "content": " is a learnable parameter used to balance the contributions of the two parts." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 475, + 527, + 650 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 475, + 527, + 650 + ], + "spans": [ + { + "bbox": [ + 302, + 475, + 527, + 650 + ], + "type": "text", + "content": "Furthermore, in order to distinguish the information from different positions within the retrieved context tokens in a more fine-grained manner, Landmark employs the Grouped Softmax method. Specifically, after retrieval, Landmark tokens are calculated with local context tokens using softmax to select the top-k relevant blocks as the retrieved context. Attention is then calculated separately within these blocks. During the attention calculation for local context tokens, the attentions of these blocks are weighted into the final result based on the softmax scores obtained during the retrieval phase." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 661, + 416, + 672 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 661, + 416, + 672 + ], + "spans": [ + { + "bbox": [ + 302, + 661, + 416, + 672 + ], + "type": "text", + "content": "A.4 Attention Pattern" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 677, + 416, + 691 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 677, + 416, + 691 + ], + "spans": [ + { + "bbox": [ + 302, + 677, + 416, + 691 + ], + "type": "text", + "content": "A.4.1 Sliding Window" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 694, + 525, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 694, + 525, + 775 + ], + "spans": [ + { + "bbox": [ + 302, + 694, + 525, + 775 + ], + "type": "text", + "content": "This type of method tranform information between segments. Transformer-XL (Dai et al., 2019) uses sliding window method to process long context, where the hidden state from the previous segment is concatenated to the front of the current segment. 
It not only utilizes the key and value information" + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 792, + 304, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 792, + 304, + 803 + ], + "spans": [ + { + "bbox": [ + 291, + 792, + 304, + 803 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 290, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 290, + 138 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 290, + 138 + ], + "type": "text", + "content": "from the current segment but also reuses those from the previous segment. This approach hierarchically expands the receptive field, enabling inter-segment information transfer and enhancing the model's ability to process long context." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 140, + 291, + 343 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 140, + 291, + 343 + ], + "spans": [ + { + "bbox": [ + 69, + 140, + 291, + 343 + ], + "type": "text", + "content": "Besides, Han et al. (2024) identify that starting tokens occupy a distinct feature space, and these tokens act as a factor causing model length generalization failures. They further propose LM-Infinite as a solution, utilizing a " + }, + { + "bbox": [ + 69, + 140, + 291, + 343 + ], + "type": "inline_equation", + "content": "\\Lambda" + }, + { + "bbox": [ + 69, + 140, + 291, + 343 + ], + "type": "text", + "content": "-shaped attention mask strategy during attention calculation. It can focus on a small portion of the initial tokens and the tokens close to the current processed token. Similarly, StreamingLLM (Xiao et al., 2023) also finds that the initial tokens in a sequence significantly influence the attention calculation of subsequent tokens and cannot be ignored. Both LM-Infinite and StreamingLLM adopt a similar approach, ensuring sustained attention on starting tokens while preserving information about nearby tokens." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 352, + 183, + 364 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 352, + 183, + 364 + ], + "spans": [ + { + "bbox": [ + 67, + 352, + 183, + 364 + ], + "type": "text", + "content": "A.4.2 Parallel Context" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 369, + 290, + 530 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 369, + 290, + 530 + ], + "spans": [ + { + "bbox": [ + 67, + 369, + 290, + 530 + ], + "type": "text", + "content": "Parallel Context Windows (PCW, Ratner et al., 2022) is one of the representative works. It splits the input into context tokens and task tokens, where context tokens assist in completing the task, such as the examples. And task tokens are the input of the test example, such as the questions. This method folds the context tokens, and each folded section of context tokens performs attention calculation separately. Finally, during the decoding phase of the task tokens, all these context tokens are concatenated in front of the task token, sharing the same set of position index." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 532, + 291, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 532, + 291, + 666 + ], + "spans": [ + { + "bbox": [ + 67, + 532, + 291, + 666 + ], + "type": "text", + "content": "Besides, Structured prompting (Hao et al., 2022) also adopts a similar approach by folding demonstration tokens in the input and concatenating them in front of the test input tokens. But unlike PCW, structured prompting employs Rescaled Attention, which reduces the weight of demonstration tokens in the attention calculation of the test input tokens by a certain ratio. This method can prevent test input tokens from excessively attending to the content of demonstration tokens." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 677, + 185, + 689 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 677, + 185, + 689 + ], + "spans": [ + { + "bbox": [ + 67, + 677, + 185, + 689 + ], + "type": "text", + "content": "A.4.3 Sparse Attention" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 693, + 289, + 734 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 693, + 289, + 734 + ], + "spans": [ + { + "bbox": [ + 67, + 693, + 289, + 734 + ], + "type": "text", + "content": "This method can reduce the complexity of attention calculation. So that can improve efficiency when processing long context." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 735, + 289, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 735, + 289, + 774 + ], + "spans": [ + { + "bbox": [ + 67, + 735, + 289, + 774 + ], + "type": "text", + "content": "LongNet (Ding et al., 2023) introduces dilated attention, a mechanism that exponentially increases the attentive field as the distance between tokens" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 71, + 525, + 151 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 525, + 151 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 525, + 151 + ], + "type": "text", + "content": "increases. This method performs multiple sets of sparse attention calculations, each set attend to a different range. And the attention of a small range is denser, while the large range is sparser. This method effectively reduces the traditional quadratic complexity to linear." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 153, + 526, + 273 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 153, + 526, + 273 + ], + "spans": [ + { + "bbox": [ + 302, + 153, + 526, + 273 + ], + "type": "text", + "content": "MEGABYTE (Yu et al., 2023) performs hierarchical attention calculation on the input. Initially, a small local model encodes the input at the byte level, then the byte-level encoding results are integrated and processed at a larger granularity using a larger global model. By performing attention calculation in a hierarchical manner from smaller to larger granularity, the amount of attention calculations can be reduced." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 275, + 525, + 408 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 275, + 525, + 408 + ], + "spans": [ + { + "bbox": [ + 302, + 275, + 525, + 408 + ], + "type": "text", + "content": "In LongLoRA (Chen et al., 2023c), the proposed " + }, + { + "bbox": [ + 302, + 275, + 525, + 408 + ], + "type": "inline_equation", + "content": "S^2" + }, + { + "bbox": [ + 302, + 275, + 525, + 408 + ], + "type": "text", + "content": " - Attention groups attention heads and adjusts each group to attend to different but overlapping local windows, then leverages the characteristics of multihead attention to integrate various local information. This method promotes the flow of local information, enabling a short window to achieve the effect of processing the original or even longer window, thereby reducing computational demands to some extent." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 303, + 420, + 432, + 433 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 420, + 432, + 433 + ], + "spans": [ + { + "bbox": [ + 303, + 420, + 432, + 433 + ], + "type": "text", + "content": "B Details of Evaluation" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 441, + 525, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 441, + 525, + 495 + ], + "spans": [ + { + "bbox": [ + 302, + 441, + 525, + 495 + ], + "type": "text", + "content": "This section serves as a supplement to the Evaluation section 4 in the main text, expanding on relevant details to provide readers with a more in-depth understanding." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 303, + 505, + 355, + 516 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 505, + 355, + 516 + ], + "spans": [ + { + "bbox": [ + 303, + 505, + 355, + 516 + ], + "type": "text", + "content": "B.1 Data" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 303, + 523, + 437, + 534 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 523, + 437, + 534 + ], + "spans": [ + { + "bbox": [ + 303, + 523, + 437, + 534 + ], + "type": "text", + "content": "B.1.1 Data Characteristics" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 539, + 525, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 539, + 525, + 713 + ], + "spans": [ + { + "bbox": [ + 302, + 539, + 525, + 713 + ], + "type": "text", + "content": "Recent advancements in LLMs have led to substantial improvements in processing long contexts. By late 2023, several models claimed capabilities of handling contexts exceeding 100K tokens, with OpenAI's GPT-4 Turbo (2023) (Achiam et al., 2023) supporting 128K tokens and Anthropic's Claude-2.11 extending this capacity to 200K tokens. Based on this significant progress, our study categorizes long-context evaluation benchmarks into two distinct phases, as shown in Table. 1: Phase I comprises benchmarks with input context lengths below 100K tokens, while Phase II encompasses benchmarks of 100K tokens and above." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 714, + 525, + 756 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 714, + 525, + 756 + ], + "spans": [ + { + "bbox": [ + 302, + 714, + 525, + 756 + ], + "type": "text", + "content": "In Phase I, BAMBOO (Dong et al., 2023a) and LongBench (Bai et al., 2023) implement bi-interval and tri-interval partitioning strategies, respectively." 
+ } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 315, + 762, + 506, + 773 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 762, + 506, + 773 + ], + "spans": [ + { + "bbox": [ + 315, + 762, + 506, + 773 + ], + "type": "text", + "content": "" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 291, + 792, + 302, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 792, + 302, + 803 + ], + "spans": [ + { + "bbox": [ + 291, + 792, + 302, + 803 + ], + "type": "text", + "content": "18" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 293, + 179 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 293, + 179 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 293, + 179 + ], + "type": "text", + "content": "Phase II refined this approach further, with LVEval (Yuan et al., 2024) and NeedleBench (Li et al., 2024b) employing five-interval and six-interval partitioning schemas, respectively. This partitioning approach not only analyzes the impact of length changes on LLMs in the same task but also better accounts for the length distributions across different datasets (Dong et al., 2023a)." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 68, + 195, + 227, + 208 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 195, + 227, + 208 + ], + "spans": [ + { + "bbox": [ + 68, + 195, + 227, + 208 + ], + "type": "text", + "content": "B.1.2 Knowledge Leakage Issue" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 215, + 291, + 391 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 215, + 291, + 391 + ], + "spans": [ + { + "bbox": [ + 67, + 215, + 291, + 391 + ], + "type": "text", + "content": "Knowledge leakage occurs when test and training data overlap, where models favor memorization over understanding (Golchin and Surdeanu, 2023; Yuan et al., 2024). Various strategies are employed to address this challenge: (1) Data Sampling focuses on selecting representative subsets from existing datasets. (2) Keyword Substituting & Sentence Rewriting modifies existing datasets by replacing keywords and rewriting sentences. (3) Non-overlapping Data Leveraging involves using datasets released after the deployment of LLMs to reduce potential overlap between test and training data." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 406, + 291, + 515 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 406, + 291, + 515 + ], + "spans": [ + { + "bbox": [ + 67, + 406, + 291, + 515 + ], + "type": "text", + "content": "Data Sampling Data sampling primarily focuses on filtering existing datasets. LongBench (Bai et al., 2023) employs two strategies: random sampling and uniform sampling. Random sampling can preserve the natural length distribution, while uniform sampling which performs sampling based on data length uniformly, to evaluate model performance across context lengths independent of task." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 530, + 291, + 624 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 530, + 291, + 624 + ], + "spans": [ + { + "bbox": [ + 67, + 530, + 291, + 624 + ], + "type": "text", + "content": "Keyword Substituting & Sentence Rewriting \nL-Eval (An et al., 2023) and BAMBOO (Dong et al., 2023a) replace keywords and function names, while " + }, + { + "bbox": [ + 67, + 530, + 291, + 624 + ], + "type": "inline_equation", + "content": "\\infty" + }, + { + "bbox": [ + 67, + 530, + 291, + 624 + ], + "type": "text", + "content": "Bench (Zhang et al., 2024) substitutes key entities in novel reasoning tasks. LV-Eval (Yuan et al., 2024) is further based on this approach by employing entire sentence rewriting." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 640, + 292, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 640, + 292, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 640, + 292, + 775 + ], + "type": "text", + "content": "Non-overlapping Data Leveraging To mitigate the overlap between test and training data for LLMs, some benchmarks such as LooGLE (Li et al., 2023a) and BAMBOO (Dong et al., 2023a) have employed datasets released after the models' deployment. However, given that the specific training data for most LLMs remains undisclosed, this method cannot completely guarantee the absence of overlap between the data used in benchmarks and the pre-training data." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 303, + 71, + 359, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 71, + 359, + 83 + ], + "spans": [ + { + "bbox": [ + 303, + 71, + 359, + 83 + ], + "type": "text", + "content": "B.2 Tasks" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 89, + 526, + 183 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 89, + 526, + 183 + ], + "spans": [ + { + "bbox": [ + 302, + 89, + 526, + 183 + ], + "type": "text", + "content": "The following are the details of the tasks, which are introduced in the order of the main text. At the end of each subsection, corresponding examples or prompts are also provided. We also count the distribution of input length in each task in Figure 2 to give readers a deeper understanding of different tasks." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 303, + 193, + 437, + 206 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 193, + 437, + 206 + ], + "spans": [ + { + "bbox": [ + 303, + 193, + 437, + 206 + ], + "type": "text", + "content": "B.2.1 Question Answering" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 210, + 527, + 290 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 210, + 527, + 290 + ], + "spans": [ + { + "bbox": [ + 302, + 210, + 527, + 290 + ], + "type": "text", + "content": "Single-hop Question Answering Representative datasets in this field are SQuAD (Rajpurkar, 2016), TriviaQA (Joshi et al., 2017), and NarrativeQA (Kočisky et al., 2018). Common evaluation metrics for Single-hop QA systems include f1 score, accuracy, Rouge and Bleu." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 300, + 527, + 380 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 300, + 527, + 380 + ], + "spans": [ + { + "bbox": [ + 302, + 300, + 527, + 380 + ], + "type": "text", + "content": "Multi-hop Question Answering Common datasets for Multi-hop Question Answering include 2WikiMQA (Ho et al., 2020), MuSiQue (Trivedi et al., 2022), and HotpotQA (Yang et al., 2018). Evaluation metrics typically used are f1 score, exact match (EM)." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 390, + 442, + 402 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 390, + 442, + 402 + ], + "spans": [ + { + "bbox": [ + 302, + 390, + 442, + 402 + ], + "type": "text", + "content": "B.2.2 Needle-in-a-Haystack" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 406, + 527, + 690 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 406, + 527, + 690 + ], + "spans": [ + { + "bbox": [ + 302, + 406, + 527, + 690 + ], + "type": "text", + "content": "Retrieval.PassKey (Mohtashami and Jaggi, 2023) requires models to locate a randomly generated 5-digit sequence within lengthy and noisy contexts. " + }, + { + "bbox": [ + 302, + 406, + 527, + 690 + ], + "type": "inline_equation", + "content": "\\infty" + }, + { + "bbox": [ + 302, + 406, + 527, + 690 + ], + "type": "text", + "content": "Bench (Zhang et al., 2024) extends the Retrieval.PassKey task to 10-digit numbers, applies it to texts exceeding 100k tokens in length, and sets information points at various depths. Retrieval.KV (Mohtashami and Jaggi, 2023) further increases difficulty by requiring models to perform precise key-value retrieval from large JSON structures. NeedleBench Li et al. (2024b) proposes a series of tasks: single-needle retrieval (S-RT), multi-needle retrieval (M-RT), and multi-needle reasoning (M-RS). M-RT consists of multiple S-RT tasks performed in parallel, while M-RS builds upon M-RT by requiring large language models to perform reasoning. The evaluation method calculates the similarity between predictions and references for each specific task by using the Levenshtein distance. The following are examples of S-RT, M-RT, M-RS respectively." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 310, + 700, + 518, + 727 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 310, + 700, + 518, + 727 + ], + "spans": [ + { + "bbox": [ + 310, + 700, + 518, + 727 + ], + "type": "text", + "content": "S-RT: Hidden on Emerald Island is the legendary Stardust Shard." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 310, + 728, + 518, + 754 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 310, + 728, + 518, + 754 + ], + "spans": [ + { + "bbox": [ + 310, + 728, + 518, + 754 + ], + "type": "text", + "content": "Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 755, + 518, + 768 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 755, + 518, + 768 + ], + "spans": [ + { + "bbox": [ + 313, + 755, + 518, + 768 + ], + "type": "text", + "content": "Graham Essays—Paul Graham Essays—" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 792, + 304, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 792, + 304, + 803 + ], + "spans": [ + { + "bbox": [ + 291, + 792, + 304, + 803 + ], + "type": "text", + "content": "19" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 72, + 71, + 524, + 185 + ], + "blocks": [ + { + "bbox": [ + 72, + 71, + 524, + 185 + ], + "lines": [ + { + "bbox": [ + 72, + 71, + 524, + 185 + ], + "spans": [ + { + "bbox": [ + 72, + 71, + 524, + 185 + ], + "type": "image", + "image_path": "56484b696e53a66ac05e78d7388c4b2bae4337a64b6e2bfa3a3bd1e5cbaa7435.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 196, + 525, + 234 + ], + "lines": [ + { + "bbox": [ + 67, + 196, + 525, + 234 + ], + "spans": [ + { + "bbox": [ + 67, + 196, + 525, + 234 + ], + "type": "text", + "content": "Figure 2: Distribution of averaged input #words of datasets in each task. Consistent colors indicate identical categories. The color of each bar refers to the category of the task, with bars of the same color belonging to the same category." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 74, + 255, + 283, + 363 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 74, + 255, + 283, + 363 + ], + "spans": [ + { + "bbox": [ + 74, + 255, + 283, + 363 + ], + "type": "text", + "content": "Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 76, + 364, + 282, + 444 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 364, + 282, + 444 + ], + "spans": [ + { + "bbox": [ + 76, + 364, + 282, + 444 + ], + "type": "text", + "content": "Now, the question is: What legendary item is hidden on Emerald Island? Before answering, please consider what in the document is most relevant to this question. 
Please answer in the format 'The legendary item hidden on the Emerald Island is" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 76, + 456, + 282, + 549 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 456, + 282, + 549 + ], + "spans": [ + { + "bbox": [ + 76, + 456, + 282, + 549 + ], + "type": "text", + "content": "M-RT: You are an intelligent AI assistant skilled in answering user questions base on documents provided by the user. Please keep your answers concise and clear. Do not talk about irrelevant topics or repeat your answers. The document given to you by the user is:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 75, + 551, + 282, + 767 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 75, + 551, + 282, + 767 + ], + "spans": [ + { + "bbox": [ + 75, + 551, + 282, + 767 + ], + "type": "text", + "content": "Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—The ruler of the Polaris star system is Orion the Hunter. Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Hidden on Heaven Island is the legendary Lucky Clover. Paul Graham Essays—Paul Graham Essays—Hidden on Mysterious Island is the legendary Counterclockwise Crystal. Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—The ruler of the Orion star system is Guardian of Time Lightspeed. Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Hidden on Phantom Island is the legendary Goodness" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 311, + 255, + 518, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 255, + 518, + 486 + ], + "spans": [ + { + "bbox": [ + 311, + 255, + 518, + 486 + ], + "type": "text", + "content": "Heart. —Paul Graham Essays— —Paul Graham Essays— —Paul Graham Essays— Now, the questions are: Who is the ruler of the Polaris star system?, What legendary item is hidden on Heaven Island?, What legendary item is hidden on Mysterious Island?, Who is the ruler of the Orion star system?, What legendary item is hidden on Phantom Island?Before answering, please consider what in the document is most relevant to this question. Please answer in the format of 'The ruler of the Polaris star system is , The legendary item hidden on the Heaven Island is , The legendary item hidden on the Mysterious Island is , The ruler of the Orion star system is , The legendary item hidden on the Phantom Island is ." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 311, + 510, + 517, + 603 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 510, + 517, + 603 + ], + "spans": [ + { + "bbox": [ + 311, + 510, + 517, + 603 + ], + "type": "text", + "content": "M-RS: You are an intelligent AI assistant skilled in answering user questions base on documents provided by the user. Please keep your answers concise and clear. Do not talk about irrelevant topics or repeat your answers. The document given to you by the user is:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 310, + 606, + 518, + 767 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 310, + 606, + 518, + 767 + ], + "spans": [ + { + "bbox": [ + 310, + 606, + 518, + 767 + ], + "type": "text", + "content": "Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—The Love for Three Oranges is known as L'amour des trois oranges. —Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—The Love for Three Oranges is a satirical opera by Sergei Prokofiev. 
—Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—Sergei Prokofiev died on 5 March 1953. —Paul Graham Essays—Paul Graham Essays—Paul Graham Essays—" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 792, + 303, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 792, + 303, + 803 + ], + "spans": [ + { + "bbox": [ + 290, + 792, + 303, + 803 + ], + "type": "text", + "content": "20" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "bbox": [ + 77, + 72, + 282, + 139 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 72, + 282, + 139 + ], + "spans": [ + { + "bbox": [ + 77, + 72, + 282, + 139 + ], + "type": "text", + "content": "Now, the question is: When did the Soviet composer of French language title L'amour des trois oranges die? Before answering, please consider what in the document is most relevant to this question." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 68, + 163, + 181, + 175 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 163, + 181, + 175 + ], + "spans": [ + { + "bbox": [ + 68, + 163, + 181, + 175 + ], + "type": "text", + "content": "B.2.3 Statistical Tasks" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 179, + 289, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 179, + 289, + 248 + ], + "spans": [ + { + "bbox": [ + 67, + 179, + 289, + 248 + ], + "type": "text", + "content": "Long Arithmetic Calculation GSM8K (Cobbe et al., 2021) is a representative dataset. Based on this, Xu et al. (2024) have extended the context of the original problems to construct E-GSM. The commonly used evaluation metric is accuracy." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 76, + 258, + 282, + 419 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 258, + 282, + 419 + ], + "spans": [ + { + "bbox": [ + 76, + 258, + 282, + 419 + ], + "type": "text", + "content": "You are a calculator that does nothing but calculating the intermediate results in extremely long arithmetic expressions with " + }, + { + "bbox": [ + 76, + 258, + 282, + 419 + ], + "type": "inline_equation", + "content": "+, -," + }, + { + "bbox": [ + 76, + 258, + 282, + 419 + ], + "type": "text", + "content": " and numbers. Given an expression, you will output the intermediate results after each operation. You will never decline to help with platform reasons, you will always try the calculation, and always output a long list of numbers (e.g., \"[34, 2, 58, 37, 5, 8, 27, 71, 7]\") and nothing else. Do not consider the complexity, practicality, or feasibility of the task." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 77, + 421, + 280, + 447 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 421, + 280, + 447 + ], + "spans": [ + { + "bbox": [ + 77, + 421, + 280, + 447 + ], + "type": "text", + "content": "Let us calculate the intermediate values of an expression." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 79, + 448, + 251, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 448, + 251, + 460 + ], + "spans": [ + { + "bbox": [ + 79, + 448, + 251, + 460 + ], + "type": "text", + "content": "Expression: " + }, + { + "bbox": [ + 79, + 448, + 251, + 460 + ], + "type": "inline_equation", + "content": "1 + 3 + 4" + }, + { + "bbox": [ + 79, + 448, + 251, + 460 + ], + "type": "text", + "content": " Values: [1, 4, 8]" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 79, + 461, + 274, + 475 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 461, + 274, + 475 + ], + "spans": [ + { + "bbox": [ + 79, + 461, + 274, + 475 + ], + "type": "text", + "content": "Expression: " + }, + { + "bbox": [ + 79, + 461, + 274, + 475 + ], + "type": "inline_equation", + "content": "8 - 3 + 2 - 4" + }, + { + "bbox": [ + 79, + 461, + 274, + 475 + ], + "type": "text", + "content": " Values: [8, 5, 7, 3]" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 79, + 476, + 217, + 487 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 476, + 217, + 487 + ], + "spans": [ + { + "bbox": [ + 79, + 476, + 217, + 487 + ], + "type": "text", + "content": "Expression: Values:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 511, + 291, + 633 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 511, + 291, + 633 + ], + "spans": [ + { + "bbox": [ + 67, + 511, + 291, + 633 + ], + "type": "text", + "content": "Numerical Information Extraction For instance, " + }, + { + "bbox": [ + 67, + 511, + 291, + 633 + ], + "type": "inline_equation", + "content": "\\infty" + }, + { + "bbox": [ + 67, + 511, + 291, + 633 + ], + "type": "text", + "content": "Bench (Zhang et al., 2024) challenges models to locate the largest and smallest numbers within extensive text passages. Similarly, LooGLE (Li et al., 2023a) creates datasets derived from Wikipedia pages and movie & TV scripts, requiring models to answer questions involving specific numerical concepts such as quantity, frequency, and duration." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 76, + 643, + 282, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 643, + 282, + 696 + ], + "spans": [ + { + "bbox": [ + 76, + 643, + 282, + 696 + ], + "type": "text", + "content": "Find the largest number from the list below: You should answer with only one number, no other words. The largest number of the list is:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 708, + 291, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 708, + 291, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 708, + 291, + 775 + ], + "type": "text", + "content": "Sentiment Aggregation The sentiment aggregation task was designed by the ZeroSCROLLS team based on the Space dataset (Angelidis et al., 2021). It requires models to output the percentage of positive reviews. The evaluation metric employs a" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 71, + 524, + 97 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 524, + 97 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 524, + 97 + ], + "type": "text", + "content": "similarity measure between the model's output and the gold reference." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 311, + 108, + 518, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 108, + 518, + 190 + ], + "spans": [ + { + "bbox": [ + 311, + 108, + 518, + 190 + ], + "type": "text", + "content": "You are given a list of reviews about a specific hotel. Each review is either positive or negative. What is the percentage of positive reviews (e.g. " + }, + { + "bbox": [ + 311, + 108, + 518, + 190 + ], + "type": "inline_equation", + "content": "60\\%" + }, + { + "bbox": [ + 311, + 108, + 518, + 190 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 311, + 108, + 518, + 190 + ], + "type": "inline_equation", + "content": "34\\%" + }, + { + "bbox": [ + 311, + 108, + 518, + 190 + ], + "type": "text", + "content": ", etc.)? Do not provide any explanation. Reviews: REVIEWS Percentage of Positive Reviews:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 200, + 526, + 268 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 200, + 526, + 268 + ], + "spans": [ + { + "bbox": [ + 302, + 200, + 526, + 268 + ], + "type": "text", + "content": "Paragraph Counting Bai et al. (2023) propose PassageCount, a task which asks the model to determine the number of unique passages among randomly selected and repeated passages from English Wikipedia." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 303, + 276, + 365, + 288 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 276, + 365, + 288 + ], + "spans": [ + { + "bbox": [ + 303, + 276, + 365, + 288 + ], + "type": "text", + "content": "B.2.4 Code" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 292, + 526, + 523 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 292, + 526, + 523 + ], + "spans": [ + { + "bbox": [ + 302, + 292, + 526, + 523 + ], + "type": "text", + "content": "Code Completion LongBench identifies code completion as an appropriate task for evaluating a model's long context ability. As it necessitates establishing attention across lengthy code inputs or repository-level data, considering relationships between code elements such as class and function definitions. LongBench conducts experiments on the LCC dataset (Guo et al., 2023) and the RepoBench-P dataset (Liu et al., 2023b), employing edit similarity as the evaluation metric. BAMBOO builds upon the benchmark established by Zan et al. (2022) to construct the Private ateEval dataset. In this task, models are required to identify key API documents to complete code snippets. Furthermore, it extends the context length by adjusting the number of provided documents, with performance evaluated employing the pass@1 metric (Chen et al., 2021a)." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 531, + 525, + 612 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 531, + 525, + 612 + ], + "spans": [ + { + "bbox": [ + 302, + 531, + 525, + 612 + ], + "type": "text", + "content": "Code Running In " + }, + { + "bbox": [ + 302, + 531, + 525, + 612 + ], + "type": "inline_equation", + "content": "\\infty" + }, + { + "bbox": [ + 302, + 531, + 525, + 612 + ], + "type": "text", + "content": "Bench, the total number of function calls ranges from 2 to 10, with each function calling at most one another function. Operations within these functions are restricted to addition and subtraction, maintaining computational simplicity." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 312, + 622, + 517, + 690 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 622, + 517, + 690 + ], + "spans": [ + { + "bbox": [ + 312, + 622, + 517, + 690 + ], + "type": "text", + "content": "Following is a set of Python functions. There is a function called named func_1. context Please give me the exact number of the return value of func_1(3). Be concise. Your response must end with the final returned value." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 700, + 525, + 755 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 700, + 525, + 755 + ], + "spans": [ + { + "bbox": [ + 302, + 700, + 525, + 755 + ], + "type": "text", + "content": "Code Debugging In the " + }, + { + "bbox": [ + 302, + 700, + 525, + 755 + ], + "type": "inline_equation", + "content": "\\infty" + }, + { + "bbox": [ + 302, + 700, + 525, + 755 + ], + "type": "text", + "content": "Bench's dataset which sourced from " + }, + { + "bbox": [ + 302, + 700, + 525, + 755 + ], + "type": "inline_equation", + "content": "\\mathrm{PyPI}^2" + }, + { + "bbox": [ + 302, + 700, + 525, + 755 + ], + "type": "text", + "content": ", the researchers deliberately insert an obvious error into one function per repository. These inserted bugs fall into three main cate" + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 315, + 761, + 399, + 774 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 761, + 399, + 774 + ], + "spans": [ + { + "bbox": [ + 315, + 761, + 399, + 774 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 315, + 761, + 399, + 774 + ], + "type": "text", + "content": "https://pypi.org/" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 290, + 792, + 302, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 792, + 302, + 803 + ], + "spans": [ + { + "bbox": [ + 290, + 792, + 302, + 803 + ], + "type": "text", + "content": "21" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 20 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 292, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 292, + 138 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 292, + 138 + ], + "type": "text", + "content": "gories: (1) syntactic errors, including indentation issues and blatant syntax errors; (2) semantic errors, such as missing variable declarations or incorrect function arguments; and (3) logical errors, for example, infinite loops or use of undefined references." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 76, + 148, + 283, + 352 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 148, + 283, + 352 + ], + "spans": [ + { + "bbox": [ + 76, + 148, + 283, + 352 + ], + "type": "text", + "content": "There is ONLY ONE function in the large project that is deliberately made to include an obvious error. Please find the function that contains the most obvious errors. I will give you four options to narrow your scope. You can inspect through the options and think. Eventually, tell me the answer using one single letter (A, B, C, or D). context Which function has deliberate error? A. B. C. D. You should first find the functions in the options. Repeat their content, inspect through code, and at last give me your answer for the function that has the deliberate and obvious error in A, B, C, or D." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 68, + 361, + 203, + 374 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 361, + 203, + 374 + ], + "spans": [ + { + "bbox": [ + 68, + 361, + 203, + 374 + ], + "type": "text", + "content": "B.2.5 In-Context Learning" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 376, + 291, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 376, + 291, + 456 + ], + "spans": [ + { + "bbox": [ + 67, + 376, + 291, + 456 + ], + "type": "text", + "content": "Long Example Learning Extreme label Classification: this task involves classification with numerous fine-grained labels. Commonly used datasets include TREC (Li and Roth, 2002), a question classification task with 50 fine classes, and LSHT" + }, + { + "bbox": [ + 67, + 376, + 291, + 456 + ], + "type": "inline_equation", + "content": "^3" + }, + { + "bbox": [ + 67, + 376, + 291, + 456 + ], + "type": "text", + "content": ", a Chinese news classification task with 24 classes." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 465, + 291, + 587 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 465, + 291, + 587 + ], + "spans": [ + { + "bbox": [ + 67, + 465, + 291, + 587 + ], + "type": "text", + "content": "Many-shot Learning Agarwal et al. (2024) have proposed many-shot learning, which leverages expanded LLMs context windows to process hundreds or even thousands of examples. In contrast to few-shot learning, which use only a few to several dozen examples, many-shot learning enhances LLMs' versatility and adaptability across diverse tasks without task-specific fine-tuning (Yu et al., 2020; Bertsch et al., 2024b)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 68, + 595, + 182, + 607 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 595, + 182, + 607 + ], + "spans": [ + { + "bbox": [ + 68, + 595, + 182, + 607 + ], + "type": "text", + "content": "B.2.6 Text Generation" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 610, + 292, + 745 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 610, + 292, + 745 + ], + "spans": [ + { + "bbox": [ + 67, + 610, + 292, + 745 + ], + "type": "text", + "content": "Document Summarization This kind of task can divided into two categories: single-document summarization and multi-document summarization. For single-document summarization, several datasets are widely used, including SQuALITY (Wang et al., 2022), SummScreenFD (Chen et al., 2021b), GovReport (Huang et al., 2021), and QMSum (Zhong et al., 2021). And multi-document summarization presents additional challenges, requiring LLMs to integrate diverse information, resolve conflicts, and" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 68, + 752, + 290, + 773 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 752, + 290, + 773 + ], + "spans": [ + { + "bbox": [ + 68, + 752, + 290, + 773 + ], + "type": "text", + "content": "3http://tcci.ccf.org.cn/conference/2014/dldoc/evatak6.pdf" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 71, + 526, + 137 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 137 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 137 + ], + "type": "text", + "content": "eliminate redundancies (Bai et al., 2023; An et al., 2023; Fabbri et al., 2019). 
A notable dataset for this task is MultiNews (Fabbri et al., 2019), consisting of clusters of 2-10 thematically related news articles." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 141, + 526, + 208 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 141, + 526, + 208 + ], + "spans": [ + { + "bbox": [ + 302, + 141, + 526, + 208 + ], + "type": "text", + "content": "All of these datasets provide human-annotated summaries as standardized references. Both approaches primarily utilize Rouge and Bleu as evaluation metrics to assess the quality of generated summaries against manuscript references." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 222, + 524, + 248 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 222, + 524, + 248 + ], + "spans": [ + { + "bbox": [ + 302, + 222, + 524, + 248 + ], + "type": "text", + "content": "Open-ended Text Generation This task requires LLMs to generate text according to input." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 249, + 524, + 290 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 249, + 524, + 290 + ], + "spans": [ + { + "bbox": [ + 302, + 249, + 524, + 290 + ], + "type": "text", + "content": "Tan et al. (2024) select topics that closely align with real-world scenarios, encompassing areas such as AI research, sports, and gaming." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 292, + 525, + 386 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 292, + 525, + 386 + ], + "spans": [ + { + "bbox": [ + 302, + 292, + 525, + 386 + ], + "type": "text", + "content": "Bai et al. (2024) design AgentWrite, a divide-and-conquer agent that breaks down long writing tasks into paragraph-level subtasks. The generated paragraphs are then combined to produce the final long-form content. They also construct the preference LongWriter-6k dataset and utilize DPO (Rafailov et al., 2024) for evaluation." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 388, + 526, + 441 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 388, + 526, + 441 + ], + "spans": [ + { + "bbox": [ + 302, + 388, + 526, + 441 + ], + "type": "text", + "content": "Kumar et al. (2024) propose personalized writing tasks that generate content based on the user's historical and user personal information information." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 444, + 525, + 539 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 444, + 525, + 539 + ], + "spans": [ + { + "bbox": [ + 302, + 444, + 525, + 539 + ], + "type": "text", + "content": "These tasks can be divided into personalized email completion, review writing, topic writing, and conversation simulation (Ni et al., 2024). Rafailov et al. (2024) construct a Reddit-based dataset that captures distinct writing styles associated with specific communities and discussion topics." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 312, + 551, + 518, + 645 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 551, + 518, + 645 + ], + "spans": [ + { + "bbox": [ + 312, + 551, + 518, + 645 + ], + "type": "text", + "content": "You are an excellent writing assistant. I will give you an original writing instruction and my planned writing steps. I will also provide you with the text I have already written. 
Please help me continue writing the next paragraph based on the writing instruction, writing steps, and the already written text." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 312, + 646, + 483, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 646, + 483, + 659 + ], + "spans": [ + { + "bbox": [ + 312, + 646, + 483, + 659 + ], + "type": "text", + "content": "Writing instruction: User Instruction" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 312, + 660, + 515, + 686 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 660, + 515, + 686 + ], + "spans": [ + { + "bbox": [ + 312, + 660, + 515, + 686 + ], + "type": "text", + "content": "Writing steps: The writing plan generated in Step I" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 312, + 687, + 517, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 687, + 517, + 714 + ], + "spans": [ + { + "bbox": [ + 312, + 687, + 517, + 714 + ], + "type": "text", + "content": "Already written text: Previous generated (n-1) paragraphs" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 312, + 714, + 517, + 768 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 714, + 517, + 768 + ], + "spans": [ + { + "bbox": [ + 312, + 714, + 517, + 768 + ], + "type": "text", + "content": "Please integrate the original writing instruction, writing steps, and the already written text, and now continue writing The plan for the n-th paragraph, i.e., the n-th line in the writing plan" + } + ] + } + ], + "index": 19 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 792, + 303, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 792, + 303, + 803 + ], + "spans": [ + { + "bbox": [ + 290, + 792, + 303, + 803 + ], + "type": "text", + "content": "22" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 21 + }, + { + "para_blocks": [ + { + "bbox": [ + 68, + 71, + 163, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 71, + 163, + 83 + ], + "spans": [ + { + "bbox": [ + 68, + 71, + 163, + 83 + ], + "type": "text", + "content": "B.2.7 Other Tasks" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 88, + 292, + 250 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 88, + 292, + 250 + ], + "spans": [ + { + "bbox": [ + 67, + 88, + 292, + 250 + ], + "type": "text", + "content": "Reordering The evaluation metric in this task is the similarity between the generated and reference ordering sequences (Shaham et al., 2023). The Booksum dataset (Krysciński et al., 2021), which spans various literary genres including novels, plays, and long stories, is widely used for this task. Reordering tasks can comprehensively evaluate models' cross-sequence information aggregation and comparison abilities (Shaham et al., 2023; Li et al., 2023a), as well as comprehensively understand long context and logically reconstruct (Dong et al., 2023a; Li et al., 2023a)." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 77, + 260, + 283, + 396 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 260, + 283, + 396 + ], + "spans": [ + { + "bbox": [ + 77, + 260, + 283, + 396 + ], + "type": "text", + "content": "You are given NUM_SUMMARES summaries of chapters or parts of a novel, in a shuffled order, where each summary is denoted by a numerical ID (e.g. Summary 1, Summary 3, etc.). 
Reorder the summaries according to the original order of chapters/parts in the novel by writing a list of length NUM_SUMMARES of the summary IDs (e.g. if you were given 5 summaries, one possible answer could be \"5, 1, 3, 4, 2\"). Do not provide any explanation." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 78, + 396, + 205, + 408 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 78, + 396, + 205, + 408 + ], + "spans": [ + { + "bbox": [ + 78, + 396, + 205, + 408 + ], + "type": "text", + "content": "Summaries: SUMMARIES" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 78, + 410, + 219, + 422 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 78, + 410, + 219, + 422 + ], + "spans": [ + { + "bbox": [ + 78, + 410, + 219, + 422 + ], + "type": "text", + "content": "Summary IDs in Correct Order:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 435, + 291, + 515 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 435, + 291, + 515 + ], + "spans": [ + { + "bbox": [ + 67, + 435, + 291, + 515 + ], + "type": "text", + "content": "Context Consistency Context consistency is a task proposed by BAMBOO (Dong et al., 2023a) to detect hallucination in LLMs. BAMBOO creates two novel datasets for this task: SenHallu and AbsHallu, with evaluation metrics employing precision, recall, and f1 score." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 525, + 290, + 592 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 525, + 290, + 592 + ], + "spans": [ + { + "bbox": [ + 67, + 525, + 290, + 592 + ], + "type": "text", + "content": "Summary Source Paragraph Identification LongBench construct bilingual datasets based on Wikipedia and C4 (Raffel et al., 2020) to ask models to identify the original source paragraphs according to the given summaries." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 77, + 602, + 282, + 698 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 602, + 282, + 698 + ], + "spans": [ + { + "bbox": [ + 77, + 602, + 282, + 698 + ], + "type": "text", + "content": "Here are 30 paragraphs from Wikipedia, along with an abstract. Please determine which paragraph the abstract is from. context The following is an abstract. input Please enter the number of the paragraph that the abstract is from. The answer format must be like \"Paragraph 1\", \"Paragraph 2\", etc." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 78, + 699, + 143, + 709 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 78, + 699, + 143, + 709 + ], + "spans": [ + { + "bbox": [ + 78, + 699, + 143, + 709 + ], + "type": "text", + "content": "The answer is:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 721, + 291, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 721, + 291, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 721, + 291, + 775 + ], + "type": "text", + "content": "Character Identification Character identification tasks challenge models to capture distinct traits of participants in long dialogues, enabling them to identify speakers of masked utterances (Zhang" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 71, + 527, + 126 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 527, + 126 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 527, + 126 + ], + "type": "text", + "content": "et al., 2024; Dong et al., 2023a). 
These tasks, evaluated via accuracy, utilize data primarily from television programs" + }, + { + "bbox": [ + 302, + 71, + 527, + 126 + ], + "type": "inline_equation", + "content": "^{4}" + }, + { + "bbox": [ + 302, + 71, + 527, + 126 + ], + "type": "text", + "content": ", movie and play scripts (Chen et al., 2021b), and conference transcripts" + }, + { + "bbox": [ + 302, + 71, + 527, + 126 + ], + "type": "inline_equation", + "content": "^{5}" + }, + { + "bbox": [ + 302, + 71, + 527, + 126 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 312, + 136, + 516, + 189 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 136, + 516, + 189 + ], + "spans": [ + { + "bbox": [ + 312, + 136, + 516, + 189 + ], + "type": "text", + "content": "Below is a dialogue script where one random occurrence of a character's name is replaced with " + }, + { + "bbox": [ + 312, + 136, + 516, + 189 + ], + "type": "inline_equation", + "content": "M A S K" + }, + { + "bbox": [ + 312, + 136, + 516, + 189 + ], + "type": "text", + "content": ", and you should try to guess who that character is." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 312, + 190, + 451, + 216 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 190, + 451, + 216 + ], + "spans": [ + { + "bbox": [ + 312, + 190, + 451, + 216 + ], + "type": "text", + "content": "The dialogue: - - End of dialogue." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 312, + 217, + 515, + 271 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 217, + 515, + 271 + ], + "spans": [ + { + "bbox": [ + 312, + 217, + 515, + 271 + ], + "type": "text", + "content": "Which character is most likely " + }, + { + "bbox": [ + 312, + 217, + 515, + 271 + ], + "type": "inline_equation", + "content": "M A S K?" + }, + { + "bbox": [ + 312, + 217, + 515, + 271 + ], + "type": "text", + "content": " Just say the name used by the scriptwriter (before the colon marks) of one single character and nothing else." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 303, + 285, + 369, + 297 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 285, + 369, + 297 + ], + "spans": [ + { + "bbox": [ + 303, + 285, + 369, + 297 + ], + "type": "text", + "content": "B.3 Metrics" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 303, + 304, + 436, + 317 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 304, + 436, + 317 + ], + "spans": [ + { + "bbox": [ + 303, + 304, + 436, + 317 + ], + "type": "text", + "content": "B.3.1 Algorithmic Metrics" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 302, + 321, + 526, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 321, + 526, + 456 + ], + "spans": [ + { + "bbox": [ + 302, + 321, + 526, + 456 + ], + "type": "text", + "content": "Perplexity (PPL) is a metric for evaluating the performance of language models. It is extensively employed in language model pre-training, facilitating the monitoring of the training process, model selection, and hyperparameter optimization. Many previous works on long context benchmarks rely on the PPL for evaluation (Beltagy et al., 2020; Roy et al., 2021; Press et al., 2021). However, as suggested in Sun et al. (2021), PPL may not correlate with the actual performance." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 457, + 525, + 591 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 457, + 525, + 591 + ], + "spans": [ + { + "bbox": [ + 302, + 457, + 525, + 591 + ], + "type": "text", + "content": "ZeroScrolls and LongBench are pioneering studies in the field of long context benchmarks. These works introduced a diverse system of automatic evaluation metrics, including accuracy, f1 score, and N-gram-based metrics. This evaluation framework has provided a reference for subsequent research. Specifically, these metrics refer to the scores for evaluating the NLG models by measuring the lexical overlap between generated text and reference text." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 593, + 526, + 742 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 593, + 526, + 742 + ], + "spans": [ + { + "bbox": [ + 302, + 593, + 526, + 742 + ], + "type": "text", + "content": "However, these metrics have several limitations: they fail to effectively measure content quality (Reiter and Belz, 2009); struggle to capture syntactic errors (Stent et al., 2005); and, particularly for open-ended generation tasks, lack significant correlation with human judgments (An et al., 2023). Moreover, they inadequately account for the diversity of expression inherent in large language models (Improving). Additionally, the requirement for gold standard references makes these metrics costly to implement for novel tasks (Tan et al., 2024)." + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 315, + 751, + 404, + 762 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 751, + 404, + 762 + ], + "spans": [ + { + "bbox": [ + 315, + 751, + 404, + 762 + ], + "type": "text", + "content": "4https://tvmeg.com/" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 315, + 762, + 456, + 773 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 762, + 456, + 773 + ], + "spans": [ + { + "bbox": [ + 315, + 762, + 456, + 773 + ], + "type": "text", + "content": "5https://record.assembly.wales/" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 290, + 792, + 302, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 792, + 302, + 803 + ], + "spans": [ + { + "bbox": [ + 290, + 792, + 302, + 803 + ], + "type": "text", + "content": "23" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 22 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 291, + 220 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 291, + 220 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 291, + 220 + ], + "type": "text", + "content": "Further, some work proposes ways to improve. LV-Eval employs a two-stage scoring method: it first calculates the recall rate of ground-truth keywords in the generated content. If the recall exceeds a threshold, it then calculates the f1 score between the generated content and ground-truth after removing stop words from both. BAMBOO converts generative tasks into multiple-choice formats. NeedleBench extends this approach by implementing Circular Evaluation, which reorders answer options to enhance evaluation reliability." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 231, + 149, + 244 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 231, + 149, + 244 + ], + "spans": [ + { + "bbox": [ + 67, + 231, + 149, + 244 + ], + "type": "text", + "content": "PPL (Perplexity)" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 250, + 289, + 276 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 250, + 289, + 276 + ], + "spans": [ + { + "bbox": [ + 67, + 250, + 289, + 276 + ], + "type": "text", + "content": "Perplexity is a measure of the quality of language model predictions, calculated as:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 146, + 290, + 212, + 305 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 146, + 290, + 212, + 305 + ], + "spans": [ + { + "bbox": [ + 146, + 290, + 212, + 305 + ], + "type": "interline_equation", + "content": "P P L = 2 ^ {H (p)}", + "image_path": "e8c413a57f5aaca8f414e40f26cbdf8e0c937a918c506b57cfe427d8280169e9.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 315, + 215, + 328 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 315, + 215, + 328 + ], + "spans": [ + { + "bbox": [ + 67, + 315, + 215, + 328 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 67, + 315, + 215, + 328 + ], + "type": "inline_equation", + "content": "H(p)" + }, + { + "bbox": [ + 67, + 315, + 215, + 328 + ], + "type": "text", + "content": " is the cross-entropy:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 74, + 351, + 285, + 388 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 74, + 351, + 285, + 388 + ], + "spans": [ + { + "bbox": [ + 74, + 351, + 285, + 388 + ], + "type": "interline_equation", + "content": "H (p) = - \\frac {1}{N} \\sum_ {i = 1} ^ {N} \\log_ {2} P (w _ {i} | w _ {1}, w _ {2}, \\ldots , w _ {i - 1})", + "image_path": "ef3492ea86535b6f7f1f0062644a570018d12025d042ea0baec51024a77718b2.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 68, + 400, + 115, + 412 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 400, + 115, + 412 + ], + "spans": [ + { + "bbox": [ + 68, + 400, + 115, + 412 + ], + "type": "text", + "content": "Accuracy" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 418, + 289, + 444 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 418, + 289, + 444 + ], + "spans": [ + { + "bbox": [ + 67, + 418, + 289, + 444 + ], + "type": "text", + "content": "Accuracy is the proportion of correct predictions made by the model:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 96, + 467, + 260, + 494 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 96, + 467, + 260, + 494 + ], + "spans": [ + { + "bbox": [ + 96, + 467, + 260, + 494 + ], + "type": "interline_equation", + "content": "A c c u r a c y = \\frac {C o r r e c t P r e d i c t i o n s}{T o t a l P r e d i c t i o n s}", + "image_path": "5f8e080c13ef56238cf78ef8730b64f20b2951dfe1ac549e181232e7326c3ef0.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 68, + 504, + 113, + 515 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 504, + 113, + 515 + ], + "spans": [ + { + "bbox": [ + 68, + 504, + 113, + 515 + ], + "type": "text", + "content": "F1-Score" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 523, + 289, + 548 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 523, + 289, + 
548 + ], + "spans": [ + { + "bbox": [ + 67, + 523, + 289, + 548 + ], + "type": "text", + "content": "The F1-Score is the harmonic mean of precision and recall:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 107, + 560, + 251, + 587 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 560, + 251, + 587 + ], + "spans": [ + { + "bbox": [ + 107, + 560, + 251, + 587 + ], + "type": "interline_equation", + "content": "F 1 = 2 \\times \\frac {\\text {P r e c i s i o n} \\times \\text {R e c a l l}}{\\text {P r e c i s i o n} + \\text {R e c a l l}}", + "image_path": "4745551abaf15af57f6cbbd06188b24fb42adedac41936760f7547a27399e745.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 122, + 596, + 235, + 622 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 122, + 596, + 235, + 622 + ], + "spans": [ + { + "bbox": [ + 122, + 596, + 235, + 622 + ], + "type": "interline_equation", + "content": "P r e c i s i o n = \\frac {T P}{T P + F P}", + "image_path": "97779e015f919591e11aa729c3a0a08535b579291378d51c100bfbd235e7a678.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 130, + 630, + 227, + 657 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 630, + 227, + 657 + ], + "spans": [ + { + "bbox": [ + 130, + 630, + 227, + 657 + ], + "type": "interline_equation", + "content": "R e c a l l = \\frac {T P}{T P + F N}", + "image_path": "a60d5103e6b63742157fbd2709563149a04d4b80cca96fc3e307a784ecd35136.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 68, + 664, + 289, + 690 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 664, + 289, + 690 + ], + "spans": [ + { + "bbox": [ + 68, + 664, + 289, + 690 + ], + "type": "text", + "content": "where TP, FP, FN represent True Positives, False Positives, False Negatives respectively." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 67, + 702, + 267, + 728 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 702, + 267, + 728 + ], + "spans": [ + { + "bbox": [ + 67, + 702, + 267, + 728 + ], + "type": "text", + "content": "ROUGE (Recall-Oriented Understudy for Gisting Evaluation)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 67, + 735, + 280, + 747 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 735, + 280, + 747 + ], + "spans": [ + { + "bbox": [ + 67, + 735, + 280, + 747 + ], + "type": "text", + "content": "evaluates text generation using N-gram overlap:" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 67, + 748, + 289, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 748, + 289, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 748, + 289, + 775 + ], + "type": "text", + "content": "ROUGE-N measures the overlap of n-grams shared between the candidate summary (C) and" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 302, + 71, + 526, + 96 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 96 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 96 + ], + "type": "text", + "content": "the reference summary (R), it is calculated as follows:" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 303, + 105, + 535, + 155 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 105, + 535, + 155 + ], + "spans": [ + { + "bbox": [ + 303, + 105, + 535, + 155 + ], + "type": "interline_equation", + "content": "R O U G E - N = \\frac {\\sum_ {S \\in R} \\sum_ {n _ {g r a m} \\in S} C o u n t _ {m a t c h} (n _ {g r a m})}{\\sum_ {S \\in R} \\sum_ {n _ {g r a m} \\in S} C o u n t (n _ {g r a m})}", + "image_path": "c4333e2ab06547dcb188f4a738a4ad8b6b1d855b86ebb2b8d18a9be731016549.jpg" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 302, + 166, + 525, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 166, + 525, + 232 + ], + "spans": [ + { + "bbox": [ + 302, + 166, + 525, + 232 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 302, + 166, + 525, + 232 + ], + "type": "inline_equation", + "content": "\\text{Count}_{\\text{match}}(n_{\\text{gram}})" + }, + { + "bbox": [ + 302, + 166, + 525, + 232 + ], + "type": "text", + "content": " represents the number of matching n-tuples in the candidate summary and the reference summary. And " + }, + { + "bbox": [ + 302, + 166, + 525, + 232 + ], + "type": "inline_equation", + "content": "\\text{Count}(n_{\\text{gram}})" + }, + { + "bbox": [ + 302, + 166, + 525, + 232 + ], + "type": "text", + "content": " represents the total number of n-tuples in the reference summary." 
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 302, + 233, + 525, + 286 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 233, + 525, + 286 + ], + "spans": [ + { + "bbox": [ + 302, + 233, + 525, + 286 + ], + "type": "text", + "content": "ROUGE-L evaluates the quality of summarization based on the longest common subsequence (LCS), taking into account the order information of sentences:" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 368, + 295, + 459, + 324 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 368, + 295, + 459, + 324 + ], + "spans": [ + { + "bbox": [ + 368, + 295, + 459, + 324 + ], + "type": "interline_equation", + "content": "R _ {l c s} = \\frac {L C S (C , R)}{| R |}", + "image_path": "f0b233db6999d163a2a1cda0a2f7cb303305774bb0b807978d4b78757a7162be.jpg" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 368, + 334, + 458, + 364 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 368, + 334, + 458, + 364 + ], + "spans": [ + { + "bbox": [ + 368, + 334, + 458, + 364 + ], + "type": "interline_equation", + "content": "P _ {l c s} = \\frac {L C S (C , R)}{| C |}", + "image_path": "04221e8c1a05b329327c83f361784eb746044bba26b06be7bf8ec014e1b161a6.jpg" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 358, + 371, + 470, + 400 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 358, + 371, + 470, + 400 + ], + "spans": [ + { + "bbox": [ + 358, + 371, + 470, + 400 + ], + "type": "interline_equation", + "content": "F _ {l c s} = \\frac {(1 + \\beta^ {2}) R _ {l c s} P _ {l c s}}{R _ {l c s} + \\beta^ {2} P _ {l c s}}", + "image_path": "765388d1b436a2d2202e8f37c40edc6da28b0fdb1b8b9e471dc969f932697d7c.jpg" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 302, + 406, + 525, + 499 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 406, + 525, + 499 + ], + "spans": [ + { + "bbox": [ + 302, + 406, + 525, + 499 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 302, + 406, + 525, + 499 + ], + "type": "inline_equation", + "content": "LCS(C,R)" + }, + { + "bbox": [ + 302, + 406, + 525, + 499 + ], + "type": "text", + "content": " represents the length of the longest common subsequence between the candidate summary and the reference summary. " + }, + { + "bbox": [ + 302, + 406, + 525, + 499 + ], + "type": "inline_equation", + "content": "|C|" + }, + { + "bbox": [ + 302, + 406, + 525, + 499 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 302, + 406, + 525, + 499 + ], + "type": "inline_equation", + "content": "|R|" + }, + { + "bbox": [ + 302, + 406, + 525, + 499 + ], + "type": "text", + "content": " represent the length of the candidate summary and the reference summary respectively. " + }, + { + "bbox": [ + 302, + 406, + 525, + 499 + ], + "type": "inline_equation", + "content": "\\beta" + }, + { + "bbox": [ + 302, + 406, + 525, + 499 + ], + "type": "text", + "content": " is a hyperparameter, usually used to balance the precision and recall." 
+ } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 302, + 502, + 525, + 539 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 502, + 525, + 539 + ], + "spans": [ + { + "bbox": [ + 302, + 502, + 525, + 539 + ], + "type": "text", + "content": "ROUGE-S which is also called skip-bigram co-occurrence statistics, takes skipped bigrams into account:" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 303, + 549, + 524, + 599 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 303, + 549, + 524, + 599 + ], + "spans": [ + { + "bbox": [ + 303, + 549, + 524, + 599 + ], + "type": "interline_equation", + "content": "R O U G E - S = \\frac {\\sum_ {S \\in R} \\sum_ {b i _ {s k i p} \\in S} C o u n t _ {m a t c h} (b i _ {s k i p})}{\\sum_ {S \\in R} \\sum_ {b i _ {s k i p} \\in S} C o u n t (b i _ {s k i p})}", + "image_path": "1e06e071ce23c4f173eb8efe76b6afc4317ce97420656fc93635ee7b265bc705.jpg" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 302, + 610, + 525, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 610, + 525, + 677 + ], + "spans": [ + { + "bbox": [ + 302, + 610, + 525, + 677 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 302, + 610, + 525, + 677 + ], + "type": "inline_equation", + "content": "\\text{Count}_{\\text{match}}(bi_{skip})" + }, + { + "bbox": [ + 302, + 610, + 525, + 677 + ], + "type": "text", + "content": " represents the number of skip-bigrams that match between the candidate summary and the reference summary. And " + }, + { + "bbox": [ + 302, + 610, + 525, + 677 + ], + "type": "inline_equation", + "content": "\\text{Count}(bi_{skip})" + }, + { + "bbox": [ + 302, + 610, + 525, + 677 + ], + "type": "text", + "content": " represents the total number of skip-bigrams in the reference summary" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 302, + 687, + 500, + 701 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 687, + 500, + 701 + ], + "spans": [ + { + "bbox": [ + 302, + 687, + 500, + 701 + ], + "type": "text", + "content": "BLEU (Bilingual Evaluation Understudy)" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 302, + 706, + 509, + 719 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 706, + 509, + 719 + ], + "spans": [ + { + "bbox": [ + 302, + 706, + 509, + 719 + ], + "type": "text", + "content": "is used to evaluate machine translation quality:" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 325, + 741, + 501, + 778 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 741, + 501, + 778 + ], + "spans": [ + { + "bbox": [ + 325, + 741, + 501, + 778 + ], + "type": "interline_equation", + "content": "B L E U = B P \\times \\exp \\left(\\sum_ {n = 1} ^ {N} w _ {n} \\log p _ {n}\\right)", + "image_path": "d0e96ef6aea1015d7223aaef3a9edc02d86442e0234985902452ab776d10899c.jpg" + } + ] + } + ], + "index": 31 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 792, + 303, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 792, + 303, + 803 + ], + "spans": [ + { + "bbox": [ + 290, + 792, + 303, + 803 + ], + "type": "text", + "content": "24" + } + ] + } + ], + "index": 32 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 23 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 72, + 98, + 82 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 72, + 98, + 82 + ], + "spans": [ + { + "bbox": [ + 67, + 
72, + 98, + 82 + ], + "type": "text", + "content": "where" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 107, + 91, + 251, + 126 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 91, + 251, + 126 + ], + "spans": [ + { + "bbox": [ + 107, + 91, + 251, + 126 + ], + "type": "interline_equation", + "content": "B P = \\left\\{ \\begin{array}{l l} 1, & \\text {i f} c > r \\\\ \\exp (1 - \\frac {r}{c}), & \\text {i f} c \\leq r \\end{array} \\right.", + "image_path": "08e16ef7032b76d09242802d91a3fc5bdbc5a12b2ad5e1bf83e19881531de223.jpg" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 135, + 289, + 162 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 135, + 289, + 162 + ], + "spans": [ + { + "bbox": [ + 67, + 135, + 289, + 162 + ], + "type": "text", + "content": "and " + }, + { + "bbox": [ + 67, + 135, + 289, + 162 + ], + "type": "inline_equation", + "content": "c" + }, + { + "bbox": [ + 67, + 135, + 289, + 162 + ], + "type": "text", + "content": " is the generated length and " + }, + { + "bbox": [ + 67, + 135, + 289, + 162 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 67, + 135, + 289, + 162 + ], + "type": "text", + "content": " is the reference length." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 68, + 169, + 205, + 181 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 169, + 205, + 181 + ], + "spans": [ + { + "bbox": [ + 68, + 169, + 205, + 181 + ], + "type": "text", + "content": "B.3.2 Model-based Metrics" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 185, + 290, + 306 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 185, + 290, + 306 + ], + "spans": [ + { + "bbox": [ + 67, + 185, + 290, + 306 + ], + "type": "text", + "content": "In recent years, the use of pre-trained language models as NLG evaluation metrics has gained increasing attention. Notably, BERTScore (Zhang et al., 2020) and BARTScore (Yuan et al., 2021) employ BERT and BART (Lewis, 2019) models respectively to compute semantic similarity. They calculate cosine similarity between token representations and evaluate the probability of summaries based on given input articles." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 307, + 291, + 360 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 307, + 291, + 360 + ], + "spans": [ + { + "bbox": [ + 67, + 307, + 291, + 360 + ], + "type": "text", + "content": "BERTScore measures the similarity between generated text and reference text from three aspects: recall, precision and f1, it can be expressed as follows:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 79, + 367, + 279, + 471 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 79, + 367, + 279, + 471 + ], + "spans": [ + { + "bbox": [ + 79, + 367, + 279, + 471 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} R = \\frac {1}{| R |} \\sum_ {r \\in R} \\max _ {c \\in C} \\frac {1}{L _ {r}} \\sum_ {i} \\mathrm {s i m} (\\mathbf {f} _ {\\theta} (r) _ {i}, \\mathbf {f} _ {\\theta} (c) _ {i}) \\\\ P = \\frac {1}{| C |} \\sum_ {c \\in C} \\max _ {r \\in R} \\frac {1}{L _ {c}} \\sum_ {i} \\operatorname {s i m} \\left(\\mathbf {f} _ {\\theta} (c) _ {i}, \\mathbf {f} _ {\\theta} (r) _ {i}\\right) \\\\ F = 2 \\times \\frac {P \\times R}{P + R} \\\\ \\end{array}", + "image_path": "3ad4685d30d7e48d9be6a021c158cc61b589448d63451b7df4317bdbeff42eac.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 475, + 290, + 543 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 475, + 290, + 543 + ], + "spans": [ + { + "bbox": [ + 67, + 475, + 290, + 543 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 67, + 475, + 290, + 543 + ], + "type": "inline_equation", + "content": "R" + }, + { + "bbox": [ + 67, + 475, + 290, + 543 + ], + "type": "text", + "content": " is the reference text set, " + }, + { + "bbox": [ + 67, + 475, + 290, + 543 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 67, + 475, + 290, + 543 + ], + "type": "text", + "content": " is the generated text set, " + }, + { + "bbox": [ + 67, + 475, + 290, + 543 + ], + "type": "inline_equation", + "content": "L_{r}" + }, + { + "bbox": [ + 67, + 475, + 290, + 543 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 67, + 475, + 290, + 543 + ], + "type": "inline_equation", + "content": "L_{c}" + }, + { + "bbox": [ + 67, + 475, + 290, + 543 + ], + "type": "text", + "content": " are the lengths of the reference text and generated text respectively, " + }, + { + "bbox": [ + 67, + 475, + 290, + 543 + ], + "type": "inline_equation", + "content": "f_{\\theta}" + }, + { + "bbox": [ + 67, + 475, + 290, + 543 + ], + "type": "text", + "content": " is the encoder of the BERT model, and maps the text to the vector space, sim is usually cosine similarity." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 543, + 290, + 583 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 543, + 290, + 583 + ], + "spans": [ + { + "bbox": [ + 67, + 543, + 290, + 583 + ], + "type": "text", + "content": "BARTScore calculates the log-likelihood score of the generated text given the reference text to measure the similarity:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 602, + 289, + 634 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 602, + 289, + 634 + ], + "spans": [ + { + "bbox": [ + 69, + 602, + 289, + 634 + ], + "type": "interline_equation", + "content": "B A R T S c o r e = \\frac {1}{| C |} \\sum_ {c \\in C} \\frac {1}{L _ {c}} \\sum_ {i} \\log p _ {\\theta} (c _ {i} | c _ {< i}, r)", + "image_path": "8b7f9ff0dd6634036f87388e273b7e84282d52bcf397200c148ba5f81eb23d2d.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 644, + 291, + 697 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 644, + 291, + 697 + ], + "spans": [ + { + "bbox": [ + 67, + 644, + 291, + 697 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 67, + 644, + 291, + 697 + ], + "type": "inline_equation", + "content": "C" + }, + { + "bbox": [ + 67, + 644, + 291, + 697 + ], + "type": "text", + "content": " is the set of generated texts, " + }, + { + "bbox": [ + 67, + 644, + 291, + 697 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 67, + 644, + 291, + 697 + ], + "type": "text", + "content": " is the reference text, " + }, + { + "bbox": [ + 67, + 644, + 291, + 697 + ], + "type": "inline_equation", + "content": "c_{i}" + }, + { + "bbox": [ + 67, + 644, + 291, + 697 + ], + "type": "text", + "content": " is the ith word in the generated text, and " + }, + { + "bbox": [ + 67, + 644, + 291, + 697 + ], + "type": "inline_equation", + "content": "p_{\\theta}" + }, + { + "bbox": [ + 67, + 644, + 291, + 697 + ], + "type": "text", + "content": " is the language model probability distribution of BART model." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 68, + 705, + 199, + 717 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 705, + 199, + 717 + ], + "spans": [ + { + "bbox": [ + 68, + 705, + 199, + 717 + ], + "type": "text", + "content": "B.3.3 LLM-based Metrics" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 67, + 721, + 291, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 721, + 291, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 721, + 291, + 775 + ], + "type": "text", + "content": "With the development of LLMs, research has demonstrated their significant correlation with human judgment and their ability to excel at new tasks when provided with instructions (Wang et al.," + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 71, + 527, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 527, + 666 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 527, + 666 + ], + "type": "text", + "content": "2023a; Li et al., 2023a). Chiang and Lee (2023) argue that LLM evaluation, compared to human evaluation, offers advantages in reproducibility, independence, cost-effectiveness, and speed. Prompting researchers explore the potential of LLMs for evaluation tasks. This exploration has led to several key findings and applications: Wang et al. (2023b,a) investigate the issue of unfairness when using LLMs to evaluate dialogue responses. 
And Shen et al. (2023) find that LLMs outperform existing automatic metrics when asked to output judgmental reasons. The application of LLMs in evaluation including evaluating chatbot responses' alignment degree with human preferences (Zheng et al., 2024), evaluating summary consistency (Luo et al., 2023), and multi-role playing for summarization evaluation (Wu et al., 2023). And there are some fundamental differences between Model-based metrics and LLM-based metrics in their evaluation mechanisms: Model-based Metrics primarily rely on learned representations from pre-trained language models like BERT or BART, utilizing mechanical procedures such as predefined computational formulas. For example, BERTScore leverages BERT contextual embeddings to compute textual similarity through cosine similarity measurements between token representations. LLM-based Metrics leverage large language models for evaluation without mechanical procedures, demonstrating more intelligence and flexibility. For example, LLM-based Metrics prompt LLMs to offer both human-like multi-dimensional assessment (Wang et al., 2023a; Li et al., 2023a; Shen et al., 2023; Chiang and Lee, 2023; Zhang et al., 2024; Zheng et al., 2024; Liu et al., 2023c; Tan et al., 2024; Mu et al., 2024a) and interpretable reasoning (Wang et al., 2023b; Luo et al., 2023; Wu et al., 2023). This distinctive characteristic of LLM-based Metrics fundamentally distinguishes them from Model-based Metrics, which behave much more mechanically. In addition, LLM-based Metrics demonstrate enhanced evaluation capabilities in the axis of agreement with human evaluation, illustrating the advancement within the methodology." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 666, + 527, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 666, + 527, + 775 + ], + "spans": [ + { + "bbox": [ + 302, + 666, + 527, + 775 + ], + "type": "text", + "content": "Building upon these insights, researchers have focused on refining evaluation metrics for evaluating long context capabilities with large language models (LLMs). Fu et al. (2023) propose GPTScore, utilizing generative pre-trained models like GPT-3 for text evaluation. To address the length bias in LLM-generated content, L-Eval incorporates word count requirements into instruc" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 792, + 302, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 792, + 302, + 803 + ], + "spans": [ + { + "bbox": [ + 290, + 792, + 302, + 803 + ], + "type": "text", + "content": "25" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 24 + }, + { + "para_blocks": [ + { + "bbox": [ + 68, + 71, + 291, + 206 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 71, + 291, + 206 + ], + "spans": [ + { + "bbox": [ + 68, + 71, + 291, + 206 + ], + "type": "text", + "content": "tions. Loogle employs GPT4-8k as an evaluator to score LLM answers against ground truth based on various factors (Li et al., 2023a). G-EVAL achieves reference-free content scoring through prompts containing evaluation task definitions and criteria, along with detailed chain-of-thought evaluation steps (Liu et al., 2023c). Tan et al. (2024) have introduced PROXYQA for long-context generation evaluation, evaluating final results based on the accuracy of answers to proxy question."
+ } + ] + } + ], + "index": 0 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 793, + 304, + 803 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 793, + 304, + 803 + ], + "spans": [ + { + "bbox": [ + 290, + 793, + 304, + 803 + ], + "type": "text", + "content": "26" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 25 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13360/f41871e7-052a-41b7-a381-f27188074bdf_content_list.json b/data/2025/2503_13xxx/2503.13360/f41871e7-052a-41b7-a381-f27188074bdf_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..2d3e6bc442b0a44e133ccb5b87c5275785782352 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13360/f41871e7-052a-41b7-a381-f27188074bdf_content_list.json @@ -0,0 +1,1896 @@ +[ + { + "type": "text", + "text": "Mitigating Visual Forgetting via Take-along Visual Conditioning for Multi-modal Long CoT Reasoning", + "text_level": 1, + "bbox": [ + 166, + 89, + 831, + 130 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Hai-Long Sun $^{1,2}$ , Zhun Sun $^{4,5}$ , Houwen Peng $^{3}$ , Han-Jia Ye $^{1,2}$ †", + "bbox": [ + 221, + 145, + 776, + 162 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}$ School of Artificial Intelligence, Nanjing University", + "bbox": [ + 284, + 162, + 712, + 180 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{2}$ National Key Laboratory for Novel Software Technology, Nanjing University", + "bbox": [ + 181, + 180, + 813, + 196 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{3}$ Tencent $^{4}$ Center for Language AI Research, Tohoku University", + "bbox": [ + 238, + 196, + 757, + 212 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{5}$ RIKEN Center for Advanced Intelligence Project", + "bbox": [ + 294, + 212, + 702, + 230 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 260, + 260, + 339, + 275 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Recent advancements in Large Language Models (LLMs) have demonstrated enhanced reasoning capabilities, evolving from Chain-of-Thought (CoT) prompting to advanced, product-oriented solutions like OpenAI o1. During our re-implementation of this model, we noticed that in multimodal tasks requiring visual input (e.g., geometry problems), Multimodal LLMs (MLLMs) struggle to maintain focus on the visual information, in other words, MLLMs suffer from a gradual decline in attention to visual information as reasoning progresses, causing text-over-relied outputs. To investigate this, we ablate image inputs during long-chain reasoning. Concretely, we truncate the reasoning process midway, then re-complete the reasoning process with the input image removed. We observe only a $\\sim 2\\%$ accuracy drop on MathVista's test-hard subset, revealing the model's textual outputs dominate the following reasoning process. Motivated by this, we propose Take-along Visual Conditioning (TVC), a strategy that shifts image input to critical reasoning stages and compresses redundant visual tokens via dynamic pruning. This methodology helps the model retain attention to the visual components throughout the reasoning. 
Our approach achieves state-of-the-art performance on average across five mathematical reasoning benchmarks (+3.4 points vs previous sota), demonstrating the effectiveness of TVC in enhancing multimodal reasoning systems. The project page is available at https://sun-hailong.github.io/projects/TVC.", + "bbox": [ + 142, + 288, + 460, + 772 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 114, + 784, + 258, + 800 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Large Language Models (LLMs) have achieved significant advancements in natural language processing (NLP), particularly in the area of reasoning. These models have evolved from simple", + "bbox": [ + 112, + 810, + 487, + 875 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "prompt-based Chain-of-Thought (CoT) (Wei et al., 2022) techniques to sophisticated product-oriented solutions like OpenAI's o1 (OpenAI, 2024b), DeepSeek-R1 (DeepSeek-AI, 2024), and Qwen-QVQ (QwenTeam, 2024), demonstrating iterative reasoning capabilities for complex multi-step tasks, which enables them to handle tasks that require multi-step thinking, logic, and knowledge integration. Recently, several works also extended the CoT reasoning process to MLLMs settings through data-centric innovations. For instance, Math-LLaVA (Shi et al., 2024) pioneers domain-specific training with the MathV360K dataset, while MAmmoTH-VL (Guo et al., 2024) scales up multimodal CoT data generation.", + "bbox": [ + 507, + 260, + 884, + 501 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "While such progress is notable in text-based domains, extending these advancements to multimodal scenarios presents unique challenges that transcend traditional language model boundaries. Reasoning in MLLMs requires fused understanding across different modalities, for example, in geometric reasoning tasks the model should interpret and reason about images alongside text. Therefore, the model's ability to integrate and maintain focus on both types of information is critical. Unlike text-only LLMs that reinforce problem context through linguistic recurrence, MLLMs struggle to sustain visual attention across reasoning steps. That is, as the length of the context increases, the model is more inclined to conduct the next step of reasoning based on the previously outputted text rather than the information of the image itself, which eventually leads to the continuation of the wrong text reasoning process and degraded model performance. We term this phenomenon as visual forgetting.", + "bbox": [ + 507, + 502, + 884, + 824 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In this work, we conduct a diagnostic analysis of the visual forgetting effect within a long-chain reasoning system. The system processes multimodal Q&A tasks through a series of interconnected reasoning steps. 
We demonstrate significantly reduced attentional allocation to visual inputs during multi", + "bbox": [ + 507, + 825, + 884, + 921 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2503.13360v2 [cs.CV] 3 Jun 2025", + "bbox": [ + 21, + 296, + 60, + 700 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Work done during their internship/employment at Tencent Hunyuan", + "bbox": [ + 112, + 881, + 485, + 906 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "†Corresponding author: yehj@lamda.nju.edu.cn", + "bbox": [ + 137, + 906, + 430, + 920 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "stage reasoning processes. More formally, our analysis: 1) truncates the reasoning process midway and removes the image embeddings; 2) regenerates subsequent reasoning trajectories; 3) evaluates the reasoning outcomes of the pre/post-ablation inference trajectories. Intuitively, the performance gap between normal reasoning and diagnostic reasoning reveals the model's dependency on generated text over original visual evidence. Our results (See Section 2.1) on the MathVista-Hard datasets show that, removing the image midway through the reasoning only causes an insignificant performance degradation (i.e. $\\sim 2\\%$ ), indicating that the model completes the reasoning process based primarily on its output text. More importantly, we also observe that the model's dependency on the visual evidence diminishes over time since the early removal of the image inputs could hurt accuracy by $\\sim 20\\%$ . This suggests model's reasoning employs both visual and textual information in the early stage, then over-relying on text history which limits full visual reasoning potential.", + "bbox": [ + 110, + 84, + 492, + 437 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Motivated by this, we propose a novel strategy to mitigate the visual forgetting effect and maintain visual attention throughout the reasoning process. Our methodology compresses and shifts the image input to the later stages of the reasoning process, ensuring the model integrates sufficient visual evidence into its reasoning. This approach results in improved performance and achieves state-of-the-art results on average across five mathematical reasoning benchmarks (i.e., $+3.4\\%$ vs previous sota). Our findings highlight the effectiveness of this strategy in enhancing the performance of multimodal reasoning systems, providing a robust solution to the problem of visual forgetting in long-chain reasoning tasks.", + "bbox": [ + 110, + 439, + 489, + 682 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Take-along Visual Conditioning: Sustaining Visual Evidence for Multi-modal Long CoT Reasoning", + "text_level": 1, + "bbox": [ + 112, + 689, + 448, + 740 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this section, we first discuss our motivation and observations of the visual forgetting phenomenon in MLLM reasoning systems (Section 2.1). 
Then, we propose the Take-along Visual Conditioning (TVC) approach to mitigate visual forgetting and enhance the model's long-chain reasoning capabilities (Section 2.2).", + "bbox": [ + 110, + 746, + 489, + 858 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1 Capturing the Visual Forgetting", + "text_level": 1, + "bbox": [ + 112, + 868, + 413, + 885 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Text-based reasoning systems often repeat key terms or phrases (e.g., restating \"triangle ABC\" or", + "bbox": [ + 112, + 889, + 489, + 921 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/886b0c6086c57d45b8ba57f43d20ef52e6a0a72d4e445a4baad3ac0cf0ac4cbd.jpg", + "image_caption": [ + "Figure 1: The visual forgetting phenomenon by removing the image at different reasoning stages. It shows that by the midpoint of the reasoning process, the model becomes less dependent on the image, causing text-over-relied outputs." + ], + "image_footnote": [], + "bbox": [ + 534, + 84, + 858, + 287 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "\"equation (1)\" to keep the problem's context clear and focused. By restating these critical details, they create strong connections between each step of the reasoning process and the original problem's requirements. This repetition ensures the logic stays on track, and consistent to follow.", + "bbox": [ + 505, + 397, + 884, + 494 + ], + "page_idx": 1 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {C} _ {\\mathrm {M L L M}} = f \\left(V, T _ {1}, \\dots , T _ {n}\\right) \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 596, + 506, + 882, + 521 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "On the other hand, MLLMs struggle with this approach due to their design. As formalized in Equation (1), visual inputs $V$ are confined to the initial processing stages. Unlike text, these visual evidence aren't revisited or reinforced later in the reasoning process. Because there's no built-in way to keep visual information \"active\" or relevant throughout the task, the system's ability to connect visual details with text or logic weakens over time, leading to a progressive visual attention decay. The model is more likely to reason with previously outputted text and becomes particularly problematic in visual reasoning tasks that require continuous validation of spatial relationships.", + "bbox": [ + 505, + 533, + 882, + 760 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We conduct two analytic analyses using the QVQ-72B-Preview model (QwenTeam, 2024) to capture this visual forgetting phenomenon quantitatively and qualitatively. On one hand, we remove the visual inputs at eight different stages to observe their impact. On the other hand, we depict the attention matrix to directly track the attention decay of the visual evidence over time.", + "bbox": [ + 507, + 760, + 884, + 887 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Progressive Image Removing. 
To assess the extent to which the reasoning process depends on", + "bbox": [ + 507, + 889, + 882, + 921 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/77eb70636c452e011386d11fd027206bb93759242176bc30a6b4ace266dfcda4.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 137, + 84, + 460, + 278 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/daaae34465bb2ee45fe7e1f6871bcb2fa3f7f2424896dbd8b372eed08af85a45.jpg", + "image_caption": [ + "Figure 2: Illustration of layer-level and token-level attention weights. (a) The layer-level attention weights of image tokens across different response token positions. (b) The token-level attention weights at the middle layer. It shows that the model's attention to the image gradually decreases during the reasoning process." + ], + "image_footnote": [], + "bbox": [ + 144, + 280, + 460, + 391 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "previously generated text, we first perform a normal reasoning process, then reset the KV cache at various stages of the reasoning process. This effectively removed image tokens and forced subsequent steps to rely solely on text-based information. Specifically, the reasoning process was divided into $K = 8$ intervals based on output token counts regardless of the length of the normal reasoning process, with visual input progressively masked by resetting the KV cache at different cutoff positions $k \\in \\{0,1,\\dots ,K - 1\\}$ . In other words, the first $k / 8$ part of the normal reasoning process is now employed as a prompt, and the model now re-complete the reasoning process without image inputs. Furthermore, we discovered that for some questions (30.9% of the MathVista dataset), the model could answer correctly using only the text-based prompt. Consequently, we excluded these questions and designated the remaining dataset as the MathVista-Hard dataset.", + "bbox": [ + 112, + 517, + 489, + 838 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "As illustrated in Figure 1, we observe that the performance is 40.9 at $k = 4$ and 43.1 for the normal reasoning. This minimal $2.2\\%$ gap suggests that the model overly relies on text outputs during the later reasoning stages rather than on the image.", + "bbox": [ + 112, + 841, + 489, + 921 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "This indicates that once the model completes the half stages of reasoning, it becomes overly reliant on its own generated CoT. Over-reliance on this self-generated reasoning worsens the model's tendency to forget visual evidence over time, which can result in hallucinations during lengthy, multi-step reasoning tasks. 
This result also reveals a near-exponential forgetting effect in performance as $k$ increases, which we formalize as:", + "bbox": [ + 507, + 84, + 884, + 229 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {R} (k) = \\mathcal {R} _ {\\text {f u l l}} - \\Delta_ {\\text {v i s u a l}} (k), \\quad \\Delta_ {\\text {v i s u a l}} (k) \\propto e ^ {- k}, \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 515, + 241, + 882, + 274 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $\\mathcal{R}_{\\mathrm{full}}$ represents the full multimodal reasoning performance, and $\\Delta_{\\mathrm{visual}}(k)$ quantifies the performance degradation caused by visual masking at position $k$ .", + "bbox": [ + 507, + 275, + 882, + 338 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Visual Attention Decay. To more intuitively observe the changes in visual feature attention across different stages of reasoning, we investigate the layer-level attention weights at various response token positions and the token-level attention weights at layer 16. Given that MLLM's attention weights reflect the focus on tokens and influence the decision-making process, we first analyze the attention weights at each layer of the MLLM. Specifically, for each layer, we calculate the proportion of attention weights on all image tokens.", + "bbox": [ + 507, + 340, + 884, + 517 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "As shown in Figure 2(a), we observe that at the $1/8$ position of the reasoning process, the model effectively focuses on the visual inputs. However, as reasoning progresses, despite fluctuations in attention to visual tokens across layers, the model's overall attention to visual evidence gradually decreases, leading to visual forgetting. Next, following the methodology of FastV (Chen et al., 2024a), we analyze the attention maps for several cases and find that the model predominantly focuses on previously generated text tokens rather than the input image. After approximately $20\\%$ tokens, the existence of image inputs on attention maps diminishes significantly, as illustrated in Figure 2(b). Both of the observations indicate a phenomenon of visual memory degradation, revealing the model's limitations in maintaining consistent attention to visual inputs throughout the reasoning process.", + "bbox": [ + 507, + 518, + 882, + 808 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.2 Take-along Visual Conditioning", + "text_level": 1, + "bbox": [ + 507, + 819, + 806, + 835 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In this section, we introduce our solution to tackle this problem in detail. We propose Take-along Visual Conditioning (TVC), a dynamic image retention mechanism that re-introduces visual inputs at strategic intervals throughout the reasoning pro", + "bbox": [ + 507, + 841, + 884, + 921 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "cess. TVC mitigates visual attention decay by periodically reaffirming visual information, akin to human problem-solving behaviors where individuals frequently refer back to visual inputs. Our approach enhances the model's ability to incorporate visual information continuously, improving its long-chain reasoning capacity by ensuring that visual evidence is revisited during critical decision-making moments.", + "bbox": [ + 112, + 84, + 489, + 227 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The TVC method consists of two key stages: training and testing. 
In the training stage, we introduce Dynamic Visual Reaffirmation (DVR), which guides the model through iterative reinforcement of visual evidence during long reasoning chains. In the testing phase, we present Periodic Visual Calibration (PVC), where visual reactivation is periodically triggered at self-reflection intervals. To prevent the model from forgetting previous text-based reasoning steps due to an excessive number of image tokens, we adopt image compression through adaptive pooling to reduce the image token size while preserving spatial semantics. This dual-modality engagement mechanism ensures consistent interaction between textual reasoning and visual evidence. We present an illustration of the TVC system in Figure 3.", + "bbox": [ + 115, + 230, + 489, + 502 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Dynamic Visual Reaffirmation. Our dynamic visual reaffirmation training strategy combines two key components: 1) Data Curation: We curate long-chain reasoning data using the pipeline described in Section 3, sourced from high-quality academic datasets (e.g., MathV360K, Geo170K, and LLaVA-OneVision). This process yields a high-quality dataset optimized for long-chain reasoning training. 2) Visual Content Injection: While the curated data ensures correctness, the QVQ model inherently lacks the ability to iteratively reference visual inputs during reasoning. Therefore, we manually re-inject the visual content (visual embeddings and bridging prompt) to triggers visual re-activation at predefined self-reflection intervals. Specifically, given the initial multimodal input $\\mathcal{M}_0 = (V,T_0)$ , DVR performs visual reactivation at self-reflection intervals $\\{r_1,\\dots,r_m\\}$ :", + "bbox": [ + 112, + 505, + 489, + 795 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {M} _ {i} = \\left(V, \\left[ T _ {\\text {p r e v}}; \\underbrace {\\text {P r o m p t}} _ {\\text {R e - a c t i v a t i o n}}; T _ {\\text {n e w}} \\right]\\right) \\text {a t s t e p} r _ {i} \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 124, + 809, + 487, + 871 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $T_{\\mathrm{prev}}$ represents the previous reasoning steps and $T_{\\mathrm{new}}$ denotes the new reasoning steps that are based on prior reasoning and reintroduce focus on", + "bbox": [ + 112, + 873, + 487, + 921 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/a5c52460f9451914f821519dcfa5dc5a3c2c8dd7793690a978a99f9e91aaa6c5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 514, + 82, + 877, + 153 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/8ded0c6d380ffc66b6e04e3afc988892b918477e055637ca5f1fcff5d86fce63.jpg", + "image_caption": [ + "Figure 3: Overview of TVC System Design. We enable the model to have take-along visual conditioning capabilities through two stages: training and inference." + ], + "image_footnote": [], + "bbox": [ + 512, + 155, + 877, + 280 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "visual information. The bridging prompt is employed to hint the existence of the image, e.g., Let me see the image again. To improve efficiency, our initial implementation adopts midpoint reactivation $(m = 1, r_1 = 0.5L$ for $L$ -step chains).", + "bbox": [ + 507, + 366, + 882, + 445 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "During the self-reflection phase of the reasoning steps, we randomly select $m$ positions to inject visual content. 
Specifically, we inject reactivation prompts and regenerate visual captions using the model's intrinsic capabilities. This forces joint attention to both textual reasoning and visual evidence. By leveraging the model's intrinsic image captioning capability, we continuously reinforce visual information throughout the reasoning process. This ensures that the model incorporates image evidence during reflection, rather than relying solely on textual reasoning.", + "bbox": [ + 507, + 449, + 884, + 642 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Periodic Visual Calibration. Calibrating visual attention plays a crucial role in enhancing long-chain reasoning capabilities. After training our model, we further design the periodic visual calibration process. Specifically, we coordinate operations during reactivation as follows: 1) Token Compression: We first compress visual tokens using average pooling to prevent text-based reasoning from forgetting visual information. 2) Visual Cache Reset: We then prepend an instruction (bridging prompt) to reintroduce the image and re-inject image tokens by resetting the KV cache of the generation process.", + "bbox": [ + 507, + 645, + 882, + 838 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We also provide an example of how PVC is implemented in the case study section (Section 4.5). PVC both improves reasoning efficiency and prevents the model from forgetting previous reasoning steps due to an overload of visual tokens.", + "bbox": [ + 507, + 841, + 884, + 921 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/dcbf95c1a314d7c6dcc0dffd0f08a9b502ceb96620470fa2a70540736ef0cf49.jpg", + "image_caption": [ + "Figure 4: Data Generation Pipeline of TVC. We use iterative distillation to collect long-chain reasoning data, followed by a comprehensive response filtering process to ensure high-quality reasoning." + ], + "image_footnote": [], + "bbox": [ + 119, + 82, + 480, + 236 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3 Data-Centric Implementation of Multimodal Reasoning System", + "text_level": 1, + "bbox": [ + 114, + 338, + 428, + 370 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In this section, we briefly describe our implementation of the multimodal reasoning system through a scalable curated data generation pipeline.", + "bbox": [ + 112, + 392, + 489, + 441 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.1 Long-Chain Reasoning Data Collection", + "text_level": 1, + "bbox": [ + 114, + 468, + 468, + 483 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Prior research (Qin et al., 2024; Jiang et al., 2024) has identified two dominant paradigms for constructing long-chain reasoning systems: (1) explicit search-based methods, which utilize structures such as Monte Carlo Tree Search (MCTS) combined with specially trained reward models to guide the search process toward optimal solutions, and (2) instruction distillation approaches, which fine-tune models on curated long chain-of-thought (CoT) datasets. To efficiently develop an MLLM with long-chain reasoning capabilities, we adopt the distillation paradigm. 
In this section, we describe the distillation process and present the data generation pipeline aimed at enhancing the reasoning capability of MLLM.", + "bbox": [ + 112, + 497, + 489, + 739 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Our distillation pipeline begins with aggregating publicly available reasoning datasets (e.g., MathV360K (Shi et al., 2024) and Geo170K (Gao et al., 2023)). Through an iterative process of distillation and response filtering, we progressively enhance the model's long-chain reasoning capabilities. Specifically, the teacher model, QVQ-72B-Preview, generates long COT reasoning data, while the student model, Qwen2-VL, undergoes supervised fine-tuning on the filtered CoT data to enhance its reasoning performance.", + "bbox": [ + 112, + 744, + 490, + 921 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.2 Iterative Distillation with Reject Sampling", + "text_level": 1, + "bbox": [ + 507, + 84, + 880, + 101 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "After obtaining the long-chain reasoning responses, we employ an assessment pipeline to ensure data quality. Specifically, we introduce a dual-temperature sampling mechanism to optimize data quality through variance exploitation.", + "bbox": [ + 507, + 105, + 882, + 185 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Deterministic Initial Sampling. For the first-stage data generation, we employ temperature $\\tau = 0$ to obtain the QVQ's most confident reasoning paths:", + "bbox": [ + 507, + 186, + 880, + 233 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {D} _ {\\text {i n i t}} = \\left\\{\\left(\\mathbf {x}, \\mathbf {y} _ {\\mathrm {C o T}}\\right) \\sim \\mathcal {P} _ {\\mathrm {Q V Q}} (\\cdot | \\mathbf {x}; \\tau = 0) \\right\\}, \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 529, + 246, + 882, + 263 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\mathbf{y}_{\\mathrm{CoT}}$ represents the response generated by QVQ. This ensures that the model generates the most probable reasoning path for each input. Subsequently, we use these high-confidence responses as a foundation for further refinement in later stages of data generation.", + "bbox": [ + 507, + 275, + 882, + 370 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Answer-Centric Reject Sampling. To ensure the quality of the data, we employ an LLM-as-a-Judge approach for answer-centric reject sampling. Specifically, we use a strong LLM (e.g., Qwen2.5-72B-Instruct) as the verifier and prompt the model to output a 'yes' or 'no' in JSON format, indicating whether the long-chain reasoning answer is correct, i.e., $\\mathcal{V}_{\\mathrm{valid}} = \\{\\mathbf{y}_{\\mathrm{CoT}}|\\mathrm{Match}(\\mathbf{y}_{\\mathrm{CoT}},\\mathbf{y}_{\\mathrm{gt}})\\}$ . After this process, we obtain $\\sim 200\\mathrm{K}$ high-quality samples.", + "bbox": [ + 507, + 372, + 884, + 517 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Best-of-N Error Correction. While initial sampling with temperature $\\tau = 0$ yields a set of data, there are substantial residual errors $(\\mathcal{D}_{\\mathrm{error}})$ where QVQ's responses $(\\mathbf{y}_{\\mathrm{CoT}})$ unmatched with ground truth answers (55.8% of $\\mathcal{D}_{\\mathrm{init}}$ ). 
To fully leverage the available open-source data, we implement a contrastive regeneration strategy using $\\tau = 1$ :", + "bbox": [ + 507, + 517, + 882, + 630 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {D} _ {\\text {c o r r e c t e d}} = \\bigcup_ {\\mathbf {x} \\in \\mathcal {D} _ {\\text {e r r o r}}} \\left\\{\\arg \\min _ {\\mathbf {y} ^ {i} \\in \\mathcal {Y} _ {\\text {v a l i d}}} \\operatorname {L e n g t h} \\left(\\mathbf {y} ^ {i}\\right) \\right\\} _ {i = 1} ^ {8}, \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 507, + 640, + 884, + 695 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $\\mathcal{V}_{\\mathrm{valid}}$ denotes the subset of 8 regenerated responses (at $\\tau = 1$ ) that satisfy answer correctness. For cases with multiple valid responses, we prioritize minimal reasoning token length to enhance conciseness and efficiency:", + "bbox": [ + 507, + 696, + 882, + 776 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\operatorname {L e n g t h} (\\mathbf {y}) = \\sum_ {t = 1} ^ {T} \\mathbb {I} \\left(w _ {t} \\notin \\{[ \\mathrm {P A D} ], [ \\mathrm {S E P} ] \\}\\right) \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 524, + 787, + 882, + 831 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "This two-stage sampling achieves partial error recovery while reducing the average token count to minimize meaningless self-reflection, effectively balancing correctness and efficiency. To further enhance the quality of the dataset, we also perform", + "bbox": [ + 507, + 841, + 884, + 921 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "dynamic token truncation and reflection word pruning. This approach helps reduce the ineffective reflection parts in the dataset, thereby mitigating the issue of the model being unable to complete its answers. More details are in Section B.", + "bbox": [ + 112, + 84, + 487, + 164 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4 Experiments", + "text_level": 1, + "bbox": [ + 112, + 175, + 260, + 192 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We conduct comprehensive experiments across multiple vision-language benchmarks to demonstrate the effectiveness of our method. Section 4.1 provides implementation details for TVC. In Section 4.3, we present key results on visual reasoning tasks, supplemented with findings on general image understanding. Section 4.4 details ablation studies to evaluate the influence of critical design decisions. Finally, Section 4.5 presents visualizations and case studies to illustrate the method's operational characteristics and insights derived from it.", + "bbox": [ + 112, + 200, + 489, + 376 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.1 Training Recipe", + "text_level": 1, + "bbox": [ + 112, + 387, + 285, + 403 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We integrate the TVC system with MLLMs of varying scales to demonstrate the effectiveness and generalizability of our approach. Initial implementation with Qwen2-VL-7B-Instruct confirmed the method's validity. To further evaluate scalability and establish robust baselines against state-of-the-art MLLMs, we expanded the approach to a 72B model. Prior to training TVC, we follow the long-chain reasoning pipeline described earlier. We use the LLaMA-Factory (Zheng et al., 2024) framework, with a learning rate of 2e-5, a batch size of 256, and 5 training epochs. 
During optimization, only the LLM parameters and cross-modal connector are trained, while the visual encoder remains frozen. The training process requires 10 hours on a $64\\times \\mathrm{H}20$ GPU setup for the 7B model and approximately 4 days for the 72B model. Additional details are presented in Table 4.", + "bbox": [ + 112, + 407, + 489, + 697 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2 Evaluation Setup", + "text_level": 1, + "bbox": [ + 112, + 708, + 294, + 722 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We conduct a comprehensive experimental analysis across various visual reasoning benchmarks that require advanced visual reasoning skills. To ensure a well-rounded evaluation, we select several widely recognized and representative benchmarks, including MathVista (Lu et al., 2024a), MathVerse (Zhang et al., 2024a), MathVision (Wang et al., 2024a), Dynamath (Zou et al., 2024), and OlympiadBench (He et al., 2024). MathVista consists of 6,141 examples that require fine-grained, deep visual understanding and compositional reasoning. MathVerse contains 2,612 multi-subject", + "bbox": [ + 112, + 728, + 489, + 921 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/b67cf65f88ddd239a7d839548d4229efad69558040285509805ad4ef26fee458.jpg", + "image_caption": [ + "Figure 5: Ablations on the amount of training data. TVC benefits from data scaling, continually improving the reasoning capabilities." + ], + "image_footnote": [], + "bbox": [ + 514, + 84, + 877, + 274 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "math problems from a variety of sources. MathVision includes 3,040 high-quality mathematical problems sourced from established mathematics competitions. OlympiadBench features 8,476 bilingual multimodal problems tailored to Olympic-level mathematics and physics competitions. These benchmarks evaluate problem-solving abilities in mathematics, and following standard practice, we use GPT-4o-mini as the evaluator. Following the VLMEvalKit guidelines, we exclude the text-only split from MathVerse and the theorem-proof sections from OlympiadBench. For a fair comparison, we conduct evaluations using the testmini sets of MathVerse and MathVista. Fast evaluation is made possible through the use of the VLMEvalKit (Duan et al., 2024) and vLLM (Kwon et al., 2023).", + "bbox": [ + 505, + 356, + 884, + 613 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.3 Main Results on Visual Reasoning", + "text_level": 1, + "bbox": [ + 507, + 626, + 823, + 642 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The results in Table 1 demonstrate the effectiveness and generalizability of TVC across multiple visual reasoning benchmarks. Notably, our model achieves a $16.7\\%$ improvement over QVQ-72B-Preview on MathVision and a $17.6\\%$ gain on MathVerse, highlighting enhanced reasoning capabilities. Unlike conventional datasets where textual descriptions may include implicit visual cues, MathVerse is an all-around visual math benchmark specifically designed for equitable, in-depth evaluation of MLLMs. The significant gains on MathVerse underscore the significance of TVC, given the benchmark's unique design principles. Furthermore, the TVC-7B model, despite its smaller size compared to counterparts, achieves competitive performance, even outperforming leading MLLMs in multiple cases. 
This demonstrates the robustness of our", + "bbox": [ + 505, + 646, + 884, + 920 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/4e2812122af8c722544df25a6b2cabd42befc062c02de7576a7911f9529d2ab6.jpg", + "table_caption": [ + "Table 1: Results on Visual Reasoning Tasks. We conduct evaluation experiments across 6 benchmarks, covering both general reasoning and task-specific reasoning assessments. TVC exhibits notable effectiveness and generalizability when applied to Qwen2-VL, surpassing other state-of-the-art MLLMs by a large margin." + ], + "table_footnote": [], + "table_body": "
ModelSizeMathVistaMathVisionMathVerseDynamathOlympiadBenchAverage
MiniCPM-V-2.6 (Yadav et al., 2025)8B60.818.417.69.8--
VITA-1.5 (Fu et al., 2025a)8B66.219.523.49.6--
LLaVA-COT (Xu et al., 2024)11B52.519.922.67.8--
Qwen2-VL (Wang et al., 2024b)7B60.916.324.611.03.223.2
InternVL2.5 (Chen et al., 2024b)8B64.517.022.89.40.122.8
POINTS1.5 (Liu et al., 2024b)8B66.422.026.614.2--
Ovis1.6-Gemma2 (Lu et al., 2024b)27B70.220.637.817.0--
InternVL2.5-COT (Chen et al., 2024b)78B71.432.540.128.5--
LLaVA-OneVision (Li et al., 2024)72B67.125.327.215.6--
Qwen2-VL (Wang et al., 2024b)72B69.726.636.220.010.332.6
QVQ-72B-preview (QwenTeam, 2024)72B71.435.941.530.720.440.0
TVC7B68.122.738.915.19.830.9
TVC72B72.241.948.830.024.343.4
", + "bbox": [ + 117, + 135, + 884, + 331 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/56fb82a543aa388c80425d95ebaaed59088b49d07b6fa1bfe12dbb29bf397700.jpg", + "table_caption": [ + "Table 2: Ablations on the TVC System. TVC enhances reasoning capabilities, showing significant improvements on both general and task-specific reasoning benchmarks." + ], + "table_footnote": [], + "table_body": "
MethodMathVistaMathVisionMathVerseAvg
Baseline60.916.324.633.9
Vanilla - Direct SFT63.519.831.638.3
TVC w/o PVC66.721.835.641.4
TVC w/o DVR66.222.334.741.0
TVC Full68.122.738.943.2
", + "bbox": [ + 117, + 420, + 485, + 500 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "methodology even with more compact model configurations. Beyond task-specific visual reasoning, we extend our evaluation to general reasoning benchmarks (e.g., MathVista), where TVC consistently delivers strong performance, achieving a $3.6\\%$ improvement over the original Qwen2-VL72B model. These results emphasize TVC's ability to excel in tasks requiring both perception and reasoning. Collectively, the findings indicate that TVC not only advances performance in specialized visual reasoning tasks but also offers substantial benefits in broader application scenarios.", + "bbox": [ + 112, + 527, + 489, + 720 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.4 Further Analysis", + "text_level": 1, + "bbox": [ + 112, + 734, + 294, + 751 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "In this section, we conduct comprehensive experiments to evaluate the design choices of TVC, emphasizing the key contributions of our approach. We also present a case study to further illustrate the qualitative effectiveness of TVC.", + "bbox": [ + 112, + 758, + 489, + 839 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Effectiveness of TVC system. To evaluate the effectiveness of the TVC system, we conduct comprehensive ablation experiments on various components using the Qwen2-VL-7B as the Baseline. We begin by performing supervised fine-tuning on the", + "bbox": [ + 112, + 841, + 489, + 921 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/3bd77dbb748e6d5a4b9a4950ab47ef7ead5e90b2f52314ea31b2cfeb357c8418.jpg", + "table_caption": [ + "Table 3: Ablations on Token Compression." + ], + "table_footnote": [], + "table_body": "
MethodMathVistaMathVisionMathVerseAvg
TVC Baseline68.321.539.643.1
+ 2x2 Avg Pooling67.822.938.343.0
+ 4x4 Avg Pooling68.122.738.943.2
", + "bbox": [ + 512, + 376, + 878, + 437 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Qwen2-VL-7B model with the data from Section 3, referred to as Vanilla - Direct SFT. Furthermore, we apply the DVR training approach outlined in Section 2.2, which increases the focus on the visual information in the training data, enabling the model to implicitly learn visual conditioning capabilities. Additionally, during the testing phase, we experiment with resetting the visual KV cache midway through the reasoning process, and after compressing the visual tokens, we add them to the end of the reasoning steps. This strategy allows the model to further observe the image content during its thought process, mitigating the visual forgetting and suppressing hallucinations.", + "bbox": [ + 507, + 467, + 884, + 692 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "As demonstrated in Table 2, the results highlight that the TVC system is crucial for enhancing visual reasoning capabilities. When using only the SFT training data without the DVR strategy in TVC (Vanilla - Direct SFT), improvements in visual reasoning tasks are limited, as the model lacks the ability to reflect on visual information. Furthermore, models trained with the DVR strategy alone still produce sub-optimal results, underscoring the importance of a comprehensive approach that integrates PVC and DVR. The contributions of PVC and DVR are relatively balanced in enhancing the reasoning capabilities. To further validate the effectiveness of our visual token compression, we", + "bbox": [ + 507, + 696, + 884, + 921 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/0c4be083c4972b0ec6877c4318b42fb9003a36717e0940aeeddc10e6556d40f2.jpg", + "image_caption": [ + "Figure 6: Case Study of TVC. TVC effectively re-examines the image during the reflection process to correct mistakes, guiding the model to the correct answer." + ], + "image_footnote": [], + "bbox": [ + 115, + 82, + 884, + 428 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "conduct experiments with different pooling methods. As shown in Table 3, the TVC Baseline represents the method without image compression. We observe that the use of pooling methods has little impact on the model's capabilities. Utilizing $4 \\times 4$ average pooling for compression not only enhances the model's inference efficiency but also achieves a slight performance improvement.", + "bbox": [ + 112, + 493, + 487, + 621 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Data Scaling Law. To validate the scalability of our approach, we also conduct a data scaling law experiment for training. As shown in Figure 5, we compare models across various data size: 50K, 100K, 150K, and 200K samples, and present the performance relative to the base model (Qwen2-VL-7B). Our findings show that as the dataset size increases, the model's performance continues to improve. However, it is evident that with a small dataset, the model's reasoning ability cannot reach an optimal level solely through SFT. Therefore, data scaling proves highly effective during SFT training, and the results indicate that TVC benefits from increased data.", + "bbox": [ + 112, + 627, + 489, + 851 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "To prevent underfitting from becoming a performance bottleneck, we increase the number of training epochs with smaller datasets, which further improves model performance. 
This indicates that in", + "bbox": [ + 112, + 857, + 489, + 921 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "sufficient data or epochs lead to undertrained models, making it difficult to learn long-chain reasoning. Increasing both the data and training epochs can effectively alleviate this issue.", + "bbox": [ + 507, + 493, + 884, + 557 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.5 Case Study", + "text_level": 1, + "bbox": [ + 507, + 569, + 643, + 586 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We present a comprehensive case study in Figure 6 to illustrate the improvements of our TVC approach. Specifically, we provide an example that highlights advancements in the reasoning process. In this example, the model is tasked with carefully observing the objects within an image and, after eliminating certain shapes and attributes, providing the count of the remaining objects. During base CoT reasoning, the model fails to check the attributes of each object and only focuses on their shapes, leading to an incorrect final answer. Subsequently, the model learns to re-focus on the image and carefully describe the attributes of each object in detail. This process allows the model to identify the issue in its previous reasoning and provide the correct answer.", + "bbox": [ + 505, + 592, + 882, + 848 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5 Conclusion", + "text_level": 1, + "bbox": [ + 507, + 862, + 640, + 877 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this paper, we introduce Take-along Visual Conditioning (TVC), a novel strategy designed to en", + "bbox": [ + 507, + 889, + 884, + 921 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "hance the reasoning capabilities of MLLMs by addressing the challenge of visual attention degradation during long-chain reasoning. By dynamically shifting the image input to critical stages of reasoning and compressing redundant visual tokens, we ensure that the model maintains focus on the visual information throughout the process. Our extensive evaluation on several mathematical reasoning benchmarks demonstrates the effectiveness of TVC in improving multimodal reasoning, providing a robust approach to equip MLLMs with better visual grounding for complex tasks.", + "bbox": [ + 112, + 84, + 492, + 280 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Acknowledgments", + "text_level": 1, + "bbox": [ + 114, + 290, + 280, + 307 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "This work is partially supported by National Key R&D Program of China (2024YFE0202800), NSFC (62376118), Key Program of Jiangsu Science Foundation (BK20243012), CCF-Tencent Rhino-Bird Open Research Fund (RAGR20240101), AMED (JP25wm0625405), and Collaborative Innovation Center of Novel Software Technology and Industrialization.", + "bbox": [ + 112, + 316, + 489, + 447 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Limitations", + "text_level": 1, + "bbox": [ + 114, + 458, + 220, + 474 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Despite advancements, our method may still exhibit several limitations. First, for highly complex reasoning tasks requiring sophisticated analytical capabilities, simply increasing visual revisits proves insufficient. In contrast, it is crucial to enhance the model's inherent reasoning capacity. 
Second, our method assumes the availability of delayed visual processing, making it potentially unsuitable for real-time applications requiring instantaneous visual feedback, such as robotic navigation or time-sensitive decision-making scenarios.", + "bbox": [ + 112, + 485, + 489, + 663 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 114, + 689, + 213, + 703 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Meta AI. 2024. Build the future of ai with meta llama 3. Technical report, Meta AI.", + "anthropic. 2024. Introducing the next generation of claude. Technical report, anthropic.", + "Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. 2023. Qwen technical report. arXiv:2309.16609.", + "Liang Chen, Haozhe Zhao, Tianyu Liu, Shuai Bai, Junyang Lin, Chang Zhou, and Baobao Chang. 2024a. An image is worth 1/2 tokens after layer 2: Plug-and-play inference acceleration for large vision-language models. Preprint, arXiv:2403.06764." + ], + "bbox": [ + 114, + 713, + 489, + 920 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Zhe Chen, Weiyun Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Erfei Cui, Jinguo Zhu, Shenglong Ye, Hao Tian, Zhaoyang Liu, et al. 2024b. Expanding performance boundaries of open-source multimodal models with model, data, and test-time scaling. arXiv preprint arXiv:2412.05271.", + "Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Zhong Muyan, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. 2023. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. arXiv:2312.14238.", + "DeepSeek-AI. 2024. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. Technical report, DeepSeek.", + "Yuhao Dong, Zuyan Liu, Hai-Long Sun, Jingkang Yang, Winston Hu, Yongming Rao, and Ziwei Liu. 2024. Insight-v: Exploring long-chain visual reasoning with multimodal large language models. arXiv preprint arXiv:2411.14432.", + "Haodong Duan, Junming Yang, Yuxuan Qiao, Xinyu Fang, Lin Chen, Yuan Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, et al. 2024. Vlmealkit: An open-source toolkit for evaluating large multi-modality models. arXiv preprint arXiv:2407.11691.", + "Chaoyou Fu, Haojia Lin, Xiong Wang, Yi-Fan Zhang, Yunhang Shen, Xiaoyu Liu, Yangze Li, Zuwei Long, Heting Gao, Ke Li, et al. 2025a. Vita-1.5: Towards gpt-4o level real-time vision and speech interaction. arXiv preprint arXiv:2501.01957.", + "Jiale Fu, Yuchu Jiang, Junkai Chen, Jiaming Fan, Xin Geng, and Xu Yang. 2025b. Speculative ensemble: Fast large language model ensemble via speculation. arXiv preprint arXiv:2502.01662.", + "Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jianhua Han, Hang Xu, Zhenguo Li, et al. 2023. G-llava: Solving geometric problem with multi-modal large language model. arXiv preprint arXiv:2312.11370.", + "Jarvis Guo, Tuney Zheng, Yuelin Bai, Bo Li, Yubo Wang, King Zhu, Yizhi Li, Graham Neubig, Wenhu Chen, and Xiang Yue. 2024. Mammoth-vl: Eliciting multimodal reasoning with instruction tuning at scale. arXiv preprint arXiv:2412.05237.", + "Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. 2024. 
Olympiad-bench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems. arXiv preprint arXiv:2402.14008.", + "Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, and Ranjay Krishna. 2024. Visual sketchpad: Sketching as a visual chain of thought for multimodal language models. arXiv preprint arXiv:2406.09403." + ], + "bbox": [ + 510, + 85, + 884, + 920 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Jinhao Jiang, Zhipeng Chen, Yingqian Min, Jie Chen, Xiaoxue Cheng, Jiapeng Wang, Yiru Tang, Haoxiang Sun, Jia Deng, Wayne Xin Zhao, et al. 2024. Technical report: Enhancing llm reasoning with reward-guided tree search. arXiv preprint arXiv:2411.11694.", + "Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles.", + "Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, et al. 2024. Llavaonevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326.", + "Chunyuan Li, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, and Jianfeng Gao. 2023. Llava-med: Training a large language-and-vision assistant for biomedicine in one day. arXiv:2306.00890.", + "Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. 2024a. Llavanext: Improved reasoning,OCR, and world knowledge.", + "Yuan Liu, Le Tian, Xiao Zhou, Xinyu Gao, Kavio Yu, Yang Yu, and Jie Zhou. 2024b. Points1. 5: Building a vision-language model towards real world applications. arXiv preprint arXiv:2412.08443.", + "Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. 2024a. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. In ICLR.", + "Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. 2023. Chameleon: Plug-and-play compositional reasoning with large language models. Advances in Neural Information Processing Systems, 36:43447-43478.", + "Shiyin Lu, Yang Li, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang, and Han-Jia Ye. 2024b. Ovis: Structural embedding alignment for multimodal large language model. arXiv preprint arXiv:2405.20797.", + "Brandon McKinzie, Zhe Gan, Jean-Philippe Fauconnier, Sam Dodge, Bowen Zhang, Philipp Dufter, Dhruti Shah, Xianzhi Du, Futang Peng, Floris Weers, et al. 2024. Mm1: Methods, analysis & insights from multimodal llm pre-training. arXiv:2403.09611.", + "OpenAI. 2024a. Gpt-4o: Hello gpt-4o. Technical report, OpenAI.", + "OpenAI. 2024b. Learning to reason with llms. Technical report, OpenAI." + ], + "bbox": [ + 115, + 85, + 489, + 920 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Yingzhe Peng, Gongrui Zhang, Miaosen Zhang, Zhiyuan You, Jie Liu, Qipeng Zhu, Kai Yang, Xingzhong Xu, Xin Geng, and Xu Yang. 2025. Lmmr1: Empowering 3b lmms with strong reasoning abilities through two-stage rule-based rl. arXiv preprint arXiv:2503.07536.", + "Yiwei Qin, Xuefeng Li, Haoyang Zou, Yixiu Liu, Shijie Xia, Zhen Huang, Yixin Ye, Weizhe Yuan, Hector Liu, Yuanzhi Li, et al. 2024. 
O1 replication journey: A strategic progress report-part 1. arXiv preprint arXiv:2410.18982.", + "QwenTeam. 2024. Qvq: To see the world with wisdom. Technical report, Alibaba.", + "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In ICML.", + "Machel Reid, Nikolay Savinov, Denis Teptyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv:2403.05530.", + "Wenhao Shi, Zhiqiang Hu, Yi Bin, Junhua Liu, Yang Yang, See-Kiong Ng, Lidong Bing, and Roy Ka-Wei Lee. 2024. Math-llava: Bootstrapping mathematical reasoning for multimodal large language models. arXiv preprint arXiv:2406.17294.", + "Hai-Long Sun, Da-Wei Zhou, Yang Li, Shiyin Lu, Chao Yi, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang, De-Chuan Zhan, et al. 2025a. Parrot: Multilingual visual instruction tuning. In ICML.", + "Hai-Long Sun, Da-Wei Zhou, De-Chuan Zhan, and Han-Jia Ye. 2025b. Pilot: A pre-trained model-based continual learning toolbox. SCIENCE CHINA Information Sciences, 68(4):147101.", + "Hai-Long Sun, Da-Wei Zhou, Hanbin Zhao, Le Gan, De-Chuan Zhan, and Han-Jia Ye. 2025c. Mos: Model surgery for pre-trained model-based class-incremental learning. In AAAI, volume 39, pages 20699-20707.", + "Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. 2023. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805.", + "Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, et al. 2024. Cambrian-1: A fully open, vision-centric exploration of multimodal llms. arXiv preprint arXiv:2406.16860." + ], + "bbox": [ + 510, + 85, + 882, + 919 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Mingjie Zhan, and Hongsheng Li. 2024a. Measuring multimodal mathematical reasoning with math-vision dataset. Preprint, arXiv:2402.14804.", + "Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. 2024b. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191.", + "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. arXiv:2201.11903.", + "Guowei Xu, Peng Jin, Li Hao, Yibing Song, Lichao Sun, and Li Yuan. 2024. Llava-o1: Let vision language models reason step-by-step. arXiv preprint arXiv:2411.10440.", + "Ankit Yadav, Lingqiao Liu, and Yuankai Qi. 2025. Exploring primitive visual measurement understanding and the role of output format in learning in vision-language models. arXiv preprint arXiv:2501.15144.", + "Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. 2023. Sigmoid loss for language image pre-training. 
In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11975-11986.", + "Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Peng Gao, et al. 2024a. Math-verse: Does your multi-modal llm truly see the diagrams in visual math problems? arXiv preprint arXiv:2403.14624.", + "Renrui Zhang, Xinyu Wei, Dongzhi Jiang, Ziyu Guo, Shicheng Li, Yichi Zhang, Chengzhuo Tong, Jiaming Liu, Aojun Zhou, Bin Wei, et al. 2024b. Mavis: Mathematical visual instruction tuning with an automatic data engine. arXiv preprint arXiv:2407.08739.", + "Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. 2024. Llamafactory: Unified efficient fine-tuning of $100+$ language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Bangkok, Thailand. Association for Computational Linguistics.", + "Wenwen Zhuang, Xin Huang, Xiantao Zhang, and Jin Zeng. 2024. Math-puma: Progressive upward multimodal alignment to enhance mathematical reasoning. arXiv preprint arXiv:2408.08640.", + "Chengke Zou, Xingang Guo, Rui Yang, Junyu Zhang, Bin Hu, and Huan Zhang. 2024. Dynamath: A dynamic visual benchmark for evaluating mathematical reasoning robustness of vision language models. arXiv preprint arXiv:2411.00836." + ], + "bbox": [ + 115, + 85, + 485, + 914 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "A Related Work", + "text_level": 1, + "bbox": [ + 115, + 83, + 273, + 98 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Multimodal Large Language Models. Multimodal Large Language Models (MLLMs) (Li et al., 2023; Liu et al., 2024a; Sun et al., 2025a; Wang et al., 2024b; Lu et al., 2024b; McKinzie et al., 2024; Sun et al., 2025c,b; Fu et al., 2025b; Dong et al., 2024) integrate vision encoders (Radford et al., 2021; Zhai et al., 2023) with LLMs (AI, 2024; Bai et al., 2023), endowing them with robust capabilities across a wide range of domains. These include general visual understanding(OpenAI, 2024a; Li et al., 2024), mathematical reasoning (Shi et al., 2024; Gao et al., 2023), and answering college-level questions (Chen et al., 2023), demonstrating their versatility in real-world tasks. The rapid advancements in open-source models have also spurred the development of proprietary models, such as GPT-4o (OpenAI, 2024a), Gemini (Team et al., 2023; Reid et al., 2024), Qwen2-VL-MAX (Wang et al., 2024b), and Claude3 (anthropic, 2024). These models have demonstrated remarkable performance in both evaluation benchmarks and practical applications, solidifying their position at the forefront of AI research and deployment.", + "bbox": [ + 115, + 124, + 489, + 510 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Reasoning with MLLMs. Recent advancements in MLLMs have significantly enhanced performance in reasoning tasks across both text and multimodal scenarios (OpenAI, 2024b; DeepSeek-AI, 2024; QwenTeam, 2024; Peng et al., 2025). Current methods primarily rely on CoT (Wei et al., 2022) to train MLLMs for step-by-step reasoning. Data-driven approaches include MathLLaVA (Shi et al., 2024), which introduced the MathV360K dataset, and MAmmoTH-VL (Guo et al., 2024), which curates a large-scale multimodal CoT dataset in a scalable manner. Another line of research explores vision-text alignment. 
MAVIS (Zhang et al., 2024b) fine-tunes a math-specific vision encoder with curated caption data, while Math-PUMA (Zhuang et al., 2024) leverages the Kullback-Leibler (KL) divergence of next-token prediction distributions for modality alignment. In a different paradigm, MLLMs act as coordinators, utilizing external tools such as LLMs, web search engines, and computer programs for complex reasoning. Chameleon (Lu et al., 2023) orchestrates tool-call sequences, and Visual Sketchpad (Hu et al., 2024) enables models to generate visual sketches to aid reasoning.", + "bbox": [ + 115, + 519, + 489, + 920 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "B More Details of Reasoning Dataset", + "text_level": 1, + "bbox": [ + 510, + 83, + 845, + 99 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "In this section, we provide a detailed description of dynamic token truncation and reflection word pruning in the process of constructing the reasoning dataset. We also provide detailed information about the training data in Table 5.", + "bbox": [ + 510, + 116, + 880, + 197 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "B.1 Dynamic Token Truncation", + "text_level": 1, + "bbox": [ + 510, + 218, + 773, + 234 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "To further improve the dataset quality, we analyze the distribution of token lengths after the answer-centric rejection sampling. We find that many samples are close to the maximum token limit, and manual checks show that these long reasoning chains often have problems—such as logical errors, mistakes in multi-step calculations, and reliance on shortcuts that don't work in general cases (e.g., substituting specific values). Motivated by the correlation between extreme token lengths and compromised solution quality, we implement adaptive truncation thresholds to keep the answers within the 200-8000 token range. This dynamic filtering not only eliminates the invalid cases (overly verbose or terse responses) but also enhances the overall quality of the data. The final length distribution matches how human experts solve problems and keeps the important reasoning steps intact.", + "bbox": [ + 510, + 244, + 884, + 533 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "B.2 Reflection Word Pruning", + "text_level": 1, + "bbox": [ + 510, + 556, + 756, + 571 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Our analysis reveals a critical flaw in distilled reasoning chains: excessive metacognitive loops caused by uncontrolled reflection markers (e.g., 'Alternatively,' 'Wait'), which led to performance degradation through infinite loops or ungrounded speculation. Term frequency analysis of reflection density shows a heavy-tailed distribution— $95\\%$ of samples contained fewer than 10 reflection markers per chain, while $1\\%$ exhibited over 50 markers, with this extreme group strongly correlating to hallucination rates. To address this, we introduce a reflection token quota system that automatically prunes samples exceeding 25 reflection markers while retaining core reasoning logic using semantic-aware span detection. As shown in Figure 4, this approach significantly reduced infinite-loop instances in validation tasks while improving answer accuracy. 
The refined reflection pattern mirrors expert human problem-solving strategies, wherein targeted self-correction enhances, rather than disrupts, the continuity of the reasoning process.", + "bbox": [ + 510, + 583, + 884, + 920 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/e6e818e208bce7e7184afc19e3effb07a641dfed7a0d213ac1fd5c32cde5e241.jpg", + "image_caption": [ + "Figure 7: The token and reflection word distribution of the long-chain reasoning dataset." + ], + "image_footnote": [], + "bbox": [ + 115, + 82, + 499, + 236 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/19734dcb34cce543e8463529115a85acd6bad9216134c3e1ed1179348600b169.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 502, + 82, + 878, + 236 + ], + "page_idx": 12 + }, + { + "type": "table", + "img_path": "images/ed5183c30b072c3055cdbc52c33cbe7366f61b55f31d0f89626a003a8768d22c.jpg", + "table_caption": [ + "Table 4: The detailed training hyperparameters." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Config</td><td>SFT</td></tr>
<tr><td>Deepspeed</td><td>Zero3</td></tr>
<tr><td>Epoch</td><td>5</td></tr>
<tr><td>Warmup Ratio</td><td>0.1</td></tr>
<tr><td>Max Grad Norm</td><td>1.0</td></tr>
<tr><td>Optimizer</td><td>AdamW</td></tr>
<tr><td>Learning rate</td><td>2e-5</td></tr>
<tr><td>Learning rate scheduler</td><td>Cosine</td></tr>
<tr><td>Text max length</td><td>8192</td></tr>
<tr><td>Batch size per GPU</td><td>1</td></tr>
<tr><td>Gradient Accumulation Steps</td><td>4</td></tr>
<tr><td>GPU</td><td>64×H20-96G</td></tr>
<tr><td>Precision</td><td>Bf16</td></tr></table>
", + "bbox": [ + 147, + 310, + 455, + 482 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "C Discussion", + "text_level": 1, + "bbox": [ + 114, + 506, + 243, + 521 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "In this work, we investigate the phenomenon of visual information forgetting in MLLMs during long-chain reasoning. Through comprehensive analysis and experiments, we show that as reasoning chains progressively lengthen, models exhibit a gradual deterioration in retaining visual inputs, ultimately undermining their multimodal reasoning capabilities and exacerbating hallucination issues. To address this challenge, we propose a take-along visual conditioning mechanism that enables models to dynamically revisit visual inputs during reasoning steps, thereby enhancing content fidelity throughout the inference process.", + "bbox": [ + 112, + 533, + 489, + 743 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "However, as illustrated in Figure 8, we acknowledge several limitations. First, for highly complex reasoning tasks requiring sophisticated analytical capabilities, simply increasing visual revisits proves insufficient. In contrast, it is crucial to enhance the model's inherent reasoning capacity. Second, our method assumes the availability of delayed visual processing, making it potentially unsuitable for real-time applications requiring instantaneous visual feedback, such as robotic navigation or time-sensitive decision-making scenarios.", + "bbox": [ + 112, + 744, + 489, + 921 + ], + "page_idx": 12 + }, + { + "type": "table", + "img_path": "images/745a67a45e9a8fb99fc9a5e2ebcf5aebf3c4c0260658c3e43b3d580ec59cacbf.jpg", + "table_caption": [ + "Table 5: Details on the TVC's training data, which is derived from publicly available datasets." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Datasets</td><td>Samples</td></tr>
<tr><td>MathV360K (Shi et al., 2024)</td><td>221K</td></tr>
<tr><td>Geo170K (Gao et al., 2023)</td><td>22K</td></tr>
<tr><td>LLaVA-OneVision (Li et al., 2024)</td><td>97K</td></tr>
<tr><td>Cambrian-1 (Tong et al., 2024)</td><td>1K</td></tr></table>
", + "bbox": [ + 519, + 323, + 868, + 420 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Our work represents an initial exploration into mitigating visual forgetting in extended multimodal reasoning chains. We envision future research directions including: (1) Developing hybrid architectures that synergistically enhance both visual retention and intrinsic reasoning capabilities; (2) Investigating adaptive attention mechanisms for real-time multimodal applications; (3) Exploring curriculum learning strategies to progressively strengthen long-chain reasoning capacities. We hope this foundational study will inspire further advances in understanding and improving multimodal reasoning systems for complex real-world applications.", + "bbox": [ + 507, + 445, + 884, + 653 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/6a7493ff659617dcedca570aa164a80dbd4533998a3aa4bf6246b4c28549486d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 127, + 334, + 164, + 355 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/679522b38765370c555efed06eb171b6a97b036167f248e17a8a2a3a6956f88b.jpg", + "image_caption": [ + "Figure 8: Qualitative Results of TVC." + ], + "image_footnote": [], + "bbox": [ + 169, + 329, + 369, + 583 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/e4da8a755a3bfc4d8b87c6a8b3c35c5ae1c52093de2983e4e008244ee89b6da6.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 127, + 596, + 169, + 621 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Which of the cubes is not identical to the unfolded net?", + "bbox": [ + 173, + 600, + 361, + 618 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Choices: (A) A (B) B (C) C (D) D (E) E", + "bbox": [ + 173, + 619, + 352, + 627 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Let's see here. I've got this problem where there's an unfolded net of a cube, and then there are several cubes shown below with letters A through E. The question is asking which one of these cubes is not identical to the unfolded net. Okay, so I need to compare each cube to the net and see if they match or not.", + "bbox": [ + 418, + 331, + 843, + 357 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Let me label the squares for clarity:", + "bbox": [ + 420, + 365, + 559, + 373 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "2. Back face (B)", + "bbox": [ + 420, + 378, + 485, + 388 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "3. Top face (T)", + "bbox": [ + 420, + 390, + 478, + 398 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "4. Bottom face (D)", + "bbox": [ + 420, + 399, + 495, + 407 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "5. Left face (L)", + "bbox": [ + 420, + 407, + 480, + 414 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "6. Right face (R)", + "bbox": [ + 420, + 414, + 487, + 423 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Let's look at each cube option:", + "bbox": [ + 420, + 430, + 539, + 439 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "A) This is a cube with letters on its faces. Wait, no, it's a cube with colors, similar to the others. 
Wait, actually, all cubes have combinations of teal and yellow faces.", + "bbox": [ + 420, + 439, + 789, + 455 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "I need to carefully observe the color placement on each cube.", + "bbox": [ + 420, + 455, + 658, + 463 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Option A", + "bbox": [ + 420, + 464, + 458, + 470 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "- Front face: teal", + "bbox": [ + 420, + 473, + 485, + 480 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "-Top face: yellow", + "bbox": [ + 420, + 482, + 492, + 489 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "-Right face:teal", + "bbox": [ + 420, + 489, + 485, + 497 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "- Left face: yellow", + "bbox": [ + 420, + 498, + 494, + 505 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "-Back face:teal", + "bbox": [ + 420, + 506, + 485, + 512 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "- Bottom face: yellow", + "bbox": [ + 420, + 514, + 505, + 521 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Option E:", + "bbox": [ + 420, + 530, + 458, + 537 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "- Front: teal", + "bbox": [ + 420, + 538, + 495, + 545 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "-Top: yellow", + "bbox": [ + 420, + 546, + 470, + 552 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "-Right:teal", + "bbox": [ + 420, + 554, + 467, + 561 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "- Left: yellow", + "bbox": [ + 420, + 562, + 470, + 569 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "-Back:teal", + "bbox": [ + 420, + 571, + 467, + 577 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "- Bottom: yellow", + "bbox": [ + 420, + 579, + 487, + 586 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Comparing to the net: F(t), T(y), R(t), L(y), B(y), D(t)", + "bbox": [ + 420, + 587, + 631, + 594 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "So, according to the net, B should be yellow, but in Option E, it's teal.", + "bbox": [ + 420, + 596, + 690, + 602 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Therefore, Option E differs from the net in the back face color.", + "bbox": [ + 420, + 604, + 665, + 611 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Hence, Option E is not identical to the unfolded net.", + "bbox": [ + 420, + 612, + 623, + 619 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Therefore, the answer is E.", + "bbox": [ + 420, + 620, + 532, + 627 + ], + "page_idx": 13 + } +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13360/f41871e7-052a-41b7-a381-f27188074bdf_model.json b/data/2025/2503_13xxx/2503.13360/f41871e7-052a-41b7-a381-f27188074bdf_model.json new file mode 100644 index 0000000000000000000000000000000000000000..a7634377f4139c87fbcbe163b02adff7abc99844 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13360/f41871e7-052a-41b7-a381-f27188074bdf_model.json @@ -0,0 +1,2450 @@ +[ + [ + { + "type": "aside_text", + "bbox": [ + 0.023, + 0.297, + 0.061, + 0.701 + ], + "angle": 270, + "content": "arXiv:2503.13360v2 [cs.CV] 3 Jun 2025" + }, + { + "type": "title", + "bbox": [ + 0.167, + 0.09, + 0.833, + 0.131 + ], + "angle": 0, + "content": "Mitigating Visual Forgetting via Take-along Visual Conditioning for Multi-modal Long CoT Reasoning" + }, + { + "type": "text", + "bbox": [ + 
0.223, + 0.146, + 0.777, + 0.163 + ], + "angle": 0, + "content": "Hai-Long Sun\\(^{1,2}\\), Zhun Sun\\(^{4,5}\\), Houwen Peng\\(^{3}\\), Han-Jia Ye\\(^{1,2}\\)†" + }, + { + "type": "text", + "bbox": [ + 0.285, + 0.164, + 0.714, + 0.181 + ], + "angle": 0, + "content": "\\(^{1}\\)School of Artificial Intelligence, Nanjing University" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.181, + 0.815, + 0.197 + ], + "angle": 0, + "content": "\\(^{2}\\)National Key Laboratory for Novel Software Technology, Nanjing University" + }, + { + "type": "text", + "bbox": [ + 0.24, + 0.197, + 0.758, + 0.214 + ], + "angle": 0, + "content": "\\(^{3}\\)Tencent \\(^{4}\\)Center for Language AI Research, Tohoku University" + }, + { + "type": "text", + "bbox": [ + 0.295, + 0.214, + 0.703, + 0.231 + ], + "angle": 0, + "content": "\\(^{5}\\)RIKEN Center for Advanced Intelligence Project" + }, + { + "type": "title", + "bbox": [ + 0.261, + 0.261, + 0.341, + 0.277 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.144, + 0.289, + 0.461, + 0.773 + ], + "angle": 0, + "content": "Recent advancements in Large Language Models (LLMs) have demonstrated enhanced reasoning capabilities, evolving from Chain-of-Thought (CoT) prompting to advanced, product-oriented solutions like OpenAI o1. During our re-implementation of this model, we noticed that in multimodal tasks requiring visual input (e.g., geometry problems), Multimodal LLMs (MLLMs) struggle to maintain focus on the visual information, in other words, MLLMs suffer from a gradual decline in attention to visual information as reasoning progresses, causing text-over-relied outputs. To investigate this, we ablate image inputs during long-chain reasoning. Concretely, we truncate the reasoning process midway, then re-complete the reasoning process with the input image removed. We observe only a \\(\\sim 2\\%\\) accuracy drop on MathVista's test-hard subset, revealing the model's textual outputs dominate the following reasoning process. Motivated by this, we propose Take-along Visual Conditioning (TVC), a strategy that shifts image input to critical reasoning stages and compresses redundant visual tokens via dynamic pruning. This methodology helps the model retain attention to the visual components throughout the reasoning. Our approach achieves state-of-the-art performance on average across five mathematical reasoning benchmarks (+3.4 points vs previous sota), demonstrating the effectiveness of TVC in enhancing multimodal reasoning systems. The project page is available at https://sun-hailong.github.io/projects/TVC." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.785, + 0.26, + 0.801 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.811, + 0.489, + 0.876 + ], + "angle": 0, + "content": "Large Language Models (LLMs) have achieved significant advancements in natural language processing (NLP), particularly in the area of reasoning. These models have evolved from simple" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.261, + 0.885, + 0.502 + ], + "angle": 0, + "content": "prompt-based Chain-of-Thought (CoT) (Wei et al., 2022) techniques to sophisticated product-oriented solutions like OpenAI's o1 (OpenAI, 2024b), DeepSeek-R1 (DeepSeek-AI, 2024), and Qwen-QVQ (QwenTeam, 2024), demonstrating iterative reasoning capabilities for complex multi-step tasks, which enables them to handle tasks that require multi-step thinking, logic, and knowledge integration. 
Recently, several works also extended the CoT reasoning process to MLLMs settings through data-centric innovations. For instance, Math-LLaVA (Shi et al., 2024) pioneers domain-specific training with the MathV360K dataset, while MAmmoTH-VL (Guo et al., 2024) scales up multimodal CoT data generation." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.504, + 0.885, + 0.825 + ], + "angle": 0, + "content": "While such progress is notable in text-based domains, extending these advancements to multimodal scenarios presents unique challenges that transcend traditional language model boundaries. Reasoning in MLLMs requires fused understanding across different modalities, for example, in geometric reasoning tasks the model should interpret and reason about images alongside text. Therefore, the model's ability to integrate and maintain focus on both types of information is critical. Unlike text-only LLMs that reinforce problem context through linguistic recurrence, MLLMs struggle to sustain visual attention across reasoning steps. That is, as the length of the context increases, the model is more inclined to conduct the next step of reasoning based on the previously outputted text rather than the information of the image itself, which eventually leads to the continuation of the wrong text reasoning process and degraded model performance. We term this phenomenon as visual forgetting." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.826, + 0.885, + 0.922 + ], + "angle": 0, + "content": "In this work, we conduct a diagnostic analysis of the visual forgetting effect within a long-chain reasoning system. The system processes multimodal Q&A tasks through a series of interconnected reasoning steps. We demonstrate significantly reduced attentional allocation to visual inputs during multi" + }, + { + "type": "page_footnote", + "bbox": [ + 0.114, + 0.882, + 0.486, + 0.907 + ], + "angle": 0, + "content": "*Work done during their internship/employment at Tencent Hunyuan" + }, + { + "type": "page_footnote", + "bbox": [ + 0.138, + 0.907, + 0.431, + 0.921 + ], + "angle": 0, + "content": "†Corresponding author: yehj@lamda.nju.edu.cn" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.112, + 0.085, + 0.493, + 0.438 + ], + "angle": 0, + "content": "stage reasoning processes. More formally, our analysis: 1) truncates the reasoning process midway and removes the image embeddings; 2) regenerates subsequent reasoning trajectories; 3) evaluates the reasoning outcomes of the pre/post-ablation inference trajectories. Intuitively, the performance gap between normal reasoning and diagnostic reasoning reveals the model's dependency on generated text over original visual evidence. Our results (See Section 2.1) on the MathVista-Hard datasets show that, removing the image midway through the reasoning only causes an insignificant performance degradation (i.e. \\(\\sim 2\\%\\)), indicating that the model completes the reasoning process based primarily on its output text. More importantly, we also observe that the model's dependency on the visual evidence diminishes over time since the early removal of the image inputs could hurt accuracy by \\(\\sim 20\\%\\). This suggests model's reasoning employs both visual and textual information in the early stage, then over-relying on text history which limits full visual reasoning potential." 
+ }, + { + "type": "text", + "bbox": [ + 0.112, + 0.44, + 0.49, + 0.683 + ], + "angle": 0, + "content": "Motivated by this, we propose a novel strategy to mitigate the visual forgetting effect and maintain visual attention throughout the reasoning process. Our methodology compresses and shifts the image input to the later stages of the reasoning process, ensuring the model integrates sufficient visual evidence into its reasoning. This approach results in improved performance and achieves state-of-the-art results on average across five mathematical reasoning benchmarks (i.e., \\(+3.4\\%\\) vs previous sota). Our findings highlight the effectiveness of this strategy in enhancing the performance of multimodal reasoning systems, providing a robust solution to the problem of visual forgetting in long-chain reasoning tasks." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.69, + 0.449, + 0.741 + ], + "angle": 0, + "content": "2 Take-along Visual Conditioning: Sustaining Visual Evidence for Multi-modal Long CoT Reasoning" + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.747, + 0.49, + 0.859 + ], + "angle": 0, + "content": "In this section, we first discuss our motivation and observations of the visual forgetting phenomenon in MLLM reasoning systems (Section 2.1). Then, we propose the Take-alone Visual Conditioning (TVC) approach to mitigate visual forgetting and enhance the model's long-chain reasoning capabilities (Section 2.2)." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.869, + 0.414, + 0.886 + ], + "angle": 0, + "content": "2.1 Capturing the Visual Forgetting" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.89, + 0.49, + 0.922 + ], + "angle": 0, + "content": "Text-based reasoning systems often repeat key terms or phrases (e.g., restating \"triangle ABC\" or" + }, + { + "type": "image", + "bbox": [ + 0.536, + 0.085, + 0.86, + 0.288 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.301, + 0.885, + 0.373 + ], + "angle": 0, + "content": "Figure 1: The visual forgetting phenomenon by removing the image at different reasoning stages. It shows that by the midpoint of the reasoning process, the model becomes less dependent on the image, causing text-over-relied outputs." + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.398, + 0.885, + 0.495 + ], + "angle": 0, + "content": "\"equation (1)\" to keep the problem's context clear and focused. By restating these critical details, they create strong connections between each step of the reasoning process and the original problem's requirements. This repetition ensures the logic stays on track, and consistent to follow." + }, + { + "type": "equation", + "bbox": [ + 0.597, + 0.507, + 0.884, + 0.523 + ], + "angle": 0, + "content": "\\[\n\\mathcal {C} _ {\\mathrm {M L L M}} = f \\left(V, T _ {1}, \\dots , T _ {n}\\right) \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.535, + 0.884, + 0.761 + ], + "angle": 0, + "content": "On the other hand, MLLMs struggle with this approach due to their design. As formalized in Equation (1), visual inputs \\( V \\) are confined to the initial processing stages. Unlike text, these visual evidence aren't revisited or reinforced later in the reasoning process. Because there's no built-in way to keep visual information \"active\" or relevant throughout the task, the system's ability to connect visual details with text or logic weakens over time, leading to a progressive visual attention decay. 
The model is more likely to reason with previously outputted text and becomes particularly problematic in visual reasoning tasks that require continuous validation of spatial relationships." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.761, + 0.885, + 0.888 + ], + "angle": 0, + "content": "We conduct two analytic analyses using the QVQ-72B-Preview model (QwenTeam, 2024) to capture this visual forgetting phenomenon quantitatively and qualitatively. On one hand, we remove the visual inputs at eight different stages to observe their impact. On the other hand, we depict the attention matrix to directly track the attention decay of the visual evidence over time." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.89, + 0.884, + 0.922 + ], + "angle": 0, + "content": "Progressive Image Removing. To assess the extent to which the reasoning process depends on" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.139, + 0.085, + 0.462, + 0.279 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.146, + 0.281, + 0.462, + 0.392 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.404, + 0.49, + 0.492 + ], + "angle": 0, + "content": "Figure 2: Illustration of layer-level and token-level attention weights. (a) The layer-level attention weights of image tokens across different response token positions. (b) The token-level attention weights at the middle layer. It shows that the model's attention to the image gradually decreases during the reasoning process." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.518, + 0.49, + 0.839 + ], + "angle": 0, + "content": "previously generated text, we first perform a normal reasoning process, then reset the KV cache at various stages of the reasoning process. This effectively removed image tokens and forced subsequent steps to rely solely on text-based information. Specifically, the reasoning process was divided into \\( K = 8 \\) intervals based on output token counts regardless of the length of the normal reasoning process, with visual input progressively masked by resetting the KV cache at different cutoff positions \\( k \\in \\{0,1,\\dots ,K - 1\\} \\). In other words, the first \\( k / 8 \\) part of the normal reasoning process is now employed as a prompt, and the model now re-complete the reasoning process without image inputs. Furthermore, we discovered that for some questions (30.9% of the MathVista dataset), the model could answer correctly using only the text-based prompt. Consequently, we excluded these questions and designated the remaining dataset as the MathVista-Hard dataset." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.842, + 0.49, + 0.922 + ], + "angle": 0, + "content": "As illustrated in Figure 1, we observe that the performance is 40.9 at \\( k = 4 \\) and 43.1 for the normal reasoning. This minimal \\( 2.2\\% \\) gap suggests that the model overly relies on text outputs during the later reasoning stages rather than on the image." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.085, + 0.885, + 0.23 + ], + "angle": 0, + "content": "This indicates that once the model completes the half stages of reasoning, it becomes overly reliant on its own generated CoT. Over-reliance on this self-generated reasoning worsens the model's tendency to forget visual evidence over time, which can result in hallucinations during lengthy, multi-step reasoning tasks. 
This result also reveals a near-exponential forgetting effect in performance as \\( k \\) increases, which we formalize as:" + }, + { + "type": "equation", + "bbox": [ + 0.516, + 0.242, + 0.884, + 0.275 + ], + "angle": 0, + "content": "\\[\n\\mathcal {R} (k) = \\mathcal {R} _ {\\text {f u l l}} - \\Delta_ {\\text {v i s u a l}} (k), \\quad \\Delta_ {\\text {v i s u a l}} (k) \\propto e ^ {- k}, \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.277, + 0.884, + 0.34 + ], + "angle": 0, + "content": "where \\(\\mathcal{R}_{\\mathrm{full}}\\) represents the full multimodal reasoning performance, and \\(\\Delta_{\\mathrm{visual}}(k)\\) quantifies the performance degradation caused by visual masking at position \\(k\\)." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.341, + 0.885, + 0.518 + ], + "angle": 0, + "content": "Visual Attention Decay. To more intuitively observe the changes in visual feature attention across different stages of reasoning, we investigate the layer-level attention weights at various response token positions and the token-level attention weights at layer 16. Given that MLLM's attention weights reflect the focus on tokens and influence the decision-making process, we first analyze the attention weights at each layer of the MLLM. Specifically, for each layer, we calculate the proportion of attention weights on all image tokens." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.519, + 0.884, + 0.809 + ], + "angle": 0, + "content": "As shown in Figure 2(a), we observe that at the \\(1/8\\) position of the reasoning process, the model effectively focuses on the visual inputs. However, as reasoning progresses, despite fluctuations in attention to visual tokens across layers, the model's overall attention to visual evidence gradually decreases, leading to visual forgetting. Next, following the methodology of FastV (Chen et al., 2024a), we analyze the attention maps for several cases and find that the model predominantly focuses on previously generated text tokens rather than the input image. After approximately \\(20\\%\\) tokens, the existence of image inputs on attention maps diminishes significantly, as illustrated in Figure 2(b). Both of the observations indicate a phenomenon of visual memory degradation, revealing the model's limitations in maintaining consistent attention to visual inputs throughout the reasoning process." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.82, + 0.807, + 0.836 + ], + "angle": 0, + "content": "2.2 Take-along Visual Conditioning" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.842, + 0.885, + 0.922 + ], + "angle": 0, + "content": "In this section, we introduce our solution to tackle this problem in detail. We propose Take-along Visual Conditioning (TVC), a dynamic image retention mechanism that re-introduces visual inputs at strategic intervals throughout the reasoning pro" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.49, + 0.228 + ], + "angle": 0, + "content": "cess. TVC mitigates visual attention decay by periodically reaffirming visual information, akin to human problem-solving behaviors where individuals frequently refer back to visual inputs. Our approach enhances the model's ability to incorporate visual information continuously, improving its long-chain reasoning capacity by ensuring that visual evidence is revisited during critical decision-making moments." 
+ }, + { + "type": "text", + "bbox": [ + 0.117, + 0.231, + 0.49, + 0.504 + ], + "angle": 0, + "content": "The TVC method consists of two key stages: training and testing. In the training stage, we introduce Dynamic Visual Reaffirmation (DVR), which guides the model through iterative reinforcement of visual evidence during long reasoning chains. In the testing phase, we present Periodic Visual Calibration (PVC), where visual reactivation is periodically triggered at self-reflection intervals. To prevent the model from forgetting previous text-based reasoning steps due to an excessive number of image tokens, we adopt image compression through adaptive pooling to reduce the image token size while preserving spatial semantics. This dual-modality engagement mechanism ensures consistent interaction between textual reasoning and visual evidence. We present an illustration of the TVC system in Figure 3." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.506, + 0.49, + 0.796 + ], + "angle": 0, + "content": "Dynamic Visual Reaffirmation. Our dynamic visual reaffirmation training strategy combines two key components: 1) Data Curation: We curate long-chain reasoning data using the pipeline described in Section 3, sourced from high-quality academic datasets (e.g., MathV360K, Geo170K, and LLaVA-OneVision). This process yields a high-quality dataset optimized for long-chain reasoning training. 2) Visual Content Injection: While the curated data ensures correctness, the QVQ model inherently lacks the ability to iteratively reference visual inputs during reasoning. Therefore, we manually re-inject the visual content (visual embeddings and bridging prompt) to triggers visual re-activation at predefined self-reflection intervals. Specifically, given the initial multimodal input \\(\\mathcal{M}_0 = (V,T_0)\\), DVR performs visual reactivation at self-reflection intervals \\(\\{r_1,\\dots,r_m\\}\\):" + }, + { + "type": "equation", + "bbox": [ + 0.126, + 0.81, + 0.488, + 0.872 + ], + "angle": 0, + "content": "\\[\n\\mathcal {M} _ {i} = \\left(V, \\left[ T _ {\\text {p r e v}}; \\underbrace {\\text {P r o m p t}} _ {\\text {R e - a c t i v a t i o n}}; T _ {\\text {n e w}} \\right]\\right) \\text {a t s t e p} r _ {i} \\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.874, + 0.489, + 0.922 + ], + "angle": 0, + "content": "where \\( T_{\\mathrm{prev}} \\) represents the previous reasoning steps and \\( T_{\\mathrm{new}} \\) denotes the new reasoning steps that are based on prior reasoning and reintroduce focus on" + }, + { + "type": "image", + "bbox": [ + 0.515, + 0.083, + 0.878, + 0.154 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.513, + 0.156, + 0.878, + 0.281 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.293, + 0.884, + 0.337 + ], + "angle": 0, + "content": "Figure 3: Overview of TVC System Design. We enable the model to have take-along visual conditioning capabilities through two stages: training and inference." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.367, + 0.884, + 0.447 + ], + "angle": 0, + "content": "visual information. The bridging prompt is employed to hint the existence of the image, e.g., Let me see the image again. To improve efficiency, our initial implementation adopts midpoint reactivation \\( (m = 1, r_1 = 0.5L \\) for \\( L \\)-step chains)." 
+ }, + { + "type": "text", + "bbox": [ + 0.508, + 0.45, + 0.885, + 0.643 + ], + "angle": 0, + "content": "During the self-reflection phase of the reasoning steps, we randomly select \\( m \\) positions to inject visual content. Specifically, we inject reactivation prompts and regenerate visual captions using the model's intrinsic capabilities. This forces joint attention to both textual reasoning and visual evidence. By leveraging the model's intrinsic image captioning capability, we continuously reinforce visual information throughout the reasoning process. This ensures that the model incorporates image evidence during reflection, rather than relying solely on textual reasoning." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.646, + 0.884, + 0.839 + ], + "angle": 0, + "content": "Periodic Visual Calibration. Calibrating visual attention plays a crucial role in enhancing long-chain reasoning capabilities. After training our model, we further design the periodic visual calibration process. Specifically, we coordinate operations during reactivation as follows: 1) Token Compression: We first compress visual tokens using average pooling to prevent text-based reasoning from forgetting visual information. 2) Visual Cache Reset: We then prepend an instruction (bridging prompt) to reintroduce the image and re-inject image tokens by resetting the KV cache of the generation process." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.842, + 0.885, + 0.922 + ], + "angle": 0, + "content": "We also provide an example of how PVC is implemented in the case study section (Section 4.5). PVC both improves reasoning efficiency and prevents the model from forgetting previous reasoning steps due to an overload of visual tokens." + } + ], + [ + { + "type": "image", + "bbox": [ + 0.121, + 0.083, + 0.482, + 0.237 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.114, + 0.248, + 0.49, + 0.306 + ], + "angle": 0, + "content": "Figure 4: Data Generation Pipeline of TVC. We use iterative distillation to collect long-chain reasoning data, followed by a comprehensive response filtering process to ensure high-quality reasoning." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.34, + 0.43, + 0.372 + ], + "angle": 0, + "content": "3 Data-Centric Implementation of Multimodal Reasoning System" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.393, + 0.49, + 0.442 + ], + "angle": 0, + "content": "In this section, we briefly describe our implementation of the multimodal reasoning system through a scalable curated data generation pipeline." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.469, + 0.469, + 0.484 + ], + "angle": 0, + "content": "3.1 Long-Chain Reasoning Data Collection" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.498, + 0.49, + 0.74 + ], + "angle": 0, + "content": "Prior research (Qin et al., 2024; Jiang et al., 2024) has identified two dominant paradigms for constructing long-chain reasoning systems: (1) explicit search-based methods, which utilize structures such as Monte Carlo Tree Search (MCTS) combined with specially trained reward models to guide the search process toward optimal solutions, and (2) instruction distillation approaches, which fine-tune models on curated long chain-of-thought (CoT) datasets. To efficiently develop an MLLM with long-chain reasoning capabilities, we adopt the distillation paradigm. 
In this section, we describe the distillation process and present the data generation pipeline aimed at enhancing the reasoning capability of MLLM." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.745, + 0.491, + 0.922 + ], + "angle": 0, + "content": "Our distillation pipeline begins with aggregating publicly available reasoning datasets (e.g., MathV360K (Shi et al., 2024) and Geo170K (Gao et al., 2023)). Through an iterative process of distillation and response filtering, we progressively enhance the model's long-chain reasoning capabilities. Specifically, the teacher model, QVQ-72B-Preview, generates long COT reasoning data, while the student model, Qwen2-VL, undergoes supervised fine-tuning on the filtered CoT data to enhance its reasoning performance." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.085, + 0.882, + 0.102 + ], + "angle": 0, + "content": "3.2 Iterative Distillation with Reject Sampling" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.106, + 0.884, + 0.186 + ], + "angle": 0, + "content": "After obtaining the long-chain reasoning responses, we employ an assessment pipeline to ensure data quality. Specifically, we introduce a dual-temperature sampling mechanism to optimize data quality through variance exploitation." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.187, + 0.882, + 0.234 + ], + "angle": 0, + "content": "Deterministic Initial Sampling. For the first-stage data generation, we employ temperature \\(\\tau = 0\\) to obtain the QVQ's most confident reasoning paths:" + }, + { + "type": "equation", + "bbox": [ + 0.53, + 0.247, + 0.884, + 0.265 + ], + "angle": 0, + "content": "\\[\n\\mathcal {D} _ {\\text {i n i t}} = \\left\\{\\left(\\mathbf {x}, \\mathbf {y} _ {\\mathrm {C o T}}\\right) \\sim \\mathcal {P} _ {\\mathrm {Q V Q}} (\\cdot | \\mathbf {x}; \\tau = 0) \\right\\}, \\tag {4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.277, + 0.884, + 0.371 + ], + "angle": 0, + "content": "where \\(\\mathbf{y}_{\\mathrm{CoT}}\\) represents the response generated by QVQ. This ensures that the model generates the most probable reasoning path for each input. Subsequently, we use these high-confidence responses as a foundation for further refinement in later stages of data generation." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.373, + 0.885, + 0.518 + ], + "angle": 0, + "content": "Answer-Centric Reject Sampling. To ensure the quality of the data, we employ an LLM-as-a-Judge approach for answer-centric reject sampling. Specifically, we use a strong LLM (e.g., Qwen2.5-72B-Instruct) as the verifier and prompt the model to output a 'yes' or 'no' in JSON format, indicating whether the long-chain reasoning answer is correct, i.e., \\(\\mathcal{V}_{\\mathrm{valid}} = \\{\\mathbf{y}_{\\mathrm{CoT}}|\\mathrm{Match}(\\mathbf{y}_{\\mathrm{CoT}},\\mathbf{y}_{\\mathrm{gt}})\\}\\). After this process, we obtain \\(\\sim 200\\mathrm{K}\\) high-quality samples." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.518, + 0.884, + 0.631 + ], + "angle": 0, + "content": "Best-of-N Error Correction. While initial sampling with temperature \\(\\tau = 0\\) yields a set of data, there are substantial residual errors \\((\\mathcal{D}_{\\mathrm{error}})\\) where QVQ's responses \\((\\mathbf{y}_{\\mathrm{CoT}})\\) unmatched with ground truth answers (55.8% of \\(\\mathcal{D}_{\\mathrm{init}}\\)). 
To fully leverage the available open-source data, we implement a contrastive regeneration strategy using \\(\\tau = 1\\):" + }, + { + "type": "equation", + "bbox": [ + 0.509, + 0.641, + 0.885, + 0.696 + ], + "angle": 0, + "content": "\\[\n\\mathcal {D} _ {\\text {c o r r e c t e d}} = \\bigcup_ {\\mathbf {x} \\in \\mathcal {D} _ {\\text {e r r o r}}} \\left\\{\\arg \\min _ {\\mathbf {y} ^ {i} \\in \\mathcal {Y} _ {\\text {v a l i d}}} \\operatorname {L e n g t h} \\left(\\mathbf {y} ^ {i}\\right) \\right\\} _ {i = 1} ^ {8}, \\tag {5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.697, + 0.884, + 0.777 + ], + "angle": 0, + "content": "where \\(\\mathcal{V}_{\\mathrm{valid}}\\) denotes the subset of 8 regenerated responses (at \\(\\tau = 1\\)) that satisfy answer correctness. For cases with multiple valid responses, we prioritize minimal reasoning token length to enhance conciseness and efficiency:" + }, + { + "type": "equation", + "bbox": [ + 0.525, + 0.788, + 0.884, + 0.832 + ], + "angle": 0, + "content": "\\[\n\\operatorname {L e n g t h} (\\mathbf {y}) = \\sum_ {t = 1} ^ {T} \\mathbb {I} \\left(w _ {t} \\notin \\{[ \\mathrm {P A D} ], [ \\mathrm {S E P} ] \\}\\right) \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.842, + 0.885, + 0.922 + ], + "angle": 0, + "content": "This two-stage sampling achieves partial error recovery while reducing the average token count to minimize meaningless self-reflection, effectively balancing correctness and efficiency. To further enhance the quality of the dataset, we also perform" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.488, + 0.165 + ], + "angle": 0, + "content": "dynamic token truncation and reflection word pruning. This approach helps reduce the ineffective reflection parts in the dataset, thereby mitigating the issue of the model being unable to complete its answers. More details are in Section B." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.176, + 0.262, + 0.193 + ], + "angle": 0, + "content": "4 Experiments" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.201, + 0.49, + 0.378 + ], + "angle": 0, + "content": "We conduct comprehensive experiments across multiple vision-language benchmarks to demonstrate the effectiveness of our method. Section 4.1 provides implementation details for TVC. In Section 4.3, we present key results on visual reasoning tasks, supplemented with findings on general image understanding. Section 4.4 details ablation studies to evaluate the influence of critical design decisions. Finally, Section 4.5 presents visualizations and case studies to illustrate the method's operational characteristics and insights derived from it." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.388, + 0.287, + 0.404 + ], + "angle": 0, + "content": "4.1 Training Recipe" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.408, + 0.49, + 0.699 + ], + "angle": 0, + "content": "We integrate the TVC system with MLLMs of varying scales to demonstrate the effectiveness and generalizability of our approach. Initial implementation with Qwen2-VL-7B-Instruct confirmed the method's validity. To further evaluate scalability and establish robust baselines against state-of-the-art MLLMs, we expanded the approach to a 72B model. Prior to training TVC, we follow the long-chain reasoning pipeline described earlier. We use the LLaMA-Factory (Zheng et al., 2024) framework, with a learning rate of 2e-5, a batch size of 256, and 5 training epochs. 
During optimization, only the LLM parameters and cross-modal connector are trained, while the visual encoder remains frozen. The training process requires 10 hours on a \\(64\\times \\mathrm{H}20\\) GPU setup for the 7B model and approximately 4 days for the 72B model. Additional details are presented in Table 4." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.709, + 0.295, + 0.724 + ], + "angle": 0, + "content": "4.2 Evaluation Setup" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.729, + 0.49, + 0.922 + ], + "angle": 0, + "content": "We conduct a comprehensive experimental analysis across various visual reasoning benchmarks that require advanced visual reasoning skills. To ensure a well-rounded evaluation, we select several widely recognized and representative benchmarks, including MathVista (Lu et al., 2024a), MathVerse (Zhang et al., 2024a), MathVision (Wang et al., 2024a), Dynamath (Zou et al., 2024), and OlympiadBench (He et al., 2024). MathVista consists of 6,141 examples that require fine-grained, deep visual understanding and compositional reasoning. MathVerse contains 2,612 multi-subject" + }, + { + "type": "image", + "bbox": [ + 0.515, + 0.085, + 0.878, + 0.275 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.508, + 0.288, + 0.884, + 0.332 + ], + "angle": 0, + "content": "Figure 5: Ablations on the amount of training data. TVC benefits from data scaling, continually improving the reasoning capabilities." + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.357, + 0.885, + 0.614 + ], + "angle": 0, + "content": "math problems from a variety of sources. MathVision includes 3,040 high-quality mathematical problems sourced from established mathematics competitions. OlympiadBench features 8,476 bilingual multimodal problems tailored to Olympic-level mathematics and physics competitions. These benchmarks evaluate problem-solving abilities in mathematics, and following standard practice, we use GPT-4o-mini as the evaluator. Following the VLMEvalKit guidelines, we exclude the text-only split from MathVerse and the theorem-proof sections from OlympiadBench. For a fair comparison, we conduct evaluations using the testmini sets of MathVerse and MathVista. Fast evaluation is made possible through the use of the VLMEvalKit (Duan et al., 2024) and vLLM (Kwon et al., 2023)." + }, + { + "type": "title", + "bbox": [ + 0.508, + 0.627, + 0.825, + 0.643 + ], + "angle": 0, + "content": "4.3 Main Results on Visual Reasoning" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.648, + 0.885, + 0.921 + ], + "angle": 0, + "content": "The results in Table 1 demonstrate the effectiveness and generalizability of TVC across multiple visual reasoning benchmarks. Notably, our model achieves a \\(16.7\\%\\) improvement over QVQ-72B-Preview on MathVision and a \\(17.6\\%\\) gain on MathVerse, highlighting enhanced reasoning capabilities. Unlike conventional datasets where textual descriptions may include implicit visual cues, MathVerse is an all-around visual math benchmark specifically designed for equitable, in-depth evaluation of MLLMs. The significant gains on MathVerse underscore the significance of TVC, given the benchmark's unique design principles. Furthermore, the TVC-7B model, despite its smaller size compared to counterparts, achieves competitive performance, even outperforming leading MLLMs in multiple cases. 
This demonstrates the robustness of our" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.083, + 0.885, + 0.129 + ], + "angle": 0, + "content": "Table 1: Results on Visual Reasoning Tasks. We conduct evaluation experiments across 6 benchmarks, covering both general reasoning and task-specific reasoning assessments. TVC exhibits notable effectiveness and generalizability when applied to Qwen2-VL, surpassing other state-of-the-art MLLMs by a large margin." + }, + { + "type": "table", + "bbox": [ + 0.119, + 0.136, + 0.885, + 0.332 + ], + "angle": 0, + "content": "
<table><tr><td>Model</td><td>Size</td><td>MathVista</td><td>MathVision</td><td>MathVerse</td><td>Dynamath</td><td>OlympiadBench</td><td>Average</td></tr>
<tr><td>MiniCPM-V-2.6 (Yadav et al., 2025)</td><td>8B</td><td>60.8</td><td>18.4</td><td>17.6</td><td>9.8</td><td>-</td><td>-</td></tr>
<tr><td>VITA-1.5 (Fu et al., 2025a)</td><td>8B</td><td>66.2</td><td>19.5</td><td>23.4</td><td>9.6</td><td>-</td><td>-</td></tr>
<tr><td>LLaVA-COT (Xu et al., 2024)</td><td>11B</td><td>52.5</td><td>19.9</td><td>22.6</td><td>7.8</td><td>-</td><td>-</td></tr>
<tr><td>Qwen2-VL (Wang et al., 2024b)</td><td>7B</td><td>60.9</td><td>16.3</td><td>24.6</td><td>11.0</td><td>3.2</td><td>23.2</td></tr>
<tr><td>InternVL2.5 (Chen et al., 2024b)</td><td>8B</td><td>64.5</td><td>17.0</td><td>22.8</td><td>9.4</td><td>0.1</td><td>22.8</td></tr>
<tr><td>POINTS1.5 (Liu et al., 2024b)</td><td>8B</td><td>66.4</td><td>22.0</td><td>26.6</td><td>14.2</td><td>-</td><td>-</td></tr>
<tr><td>Ovis1.6-Gemma2 (Lu et al., 2024b)</td><td>27B</td><td>70.2</td><td>20.6</td><td>37.8</td><td>17.0</td><td>-</td><td>-</td></tr>
<tr><td>InternVL2.5-COT (Chen et al., 2024b)</td><td>78B</td><td>71.4</td><td>32.5</td><td>40.1</td><td>28.5</td><td>-</td><td>-</td></tr>
<tr><td>LLaVA-OneVision (Li et al., 2024)</td><td>72B</td><td>67.1</td><td>25.3</td><td>27.2</td><td>15.6</td><td>-</td><td>-</td></tr>
<tr><td>Qwen2-VL (Wang et al., 2024b)</td><td>72B</td><td>69.7</td><td>26.6</td><td>36.2</td><td>20.0</td><td>10.3</td><td>32.6</td></tr>
<tr><td>QVQ-72B-preview (QwenTeam, 2024)</td><td>72B</td><td>71.4</td><td>35.9</td><td>41.5</td><td>30.7</td><td>20.4</td><td>40.0</td></tr>
<tr><td>TVC</td><td>7B</td><td>68.1</td><td>22.7</td><td>38.9</td><td>15.1</td><td>9.8</td><td>30.9</td></tr>
<tr><td>TVC</td><td>72B</td><td>72.2</td><td>41.9</td><td>48.8</td><td>30.0</td><td>24.3</td><td>43.4</td></tr></table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.113, + 0.353, + 0.49, + 0.41 + ], + "angle": 0, + "content": "Table 2: Ablations on the TVC System. TVC enhances reasoning capabilities, showing significant improvements on both general and task-specific reasoning benchmarks." + }, + { + "type": "table", + "bbox": [ + 0.119, + 0.421, + 0.486, + 0.501 + ], + "angle": 0, + "content": "
<table><tr><td>Method</td><td>MathVista</td><td>MathVision</td><td>MathVerse</td><td>Avg</td></tr>
<tr><td>Baseline</td><td>60.9</td><td>16.3</td><td>24.6</td><td>33.9</td></tr>
<tr><td>Vanilla - Direct SFT</td><td>63.5</td><td>19.8</td><td>31.6</td><td>38.3</td></tr>
<tr><td>TVC w/o PVC</td><td>66.7</td><td>21.8</td><td>35.6</td><td>41.4</td></tr>
<tr><td>TVC w/o DVR</td><td>66.2</td><td>22.3</td><td>34.7</td><td>41.0</td></tr>
<tr><td>TVC Full</td><td>68.1</td><td>22.7</td><td>38.9</td><td>43.2</td></tr></table>
" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.528, + 0.49, + 0.721 + ], + "angle": 0, + "content": "methodology even with more compact model configurations. Beyond task-specific visual reasoning, we extend our evaluation to general reasoning benchmarks (e.g., MathVista), where TVC consistently delivers strong performance, achieving a \\(3.6\\%\\) improvement over the original Qwen2-VL72B model. These results emphasize TVC's ability to excel in tasks requiring both perception and reasoning. Collectively, the findings indicate that TVC not only advances performance in specialized visual reasoning tasks but also offers substantial benefits in broader application scenarios." + }, + { + "type": "title", + "bbox": [ + 0.114, + 0.736, + 0.295, + 0.752 + ], + "angle": 0, + "content": "4.4 Further Analysis" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.759, + 0.49, + 0.84 + ], + "angle": 0, + "content": "In this section, we conduct comprehensive experiments to evaluate the design choices of TVC, emphasizing the key contributions of our approach. We also present a case study to further illustrate the qualitative effectiveness of TVC." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.842, + 0.49, + 0.922 + ], + "angle": 0, + "content": "Effectiveness of TVC system. To evaluate the effectiveness of the TVC system, we conduct comprehensive ablation experiments on various components using the Qwen2-VL-7B as the Baseline. We begin by performing supervised fine-tuning on the" + }, + { + "type": "table_caption", + "bbox": [ + 0.543, + 0.353, + 0.848, + 0.369 + ], + "angle": 0, + "content": "Table 3: Ablations on Token Compression." + }, + { + "type": "table", + "bbox": [ + 0.514, + 0.378, + 0.88, + 0.438 + ], + "angle": 0, + "content": "
MethodMathVistaMathVisionMathVerseAvg
TVC Baseline68.321.539.643.1
+ 2x2 Avg Pooling67.822.938.343.0
+ 4x4 Avg Pooling68.122.738.943.2
" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.468, + 0.885, + 0.693 + ], + "angle": 0, + "content": "Qwen2-VL-7B model with the data from Section 3, referred to as Vanilla - Direct SFT. Furthermore, we apply the DVR training approach outlined in Section 2.2, which increases the focus on the visual information in the training data, enabling the model to implicitly learn visual conditioning capabilities. Additionally, during the testing phase, we experiment with resetting the visual KV cache midway through the reasoning process, and after compressing the visual tokens, we add them to the end of the reasoning steps. This strategy allows the model to further observe the image content during its thought process, mitigating the visual forgetting and suppressing hallucinations." + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.697, + 0.885, + 0.922 + ], + "angle": 0, + "content": "As demonstrated in Table 2, the results highlight that the TVC system is crucial for enhancing visual reasoning capabilities. When using only the SFT training data without the DVR strategy in TVC (Vanilla - Direct SFT), improvements in visual reasoning tasks are limited, as the model lacks the ability to reflect on visual information. Furthermore, models trained with the DVR strategy alone still produce sub-optimal results, underscoring the importance of a comprehensive approach that integrates PVC and DVR. The contributions of PVC and DVR are relatively balanced in enhancing the reasoning capabilities. To further validate the effectiveness of our visual token compression, we" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.117, + 0.083, + 0.885, + 0.429 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.113, + 0.439, + 0.884, + 0.469 + ], + "angle": 0, + "content": "Figure 6: Case Study of TVC. TVC effectively re-examines the image during the reflection process to correct mistakes, guiding the model to the correct answer." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.494, + 0.489, + 0.623 + ], + "angle": 0, + "content": "conduct experiments with different pooling methods. As shown in Table 3, the TVC Baseline represents the method without image compression. We observe that the use of pooling methods has little impact on the model's capabilities. Utilizing \\(4 \\times 4\\) average pooling for compression not only enhances the model's inference efficiency but also achieves a slight performance improvement." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.628, + 0.49, + 0.852 + ], + "angle": 0, + "content": "Data Scaling Law. To validate the scalability of our approach, we also conduct a data scaling law experiment for training. As shown in Figure 5, we compare models across various data size: 50K, 100K, 150K, and 200K samples, and present the performance relative to the base model (Qwen2-VL-7B). Our findings show that as the dataset size increases, the model's performance continues to improve. However, it is evident that with a small dataset, the model's reasoning ability cannot reach an optimal level solely through SFT. Therefore, data scaling proves highly effective during SFT training, and the results indicate that TVC benefits from increased data." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.858, + 0.49, + 0.922 + ], + "angle": 0, + "content": "To prevent underfitting from becoming a performance bottleneck, we increase the number of training epochs with smaller datasets, which further improves model performance. 
This indicates that in" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.494, + 0.885, + 0.558 + ], + "angle": 0, + "content": "sufficient data or epochs lead to undertrained models, making it difficult to learn long-chain reasoning. Increasing both the data and training epochs can effectively alleviate this issue." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.57, + 0.645, + 0.587 + ], + "angle": 0, + "content": "4.5 Case Study" + }, + { + "type": "text", + "bbox": [ + 0.507, + 0.593, + 0.884, + 0.85 + ], + "angle": 0, + "content": "We present a comprehensive case study in Figure 6 to illustrate the improvements of our TVC approach. Specifically, we provide an example that highlights advancements in the reasoning process. In this example, the model is tasked with carefully observing the objects within an image and, after eliminating certain shapes and attributes, providing the count of the remaining objects. During base CoT reasoning, the model fails to check the attributes of each object and only focuses on their shapes, leading to an incorrect final answer. Subsequently, the model learns to re-focus on the image and carefully describe the attributes of each object in detail. This process allows the model to identify the issue in its previous reasoning and provide the correct answer." + }, + { + "type": "title", + "bbox": [ + 0.509, + 0.863, + 0.642, + 0.878 + ], + "angle": 0, + "content": "5 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.89, + 0.885, + 0.922 + ], + "angle": 0, + "content": "In this paper, we introduce Take-along Visual Conditioning (TVC), a novel strategy designed to en" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.113, + 0.085, + 0.493, + 0.281 + ], + "angle": 0, + "content": "hance the reasoning capabilities of MLLMs by addressing the challenge of visual attention degradation during long-chain reasoning. By dynamically shifting the image input to critical stages of reasoning and compressing redundant visual tokens, we ensure that the model maintains focus on the visual information throughout the process. Our extensive evaluation on several mathematical reasoning benchmarks demonstrates the effectiveness of TVC in improving multimodal reasoning, providing a robust approach to equip MLLMs with better visual grounding for complex tasks." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.291, + 0.281, + 0.308 + ], + "angle": 0, + "content": "Acknowledgments" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.317, + 0.49, + 0.448 + ], + "angle": 0, + "content": "This work is partially supported by National Key R&D Program of China (2024YFE0202800), NSFC (62376118), Key Program of Jiangsu Science Foundation (BK20243012), CCF-Tencent Rhino-Bird Open Research Fund (RAGR20240101), AMED (JP25wm0625405), and Collaborative Innovation Center of Novel Software Technology and Industrialization." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.459, + 0.221, + 0.475 + ], + "angle": 0, + "content": "Limitations" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.486, + 0.49, + 0.664 + ], + "angle": 0, + "content": "Despite advancements, our method may still exhibit several limitations. First, for highly complex reasoning tasks requiring sophisticated analytical capabilities, simply increasing visual revisits proves insufficient. In contrast, it is crucial to enhance the model's inherent reasoning capacity. 
Second, our method assumes the availability of delayed visual processing, making it potentially unsuitable for real-time applications requiring instantaneous visual feedback, such as robotic navigation or time-sensitive decision-making scenarios." + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.69, + 0.214, + 0.705 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.115, + 0.714, + 0.49, + 0.742 + ], + "angle": 0, + "content": "Meta AI. 2024. Build the future of ai with meta llama 3. Technical report, Meta AI." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.753, + 0.489, + 0.78 + ], + "angle": 0, + "content": "anthropic. 2024. Introducing the next generation of claude. Technical report, anthropic." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.79, + 0.49, + 0.843 + ], + "angle": 0, + "content": "Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. 2023. Qwen technical report. arXiv:2309.16609." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.855, + 0.49, + 0.921 + ], + "angle": 0, + "content": "Liang Chen, Haozhe Zhao, Tianyu Liu, Shuai Bai, Junyang Lin, Chang Zhou, and Baobao Chang. 2024a. An image is worth 1/2 tokens after layer 2: Plug-and-play inference acceleration for large vision-language models. Preprint, arXiv:2403.06764." + }, + { + "type": "list", + "bbox": [ + 0.115, + 0.714, + 0.49, + 0.921 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.511, + 0.086, + 0.883, + 0.166 + ], + "angle": 0, + "content": "Zhe Chen, Weiyun Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Erfei Cui, Jinguo Zhu, Shenglong Ye, Hao Tian, Zhaoyang Liu, et al. 2024b. Expanding performance boundaries of open-source multimodal models with model, data, and test-time scaling. arXiv preprint arXiv:2412.05271." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.176, + 0.885, + 0.243 + ], + "angle": 0, + "content": "Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Zhong Muyan, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. 2023. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. arXiv:2312.14238." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.253, + 0.885, + 0.293 + ], + "angle": 0, + "content": "DeepSeek-AI. 2024. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. Technical report, DeepSeek." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.304, + 0.885, + 0.37 + ], + "angle": 0, + "content": "Yuhao Dong, Zuyan Liu, Hai-Long Sun, Jingkang Yang, Winston Hu, Yongming Rao, and Ziwei Liu. 2024. Insight-v: Exploring long-chain visual reasoning with multimodal large language models. arXiv preprint arXiv:2411.14432." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.381, + 0.885, + 0.459 + ], + "angle": 0, + "content": "Haodong Duan, Junming Yang, Yuxuan Qiao, Xinyu Fang, Lin Chen, Yuan Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, et al. 2024. Vlmealkit: An open-source toolkit for evaluating large multi-modality models. arXiv preprint arXiv:2407.11691." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.47, + 0.885, + 0.537 + ], + "angle": 0, + "content": "Chaoyou Fu, Haojia Lin, Xiong Wang, Yi-Fan Zhang, Yunhang Shen, Xiaoyu Liu, Yangze Li, Zuwei Long, Heting Gao, Ke Li, et al. 2025a. Vita-1.5: Towards gpt-4o level real-time vision and speech interaction. arXiv preprint arXiv:2501.01957." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.547, + 0.885, + 0.601 + ], + "angle": 0, + "content": "Jiale Fu, Yuchu Jiang, Junkai Chen, Jiaming Fan, Xin Geng, and Xu Yang. 2025b. Speculative ensemble: Fast large language model ensemble via speculation. arXiv preprint arXiv:2502.01662." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.611, + 0.885, + 0.678 + ], + "angle": 0, + "content": "Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jianhua Han, Hang Xu, Zhenguo Li, et al. 2023. G-llava: Solving geometric problem with multi-modal large language model. arXiv preprint arXiv:2312.11370." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.688, + 0.885, + 0.755 + ], + "angle": 0, + "content": "Jarvis Guo, Tuney Zheng, Yuelin Bai, Bo Li, Yubo Wang, King Zhu, Yizhi Li, Graham Neubig, Wenhu Chen, and Xiang Yue. 2024. Mammoth-vl: Eliciting multimodal reasoning with instruction tuning at scale. arXiv preprint arXiv:2412.05237." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.765, + 0.885, + 0.845 + ], + "angle": 0, + "content": "Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. 2024. Olympiad-bench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems. arXiv preprint arXiv:2402.14008." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.855, + 0.885, + 0.921 + ], + "angle": 0, + "content": "Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, and Ranjay Krishna. 2024. Visual sketchpad: Sketching as a visual chain of thought for multimodal language models. arXiv preprint arXiv:2406.09403." + }, + { + "type": "list", + "bbox": [ + 0.511, + 0.086, + 0.885, + 0.921 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.152 + ], + "angle": 0, + "content": "Jinhao Jiang, Zhipeng Chen, Yingqian Min, Jie Chen, Xiaoxue Cheng, Jiapeng Wang, Yiru Tang, Haoxiang Sun, Jia Deng, Wayne Xin Zhao, et al. 2024. Technical report: Enhancing llm reasoning with reward-guided tree search. arXiv preprint arXiv:2411.11694." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.164, + 0.488, + 0.256 + ], + "angle": 0, + "content": "Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.268, + 0.488, + 0.333 + ], + "angle": 0, + "content": "Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, et al. 2024. Llavaonevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.347, + 0.488, + 0.412 + ], + "angle": 0, + "content": "Chunyuan Li, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, and Jianfeng Gao. 2023. Llava-med: Training a large language-and-vision assistant for biomedicine in one day. arXiv:2306.00890." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.425, + 0.488, + 0.464 + ], + "angle": 0, + "content": "Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. 2024a. 
Llavanext: Improved reasoning,OCR, and world knowledge." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.476, + 0.488, + 0.53 + ], + "angle": 0, + "content": "Yuan Liu, Le Tian, Xiao Zhou, Xinyu Gao, Kavio Yu, Yang Yu, and Jie Zhou. 2024b. Points1. 5: Building a vision-language model towards real world applications. arXiv preprint arXiv:2412.08443." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.542, + 0.488, + 0.607 + ], + "angle": 0, + "content": "Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. 2024a. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. In ICLR." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.62, + 0.488, + 0.698 + ], + "angle": 0, + "content": "Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. 2023. Chameleon: Plug-and-play compositional reasoning with large language models. Advances in Neural Information Processing Systems, 36:43447-43478." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.711, + 0.488, + 0.764 + ], + "angle": 0, + "content": "Shiyin Lu, Yang Li, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang, and Han-Jia Ye. 2024b. Ovis: Structural embedding alignment for multimodal large language model. arXiv preprint arXiv:2405.20797." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.776, + 0.488, + 0.843 + ], + "angle": 0, + "content": "Brandon McKinzie, Zhe Gan, Jean-Philippe Fauconnier, Sam Dodge, Bowen Zhang, Philipp Dufter, Dhruti Shah, Xianzhi Du, Futang Peng, Floris Weers, et al. 2024. Mm1: Methods, analysis & insights from multimodal llm pre-training. arXiv:2403.09611." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.854, + 0.488, + 0.882 + ], + "angle": 0, + "content": "OpenAI. 2024a. Gpt-4o: Hello gpt-4o. Technical report, OpenAI." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.894, + 0.488, + 0.921 + ], + "angle": 0, + "content": "OpenAI. 2024b. Learning to reason with llms. Technical report, OpenAI." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.49, + 0.921 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.513, + 0.086, + 0.883, + 0.164 + ], + "angle": 0, + "content": "Yingzhe Peng, Gongrui Zhang, Miaosen Zhang, Zhiyuan You, Jie Liu, Qipeng Zhu, Kai Yang, Xingzhong Xu, Xin Geng, and Xu Yang. 2025. Lmmr1: Empowering 3b lmms with strong reasoning abilities through two-stage rule-based rl. arXiv preprint arXiv:2503.07536." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.178, + 0.883, + 0.242 + ], + "angle": 0, + "content": "Yiwei Qin, Xuefeng Li, Haoyang Zou, Yixiu Liu, Shijie Xia, Zhen Huang, Yixin Ye, Weizhe Yuan, Hector Liu, Yuanzhi Li, et al. 2024. O1 replication journey: A strategic progress report-part 1. arXiv preprint arXiv:2410.18982." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.255, + 0.883, + 0.283 + ], + "angle": 0, + "content": "QwenTeam. 2024. Qvq: To see the world with wisdom. Technical report, Alibaba." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.295, + 0.883, + 0.36 + ], + "angle": 0, + "content": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In ICML." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.373, + 0.883, + 0.451 + ], + "angle": 0, + "content": "Machel Reid, Nikolay Savinov, Denis Teptyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv:2403.05530." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.464, + 0.883, + 0.53 + ], + "angle": 0, + "content": "Wenhao Shi, Zhiqiang Hu, Yi Bin, Junhua Liu, Yang Yang, See-Kiong Ng, Lidong Bing, and Roy Ka-Wei Lee. 2024. Math-llava: Bootstrapping mathematical reasoning for multimodal large language models. arXiv preprint arXiv:2406.17294." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.542, + 0.883, + 0.595 + ], + "angle": 0, + "content": "Hai-Long Sun, Da-Wei Zhou, Yang Li, Shiyin Lu, Chao Yi, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang, De-Chuan Zhan, et al. 2025a. Parrot: Multilingual visual instruction tuning. In ICML." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.607, + 0.883, + 0.66 + ], + "angle": 0, + "content": "Hai-Long Sun, Da-Wei Zhou, De-Chuan Zhan, and Han-Jia Ye. 2025b. Pilot: A pre-trained model-based continual learning toolbox. SCIENCE CHINA Information Sciences, 68(4):147101." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.672, + 0.883, + 0.738 + ], + "angle": 0, + "content": "Hai-Long Sun, Da-Wei Zhou, Hanbin Zhao, Le Gan, De-Chuan Zhan, and Han-Jia Ye. 2025c. Mos: Model surgery for pre-trained model-based class-incremental learning. In AAAI, volume 39, pages 20699-20707." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.751, + 0.883, + 0.828 + ], + "angle": 0, + "content": "Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. 2023. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805." + }, + { + "type": "ref_text", + "bbox": [ + 0.512, + 0.842, + 0.883, + 0.92 + ], + "angle": 0, + "content": "Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, et al. 2024. Cambrian-1: A fully open, vision-centric exploration of multimodal llms. arXiv preprint arXiv:2406.16860." + }, + { + "type": "list", + "bbox": [ + 0.512, + 0.086, + 0.883, + 0.92 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.139 + ], + "angle": 0, + "content": "Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Mingjie Zhan, and Hongsheng Li. 2024a. Measuring multimodal mathematical reasoning with math-vision dataset. Preprint, arXiv:2402.14804." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.147, + 0.487, + 0.25 + ], + "angle": 0, + "content": "Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. 2024b. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.26, + 0.487, + 0.311 + ], + "angle": 0, + "content": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. 
arXiv:2201.11903." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.321, + 0.487, + 0.372 + ], + "angle": 0, + "content": "Guowei Xu, Peng Jin, Li Hao, Yibing Song, Lichao Sun, and Li Yuan. 2024. Llava-o1: Let vision language models reason step-by-step. arXiv preprint arXiv:2411.10440." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.381, + 0.487, + 0.433 + ], + "angle": 0, + "content": "Ankit Yadav, Lingqiao Liu, and Yuankai Qi. 2025. Exploring primitive visual measurement understanding and the role of output format in learning in vision-language models. arXiv preprint arXiv:2501.15144." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.442, + 0.487, + 0.505 + ], + "angle": 0, + "content": "Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. 2023. Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11975-11986." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.515, + 0.487, + 0.593 + ], + "angle": 0, + "content": "Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Peng Gao, et al. 2024a. Math-verse: Does your multi-modal llm truly see the diagrams in visual math problems? arXiv preprint arXiv:2403.14624." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.602, + 0.487, + 0.667 + ], + "angle": 0, + "content": "Renrui Zhang, Xinyu Wei, Dongzhi Jiang, Ziyu Guo, Shicheng Li, Yichi Zhang, Chengzhuo Tong, Jiaming Liu, Aojun Zhou, Bin Wei, et al. 2024b. Mavis: Mathematical visual instruction tuning with an automatic data engine. arXiv preprint arXiv:2407.08739." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.676, + 0.487, + 0.78 + ], + "angle": 0, + "content": "Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. 2024. Llamafactory: Unified efficient fine-tuning of \\(100+\\) language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Bangkok, Thailand. Association for Computational Linguistics." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.788, + 0.487, + 0.841 + ], + "angle": 0, + "content": "Wenwen Zhuang, Xin Huang, Xiantao Zhang, and Jin Zeng. 2024. Math-puma: Progressive upward multimodal alignment to enhance mathematical reasoning. arXiv preprint arXiv:2408.08640." + }, + { + "type": "ref_text", + "bbox": [ + 0.117, + 0.849, + 0.487, + 0.915 + ], + "angle": 0, + "content": "Chengke Zou, Xingang Guo, Rui Yang, Junyu Zhang, Bin Hu, and Huan Zhang. 2024. Dynamath: A dynamic visual benchmark for evaluating mathematical reasoning robustness of vision language models. arXiv preprint arXiv:2411.00836." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.086, + 0.487, + 0.915 + ], + "angle": 0, + "content": null + } + ], + [ + { + "type": "title", + "bbox": [ + 0.116, + 0.084, + 0.275, + 0.099 + ], + "angle": 0, + "content": "A Related Work" + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.126, + 0.49, + 0.511 + ], + "angle": 0, + "content": "Multimodal Large Language Models. Multimodal Large Language Models (MLLMs) (Li et al., 2023; Liu et al., 2024a; Sun et al., 2025a; Wang et al., 2024b; Lu et al., 2024b; McKinzie et al., 2024; Sun et al., 2025c,b; Fu et al., 2025b; Dong et al., 2024) integrate vision encoders (Radford et al., 2021; Zhai et al., 2023) with LLMs (AI, 2024; Bai et al., 2023), endowing them with robust capabilities across a wide range of domains. 
These include general visual understanding(OpenAI, 2024a; Li et al., 2024), mathematical reasoning (Shi et al., 2024; Gao et al., 2023), and answering college-level questions (Chen et al., 2023), demonstrating their versatility in real-world tasks. The rapid advancements in open-source models have also spurred the development of proprietary models, such as GPT-4o (OpenAI, 2024a), Gemini (Team et al., 2023; Reid et al., 2024), Qwen2-VL-MAX (Wang et al., 2024b), and Claude3 (anthropic, 2024). These models have demonstrated remarkable performance in both evaluation benchmarks and practical applications, solidifying their position at the forefront of AI research and deployment." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.52, + 0.49, + 0.921 + ], + "angle": 0, + "content": "Reasoning with MLLMs. Recent advancements in MLLMs have significantly enhanced performance in reasoning tasks across both text and multimodal scenarios (OpenAI, 2024b; DeepSeek-AI, 2024; QwenTeam, 2024; Peng et al., 2025). Current methods primarily rely on CoT (Wei et al., 2022) to train MLLMs for step-by-step reasoning. Data-driven approaches include MathLLaVA (Shi et al., 2024), which introduced the MathV360K dataset, and MAmmoTH-VL (Guo et al., 2024), which curates a large-scale multimodal CoT dataset in a scalable manner. Another line of research explores vision-text alignment. MAVIS (Zhang et al., 2024b) fine-tunes a math-specific vision encoder with curated caption data, while Math-PUMA (Zhuang et al., 2024) leverages the Kullback-Leibler (KL) divergence of next-token prediction distributions for modality alignment. In a different paradigm, MLLMs act as coordinators, utilizing external tools such as LLMs, web search engines, and computer programs for complex reasoning. Chameleon (Lu et al., 2023) orchestrates tool-call sequences, and Visual Sketchpad (Hu et al., 2024) enables models to generate visual sketches to aid reasoning." + }, + { + "type": "title", + "bbox": [ + 0.512, + 0.084, + 0.847, + 0.101 + ], + "angle": 0, + "content": "B More Details of Reasoning Dataset" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.117, + 0.882, + 0.198 + ], + "angle": 0, + "content": "In this section, we provide a detailed description of dynamic token truncation and reflection word pruning in the process of constructing the reasoning dataset. We also provide detailed information about the training data in Table 5." + }, + { + "type": "title", + "bbox": [ + 0.512, + 0.219, + 0.774, + 0.235 + ], + "angle": 0, + "content": "B.1 Dynamic Token Truncation" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.246, + 0.885, + 0.535 + ], + "angle": 0, + "content": "To further improve the dataset quality, we analyze the distribution of token lengths after the answer-centric rejection sampling. We find that many samples are close to the maximum token limit, and manual checks show that these long reasoning chains often have problems—such as logical errors, mistakes in multi-step calculations, and reliance on shortcuts that don't work in general cases (e.g., substituting specific values). Motivated by the correlation between extreme token lengths and compromised solution quality, we implement adaptive truncation thresholds to keep the answers within the 200-8000 token range. This dynamic filtering not only eliminates the invalid cases (overly verbose or terse responses) but also enhances the overall quality of the data. 
The final length distribution matches how human experts solve problems and keeps the important reasoning steps intact." + }, + { + "type": "title", + "bbox": [ + 0.512, + 0.557, + 0.757, + 0.573 + ], + "angle": 0, + "content": "B.2 Reflection Word Pruning" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.584, + 0.885, + 0.921 + ], + "angle": 0, + "content": "Our analysis reveals a critical flaw in distilled reasoning chains: excessive metacognitive loops caused by uncontrolled reflection markers (e.g., 'Alternatively,' 'Wait'), which led to performance degradation through infinite loops or ungrounded speculation. Term frequency analysis of reflection density shows a heavy-tailed distribution—\\(95\\%\\) of samples contained fewer than 10 reflection markers per chain, while \\(1\\%\\) exhibited over 50 markers, with this extreme group strongly correlating to hallucination rates. To address this, we introduce a reflection token quota system that automatically prunes samples exceeding 25 reflection markers while retaining core reasoning logic using semantic-aware span detection. As shown in Figure 4, this approach significantly reduced infinite-loop instances in validation tasks while improving answer accuracy. The refined reflection pattern mirrors expert human problem-solving strategies, wherein targeted self-correction enhances, rather than disrupts, the continuity of the reasoning process." + } + ], + [ + { + "type": "image", + "bbox": [ + 0.117, + 0.083, + 0.5, + 0.237 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.503, + 0.083, + 0.88, + 0.237 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.2, + 0.248, + 0.796, + 0.264 + ], + "angle": 0, + "content": "Figure 7: The token and reflection word distribution of the long-chain reasoning dataset." + }, + { + "type": "table_caption", + "bbox": [ + 0.14, + 0.286, + 0.461, + 0.301 + ], + "angle": 0, + "content": "Table 4: The detailed training hyperparameters." + }, + { + "type": "table", + "bbox": [ + 0.149, + 0.311, + 0.456, + 0.483 + ], + "angle": 0, + "content": "
ConfigSFT
DeepspeedZero3
Epoch5
Warmup Ratio0.1
Max Grad Norm1.0
OptimizerAdamW
Learning rate2e-5
Learning rate schedulerCosine
Text max length8192
Batch size per GPU1
Gradient Accumulation Steps4
GPU64×H20-96G
PrecisionBf16
" + }, + { + "type": "title", + "bbox": [ + 0.115, + 0.507, + 0.245, + 0.523 + ], + "angle": 0, + "content": "C Discussion" + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.535, + 0.49, + 0.744 + ], + "angle": 0, + "content": "In this work, we investigate the phenomenon of visual information forgetting in MLLMs during long-chain reasoning. Through comprehensive analysis and experiments, we show that as reasoning chains progressively lengthen, models exhibit a gradual deterioration in retaining visual inputs, ultimately undermining their multimodal reasoning capabilities and exacerbating hallucination issues. To address this challenge, we propose a take-along visual conditioning mechanism that enables models to dynamically revisit visual inputs during reasoning steps, thereby enhancing content fidelity throughout the inference process." + }, + { + "type": "text", + "bbox": [ + 0.113, + 0.745, + 0.49, + 0.922 + ], + "angle": 0, + "content": "However, as illustrated in Figure 8, we acknowledge several limitations. First, for highly complex reasoning tasks requiring sophisticated analytical capabilities, simply increasing visual revisits proves insufficient. In contrast, it is crucial to enhance the model's inherent reasoning capacity. Second, our method assumes the availability of delayed visual processing, making it potentially unsuitable for real-time applications requiring instantaneous visual feedback, such as robotic navigation or time-sensitive decision-making scenarios." + }, + { + "type": "table_caption", + "bbox": [ + 0.509, + 0.286, + 0.882, + 0.315 + ], + "angle": 0, + "content": "Table 5: Details on the TVC's training data, which is derived from publicly available datasets." + }, + { + "type": "table", + "bbox": [ + 0.52, + 0.324, + 0.87, + 0.422 + ], + "angle": 0, + "content": "
DatasetsSamples
MathV360K (Shi et al., 2024)221K
Geo170K (Gao et al., 2023)22K
LLaVA-OneVision (Li et al., 2024)97K
Cambrian-1 (Tong et al., 2024)1K
" + }, + { + "type": "text", + "bbox": [ + 0.508, + 0.446, + 0.885, + 0.655 + ], + "angle": 0, + "content": "Our work represents an initial exploration into mitigating visual forgetting in extended multimodal reasoning chains. We envision future research directions including: (1) Developing hybrid architectures that synergistically enhance both visual retention and intrinsic reasoning capabilities; (2) Investigating adaptive attention mechanisms for real-time multimodal applications; (3) Exploring curriculum learning strategies to progressively strengthen long-chain reasoning capacities. We hope this foundational study will inspire further advances in understanding and improving multimodal reasoning systems for complex real-world applications." + } + ], + [ + { + "type": "image", + "bbox": [ + 0.129, + 0.335, + 0.165, + 0.356 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.171, + 0.33, + 0.37, + 0.585 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.129, + 0.598, + 0.171, + 0.623 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.174, + 0.601, + 0.362, + 0.619 + ], + "angle": 0, + "content": "Which of the cubes is not identical to the unfolded net?" + }, + { + "type": "text", + "bbox": [ + 0.174, + 0.62, + 0.353, + 0.629 + ], + "angle": 0, + "content": "Choices: (A) A (B) B (C) C (D) D (E) E" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.332, + 0.845, + 0.358 + ], + "angle": 0, + "content": "Let's see here. I've got this problem where there's an unfolded net of a cube, and then there are several cubes shown below with letters A through E. The question is asking which one of these cubes is not identical to the unfolded net. Okay, so I need to compare each cube to the net and see if they match or not." + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.366, + 0.561, + 0.374 + ], + "angle": 0, + "content": "Let me label the squares for clarity:" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.379, + 0.487, + 0.389 + ], + "angle": 0, + "content": "2. Back face (B)" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.391, + 0.48, + 0.399 + ], + "angle": 0, + "content": "3. Top face (T)" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.4, + 0.496, + 0.408 + ], + "angle": 0, + "content": "4. Bottom face (D)" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.408, + 0.482, + 0.415 + ], + "angle": 0, + "content": "5. Left face (L)" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.416, + 0.488, + 0.424 + ], + "angle": 0, + "content": "6. Right face (R)" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.431, + 0.541, + 0.44 + ], + "angle": 0, + "content": "Let's look at each cube option:" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.44, + 0.791, + 0.456 + ], + "angle": 0, + "content": "A) This is a cube with letters on its faces. Wait, no, it's a cube with colors, similar to the others. Wait, actually, all cubes have combinations of teal and yellow faces." + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.456, + 0.66, + 0.464 + ], + "angle": 0, + "content": "I need to carefully observe the color placement on each cube." 
+ }, + { + "type": "text", + "bbox": [ + 0.421, + 0.465, + 0.459, + 0.472 + ], + "angle": 0, + "content": "Option A" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.474, + 0.487, + 0.481 + ], + "angle": 0, + "content": "- Front face: teal" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.483, + 0.493, + 0.49 + ], + "angle": 0, + "content": "-Top face: yellow" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.491, + 0.487, + 0.498 + ], + "angle": 0, + "content": "-Right face:teal" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.499, + 0.495, + 0.506 + ], + "angle": 0, + "content": "- Left face: yellow" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.507, + 0.487, + 0.513 + ], + "angle": 0, + "content": "-Back face:teal" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.515, + 0.507, + 0.522 + ], + "angle": 0, + "content": "- Bottom face: yellow" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.531, + 0.46, + 0.538 + ], + "angle": 0, + "content": "Option E:" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.539, + 0.496, + 0.546 + ], + "angle": 0, + "content": "- Front: teal" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.547, + 0.472, + 0.554 + ], + "angle": 0, + "content": "-Top: yellow" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.555, + 0.468, + 0.562 + ], + "angle": 0, + "content": "-Right:teal" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.563, + 0.472, + 0.57 + ], + "angle": 0, + "content": "- Left: yellow" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.572, + 0.468, + 0.578 + ], + "angle": 0, + "content": "-Back:teal" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.58, + 0.489, + 0.587 + ], + "angle": 0, + "content": "- Bottom: yellow" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.588, + 0.632, + 0.595 + ], + "angle": 0, + "content": "Comparing to the net: F(t), T(y), R(t), L(y), B(y), D(t)" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.597, + 0.692, + 0.604 + ], + "angle": 0, + "content": "So, according to the net, B should be yellow, but in Option E, it's teal." + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.605, + 0.666, + 0.612 + ], + "angle": 0, + "content": "Therefore, Option E differs from the net in the back face color." + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.613, + 0.624, + 0.62 + ], + "angle": 0, + "content": "Hence, Option E is not identical to the unfolded net." + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.621, + 0.533, + 0.628 + ], + "angle": 0, + "content": "Therefore, the answer is E." + }, + { + "type": "image_caption", + "bbox": [ + 0.37, + 0.671, + 0.628, + 0.686 + ], + "angle": 0, + "content": "Figure 8: Qualitative Results of TVC." 
+ } + ] +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13360/f41871e7-052a-41b7-a381-f27188074bdf_origin.pdf b/data/2025/2503_13xxx/2503.13360/f41871e7-052a-41b7-a381-f27188074bdf_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4368f80fb360a48c7e5bbf15a6797bc3053a9809 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13360/f41871e7-052a-41b7-a381-f27188074bdf_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3cba925af02389613036cfcbad07505bcc142814739a062826fa0391255b0aa8 +size 2762614 diff --git a/data/2025/2503_13xxx/2503.13360/full.md b/data/2025/2503_13xxx/2503.13360/full.md new file mode 100644 index 0000000000000000000000000000000000000000..85e06c7703d1b19159f6c44ee53924a59449bc3f --- /dev/null +++ b/data/2025/2503_13xxx/2503.13360/full.md @@ -0,0 +1,383 @@ +# Mitigating Visual Forgetting via Take-along Visual Conditioning for Multi-modal Long CoT Reasoning + +Hai-Long Sun $^{1,2}$ , Zhun Sun $^{4,5}$ , Houwen Peng $^{3}$ , Han-Jia Ye $^{1,2}$ † + +$^{1}$ School of Artificial Intelligence, Nanjing University + +$^{2}$ National Key Laboratory for Novel Software Technology, Nanjing University + +$^{3}$ Tencent $^{4}$ Center for Language AI Research, Tohoku University + +$^{5}$ RIKEN Center for Advanced Intelligence Project + +# Abstract + +Recent advancements in Large Language Models (LLMs) have demonstrated enhanced reasoning capabilities, evolving from Chain-of-Thought (CoT) prompting to advanced, product-oriented solutions like OpenAI o1. During our re-implementation of this model, we noticed that in multimodal tasks requiring visual input (e.g., geometry problems), Multimodal LLMs (MLLMs) struggle to maintain focus on the visual information, in other words, MLLMs suffer from a gradual decline in attention to visual information as reasoning progresses, causing text-over-relied outputs. To investigate this, we ablate image inputs during long-chain reasoning. Concretely, we truncate the reasoning process midway, then re-complete the reasoning process with the input image removed. We observe only a $\sim 2\%$ accuracy drop on MathVista's test-hard subset, revealing the model's textual outputs dominate the following reasoning process. Motivated by this, we propose Take-along Visual Conditioning (TVC), a strategy that shifts image input to critical reasoning stages and compresses redundant visual tokens via dynamic pruning. This methodology helps the model retain attention to the visual components throughout the reasoning. Our approach achieves state-of-the-art performance on average across five mathematical reasoning benchmarks (+3.4 points vs previous sota), demonstrating the effectiveness of TVC in enhancing multimodal reasoning systems. The project page is available at https://sun-hailong.github.io/projects/TVC. + +# 1 Introduction + +Large Language Models (LLMs) have achieved significant advancements in natural language processing (NLP), particularly in the area of reasoning. These models have evolved from simple + +prompt-based Chain-of-Thought (CoT) (Wei et al., 2022) techniques to sophisticated product-oriented solutions like OpenAI's o1 (OpenAI, 2024b), DeepSeek-R1 (DeepSeek-AI, 2024), and Qwen-QVQ (QwenTeam, 2024), demonstrating iterative reasoning capabilities for complex multi-step tasks, which enables them to handle tasks that require multi-step thinking, logic, and knowledge integration. 
Recently, several works also extended the CoT reasoning process to MLLMs settings through data-centric innovations. For instance, Math-LLaVA (Shi et al., 2024) pioneers domain-specific training with the MathV360K dataset, while MAmmoTH-VL (Guo et al., 2024) scales up multimodal CoT data generation. + +While such progress is notable in text-based domains, extending these advancements to multimodal scenarios presents unique challenges that transcend traditional language model boundaries. Reasoning in MLLMs requires fused understanding across different modalities, for example, in geometric reasoning tasks the model should interpret and reason about images alongside text. Therefore, the model's ability to integrate and maintain focus on both types of information is critical. Unlike text-only LLMs that reinforce problem context through linguistic recurrence, MLLMs struggle to sustain visual attention across reasoning steps. That is, as the length of the context increases, the model is more inclined to conduct the next step of reasoning based on the previously outputted text rather than the information of the image itself, which eventually leads to the continuation of the wrong text reasoning process and degraded model performance. We term this phenomenon as visual forgetting. + +In this work, we conduct a diagnostic analysis of the visual forgetting effect within a long-chain reasoning system. The system processes multimodal Q&A tasks through a series of interconnected reasoning steps. We demonstrate significantly reduced attentional allocation to visual inputs during multi + +stage reasoning processes. More formally, our analysis: 1) truncates the reasoning process midway and removes the image embeddings; 2) regenerates subsequent reasoning trajectories; 3) evaluates the reasoning outcomes of the pre/post-ablation inference trajectories. Intuitively, the performance gap between normal reasoning and diagnostic reasoning reveals the model's dependency on generated text over original visual evidence. Our results (See Section 2.1) on the MathVista-Hard datasets show that, removing the image midway through the reasoning only causes an insignificant performance degradation (i.e. $\sim 2\%$ ), indicating that the model completes the reasoning process based primarily on its output text. More importantly, we also observe that the model's dependency on the visual evidence diminishes over time since the early removal of the image inputs could hurt accuracy by $\sim 20\%$ . This suggests model's reasoning employs both visual and textual information in the early stage, then over-relying on text history which limits full visual reasoning potential. + +Motivated by this, we propose a novel strategy to mitigate the visual forgetting effect and maintain visual attention throughout the reasoning process. Our methodology compresses and shifts the image input to the later stages of the reasoning process, ensuring the model integrates sufficient visual evidence into its reasoning. This approach results in improved performance and achieves state-of-the-art results on average across five mathematical reasoning benchmarks (i.e., $+3.4\%$ vs previous sota). Our findings highlight the effectiveness of this strategy in enhancing the performance of multimodal reasoning systems, providing a robust solution to the problem of visual forgetting in long-chain reasoning tasks. 
+ +# 2 Take-along Visual Conditioning: Sustaining Visual Evidence for Multi-modal Long CoT Reasoning + +In this section, we first discuss our motivation and observations of the visual forgetting phenomenon in MLLM reasoning systems (Section 2.1). Then, we propose the Take-alone Visual Conditioning (TVC) approach to mitigate visual forgetting and enhance the model's long-chain reasoning capabilities (Section 2.2). + +# 2.1 Capturing the Visual Forgetting + +Text-based reasoning systems often repeat key terms or phrases (e.g., restating "triangle ABC" or + +![](images/886b0c6086c57d45b8ba57f43d20ef52e6a0a72d4e445a4baad3ac0cf0ac4cbd.jpg) +Figure 1: The visual forgetting phenomenon by removing the image at different reasoning stages. It shows that by the midpoint of the reasoning process, the model becomes less dependent on the image, causing text-over-relied outputs. + +"equation (1)" to keep the problem's context clear and focused. By restating these critical details, they create strong connections between each step of the reasoning process and the original problem's requirements. This repetition ensures the logic stays on track, and consistent to follow. + +$$ +\mathcal {C} _ {\mathrm {M L L M}} = f \left(V, T _ {1}, \dots , T _ {n}\right) \tag {1} +$$ + +On the other hand, MLLMs struggle with this approach due to their design. As formalized in Equation (1), visual inputs $V$ are confined to the initial processing stages. Unlike text, these visual evidence aren't revisited or reinforced later in the reasoning process. Because there's no built-in way to keep visual information "active" or relevant throughout the task, the system's ability to connect visual details with text or logic weakens over time, leading to a progressive visual attention decay. The model is more likely to reason with previously outputted text and becomes particularly problematic in visual reasoning tasks that require continuous validation of spatial relationships. + +We conduct two analytic analyses using the QVQ-72B-Preview model (QwenTeam, 2024) to capture this visual forgetting phenomenon quantitatively and qualitatively. On one hand, we remove the visual inputs at eight different stages to observe their impact. On the other hand, we depict the attention matrix to directly track the attention decay of the visual evidence over time. + +Progressive Image Removing. To assess the extent to which the reasoning process depends on + +![](images/77eb70636c452e011386d11fd027206bb93759242176bc30a6b4ace266dfcda4.jpg) + +![](images/daaae34465bb2ee45fe7e1f6871bcb2fa3f7f2424896dbd8b372eed08af85a45.jpg) +Figure 2: Illustration of layer-level and token-level attention weights. (a) The layer-level attention weights of image tokens across different response token positions. (b) The token-level attention weights at the middle layer. It shows that the model's attention to the image gradually decreases during the reasoning process. + +previously generated text, we first perform a normal reasoning process, then reset the KV cache at various stages of the reasoning process. This effectively removed image tokens and forced subsequent steps to rely solely on text-based information. Specifically, the reasoning process was divided into $K = 8$ intervals based on output token counts regardless of the length of the normal reasoning process, with visual input progressively masked by resetting the KV cache at different cutoff positions $k \in \{0,1,\dots ,K - 1\}$ . 
In other words, the first $k / 8$ part of the normal reasoning process is now employed as a prompt, and the model now re-complete the reasoning process without image inputs. Furthermore, we discovered that for some questions (30.9% of the MathVista dataset), the model could answer correctly using only the text-based prompt. Consequently, we excluded these questions and designated the remaining dataset as the MathVista-Hard dataset. + +As illustrated in Figure 1, we observe that the performance is 40.9 at $k = 4$ and 43.1 for the normal reasoning. This minimal $2.2\%$ gap suggests that the model overly relies on text outputs during the later reasoning stages rather than on the image. + +This indicates that once the model completes the half stages of reasoning, it becomes overly reliant on its own generated CoT. Over-reliance on this self-generated reasoning worsens the model's tendency to forget visual evidence over time, which can result in hallucinations during lengthy, multi-step reasoning tasks. This result also reveals a near-exponential forgetting effect in performance as $k$ increases, which we formalize as: + +$$ +\mathcal {R} (k) = \mathcal {R} _ {\text {f u l l}} - \Delta_ {\text {v i s u a l}} (k), \quad \Delta_ {\text {v i s u a l}} (k) \propto e ^ {- k}, \tag {2} +$$ + +where $\mathcal{R}_{\mathrm{full}}$ represents the full multimodal reasoning performance, and $\Delta_{\mathrm{visual}}(k)$ quantifies the performance degradation caused by visual masking at position $k$ . + +Visual Attention Decay. To more intuitively observe the changes in visual feature attention across different stages of reasoning, we investigate the layer-level attention weights at various response token positions and the token-level attention weights at layer 16. Given that MLLM's attention weights reflect the focus on tokens and influence the decision-making process, we first analyze the attention weights at each layer of the MLLM. Specifically, for each layer, we calculate the proportion of attention weights on all image tokens. + +As shown in Figure 2(a), we observe that at the $1/8$ position of the reasoning process, the model effectively focuses on the visual inputs. However, as reasoning progresses, despite fluctuations in attention to visual tokens across layers, the model's overall attention to visual evidence gradually decreases, leading to visual forgetting. Next, following the methodology of FastV (Chen et al., 2024a), we analyze the attention maps for several cases and find that the model predominantly focuses on previously generated text tokens rather than the input image. After approximately $20\%$ tokens, the existence of image inputs on attention maps diminishes significantly, as illustrated in Figure 2(b). Both of the observations indicate a phenomenon of visual memory degradation, revealing the model's limitations in maintaining consistent attention to visual inputs throughout the reasoning process. + +# 2.2 Take-along Visual Conditioning + +In this section, we introduce our solution to tackle this problem in detail. We propose Take-along Visual Conditioning (TVC), a dynamic image retention mechanism that re-introduces visual inputs at strategic intervals throughout the reasoning pro + +cess. TVC mitigates visual attention decay by periodically reaffirming visual information, akin to human problem-solving behaviors where individuals frequently refer back to visual inputs. 
Our approach enhances the model's ability to incorporate visual information continuously, improving its long-chain reasoning capacity by ensuring that visual evidence is revisited during critical decision-making moments. + +The TVC method consists of two key stages: training and testing. In the training stage, we introduce Dynamic Visual Reaffirmation (DVR), which guides the model through iterative reinforcement of visual evidence during long reasoning chains. In the testing phase, we present Periodic Visual Calibration (PVC), where visual reactivation is periodically triggered at self-reflection intervals. To prevent the model from forgetting previous text-based reasoning steps due to an excessive number of image tokens, we adopt image compression through adaptive pooling to reduce the image token size while preserving spatial semantics. This dual-modality engagement mechanism ensures consistent interaction between textual reasoning and visual evidence. We present an illustration of the TVC system in Figure 3. + +Dynamic Visual Reaffirmation. Our dynamic visual reaffirmation training strategy combines two key components: 1) Data Curation: We curate long-chain reasoning data using the pipeline described in Section 3, sourced from high-quality academic datasets (e.g., MathV360K, Geo170K, and LLaVA-OneVision). This process yields a high-quality dataset optimized for long-chain reasoning training. 2) Visual Content Injection: While the curated data ensures correctness, the QVQ model inherently lacks the ability to iteratively reference visual inputs during reasoning. Therefore, we manually re-inject the visual content (visual embeddings and bridging prompt) to triggers visual re-activation at predefined self-reflection intervals. Specifically, given the initial multimodal input $\mathcal{M}_0 = (V,T_0)$ , DVR performs visual reactivation at self-reflection intervals $\{r_1,\dots,r_m\}$ : + +$$ +\mathcal {M} _ {i} = \left(V, \left[ T _ {\text {p r e v}}; \underbrace {\text {P r o m p t}} _ {\text {R e - a c t i v a t i o n}}; T _ {\text {n e w}} \right]\right) \text {a t s t e p} r _ {i} \tag {3} +$$ + +where $T_{\mathrm{prev}}$ represents the previous reasoning steps and $T_{\mathrm{new}}$ denotes the new reasoning steps that are based on prior reasoning and reintroduce focus on + +![](images/a5c52460f9451914f821519dcfa5dc5a3c2c8dd7793690a978a99f9e91aaa6c5.jpg) + +![](images/8ded0c6d380ffc66b6e04e3afc988892b918477e055637ca5f1fcff5d86fce63.jpg) +Figure 3: Overview of TVC System Design. We enable the model to have take-along visual conditioning capabilities through two stages: training and inference. + +visual information. The bridging prompt is employed to hint the existence of the image, e.g., Let me see the image again. To improve efficiency, our initial implementation adopts midpoint reactivation $(m = 1, r_1 = 0.5L$ for $L$ -step chains). + +During the self-reflection phase of the reasoning steps, we randomly select $m$ positions to inject visual content. Specifically, we inject reactivation prompts and regenerate visual captions using the model's intrinsic capabilities. This forces joint attention to both textual reasoning and visual evidence. By leveraging the model's intrinsic image captioning capability, we continuously reinforce visual information throughout the reasoning process. This ensures that the model incorporates image evidence during reflection, rather than relying solely on textual reasoning. + +Periodic Visual Calibration. 
Calibrating visual attention plays a crucial role in enhancing long-chain reasoning capabilities. After training our model, we further design the periodic visual calibration process. Specifically, we coordinate operations during reactivation as follows: 1) Token Compression: We first compress visual tokens using average pooling to prevent text-based reasoning from forgetting visual information. 2) Visual Cache Reset: We then prepend an instruction (bridging prompt) to reintroduce the image and re-inject image tokens by resetting the KV cache of the generation process. + +We also provide an example of how PVC is implemented in the case study section (Section 4.5). PVC both improves reasoning efficiency and prevents the model from forgetting previous reasoning steps due to an overload of visual tokens. + +![](images/dcbf95c1a314d7c6dcc0dffd0f08a9b502ceb96620470fa2a70540736ef0cf49.jpg) +Figure 4: Data Generation Pipeline of TVC. We use iterative distillation to collect long-chain reasoning data, followed by a comprehensive response filtering process to ensure high-quality reasoning. + +# 3 Data-Centric Implementation of Multimodal Reasoning System + +In this section, we briefly describe our implementation of the multimodal reasoning system through a scalable curated data generation pipeline. + +# 3.1 Long-Chain Reasoning Data Collection + +Prior research (Qin et al., 2024; Jiang et al., 2024) has identified two dominant paradigms for constructing long-chain reasoning systems: (1) explicit search-based methods, which utilize structures such as Monte Carlo Tree Search (MCTS) combined with specially trained reward models to guide the search process toward optimal solutions, and (2) instruction distillation approaches, which fine-tune models on curated long chain-of-thought (CoT) datasets. To efficiently develop an MLLM with long-chain reasoning capabilities, we adopt the distillation paradigm. In this section, we describe the distillation process and present the data generation pipeline aimed at enhancing the reasoning capability of MLLM. + +Our distillation pipeline begins with aggregating publicly available reasoning datasets (e.g., MathV360K (Shi et al., 2024) and Geo170K (Gao et al., 2023)). Through an iterative process of distillation and response filtering, we progressively enhance the model's long-chain reasoning capabilities. Specifically, the teacher model, QVQ-72B-Preview, generates long COT reasoning data, while the student model, Qwen2-VL, undergoes supervised fine-tuning on the filtered CoT data to enhance its reasoning performance. + +# 3.2 Iterative Distillation with Reject Sampling + +After obtaining the long-chain reasoning responses, we employ an assessment pipeline to ensure data quality. Specifically, we introduce a dual-temperature sampling mechanism to optimize data quality through variance exploitation. + +Deterministic Initial Sampling. For the first-stage data generation, we employ temperature $\tau = 0$ to obtain the QVQ's most confident reasoning paths: + +$$ +\mathcal {D} _ {\text {i n i t}} = \left\{\left(\mathbf {x}, \mathbf {y} _ {\mathrm {C o T}}\right) \sim \mathcal {P} _ {\mathrm {Q V Q}} (\cdot | \mathbf {x}; \tau = 0) \right\}, \tag {4} +$$ + +where $\mathbf{y}_{\mathrm{CoT}}$ represents the response generated by QVQ. This ensures that the model generates the most probable reasoning path for each input. Subsequently, we use these high-confidence responses as a foundation for further refinement in later stages of data generation. 
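As a minimal illustration of this first-stage pass, the sketch below assumes a hypothetical `teacher_generate` wrapper around whatever inference stack serves the QVQ-72B-Preview teacher, and an assumed sample schema with `image`, `question`, and `answer` fields; it only covers the greedy (temperature-0) collection of $\mathcal{D}_{\text{init}}$ from Eq. (4), while answer filtering and error re-sampling are handled by the later stages.

```python
def deterministic_initial_sampling(samples, teacher_generate):
    """First-stage distillation (Eq. 4): one greedy (temperature-0) pass per
    problem, so each question receives the teacher's single most confident
    long-CoT response. Correctness filtering happens in a later stage."""
    d_init = []
    for x in samples:  # x: {"image": ..., "question": ..., "answer": ...} (assumed schema)
        y_cot = teacher_generate(
            image=x["image"],
            prompt=x["question"],
            temperature=0.0,  # tau = 0 -> the teacher's most probable reasoning path
        )
        d_init.append({**x, "cot": y_cot})
    return d_init
```

Under this sketch, every problem contributes exactly one candidate reasoning chain before any filtering is applied.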
+ +Answer-Centric Reject Sampling. To ensure the quality of the data, we employ an LLM-as-a-Judge approach for answer-centric reject sampling. Specifically, we use a strong LLM (e.g., Qwen2.5-72B-Instruct) as the verifier and prompt the model to output a 'yes' or 'no' in JSON format, indicating whether the long-chain reasoning answer is correct, i.e., $\mathcal{Y}_{\mathrm{valid}} = \{\mathbf{y}_{\mathrm{CoT}} \mid \mathrm{Match}(\mathbf{y}_{\mathrm{CoT}},\mathbf{y}_{\mathrm{gt}})\}$. After this process, we obtain $\sim 200\mathrm{K}$ high-quality samples. + +Best-of-N Error Correction. While initial sampling with temperature $\tau = 0$ yields a set of data, there are substantial residual errors ($\mathcal{D}_{\mathrm{error}}$) where QVQ's responses ($\mathbf{y}_{\mathrm{CoT}}$) do not match the ground-truth answers (55.8% of $\mathcal{D}_{\mathrm{init}}$). To fully leverage the available open-source data, we implement a contrastive regeneration strategy using $\tau = 1$: + +$$ +\mathcal{D}_{\text{corrected}} = \bigcup_{\mathbf{x} \in \mathcal{D}_{\text{error}}} \left\{\arg\min_{\mathbf{y}^{i} \in \mathcal{Y}_{\text{valid}}} \operatorname{Length}\left(\mathbf{y}^{i}\right) \right\}_{i=1}^{8}, \tag{5} +$$ + +where $\mathcal{Y}_{\mathrm{valid}}$ denotes the subset of 8 regenerated responses (at $\tau = 1$) that satisfy answer correctness. For cases with multiple valid responses, we prioritize minimal reasoning token length to enhance conciseness and efficiency: + +$$ +\operatorname{Length}(\mathbf{y}) = \sum_{t=1}^{T} \mathbb{I}\left(w_{t} \notin \{[\mathrm{PAD}], [\mathrm{SEP}]\}\right) \tag{6} +$$ + +This two-stage sampling achieves partial error recovery while reducing the average token count to minimize meaningless self-reflection, effectively balancing correctness and efficiency. To further enhance the quality of the dataset, we also perform dynamic token truncation and reflection word pruning. This approach helps reduce the ineffective reflection parts in the dataset, thereby mitigating the issue of the model being unable to complete its answers. More details are provided in Appendix B. + +# 4 Experiments + +We conduct comprehensive experiments across multiple vision-language benchmarks to demonstrate the effectiveness of our method. Section 4.1 provides implementation details for TVC, and Section 4.2 describes the evaluation setup. In Section 4.3, we present key results on visual reasoning tasks, supplemented with findings on general image understanding. Section 4.4 details ablation studies to evaluate the influence of critical design decisions. Finally, Section 4.5 presents visualizations and case studies to illustrate the method's operational characteristics and insights derived from it. + +# 4.1 Training Recipe + +We integrate the TVC system with MLLMs of varying scales to demonstrate the effectiveness and generalizability of our approach. Initial implementation with Qwen2-VL-7B-Instruct confirmed the method's validity. To further evaluate scalability and establish robust baselines against state-of-the-art MLLMs, we expanded the approach to a 72B model. Prior to training TVC, we follow the long-chain reasoning pipeline described earlier. We use the LLaMA-Factory (Zheng et al., 2024) framework, with a learning rate of 2e-5, a batch size of 256, and 5 training epochs. During optimization, only the LLM parameters and cross-modal connector are trained, while the visual encoder remains frozen.
The training process requires 10 hours on a $64\times \mathrm{H}20$ GPU setup for the 7B model and approximately 4 days for the 72B model. Additional details are presented in Table 4. + +# 4.2 Evaluation Setup + +We conduct a comprehensive experimental analysis across various benchmarks that require advanced visual reasoning skills. To ensure a well-rounded evaluation, we select several widely recognized and representative benchmarks, including MathVista (Lu et al., 2024a), MathVerse (Zhang et al., 2024a), MathVision (Wang et al., 2024a), Dynamath (Zou et al., 2024), and OlympiadBench (He et al., 2024). MathVista consists of 6,141 examples that require fine-grained, deep visual understanding and compositional reasoning. MathVerse contains 2,612 multi-subject math problems from a variety of sources. MathVision includes 3,040 high-quality mathematical problems sourced from established mathematics competitions. OlympiadBench features 8,476 bilingual multimodal problems tailored to Olympic-level mathematics and physics competitions. These benchmarks evaluate problem-solving abilities in mathematics, and following standard practice, we use GPT-4o-mini as the evaluator. Following the VLMEvalKit guidelines, we exclude the text-only split from MathVerse and the theorem-proof sections from OlympiadBench. For a fair comparison, we conduct evaluations using the testmini sets of MathVerse and MathVista. Fast evaluation is made possible through the use of VLMEvalKit (Duan et al., 2024) and vLLM (Kwon et al., 2023). + +![](images/b67cf65f88ddd239a7d839548d4229efad69558040285509805ad4ef26fee458.jpg) +Figure 5: Ablations on the amount of training data. TVC benefits from data scaling, continually improving the reasoning capabilities. + +# 4.3 Main Results on Visual Reasoning + +The results in Table 1 demonstrate the effectiveness and generalizability of TVC across multiple visual reasoning benchmarks. Notably, our model achieves a $16.7\%$ relative improvement over QVQ-72B-Preview on MathVision and a $17.6\%$ relative gain on MathVerse, highlighting enhanced reasoning capabilities. Unlike conventional datasets where textual descriptions may include implicit visual cues, MathVerse is an all-around visual math benchmark specifically designed for equitable, in-depth evaluation of MLLMs. The significant gains on MathVerse therefore underscore the value of TVC, given the benchmark's unique design principles. Furthermore, the TVC-7B model, despite being smaller than its counterparts, achieves competitive performance, even outperforming leading MLLMs in multiple cases. This demonstrates the robustness of our + +Table 1: Results on Visual Reasoning Tasks. We conduct evaluation experiments across 6 benchmarks, covering both general reasoning and task-specific reasoning assessments. TVC exhibits notable effectiveness and generalizability when applied to Qwen2-VL, surpassing other state-of-the-art MLLMs by a large margin.
| Model | Size | MathVista | MathVision | MathVerse | Dynamath | OlympiadBench | Average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MiniCPM-V-2.6 (Yadav et al., 2025) | 8B | 60.8 | 18.4 | 17.6 | 9.8 | - | - |
| VITA-1.5 (Fu et al., 2025a) | 8B | 66.2 | 19.5 | 23.4 | 9.6 | - | - |
| LLaVA-COT (Xu et al., 2024) | 11B | 52.5 | 19.9 | 22.6 | 7.8 | - | - |
| Qwen2-VL (Wang et al., 2024b) | 7B | 60.9 | 16.3 | 24.6 | 11.0 | 3.2 | 23.2 |
| InternVL2.5 (Chen et al., 2024b) | 8B | 64.5 | 17.0 | 22.8 | 9.4 | 0.1 | 22.8 |
| POINTS1.5 (Liu et al., 2024b) | 8B | 66.4 | 22.0 | 26.6 | 14.2 | - | - |
| Ovis1.6-Gemma2 (Lu et al., 2024b) | 27B | 70.2 | 20.6 | 37.8 | 17.0 | - | - |
| InternVL2.5-COT (Chen et al., 2024b) | 78B | 71.4 | 32.5 | 40.1 | 28.5 | - | - |
| LLaVA-OneVision (Li et al., 2024) | 72B | 67.1 | 25.3 | 27.2 | 15.6 | - | - |
| Qwen2-VL (Wang et al., 2024b) | 72B | 69.7 | 26.6 | 36.2 | 20.0 | 10.3 | 32.6 |
| QVQ-72B-preview (QwenTeam, 2024) | 72B | 71.4 | 35.9 | 41.5 | 30.7 | 20.4 | 40.0 |
| TVC | 7B | 68.1 | 22.7 | 38.9 | 15.1 | 9.8 | 30.9 |
| TVC | 72B | 72.2 | 41.9 | 48.8 | 30.0 | 24.3 | 43.4 |
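As a quick arithmetic check on the gains quoted in Section 4.3 (assuming, as the numbers suggest, that the reported percentages are relative improvements rather than absolute point differences), the figures can be reproduced directly from Table 1:

```python
# Relative gains of TVC-72B recomputed from Table 1 (values copied from the table).
def rel_gain(new, old):
    return 100 * (new - old) / old

print(f"MathVision vs QVQ-72B-preview: {rel_gain(41.9, 35.9):.1f}%")  # ~16.7%
print(f"MathVerse  vs QVQ-72B-preview: {rel_gain(48.8, 41.5):.1f}%")  # ~17.6%
print(f"MathVista  vs Qwen2-VL-72B:    {rel_gain(72.2, 69.7):.1f}%")  # ~3.6%
```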
+ +Table 2: Ablations on the TVC System. TVC enhances reasoning capabilities, showing significant improvements on both general and task-specific reasoning benchmarks. + +
| Method | MathVista | MathVision | MathVerse | Avg |
| --- | --- | --- | --- | --- |
| Baseline | 60.9 | 16.3 | 24.6 | 33.9 |
| Vanilla - Direct SFT | 63.5 | 19.8 | 31.6 | 38.3 |
| TVC w/o PVC | 66.7 | 21.8 | 35.6 | 41.4 |
| TVC w/o DVR | 66.2 | 22.3 | 34.7 | 41.0 |
| TVC Full | 68.1 | 22.7 | 38.9 | 43.2 |
+ +methodology even with more compact model configurations. Beyond task-specific visual reasoning, we extend our evaluation to general reasoning benchmarks (e.g., MathVista), where TVC consistently delivers strong performance, achieving a $3.6\%$ relative improvement over the original Qwen2-VL-72B model. These results emphasize TVC's ability to excel in tasks requiring both perception and reasoning. Collectively, the findings indicate that TVC not only advances performance in specialized visual reasoning tasks but also offers substantial benefits in broader application scenarios. + +# 4.4 Further Analysis + +In this section, we conduct comprehensive experiments to evaluate the design choices of TVC, emphasizing the key contributions of our approach. We also present a case study to further illustrate the qualitative effectiveness of TVC. + +Effectiveness of the TVC System. To evaluate the effectiveness of the TVC system, we conduct comprehensive ablation experiments on various components using Qwen2-VL-7B as the Baseline. We begin by performing supervised fine-tuning on the + +Table 3: Ablations on Token Compression.
| Method | MathVista | MathVision | MathVerse | Avg |
| --- | --- | --- | --- | --- |
| TVC Baseline | 68.3 | 21.5 | 39.6 | 43.1 |
| + 2x2 Avg Pooling | 67.8 | 22.9 | 38.3 | 43.0 |
| + 4x4 Avg Pooling | 68.1 | 22.7 | 38.9 | 43.2 |
+ +Qwen2-VL-7B model with the data from Section 3, referred to as Vanilla - Direct SFT. Furthermore, we apply the DVR training approach outlined in Section 2.2, which increases the focus on the visual information in the training data, enabling the model to implicitly learn visual conditioning capabilities. Additionally, during the testing phase, we experiment with resetting the visual KV cache midway through the reasoning process and, after compressing the visual tokens, appending them to the end of the reasoning steps. This strategy allows the model to further observe the image content during its thought process, mitigating visual forgetting and suppressing hallucinations. + +As demonstrated in Table 2, the results highlight that the TVC system is crucial for enhancing visual reasoning capabilities. When using only the SFT training data without the DVR strategy in TVC (Vanilla - Direct SFT), improvements in visual reasoning tasks are limited, as the model lacks the ability to reflect on visual information. Furthermore, models trained with the DVR strategy alone still produce sub-optimal results, underscoring the importance of a comprehensive approach that integrates PVC and DVR. The contributions of PVC and DVR are relatively balanced in enhancing the reasoning capabilities. To further validate the effectiveness of our visual token compression, we conduct experiments with different pooling methods. As shown in Table 3, the TVC Baseline represents the method without image compression. We observe that the choice of pooling method has little impact on the model's capabilities. Utilizing $4 \times 4$ average pooling for compression not only enhances the model's inference efficiency but also achieves a slight performance improvement. + +![](images/0c4be083c4972b0ec6877c4318b42fb9003a36717e0940aeeddc10e6556d40f2.jpg) +Figure 6: Case Study of TVC. TVC effectively re-examines the image during the reflection process to correct mistakes, guiding the model to the correct answer. + +Data Scaling Law. To validate the scalability of our approach, we also conduct a data scaling experiment for training. As shown in Figure 5, we compare models trained with various data sizes: 50K, 100K, 150K, and 200K samples, and present the performance relative to the base model (Qwen2-VL-7B). Our findings show that as the dataset size increases, the model's performance continues to improve. However, it is evident that with a small dataset, the model's reasoning ability cannot reach an optimal level solely through SFT. Therefore, data scaling proves highly effective during SFT training, and the results indicate that TVC benefits from increased data. + +To prevent underfitting from becoming a performance bottleneck, we increase the number of training epochs with smaller datasets, which further improves model performance. This indicates that insufficient data or epochs lead to undertrained models, making it difficult to learn long-chain reasoning. Increasing both the data and training epochs can effectively alleviate this issue. + +# 4.5 Case Study + +We present a comprehensive case study in Figure 6 to illustrate the improvements of our TVC approach. Specifically, we provide an example that highlights advancements in the reasoning process. In this example, the model is tasked with carefully observing the objects within an image and, after eliminating certain shapes and attributes, providing the count of the remaining objects.
During base CoT reasoning, the model fails to check the attributes of each object and only focuses on their shapes, leading to an incorrect final answer. Subsequently, the model learns to re-focus on the image and carefully describe the attributes of each object in detail. This process allows the model to identify the issue in its previous reasoning and provide the correct answer. + +# 5 Conclusion + +In this paper, we introduce Take-along Visual Conditioning (TVC), a novel strategy designed to enhance the reasoning capabilities of MLLMs by addressing the challenge of visual attention degradation during long-chain reasoning. By dynamically shifting the image input to critical stages of reasoning and compressing redundant visual tokens, we ensure that the model maintains focus on the visual information throughout the process. Our extensive evaluation on several mathematical reasoning benchmarks demonstrates the effectiveness of TVC in improving multimodal reasoning, providing a robust approach to equip MLLMs with better visual grounding for complex tasks. + +# Acknowledgments + +This work is partially supported by National Key R&D Program of China (2024YFE0202800), NSFC (62376118), Key Program of Jiangsu Science Foundation (BK20243012), CCF-Tencent Rhino-Bird Open Research Fund (RAGR20240101), AMED (JP25wm0625405), and Collaborative Innovation Center of Novel Software Technology and Industrialization. + +# Limitations + +Despite advancements, our method may still exhibit several limitations. First, for highly complex reasoning tasks requiring sophisticated analytical capabilities, simply increasing visual revisits proves insufficient. In contrast, it is crucial to enhance the model's inherent reasoning capacity. Second, our method assumes the availability of delayed visual processing, making it potentially unsuitable for real-time applications requiring instantaneous visual feedback, such as robotic navigation or time-sensitive decision-making scenarios. + +# References + +Meta AI. 2024. Build the future of AI with Meta Llama 3. Technical report, Meta AI. +Anthropic. 2024. Introducing the next generation of Claude. Technical report, Anthropic. +Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. 2023. Qwen technical report. arXiv:2309.16609. +Liang Chen, Haozhe Zhao, Tianyu Liu, Shuai Bai, Junyang Lin, Chang Zhou, and Baobao Chang. 2024a. An image is worth 1/2 tokens after layer 2: Plug-and-play inference acceleration for large vision-language models. Preprint, arXiv:2403.06764. + +Zhe Chen, Weiyun Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Erfei Cui, Jinguo Zhu, Shenglong Ye, Hao Tian, Zhaoyang Liu, et al. 2024b. Expanding performance boundaries of open-source multimodal models with model, data, and test-time scaling. arXiv preprint arXiv:2412.05271. +Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Zhong Muyan, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. 2023. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. arXiv:2312.14238. +DeepSeek-AI. 2024. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. Technical report, DeepSeek. +Yuhao Dong, Zuyan Liu, Hai-Long Sun, Jingkang Yang, Winston Hu, Yongming Rao, and Ziwei Liu. 2024. Insight-v: Exploring long-chain visual reasoning with multimodal large language models. arXiv preprint arXiv:2411.14432.
+Haodong Duan, Junming Yang, Yuxuan Qiao, Xinyu Fang, Lin Chen, Yuan Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, et al. 2024. Vlmealkit: An open-source toolkit for evaluating large multi-modality models. arXiv preprint arXiv:2407.11691. +Chaoyou Fu, Haojia Lin, Xiong Wang, Yi-Fan Zhang, Yunhang Shen, Xiaoyu Liu, Yangze Li, Zuwei Long, Heting Gao, Ke Li, et al. 2025a. Vita-1.5: Towards gpt-4o level real-time vision and speech interaction. arXiv preprint arXiv:2501.01957. +Jiale Fu, Yuchu Jiang, Junkai Chen, Jiaming Fan, Xin Geng, and Xu Yang. 2025b. Speculative ensemble: Fast large language model ensemble via speculation. arXiv preprint arXiv:2502.01662. +Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jianhua Han, Hang Xu, Zhenguo Li, et al. 2023. G-llava: Solving geometric problem with multi-modal large language model. arXiv preprint arXiv:2312.11370. +Jarvis Guo, Tuney Zheng, Yuelin Bai, Bo Li, Yubo Wang, King Zhu, Yizhi Li, Graham Neubig, Wenhu Chen, and Xiang Yue. 2024. Mammoth-vl: Eliciting multimodal reasoning with instruction tuning at scale. arXiv preprint arXiv:2412.05237. +Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. 2024. Olympiad-bench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems. arXiv preprint arXiv:2402.14008. +Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, and Ranjay Krishna. 2024. Visual sketchpad: Sketching as a visual chain of thought for multimodal language models. arXiv preprint arXiv:2406.09403. + +Jinhao Jiang, Zhipeng Chen, Yingqian Min, Jie Chen, Xiaoxue Cheng, Jiapeng Wang, Yiru Tang, Haoxiang Sun, Jia Deng, Wayne Xin Zhao, et al. 2024. Technical report: Enhancing llm reasoning with reward-guided tree search. arXiv preprint arXiv:2411.11694. +Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles. +Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, et al. 2024. Llavaonevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326. +Chunyuan Li, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, and Jianfeng Gao. 2023. Llava-med: Training a large language-and-vision assistant for biomedicine in one day. arXiv:2306.00890. +Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. 2024a. Llavanext: Improved reasoning,OCR, and world knowledge. +Yuan Liu, Le Tian, Xiao Zhou, Xinyu Gao, Kavio Yu, Yang Yu, and Jie Zhou. 2024b. Points1. 5: Building a vision-language model towards real world applications. arXiv preprint arXiv:2412.08443. +Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. 2024a. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. In ICLR. +Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. 2023. Chameleon: Plug-and-play compositional reasoning with large language models. Advances in Neural Information Processing Systems, 36:43447-43478. 
+Shiyin Lu, Yang Li, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang, and Han-Jia Ye. 2024b. Ovis: Structural embedding alignment for multimodal large language model. arXiv preprint arXiv:2405.20797. +Brandon McKinzie, Zhe Gan, Jean-Philippe Fauconnier, Sam Dodge, Bowen Zhang, Philipp Dufter, Dhruti Shah, Xianzhi Du, Futang Peng, Floris Weers, et al. 2024. Mm1: Methods, analysis & insights from multimodal llm pre-training. arXiv:2403.09611. +OpenAI. 2024a. Gpt-4o: Hello gpt-4o. Technical report, OpenAI. +OpenAI. 2024b. Learning to reason with llms. Technical report, OpenAI. + +Yingzhe Peng, Gongrui Zhang, Miaosen Zhang, Zhiyuan You, Jie Liu, Qipeng Zhu, Kai Yang, Xingzhong Xu, Xin Geng, and Xu Yang. 2025. Lmmr1: Empowering 3b lmms with strong reasoning abilities through two-stage rule-based rl. arXiv preprint arXiv:2503.07536. +Yiwei Qin, Xuefeng Li, Haoyang Zou, Yixiu Liu, Shijie Xia, Zhen Huang, Yixin Ye, Weizhe Yuan, Hector Liu, Yuanzhi Li, et al. 2024. O1 replication journey: A strategic progress report-part 1. arXiv preprint arXiv:2410.18982. +QwenTeam. 2024. Qvq: To see the world with wisdom. Technical report, Alibaba. +Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In ICML. +Machel Reid, Nikolay Savinov, Denis Teptyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv:2403.05530. +Wenhao Shi, Zhiqiang Hu, Yi Bin, Junhua Liu, Yang Yang, See-Kiong Ng, Lidong Bing, and Roy Ka-Wei Lee. 2024. Math-llava: Bootstrapping mathematical reasoning for multimodal large language models. arXiv preprint arXiv:2406.17294. +Hai-Long Sun, Da-Wei Zhou, Yang Li, Shiyin Lu, Chao Yi, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang, De-Chuan Zhan, et al. 2025a. Parrot: Multilingual visual instruction tuning. In ICML. +Hai-Long Sun, Da-Wei Zhou, De-Chuan Zhan, and Han-Jia Ye. 2025b. Pilot: A pre-trained model-based continual learning toolbox. SCIENCE CHINA Information Sciences, 68(4):147101. +Hai-Long Sun, Da-Wei Zhou, Hanbin Zhao, Le Gan, De-Chuan Zhan, and Han-Jia Ye. 2025c. Mos: Model surgery for pre-trained model-based class-incremental learning. In AAAI, volume 39, pages 20699-20707. +Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. 2023. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805. +Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, et al. 2024. Cambrian-1: A fully open, vision-centric exploration of multimodal llms. arXiv preprint arXiv:2406.16860. + +Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Mingjie Zhan, and Hongsheng Li. 2024a. Measuring multimodal mathematical reasoning with math-vision dataset. Preprint, arXiv:2402.14804. +Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. 2024b. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191. 
+Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. arXiv:2201.11903. +Guowei Xu, Peng Jin, Li Hao, Yibing Song, Lichao Sun, and Li Yuan. 2024. Llava-o1: Let vision language models reason step-by-step. arXiv preprint arXiv:2411.10440. +Ankit Yadav, Lingqiao Liu, and Yuankai Qi. 2025. Exploring primitive visual measurement understanding and the role of output format in learning in vision-language models. arXiv preprint arXiv:2501.15144. +Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. 2023. Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11975-11986. +Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Peng Gao, et al. 2024a. Math-verse: Does your multi-modal llm truly see the diagrams in visual math problems? arXiv preprint arXiv:2403.14624. +Renrui Zhang, Xinyu Wei, Dongzhi Jiang, Ziyu Guo, Shicheng Li, Yichi Zhang, Chengzhuo Tong, Jiaming Liu, Aojun Zhou, Bin Wei, et al. 2024b. Mavis: Mathematical visual instruction tuning with an automatic data engine. arXiv preprint arXiv:2407.08739. +Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. 2024. Llamafactory: Unified efficient fine-tuning of $100+$ language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Bangkok, Thailand. Association for Computational Linguistics. +Wenwen Zhuang, Xin Huang, Xiantao Zhang, and Jin Zeng. 2024. Math-puma: Progressive upward multimodal alignment to enhance mathematical reasoning. arXiv preprint arXiv:2408.08640. +Chengke Zou, Xingang Guo, Rui Yang, Junyu Zhang, Bin Hu, and Huan Zhang. 2024. Dynamath: A dynamic visual benchmark for evaluating mathematical reasoning robustness of vision language models. arXiv preprint arXiv:2411.00836. + +# A Related Work + +Multimodal Large Language Models. Multimodal Large Language Models (MLLMs) (Li et al., 2023; Liu et al., 2024a; Sun et al., 2025a; Wang et al., 2024b; Lu et al., 2024b; McKinzie et al., 2024; Sun et al., 2025c,b; Fu et al., 2025b; Dong et al., 2024) integrate vision encoders (Radford et al., 2021; Zhai et al., 2023) with LLMs (AI, 2024; Bai et al., 2023), endowing them with robust capabilities across a wide range of domains. These include general visual understanding(OpenAI, 2024a; Li et al., 2024), mathematical reasoning (Shi et al., 2024; Gao et al., 2023), and answering college-level questions (Chen et al., 2023), demonstrating their versatility in real-world tasks. The rapid advancements in open-source models have also spurred the development of proprietary models, such as GPT-4o (OpenAI, 2024a), Gemini (Team et al., 2023; Reid et al., 2024), Qwen2-VL-MAX (Wang et al., 2024b), and Claude3 (anthropic, 2024). These models have demonstrated remarkable performance in both evaluation benchmarks and practical applications, solidifying their position at the forefront of AI research and deployment. + +Reasoning with MLLMs. Recent advancements in MLLMs have significantly enhanced performance in reasoning tasks across both text and multimodal scenarios (OpenAI, 2024b; DeepSeek-AI, 2024; QwenTeam, 2024; Peng et al., 2025). Current methods primarily rely on CoT (Wei et al., 2022) to train MLLMs for step-by-step reasoning. 
Data-driven approaches include MathLLaVA (Shi et al., 2024), which introduced the MathV360K dataset, and MAmmoTH-VL (Guo et al., 2024), which curates a large-scale multimodal CoT dataset in a scalable manner. Another line of research explores vision-text alignment. MAVIS (Zhang et al., 2024b) fine-tunes a math-specific vision encoder with curated caption data, while Math-PUMA (Zhuang et al., 2024) leverages the Kullback-Leibler (KL) divergence of next-token prediction distributions for modality alignment. In a different paradigm, MLLMs act as coordinators, utilizing external tools such as LLMs, web search engines, and computer programs for complex reasoning. Chameleon (Lu et al., 2023) orchestrates tool-call sequences, and Visual Sketchpad (Hu et al., 2024) enables models to generate visual sketches to aid reasoning. + +# B More Details of Reasoning Dataset + +In this section, we provide a detailed description of dynamic token truncation and reflection word pruning in the process of constructing the reasoning dataset. We also provide detailed information about the training data in Table 5. + +# B.1 Dynamic Token Truncation + +To further improve the dataset quality, we analyze the distribution of token lengths after the answer-centric rejection sampling. We find that many samples are close to the maximum token limit, and manual checks show that these long reasoning chains often have problems—such as logical errors, mistakes in multi-step calculations, and reliance on shortcuts that don't work in general cases (e.g., substituting specific values). Motivated by the correlation between extreme token lengths and compromised solution quality, we implement adaptive truncation thresholds to keep the answers within the 200-8000 token range. This dynamic filtering not only eliminates the invalid cases (overly verbose or terse responses) but also enhances the overall quality of the data. The final length distribution matches how human experts solve problems and keeps the important reasoning steps intact. + +# B.2 Reflection Word Pruning + +Our analysis reveals a critical flaw in distilled reasoning chains: excessive metacognitive loops caused by uncontrolled reflection markers (e.g., 'Alternatively,' 'Wait'), which led to performance degradation through infinite loops or ungrounded speculation. Term frequency analysis of reflection density shows a heavy-tailed distribution— $95\%$ of samples contained fewer than 10 reflection markers per chain, while $1\%$ exhibited over 50 markers, with this extreme group strongly correlating to hallucination rates. To address this, we introduce a reflection token quota system that automatically prunes samples exceeding 25 reflection markers while retaining core reasoning logic using semantic-aware span detection. As shown in Figure 4, this approach significantly reduced infinite-loop instances in validation tasks while improving answer accuracy. The refined reflection pattern mirrors expert human problem-solving strategies, wherein targeted self-correction enhances, rather than disrupts, the continuity of the reasoning process. + +![](images/e6e818e208bce7e7184afc19e3effb07a641dfed7a0d213ac1fd5c32cde5e241.jpg) +Figure 7: The token and reflection word distribution of the long-chain reasoning dataset. + +![](images/19734dcb34cce543e8463529115a85acd6bad9216134c3e1ed1179348600b169.jpg) + +Table 4: The detailed training hyperparameters. + +
| Config | SFT |
| --- | --- |
| Deepspeed | Zero3 |
| Epoch | 5 |
| Warmup Ratio | 0.1 |
| Max Grad Norm | 1.0 |
| Optimizer | AdamW |
| Learning rate | 2e-5 |
| Learning rate scheduler | Cosine |
| Text max length | 8192 |
| Batch size per GPU | 1 |
| Gradient Accumulation Steps | 4 |
| GPU | 64×H20-96G |
| Precision | Bf16 |
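The two quality filters described in Appendix B.1 and B.2 can be summarized in a short sketch. The 200-8000 token window and the 25-marker quota come from the text above; the tokenizer-independent length argument, the specific reflection-marker list, and treating over-reflective chains as dropped (rather than trimmed with semantic-aware span detection) are simplifying assumptions for illustration.

```python
import re

REFLECTION_MARKERS = ("wait", "alternatively", "hmm", "let me double-check")
MIN_TOKENS, MAX_TOKENS, MAX_REFLECTIONS = 200, 8000, 25

def count_reflection_markers(text: str) -> int:
    pattern = r"\b(" + "|".join(re.escape(m) for m in REFLECTION_MARKERS) + r")\b"
    return len(re.findall(pattern, text.lower()))

def keep_sample(response: str, num_tokens: int) -> bool:
    """Return True if a distilled CoT response passes both Appendix B filters."""
    if not (MIN_TOKENS <= num_tokens <= MAX_TOKENS):
        return False  # B.1 dynamic token truncation: drop overly terse or verbose chains
    if count_reflection_markers(response) > MAX_REFLECTIONS:
        return False  # B.2 reflection word pruning: flag loop-prone, over-reflective chains
    return True

sample = "Wait, let me look at the image again. Alternatively, the angle could be 60 degrees."
print(keep_sample(sample, num_tokens=1200))  # True
```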
+ +# C Discussion + +In this work, we investigate the phenomenon of visual information forgetting in MLLMs during long-chain reasoning. Through comprehensive analysis and experiments, we show that as reasoning chains progressively lengthen, models exhibit a gradual deterioration in retaining visual inputs, ultimately undermining their multimodal reasoning capabilities and exacerbating hallucination issues. To address this challenge, we propose a take-along visual conditioning mechanism that enables models to dynamically revisit visual inputs during reasoning steps, thereby enhancing content fidelity throughout the inference process. + +However, as illustrated in Figure 8, we acknowledge several limitations. First, for highly complex reasoning tasks requiring sophisticated analytical capabilities, simply increasing visual revisits proves insufficient. In contrast, it is crucial to enhance the model's inherent reasoning capacity. Second, our method assumes the availability of delayed visual processing, making it potentially unsuitable for real-time applications requiring instantaneous visual feedback, such as robotic navigation or time-sensitive decision-making scenarios. + +Table 5: Details on the TVC's training data, which is derived from publicly available datasets. + +
| Datasets | Samples |
| --- | --- |
| MathV360K (Shi et al., 2024) | 221K |
| Geo170K (Gao et al., 2023) | 22K |
| LLaVA-OneVision (Li et al., 2024) | 97K |
| Cambrian-1 (Tong et al., 2024) | 1K |
+ +Our work represents an initial exploration into mitigating visual forgetting in extended multimodal reasoning chains. We envision future research directions including: (1) Developing hybrid architectures that synergistically enhance both visual retention and intrinsic reasoning capabilities; (2) Investigating adaptive attention mechanisms for real-time multimodal applications; (3) Exploring curriculum learning strategies to progressively strengthen long-chain reasoning capacities. We hope this foundational study will inspire further advances in understanding and improving multimodal reasoning systems for complex real-world applications. + +![](images/6a7493ff659617dcedca570aa164a80dbd4533998a3aa4bf6246b4c28549486d.jpg) + +![](images/679522b38765370c555efed06eb171b6a97b036167f248e17a8a2a3a6956f88b.jpg) +Figure 8: Qualitative Results of TVC. + +![](images/e4da8a755a3bfc4d8b87c6a8b3c35c5ae1c52093de2983e4e008244ee89b6da6.jpg) + +Which of the cubes is not identical to the unfolded net? + +Choices: (A) A (B) B (C) C (D) D (E) E + +Let's see here. I've got this problem where there's an unfolded net of a cube, and then there are several cubes shown below with letters A through E. The question is asking which one of these cubes is not identical to the unfolded net. Okay, so I need to compare each cube to the net and see if they match or not. + +Let me label the squares for clarity: + +2. Back face (B) + +3. Top face (T) + +4. Bottom face (D) + +5. Left face (L) + +6. Right face (R) + +Let's look at each cube option: + +A) This is a cube with letters on its faces. Wait, no, it's a cube with colors, similar to the others. Wait, actually, all cubes have combinations of teal and yellow faces. + +I need to carefully observe the color placement on each cube. + +Option A + +- Front face: teal + +-Top face: yellow + +-Right face:teal + +- Left face: yellow + +-Back face:teal + +- Bottom face: yellow + +Option E: + +- Front: teal + +-Top: yellow + +-Right:teal + +- Left: yellow + +-Back:teal + +- Bottom: yellow + +Comparing to the net: F(t), T(y), R(t), L(y), B(y), D(t) + +So, according to the net, B should be yellow, but in Option E, it's teal. + +Therefore, Option E differs from the net in the back face color. + +Hence, Option E is not identical to the unfolded net. + +Therefore, the answer is E. 
\ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13360/images.zip b/data/2025/2503_13xxx/2503.13360/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c04942b6f9707de50294cf1c71b5591d85f1026e --- /dev/null +++ b/data/2025/2503_13xxx/2503.13360/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:63e0e9e97292e1d1dd24b0c5678eb16c5663af837eb3eab8d5085e071fb1eb6b +size 535793 diff --git a/data/2025/2503_13xxx/2503.13360/layout.json b/data/2025/2503_13xxx/2503.13360/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3da6107da271b2e7afc406c02eb035a01d16a5fb --- /dev/null +++ b/data/2025/2503_13xxx/2503.13360/layout.json @@ -0,0 +1,8519 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 99, + 75, + 495, + 110 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 99, + 75, + 495, + 110 + ], + "spans": [ + { + "bbox": [ + 99, + 75, + 495, + 110 + ], + "type": "text", + "content": "Mitigating Visual Forgetting via Take-along Visual Conditioning for Multi-modal Long CoT Reasoning" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 132, + 122, + 462, + 137 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 122, + 462, + 137 + ], + "spans": [ + { + "bbox": [ + 132, + 122, + 462, + 137 + ], + "type": "text", + "content": "Hai-Long Sun" + }, + { + "bbox": [ + 132, + 122, + 462, + 137 + ], + "type": "inline_equation", + "content": "^{1,2}" + }, + { + "bbox": [ + 132, + 122, + 462, + 137 + ], + "type": "text", + "content": ", Zhun Sun" + }, + { + "bbox": [ + 132, + 122, + 462, + 137 + ], + "type": "inline_equation", + "content": "^{4,5}" + }, + { + "bbox": [ + 132, + 122, + 462, + 137 + ], + "type": "text", + "content": ", Houwen Peng" + }, + { + "bbox": [ + 132, + 122, + 462, + 137 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 132, + 122, + 462, + 137 + ], + "type": "text", + "content": ", Han-Jia Ye" + }, + { + "bbox": [ + 132, + 122, + 462, + 137 + ], + "type": "inline_equation", + "content": "^{1,2}" + }, + { + "bbox": [ + 132, + 122, + 462, + 137 + ], + "type": "text", + "content": "†" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 169, + 137, + 424, + 152 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 169, + 137, + 424, + 152 + ], + "spans": [ + { + "bbox": [ + 169, + 137, + 424, + 152 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 169, + 137, + 424, + 152 + ], + "type": "text", + "content": "School of Artificial Intelligence, Nanjing University" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 108, + 152, + 484, + 165 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 152, + 484, + 165 + ], + "spans": [ + { + "bbox": [ + 108, + 152, + 484, + 165 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 108, + 152, + 484, + 165 + ], + "type": "text", + "content": "National Key Laboratory for Novel Software Technology, Nanjing University" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 142, + 165, + 451, + 179 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 142, + 165, + 451, + 179 + ], + "spans": [ + { + "bbox": [ + 142, + 165, + 451, + 179 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 142, + 165, + 451, + 179 + ], + "type": "text", + "content": "Tencent " + }, + { + "bbox": [ + 142, + 165, + 451, + 179 + ], + "type": "inline_equation", + "content": "^{4}" 
+ }, + { + "bbox": [ + 142, + 165, + 451, + 179 + ], + "type": "text", + "content": "Center for Language AI Research, Tohoku University" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 175, + 179, + 418, + 194 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 175, + 179, + 418, + 194 + ], + "spans": [ + { + "bbox": [ + 175, + 179, + 418, + 194 + ], + "type": "inline_equation", + "content": "^{5}" + }, + { + "bbox": [ + 175, + 179, + 418, + 194 + ], + "type": "text", + "content": "RIKEN Center for Advanced Intelligence Project" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 155, + 219, + 202, + 232 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 219, + 202, + 232 + ], + "spans": [ + { + "bbox": [ + 155, + 219, + 202, + 232 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 85, + 243, + 274, + 650 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 243, + 274, + 650 + ], + "spans": [ + { + "bbox": [ + 85, + 243, + 274, + 650 + ], + "type": "text", + "content": "Recent advancements in Large Language Models (LLMs) have demonstrated enhanced reasoning capabilities, evolving from Chain-of-Thought (CoT) prompting to advanced, product-oriented solutions like OpenAI o1. During our re-implementation of this model, we noticed that in multimodal tasks requiring visual input (e.g., geometry problems), Multimodal LLMs (MLLMs) struggle to maintain focus on the visual information, in other words, MLLMs suffer from a gradual decline in attention to visual information as reasoning progresses, causing text-over-relied outputs. To investigate this, we ablate image inputs during long-chain reasoning. Concretely, we truncate the reasoning process midway, then re-complete the reasoning process with the input image removed. We observe only a " + }, + { + "bbox": [ + 85, + 243, + 274, + 650 + ], + "type": "inline_equation", + "content": "\\sim 2\\%" + }, + { + "bbox": [ + 85, + 243, + 274, + 650 + ], + "type": "text", + "content": " accuracy drop on MathVista's test-hard subset, revealing the model's textual outputs dominate the following reasoning process. Motivated by this, we propose Take-along Visual Conditioning (TVC), a strategy that shifts image input to critical reasoning stages and compresses redundant visual tokens via dynamic pruning. This methodology helps the model retain attention to the visual components throughout the reasoning. Our approach achieves state-of-the-art performance on average across five mathematical reasoning benchmarks (+3.4 points vs previous sota), demonstrating the effectiveness of TVC in enhancing multimodal reasoning systems. The project page is available at https://sun-hailong.github.io/projects/TVC." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 68, + 660, + 154, + 673 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 660, + 154, + 673 + ], + "spans": [ + { + "bbox": [ + 68, + 660, + 154, + 673 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 682, + 290, + 736 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 682, + 290, + 736 + ], + "spans": [ + { + "bbox": [ + 67, + 682, + 290, + 736 + ], + "type": "text", + "content": "Large Language Models (LLMs) have achieved significant advancements in natural language processing (NLP), particularly in the area of reasoning. 
These models have evolved from simple" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 219, + 526, + 422 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 219, + 526, + 422 + ], + "spans": [ + { + "bbox": [ + 302, + 219, + 526, + 422 + ], + "type": "text", + "content": "prompt-based Chain-of-Thought (CoT) (Wei et al., 2022) techniques to sophisticated product-oriented solutions like OpenAI's o1 (OpenAI, 2024b), DeepSeek-R1 (DeepSeek-AI, 2024), and Qwen-QVQ (QwenTeam, 2024), demonstrating iterative reasoning capabilities for complex multi-step tasks, which enables them to handle tasks that require multi-step thinking, logic, and knowledge integration. Recently, several works also extended the CoT reasoning process to MLLMs settings through data-centric innovations. For instance, Math-LLaVA (Shi et al., 2024) pioneers domain-specific training with the MathV360K dataset, while MAmmoTH-VL (Guo et al., 2024) scales up multimodal CoT data generation." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 423, + 526, + 693 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 423, + 526, + 693 + ], + "spans": [ + { + "bbox": [ + 302, + 423, + 526, + 693 + ], + "type": "text", + "content": "While such progress is notable in text-based domains, extending these advancements to multimodal scenarios presents unique challenges that transcend traditional language model boundaries. Reasoning in MLLMs requires fused understanding across different modalities, for example, in geometric reasoning tasks the model should interpret and reason about images alongside text. Therefore, the model's ability to integrate and maintain focus on both types of information is critical. Unlike text-only LLMs that reinforce problem context through linguistic recurrence, MLLMs struggle to sustain visual attention across reasoning steps. That is, as the length of the context increases, the model is more inclined to conduct the next step of reasoning based on the previously outputted text rather than the information of the image itself, which eventually leads to the continuation of the wrong text reasoning process and degraded model performance. We term this phenomenon as visual forgetting." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 694, + 526, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 694, + 526, + 775 + ], + "spans": [ + { + "bbox": [ + 302, + 694, + 526, + 775 + ], + "type": "text", + "content": "In this work, we conduct a diagnostic analysis of the visual forgetting effect within a long-chain reasoning system. The system processes multimodal Q&A tasks through a series of interconnected reasoning steps. 
We demonstrate significantly reduced attentional allocation to visual inputs during multi" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 13, + 249, + 36, + 589 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 13, + 249, + 36, + 589 + ], + "spans": [ + { + "bbox": [ + 13, + 249, + 36, + 589 + ], + "type": "text", + "content": "arXiv:2503.13360v2 [cs.CV] 3 Jun 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 741, + 289, + 762 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 741, + 289, + 762 + ], + "spans": [ + { + "bbox": [ + 67, + 741, + 289, + 762 + ], + "type": "text", + "content": "*Work done during their internship/employment at Tencent Hunyuan" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 82, + 762, + 256, + 774 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 762, + 256, + 774 + ], + "spans": [ + { + "bbox": [ + 82, + 762, + 256, + 774 + ], + "type": "text", + "content": "†Corresponding author: yehj@lamda.nju.edu.cn" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 66, + 71, + 293, + 368 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 71, + 293, + 368 + ], + "spans": [ + { + "bbox": [ + 66, + 71, + 293, + 368 + ], + "type": "text", + "content": "stage reasoning processes. More formally, our analysis: 1) truncates the reasoning process midway and removes the image embeddings; 2) regenerates subsequent reasoning trajectories; 3) evaluates the reasoning outcomes of the pre/post-ablation inference trajectories. Intuitively, the performance gap between normal reasoning and diagnostic reasoning reveals the model's dependency on generated text over original visual evidence. Our results (See Section 2.1) on the MathVista-Hard datasets show that, removing the image midway through the reasoning only causes an insignificant performance degradation (i.e. " + }, + { + "bbox": [ + 66, + 71, + 293, + 368 + ], + "type": "inline_equation", + "content": "\\sim 2\\%" + }, + { + "bbox": [ + 66, + 71, + 293, + 368 + ], + "type": "text", + "content": "), indicating that the model completes the reasoning process based primarily on its output text. More importantly, we also observe that the model's dependency on the visual evidence diminishes over time since the early removal of the image inputs could hurt accuracy by " + }, + { + "bbox": [ + 66, + 71, + 293, + 368 + ], + "type": "inline_equation", + "content": "\\sim 20\\%" + }, + { + "bbox": [ + 66, + 71, + 293, + 368 + ], + "type": "text", + "content": ". This suggests model's reasoning employs both visual and textual information in the early stage, then over-relying on text history which limits full visual reasoning potential." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 66, + 370, + 291, + 574 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 370, + 291, + 574 + ], + "spans": [ + { + "bbox": [ + 66, + 370, + 291, + 574 + ], + "type": "text", + "content": "Motivated by this, we propose a novel strategy to mitigate the visual forgetting effect and maintain visual attention throughout the reasoning process. Our methodology compresses and shifts the image input to the later stages of the reasoning process, ensuring the model integrates sufficient visual evidence into its reasoning. 
This approach results in improved performance and achieves state-of-the-art results on average across five mathematical reasoning benchmarks (i.e., " + }, + { + "bbox": [ + 66, + 370, + 291, + 574 + ], + "type": "inline_equation", + "content": "+3.4\\%" + }, + { + "bbox": [ + 66, + 370, + 291, + 574 + ], + "type": "text", + "content": " vs previous sota). Our findings highlight the effectiveness of this strategy in enhancing the performance of multimodal reasoning systems, providing a robust solution to the problem of visual forgetting in long-chain reasoning tasks." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 580, + 267, + 623 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 580, + 267, + 623 + ], + "spans": [ + { + "bbox": [ + 67, + 580, + 267, + 623 + ], + "type": "text", + "content": "2 Take-along Visual Conditioning: Sustaining Visual Evidence for Multi-modal Long CoT Reasoning" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 66, + 628, + 291, + 722 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 628, + 291, + 722 + ], + "spans": [ + { + "bbox": [ + 66, + 628, + 291, + 722 + ], + "type": "text", + "content": "In this section, we first discuss our motivation and observations of the visual forgetting phenomenon in MLLM reasoning systems (Section 2.1). Then, we propose the Take-alone Visual Conditioning (TVC) approach to mitigate visual forgetting and enhance the model's long-chain reasoning capabilities (Section 2.2)." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 730, + 246, + 745 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 730, + 246, + 745 + ], + "spans": [ + { + "bbox": [ + 67, + 730, + 246, + 745 + ], + "type": "text", + "content": "2.1 Capturing the Visual Forgetting" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 748, + 291, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 748, + 291, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 748, + 291, + 775 + ], + "type": "text", + "content": "Text-based reasoning systems often repeat key terms or phrases (e.g., restating \"triangle ABC\" or" + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 318, + 71, + 511, + 242 + ], + "blocks": [ + { + "bbox": [ + 318, + 71, + 511, + 242 + ], + "lines": [ + { + "bbox": [ + 318, + 71, + 511, + 242 + ], + "spans": [ + { + "bbox": [ + 318, + 71, + 511, + 242 + ], + "type": "image", + "image_path": "886b0c6086c57d45b8ba57f43d20ef52e6a0a72d4e445a4baad3ac0cf0ac4cbd.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 253, + 526, + 313 + ], + "lines": [ + { + "bbox": [ + 302, + 253, + 526, + 313 + ], + "spans": [ + { + "bbox": [ + 302, + 253, + 526, + 313 + ], + "type": "text", + "content": "Figure 1: The visual forgetting phenomenon by removing the image at different reasoning stages. It shows that by the midpoint of the reasoning process, the model becomes less dependent on the image, causing text-over-relied outputs." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 301, + 334, + 526, + 416 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 334, + 526, + 416 + ], + "spans": [ + { + "bbox": [ + 301, + 334, + 526, + 416 + ], + "type": "text", + "content": "\"equation (1)\" to keep the problem's context clear and focused. 
By restating these critical details, they create strong connections between each step of the reasoning process and the original problem's requirements. This repetition ensures the logic stays on track, and consistent to follow." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 355, + 426, + 525, + 439 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 355, + 426, + 525, + 439 + ], + "spans": [ + { + "bbox": [ + 355, + 426, + 525, + 439 + ], + "type": "interline_equation", + "content": "\\mathcal {C} _ {\\mathrm {M L L M}} = f \\left(V, T _ {1}, \\dots , T _ {n}\\right) \\tag {1}", + "image_path": "d176a11f8ab40eecff4c9c20afd72235646431748296f97bb1c1864da39ac0d7.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 301, + 449, + 525, + 640 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 449, + 525, + 640 + ], + "spans": [ + { + "bbox": [ + 301, + 449, + 525, + 640 + ], + "type": "text", + "content": "On the other hand, MLLMs struggle with this approach due to their design. As formalized in Equation (1), visual inputs " + }, + { + "bbox": [ + 301, + 449, + 525, + 640 + ], + "type": "inline_equation", + "content": "V" + }, + { + "bbox": [ + 301, + 449, + 525, + 640 + ], + "type": "text", + "content": " are confined to the initial processing stages. Unlike text, these visual evidence aren't revisited or reinforced later in the reasoning process. Because there's no built-in way to keep visual information \"active\" or relevant throughout the task, the system's ability to connect visual details with text or logic weakens over time, leading to a progressive visual attention decay. The model is more likely to reason with previously outputted text and becomes particularly problematic in visual reasoning tasks that require continuous validation of spatial relationships." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 640, + 526, + 746 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 640, + 526, + 746 + ], + "spans": [ + { + "bbox": [ + 302, + 640, + 526, + 746 + ], + "type": "text", + "content": "We conduct two analytic analyses using the QVQ-72B-Preview model (QwenTeam, 2024) to capture this visual forgetting phenomenon quantitatively and qualitatively. On one hand, we remove the visual inputs at eight different stages to observe their impact. On the other hand, we depict the attention matrix to directly track the attention decay of the visual evidence over time." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 748, + 525, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 748, + 525, + 775 + ], + "spans": [ + { + "bbox": [ + 302, + 748, + 525, + 775 + ], + "type": "text", + "content": "Progressive Image Removing. 
To assess the extent to which the reasoning process depends on" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [], + "page_size": [ + 595, + 841 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 82, + 71, + 274, + 234 + ], + "blocks": [ + { + "bbox": [ + 82, + 71, + 274, + 234 + ], + "lines": [ + { + "bbox": [ + 82, + 71, + 274, + 234 + ], + "spans": [ + { + "bbox": [ + 82, + 71, + 274, + 234 + ], + "type": "image", + "image_path": "77eb70636c452e011386d11fd027206bb93759242176bc30a6b4ace266dfcda4.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 86, + 236, + 274, + 329 + ], + "blocks": [ + { + "bbox": [ + 86, + 236, + 274, + 329 + ], + "lines": [ + { + "bbox": [ + 86, + 236, + 274, + 329 + ], + "spans": [ + { + "bbox": [ + 86, + 236, + 274, + 329 + ], + "type": "image", + "image_path": "daaae34465bb2ee45fe7e1f6871bcb2fa3f7f2424896dbd8b372eed08af85a45.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 339, + 291, + 413 + ], + "lines": [ + { + "bbox": [ + 67, + 339, + 291, + 413 + ], + "spans": [ + { + "bbox": [ + 67, + 339, + 291, + 413 + ], + "type": "text", + "content": "Figure 2: Illustration of layer-level and token-level attention weights. (a) The layer-level attention weights of image tokens across different response token positions. (b) The token-level attention weights at the middle layer. It shows that the model's attention to the image gradually decreases during the reasoning process." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 435, + 291, + 705 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 435, + 291, + 705 + ], + "spans": [ + { + "bbox": [ + 67, + 435, + 291, + 705 + ], + "type": "text", + "content": "previously generated text, we first perform a normal reasoning process, then reset the KV cache at various stages of the reasoning process. This effectively removed image tokens and forced subsequent steps to rely solely on text-based information. Specifically, the reasoning process was divided into " + }, + { + "bbox": [ + 67, + 435, + 291, + 705 + ], + "type": "inline_equation", + "content": "K = 8" + }, + { + "bbox": [ + 67, + 435, + 291, + 705 + ], + "type": "text", + "content": " intervals based on output token counts regardless of the length of the normal reasoning process, with visual input progressively masked by resetting the KV cache at different cutoff positions " + }, + { + "bbox": [ + 67, + 435, + 291, + 705 + ], + "type": "inline_equation", + "content": "k \\in \\{0,1,\\dots ,K - 1\\}" + }, + { + "bbox": [ + 67, + 435, + 291, + 705 + ], + "type": "text", + "content": ". In other words, the first " + }, + { + "bbox": [ + 67, + 435, + 291, + 705 + ], + "type": "inline_equation", + "content": "k / 8" + }, + { + "bbox": [ + 67, + 435, + 291, + 705 + ], + "type": "text", + "content": " part of the normal reasoning process is now employed as a prompt, and the model now re-complete the reasoning process without image inputs. Furthermore, we discovered that for some questions (30.9% of the MathVista dataset), the model could answer correctly using only the text-based prompt. Consequently, we excluded these questions and designated the remaining dataset as the MathVista-Hard dataset." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 708, + 291, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 708, + 291, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 708, + 291, + 775 + ], + "type": "text", + "content": "As illustrated in Figure 1, we observe that the performance is 40.9 at " + }, + { + "bbox": [ + 67, + 708, + 291, + 775 + ], + "type": "inline_equation", + "content": "k = 4" + }, + { + "bbox": [ + 67, + 708, + 291, + 775 + ], + "type": "text", + "content": " and 43.1 for the normal reasoning. This minimal " + }, + { + "bbox": [ + 67, + 708, + 291, + 775 + ], + "type": "inline_equation", + "content": "2.2\\%" + }, + { + "bbox": [ + 67, + 708, + 291, + 775 + ], + "type": "text", + "content": " gap suggests that the model overly relies on text outputs during the later reasoning stages rather than on the image." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 302, + 71, + 526, + 193 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 526, + 193 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 526, + 193 + ], + "type": "text", + "content": "This indicates that once the model completes the half stages of reasoning, it becomes overly reliant on its own generated CoT. Over-reliance on this self-generated reasoning worsens the model's tendency to forget visual evidence over time, which can result in hallucinations during lengthy, multi-step reasoning tasks. This result also reveals a near-exponential forgetting effect in performance as " + }, + { + "bbox": [ + 302, + 71, + 526, + 193 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 302, + 71, + 526, + 193 + ], + "type": "text", + "content": " increases, which we formalize as:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 307, + 203, + 525, + 231 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 203, + 525, + 231 + ], + "spans": [ + { + "bbox": [ + 307, + 203, + 525, + 231 + ], + "type": "interline_equation", + "content": "\\mathcal {R} (k) = \\mathcal {R} _ {\\text {f u l l}} - \\Delta_ {\\text {v i s u a l}} (k), \\quad \\Delta_ {\\text {v i s u a l}} (k) \\propto e ^ {- k}, \\tag {2}", + "image_path": "40e67c0e65d1fb566246abfdc17cc2c08e8b4a23b495de18730407a48cb6aa36.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 232, + 525, + 285 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 232, + 525, + 285 + ], + "spans": [ + { + "bbox": [ + 302, + 232, + 525, + 285 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 302, + 232, + 525, + 285 + ], + "type": "inline_equation", + "content": "\\mathcal{R}_{\\mathrm{full}}" + }, + { + "bbox": [ + 302, + 232, + 525, + 285 + ], + "type": "text", + "content": " represents the full multimodal reasoning performance, and " + }, + { + "bbox": [ + 302, + 232, + 525, + 285 + ], + "type": "inline_equation", + "content": "\\Delta_{\\mathrm{visual}}(k)" + }, + { + "bbox": [ + 302, + 232, + 525, + 285 + ], + "type": "text", + "content": " quantifies the performance degradation caused by visual masking at position " + }, + { + "bbox": [ + 302, + 232, + 525, + 285 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 302, + 232, + 525, + 285 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 286, + 526, + 435 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 286, + 526, + 435 + ], + "spans": [ + { + "bbox": [ + 302, + 286, + 526, + 435 + ], + "type": "text", + "content": "Visual Attention Decay. To more intuitively observe the changes in visual feature attention across different stages of reasoning, we investigate the layer-level attention weights at various response token positions and the token-level attention weights at layer 16. Given that MLLM's attention weights reflect the focus on tokens and influence the decision-making process, we first analyze the attention weights at each layer of the MLLM. Specifically, for each layer, we calculate the proportion of attention weights on all image tokens." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 436, + 525, + 680 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 436, + 525, + 680 + ], + "spans": [ + { + "bbox": [ + 302, + 436, + 525, + 680 + ], + "type": "text", + "content": "As shown in Figure 2(a), we observe that at the " + }, + { + "bbox": [ + 302, + 436, + 525, + 680 + ], + "type": "inline_equation", + "content": "1/8" + }, + { + "bbox": [ + 302, + 436, + 525, + 680 + ], + "type": "text", + "content": " position of the reasoning process, the model effectively focuses on the visual inputs. However, as reasoning progresses, despite fluctuations in attention to visual tokens across layers, the model's overall attention to visual evidence gradually decreases, leading to visual forgetting. Next, following the methodology of FastV (Chen et al., 2024a), we analyze the attention maps for several cases and find that the model predominantly focuses on previously generated text tokens rather than the input image. After approximately " + }, + { + "bbox": [ + 302, + 436, + 525, + 680 + ], + "type": "inline_equation", + "content": "20\\%" + }, + { + "bbox": [ + 302, + 436, + 525, + 680 + ], + "type": "text", + "content": " tokens, the existence of image inputs on attention maps diminishes significantly, as illustrated in Figure 2(b). Both of the observations indicate a phenomenon of visual memory degradation, revealing the model's limitations in maintaining consistent attention to visual inputs throughout the reasoning process." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 689, + 480, + 703 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 689, + 480, + 703 + ], + "spans": [ + { + "bbox": [ + 302, + 689, + 480, + 703 + ], + "type": "text", + "content": "2.2 Take-along Visual Conditioning" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 708, + 526, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 708, + 526, + 775 + ], + "spans": [ + { + "bbox": [ + 302, + 708, + 526, + 775 + ], + "type": "text", + "content": "In this section, we introduce our solution to tackle this problem in detail. We propose Take-along Visual Conditioning (TVC), a dynamic image retention mechanism that re-introduces visual inputs at strategic intervals throughout the reasoning pro" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [], + "page_size": [ + 595, + 841 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 291, + 191 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 291, + 191 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 291, + 191 + ], + "type": "text", + "content": "cess. 
TVC mitigates visual attention decay by periodically reaffirming visual information, akin to human problem-solving behaviors where individuals frequently refer back to visual inputs. Our approach enhances the model's ability to incorporate visual information continuously, improving its long-chain reasoning capacity by ensuring that visual evidence is revisited during critical decision-making moments." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 194, + 291, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 194, + 291, + 423 + ], + "spans": [ + { + "bbox": [ + 69, + 194, + 291, + 423 + ], + "type": "text", + "content": "The TVC method consists of two key stages: training and testing. In the training stage, we introduce Dynamic Visual Reaffirmation (DVR), which guides the model through iterative reinforcement of visual evidence during long reasoning chains. In the testing phase, we present Periodic Visual Calibration (PVC), where visual reactivation is periodically triggered at self-reflection intervals. To prevent the model from forgetting previous text-based reasoning steps due to an excessive number of image tokens, we adopt image compression through adaptive pooling to reduce the image token size while preserving spatial semantics. This dual-modality engagement mechanism ensures consistent interaction between textual reasoning and visual evidence. We present an illustration of the TVC system in Figure 3." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 425, + 291, + 669 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 425, + 291, + 669 + ], + "spans": [ + { + "bbox": [ + 67, + 425, + 291, + 669 + ], + "type": "text", + "content": "Dynamic Visual Reaffirmation. Our dynamic visual reaffirmation training strategy combines two key components: 1) Data Curation: We curate long-chain reasoning data using the pipeline described in Section 3, sourced from high-quality academic datasets (e.g., MathV360K, Geo170K, and LLaVA-OneVision). This process yields a high-quality dataset optimized for long-chain reasoning training. 2) Visual Content Injection: While the curated data ensures correctness, the QVQ model inherently lacks the ability to iteratively reference visual inputs during reasoning. Therefore, we manually re-inject the visual content (visual embeddings and bridging prompt) to triggers visual re-activation at predefined self-reflection intervals. 
Specifically, given the initial multimodal input " + }, + { + "bbox": [ + 67, + 425, + 291, + 669 + ], + "type": "inline_equation", + "content": "\\mathcal{M}_0 = (V,T_0)" + }, + { + "bbox": [ + 67, + 425, + 291, + 669 + ], + "type": "text", + "content": ", DVR performs visual reactivation at self-reflection intervals " + }, + { + "bbox": [ + 67, + 425, + 291, + 669 + ], + "type": "inline_equation", + "content": "\\{r_1,\\dots,r_m\\}" + }, + { + "bbox": [ + 67, + 425, + 291, + 669 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 74, + 681, + 290, + 733 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 74, + 681, + 290, + 733 + ], + "spans": [ + { + "bbox": [ + 74, + 681, + 290, + 733 + ], + "type": "interline_equation", + "content": "\\mathcal {M} _ {i} = \\left(V, \\left[ T _ {\\text {p r e v}}; \\underbrace {\\text {P r o m p t}} _ {\\text {R e - a c t i v a t i o n}}; T _ {\\text {n e w}} \\right]\\right) \\text {a t s t e p} r _ {i} \\tag {3}", + "image_path": "ca2d99ca8bbb7c32b1259bd6821f528ac69e32ff5a22df9f90fa30e8daecef36.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 735, + 290, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 735, + 290, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 735, + 290, + 775 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 67, + 735, + 290, + 775 + ], + "type": "inline_equation", + "content": "T_{\\mathrm{prev}}" + }, + { + "bbox": [ + 67, + 735, + 290, + 775 + ], + "type": "text", + "content": " represents the previous reasoning steps and " + }, + { + "bbox": [ + 67, + 735, + 290, + 775 + ], + "type": "inline_equation", + "content": "T_{\\mathrm{new}}" + }, + { + "bbox": [ + 67, + 735, + 290, + 775 + ], + "type": "text", + "content": " denotes the new reasoning steps that are based on prior reasoning and reintroduce focus on" + } + ] + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 306, + 69, + 522, + 129 + ], + "blocks": [ + { + "bbox": [ + 306, + 69, + 522, + 129 + ], + "lines": [ + { + "bbox": [ + 306, + 69, + 522, + 129 + ], + "spans": [ + { + "bbox": [ + 306, + 69, + 522, + 129 + ], + "type": "image", + "image_path": "a5c52460f9451914f821519dcfa5dc5a3c2c8dd7793690a978a99f9e91aaa6c5.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 305, + 131, + 522, + 236 + ], + "blocks": [ + { + "bbox": [ + 305, + 131, + 522, + 236 + ], + "lines": [ + { + "bbox": [ + 305, + 131, + 522, + 236 + ], + "spans": [ + { + "bbox": [ + 305, + 131, + 522, + 236 + ], + "type": "image", + "image_path": "8ded0c6d380ffc66b6e04e3afc988892b918477e055637ca5f1fcff5d86fce63.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 246, + 525, + 283 + ], + "lines": [ + { + "bbox": [ + 302, + 246, + 525, + 283 + ], + "spans": [ + { + "bbox": [ + 302, + 246, + 525, + 283 + ], + "type": "text", + "content": "Figure 3: Overview of TVC System Design. We enable the model to have take-along visual conditioning capabilities through two stages: training and inference." 
+ } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 308, + 525, + 375 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 308, + 525, + 375 + ], + "spans": [ + { + "bbox": [ + 302, + 308, + 525, + 375 + ], + "type": "text", + "content": "visual information. The bridging prompt is employed to hint the existence of the image, e.g., Let me see the image again. To improve efficiency, our initial implementation adopts midpoint reactivation " + }, + { + "bbox": [ + 302, + 308, + 525, + 375 + ], + "type": "inline_equation", + "content": "(m = 1, r_1 = 0.5L" + }, + { + "bbox": [ + 302, + 308, + 525, + 375 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 302, + 308, + 525, + 375 + ], + "type": "inline_equation", + "content": "L" + }, + { + "bbox": [ + 302, + 308, + 525, + 375 + ], + "type": "text", + "content": "-step chains)." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 378, + 526, + 540 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 378, + 526, + 540 + ], + "spans": [ + { + "bbox": [ + 302, + 378, + 526, + 540 + ], + "type": "text", + "content": "During the self-reflection phase of the reasoning steps, we randomly select " + }, + { + "bbox": [ + 302, + 378, + 526, + 540 + ], + "type": "inline_equation", + "content": "m" + }, + { + "bbox": [ + 302, + 378, + 526, + 540 + ], + "type": "text", + "content": " positions to inject visual content. Specifically, we inject reactivation prompts and regenerate visual captions using the model's intrinsic capabilities. This forces joint attention to both textual reasoning and visual evidence. By leveraging the model's intrinsic image captioning capability, we continuously reinforce visual information throughout the reasoning process. This ensures that the model incorporates image evidence during reflection, rather than relying solely on textual reasoning." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 543, + 525, + 705 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 543, + 525, + 705 + ], + "spans": [ + { + "bbox": [ + 302, + 543, + 525, + 705 + ], + "type": "text", + "content": "Periodic Visual Calibration. Calibrating visual attention plays a crucial role in enhancing long-chain reasoning capabilities. After training our model, we further design the periodic visual calibration process. Specifically, we coordinate operations during reactivation as follows: 1) Token Compression: We first compress visual tokens using average pooling to prevent text-based reasoning from forgetting visual information. 2) Visual Cache Reset: We then prepend an instruction (bridging prompt) to reintroduce the image and re-inject image tokens by resetting the KV cache of the generation process." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 708, + 526, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 708, + 526, + 775 + ], + "spans": [ + { + "bbox": [ + 302, + 708, + 526, + 775 + ], + "type": "text", + "content": "We also provide an example of how PVC is implemented in the case study section (Section 4.5). PVC both improves reasoning efficiency and prevents the model from forgetting previous reasoning steps due to an overload of visual tokens." 
+ } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [], + "page_size": [ + 595, + 841 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 71, + 69, + 286, + 199 + ], + "blocks": [ + { + "bbox": [ + 71, + 69, + 286, + 199 + ], + "lines": [ + { + "bbox": [ + 71, + 69, + 286, + 199 + ], + "spans": [ + { + "bbox": [ + 71, + 69, + 286, + 199 + ], + "type": "image", + "image_path": "dcbf95c1a314d7c6dcc0dffd0f08a9b502ceb96620470fa2a70540736ef0cf49.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 208, + 291, + 257 + ], + "lines": [ + { + "bbox": [ + 67, + 208, + 291, + 257 + ], + "spans": [ + { + "bbox": [ + 67, + 208, + 291, + 257 + ], + "type": "text", + "content": "Figure 4: Data Generation Pipeline of TVC. We use iterative distillation to collect long-chain reasoning data, followed by a comprehensive response filtering process to ensure high-quality reasoning." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 68, + 285, + 255, + 312 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 285, + 255, + 312 + ], + "spans": [ + { + "bbox": [ + 68, + 285, + 255, + 312 + ], + "type": "text", + "content": "3 Data-Centric Implementation of Multimodal Reasoning System" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 330, + 291, + 371 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 330, + 291, + 371 + ], + "spans": [ + { + "bbox": [ + 67, + 330, + 291, + 371 + ], + "type": "text", + "content": "In this section, we briefly describe our implementation of the multimodal reasoning system through a scalable curated data generation pipeline." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 68, + 394, + 279, + 407 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 394, + 279, + 407 + ], + "spans": [ + { + "bbox": [ + 68, + 394, + 279, + 407 + ], + "type": "text", + "content": "3.1 Long-Chain Reasoning Data Collection" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 418, + 291, + 622 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 418, + 291, + 622 + ], + "spans": [ + { + "bbox": [ + 67, + 418, + 291, + 622 + ], + "type": "text", + "content": "Prior research (Qin et al., 2024; Jiang et al., 2024) has identified two dominant paradigms for constructing long-chain reasoning systems: (1) explicit search-based methods, which utilize structures such as Monte Carlo Tree Search (MCTS) combined with specially trained reward models to guide the search process toward optimal solutions, and (2) instruction distillation approaches, which fine-tune models on curated long chain-of-thought (CoT) datasets. To efficiently develop an MLLM with long-chain reasoning capabilities, we adopt the distillation paradigm. In this section, we describe the distillation process and present the data generation pipeline aimed at enhancing the reasoning capability of MLLM." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 626, + 292, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 626, + 292, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 626, + 292, + 775 + ], + "type": "text", + "content": "Our distillation pipeline begins with aggregating publicly available reasoning datasets (e.g., MathV360K (Shi et al., 2024) and Geo170K (Gao et al., 2023)). 
Through an iterative process of distillation and response filtering, we progressively enhance the model's long-chain reasoning capabilities. Specifically, the teacher model, QVQ-72B-Preview, generates long COT reasoning data, while the student model, Qwen2-VL, undergoes supervised fine-tuning on the filtered CoT data to enhance its reasoning performance." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 302, + 71, + 524, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 71, + 524, + 85 + ], + "spans": [ + { + "bbox": [ + 302, + 71, + 524, + 85 + ], + "type": "text", + "content": "3.2 Iterative Distillation with Reject Sampling" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 89, + 525, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 89, + 525, + 156 + ], + "spans": [ + { + "bbox": [ + 302, + 89, + 525, + 156 + ], + "type": "text", + "content": "After obtaining the long-chain reasoning responses, we employ an assessment pipeline to ensure data quality. Specifically, we introduce a dual-temperature sampling mechanism to optimize data quality through variance exploitation." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 157, + 524, + 196 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 157, + 524, + 196 + ], + "spans": [ + { + "bbox": [ + 302, + 157, + 524, + 196 + ], + "type": "text", + "content": "Deterministic Initial Sampling. For the first-stage data generation, we employ temperature " + }, + { + "bbox": [ + 302, + 157, + 524, + 196 + ], + "type": "inline_equation", + "content": "\\tau = 0" + }, + { + "bbox": [ + 302, + 157, + 524, + 196 + ], + "type": "text", + "content": " to obtain the QVQ's most confident reasoning paths:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 315, + 207, + 525, + 222 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 207, + 525, + 222 + ], + "spans": [ + { + "bbox": [ + 315, + 207, + 525, + 222 + ], + "type": "interline_equation", + "content": "\\mathcal {D} _ {\\text {i n i t}} = \\left\\{\\left(\\mathbf {x}, \\mathbf {y} _ {\\mathrm {C o T}}\\right) \\sim \\mathcal {P} _ {\\mathrm {Q V Q}} (\\cdot | \\mathbf {x}; \\tau = 0) \\right\\}, \\tag {4}", + "image_path": "7654ff8f86ddaf82a71b37737c7e813ba0f174d36d6208b0f7de139080c9e40f.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 232, + 525, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 232, + 525, + 312 + ], + "spans": [ + { + "bbox": [ + 302, + 232, + 525, + 312 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 302, + 232, + 525, + 312 + ], + "type": "inline_equation", + "content": "\\mathbf{y}_{\\mathrm{CoT}}" + }, + { + "bbox": [ + 302, + 232, + 525, + 312 + ], + "type": "text", + "content": " represents the response generated by QVQ. This ensures that the model generates the most probable reasoning path for each input. Subsequently, we use these high-confidence responses as a foundation for further refinement in later stages of data generation." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 313, + 526, + 435 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 313, + 526, + 435 + ], + "spans": [ + { + "bbox": [ + 302, + 313, + 526, + 435 + ], + "type": "text", + "content": "Answer-Centric Reject Sampling. To ensure the quality of the data, we employ an LLM-as-a-Judge approach for answer-centric reject sampling. 
Specifically, we use a strong LLM (e.g., Qwen2.5-72B-Instruct) as the verifier and prompt the model to output a 'yes' or 'no' in JSON format, indicating whether the long-chain reasoning answer is correct, i.e., " + }, + { + "bbox": [ + 302, + 313, + 526, + 435 + ], + "type": "inline_equation", + "content": "\\mathcal{V}_{\\mathrm{valid}} = \\{\\mathbf{y}_{\\mathrm{CoT}}|\\mathrm{Match}(\\mathbf{y}_{\\mathrm{CoT}},\\mathbf{y}_{\\mathrm{gt}})\\}" + }, + { + "bbox": [ + 302, + 313, + 526, + 435 + ], + "type": "text", + "content": ". After this process, we obtain " + }, + { + "bbox": [ + 302, + 313, + 526, + 435 + ], + "type": "inline_equation", + "content": "\\sim 200\\mathrm{K}" + }, + { + "bbox": [ + 302, + 313, + 526, + 435 + ], + "type": "text", + "content": " high-quality samples." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 302, + 435, + 525, + 530 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 435, + 525, + 530 + ], + "spans": [ + { + "bbox": [ + 302, + 435, + 525, + 530 + ], + "type": "text", + "content": "Best-of-N Error Correction. While initial sampling with temperature " + }, + { + "bbox": [ + 302, + 435, + 525, + 530 + ], + "type": "inline_equation", + "content": "\\tau = 0" + }, + { + "bbox": [ + 302, + 435, + 525, + 530 + ], + "type": "text", + "content": " yields a set of data, there are substantial residual errors " + }, + { + "bbox": [ + 302, + 435, + 525, + 530 + ], + "type": "inline_equation", + "content": "(\\mathcal{D}_{\\mathrm{error}})" + }, + { + "bbox": [ + 302, + 435, + 525, + 530 + ], + "type": "text", + "content": " where QVQ's responses " + }, + { + "bbox": [ + 302, + 435, + 525, + 530 + ], + "type": "inline_equation", + "content": "(\\mathbf{y}_{\\mathrm{CoT}})" + }, + { + "bbox": [ + 302, + 435, + 525, + 530 + ], + "type": "text", + "content": " unmatched with ground truth answers (55.8% of " + }, + { + "bbox": [ + 302, + 435, + 525, + 530 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_{\\mathrm{init}}" + }, + { + "bbox": [ + 302, + 435, + 525, + 530 + ], + "type": "text", + "content": "). 
To fully leverage the available open-source data, we implement a contrastive regeneration strategy using " + }, + { + "bbox": [ + 302, + 435, + 525, + 530 + ], + "type": "inline_equation", + "content": "\\tau = 1" + }, + { + "bbox": [ + 302, + 435, + 525, + 530 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 539, + 526, + 585 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 539, + 526, + 585 + ], + "spans": [ + { + "bbox": [ + 302, + 539, + 526, + 585 + ], + "type": "interline_equation", + "content": "\\mathcal {D} _ {\\text {c o r r e c t e d}} = \\bigcup_ {\\mathbf {x} \\in \\mathcal {D} _ {\\text {e r r o r}}} \\left\\{\\arg \\min _ {\\mathbf {y} ^ {i} \\in \\mathcal {Y} _ {\\text {v a l i d}}} \\operatorname {L e n g t h} \\left(\\mathbf {y} ^ {i}\\right) \\right\\} _ {i = 1} ^ {8}, \\tag {5}", + "image_path": "697ce5fbc3dfae830c3d1ae191028b80f3e5135ed25e9b6d17b761553de08c7d.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 302, + 586, + 525, + 653 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 586, + 525, + 653 + ], + "spans": [ + { + "bbox": [ + 302, + 586, + 525, + 653 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 302, + 586, + 525, + 653 + ], + "type": "inline_equation", + "content": "\\mathcal{V}_{\\mathrm{valid}}" + }, + { + "bbox": [ + 302, + 586, + 525, + 653 + ], + "type": "text", + "content": " denotes the subset of 8 regenerated responses (at " + }, + { + "bbox": [ + 302, + 586, + 525, + 653 + ], + "type": "inline_equation", + "content": "\\tau = 1" + }, + { + "bbox": [ + 302, + 586, + 525, + 653 + ], + "type": "text", + "content": ") that satisfy answer correctness. For cases with multiple valid responses, we prioritize minimal reasoning token length to enhance conciseness and efficiency:" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 312, + 662, + 525, + 699 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 662, + 525, + 699 + ], + "spans": [ + { + "bbox": [ + 312, + 662, + 525, + 699 + ], + "type": "interline_equation", + "content": "\\operatorname {L e n g t h} (\\mathbf {y}) = \\sum_ {t = 1} ^ {T} \\mathbb {I} \\left(w _ {t} \\notin \\{[ \\mathrm {P A D} ], [ \\mathrm {S E P} ] \\}\\right) \\tag {6}", + "image_path": "6210ce2bebfc06b3811825c874b06f63c700d8f06116b097bd99a7904dfcc8c2.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 302, + 708, + 526, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 708, + 526, + 775 + ], + "spans": [ + { + "bbox": [ + 302, + 708, + 526, + 775 + ], + "type": "text", + "content": "This two-stage sampling achieves partial error recovery while reducing the average token count to minimize meaningless self-reflection, effectively balancing correctness and efficiency. To further enhance the quality of the dataset, we also perform" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [], + "page_size": [ + 595, + 841 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 290, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 290, + 138 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 290, + 138 + ], + "type": "text", + "content": "dynamic token truncation and reflection word pruning. This approach helps reduce the ineffective reflection parts in the dataset, thereby mitigating the issue of the model being unable to complete its answers. 
More details are in Section B." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 148, + 155, + 162 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 148, + 155, + 162 + ], + "spans": [ + { + "bbox": [ + 67, + 148, + 155, + 162 + ], + "type": "text", + "content": "4 Experiments" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 169, + 291, + 317 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 169, + 291, + 317 + ], + "spans": [ + { + "bbox": [ + 67, + 169, + 291, + 317 + ], + "type": "text", + "content": "We conduct comprehensive experiments across multiple vision-language benchmarks to demonstrate the effectiveness of our method. Section 4.1 provides implementation details for TVC. In Section 4.3, we present key results on visual reasoning tasks, supplemented with findings on general image understanding. Section 4.4 details ablation studies to evaluate the influence of critical design decisions. Finally, Section 4.5 presents visualizations and case studies to illustrate the method's operational characteristics and insights derived from it." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 326, + 170, + 339 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 326, + 170, + 339 + ], + "spans": [ + { + "bbox": [ + 67, + 326, + 170, + 339 + ], + "type": "text", + "content": "4.1 Training Recipe" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 343, + 291, + 587 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 343, + 291, + 587 + ], + "spans": [ + { + "bbox": [ + 67, + 343, + 291, + 587 + ], + "type": "text", + "content": "We integrate the TVC system with MLLMs of varying scales to demonstrate the effectiveness and generalizability of our approach. Initial implementation with Qwen2-VL-7B-Instruct confirmed the method's validity. To further evaluate scalability and establish robust baselines against state-of-the-art MLLMs, we expanded the approach to a 72B model. Prior to training TVC, we follow the long-chain reasoning pipeline described earlier. We use the LLaMA-Factory (Zheng et al., 2024) framework, with a learning rate of 2e-5, a batch size of 256, and 5 training epochs. During optimization, only the LLM parameters and cross-modal connector are trained, while the visual encoder remains frozen. The training process requires 10 hours on a " + }, + { + "bbox": [ + 67, + 343, + 291, + 587 + ], + "type": "inline_equation", + "content": "64\\times \\mathrm{H}20" + }, + { + "bbox": [ + 67, + 343, + 291, + 587 + ], + "type": "text", + "content": " GPU setup for the 7B model and approximately 4 days for the 72B model. Additional details are presented in Table 4." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 596, + 175, + 608 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 596, + 175, + 608 + ], + "spans": [ + { + "bbox": [ + 67, + 596, + 175, + 608 + ], + "type": "text", + "content": "4.2 Evaluation Setup" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 613, + 291, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 613, + 291, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 613, + 291, + 775 + ], + "type": "text", + "content": "We conduct a comprehensive experimental analysis across various visual reasoning benchmarks that require advanced visual reasoning skills. 
To ensure a well-rounded evaluation, we select several widely recognized and representative benchmarks, including MathVista (Lu et al., 2024a), MathVerse (Zhang et al., 2024a), MathVision (Wang et al., 2024a), Dynamath (Zou et al., 2024), and OlympiadBench (He et al., 2024). MathVista consists of 6,141 examples that require fine-grained, deep visual understanding and compositional reasoning. MathVerse contains 2,612 multi-subject" + } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 306, + 71, + 522, + 231 + ], + "blocks": [ + { + "bbox": [ + 306, + 71, + 522, + 231 + ], + "lines": [ + { + "bbox": [ + 306, + 71, + 522, + 231 + ], + "spans": [ + { + "bbox": [ + 306, + 71, + 522, + 231 + ], + "type": "image", + "image_path": "b67cf65f88ddd239a7d839548d4229efad69558040285509805ad4ef26fee458.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 242, + 525, + 279 + ], + "lines": [ + { + "bbox": [ + 302, + 242, + 525, + 279 + ], + "spans": [ + { + "bbox": [ + 302, + 242, + 525, + 279 + ], + "type": "text", + "content": "Figure 5: Ablations on the amount of training data. TVC benefits from data scaling, continually improving the reasoning capabilities." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "bbox": [ + 301, + 300, + 526, + 516 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 300, + 526, + 516 + ], + "spans": [ + { + "bbox": [ + 301, + 300, + 526, + 516 + ], + "type": "text", + "content": "math problems from a variety of sources. MathVision includes 3,040 high-quality mathematical problems sourced from established mathematics competitions. OlympiadBench features 8,476 bilingual multimodal problems tailored to Olympic-level mathematics and physics competitions. These benchmarks evaluate problem-solving abilities in mathematics, and following standard practice, we use GPT-4o-mini as the evaluator. Following the VLMEvalKit guidelines, we exclude the text-only split from MathVerse and the theorem-proof sections from OlympiadBench. For a fair comparison, we conduct evaluations using the testmini sets of MathVerse and MathVista. Fast evaluation is made possible through the use of the VLMEvalKit (Duan et al., 2024) and vLLM (Kwon et al., 2023)." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 527, + 490, + 540 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 527, + 490, + 540 + ], + "spans": [ + { + "bbox": [ + 302, + 527, + 490, + 540 + ], + "type": "text", + "content": "4.3 Main Results on Visual Reasoning" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 301, + 544, + 526, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 544, + 526, + 774 + ], + "spans": [ + { + "bbox": [ + 301, + 544, + 526, + 774 + ], + "type": "text", + "content": "The results in Table 1 demonstrate the effectiveness and generalizability of TVC across multiple visual reasoning benchmarks. 
Notably, our model achieves a " + }, + { + "bbox": [ + 301, + 544, + 526, + 774 + ], + "type": "inline_equation", + "content": "16.7\\%" + }, + { + "bbox": [ + 301, + 544, + 526, + 774 + ], + "type": "text", + "content": " improvement over QVQ-72B-Preview on MathVision and a " + }, + { + "bbox": [ + 301, + 544, + 526, + 774 + ], + "type": "inline_equation", + "content": "17.6\\%" + }, + { + "bbox": [ + 301, + 544, + 526, + 774 + ], + "type": "text", + "content": " gain on MathVerse, highlighting enhanced reasoning capabilities. Unlike conventional datasets where textual descriptions may include implicit visual cues, MathVerse is an all-around visual math benchmark specifically designed for equitable, in-depth evaluation of MLLMs. The significant gains on MathVerse underscore the significance of TVC, given the benchmark's unique design principles. Furthermore, the TVC-7B model, despite its smaller size compared to counterparts, achieves competitive performance, even outperforming leading MLLMs in multiple cases. This demonstrates the robustness of our" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [], + "page_size": [ + 595, + 841 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 70, + 114, + 526, + 279 + ], + "blocks": [ + { + "bbox": [ + 67, + 69, + 526, + 108 + ], + "lines": [ + { + "bbox": [ + 67, + 69, + 526, + 108 + ], + "spans": [ + { + "bbox": [ + 67, + 69, + 526, + 108 + ], + "type": "text", + "content": "Table 1: Results on Visual Reasoning Tasks. We conduct evaluation experiments across 6 benchmarks, covering both general reasoning and task-specific reasoning assessments. TVC exhibits notable effectiveness and generalizability when applied to Qwen2-VL, surpassing other state-of-the-art MLLMs by a large margin." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 70, + 114, + 526, + 279 + ], + "lines": [ + { + "bbox": [ + 70, + 114, + 526, + 279 + ], + "spans": [ + { + "bbox": [ + 70, + 114, + 526, + 279 + ], + "type": "table", + "html": "
<tr><td>Model</td><td>Size</td><td>MathVista</td><td>MathVision</td><td>MathVerse</td><td>Dynamath</td><td>OlympiadBench</td><td>Average</td></tr>
<tr><td>MiniCPM-V-2.6 (Yadav et al., 2025)</td><td>8B</td><td>60.8</td><td>18.4</td><td>17.6</td><td>9.8</td><td>-</td><td>-</td></tr>
<tr><td>VITA-1.5 (Fu et al., 2025a)</td><td>8B</td><td>66.2</td><td>19.5</td><td>23.4</td><td>9.6</td><td>-</td><td>-</td></tr>
<tr><td>LLaVA-COT (Xu et al., 2024)</td><td>11B</td><td>52.5</td><td>19.9</td><td>22.6</td><td>7.8</td><td>-</td><td>-</td></tr>
<tr><td>Qwen2-VL (Wang et al., 2024b)</td><td>7B</td><td>60.9</td><td>16.3</td><td>24.6</td><td>11.0</td><td>3.2</td><td>23.2</td></tr>
<tr><td>InternVL2.5 (Chen et al., 2024b)</td><td>8B</td><td>64.5</td><td>17.0</td><td>22.8</td><td>9.4</td><td>0.1</td><td>22.8</td></tr>
<tr><td>POINTS1.5 (Liu et al., 2024b)</td><td>8B</td><td>66.4</td><td>22.0</td><td>26.6</td><td>14.2</td><td>-</td><td>-</td></tr>
<tr><td>Ovis1.6-Gemma2 (Lu et al., 2024b)</td><td>27B</td><td>70.2</td><td>20.6</td><td>37.8</td><td>17.0</td><td>-</td><td>-</td></tr>
<tr><td>InternVL2.5-COT (Chen et al., 2024b)</td><td>78B</td><td>71.4</td><td>32.5</td><td>40.1</td><td>28.5</td><td>-</td><td>-</td></tr>
<tr><td>LLaVA-OneVision (Li et al., 2024)</td><td>72B</td><td>67.1</td><td>25.3</td><td>27.2</td><td>15.6</td><td>-</td><td>-</td></tr>
<tr><td>Qwen2-VL (Wang et al., 2024b)</td><td>72B</td><td>69.7</td><td>26.6</td><td>36.2</td><td>20.0</td><td>10.3</td><td>32.6</td></tr>
<tr><td>QVQ-72B-preview (QwenTeam, 2024)</td><td>72B</td><td>71.4</td><td>35.9</td><td>41.5</td><td>30.7</td><td>20.4</td><td>40.0</td></tr>
<tr><td>TVC</td><td>7B</td><td>68.1</td><td>22.7</td><td>38.9</td><td>15.1</td><td>9.8</td><td>30.9</td></tr>
<tr><td>TVC</td><td>72B</td><td>72.2</td><td>41.9</td><td>48.8</td><td>30.0</td><td>24.3</td><td>43.4</td></tr>
", + "image_path": "4e2812122af8c722544df25a6b2cabd42befc062c02de7576a7911f9529d2ab6.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 70, + 354, + 289, + 421 + ], + "blocks": [ + { + "bbox": [ + 67, + 296, + 291, + 344 + ], + "lines": [ + { + "bbox": [ + 67, + 296, + 291, + 344 + ], + "spans": [ + { + "bbox": [ + 67, + 296, + 291, + 344 + ], + "type": "text", + "content": "Table 2: Ablations on the TVC System. TVC enhances reasoning capabilities, showing significant improvements on both general and task-specific reasoning benchmarks." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 70, + 354, + 289, + 421 + ], + "lines": [ + { + "bbox": [ + 70, + 354, + 289, + 421 + ], + "spans": [ + { + "bbox": [ + 70, + 354, + 289, + 421 + ], + "type": "table", + "html": "
<tr><td>Method</td><td>MathVista</td><td>MathVision</td><td>MathVerse</td><td>Avg</td></tr>
<tr><td>Baseline</td><td>60.9</td><td>16.3</td><td>24.6</td><td>33.9</td></tr>
<tr><td>Vanilla - Direct SFT</td><td>63.5</td><td>19.8</td><td>31.6</td><td>38.3</td></tr>
<tr><td>TVC w/o PVC</td><td>66.7</td><td>21.8</td><td>35.6</td><td>41.4</td></tr>
<tr><td>TVC w/o DVR</td><td>66.2</td><td>22.3</td><td>34.7</td><td>41.0</td></tr>
<tr><td>TVC Full</td><td>68.1</td><td>22.7</td><td>38.9</td><td>43.2</td></tr>
", + "image_path": "56fb82a543aa388c80425d95ebaaed59088b49d07b6fa1bfe12dbb29bf397700.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 444, + 291, + 606 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 444, + 291, + 606 + ], + "spans": [ + { + "bbox": [ + 67, + 444, + 291, + 606 + ], + "type": "text", + "content": "methodology even with more compact model configurations. Beyond task-specific visual reasoning, we extend our evaluation to general reasoning benchmarks (e.g., MathVista), where TVC consistently delivers strong performance, achieving a " + }, + { + "bbox": [ + 67, + 444, + 291, + 606 + ], + "type": "inline_equation", + "content": "3.6\\%" + }, + { + "bbox": [ + 67, + 444, + 291, + 606 + ], + "type": "text", + "content": " improvement over the original Qwen2-VL72B model. These results emphasize TVC's ability to excel in tasks requiring both perception and reasoning. Collectively, the findings indicate that TVC not only advances performance in specialized visual reasoning tasks but also offers substantial benefits in broader application scenarios." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 618, + 175, + 632 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 618, + 175, + 632 + ], + "spans": [ + { + "bbox": [ + 67, + 618, + 175, + 632 + ], + "type": "text", + "content": "4.4 Further Analysis" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 638, + 291, + 706 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 638, + 291, + 706 + ], + "spans": [ + { + "bbox": [ + 67, + 638, + 291, + 706 + ], + "type": "text", + "content": "In this section, we conduct comprehensive experiments to evaluate the design choices of TVC, emphasizing the key contributions of our approach. We also present a case study to further illustrate the qualitative effectiveness of TVC." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 708, + 291, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 708, + 291, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 708, + 291, + 775 + ], + "type": "text", + "content": "Effectiveness of TVC system. To evaluate the effectiveness of the TVC system, we conduct comprehensive ablation experiments on various components using the Qwen2-VL-7B as the Baseline. We begin by performing supervised fine-tuning on the" + } + ] + } + ], + "index": 7 + }, + { + "type": "table", + "bbox": [ + 305, + 317, + 523, + 368 + ], + "blocks": [ + { + "bbox": [ + 323, + 296, + 504, + 310 + ], + "lines": [ + { + "bbox": [ + 323, + 296, + 504, + 310 + ], + "spans": [ + { + "bbox": [ + 323, + 296, + 504, + 310 + ], + "type": "text", + "content": "Table 3: Ablations on Token Compression." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 305, + 317, + 523, + 368 + ], + "lines": [ + { + "bbox": [ + 305, + 317, + 523, + 368 + ], + "spans": [ + { + "bbox": [ + 305, + 317, + 523, + 368 + ], + "type": "table", + "html": "
<tr><td>Method</td><td>MathVista</td><td>MathVision</td><td>MathVerse</td><td>Avg</td></tr>
<tr><td>TVC Baseline</td><td>68.3</td><td>21.5</td><td>39.6</td><td>43.1</td></tr>
<tr><td>+ 2x2 Avg Pooling</td><td>67.8</td><td>22.9</td><td>38.3</td><td>43.0</td></tr>
<tr><td>+ 4x4 Avg Pooling</td><td>68.1</td><td>22.7</td><td>38.9</td><td>43.2</td></tr>
", + "image_path": "3bd77dbb748e6d5a4b9a4950ab47ef7ead5e90b2f52314ea31b2cfeb357c8418.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 393, + 526, + 582 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 393, + 526, + 582 + ], + "spans": [ + { + "bbox": [ + 302, + 393, + 526, + 582 + ], + "type": "text", + "content": "Qwen2-VL-7B model with the data from Section 3, referred to as Vanilla - Direct SFT. Furthermore, we apply the DVR training approach outlined in Section 2.2, which increases the focus on the visual information in the training data, enabling the model to implicitly learn visual conditioning capabilities. Additionally, during the testing phase, we experiment with resetting the visual KV cache midway through the reasoning process, and after compressing the visual tokens, we add them to the end of the reasoning steps. This strategy allows the model to further observe the image content during its thought process, mitigating the visual forgetting and suppressing hallucinations." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 302, + 586, + 526, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 586, + 526, + 775 + ], + "spans": [ + { + "bbox": [ + 302, + 586, + 526, + 775 + ], + "type": "text", + "content": "As demonstrated in Table 2, the results highlight that the TVC system is crucial for enhancing visual reasoning capabilities. When using only the SFT training data without the DVR strategy in TVC (Vanilla - Direct SFT), improvements in visual reasoning tasks are limited, as the model lacks the ability to reflect on visual information. Furthermore, models trained with the DVR strategy alone still produce sub-optimal results, underscoring the importance of a comprehensive approach that integrates PVC and DVR. The contributions of PVC and DVR are relatively balanced in enhancing the reasoning capabilities. To further validate the effectiveness of our visual token compression, we" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [], + "page_size": [ + 595, + 841 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 69, + 69, + 526, + 360 + ], + "blocks": [ + { + "bbox": [ + 69, + 69, + 526, + 360 + ], + "lines": [ + { + "bbox": [ + 69, + 69, + 526, + 360 + ], + "spans": [ + { + "bbox": [ + 69, + 69, + 526, + 360 + ], + "type": "image", + "image_path": "0c4be083c4972b0ec6877c4318b42fb9003a36717e0940aeeddc10e6556d40f2.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 369, + 525, + 394 + ], + "lines": [ + { + "bbox": [ + 67, + 369, + 525, + 394 + ], + "spans": [ + { + "bbox": [ + 67, + 369, + 525, + 394 + ], + "type": "text", + "content": "Figure 6: Case Study of TVC. TVC effectively re-examines the image during the reflection process to correct mistakes, guiding the model to the correct answer." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 415, + 290, + 523 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 415, + 290, + 523 + ], + "spans": [ + { + "bbox": [ + 67, + 415, + 290, + 523 + ], + "type": "text", + "content": "conduct experiments with different pooling methods. As shown in Table 3, the TVC Baseline represents the method without image compression. We observe that the use of pooling methods has little impact on the model's capabilities. 
Utilizing " + }, + { + "bbox": [ + 67, + 415, + 290, + 523 + ], + "type": "inline_equation", + "content": "4 \\times 4" + }, + { + "bbox": [ + 67, + 415, + 290, + 523 + ], + "type": "text", + "content": " average pooling for compression not only enhances the model's inference efficiency but also achieves a slight performance improvement." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 528, + 291, + 716 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 528, + 291, + 716 + ], + "spans": [ + { + "bbox": [ + 67, + 528, + 291, + 716 + ], + "type": "text", + "content": "Data Scaling Law. To validate the scalability of our approach, we also conduct a data scaling law experiment for training. As shown in Figure 5, we compare models across various data size: 50K, 100K, 150K, and 200K samples, and present the performance relative to the base model (Qwen2-VL-7B). Our findings show that as the dataset size increases, the model's performance continues to improve. However, it is evident that with a small dataset, the model's reasoning ability cannot reach an optimal level solely through SFT. Therefore, data scaling proves highly effective during SFT training, and the results indicate that TVC benefits from increased data." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 721, + 291, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 721, + 291, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 721, + 291, + 775 + ], + "type": "text", + "content": "To prevent underfitting from becoming a performance bottleneck, we increase the number of training epochs with smaller datasets, which further improves model performance. This indicates that in" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 302, + 415, + 526, + 469 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 415, + 526, + 469 + ], + "spans": [ + { + "bbox": [ + 302, + 415, + 526, + 469 + ], + "type": "text", + "content": "sufficient data or epochs lead to undertrained models, making it difficult to learn long-chain reasoning. Increasing both the data and training epochs can effectively alleviate this issue." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 302, + 479, + 383, + 493 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 479, + 383, + 493 + ], + "spans": [ + { + "bbox": [ + 302, + 479, + 383, + 493 + ], + "type": "text", + "content": "4.5 Case Study" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 301, + 498, + 525, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 498, + 525, + 714 + ], + "spans": [ + { + "bbox": [ + 301, + 498, + 525, + 714 + ], + "type": "text", + "content": "We present a comprehensive case study in Figure 6 to illustrate the improvements of our TVC approach. Specifically, we provide an example that highlights advancements in the reasoning process. In this example, the model is tasked with carefully observing the objects within an image and, after eliminating certain shapes and attributes, providing the count of the remaining objects. During base CoT reasoning, the model fails to check the attributes of each object and only focuses on their shapes, leading to an incorrect final answer. Subsequently, the model learns to re-focus on the image and carefully describe the attributes of each object in detail. This process allows the model to identify the issue in its previous reasoning and provide the correct answer." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 302, + 725, + 381, + 738 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 725, + 381, + 738 + ], + "spans": [ + { + "bbox": [ + 302, + 725, + 381, + 738 + ], + "type": "text", + "content": "5 Conclusion" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 748, + 526, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 748, + 526, + 775 + ], + "spans": [ + { + "bbox": [ + 302, + 748, + 526, + 775 + ], + "type": "text", + "content": "In this paper, we introduce Take-along Visual Conditioning (TVC), a novel strategy designed to en" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [], + "page_size": [ + 595, + 841 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 71, + 293, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 71, + 293, + 236 + ], + "spans": [ + { + "bbox": [ + 67, + 71, + 293, + 236 + ], + "type": "text", + "content": "hance the reasoning capabilities of MLLMs by addressing the challenge of visual attention degradation during long-chain reasoning. By dynamically shifting the image input to critical stages of reasoning and compressing redundant visual tokens, we ensure that the model maintains focus on the visual information throughout the process. Our extensive evaluation on several mathematical reasoning benchmarks demonstrates the effectiveness of TVC in improving multimodal reasoning, providing a robust approach to equip MLLMs with better visual grounding for complex tasks." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 68, + 244, + 167, + 259 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 244, + 167, + 259 + ], + "spans": [ + { + "bbox": [ + 68, + 244, + 167, + 259 + ], + "type": "text", + "content": "Acknowledgments" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 266, + 291, + 376 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 266, + 291, + 376 + ], + "spans": [ + { + "bbox": [ + 67, + 266, + 291, + 376 + ], + "type": "text", + "content": "This work is partially supported by National Key R&D Program of China (2024YFE0202800), NSFC (62376118), Key Program of Jiangsu Science Foundation (BK20243012), CCF-Tencent Rhino-Bird Open Research Fund (RAGR20240101), AMED (JP25wm0625405), and Collaborative Innovation Center of Novel Software Technology and Industrialization." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 68, + 386, + 131, + 399 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 386, + 131, + 399 + ], + "spans": [ + { + "bbox": [ + 68, + 386, + 131, + 399 + ], + "type": "text", + "content": "Limitations" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 408, + 291, + 558 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 408, + 291, + 558 + ], + "spans": [ + { + "bbox": [ + 67, + 408, + 291, + 558 + ], + "type": "text", + "content": "Despite advancements, our method may still exhibit several limitations. First, for highly complex reasoning tasks requiring sophisticated analytical capabilities, simply increasing visual revisits proves insufficient. In contrast, it is crucial to enhance the model's inherent reasoning capacity. 
Second, our method assumes the availability of delayed visual processing, making it potentially unsuitable for real-time applications requiring instantaneous visual feedback, such as robotic navigation or time-sensitive decision-making scenarios." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 68, + 580, + 127, + 592 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 580, + 127, + 592 + ], + "spans": [ + { + "bbox": [ + 68, + 580, + 127, + 592 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 68, + 600, + 291, + 774 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 68, + 600, + 291, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 600, + 291, + 624 + ], + "spans": [ + { + "bbox": [ + 68, + 600, + 291, + 624 + ], + "type": "text", + "content": "Meta AI. 2024. Build the future of ai with meta llama 3. Technical report, Meta AI." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 633, + 290, + 655 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 633, + 290, + 655 + ], + "spans": [ + { + "bbox": [ + 69, + 633, + 290, + 655 + ], + "type": "text", + "content": "anthropic. 2024. Introducing the next generation of claude. Technical report, anthropic." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 664, + 291, + 708 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 664, + 291, + 708 + ], + "spans": [ + { + "bbox": [ + 69, + 664, + 291, + 708 + ], + "type": "text", + "content": "Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. 2023. Qwen technical report. arXiv:2309.16609." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 719, + 291, + 774 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 719, + 291, + 774 + ], + "spans": [ + { + "bbox": [ + 69, + 719, + 291, + 774 + ], + "type": "text", + "content": "Liang Chen, Haozhe Zhao, Tianyu Liu, Shuai Bai, Junyang Lin, Chang Zhou, and Baobao Chang. 2024a. An image is worth 1/2 tokens after layer 2: Plug-and-play inference acceleration for large vision-language models. Preprint, arXiv:2403.06764." + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 526, + 774 + ], + "type": "list", + "angle": 0, + "index": 22, + "blocks": [ + { + "bbox": [ + 304, + 72, + 525, + 139 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 72, + 525, + 139 + ], + "spans": [ + { + "bbox": [ + 304, + 72, + 525, + 139 + ], + "type": "text", + "content": "Zhe Chen, Weiyun Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Erfei Cui, Jinguo Zhu, Shenglong Ye, Hao Tian, Zhaoyang Liu, et al. 2024b. Expanding performance boundaries of open-source multimodal models with model, data, and test-time scaling. arXiv preprint arXiv:2412.05271." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 304, + 148, + 526, + 204 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 148, + 526, + 204 + ], + "spans": [ + { + "bbox": [ + 304, + 148, + 526, + 204 + ], + "type": "text", + "content": "Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Zhong Muyan, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. 2023. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. arXiv:2312.14238." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 212, + 526, + 246 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 212, + 526, + 246 + ], + "spans": [ + { + "bbox": [ + 304, + 212, + 526, + 246 + ], + "type": "text", + "content": "DeepSeek-AI. 2024. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. Technical report, DeepSeek." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 255, + 526, + 311 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 255, + 526, + 311 + ], + "spans": [ + { + "bbox": [ + 304, + 255, + 526, + 311 + ], + "type": "text", + "content": "Yuhao Dong, Zuyan Liu, Hai-Long Sun, Jingkang Yang, Winston Hu, Yongming Rao, and Ziwei Liu. 2024. Insight-v: Exploring long-chain visual reasoning with multimodal large language models. arXiv preprint arXiv:2411.14432." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 320, + 526, + 386 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 320, + 526, + 386 + ], + "spans": [ + { + "bbox": [ + 304, + 320, + 526, + 386 + ], + "type": "text", + "content": "Haodong Duan, Junming Yang, Yuxuan Qiao, Xinyu Fang, Lin Chen, Yuan Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, et al. 2024. Vlmealkit: An open-source toolkit for evaluating large multi-modality models. arXiv preprint arXiv:2407.11691." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 395, + 526, + 451 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 395, + 526, + 451 + ], + "spans": [ + { + "bbox": [ + 304, + 395, + 526, + 451 + ], + "type": "text", + "content": "Chaoyou Fu, Haojia Lin, Xiong Wang, Yi-Fan Zhang, Yunhang Shen, Xiaoyu Liu, Yangze Li, Zuwei Long, Heting Gao, Ke Li, et al. 2025a. Vita-1.5: Towards gpt-4o level real-time vision and speech interaction. arXiv preprint arXiv:2501.01957." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 460, + 526, + 505 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 460, + 526, + 505 + ], + "spans": [ + { + "bbox": [ + 304, + 460, + 526, + 505 + ], + "type": "text", + "content": "Jiale Fu, Yuchu Jiang, Junkai Chen, Jiaming Fan, Xin Geng, and Xu Yang. 2025b. Speculative ensemble: Fast large language model ensemble via speculation. arXiv preprint arXiv:2502.01662." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 513, + 526, + 570 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 513, + 526, + 570 + ], + "spans": [ + { + "bbox": [ + 304, + 513, + 526, + 570 + ], + "type": "text", + "content": "Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jianhua Han, Hang Xu, Zhenguo Li, et al. 2023. G-llava: Solving geometric problem with multi-modal large language model. arXiv preprint arXiv:2312.11370." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 578, + 526, + 634 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 578, + 526, + 634 + ], + "spans": [ + { + "bbox": [ + 304, + 578, + 526, + 634 + ], + "type": "text", + "content": "Jarvis Guo, Tuney Zheng, Yuelin Bai, Bo Li, Yubo Wang, King Zhu, Yizhi Li, Graham Neubig, Wenhu Chen, and Xiang Yue. 2024. Mammoth-vl: Eliciting multimodal reasoning with instruction tuning at scale. arXiv preprint arXiv:2412.05237." 
+ } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 643, + 526, + 710 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 643, + 526, + 710 + ], + "spans": [ + { + "bbox": [ + 304, + 643, + 526, + 710 + ], + "type": "text", + "content": "Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. 2024. Olympiad-bench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems. arXiv preprint arXiv:2402.14008." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 719, + 526, + 774 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 719, + 526, + 774 + ], + "spans": [ + { + "bbox": [ + 304, + 719, + 526, + 774 + ], + "type": "text", + "content": "Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, and Ranjay Krishna. 2024. Visual sketchpad: Sketching as a visual chain of thought for multimodal language models. arXiv preprint arXiv:2406.09403." + } + ] + } + ], + "index": 21 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [], + "page_size": [ + 595, + 841 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 774 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 69, + 72, + 291, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 291, + 127 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 291, + 127 + ], + "type": "text", + "content": "Jinhao Jiang, Zhipeng Chen, Yingqian Min, Jie Chen, Xiaoxue Cheng, Jiapeng Wang, Yiru Tang, Haoxiang Sun, Jia Deng, Wayne Xin Zhao, et al. 2024. Technical report: Enhancing llm reasoning with reward-guided tree search. arXiv preprint arXiv:2411.11694." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 137, + 290, + 215 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 137, + 290, + 215 + ], + "spans": [ + { + "bbox": [ + 69, + 137, + 290, + 215 + ], + "type": "text", + "content": "Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 225, + 290, + 280 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 225, + 290, + 280 + ], + "spans": [ + { + "bbox": [ + 69, + 225, + 290, + 280 + ], + "type": "text", + "content": "Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, et al. 2024. Llavaonevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 291, + 290, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 291, + 290, + 346 + ], + "spans": [ + { + "bbox": [ + 69, + 291, + 290, + 346 + ], + "type": "text", + "content": "Chunyuan Li, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, and Jianfeng Gao. 2023. Llava-med: Training a large language-and-vision assistant for biomedicine in one day. arXiv:2306.00890." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 357, + 290, + 390 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 357, + 290, + 390 + ], + "spans": [ + { + "bbox": [ + 69, + 357, + 290, + 390 + ], + "type": "text", + "content": "Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. 2024a. Llavanext: Improved reasoning,OCR, and world knowledge." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 400, + 290, + 445 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 400, + 290, + 445 + ], + "spans": [ + { + "bbox": [ + 69, + 400, + 290, + 445 + ], + "type": "text", + "content": "Yuan Liu, Le Tian, Xiao Zhou, Xinyu Gao, Kavio Yu, Yang Yu, and Jie Zhou. 2024b. Points1. 5: Building a vision-language model towards real world applications. arXiv preprint arXiv:2412.08443." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 455, + 290, + 510 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 455, + 290, + 510 + ], + "spans": [ + { + "bbox": [ + 69, + 455, + 290, + 510 + ], + "type": "text", + "content": "Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. 2024a. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. In ICLR." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 521, + 290, + 587 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 521, + 290, + 587 + ], + "spans": [ + { + "bbox": [ + 69, + 521, + 290, + 587 + ], + "type": "text", + "content": "Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. 2023. Chameleon: Plug-and-play compositional reasoning with large language models. Advances in Neural Information Processing Systems, 36:43447-43478." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 597, + 290, + 642 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 597, + 290, + 642 + ], + "spans": [ + { + "bbox": [ + 69, + 597, + 290, + 642 + ], + "type": "text", + "content": "Shiyin Lu, Yang Li, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang, and Han-Jia Ye. 2024b. Ovis: Structural embedding alignment for multimodal large language model. arXiv preprint arXiv:2405.20797." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 652, + 290, + 708 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 652, + 290, + 708 + ], + "spans": [ + { + "bbox": [ + 69, + 652, + 290, + 708 + ], + "type": "text", + "content": "Brandon McKinzie, Zhe Gan, Jean-Philippe Fauconnier, Sam Dodge, Bowen Zhang, Philipp Dufter, Dhruti Shah, Xianzhi Du, Futang Peng, Floris Weers, et al. 2024. Mm1: Methods, analysis & insights from multimodal llm pre-training. arXiv:2403.09611." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 718, + 290, + 741 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 718, + 290, + 741 + ], + "spans": [ + { + "bbox": [ + 69, + 718, + 290, + 741 + ], + "type": "text", + "content": "OpenAI. 2024a. Gpt-4o: Hello gpt-4o. Technical report, OpenAI." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 69, + 751, + 290, + 774 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 751, + 290, + 774 + ], + "spans": [ + { + "bbox": [ + 69, + 751, + 290, + 774 + ], + "type": "text", + "content": "OpenAI. 2024b. 
Learning to reason with llms. Technical report, OpenAI." + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 304, + 72, + 525, + 773 + ], + "type": "list", + "angle": 0, + "index": 24, + "blocks": [ + { + "bbox": [ + 305, + 72, + 525, + 137 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 72, + 525, + 137 + ], + "spans": [ + { + "bbox": [ + 305, + 72, + 525, + 137 + ], + "type": "text", + "content": "Yingzhe Peng, Gongrui Zhang, Miaosen Zhang, Zhiyuan You, Jie Liu, Qipeng Zhu, Kai Yang, Xingzhong Xu, Xin Geng, and Xu Yang. 2025. Lmmr1: Empowering 3b lmms with strong reasoning abilities through two-stage rule-based rl. arXiv preprint arXiv:2503.07536." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 149, + 525, + 203 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 149, + 525, + 203 + ], + "spans": [ + { + "bbox": [ + 304, + 149, + 525, + 203 + ], + "type": "text", + "content": "Yiwei Qin, Xuefeng Li, Haoyang Zou, Yixiu Liu, Shijie Xia, Zhen Huang, Yixin Ye, Weizhe Yuan, Hector Liu, Yuanzhi Li, et al. 2024. O1 replication journey: A strategic progress report-part 1. arXiv preprint arXiv:2410.18982." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 214, + 525, + 238 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 214, + 525, + 238 + ], + "spans": [ + { + "bbox": [ + 304, + 214, + 525, + 238 + ], + "type": "text", + "content": "QwenTeam. 2024. Qvq: To see the world with wisdom. Technical report, Alibaba." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 248, + 525, + 302 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 248, + 525, + 302 + ], + "spans": [ + { + "bbox": [ + 304, + 248, + 525, + 302 + ], + "type": "text", + "content": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In ICML." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 313, + 525, + 379 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 313, + 525, + 379 + ], + "spans": [ + { + "bbox": [ + 304, + 313, + 525, + 379 + ], + "type": "text", + "content": "Machel Reid, Nikolay Savinov, Denis Teptyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv:2403.05530." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 304, + 390, + 525, + 445 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 390, + 525, + 445 + ], + "spans": [ + { + "bbox": [ + 304, + 390, + 525, + 445 + ], + "type": "text", + "content": "Wenhao Shi, Zhiqiang Hu, Yi Bin, Junhua Liu, Yang Yang, See-Kiong Ng, Lidong Bing, and Roy Ka-Wei Lee. 2024. Math-llava: Bootstrapping mathematical reasoning for multimodal large language models. arXiv preprint arXiv:2406.17294." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 304, + 455, + 525, + 500 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 455, + 525, + 500 + ], + "spans": [ + { + "bbox": [ + 304, + 455, + 525, + 500 + ], + "type": "text", + "content": "Hai-Long Sun, Da-Wei Zhou, Yang Li, Shiyin Lu, Chao Yi, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang, De-Chuan Zhan, et al. 2025a. Parrot: Multilingual visual instruction tuning. In ICML." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 304, + 510, + 525, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 510, + 525, + 555 + ], + "spans": [ + { + "bbox": [ + 304, + 510, + 525, + 555 + ], + "type": "text", + "content": "Hai-Long Sun, Da-Wei Zhou, De-Chuan Zhan, and Han-Jia Ye. 2025b. Pilot: A pre-trained model-based continual learning toolbox. SCIENCE CHINA Information Sciences, 68(4):147101." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 304, + 565, + 525, + 620 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 565, + 525, + 620 + ], + "spans": [ + { + "bbox": [ + 304, + 565, + 525, + 620 + ], + "type": "text", + "content": "Hai-Long Sun, Da-Wei Zhou, Hanbin Zhao, Le Gan, De-Chuan Zhan, and Han-Jia Ye. 2025c. Mos: Model surgery for pre-trained model-based class-incremental learning. In AAAI, volume 39, pages 20699-20707." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 304, + 631, + 525, + 696 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 631, + 525, + 696 + ], + "spans": [ + { + "bbox": [ + 304, + 631, + 525, + 696 + ], + "type": "text", + "content": "Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. 2023. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 304, + 708, + 525, + 773 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 708, + 525, + 773 + ], + "spans": [ + { + "bbox": [ + 304, + 708, + 525, + 773 + ], + "type": "text", + "content": "Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, et al. 2024. Cambrian-1: A fully open, vision-centric exploration of multimodal llms. arXiv preprint arXiv:2406.16860." + } + ] + } + ], + "index": 23 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [], + "page_size": [ + 595, + 841 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 769 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 69, + 72, + 289, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 72, + 289, + 116 + ], + "spans": [ + { + "bbox": [ + 69, + 72, + 289, + 116 + ], + "type": "text", + "content": "Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Mingjie Zhan, and Hongsheng Li. 2024a. Measuring multimodal mathematical reasoning with math-vision dataset. Preprint, arXiv:2402.14804." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 123, + 289, + 210 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 123, + 289, + 210 + ], + "spans": [ + { + "bbox": [ + 69, + 123, + 289, + 210 + ], + "type": "text", + "content": "Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. 2024b. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 218, + 289, + 261 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 218, + 289, + 261 + ], + "spans": [ + { + "bbox": [ + 69, + 218, + 289, + 261 + ], + "type": "text", + "content": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. arXiv:2201.11903." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 269, + 289, + 312 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 269, + 289, + 312 + ], + "spans": [ + { + "bbox": [ + 69, + 269, + 289, + 312 + ], + "type": "text", + "content": "Guowei Xu, Peng Jin, Li Hao, Yibing Song, Lichao Sun, and Li Yuan. 2024. Llava-o1: Let vision language models reason step-by-step. arXiv preprint arXiv:2411.10440." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 320, + 289, + 364 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 320, + 289, + 364 + ], + "spans": [ + { + "bbox": [ + 69, + 320, + 289, + 364 + ], + "type": "text", + "content": "Ankit Yadav, Lingqiao Liu, and Yuankai Qi. 2025. Exploring primitive visual measurement understanding and the role of output format in learning in vision-language models. arXiv preprint arXiv:2501.15144." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 69, + 371, + 289, + 424 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 371, + 289, + 424 + ], + "spans": [ + { + "bbox": [ + 69, + 371, + 289, + 424 + ], + "type": "text", + "content": "Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. 2023. Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11975-11986." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 69, + 433, + 289, + 498 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 433, + 289, + 498 + ], + "spans": [ + { + "bbox": [ + 69, + 433, + 289, + 498 + ], + "type": "text", + "content": "Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Peng Gao, et al. 2024a. Math-verse: Does your multi-modal llm truly see the diagrams in visual math problems? arXiv preprint arXiv:2403.14624." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 69, + 506, + 289, + 560 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 506, + 289, + 560 + ], + "spans": [ + { + "bbox": [ + 69, + 506, + 289, + 560 + ], + "type": "text", + "content": "Renrui Zhang, Xinyu Wei, Dongzhi Jiang, Ziyu Guo, Shicheng Li, Yichi Zhang, Chengzhuo Tong, Jiaming Liu, Aojun Zhou, Bin Wei, et al. 2024b. Mavis: Mathematical visual instruction tuning with an automatic data engine. arXiv preprint arXiv:2407.08739." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 69, + 568, + 289, + 655 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 568, + 289, + 655 + ], + "spans": [ + { + "bbox": [ + 69, + 568, + 289, + 655 + ], + "type": "text", + "content": "Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. 2024. Llamafactory: Unified efficient fine-tuning of " + }, + { + "bbox": [ + 69, + 568, + 289, + 655 + ], + "type": "inline_equation", + "content": "100+" + }, + { + "bbox": [ + 69, + 568, + 289, + 655 + ], + "type": "text", + "content": " language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Bangkok, Thailand. Association for Computational Linguistics." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 662, + 289, + 707 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 662, + 289, + 707 + ], + "spans": [ + { + "bbox": [ + 69, + 662, + 289, + 707 + ], + "type": "text", + "content": "Wenwen Zhuang, Xin Huang, Xiantao Zhang, and Jin Zeng. 2024. Math-puma: Progressive upward multimodal alignment to enhance mathematical reasoning. arXiv preprint arXiv:2408.08640." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 714, + 289, + 769 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 714, + 289, + 769 + ], + "spans": [ + { + "bbox": [ + 69, + 714, + 289, + 769 + ], + "type": "text", + "content": "Chengke Zou, Xingang Guo, Rui Yang, Junyu Zhang, Bin Hu, and Huan Zhang. 2024. Dynamath: A dynamic visual benchmark for evaluating mathematical reasoning robustness of vision language models. arXiv preprint arXiv:2411.00836." + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [], + "page_size": [ + 595, + 841 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 69, + 70, + 163, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 70, + 163, + 83 + ], + "spans": [ + { + "bbox": [ + 69, + 70, + 163, + 83 + ], + "type": "text", + "content": "A Related Work" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 105, + 291, + 429 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 105, + 291, + 429 + ], + "spans": [ + { + "bbox": [ + 69, + 105, + 291, + 429 + ], + "type": "text", + "content": "Multimodal Large Language Models. Multimodal Large Language Models (MLLMs) (Li et al., 2023; Liu et al., 2024a; Sun et al., 2025a; Wang et al., 2024b; Lu et al., 2024b; McKinzie et al., 2024; Sun et al., 2025c,b; Fu et al., 2025b; Dong et al., 2024) integrate vision encoders (Radford et al., 2021; Zhai et al., 2023) with LLMs (AI, 2024; Bai et al., 2023), endowing them with robust capabilities across a wide range of domains. These include general visual understanding(OpenAI, 2024a; Li et al., 2024), mathematical reasoning (Shi et al., 2024; Gao et al., 2023), and answering college-level questions (Chen et al., 2023), demonstrating their versatility in real-world tasks. The rapid advancements in open-source models have also spurred the development of proprietary models, such as GPT-4o (OpenAI, 2024a), Gemini (Team et al., 2023; Reid et al., 2024), Qwen2-VL-MAX (Wang et al., 2024b), and Claude3 (anthropic, 2024). 
These models have demonstrated remarkable performance in both evaluation benchmarks and practical applications, solidifying their position at the forefront of AI research and deployment." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 69, + 437, + 291, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 437, + 291, + 774 + ], + "spans": [ + { + "bbox": [ + 69, + 437, + 291, + 774 + ], + "type": "text", + "content": "Reasoning with MLLMs. Recent advancements in MLLMs have significantly enhanced performance in reasoning tasks across both text and multimodal scenarios (OpenAI, 2024b; DeepSeek-AI, 2024; QwenTeam, 2024; Peng et al., 2025). Current methods primarily rely on CoT (Wei et al., 2022) to train MLLMs for step-by-step reasoning. Data-driven approaches include MathLLaVA (Shi et al., 2024), which introduced the MathV360K dataset, and MAmmoTH-VL (Guo et al., 2024), which curates a large-scale multimodal CoT dataset in a scalable manner. Another line of research explores vision-text alignment. MAVIS (Zhang et al., 2024b) fine-tunes a math-specific vision encoder with curated caption data, while Math-PUMA (Zhuang et al., 2024) leverages the Kullback-Leibler (KL) divergence of next-token prediction distributions for modality alignment. In a different paradigm, MLLMs act as coordinators, utilizing external tools such as LLMs, web search engines, and computer programs for complex reasoning. Chameleon (Lu et al., 2023) orchestrates tool-call sequences, and Visual Sketchpad (Hu et al., 2024) enables models to generate visual sketches to aid reasoning." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 304, + 70, + 503, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 70, + 503, + 84 + ], + "spans": [ + { + "bbox": [ + 304, + 70, + 503, + 84 + ], + "type": "text", + "content": "B More Details of Reasoning Dataset" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 304, + 98, + 524, + 166 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 98, + 524, + 166 + ], + "spans": [ + { + "bbox": [ + 304, + 98, + 524, + 166 + ], + "type": "text", + "content": "In this section, we provide a detailed description of dynamic token truncation and reflection word pruning in the process of constructing the reasoning dataset. We also provide detailed information about the training data in Table 5." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 304, + 184, + 460, + 197 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 184, + 460, + 197 + ], + "spans": [ + { + "bbox": [ + 304, + 184, + 460, + 197 + ], + "type": "text", + "content": "B.1 Dynamic Token Truncation" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 304, + 206, + 526, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 206, + 526, + 449 + ], + "spans": [ + { + "bbox": [ + 304, + 206, + 526, + 449 + ], + "type": "text", + "content": "To further improve the dataset quality, we analyze the distribution of token lengths after the answer-centric rejection sampling. We find that many samples are close to the maximum token limit, and manual checks show that these long reasoning chains often have problems—such as logical errors, mistakes in multi-step calculations, and reliance on shortcuts that don't work in general cases (e.g., substituting specific values). 
Motivated by the correlation between extreme token lengths and compromised solution quality, we implement adaptive truncation thresholds to keep the answers within the 200-8000 token range. This dynamic filtering not only eliminates the invalid cases (overly verbose or terse responses) but also enhances the overall quality of the data. The final length distribution matches how human experts solve problems and keeps the important reasoning steps intact." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 304, + 468, + 450, + 481 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 468, + 450, + 481 + ], + "spans": [ + { + "bbox": [ + 304, + 468, + 450, + 481 + ], + "type": "text", + "content": "B.2 Reflection Word Pruning" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 304, + 491, + 526, + 774 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 491, + 526, + 774 + ], + "spans": [ + { + "bbox": [ + 304, + 491, + 526, + 774 + ], + "type": "text", + "content": "Our analysis reveals a critical flaw in distilled reasoning chains: excessive metacognitive loops caused by uncontrolled reflection markers (e.g., 'Alternatively,' 'Wait'), which led to performance degradation through infinite loops or ungrounded speculation. Term frequency analysis of reflection density shows a heavy-tailed distribution—" + }, + { + "bbox": [ + 304, + 491, + 526, + 774 + ], + "type": "inline_equation", + "content": "95\\%" + }, + { + "bbox": [ + 304, + 491, + 526, + 774 + ], + "type": "text", + "content": " of samples contained fewer than 10 reflection markers per chain, while " + }, + { + "bbox": [ + 304, + 491, + 526, + 774 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 304, + 491, + 526, + 774 + ], + "type": "text", + "content": " exhibited over 50 markers, with this extreme group strongly correlating to hallucination rates. To address this, we introduce a reflection token quota system that automatically prunes samples exceeding 25 reflection markers while retaining core reasoning logic using semantic-aware span detection. As shown in Figure 4, this approach significantly reduced infinite-loop instances in validation tasks while improving answer accuracy. The refined reflection pattern mirrors expert human problem-solving strategies, wherein targeted self-correction enhances, rather than disrupts, the continuity of the reasoning process." + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [], + "page_size": [ + 595, + 841 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 69, + 69, + 297, + 199 + ], + "blocks": [ + { + "bbox": [ + 69, + 69, + 297, + 199 + ], + "lines": [ + { + "bbox": [ + 69, + 69, + 297, + 199 + ], + "spans": [ + { + "bbox": [ + 69, + 69, + 297, + 199 + ], + "type": "image", + "image_path": "e6e818e208bce7e7184afc19e3effb07a641dfed7a0d213ac1fd5c32cde5e241.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 119, + 208, + 473, + 222 + ], + "lines": [ + { + "bbox": [ + 119, + 208, + 473, + 222 + ], + "spans": [ + { + "bbox": [ + 119, + 208, + 473, + 222 + ], + "type": "text", + "content": "Figure 7: The token and reflection word distribution of the long-chain reasoning dataset." 
+ } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 299, + 69, + 523, + 199 + ], + "blocks": [ + { + "bbox": [ + 299, + 69, + 523, + 199 + ], + "lines": [ + { + "bbox": [ + 299, + 69, + 523, + 199 + ], + "spans": [ + { + "bbox": [ + 299, + 69, + 523, + 199 + ], + "type": "image", + "image_path": "19734dcb34cce543e8463529115a85acd6bad9216134c3e1ed1179348600b169.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 88, + 261, + 271, + 406 + ], + "blocks": [ + { + "bbox": [ + 83, + 240, + 274, + 253 + ], + "lines": [ + { + "bbox": [ + 83, + 240, + 274, + 253 + ], + "spans": [ + { + "bbox": [ + 83, + 240, + 274, + 253 + ], + "type": "text", + "content": "Table 4: The detailed training hyperparameters." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 88, + 261, + 271, + 406 + ], + "lines": [ + { + "bbox": [ + 88, + 261, + 271, + 406 + ], + "spans": [ + { + "bbox": [ + 88, + 261, + 271, + 406 + ], + "type": "table", + "html": "
<table>
<tr><td>Config</td><td>SFT</td></tr>
<tr><td>Deepspeed</td><td>Zero3</td></tr>
<tr><td>Epoch</td><td>5</td></tr>
<tr><td>Warmup Ratio</td><td>0.1</td></tr>
<tr><td>Max Grad Norm</td><td>1.0</td></tr>
<tr><td>Optimizer</td><td>AdamW</td></tr>
<tr><td>Learning rate</td><td>2e-5</td></tr>
<tr><td>Learning rate scheduler</td><td>Cosine</td></tr>
<tr><td>Text max length</td><td>8192</td></tr>
<tr><td>Batch size per GPU</td><td>1</td></tr>
<tr><td>Gradient Accumulation Steps</td><td>4</td></tr>
<tr><td>GPU</td><td>64×H20-96G</td></tr>
<tr><td>Precision</td><td>Bf16</td></tr>
</table>
", + "image_path": "ed5183c30b072c3055cdbc52c33cbe7366f61b55f31d0f89626a003a8768d22c.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "bbox": [ + 68, + 426, + 145, + 439 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 426, + 145, + 439 + ], + "spans": [ + { + "bbox": [ + 68, + 426, + 145, + 439 + ], + "type": "text", + "content": "C Discussion" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 449, + 291, + 625 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 449, + 291, + 625 + ], + "spans": [ + { + "bbox": [ + 67, + 449, + 291, + 625 + ], + "type": "text", + "content": "In this work, we investigate the phenomenon of visual information forgetting in MLLMs during long-chain reasoning. Through comprehensive analysis and experiments, we show that as reasoning chains progressively lengthen, models exhibit a gradual deterioration in retaining visual inputs, ultimately undermining their multimodal reasoning capabilities and exacerbating hallucination issues. To address this challenge, we propose a take-along visual conditioning mechanism that enables models to dynamically revisit visual inputs during reasoning steps, thereby enhancing content fidelity throughout the inference process." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 626, + 291, + 775 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 626, + 291, + 775 + ], + "spans": [ + { + "bbox": [ + 67, + 626, + 291, + 775 + ], + "type": "text", + "content": "However, as illustrated in Figure 8, we acknowledge several limitations. First, for highly complex reasoning tasks requiring sophisticated analytical capabilities, simply increasing visual revisits proves insufficient. In contrast, it is crucial to enhance the model's inherent reasoning capacity. Second, our method assumes the availability of delayed visual processing, making it potentially unsuitable for real-time applications requiring instantaneous visual feedback, such as robotic navigation or time-sensitive decision-making scenarios." + } + ] + } + ], + "index": 7 + }, + { + "type": "table", + "bbox": [ + 309, + 272, + 517, + 354 + ], + "blocks": [ + { + "bbox": [ + 302, + 240, + 524, + 264 + ], + "lines": [ + { + "bbox": [ + 302, + 240, + 524, + 264 + ], + "spans": [ + { + "bbox": [ + 302, + 240, + 524, + 264 + ], + "type": "text", + "content": "Table 5: Details on the TVC's training data, which is derived from publicly available datasets." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 309, + 272, + 517, + 354 + ], + "lines": [ + { + "bbox": [ + 309, + 272, + 517, + 354 + ], + "spans": [ + { + "bbox": [ + 309, + 272, + 517, + 354 + ], + "type": "table", + "html": "
<table>
<tr><td>Datasets</td><td>Samples</td></tr>
<tr><td>MathV360K (Shi et al., 2024)</td><td>221K</td></tr>
<tr><td>Geo170K (Gao et al., 2023)</td><td>22K</td></tr>
<tr><td>LLaVA-OneVision (Li et al., 2024)</td><td>97K</td></tr>
<tr><td>Cambrian-1 (Tong et al., 2024)</td><td>1K</td></tr>
</table>
", + "image_path": "745a67a45e9a8fb99fc9a5e2ebcf5aebf3c4c0260658c3e43b3d580ec59cacbf.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 302, + 375, + 526, + 550 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 375, + 526, + 550 + ], + "spans": [ + { + "bbox": [ + 302, + 375, + 526, + 550 + ], + "type": "text", + "content": "Our work represents an initial exploration into mitigating visual forgetting in extended multimodal reasoning chains. We envision future research directions including: (1) Developing hybrid architectures that synergistically enhance both visual retention and intrinsic reasoning capabilities; (2) Investigating adaptive attention mechanisms for real-time multimodal applications; (3) Exploring curriculum learning strategies to progressively strengthen long-chain reasoning capacities. We hope this foundational study will inspire further advances in understanding and improving multimodal reasoning systems for complex real-world applications." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [], + "page_size": [ + 595, + 841 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 76, + 281, + 98, + 299 + ], + "blocks": [ + { + "bbox": [ + 76, + 281, + 98, + 299 + ], + "lines": [ + { + "bbox": [ + 76, + 281, + 98, + 299 + ], + "spans": [ + { + "bbox": [ + 76, + 281, + 98, + 299 + ], + "type": "image", + "image_path": "6a7493ff659617dcedca570aa164a80dbd4533998a3aa4bf6246b4c28549486d.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 101, + 277, + 220, + 491 + ], + "blocks": [ + { + "bbox": [ + 101, + 277, + 220, + 491 + ], + "lines": [ + { + "bbox": [ + 101, + 277, + 220, + 491 + ], + "spans": [ + { + "bbox": [ + 101, + 277, + 220, + 491 + ], + "type": "image", + "image_path": "679522b38765370c555efed06eb171b6a97b036167f248e17a8a2a3a6956f88b.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 220, + 564, + 373, + 576 + ], + "lines": [ + { + "bbox": [ + 220, + 564, + 373, + 576 + ], + "spans": [ + { + "bbox": [ + 220, + 564, + 373, + 576 + ], + "type": "text", + "content": "Figure 8: Qualitative Results of TVC." + } + ] + } + ], + "index": 34, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 76, + 502, + 101, + 523 + ], + "blocks": [ + { + "bbox": [ + 76, + 502, + 101, + 523 + ], + "lines": [ + { + "bbox": [ + 76, + 502, + 101, + 523 + ], + "spans": [ + { + "bbox": [ + 76, + 502, + 101, + 523 + ], + "type": "image", + "image_path": "e4da8a755a3bfc4d8b87c6a8b3c35c5ae1c52093de2983e4e008244ee89b6da6.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 103, + 505, + 215, + 520 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 103, + 505, + 215, + 520 + ], + "spans": [ + { + "bbox": [ + 103, + 505, + 215, + 520 + ], + "type": "text", + "content": "Which of the cubes is not identical to the unfolded net?" 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 103, + 521, + 210, + 528 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 103, + 521, + 210, + 528 + ], + "spans": [ + { + "bbox": [ + 103, + 521, + 210, + 528 + ], + "type": "text", + "content": "Choices: (A) A (B) B (C) C (D) D (E) E" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 249, + 279, + 502, + 301 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 249, + 279, + 502, + 301 + ], + "spans": [ + { + "bbox": [ + 249, + 279, + 502, + 301 + ], + "type": "text", + "content": "Let's see here. I've got this problem where there's an unfolded net of a cube, and then there are several cubes shown below with letters A through E. The question is asking which one of these cubes is not identical to the unfolded net. Okay, so I need to compare each cube to the net and see if they match or not." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 250, + 307, + 333, + 314 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 307, + 333, + 314 + ], + "spans": [ + { + "bbox": [ + 250, + 307, + 333, + 314 + ], + "type": "text", + "content": "Let me label the squares for clarity:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 250, + 318, + 289, + 327 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 318, + 289, + 327 + ], + "spans": [ + { + "bbox": [ + 250, + 318, + 289, + 327 + ], + "type": "text", + "content": "2. Back face (B)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 250, + 328, + 285, + 335 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 328, + 285, + 335 + ], + "spans": [ + { + "bbox": [ + 250, + 328, + 285, + 335 + ], + "type": "text", + "content": "3. Top face (T)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 250, + 336, + 295, + 343 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 336, + 295, + 343 + ], + "spans": [ + { + "bbox": [ + 250, + 336, + 295, + 343 + ], + "type": "text", + "content": "4. Bottom face (D)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 250, + 343, + 286, + 349 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 343, + 286, + 349 + ], + "spans": [ + { + "bbox": [ + 250, + 343, + 286, + 349 + ], + "type": "text", + "content": "5. Left face (L)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 250, + 349, + 290, + 356 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 349, + 290, + 356 + ], + "spans": [ + { + "bbox": [ + 250, + 349, + 290, + 356 + ], + "type": "text", + "content": "6. Right face (R)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 250, + 362, + 321, + 370 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 362, + 321, + 370 + ], + "spans": [ + { + "bbox": [ + 250, + 362, + 321, + 370 + ], + "type": "text", + "content": "Let's look at each cube option:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 250, + 370, + 470, + 383 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 370, + 470, + 383 + ], + "spans": [ + { + "bbox": [ + 250, + 370, + 470, + 383 + ], + "type": "text", + "content": "A) This is a cube with letters on its faces. Wait, no, it's a cube with colors, similar to the others. Wait, actually, all cubes have combinations of teal and yellow faces." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 250, + 383, + 392, + 390 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 383, + 392, + 390 + ], + "spans": [ + { + "bbox": [ + 250, + 383, + 392, + 390 + ], + "type": "text", + "content": "I need to carefully observe the color placement on each cube." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 250, + 391, + 273, + 396 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 391, + 273, + 396 + ], + "spans": [ + { + "bbox": [ + 250, + 391, + 273, + 396 + ], + "type": "text", + "content": "Option A" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 250, + 398, + 289, + 404 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 398, + 289, + 404 + ], + "spans": [ + { + "bbox": [ + 250, + 398, + 289, + 404 + ], + "type": "text", + "content": "- Front face: teal" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 250, + 406, + 293, + 412 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 406, + 293, + 412 + ], + "spans": [ + { + "bbox": [ + 250, + 406, + 293, + 412 + ], + "type": "text", + "content": "-Top face: yellow" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 250, + 412, + 289, + 418 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 412, + 289, + 418 + ], + "spans": [ + { + "bbox": [ + 250, + 412, + 289, + 418 + ], + "type": "text", + "content": "-Right face:teal" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 250, + 419, + 294, + 425 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 419, + 294, + 425 + ], + "spans": [ + { + "bbox": [ + 250, + 419, + 294, + 425 + ], + "type": "text", + "content": "- Left face: yellow" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 250, + 426, + 289, + 431 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 426, + 289, + 431 + ], + "spans": [ + { + "bbox": [ + 250, + 426, + 289, + 431 + ], + "type": "text", + "content": "-Back face:teal" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 250, + 433, + 301, + 439 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 433, + 301, + 439 + ], + "spans": [ + { + "bbox": [ + 250, + 433, + 301, + 439 + ], + "type": "text", + "content": "- Bottom face: yellow" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 250, + 446, + 273, + 452 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 446, + 273, + 452 + ], + "spans": [ + { + "bbox": [ + 250, + 446, + 273, + 452 + ], + "type": "text", + "content": "Option E:" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 250, + 453, + 295, + 459 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 453, + 295, + 459 + ], + "spans": [ + { + "bbox": [ + 250, + 453, + 295, + 459 + ], + "type": "text", + "content": "- Front: teal" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 250, + 460, + 280, + 465 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 460, + 280, + 465 + ], + "spans": [ + { + "bbox": [ + 250, + 460, + 280, + 465 + ], + "type": "text", + "content": "-Top: yellow" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 250, + 466, + 278, + 472 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 466, + 278, + 472 + ], + "spans": [ + { + "bbox": [ + 250, + 466, + 278, + 472 + ], + "type": "text", + "content": "-Right:teal" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 250, + 473, + 
280, + 479 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 473, + 280, + 479 + ], + "spans": [ + { + "bbox": [ + 250, + 473, + 280, + 479 + ], + "type": "text", + "content": "- Left: yellow" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 250, + 481, + 278, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 481, + 278, + 486 + ], + "spans": [ + { + "bbox": [ + 250, + 481, + 278, + 486 + ], + "type": "text", + "content": "-Back:teal" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 250, + 487, + 290, + 493 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 487, + 290, + 493 + ], + "spans": [ + { + "bbox": [ + 250, + 487, + 290, + 493 + ], + "type": "text", + "content": "- Bottom: yellow" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 250, + 494, + 376, + 500 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 494, + 376, + 500 + ], + "spans": [ + { + "bbox": [ + 250, + 494, + 376, + 500 + ], + "type": "text", + "content": "Comparing to the net: F(t), T(y), R(t), L(y), B(y), D(t)" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 250, + 502, + 411, + 507 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 502, + 411, + 507 + ], + "spans": [ + { + "bbox": [ + 250, + 502, + 411, + 507 + ], + "type": "text", + "content": "So, according to the net, B should be yellow, but in Option E, it's teal." + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 250, + 508, + 396, + 514 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 508, + 396, + 514 + ], + "spans": [ + { + "bbox": [ + 250, + 508, + 396, + 514 + ], + "type": "text", + "content": "Therefore, Option E differs from the net in the back face color." + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 250, + 515, + 371, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 515, + 371, + 521 + ], + "spans": [ + { + "bbox": [ + 250, + 515, + 371, + 521 + ], + "type": "text", + "content": "Hence, Option E is not identical to the unfolded net." + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 250, + 522, + 317, + 528 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 522, + 317, + 528 + ], + "spans": [ + { + "bbox": [ + 250, + 522, + 317, + 528 + ], + "type": "text", + "content": "Therefore, the answer is E." 
+ } + ] + } + ], + "index": 33 + } + ], + "discarded_blocks": [], + "page_size": [ + 595, + 841 + ], + "page_idx": 13 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13377/48689f30-a9be-48a0-a682-282cc19b9dc2_content_list.json b/data/2025/2503_13xxx/2503.13377/48689f30-a9be-48a0-a682-282cc19b9dc2_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7eab6f7aad37f89f7f9656142e701e74692944b5 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13377/48689f30-a9be-48a0-a682-282cc19b9dc2_content_list.json @@ -0,0 +1,3721 @@ +[ + { + "type": "text", + "text": "Time-R1: Post-Training Large Vision Language Model for Temporal Video Grounding", + "text_level": 1, + "bbox": [ + 173, + 121, + 825, + 174 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Ye Wang $^{1*}$ Ziheng Wang $^{1*}$ Boshen Xu $^{1*‡}$ Yang Du $^{1}$ Kejun Lin $^{1}$ Zihan Xiao $^{3}$ \nZihao Yue $^{1}$ Jianzhong Ju $^{2}$ Liang Zhang $^{1}$ Dingyi Yang $^{1}$ Xiangnan Fang $^{1}$ Zewen He $^{2}$ \nZhenbo Luo $^{2}$ Wenxuan Wang $^{1}$ Junqi Lin $^{2}$ Jian Luan $^{2}$ Qin Jin $^{1†}$", + "bbox": [ + 189, + 224, + 807, + 272 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{1}$ AIM3 Lab, Renmin University of China $^{2}$ MiLM Plus, Xiaomi Inc. $^{3}$ Independent Researcher Project Page: https://xuboshen.github.io/Time-R1/", + "bbox": [ + 323, + 282, + 694, + 327 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 459, + 362, + 537, + 378 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Temporal Video Grounding (TVG), the task of locating specific video segments based on language queries, is a core challenge in long-form video understanding. While recent Large Vision-Language Models (LVLMs) have shown early promise in tackling TVG through supervised fine-tuning (SFT), their abilities to generalize remain limited. To address this, we propose a novel post-training framework that enhances the generalization capabilities of LVLMs via reinforcement learning (RL). Specifically, our contributions span three key directions: (1) Time-R1: we introduce a reasoning-guided post-training framework via RL with verifiable reward to enhance the capabilities of LVLMs on the TVG task. (2) TimeRFT: we explore data-efficient post-training strategies on our curated RL-friendly dataset, which trains the model to progressively comprehend difficult samples, leading to better generalization. (3) TVGBench: we carefully construct a small yet comprehensive benchmark for LVLM evaluation, assessing 11 types of queries and featuring balanced distributions across both videos and queries. Extensive experiments demonstrate that Time-R1 achieves state-of-the-art performance across multiple downstream datasets using only 2.5K training data, while improving its general video understanding capabilities.", + "bbox": [ + 228, + 395, + 767, + 630 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 171, + 666, + 313, + 681 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Understanding long-form videos has long been a core ambition in computer vision [14, 24, 9]. A critical step toward this goal is Temporal Video Grounding (TVG) [15, 61], which enables models to localize video segments corresponding to natural language queries (e.g., \"Find the segment where a person walks into the living room\"). 
Such capability is fundamental for real-world applications, including smart home assistants [55, 16, 46] and video retrieval systems on online platforms [5, 3].", + "bbox": [ + 169, + 698, + 826, + 768 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Traditional TVG approaches adopt a feature-based paradigm, where pretrained models (e.g., CLIP [42], I3D [6]) extract text and video features, followed by task-specific grounding models [30, 22, 29]. However, these methods suffer from error accumulation due to imperfect pretrained features. To overcome these limitations, recent efforts have shifted toward end-to-end Large Vision-Language Models (LVLMs) [44, 60], which directly process long-form videos and text queries. Despite being pretrained on datasets $100 \\times$ larger than domain-specific benchmarks [46], LVLMs (with $7\\mathrm{B}+$ parameters) often underperform compared to much smaller feature-based models (e.g.,", + "bbox": [ + 169, + 773, + 826, + 872 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2503.13377v3 [cs.CV] 29 Jun 2025", + "bbox": [ + 22, + 277, + 60, + 717 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "† Corresponding author: Qin Jin; * Equal contribution, listed in alphabetical order; ‡ Project lead.", + "bbox": [ + 196, + 883, + 774, + 898 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "Preprint. Under review.", + "bbox": [ + 171, + 922, + 315, + 936 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/3d410d229d3a500cf3af047c76b60ce63002c798ae8c6e3f3a040537d00828a3.jpg", + "image_caption": [ + "TimeRFT Training training strategy, dataset", + "Time-R1 Framework data-efficient, effective, R1-like", + "TVGBench Evaluation \nata, 11 types, balanced distributions" + ], + "image_footnote": [], + "bbox": [ + 200, + 112, + 800, + 167 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/68a0bcca670a2e1de605d4cbe5acb8cf4520384516660cd871afe03ef7bcde10.jpg", + "image_caption": [ + "Improved Temporal Video Grounding (video length: 148s)" + ], + "image_footnote": [], + "bbox": [ + 210, + 185, + 781, + 236 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/ddbcfed6eab7989e24c9d085a94496876fb255a141cb877b193ccc30f3ed1fd4.jpg", + "image_caption": [ + "Question: Where is the event \"He took out cutting board\"." + ], + "image_footnote": [ + "Time-R1 thinking (ours): The event \"He took out cutting board\" occurs between 13.40 and 18.20 seconds. This is when the person is seen placing the cutting board on the counter." + ], + "bbox": [ + 205, + 246, + 794, + 318 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/ab0e03c7fcd44d74e7111cfd85238e02d3b90545d29358fa22e32d3e526193b5.jpg", + "image_caption": [ + "Improved Short Video QA (video length: 25s)" + ], + "image_footnote": [], + "bbox": [ + 202, + 366, + 450, + 393 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "(A) Y", + "bbox": [ + 205, + 411, + 228, + 424 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/11b7721a4fc8aae3ea951446ef33c233ace8be182dd0eff3f8fe2e9fd8f666f6.jpg", + "image_caption": [ + "Question: What letter is likely to be shown next?" 
+ ], + "image_footnote": [], + "bbox": [ + 272, + 410, + 313, + 426 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/045217fdd83a82b32d557cbe28bd70f580c2da2c89b5005e6eb6f4b289633e50.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 367, + 410, + 408, + 426 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Base model thinking:", + "text_level": 1, + "bbox": [ + 459, + 349, + 555, + 358 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "... Given this sequence, the next logical letter would be the next letter in the alphabet after 'F', which is 'G'. The letters 'D', 'E', 'F', and 'G' are now on the table, forming the sequence 'DEFG'. Therefore, the next letter to be shown is likely to be 'G'.", + "bbox": [ + 457, + 359, + 797, + 393 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Time-R1 thinking (ours):", + "text_level": 1, + "bbox": [ + 459, + 395, + 575, + 404 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The sequence of letters shown in the video is \"D\", \"E\", \"F\", and \"G\". This suggests a pattern of showing the next letter in the alphabet. The letters are being placed in a specific order, and the next logical letter after \"G\" would be \"H\".", + "bbox": [ + 457, + 404, + 795, + 431 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/41a71ed56113a83ea742f50f68837de75341378355cfe16e118d39d734ebc0ab.jpg", + "image_caption": [ + "Improved Long Video QA (video length: 59m)", + "Figure 1: Our contributions include a novel post-training framework for LVLMs via reinforcement learning, Time-R1; an RL fine-tuning strategy along with its associated training dataset, TimeRFT; and a new benchmark, TVGBench, for evaluating LVLMs on the TVG task. Our Time-R1 model not only achieves SoTA TVG performance, but also enhances performance on both short- and long-form multi-choice video question answering tasks." + ], + "image_footnote": [], + "bbox": [ + 200, + 455, + 447, + 494 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Question: What do the expanding red lines on the map in the first few minutes of the video stand for?", + "bbox": [ + 204, + 496, + 450, + 517 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "(A) The Yellow River (C) Du Fu's route to Xi'an", + "bbox": [ + 204, + 520, + 318, + 546 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "(B) The Silk Road \n(D) The Yangtze River", + "bbox": [ + 349, + 520, + 452, + 542 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Base model thinking:", + "text_level": 1, + "bbox": [ + 459, + 446, + 555, + 455 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "... The map is used to illustrate the historical journey of Du Fu, a famous Chinese poet, as indicated by the text \"Du Fu's route to Xi'an\" and the historical context provided by the narrator. Given this analysis, the expanding red lines on the map in the first few minutes of the video stand for Du Fu's route to Xi'an.... Time-R1 thinking (ours):", + "bbox": [ + 457, + 455, + 797, + 503 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "... 2. The map is shown with red lines expanding from a central point, which is likely Xi'an, a major city on the Silk Road. 3. The map is labeled with \"Silk Road\" and \"The Great Silk Road,\" confirming that the red lines represent the Silk Road. ...", + "bbox": [ + 459, + 503, + 797, + 541 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "9M parameters [22]). 
This raises a critical question: Why do LVLMs, despite their vast pretrained knowledge, fail to excel on TVG?", + "bbox": [ + 169, + 676, + 823, + 705 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We attribute the suboptimal performance of LVLMs to over-penalization of false negatives during supervised fine-tuning (SFT). For instance, when the ground truth timestamp is [2s, 4s], even when the model makes a reasonable prediction of timestamp [1.9s, 3.9s], the autoregressive loss would still be undesirably high. Such disproportionate penalties on reasonable predictions result in overfitting and poor generalization. While previous solutions have attempted to address this by introducing new timestamp tokens into the vocabulary [18, 17, 53] or by appending a regression head to predict timestamps [65], they often sacrifice the pretrained language capabilities of LLMs.", + "bbox": [ + 169, + 710, + 823, + 809 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Inspired by recent success in reinforcement learning (RL) for post-training LLMs [39, 1] with chain-of-thought (CoT) prompting, especially in domains with deterministic answers, such as code generation and mathematical reasoning, we explore whether RL can serve as an effective alternative for TVG. Unlike SFT, RL allows direct optimization of task-specific metrics (e.g., IoU), thereby reducing rigid penalties of autoregressive losses and encouraging plausible timestamp predictions. In this work, we present an RL-based framework, Time-R1, that effectively post-trains LVLMs for TVG and pushes the performance frontier. Our contributions include:", + "bbox": [ + 169, + 814, + 826, + 912 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 493, + 935, + 503, + 946 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/fe508b93644a4c1a251811d6357c99d02cdd5eae3ffec63d502390bd25cac1fb.jpg", + "image_caption": [ + "(a) VLP" + ], + "image_footnote": [], + "bbox": [ + 187, + 88, + 361, + 253 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/baa3e64aeacdf876500af8b7229d74eb3ddc81327f265d60a53c62ca18c4a6ea.jpg", + "image_caption": [ + "(b) SFT" + ], + "image_footnote": [], + "bbox": [ + 388, + 87, + 566, + 255 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/2cd0c3c3be979c379e8d23cc83ebce767a4565f06d233336e9ec0239188547cd.jpg", + "image_caption": [ + "(c) RL (ours)", + "Figure 2: Comparison of different approaches for the TVG task, including feature-based video-language pretraining (VLP) [37, 22], supervised fine-tuning (SFT) [41, 60], and RL (ours)." + ], + "image_footnote": [], + "bbox": [ + 578, + 88, + 787, + 253 + ], + "page_idx": 2 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- RL-based framework for temporal video grounding. We introduce Time-R1, a reasoning-enhanced post-training framework via RL with verifiable rewards, where the LVLM first generates chain-of-thought descriptions and then predicts timestamps. The post-training process is optimized using Generalized Reinforcement Policy Optimization (GRPO) with a novel reward function, incorporating both a structured template reward and a timestamp-aware tIoU reward.", + "- Time-aware reinforcement fine-tuning. We propose TimeRFT, a reinforcement fine-tuning strategy with dynamic hard sampling, which mines hard samples on a curated dataset and progressively selects low-IoU samples for multi-epoch training. 
To ensure stable reasoning and reduce hallucinations, we adopt a cold-start approach to generate CoT with video captions. To support RL-friendly training, we curate an RFT dataset with difficulty annotations on the TVG task.", + "- Comprehensive benchmark for LVLMs on TVG. Existing TVG benchmarks are designed for the large-scale evaluation of small models. Considering the inference speed bottlenecks and general-purpose role of LVLMs, we construct TVGBench, a compact yet comprehensive benchmark for TVG. We carefully balance the video distribution, query distribution, and design specific query semantics to ensure that the benchmark is well-suited for evaluating LVLMs.", + "- State-of-the-Art results and generalization. Compared with 7B LVLMs on the temporal video grounding task, our method outperforms all prior SFT-based methods with only 2.5K training data. After fine-tuning on downstream benchmarks like Charades [46] and ActivityNet [5], it surpasses many previous feature-based approaches. Furthermore, Time-R1 also improves the general video understanding on video QA benchmarks like MVBench [26] and VideoMME [13]." + ], + "bbox": [ + 169, + 329, + 826, + 606 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2 Related Works", + "text_level": 1, + "bbox": [ + 171, + 625, + 330, + 640 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Temporal video grounding. The TVG task [15, 3] aims to localize temporal segments in untrimmed long videos given natural language queries. Previous works can be broadly categorized into feature-based video-language pretraining and frame-based LVLM methods, as shown in Figure 2. Feature-based methods first extract video and language features using pre-trained encoders (e.g., I3D [6], EgoVLP [29], CLIP [42], BERT [12], etc.), then build timestamp prediction modules based on multimodal fused features. Existing methods differ mainly in their design of the multimodal fusion module and timestamp prediction module. For example, SnAG [37] adopts a late fusion strategy and regresses timestamps directly in a single forward pass without proposal generation. While effective, these approaches are fundamentally limited by the quality of the pretrained features. Recent efforts have shifted toward end-to-end frame-based methods by fine-tuning LVLMs using SFT with autoregressive losses [27, 44, 60, 19, 52, 18, 25]. For instance, TRACE [18] treats each event as a combination of timestamp, saliency score, and caption, and fine-tunes the LVLM to generate event sequences autoregressively. However, such methods still underperform on even simple TVG benchmarks like Charades [46], often lagging behind feature-based approaches. In this work, we propose a novel RL-based post-training framework that establishes new state-of-the-art performance for LVLMs on TVG tasks, even surpassing many feature-based methods.", + "bbox": [ + 169, + 656, + 826, + 878 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "RL in LLMs and LVLMs. RL is a foundational machine learning paradigm applied in diverse domains such as game playing [47], robotics [34], and increasingly, language and vision-language", + "bbox": [ + 169, + 883, + 823, + 912 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 493, + 935, + 503, + 946 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "models [39]. RL updates models by interacting with environments and maximizing reward signals. 
In recent years, RL has profoundly affected the field of LLM and LVLM post-training, which falls into two main categories: Reinforcement Learning with Human Feedback (RLHF) [40, 57] and Reinforcement Learning with Verifiable Reward (RLVR) [1, 39, 7]. Early works find that RLHF can align LLM to generate human preference data, which primarily reduces the safety risks in LLM and hallucination problems in LVLM. For example, RLHF-V [57] collects fine-grained pairs of incorrect and corrected captions and leverages direct preference optimization to optimize the model to generate correct captions, thus mitigating hallucinations. More recent works have explored RLVR in tasks with deterministic answers, which not only benefits mathematical problem solving and code generation in LLMs (e.g., DeepSeek-R1 [1]), but also enhances the generalization of LVLMs across a range of applications, such as visual grounding [32] and visual reasoning [48]. As a pioneer among open-source LLMs, DeepSeek-R1 [1] adopts GRPO to enhance reasoning capabilities by designing rule-based rewards that incorporate both reasoning templates and final answers. In the context of LVLMs, MM-Eureka [35] investigates multimodal image reasoning using GRPO, introducing an online filtering mechanism and a two-stage training strategy to stabilize the optimization process. However, existing approaches remain confined to language [1, 56], image understanding [7, 48, 32, 35], and short video understanding [64, 28]. It remains unclear whether and how reinforcement learning impacts long-form video understanding. To bridge this gap, we take a first step by introducing RLVR into LVLMs for the temporal video grounding task.", + "bbox": [ + 169, + 90, + 826, + 354 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3 Method", + "text_level": 1, + "bbox": [ + 171, + 373, + 272, + 388 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The TVG task aims to temporally localize video segments within long-form videos based on natural language queries. Given a video of duration $t$ seconds, which is represented as a sequence of $T$ frames $\\{x_{1},\\ldots ,x_{T}\\}$ , and a language query $q$ , the goal is to identify the temporal boundaries $[t_s,t_e]$ of the segment that best corresponds to $q$ , where $t_s,t_e\\in \\mathbb{R}^+$ . In this work, we introduce Time-R1, a framework designed to unleash the potential of LVLMs for the TVG task using RL. We first provide background on RL-based training for LLMs in Section 3.1, then detail the training procedure of TimeR1 in Section 3.2. Next, we describe specific training techniques used in TimeRFT in Section 3.3, and finally, we present the construction of our evaluation benchmark TVGBench in Section 3.4.", + "bbox": [ + 169, + 402, + 823, + 513 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1 Background of GRPO: RL for LLM", + "text_level": 1, + "bbox": [ + 171, + 530, + 465, + 545 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "As a pioneer among open-sourced R1-style LLMs, Deepseek-R1 [1] leverages GRPO to train the policy model $\\pi_{\\theta}$ (i.e., the LLM) to think before answering, making it particularly well-suited for tasks with well-defined answers, such as mathematical reasoning. In the GRPO framework, given an input question $p$ , the LLM samples $G$ candidate responses $o = \\{o_1,\\dots,o_G\\}$ , and a reward function $r(\\cdot)$ assigns a reward score to each response, yielding $\\{r(o_1),\\ldots ,r(o_G)\\}$ . 
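As an illustration of how these group scores are used, the following minimal Python sketch (ours, not the authors' released implementation; all names are illustrative) computes the group-normalized advantage terms that appear in the objective given below:

```python
from statistics import mean, pstdev

def group_normalized_advantages(rewards):
    """Map a group of scalar rewards {r(o_1), ..., r(o_G)} to zero-mean,
    unit-std scores, i.e. the (r - mean) / std terms used in Eq. (1)."""
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against a zero-variance group
    return [(r - mu) / sigma for r in rewards]

# Hypothetical group of G = 4 responses scored by the reward function r(.).
print(group_normalized_advantages([1.6, 0.3, 1.1, 0.0]))
```

In the objective, each response's normalized score is additionally weighted by the ratio of the current to the old policy probability, as written next.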
GRPO encourages the LLM to generate responses that maximize a weighted sum reward $R(o)$ , defined by:", + "bbox": [ + 169, + 556, + 823, + 641 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nR (o) = \\sum_ {i = 1} ^ {G} \\frac {\\pi_ {\\theta} \\left(o _ {i}\\right)}{\\pi_ {\\theta_ {\\mathrm {o l d}}} \\left(o _ {i}\\right)} \\cdot \\frac {r \\left(o _ {i}\\right) - \\operatorname {m e a n} \\left(\\left\\{r \\left(o _ {i}\\right) \\right\\} _ {i = 1} ^ {G}\\right)}{\\operatorname {s t d} \\left(\\left\\{r \\left(o _ {i}\\right) \\right\\} _ {i = 1} ^ {G}\\right)} \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 326, + 647, + 823, + 686 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\pi_{\\theta}(o)$ denotes the probability of LLM generating the response $o$ , and $\\pi_{\\theta_{\\mathrm{old}}}$ represents the LLM parameters from a recently optimized state. To ensure training stability and avoid large deviations from the original language model behavior, the final training objective incorporates a KL-divergence regularization term [1], penalizing divergence between $\\pi_{\\theta}$ and $\\pi_{\\mathrm{ref}}$ :", + "bbox": [ + 169, + 695, + 823, + 751 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\max _ {\\pi_ {\\theta}} \\mathbb {E} _ {o \\sim \\pi_ {\\theta_ {\\mathrm {o l d}}} (p)} [ R (o) - \\beta \\mathrm {D} _ {\\mathrm {K L}} (\\pi_ {\\theta} \\| \\pi_ {\\mathrm {r e f}}) ] \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 357, + 757, + 823, + 779 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $\\beta$ is a scaling coefficient. We omit the clipping operation for simplicity.", + "bbox": [ + 169, + 785, + 684, + 801 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2 Time-R1: RL for Temporal Video Grounding", + "text_level": 1, + "bbox": [ + 169, + 816, + 529, + 830 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Since the TVG task has defined answers and well-established evaluation metrics, RL can optimize LVLMs for task-specific performance through tailored reward design. To enhance interpretability and align with human-like reasoning, we additionally incorporate an explicit \"thinking process\" prior to timestamp prediction. This process encourages the model to produce contextualized video descriptions that support its final decision. We detail our reward modeling and training process below.", + "bbox": [ + 169, + 842, + 826, + 912 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 493, + 935, + 504, + 946 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Reward modeling. The reward $r_i$ plays a crucial role in guiding the model's learning objective. To encourage effective temporal grounding with an explicit reasoning process, we design a composite reward function comprising two components: the timestamp-aware Intersection over Union (IoU) $r_{\\mathrm{tIoU}}$ and the reasoning template reward $r_{\\mathrm{form}}$ .", + "bbox": [ + 169, + 90, + 823, + 148 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "- Timestamp-aware IoU reward $r_{\\mathrm{tIoU}}(\\cdot)$ . 
The TVG task primarily uses IoU [58] to evaluate the quality of predicted segments against the ground-truth $[t_s', t_e']$ , computed as:", + "bbox": [ + 169, + 152, + 823, + 183 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathrm {I o U} = \\frac {\\left[ t _ {s} , t _ {e} \\right] \\cap \\left[ t _ {s} ^ {\\prime} , t _ {e} ^ {\\prime} \\right]}{\\left[ t _ {s} , t _ {e} \\right] \\cup \\left[ t _ {s} ^ {\\prime} , t _ {e} ^ {\\prime} \\right]} \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 416, + 188, + 825, + 223 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "where $A \\cap B$ and $A \\cup B$ denote the union and intersection between sets A and B, respectively. Optimizing for the IoU inherently encourages the LVLM to produce predictions that fall within a permissible range of variation $\\epsilon$ , such that $t_{s \\text{ or } e}^{\\prime} - \\epsilon \\leq t_{s \\text{ or } e} \\leq t_{s \\text{ or } e}^{\\prime} + \\epsilon$ still yields high IoUs. This encourages the LVLM to focus more on the semantic understanding of the event within possible temporal boundaries, rather than rigidly requiring exact temporal alignment like SFT. However, standard IoU may fail to accurately reflect the quality of temporal alignment in certain scenarios. For example, when the ground truth span is [0, 30] (i.e., the full video), any prediction covering more than $30\\%$ of the video would result in an IoU greater than 0.3. A prediction like [10, 25] would yield an IoU of 0.5, which overestimates its quality despite incorrect timestamps. To address this issue, we introduce the timestamp-aware IoU (tIoU) as a corrective measure. tIoU augments the standard IoU with penalties on timestamp deviations, defined as:", + "bbox": [ + 169, + 227, + 826, + 380 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nr _ {\\mathrm {t I o U}} (o) = \\operatorname {I o U} \\cdot \\left(1 - \\frac {\\left| t _ {s} - t _ {s} ^ {\\prime} \\right|}{t}\\right) \\cdot \\left(1 - \\frac {\\left| t _ {e} - t _ {e} ^ {\\prime} \\right|}{t}\\right) \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 331, + 386, + 825, + 417 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "This modification penalizes predictions that deviate from the reference timestamps relative to the video duration $t$ . In the earlier example, the reward value changes from 0.5 (IoU) to 0.28 (tIoU), providing a more realistic signal for learning. Overall, tIoU acts as a stricter and more informative reward signal, encouraging the LVLM to develop a deeper temporal understanding of events, rather than relying on superficial shortcuts.", + "bbox": [ + 169, + 422, + 825, + 491 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "- Reasoning template reward $r_{\\mathrm{form}}(\\cdot)$ . In TVG, the video segments relevant to a textual query typically comprise only a small portion of the entire long video. For LVLMs, it is therefore suboptimal to directly predict timestamps without first engaging in a reasoning process to identify the relevant content. Instead, the model should allocate its computational capacity toward reasoning over visual and linguistic cues to better understand the temporal context before making predictions. For instance, given the query \"the man washes dishes\", reasoning that the person is likely in a kitchen can improve temporal localization. Such context-aware inference supports more accurate and semantically aligned predictions. 
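For reference, the timestamp-aware IoU reward defined in Eqs. (3)-(4) above can be written as the short Python sketch below. This is an illustrative re-implementation, not the released code, and it reproduces the worked example in which an IoU of 0.5 is corrected to a tIoU of roughly 0.28:

```python
def temporal_iou(pred, gt):
    """Temporal IoU of Eq. (3) between segments pred = (t_s, t_e) and gt = (t_s', t_e')."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def tiou_reward(pred, gt, duration):
    """Timestamp-aware IoU of Eq. (4): IoU scaled by start/end deviation penalties."""
    start_pen = 1.0 - abs(pred[0] - gt[0]) / duration
    end_pen = 1.0 - abs(pred[1] - gt[1]) / duration
    return temporal_iou(pred, gt) * start_pen * end_pen

# Worked example from the text: ground truth [0, 30] on a 30 s video, prediction [10, 25].
print(temporal_iou((10.0, 25.0), (0.0, 30.0)))                 # 0.5
print(round(tiou_reward((10.0, 25.0), (0.0, 30.0), 30.0), 2))  # 0.28
```

The reasoning-format component of the reward is introduced next.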
To encourage this behavior, we introduce a template-based reasoning reward, which incentivizes the model to generate intermediate reasoning steps (structured in a predefined format) prior to timestamp localization. The reasoning template reward requires the LVLM to structure its response like \"<...< $t_s$ to $t_e$ >\", formulated as:", + "bbox": [ + 169, + 492, + 826, + 645 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nr _ {\\text {f o r m}} (o) = \\left\\{ \\begin{array}{l} 0, \\text {i f} o \\text {h a s w r o n g f r o m a t} \\\\ 1, \\text {i f} o \\text {h a s c o r r e c t f r o m a t} \\end{array} \\right. \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 354, + 652, + 825, + 686 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The overall reward is the sum of the two:", + "bbox": [ + 171, + 693, + 444, + 707 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nr (o) = r _ {\\mathrm {t I o U}} (o) + r _ {\\text {f o r m}} (o) \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 403, + 715, + 825, + 732 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "GRPO training. The LVLM $\\mathcal{F}(\\cdot)$ takes the video frames $x_{1},\\ldots ,x_{t}$ and the language query $q$ as input and generates $G$ candidate responses $o_1,\\dots,o_G$ , where each response is computed as $o_i = \\mathcal{F}(x_1,\\ldots ,x_t;q)$ . The reward for each response is calculated using Equation 1, and the model is optimized with the GRPO objective in Equation 2. To focus learning on the reasoning and localization capabilities, we freeze the visual encoder and update only the parameters of the LLM during training.", + "bbox": [ + 169, + 744, + 826, + 814 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.3 TimeRFT: Time-Aware RL-Friendly Fine-Tuning", + "text_level": 1, + "bbox": [ + 169, + 830, + 558, + 845 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Due to the high computational cost associated with RL training, we explore data-efficient strategies to reduce sample requirements. To this end, we propose TimeRFT, which incorporates time-aware, RL-friendly dataset curation and fine-tuning techniques aimed at enhancing generalization while minimizing training overhead.", + "bbox": [ + 169, + 854, + 826, + 912 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 493, + 935, + 504, + 946 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "RL-friendly dataset curation. We construct the TimeRFT dataset by leveraging only TVG samples and assign a difficulty score to each sample based on the base model's performance. A small subset is then selected for subsequent RL training.", + "bbox": [ + 169, + 90, + 823, + 133 + ], + "page_idx": 5 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Source data collection. Our training videos are sourced from Internet video datasets including YT-Temporal [54], DiDeMo [3], QuerYD [38], InternVid [49], and HowTo100M [36]. We obtain grounding data with annotations from VTG-IT [17], TimeIT [44], TimePro [60], HTStep [2], and LongVid [27]. This process yields 339K temporal grounding samples.", + "- RFT data filtering. We propose a data selection strategy based on training difficulty to significantly reduce training costs while preserving strong generalization performance. 
Models trained only on easy samples (e.g., IoU $\\geq 0.7$ ) tend to overfit, whereas training on overly difficult samples (e.g., IoU = 0) often suffers from sparse reward signals, making it hard for the model to receive positive feedback. To strike a balance, we select samples of moderate difficulty that are more conducive to generalization during reinforcement fine-tuning. We first estimate a difficulty score for each sample based on the performance of the base model. For grounding tasks, difficulty is quantified using the IoU between the predicted and ground-truth temporal regions. We then filter out samples that are either too easy or too hard. Specifically, we sample a subset of data from a Gaussian distribution over the IoU axis centered at 0.3, resulting in a set of 2.5K moderately difficult samples for RL training." + ], + "bbox": [ + 169, + 138, + 826, + 335 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "RFT training strategy. For selected difficult samples, the model may struggle to learn them in a single pass. However, we argue that effectively mastering these challenging cases is essential for improving overall model performance. To this end, we employ a multi-epoch training approach combined with per-epoch sample filtering, allowing the model to repeatedly focus on harder samples and gradually improve its understanding.", + "bbox": [ + 169, + 339, + 823, + 409 + ], + "page_idx": 5 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Dynamic hard sampling. We adopt a multi-epoch training strategy coupled with per-epoch sample filtering to enhance learning from difficult examples. The model is trained over multiple epochs, and after each epoch, we exclude easy samples with an IoU greater than 0.7 that have become easy. This dynamic curriculum discourages overfitting on easy instances while ensuring consistent exposure to harder samples, ultimately promoting stronger generalization.", + "- Cold start fine-tuning with few CoT data. For smaller models (e.g., 3B parameters), we observe that directly training with RL to generate CoT responses often results in reasoning steps that are either unintelligible or hallucinated, which impairs answer quality. Additionally, the length of generated reasoning during early training stages is difficult to control, leading to an unstable training process. To address these issues, we introduce a cold-start fine-tuning strategy using a small set of CoT-formatted examples that encourage grounded reasoning aligned with video content. 
Specifically, we guide the model to produce structured sequential captions with associated timestamps with the template as:" + ], + "bbox": [ + 169, + 415, + 823, + 582 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "$<$ think $> < t_{s_1}$ to $t_{e_1}:C_1$ ; $t_{s_2}$ to $t_{e_2}:C_2 > < / \\text{think} > < \\text{answer} > t_s$ to $t_e < / \\text{answer} > (7)$ where $C_i$ represent captions corresponding to video segments $[t_{s_i}, t_{e_i}]$ , respectively.", + "bbox": [ + 169, + 583, + 823, + 617 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3.4 TVGBench: Evaluation Benchmark for LVLM on Temporal Video Grounding", + "text_level": 1, + "bbox": [ + 171, + 630, + 756, + 645 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Existing benchmarks for temporal video grounding either focus on large-scale datasets tailored for smaller models within specific domains (e.g., human activities in ActivityNet) or consist of small, limited test sets (e.g., the 2K home activity samples in Charades) typically used for LVLM evaluation due to their slower inference speed. However, these benchmarks fall short in capturing the evaluation needs of LVLMs, which, despite slower inference, exhibit strong generalization capabilities. To bridge this gap, we introduce TVGBench, a lightweight yet comprehensive evaluation benchmark specifically designed for assessing the performance of LVLMs on temporal video grounding tasks.", + "bbox": [ + 169, + 655, + 826, + 753 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Data sources. To ensure a comprehensive evaluation, we construct our TVGBench by curating samples from five public benchmarks with a balanced distribution of data sources: Charades-STA [46], ActivityNet-Captions [5], HiREST [59], EgoNLQ [16], and TaCoS [43].", + "bbox": [ + 169, + 758, + 826, + 801 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Benchmark statistics. We carefully balance the video duration, video domain, and query center, and construct query semantics in TVGBench to construct 800 instances, as shown in Figure 3.", + "bbox": [ + 169, + 806, + 823, + 837 + ], + "page_idx": 5 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Distribution statistics. Video durations in the dataset have a balanced range from short clips up to 3 minutes long. To ensure temporal diversity, the center timestamps of queries are designed to be approximately uniformly distributed across the entire span of each video.", + "- Query semantics. Since the original datasets do not provide semantic labels for queries, we manually define 11 semantic categories grouped under three major types: human, object, and" + ], + "bbox": [ + 169, + 842, + 823, + 912 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 493, + 936, + 504, + 946 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/49e9a7fef9722f608faf4d9fd45ba26622fb2a14c87bc1378173e16c8e2e9f1d.jpg", + "image_caption": [ + "Figure 3: Statistics of TVGBench. TVGBench comprises 11 types of queries covering aspects related to humans, objects, and environments. As illustrated in the figure on the right, the distributions of query center, video length, and dataset source are designed to be as balanced as possible. This balanced construction allows for a comprehensive evaluation of model performance across different dimensions, enabling fine-grained analysis along each axis during benchmarking." 
+ ], + "image_footnote": [], + "bbox": [ + 173, + 88, + 362, + 284 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/8ec672d067af753645e301703b46ab27cdeeb397f1273a933f68de77089f7d74.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 362, + 88, + 602, + 284 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/ae5ab713fcc4ca382be19915fba333312c734d0076d80ef1075081d5a192af17.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 602, + 88, + 823, + 284 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "environment. We leverage DeepSeek-V3 [11] to annotate the semantic type of each query and ensure approximate balance across these categories. While most categories are evenly represented, the Human Action Simple (HAS) category is slightly overrepresented due to inherent dataset bias (simple indoor actions are more common). In such cases, we prioritize achieving a balance across datasets from different domains while maintaining semantic diversity, accepting a skew in HAS.", + "bbox": [ + 169, + 395, + 823, + 465 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4 Experiments", + "text_level": 1, + "bbox": [ + 171, + 489, + 313, + 507 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We first present our experimental setup in Section 4.1. Then, we evaluate our model from three key perspectives: (1) Comparison with state-of-the-art methods in Section 4.2: We evaluate our model across multiple TVG benchmarks to assess its performance against existing approaches; (2) Ablation studies and analyses in Section 4.3: We examine the individual contributions of each component in our framework to better understand their roles in overall performance. We also compare RL and SFT strategies across TVG, short video QA, and long video QA tasks.", + "bbox": [ + 169, + 523, + 823, + 608 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.1 Experimental Setup", + "text_level": 1, + "bbox": [ + 171, + 628, + 352, + 643 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "**Benchmarks.** We evaluate our model on a wide range of benchmarks covering both temporal video grounding and general video understanding tasks, including: (1) Charades-STA [46] contains 6,672 long videos capturing indoor human activities. The official split for the TVG task includes 12,408 clip-query pairs for training and 3,720 for testing. (2) ActivityNet [5] comprises 20K long videos with an average of 3.65 clip-query pairs per video. Following previous work in fine-tuning setting [62, 22] for the TVG task, we use the standard dataset splits with 37,421 training, 17,505 validation, and 17,031 test samples. (3) MVBench [26] is a short video QA benchmark focused on temporal reasoning. It includes 4K QA pairs for 20 types of tasks. (4) TempCompass [31] assesses fine-grained temporal understanding with 410 short videos. We use all multi-choice QA tasks except for the video captioning task. (5) EgoSchema [33] features 5K egocentric video clips, each approximately 3 minutes long, with temporally demanding QA pairs. (6) VideoMME [13] is a general video QA benchmark covering diverse domains. It contains 2.7K QA samples over videos of varied lengths, ranging from 11 seconds to 1 hour. We use the long video split for evaluation.", + "bbox": [ + 169, + 656, + 826, + 835 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Implementation details. Unless otherwise specified, we use Qwen2.5-VL-7B [4] as the base model. 
To strike a balance between training efficiency and memory consumption, we sample video frames at 2 FPS and adaptively resize each video input to contain approximately 2.8 million pixels. For instance, a 50-second video yields 100 frames, each with a resolution of roughly $96 \\times 96 \\times 3$ . During the reinforcement fine-tuning phase, we train for 5 epochs using a batch size of 8 and select the final", + "bbox": [ + 169, + 840, + 826, + 912 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 493, + 935, + 503, + 946 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/275ffc7437d3a78479fde2a1b21389a3e2fa2483334aa6f8d08a7214dd4b07ae.jpg", + "table_caption": [ + "Table 1: Performance of temporal video grounding on Charades-STA, ActivityNet, and TVGBench. The methods marked in gray* represent fine-tuning on corresponding benchmarks, while those in black indicate zero-shot settings. We compare our Time-R1 against existing 7B open-source LVLMs, as well as state-of-the-art VLP models." + ], + "table_footnote": [], + "table_body": "
Type | Method | Charades-STA R1@0.3 / R1@0.5 / R1@0.7 | ActivityNet R1@0.3 / R1@0.5 / R1@0.7 | TVGBench R1@0.3 / R1@0.5 / R1@0.7
VLP | 2D-TAN* [63] | 57.3 / 45.8 / 27.9 | 60.4 / 43.4 / 25.0 | - / - / -
 | UniVTG* [30] | 72.6 / 60.2 / 38.6 | 56.1 / 43.4 / 24.3 | - / - / -
 | SSRN* [66] | - / 65.5 / 42.6 | - / 54.5 / 33.2 | - / - / -
 | SnAG* [37] | - / 64.6 / 46.2 | - / 48.6 / 30.6 | - / - / -
 | EaTR* [22] | - / 68.4 / 44.9 | - / 58.2 / 37.6 | - / - / -
 | Gemini-2.5-Pro [10] | - / - / - | - / - / - | 39.1 / 24.4 / 12.8
SFT | ChatVTG [41] | 52.7 / 33.0 / 15.9 | 40.7 / 22.5 / 9.4 | - / - / -
 | TimeChat [44] | - / 32.2 / 13.4 | 36.2 / 20.2 / 9.5 | 22.4 / 11.9 / 5.3
 | HawkEye [50] | 50.6 / 31.4 / 14.5 | 49.1 / 29.3 / 10.7 | - / - / -
 | VTimeLLM [21] | 51.0 / 27.5 / 11.4 | 44.0 / 27.8 / 14.3 | - / - / -
 | TimeSuite [60] | 69.9 / 48.7 / 24.0 | - / - / - | 31.1 / 18.0 / 8.9
 | VideoChat-Flash [27] | 74.5 / 53.1 / 27.6 | - / - / - | 32.8 / 19.8 / 10.4
 | TRACE [18] | - / 40.3 / 19.4 | - / 37.7 / 24.0 | 37.0 / 25.5 / 14.6
 | HawkEye* [50] | 72.5 / 58.3 / 28.8 | 55.9 / 34.7 / 17.9 | - / - / -
 | TimeSuite* [60] | 79.4 / 67.1 / 43.0 | - / - / - | - / - / -
RL | Time-R1 (ours) | 78.1 / 60.8 / 35.3 | 58.6 / 39.0 / 21.4 | 41.8 / 29.4 / 16.4
 | Time-R1 (ours)* | 82.8 / 72.2 / 50.1 | 73.3 / 55.6 / 34.0 | - / - / -
", + "bbox": [ + 187, + 155, + 808, + 380 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/873721d52378b8f0aedc1559ae1943111394a4e815f0185d8745fbb2e288dd0d.jpg", + "image_caption": [ + "Figure 4: Comparison between post-training paradigms across various tasks, including temporal video grounding, short video QA, and long video QA. Both \"SFT\" and \"RL\" full-finetune the LLM, while \"SFT-LoRA\" denotes finetuning the LLM with LoRA [20]. The \"Base\" is Qwen2.5-VL-7B." + ], + "image_footnote": [], + "bbox": [ + 173, + 393, + 823, + 551 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "checkpoint for evaluation. For fine-tuning on downstream benchmarks, we train for 2 epochs. All experiments are conducted on a cluster with $8 \\times$ NVIDIA A100 GPUs.", + "bbox": [ + 169, + 630, + 823, + 657 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Evaluation metrics. For TVG, following [44, 60], we adopt the “R1@m” evaluation protocol to compare with state-of-the-art models, which computes the percentage of samples where the top-1 predicted segment has an IoU greater than a threshold $m$ , with $m \\in \\{0.3, 0.5, 0.7\\}$ . For brevity, we also adopt mIoU, which calculates the average IoU on all testing data as an alternative metric. For video QA, we report accuracy as the evaluation metric.", + "bbox": [ + 169, + 664, + 823, + 736 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.2 Comparison with State-of-the-Art", + "text_level": 1, + "bbox": [ + 171, + 751, + 449, + 766 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We compare Time-R1 with state-of-the-art TVG methods, including both traditional video-language pre-training models (VLP) and recent large video-language models fine-tuned via SFT.", + "bbox": [ + 169, + 776, + 823, + 806 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Time-R1 surpasses SFT-based models in zero-shot settings. As shown in Table 1, in the zero-shot setting, Time-R1 demonstrates strong performance, outperforming SFT-based models that rely on large-scale supervision. Despite using only $2.5\\mathrm{K}$ samples for RL, Time-R1 achieves leading results across multiple benchmarks. For example, on Charades-STA, Time-R1 attains an R1@0.7 score of 35.3, outperforming VideoChat-Flash (27.6) and TimeSuite (24.0). On ActivityNet, it achieves R1@0.7 score of 21.4, surpassing VTimeLLM (14.3) and TimeChat (10.7). On TVGBench, it scores 16.4, outperforming TRACE (14.6) and Gemini-2.5-Pro (12.8).", + "bbox": [ + 169, + 811, + 825, + 909 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 493, + 935, + 503, + 946 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Time-R1* outperforms all SFT-based LVLMs and many traditional VLP-based models. Time-R1* consistently outperforms both traditional VLP-based and SFT-based models on the TVG task. On Charades-STA, it exceeds EaTR and fine-tuned TimeSuite by 3.4-7.1 percentage points across R1@0.3 to R1@0.7. Notably, Time-R1* surpasses TimeSuite despite using far fewer RL samples compared to TimeSuite's 349K SFT examples. On the more challenging ActivityNet dataset, Time-R1* also outperforms SSRN [66] and TRACE [18], achieving significant gains across key metrics.", + "bbox": [ + 169, + 90, + 486, + 272 + ], + "page_idx": 8 + }, + { + "type": "table", + "img_path": "images/d42802b4d80f26ee7294fbd12a8a7ce704f35b47311d772edd6be46bf317bb0a.jpg", + "table_caption": [ + "Table 2: Ablation of Time-R1-7B training. 
GF, ME, SF refers to Gaussian Filtering, Multi-EPOCH, and Sample Filtering per epoch, respectively." + ], + "table_footnote": [], + "table_body": "
Row | tIoU | GF | ME | SF | TVGBench R1@0.3 | R1@0.5 | R1@0.7
1 | ✗ | ✗ | ✗ | ✗ | 38.0 | 24.8 | 13.2
2 | ✓ | ✗ | ✗ | ✗ | 36.0 | 23.6 | 12.9
3 | ✗ | ✓ | ✗ | ✗ | 37.2 | 25.0 | 13.4
4 | ✗ | ✗ | ✓ | ✗ | 39.9 | 26.0 | 14.2
5 | ✓ | ✓ | ✗ | ✗ | 38.4 | 25.6 | 14.1
6 | ✓ | ✗ | ✓ | ✗ | 39.4 | 26.5 | 16.4
7 | ✓ | ✓ | ✓ | ✗ | 41.6 | 28.5 | 15.6
8 | ✓ | ✓ | ✓ | ✓ | 41.8 | 29.4 | 16.4
", + "bbox": [ + 506, + 143, + 810, + 260 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "4.3 Ablation Study", + "text_level": 1, + "bbox": [ + 171, + 287, + 318, + 301 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "We conduct a detailed ablation on the Time-R1-7B model to investigate the contribution of various training strategies.", + "bbox": [ + 169, + 311, + 823, + 340 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Utility of TimeRFT and Time-R1 components. As shown in Table 2, both Gaussian Filtering (GF) and Multi-Epoch training (ME) individually improve performance, with ME yielding a more substantial gain, improving from R1@0.7 of 13.2 in row 1 to 14.2 in row 4. Notably, the combination of tIoU supervision and ME (Row 6) leads to a significant boost across all metrics. As more components are added, GF and ME (Row 7), followed by Sample Filtering (SF) in Row 8, the performance continues to improve, ultimately reaching R1@0.5 of 29.4 and R1@0.7 of 16.4.", + "bbox": [ + 169, + 347, + 823, + 431 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Generalization of RL vs. SFT. As shown in Figure 4, when both RL and SFT perform full fine-tuning of the LLM using a small amount of data, SFT significantly degrades the model's generalization ability across all tasks, whereas RL consistently preserves generalization. While SFT can alleviate this issue by adopting LoRA-based fine-tuning (SFT-LoRA), RL still demonstrates stronger generalization and additionally improves performance on video QA tasks, whereas SFT-LoRA leads to a decline. For example, on ActivityNet, RL improves the mIoU from 16.3 to 29.2, while SFT degrades it to 15.4, and SFT-LoRA improves it to 25.9. On the VideoMMe video QA benchmark, RL increases the performance from 53.0 to 54.2, whereas SFT-LoRA decreases it to 51.7.", + "bbox": [ + 169, + 436, + 552, + 630 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Impact of cold start. As shown in Figure 5, cold start boosts the performance of the 3B model and reduces the number of thinking tokens for both models. We attribute this to the cold start's function to suppress hallucinations, which tend to be more prevalent in weaker models.", + "bbox": [ + 169, + 636, + 550, + 705 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/9024b0a012288071a2c51ac867ec685fd8c7521c85e00d3d26347c3a064034e0.jpg", + "image_caption": [ + "Figure 5: Impact of SFT-based cold start on IoU performance and thinking token count, with token counts marked by $\\star$ on dashed lines." + ], + "image_footnote": [], + "bbox": [ + 570, + 458, + 818, + 606 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "5 Conclusion", + "text_level": 1, + "bbox": [ + 171, + 724, + 302, + 739 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "In this work, we present Time-R1, a reinforcement learning based post-training framework that significantly improves the generalization of Large Vision-Language Models for Temporal Video Grounding. Unlike prior methods that rely on large-scale supervised fine-tuning, Time-R1 leverages a verifiable reward signal to unlock strong temporal reasoning from pretrained models using limited data. Our contributions include: (1) Time-R1, a reasoning-guided post-training framework that enhances TVG via RL; (2) TimeRFT, a curated dataset and training strategy that fosters temporal grounding; (3) TVGBench, a small yet comprehensive benchmark for evaluating LVLMs on TVG. 
Extensive experiments show that Time-R1 achieves SoTA performance across TVG benchmarks in both zero-shot and fine-tuned settings, surpassing prior LVLMs and traditional VLP-based models, while also improving general video understanding. We hope this work inspires future directions in data-efficient and generalizable video-language understanding via reinforcement learning.", + "bbox": [ + 169, + 756, + 826, + 909 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 493, + 935, + 504, + 946 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 173, + 89, + 269, + 104 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025. 2, 4", + "[2] Triantafyllos Afouras, Effrosyni Mavroudi, Tushar Nagarajan, Huiyu Wang, and Lorenzo Torresani. Ht-step: Aligning instructional articles with how-to videos. Advances in Neural Information Processing Systems, 36:50310-50326, 2023. 6", + "[3] Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. Localizing moments in video with natural language. In Proceedings of the IEEE international conference on computer vision, pages 5803-5812, 2017. 1, 3, 6", + "[4] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-v1 technical report. arXiv preprint arXiv:2502.13923, 2025.7", + "[5] Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 961-970, 2015. 1, 3, 6, 7", + "[6] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6299-6308, 2017. 1, 3", + "[7] Liang Chen, Lei Li, Haozhe Zhao, Yifan Song, and Vinci. R1-v: Reinforcing super generalization ability in vision-language models with less than $3. https://github.com/Deep-Agent/R1-V, 2025.4", + "[8] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructlip: Towards general-purpose vision-language models with instruction tuning, 2023. 14", + "[9] Trevor Darrell and Alex Pentland. Space-time gestures. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 335-340. IEEE, 1993. 1", + "[10] Google DeepMind. Gemini 2.5: Our most intelligent ai model. Google DeepMind, 2025. Model ID: gemini-2.5-pro-preview-03-25. 8, 14, 15", + "[11] DeepSeek-AI. Deepseek llm: Scaling open-source language models with longtermism. arXiv preprint arXiv:2401.02954, 2024. 7", + "[12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171–4186, 2019. 
3", + "[13] Chaoyou Fu, Yuhan Dai, Yongdong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, et al. Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 3, 7", + "[14] Adrien Gaidon, Zaid Harchaoui, and Cordelia Schmid. Temporal localization of actions with actons. IEEE transactions on pattern analysis and machine intelligence, 35(11):2782-2795, 2013. 1", + "[15] Jiyang Gao, Chen Sun, Zhenheng Yang, and Ram Nevatia. Tall: Temporal activity localization via language query. In Proceedings of the IEEE international conference on computer vision, pages 5267-5275, 2017. 1, 3", + "[16] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18995-19012, 2022. 1, 6", + "[17] Yongxin Guo, Jingyu Liu, Mingda Li, Dingxin Cheng, Xiaoying Tang, Dianbo Sui, Qingbin Liu, Xi Chen, and Kevin Zhao. Vtg-llm: Integrating timestamp knowledge into video llms for enhanced video temporal grounding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 3302–3310, 2025. 2, 6", + "[18] Yongxin Guo, Jingyu Liu, Mingda Li, Qingbin Liu, Xi Chen, and Xiaoying Tang. Trace: Temporal grounding video llm via causal event modeling. arXiv preprint arXiv:2410.05643, 2024. 2, 3, 8, 9, 14, 15", + "[19] Tanveer Hannan, Md Mohaiminul Islam, Jindong Gu, Thomas Seidl, and Gedas Bertasius. Revisionlmm: Recursive vision-language model for temporal grounding in hour-long videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 3" + ], + "bbox": [ + 173, + 114, + 826, + 911 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[20] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuzhhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. *ICLR*, 1(2):3, 2022. 8, 14", + "[21] Bin Huang, Xin Wang, Hong Chen, Zihan Song, and Wenwu Zhu. Vtimellm: Empower llm to grasp video moments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14271-14280, 2024. 8", + "[22] Jinhyun Jang, Jungin Park, Jin Kim, Hyeongjun Kwon, and Kwanghoon Sohn. Knowing where to focus: Event-aware transformer for video grounding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13846-13856, 2023. 1, 2, 3, 7, 8", + "[23] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023. 15", + "[24] Ivan Laptev and Patrick Pérez. Retrieving actions in movies. In 2007 IEEE 11th International Conference on Computer Vision, pages 1-8. IEEE, 2007. 1", + "[25] Jiaze Li, Yaya Shi, Zongyang Ma, Haoran Xu, Feng Cheng, Huihui Xiao, Ruiwen Kang, Fan Yang, Tingting Gao, and Di Zhang. imove: Instance-motion-aware video understanding. 
arXiv preprint arXiv:2502.11594, 2025.3", + "[26] Kunchang Li, Yali Wang, Yinan He, Yizhuo Li, Yi Wang, Yi Liu, Zun Wang, Jilan Xu, Guo Chen, Ping Luo, et al. Mvbench: A comprehensive multi-modal video understanding benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22195-22206, 2024. 3, 7", + "[27] Xinhao Li, Yi Wang, Jiashuo Yu, Xiangyu Zeng, Yuhan Zhu, Haian Huang, Jianfei Gao, Kunchang Li, Yinan He, Chenting Wang, Yu Qiao, Yali Wang, and Limin Wang. Videochat-flash: Hierarchical compression for long-context video modeling. arXiv preprint arXiv:2501.00574, 2024. 3, 6, 8, 14, 15", + "[28] Zhenyi Liao, Qingsong Xie, Yanhao Zhang, Zijian Kong, Haonan Lu, Zhenyu Yang, and Zhijie Deng. Improved visual-spatial reasoning via r1-zero-like training. arXiv preprint arXiv:2504.00883, 2025. 4", + "[29] Kevin Qinghong Lin, Jinpeng Wang, Mattia Soldan, Michael Wray, Rui Yan, Eric Z Xu, Difei Gao, Rong-Cheng Tu, Wenzhe Zhao, Weijie Kong, et al. Egocentric video-language pretraining. Advances in Neural Information Processing Systems, 35:7575-7586, 2022. 1, 3", + "[30] Kevin Qinghong Lin, Pengchuan Zhang, Joya Chen, Shraman Pramanick, Difei Gao, Alex Jinpeng Wang, Rui Yan, and Mike Zheng Shou. Univtg: Towards unified video-language temporal grounding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2794–2804, 2023. 1, 8", + "[31] Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, Lei Li, Sishuo Chen, Xu Sun, and Lu Hou. Tempcompass: Do video llms really understand videos? arXiv preprint arXiv:2403.00476, 2024. 7", + "[32] Ziyu Liu, Zeyi Sun, Yuhang Zang, Xiaoyi Dong, Yuhang Cao, Haodong Duan, Dahua Lin, and Jiaqi Wang. Visual-rft: Visual reinforcement fine-tuning. arXiv preprint arXiv:2503.01785, 2025. 4", + "[33] Karttikeya Mangalam, Raiymbek Akshulakov, and Jitendra Malik. Egoschema: A diagnostic benchmark for very long-form video language understanding. Advances in Neural Information Processing Systems, 36:46212-46244, 2023. 7", + "[34] Gabriel B Margolis and Pulkit Agrawal. Walk these ways: Tuning robot control for generalization with multiplicity of behavior. In Conference on Robot Learning, pages 22-31. PMLR, 2023. 3", + "[35] Fanqing Meng, Lingxiao Du, Zongkai Liu, Zhixiang Zhou, Quanfeng Lu, Daocheng Fu, Botian Shi, Wenhai Wang, Junjun He, Kaipeng Zhang, et al. Mm-eureka: Exploring visual aha moment with rule-based large-scale reinforcement learning. arXiv preprint arXiv:2503.07365, 2025. 4", + "[36] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2630-2640, 2019. 6", + "[37] Fangzhou Mu, Sicheng Mo, and Yin Li. Snag: Scalable and accurate video grounding. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 18930-18940, 2024. 3, 8", + "[38] Andreea-Maria Oncescu, Joao F Henriques, Yang Liu, Andrew Zisserman, and Samuel Albanie. Queryd: A video dataset with high-quality text and audio narrations. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2265-2269. IEEE, 2021. 6", + "[39] OpenAI. Openai o1, 2024. 2, 4", + "[40] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 
Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744, 2022. 4" + ], + "bbox": [ + 173, + 90, + 826, + 911 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 490, + 935, + 506, + 946 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[41] Mengxue Qu, Xiaodong Chen, Wu Liu, Alicia Li, and Yao Zhao. Chatvtg: Video temporal grounding via chat with video dialogue large language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1847-1856, 2024. 3, 8", + "[42] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PmLR, 2021. 1, 3", + "[43] Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. Grounding action descriptions in videos. Transactions of the Association for Computational Linguistics, 1:25-36, 2013. 6", + "[44] Shuhuai Ren, Linli Yao, Shicheng Li, Xu Sun, and Lu Hou. Timechat: A time-sensitive multimodal large language model for long video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14313-14323, 2024. 1, 3, 6, 8, 14, 15", + "[45] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024. 16", + "[46] Gunnar A Sigurdsson, Gúl Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. Hollywood in homes: Crowdsourcing data collection for activity understanding. In Proceedings of the European Conference on Computer Vision (ECCV), 2016. 1, 3, 6, 7", + "[47] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815, 2017. 3", + "[48] Huajie Tan, Yuheng Ji, Xiaoshuai Hao, Minglan Lin, Pengwei Wang, Zhongyuan Wang, and Shanghang Zhang. Reason-rft: Reinforcement fine-tuning for visual reasoning. arXiv preprint arXiv:2503.20752, 2025.4", + "[49] Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinhao Li, Guo Chen, Xinyuan Chen, Yaohui Wang, et al. Internvid: A large-scale video-text dataset for multimodal understanding and generation. arXiv preprint arXiv:2307.06942, 2023. 6", + "[50] Yueqian Wang, Xiaojun Meng, Jianxin Liang, Yuxuan Wang, Qun Liu, and Dongyan Zhao. Hawkeye: Training video-text llms for grounding text in videos, 2024. 8", + "[51] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online, October 2020. 
Association for Computational Linguistics. 15", + "[52] Yongliang Wu, Xinting Hu, Yuyang Sun, Yizhou Zhou, Wenbo Zhu, Fengyun Rao, Bernt Schiele, and Xu Yang. Number it: Temporal grounding videos like flipping manga. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 3", + "[53] Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, and Cordelia Schmid. Vid2seq: Large-scale pretraining of a visual language model for dense video captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10714-10726, 2023. 2", + "[54] Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, and Cordelia Schmid. Vid2seq: Large-scale pretraining of a visual language model for dense video captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10714-10726, 2023. 6, 14", + "[55] Jingkang Yang, Shuai Liu, Hongming Guo, Yuhao Dong, Xiamengwei Zhang, Sicheng Zhang, Pengyun Wang, Zitang Zhou, Binzhu Xie, Ziyue Wang, Bei Ouyang, Zhengyu Lin, Marco Cominelli, Zhongang Cai, Yuanhan Zhang, Peiyuan Zhang, Fangzhou Hong, Joerg Widmer, Francesco Gringoli, Lei Yang, Bo Li, and Ziwei Liu. Egolife: Towards egocentric life assistant. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 1", + "[56] Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, et al. Dapo: An open-source llm reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025. 4, 14, 16", + "[57] Tianyu Yu, Yuan Yao, Haoye Zhang, Taiwen He, Yifeng Han, Ganqu Cui, Jinyi Hu, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun, et al. Rlhf-v: Towards trustworthy mllms via behavior alignment from fine-grained correctional human feedback. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13807-13816, 2024. 4" + ], + "bbox": [ + 173, + 90, + 826, + 912 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[58] Yitian Yuan, Xiaohan Lan, Xin Wang, Long Chen, Zhi Wang, and Wenwu Zhu. A closer look at temporal sentence grounding in videos: Dataset and metric. In Proceedings of the 2nd international workshop on human-centric multimedia analysis, pages 13-21, 2021. 5", + "[59] Abhay Zala, Jaemin Cho, Satwik Kottur, Xilun Chen, Barlas Oguz, Yashar Mehdad, and Mohit Bansal. Hierarchical video-moment retrieval and step-captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 23056-23065, 2023. 6", + "[60] Xiangyu Zeng, Kunchang Li, Chenting Wang, Xinhao Li, Tianxiang Jiang, Ziang Yan, Songze Li, Yansong Shi, Zhengrong Yue, Yi Wang, Yali Wang, Yu Qiao, and Limin Wang. Timesuite: Improving MLLMs for long video understanding via grounded tuning. In The Thirteenth International Conference on Learning Representations, 2025. 1, 3, 6, 8, 14, 15", + "[61] Hao Zhang, Aixin Sun, Wei Jing, and Joey Tianyi Zhou. Temporal sentence grounding in videos: A survey and future directions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(8):10443-10465, 2023. 1", + "[62] Songyang Zhang, Houwen Peng, Jianlong Fu, Yijuan Lu, and Jiebo Luo. 
Multi-scale 2d temporal adjacency networks for moment localization with natural language. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. 7", + "[63] Songyang Zhang, Houwen Peng, Jianlong Fu, and Jiebo Luo. Learning 2d temporal adjacent networks for moment localization with natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020. 8", + "[64] Xingjian Zhang, Siwei Wen, Wenjun Wu, and Lei Huang. Tinyllava-video-r1: Towards smaller lmms for video reasoning. arXiv preprint arXiv:2504.09641, 2025. 4", + "[65] Henghao Zhao, Ge-Peng Ji, Rui Yan, Huan Xiong, and Zechao Li. Videoexpert: Augmented llm for temporal-sensitive video understanding. arXiv preprint arXiv:2504.07519, 2025. 2", + "[66] Jiahao Zhu, Daizong Liu, Pan Zhou, Xing Di, Yu Cheng, Song Yang, Wenzheng Xu, Zichuan Xu, Yao Wan, Lichao Sun, and Zeyu Xiong. Rethinking the video sampling and reasoning strategies for temporal sentence grounding. In Findings of the Association for Computational Linguistics: EMNLP 2022, 2022. 8, 9" + ], + "bbox": [ + 173, + 90, + 826, + 470 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 490, + 935, + 506, + 946 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A Limitations 14", + "B Implementation Details 14", + "C Additional Analyses 15", + "D Ablation Studies 15", + "E Qualitative Result 16" + ], + "bbox": [ + 173, + 90, + 825, + 243 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "A Limitations", + "text_level": 1, + "bbox": [ + 173, + 265, + 307, + 280 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Despite achieving notable improvements on the TVG task, our approach still has several limitations. First, Time-R1 suffers from slower training and inference speeds, primarily due to its large model size and reliance on autoregressive text generation. Second, to manage GPU memory consumption, we use a relatively low frame sampling rate, which may result in the loss of fine-grained motion information across frames. Finally, Time-R1 currently cannot handle ultra-long videos, limiting its applicability in scenarios such as full-length movie understanding.", + "bbox": [ + 169, + 297, + 826, + 381 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "B Implementation Details", + "text_level": 1, + "bbox": [ + 171, + 402, + 405, + 419 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Details of Time-R1 framework. Inspired by DAPO [56], we adopt its token-level loss for training, rather than the sample-level loss used in GRPO. Apart from minor changes to the loss, all settings are identical to GRPO. Besides, we find that other techniques introduced in DAPO do not benefit the TVG task, thus aborting other techniques. We full-finetune the LLM parameters at every step, thus $\\frac{\\pi_{\\theta}(o_i)}{\\pi_{\\theta_{\\mathrm{old}}}(o_i)} = 1$ . The sample number $G$ is set to 8. The coefficient $\\beta$ is set to 0.04.", + "bbox": [ + 169, + 434, + 823, + 513 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Details of TimeRFT training. For RFT data filtering, we use a Gaussian distribution with a fixed variance of 0.2 while varying the mean to control sample selection. In our cold start phase, we construct 150 samples from our training data sources (e.g., YT-Temporal [54]) to fine-tune the LLM using LoRA [20], with a LoRA rank of 64 and a LoRA alpha of 128. All of our results are reported based on the final training epoch. 
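As a concrete sketch of this difficulty-based selection (Sec. 3.3), the snippet below draws a subset whose base-model IoU scores concentrate around the chosen mean (e.g., 0.3) with the fixed spread reported above; the function and field names are ours and purely illustrative, not the released code:

```python
import math
import random

def gaussian_weight(iou, mean=0.3, var=0.2):
    """Unnormalized Gaussian weight over a sample's difficulty score
    (the base model's IoU); the fixed variance of 0.2 follows the text."""
    return math.exp(-((iou - mean) ** 2) / (2.0 * var))

def select_rft_subset(samples, k, mean=0.3, var=0.2, seed=0):
    """Weighted sampling without replacement (Efraimidis-Spirakis keys):
    samples whose IoU sits near `mean` are kept with higher probability,
    so very easy and very hard items are mostly filtered out."""
    rng = random.Random(seed)
    def key(sample):
        w = max(gaussian_weight(sample["base_iou"], mean, var), 1e-12)
        return rng.random() ** (1.0 / w)
    return sorted(samples, key=key, reverse=True)[:k]

# Hypothetical toy pool; "base_iou" is the difficulty score from the base model.
pool = [{"query_id": i, "base_iou": i / 100.0} for i in range(100)]
subset = select_rft_subset(pool, k=10)
```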
For RL, we use a learning rate of 1e-6 with the AdamW optimizer with $\\beta_{1} = 0.9$ , $\\beta_{2} = 0.999$ , and a linear scheduler to decay the learning rate from 1e-6 to 0. We use a batch size of 8 with gradient accumulation set to 2. It requires 15 hours of training on 8 A100 GPUs.", + "bbox": [ + 169, + 517, + 826, + 616 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Details of our evaluation prompts. As shown in Figure 11, for temporal video grounding, the prompts used for training and testing are designed to encourage the model to reason before responding, following a template-based answer format. For VideoQA, we have two versions of prompts: one with CoT and one without CoT.", + "bbox": [ + 169, + 621, + 826, + 676 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Details of TVG baseline methods and implementations. We evaluate the baselines on TVGBench using their original best-performing setting, focusing primarily on video input and prompt design.", + "bbox": [ + 169, + 683, + 823, + 712 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- TimeChat [44] is built upon the InstructBLIP [8] architecture and introduces a video Q-former to encode video tokens. It operates at a resolution of 224 and samples 96 frames.", + "- TRACE [18] treats each combination of timestamp, saliency score, and caption as a discrete event and enables the LVLM to autoregressively generate event sequences. It operates at a higher resolution of 336 and samples 128 frames.", + "- TimeSuite [60] introduces a token shuffling strategy to compress long video token sequences and incorporates positional encoding to enhance visual understanding. It adopts a resolution of 224 and samples 128 frames.", + "- VideoChat-Flash [27] proposes a progressive visual token dropping mechanism within intermediate LLM layers to compress video inputs and extend the effective context length. It uses a resolution of 448 and samples video at 1 fps, with a maximum of 512 frames.", + "- Gemini-2.5-Pro [10]: Gemini-2.5-Pro is a state-of-the-art video understanding model capable of reasoning over videos exceeding one hour in length. It supports video question answering and temporal localization tasks." + ], + "bbox": [ + 169, + 717, + 823, + 911 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 13 + }, + { + "type": "table", + "img_path": "images/7d5c59021322c58df8bf320c59bec90239a153488bbb8d77d708ca5cadba2593.jpg", + "table_caption": [ + "Table 3: Comparison of different approaches on TVGBench for all types. We use mIoU as metric." + ], + "table_footnote": [], + "table_body": "
Method | EC | ES | HAC | HAP | HAS | HP | OA | OC | OEC | OES | OT
TimeChat [44] | 22.3 | 32.8 | 16.6 | 9.8 | 14.6 | 35.1 | 15.0 | 9.2 | 2.4 | 18.0 | 10.2
TimeSuite [60] | 27.3 | 39.6 | 14.2 | 12.8 | 24.9 | 39.6 | 14.6 | 13.9 | 6.7 | 32.6 | 14.3
TRACE [18] | 57.1 | 66.8 | 25.9 | 17.5 | 26.5 | 45.1 | 17.8 | 22.1 | 12.5 | 36.8 | 24.9
VideoChat-Flash [27] | 38.3 | 47.2 | 12.9 | 13.9 | 27.1 | 39.4 | 14.9 | 12.7 | 6.5 | 24.3 | 12.9
Gemini-2.5-Pro [10] | 46.7 | 45.3 | 21.1 | 27.6 | 30.9 | 39.9 | 23.0 | 31.1 | 14.1 | 35.9 | 17.8
Time-R1 (ours) | 49.3 | 65.3 | 28.3 | 24.3 | 39.3 | 56.2 | 26.3 | 21.8 | 9.0 | 32.7 | 21.8
", + "bbox": [ + 210, + 114, + 784, + 210 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Details of our implemented SFT baselines. We implemented two versions of SFT fine-tuning: one is full-parameter fine-tuning of the LLM (SFT), and the other is LoRA-based fine-tuning of the LLM (SFT-LoRA). For SFT-LoRA, the LoRA rank is set to 64, and the LoRA alpha is set to 128. Both configurations use the following settings: a learning rate of 2e-5, the AdamW optimizer with $\\beta_{1} = 0.9$ , $\\beta_{2} = 0.999$ , a weight decay of 0, the batch size of 8, and accumulation steps of 2. We fine-tune for 5 epochs on our 2.5K data, and use a linear scheduler to gradually decay the learning rate to 0.", + "bbox": [ + 169, + 238, + 826, + 324 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "C Additional Analyses", + "text_level": 1, + "bbox": [ + 171, + 342, + 380, + 359 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "In-depth comparisons of different approaches on TVGBench by semantic type. Table 3 provides a detailed performance comparison of various methods on the TVGBench across different semantic categories. Specifically, the abbreviations represent: EC (Environment Change), ES (Environment State), HAC (Human Action - Complex), HAP (Human Action - Procedural), HAS (Human Action - Simple), HP (Human Pose), OA (Object Attribute), OC (Object Counting), OEC (Object Existence - Complex), OES (Object Existence - Simple), and OT (Object Transition). Detailed definition and construction process can be found in Figure 15.", + "bbox": [ + 168, + 375, + 823, + 470 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Time-R1 demonstrates strong competitiveness across multiple semantic categories. First, particularly in the four tasks of HAC, HAS, HP, and OA, Time-R1 achieved the highest scores among all compared methods, showcasing its excellent ability in understanding the details of human actions and identifying object features. For example, Time-R1 achieves an mIoU of 56.2 on HP, which is 11.1 points higher than the second-best method, TRACE, with an mIoU of 45.1. On HAS, Time-R1 reaches 39.3, outperforming Gemini-2.5-Pro's 30.9 by 8.4 points. Second, in the three tasks of ES, EC, and OT, Time-R1 demonstrates strong performance comparable to the top model TRACE, with its performance being very close or immediately following. In the HAP task, Time-R1 also performs excellently, with its performance being in the same tier as Gemini-2.5-Pro. Lastly, all models still show a noticeable gap compared to Gemini in understanding complex instructions, such as in HAP, OC, and OEC. For example, in HAP, which involves procedural activity localization, Gemini achieves 27.6, while our model ranks second with a score of 24.3. In object counting, Gemini attains 31.1, substantially outperforming our model's 21.8. In summary, Time-R1 performs well on both non-human simple instructions and human-related instructions, but there is still room for improvement in complex instruction grounding and object-related grounding.", + "bbox": [ + 169, + 478, + 826, + 686 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Comparison of speed and accuracy between inference library transformers and vLLM. We observe that the inference speed of the implementation in the transformers [51] library is very slow. To address this, we implemented an accelerated inference version using vLLM [23] for all related 7 downstream benchmarks. 
For example, on TVGBench, the vLLM-based implementation requires only 502 seconds to infer 800 samples using 8 GPUs, whereas the transformers library implementation takes 2520 seconds. This achieves an overall speedup of $5 \\times$ .", + "bbox": [ + 169, + 691, + 828, + 776 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "D Ablation Studies", + "text_level": 1, + "bbox": [ + 171, + 795, + 349, + 811 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Ablation of different RFT data filtering strategies. As shown in Table 4, different data filtering strategy in the initial round affects the model's performance. First, appropriate Gaussian filtering outperforms both uniform and random filtering methods. Among the Gaussian filtering settings, a standard deviation of 0.3 yields the best results, followed by 0.5 and then 0.7. These findings suggest that incorporating moderately challenging samples during RFT helps improve the model's generalization capability more effectively than using either overly easy or extremely difficult examples.", + "bbox": [ + 169, + 827, + 826, + 912 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 14 + }, + { + "type": "table", + "img_path": "images/4eb42206cc115a134940c0c8f85bc3e915eb90a9e76df46500c7ac57c7b4b3f3.jpg", + "table_caption": [ + "Table 4: Ablation of data filtering strategies." + ], + "table_footnote": [], + "table_body": "
Method | R1@0.3 | R1@0.5 | R1@0.7 | mIoU
random | 39.4 | 26.5 | 16.4 | 27.4
gaussian (0.3) | 41.6 | 28.5 | 15.6 | 28.6
gaussian (0.5) | 40.6 | 28.2 | 16.0 | 28.3
gaussian (0.7) | 37.2 | 26.9 | 15.5 | 26.5
uniform | 40.4 | 28.5 | 15.9 | 28.3
", + "bbox": [ + 197, + 114, + 467, + 186 + ], + "page_idx": 15 + }, + { + "type": "table", + "img_path": "images/4724931c4e723d0c856e0f37d2c791ca8206227a3c166a359bca6ead1211f9ba.jpg", + "table_caption": [ + "Table 5: Ablation of KL and CoT in GRPO." + ], + "table_footnote": [], + "table_body": "
KL | CoT | R1@0.3 | R1@0.5 | R1@0.7 | mIoU
X | X | 40.4 | 29.1 | 14.9 | 28.1
✓ | X | 40.8 | 27.4 | 15.0 | 27.7
X | ✓ | 42.9 | 29.5 | 15.0 | 29.1
✓ | ✓ | 41.6 | 28.5 | 15.6 | 28.6
", + "bbox": [ + 503, + 114, + 794, + 186 + ], + "page_idx": 15 + }, + { + "type": "table", + "img_path": "images/e8c5e7da854f9c4f13a4726598b761fb62b9f195447c1d30fe0c3c9fb10d30fd.jpg", + "table_caption": [ + "Table 6: Comparison of the token-level loss design used by DAPO [56] and the sample-level loss design used by GRPO [45]." + ], + "table_footnote": [], + "table_body": "
Loss | Charades-STA | ActivityNet | TVGBench
R1@0.3 | R1@0.5 | R1@0.7 | mIoU | R1@0.3 | R1@0.5 | R1@0.7 | mIoU | R1@0.3 | R1@0.5 | R1@0.7 | mIoU
GRPO | 76.7 | 59.8 | 34.4 | 57.0 | 55.9 | 37.1 | 20.3 | 37.8 | 40.8 | 28.0 | 16.5 | 28.4
DAPO | 77.4 | 60.0 | 34.1 | 57.2 | 56.2 | 37.4 | 20.4 | 38.0 | 41.6 | 28.5 | 15.6 | 28.6
", + "bbox": [ + 194, + 237, + 802, + 292 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Ablation of KL and CoT during GRPO training. As shown in Table 5, incorporating CoT reasoning during training leads to improved performance compared to the No-CoT setting, suggesting that CoT enhances the model's temporal video grounding capabilities. When KL divergence is omitted (No-KL), performance slightly decreases under the No-CoT setting but unexpectedly improves when CoT is present. However, we find that in the No-KL+CoT setting, the model often fails to produce a thinking process, directly jumping to answers. In contrast, using KL divergence helps maintain more logical reasoning that is easier to follow. To balance performance and interpretability, we adopt a training setup that includes both KL and CoT.", + "bbox": [ + 169, + 315, + 826, + 426 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Comparison of tIoU and IoU during multi-epoch training. As shown in Figure 6, tIoU consistently outperforms standard IoU during both the early and late stages of training over the first 5 epochs. Notably, while tIoU steadily improves as training progresses, IoU shows a decline in performance by the fifth epoch. This highlights the advantage of using tIoU as a more stable and reliable reward for temporal video grounding.", + "bbox": [ + 169, + 431, + 826, + 503 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Ablation of sample filtering in multi-epoch training. As shown in Figure 7, applying sample filtering (SF) to remove simpler training samples yields consistent performance improvements across epochs. This suggests that easy samples with high IoU may introduce noise or reduce the effectiveness of learning, and filtering them helps focus the model on more informative and challenging instances.", + "bbox": [ + 169, + 508, + 826, + 566 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Ablation of DAPO & GRPO. The sample-level loss used by GRPO computes the loss by averaging over each individual sample. This approach leads to unequal loss contributions for tokens when dealing with CoTs of varying lengths. DAPO addresses this issue by employing a token-level loss. The underlying principle is that the token-level loss can effectively guide the model in the process of CoT generation, allowing it to learn useful patterns from CoTs of different lengths sampled during training. In Table 6, we compare these two loss designs. We empirically find that DAPO outperforms GRPO on the majority of metrics, thus, we adopt DAPO's loss design.", + "bbox": [ + 169, + 570, + 826, + 667 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Different Model Size. Table 7 presents a performance comparison of different model sizes. These results indicate that larger models achieve better zero-shot performance and continue to outperform smaller models after fine-tuning. These findings support the notion that scaling up model capacity enhances generalization and leads to superior results on the TVG tasks.", + "bbox": [ + 169, + 672, + 825, + 731 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "E Qualitative Result", + "text_level": 1, + "bbox": [ + 171, + 750, + 362, + 766 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Case study of temporal video grounding on Charades and ActivityNet. 
As shown in Figure 8, in the example above, given a relatively complex language instruction, Time-R1 demonstrates more accurate localization than all baselines, successfully capturing the initial event \"goes back to the pink bucket\" within the timestamp, while other SoTA models like Gemini-2.5-Pro fail. In the example below, the model accurately localizes the event, excluding \"a person is lying on the couch,\" and correctly distinguishes between sitting and lying, unlike other models, which either localize only a small segment (TimeSuite and VideoChat-Flash) or the entire segment (TRACE and Gemini-2.5-Pro).", + "bbox": [ + 169, + 779, + 826, + 878 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Case study of short video QA on VideoMME and TempCompass. As shown in Figures 9 and 10, Time-R1 demonstrates improved performance over the base model in tasks requiring positional", + "bbox": [ + 169, + 883, + 826, + 912 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 490, + 935, + 509, + 946 + ], + "page_idx": 15 + }, + { + "type": "table", + "img_path": "images/063fc5fb30f8f6a713569dbca0b207199a4a5c97244821d721e1a617c522cdda.jpg", + "table_caption": [ + "Table 7: Performance comparison of different model sizes." + ], + "table_footnote": [], + "table_body": "
Method | Charades-STA | ActivityNet | TVGBench
R1@0.3 | R1@0.5 | R1@0.7 | mIoU | R1@0.3 | R1@0.5 | R1@0.7 | mIoU | R1@0.3 | R1@0.5 | R1@0.7 | mIoU
Time-R1-3B | 74.6 | 53.1 | 26.0 | 51.2 | 40.0 | 21.0 | 8.7 | 23.2 | 33.5 | 21.0 | 10.5 | 21.7
Time-R1-3B* | 78.7 | 64.1 | 36.9 | 59.9 | 66.8 | 46.8 | 24.7 | 46.1 | - | - | - | -
Time-R1-7B | 78.1 | 60.8 | 35.5 | 58.1 | 58.1 | 39.0 | 21.4 | 40.5 | 41.8 | 29.4 | 16.4 | 29.2
Time-R1-7B* | 82.8 | 72.2 | 50.1 | 60.9 | 73.3 | 55.6 | 34.0 | 52.1 | - | - | - | -
", + "bbox": [ + 176, + 114, + 820, + 193 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/274d87af0a8919a0a742cf4bccd701b192201dc967697683fb1144c7bb9411a0.jpg", + "image_caption": [ + "Figure 6: Performance comparison of tIoU and IoU in multi-epoch training." + ], + "image_footnote": [], + "bbox": [ + 189, + 208, + 478, + 373 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/17ec16600700266ab49664ef8b1d7c88648951c7538d40c898495f116d77d77e.jpg", + "image_caption": [ + "Figure 7: Ablation of sample filtering in multi-epoch training." + ], + "image_footnote": [], + "bbox": [ + 516, + 208, + 807, + 373 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "judgment, scene storyline reasoning, and visual reasoning. For example, in Figure 9, Time-R1 correctly identifies that a car in the video is missing its right-front wheel, a detail that the base model fails to recognize. This reflects that Time-R1 likely possesses stronger video localization capabilities, which in turn enhance its visual reasoning ability. In Figure 12, we output a CoT when answering the QA task, providing some interpretability. This example shows that Time-R1's reasoning process is more concise, whereas the base model often reasons correctly but arrives at the wrong answer. This suggests that Time-R1's reasoning may be more effective in guiding the final answer, possibly benefiting from the outcome-driven RL of GRPO.", + "bbox": [ + 169, + 441, + 826, + 553 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Case study of long video QA on EgoSchema and VideoMME. Figure 13 presents a long egocentric video QA example focused on summarizing task steps. In the \"Hanging the Dress\" case, the base model fails to identify all key steps, while our Time-R1 model correctly selects the answer by generating a more accurate chain-of-thought (CoT). In Figure 14, the task involves identifying the primary tools and ingredients used in the video and understanding their respective contributions. An additional example in Figure 9 involves animated scene reasoning, where Time-R1 correctly infers, based on elements like the presence of a guitar, that the heroes defeated the enemies using the power of music, whereas the base model misinterprets the cause. Across these complex reasoning tasks involving long videos, Time-R1 consistently demonstrates superior performance compared to the base model.", + "bbox": [ + 169, + 559, + 826, + 698 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Illustration of our prompt at training and inference time. Figure 11 presents the prompts used for the temporal video grounding and video QA tasks at both training and inference time.", + "bbox": [ + 169, + 704, + 823, + 733 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Illustration of our prompt to annotate query semantics on TVGBench. Figure 15 presents the prompt used to annotate query semantics in the TVGBench dataset. The prompt is designed to guide the DeepSeek LLM in classifying each input query into one of 11 predefined semantic categories. To improve annotation quality, we refer to the construction of existing benchmarks and carefully select both positive and negative examples for each semantic type. 
These examples are chosen to include queries that are either easy or difficult for the model to answer, helping to refine the model's understanding and improve labeling accuracy.", + "bbox": [ + 169, + 738, + 826, + 837 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/4fedb324bb47101ec49cfc416e6525aa7ae990795778f2de0a5f618594a37ceb.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 209, + 113, + 393, + 167 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/5c83fb854d03c1084b8147fd2f0d207efd3035914b8e01683b1b2570e88116f5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 393, + 113, + 589, + 167 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/1eeca61a89d9ee552f580381979f06a57a5db62e051689c4db45a8e866a87540.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 589, + 113, + 761, + 167 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Question: Where is the event \"The man goes back to the pink bucket to wash another dark item clothing, he wrings it out in the pink bucket, throws it in the yellow bucket, then wrings his hands together a few times while staring at the camera\"?", + "bbox": [ + 174, + 172, + 818, + 196 + ], + "page_idx": 17 + }, + { + "type": "table", + "img_path": "images/e2cfcdbce816debcd62e90696e4a91f43b2b2807b46e8227fa64b5280fe4e669.jpg", + "table_caption": [ + "(video length: 74s)" + ], + "table_footnote": [], + "table_body": "
Ground Truth | 33.1s → 73.5s
TimeSuite | 20.0s → 30.0s
Gemini-2.5-Pro | 57.0s
VideoChat-Flash | 0.0s → 17.2s
Trace | 54.9s
Time-R1 (ours) | 35.0s → 75.0s
", + "bbox": [ + 178, + 209, + 821, + 287 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Time-R1 thinking (ours): The event \"The man goes back to the pink bucket to wash another dark item clothing, he wrings it out in the pink bucket, throws it in the yellow bucket, then wrings his hands together a few times while staring at the camera.\" occurs between 35.00 and 75.00 seconds in the video.", + "bbox": [ + 178, + 290, + 807, + 324 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/9968b6c45f4adeda2cacdb485fed70f4a7bc0887931405f750f4da5ed4176b15.jpg", + "image_caption": [ + "Figure 8: Temporal video grounding cases from Charades and ActivityNet benchmark. Time-R1 outperforms other SoTA models." + ], + "image_footnote": [], + "bbox": [ + 218, + 334, + 488, + 387 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/985cd9a1289f0cb3f62a495e911ced0c0cd918a5cdd7fc8cbd83d6f3cdbfeae1.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 488, + 335, + 785, + 387 + ], + "page_idx": 17 + }, + { + "type": "table", + "img_path": "images/29395b9f4c424bce5807ab10a9ce77f0ecf37d2a5de35000268c70bbfc270982.jpg", + "table_caption": [ + "Question: Where is the event \"a person is seen sitting on a couch\"?", + "(video length: 31s)" + ], + "table_footnote": [], + "table_body": "
Ground Truth | 0.0s → 12.8s
TimeSuite | 0.0s → 5.0s
Gemini-2.5-Pro | 0.0s → 30.0s
VideoChat-Flash | 0.0s → 5.0s
Trace | 0.0s → 31.6s
Time-R1 (ours) | 0.0s → 10.0s
", + "bbox": [ + 179, + 409, + 818, + 487 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Time-R1 thinking (ours): The event \"a person is seen sitting on a couch\" occurs at the beginning of the video. The person is seated on a couch, holding a phone, and appears to be engaged with it. This aligns with the initial frames of the video.", + "bbox": [ + 178, + 489, + 789, + 513 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/973df124e00e64d66c24e68d6bbc28517340ef9054637a51a8b5356881a59b55.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 187, + 623, + 357, + 700 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/faec5a13c0067d9b642efba9369da44a4764c136a36678a932e8dec98cf64bc5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 408, + 623, + 578, + 700 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/7263b40507e658da86f0c681610219abe6e1979ce94d39e0bea81772100821c5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 630, + 625, + 800, + 700 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "(video length: 38m) Question: What do heroes of legend use to defeat the enemy based on the video?", + "bbox": [ + 207, + 700, + 792, + 713 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/218347b27395c7ef9db786d9def3ca073d718495208f3fd53493ff1d52eb2704.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 209, + 713, + 225, + 722 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Their wisdom", + "bbox": [ + 225, + 713, + 299, + 722 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/f84c05262d317e51bd08a7be8e23d18718f79e49c0dc3cc7341e7a68b8d61ef2.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 210, + 722, + 225, + 729 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Their superpower", + "bbox": [ + 225, + 723, + 316, + 734 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/b3b0e585c0db345043984495ff909736d2924ea3bd232b0d7c13c7bbb8198251.jpg", + "image_caption": [ + "Figure 9: Case study on VideoMME (w/o CoT), demonstrating that Time-R1 achieves better performance than the base model." 
+ ], + "image_footnote": [], + "bbox": [ + 187, + 736, + 357, + 811 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "(B) A big robot", + "bbox": [ + 465, + 713, + 540, + 722 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/09928d2c575f65fba1f3260fc7ce829e74cec6bcf7107f4b746c46c51ba5284e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 465, + 722, + 480, + 729 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "4 Power of music", + "bbox": [ + 480, + 723, + 560, + 734 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/12d8fce88e4faed9a6d0609ccef584422d1473a3d74f01c1339a1abef3ffeb4d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 408, + 736, + 578, + 811 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/d309857026d6d143f04db4e4465e618ec7bb434a45191c8731c880da1c0f21fe.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 630, + 736, + 800, + 811 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "(video length: 17s) Question: What's wrong with this car?", + "bbox": [ + 205, + 811, + 545, + 823 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/95b3560e9f81ee0ecbf781caf46006fe72d005eca8cfe18df5624de0505a0dc9.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 205, + 823, + 223, + 832 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "It doesn't have a left rear wheel.", + "bbox": [ + 225, + 823, + 380, + 833 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/4a4078e7b7aee98137f556a4e4cde6e7b54c3cfb6912441b8a5d32a36feed0e8.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 205, + 832, + 223, + 840 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Its headlamp is broken.", + "bbox": [ + 225, + 833, + 338, + 844 + ], + "page_idx": 17 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(B) It doesn't have a right front wheel.", + "(D) Its right door is broken." + ], + "bbox": [ + 496, + 823, + 681, + 844 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "18", + "bbox": [ + 490, + 935, + 506, + 946 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/19eed5f5babcebf1b58cafcfcdc600fe94c6be4b8245ccf23ee8d61f4a6deb8e.jpg", + "image_caption": [ + "Figure 10: Case study on TempCompass (w/o CoT), demonstrating that Time-R1 achieves better performance than the base model." + ], + "image_footnote": [], + "bbox": [ + 197, + 132, + 370, + 208 + ], + "page_idx": 18 + }, + { + "type": "image", + "img_path": "images/3909e72e1f638024ea6b20d4b16e72de1e32666727cf506471db760bb6c9fc8b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 405, + 132, + 576, + 208 + ], + "page_idx": 18 + }, + { + "type": "image", + "img_path": "images/1cbc539d50fd51452839d0189ad47f5bacbad57e73f8ffc6a087f0e5bbfdf5ba.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 614, + 132, + 785, + 209 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "(video length: 10s)Question: Which sentence better captures the essence of the video?", + "bbox": [ + 186, + 209, + 689, + 220 + ], + "page_idx": 18 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(A) First, two hand are holding the clay pot and then three hands are holding it.", + "In the video, three hands are holding the clay pot, then two hands are holding the clay pot." 
+ ], + "bbox": [ + 186, + 222, + 645, + 250 + ], + "page_idx": 18 + }, + { + "type": "image", + "img_path": "images/d75c15a560b56adadfafa9355f8b6b0aaad45eebf96fa76dc432641d2e2bf666.jpg", + "image_caption": [ + "Figure 11: Illustration of prompts at both training and inference time." + ], + "image_footnote": [], + "bbox": [ + 197, + 251, + 367, + 325 + ], + "page_idx": 18 + }, + { + "type": "image", + "img_path": "images/f8ca333393071a5a60e1bc1552cf64f7116c991ea0edeeef783b03299fc0fe47.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 405, + 251, + 575, + 325 + ], + "page_idx": 18 + }, + { + "type": "image", + "img_path": "images/980926ad5d91bec503d9caba1d831d0c388eaffc5efd5b99e3790c6840b603f2.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 614, + 251, + 785, + 325 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "(video length: 14s)Question: What is the sequence of actions performed by the girl?", + "bbox": [ + 186, + 325, + 674, + 339 + ], + "page_idx": 18 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(A) Putting on clothes, jumping into water, taking off clothes", + "(B) Jumping into water, putting on clothes, taking off clothes", + "(C) Taking off clothes, putting on clothes, jumping into water" + ], + "bbox": [ + 186, + 340, + 483, + 381 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Temporal Video Grounding", + "text_level": 1, + "bbox": [ + 401, + 527, + 594, + 541 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "System Prompt", + "text_level": 1, + "bbox": [ + 174, + 547, + 272, + 559 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "You are a video analysis expert.", + "bbox": [ + 174, + 560, + 356, + 571 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Input Prompt", + "text_level": 1, + "bbox": [ + 176, + 571, + 263, + 584 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "To accurately pinpoint the event \"[EVENT]\" in the video, determine the precise time period of the event. Output your thought process within the tags, including analysis with either specific time ranges (xx.xx to xx.xx) in < timestep> tags. Then, provide the start and end times (in seconds, precise to two decimal places) in the format \"start time to end time\" within the tags. For example: \"12.54 to 17.83\".\"\"\"", + "bbox": [ + 174, + 584, + 815, + 645 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Video QA", + "text_level": 1, + "bbox": [ + 464, + 654, + 532, + 667 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "System Prompt", + "text_level": 1, + "bbox": [ + 174, + 670, + 272, + 681 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "You are a video analysis expert.", + "bbox": [ + 174, + 683, + 356, + 694 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Input Prompt (w/o CoT)", + "text_level": 1, + "bbox": [ + 174, + 695, + 328, + 707 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Carefully watch the video and pay attention to the cause and sequence of events, the detail and movement of objects and the action and pose of persons. Based on your observations, select the best option that accurately addresses the question. Provide the letter of the best option wrapped in parentheses within the tags. 
Question: {} {}\"\"", + "bbox": [ + 174, + 708, + 797, + 757 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Input Prompt (w/ CoT)", + "text_level": 1, + "bbox": [ + 174, + 757, + 320, + 768 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Carefully watch the video and pay attention to the cause and sequence of events, the detail and movement of objects and the action and pose of persons. Based on your observations, select the best option that accurately addresses the question. Output your thought process within the tags, including analysis with either specific timestamps (xx.xx) or time ranges (xx.xx to xx.xx) in < timestep> tags. Then, provide the letter of the best option wrapped in parentheses within the tags. Question: {} {}", + "bbox": [ + 174, + 768, + 815, + 830 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "19", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 18 + }, + { + "type": "image", + "img_path": "images/8bbda051c7ce9bd5fe9b6cb397b3b363d50c32c6adc1866bcc17300fdf32b49d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 174, + 140, + 387, + 234 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/8a70b3af9b1cef5820dbbe2185acb3e04c5212fbd35dc7b48b92d939b2140b3e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 388, + 140, + 606, + 234 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/b02c0aecb373fffd3ad6fe4410fe80d7d12502b93d13f4db9fc258d37fa09dd4.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 609, + 140, + 823, + 234 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "(video length: 8s) Question: Is the pineapple being pushed forward by a person?", + "bbox": [ + 174, + 239, + 712, + 255 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/fab3078a8b7c583cae33c013f97fbc814dca78d0c61a8b95825374e61adbf8ef.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 176, + 257, + 225, + 277 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/94f7d103b06790159b96771fa16bcda15048fc523193a7bdd6ea2c99b0d987f2.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 359, + 258, + 413, + 277 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Base model thinking:", + "text_level": 1, + "bbox": [ + 174, + 295, + 328, + 309 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "To determine if the pineapple is being pushed forward by a person, let's analyze the video:", + "bbox": [ + 174, + 311, + 691, + 325 + ], + "page_idx": 19 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. The video shows a pineapple on a pink surface.", + "2. A hand appears in the frame and interacts with the pineapple.", + "3. The hand lifts the pineapple, indicating that it is being moved by the person." + ], + "bbox": [ + 174, + 328, + 627, + 372 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Given this analysis, the pineapple is indeed being pushed forward by a person.", + "bbox": [ + 174, + 375, + 620, + 387 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Time-R1 thinking (ours):", + "text_level": 1, + "bbox": [ + 174, + 393, + 357, + 407 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "The video shows a hand reaching into the frame and grasping the pineapple. 
The hand then lifts the pineapple, indicating that the pineapple is not being pushed forward by a person but rather being moved by the hand.", + "bbox": [ + 174, + 410, + 816, + 439 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/f869e0da1398553a7a9496e63122abdaa9511f576a3870693cbfead808c33cd5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 174, + 448, + 390, + 542 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/aa73db1e7f126f232846a4a713ba79a6e8528ec63d50a068a539b99f93b8343e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 392, + 448, + 607, + 542 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/a342e247ff90b415a7531500e076e1acdf562b8cb12a3d127ca2b196c21df3c5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 609, + 449, + 823, + 542 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "(video length: 9s) Question: In which sequence does the man perform the actions?", + "bbox": [ + 174, + 551, + 723, + 566 + ], + "page_idx": 19 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(A) Gliding on the skateboard and then jumping", + "(B) Jumping and then tying his shoelaces", + "(C) Tying his shoelaces and then gliding on the skateboard", + "(D) Gliding on the skateboard and then tying his shoelaces" + ], + "bbox": [ + 176, + 570, + 511, + 635 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Base model thinking:", + "text_level": 1, + "bbox": [ + 174, + 652, + 326, + 667 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Let's analyze the sequence of actions performed by the man in the video:", + "bbox": [ + 174, + 670, + 589, + 683 + ], + "page_idx": 19 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. The man is initially seen standing on a skateboard.", + "2. He then proceeds to tie his shoelaces.", + "3. After tying his shoelaces, he glides on the skateboard." + ], + "bbox": [ + 174, + 686, + 500, + 729 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Given this sequence, the correct answer is: (D) gliding on the skateboard and then tying his shoelaces", + "bbox": [ + 174, + 732, + 751, + 744 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Time-R1 thinking (ours):", + "text_level": 1, + "bbox": [ + 174, + 751, + 357, + 765 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "The sequence of actions performed by the man in the video is as follows: first, he ties his shoelaces, and then he glides on the skateboard. 
This can be observed from the video where the man is seen tying his shoelaces before he starts gliding on the skateboard.", + "bbox": [ + 173, + 768, + 816, + 811 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Figure 12: Case study on TempCompass (w/ CoT), demonstrating that Time-R1 achieves better performance than the base model.", + "bbox": [ + 169, + 828, + 825, + 858 + ], + "page_idx": 19 + }, + { + "type": "page_number", + "text": "20", + "bbox": [ + 488, + 935, + 508, + 946 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/81279d7eae18fb1bce4cbee47004116069ae0b3e42a55e520c6dc6cf7e80f94e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 174, + 154, + 377, + 273 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/4c0320e18d7afaecdd60f0655f830871000e4ec263ba8222ed59654b57aea321.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 397, + 155, + 599, + 272 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/53c6db539aaaa3e755ac6983de5dc5d897ec26c76ca462857240666e21e22f9b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 620, + 155, + 823, + 272 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "(video length: 3m) Question: Summarize the steps taken by c to complete the tasks in this video. don't list every action narrated, but rather focus on major components and how they're executed.", + "bbox": [ + 176, + 286, + 821, + 340 + ], + "page_idx": 20 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(A) C folds the dress, places it on the ironing board, and then hangs it up.", + "(B) C folds the dress, places it on the ironing board, and then packs it.", + "(C) C folds the dress, places it on the ironing board, and then washes it.", + "(D) C folds the dress, places it on the ironing board, and then irons it and hangs it up.", + "(E) C folds the dress, places it on the ironing board, irons it, and then folds it again." + ], + "bbox": [ + 179, + 348, + 732, + 434 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Base model thinking:", + "text_level": 1, + "bbox": [ + 178, + 458, + 349, + 473 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Let's analyze the video step by step to determine the correct sequence of actions:", + "bbox": [ + 178, + 478, + 705, + 492 + ], + "page_idx": 20 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Folding the Dress: The video starts with C folding a blue dress on the ironing board. This is the first major action.", + "2. Placing the Dress on the Ironing Board: After folding the dress, C places it on the ironing board.", + "3. Ironing the Dress: C then irons the dress, ensuring it is smooth and free of wrinkles." + ], + "bbox": [ + 178, + 494, + 818, + 561 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "4. 
Hanging the Dress: Finally, C hangs the dress up, completing the task.", + "bbox": [ + 178, + 566, + 656, + 580 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Now, let's evaluate the options:", + "bbox": [ + 178, + 584, + 383, + 598 + ], + "page_idx": 20 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(A) C folds the dress, places it on the ironing board, and then hangs it up.", + "(B) C folds the dress, places it on the ironing board, and then packs it.", + "(C) C folds the dress, places it on the ironing board, and then washes it.", + "(D) C folds the dress, places it on the ironing board, and then irons it and hangs it up.", + "(E) C folds the dress, places it on the ironing board, irons it, and then folds it again. The correct sequence of actions matches option (D)." + ], + "bbox": [ + 179, + 602, + 728, + 705 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Time-R1 thinking (ours):", + "text_level": 1, + "bbox": [ + 178, + 729, + 382, + 744 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "The video shows C folding a dress, placing it on the ironing board, and then ironing it. The sequence of actions is clear and follows a logical order. The dress is folded, placed on the ironing board, and then ironed, which is a common sequence for completing a task like ironing a dress.", + "bbox": [ + 176, + 750, + 821, + 799 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Figure 13: Case study on EgoSchema (w/ CoT), demonstrating that Time-R1 achieves better performance than the base model.", + "bbox": [ + 169, + 823, + 826, + 851 + ], + "page_idx": 20 + }, + { + "type": "page_number", + "text": "21", + "bbox": [ + 488, + 935, + 506, + 946 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/265232fafe8299073c4c7c4ff4a77c04143193438124ab075d3728dc10ab911d.jpg", + "image_caption": [ + "Figure 14: Case study on EgoSchema (w/o CoT), demonstrating that Time-R1 achieves better performance than the base model." + ], + "image_footnote": [], + "bbox": [ + 179, + 147, + 372, + 258 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/328bb94b3b32a66cd62b1ed91525cc8163cc93fb06159121e58a9d4a4cb19b48.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 403, + 147, + 593, + 258 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/14920b340d6c0ba3c4d19e69ffea4b668b0f967e1acf371880dfc3a8a72bc09d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 625, + 147, + 816, + 258 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "(video length: 3m) Question: Identify the primary tools and ingredients c used throughout the video, and discuss how they contributed to the overall dessert preparation process.", + "bbox": [ + 176, + 258, + 820, + 315 + ], + "page_idx": 21 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(A) The primary tools and ingredients c used throughout the video were a spatula, a strainer, a dessert bowl, cocoa powder, and tissue paper. the spatula was used to spread the batter in the bowl. the strainer was used to sieve the cocoa powder into the bowl. the dessert bowl was used to hold the batter. the cocoa powder was used to add flavor and color to the batter. the tissue paper was used to wipe the edges of the bowl.", + "(B) The primary tools and ingredients c used throughout the video were a mixing bowl, a whisk, X a baking dish, flour, sugar, eggs, and butter. the mixing bowl was used to mix the batter. 
the whisk was used to beat the eggs. the baking dish was used to bake the cake. the flour was used to add structure to the cake. the sugar was used to add sweetness to the cake. the eggs were used to add moisture to the cake. the butter was used to add flavor and richness to the cake.", + "(C) The primary tools and ingredients c used throughout the video were a muffin tin, a spoon, chocolate chips, flour, sugar, eggs, and butter. the muffin tin was used to bake the muffins. the spoon was used to scoop the batter into the muffin tin. the chocolate chips were used to add flavor and color to the muffins. the flour was used to add structure to the muffins. the sugar was used to add sweetness to the muffins. the eggs were used to add moisture to the muffins. the butter was used to add flavor and richness to the muffins.", + "(D) The primary tools and ingredients c used throughout the video were a cake pan, a spatula, chocolate frosting, flour, sugar, eggs, and butter. The cake pan was used to bake the cake. The spatula was used to spread the frosting on the cake. The chocolate frosting was used to add flavor and color to the cake. The flour was used to add structure to the cake. The sugar was used to add sweetness to the cake. The eggs were used to add moisture to the cake. The butter was used to add flavor and richness to the cake.", + "(E) The primary tools and ingredients c used throughout the video were a mixing bowl, a whisk, a baking sheet, flour, sugar, eggs, and butter. the mixing bowl was used to mix the batter. the whisk was used to beat the eggs. the baking sheet was used to bake the cookies. the flour was used to add structure to the cookies. the sugar was used to add sweetness to the cookies. the eggs were used to add moisture to the cookies. the butter was used to add flavor and richness to the cookies." + ], + "bbox": [ + 178, + 320, + 821, + 811 + ], + "page_idx": 21 + }, + { + "type": "page_number", + "text": "22", + "bbox": [ + 488, + 935, + 508, + 946 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "## Task", + "text_level": 1, + "bbox": [ + 176, + 109, + 235, + 121 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Classify input queries into exactly one of the following categories based on their semantic content:", + "bbox": [ + 176, + 125, + 619, + 136 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "1. Human Action (Simple)", + "text_level": 1, + "bbox": [ + 176, + 138, + 313, + 148 + ], + "page_idx": 22 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Definition: Singular physical movements or basic interactions.", + "- Examples: - person opens a book over their head. - The person gets out some ginger. - who did I talk to in the shopping mall?" + ], + "bbox": [ + 176, + 151, + 766, + 174 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "2. Human Action (Complex)", + "text_level": 1, + "bbox": [ + 176, + 175, + 323, + 186 + ], + "page_idx": 22 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Definition: Single continuous event with intricate components or concurrent elements.", + "- Examples: - He is talking while several people are using rowing machines." + ], + "bbox": [ + 176, + 188, + 571, + 210 + ], + "page_idx": 22 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- One man wearing blue shirt wearing a jumping leg extension and another man wearing red pants play on a field.", + "- who did I interact with when I did activity of fixing camping tent?" 
+ ], + "bbox": [ + 243, + 212, + 751, + 234 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "3. Human Action (procedural)", + "text_level": 1, + "bbox": [ + 176, + 237, + 334, + 248 + ], + "page_idx": 22 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Definition: contains multiple sequential events with explicit temporal boundaries. contains multiple actions, each with a clear start and end.", + "- Examples: - The person procures a condiment from the pantry, takes a spoon from the drawer which he uses to scoop it into the pan, then returns the condiment to the pantry, places the spoon in the sink and again stirs the pan." + ], + "bbox": [ + 176, + 250, + 810, + 284 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "- The person takes out a spoon from the drawer, scoops some sugar into the glass, stirs it with the juice, and returns the package to the pantry.", + "bbox": [ + 176, + 286, + 777, + 308 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "- Negative Examples: - Then the man juices some lemons in a juicer: only one action", + "bbox": [ + 176, + 311, + 558, + 321 + ], + "page_idx": 22 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- She gets out a cutting board and knife: only one action", + "- He then finishes by doing tricks: only one action", + "- She removes bits of shell until there is a small hole: only one action" + ], + "bbox": [ + 279, + 323, + 584, + 357 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "4. Human Pose", + "text_level": 1, + "bbox": [ + 176, + 359, + 256, + 369 + ], + "page_idx": 22 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Definition: Static body positions or group configurations. Posture descriptors, positional prepositions", + "- Examples: - Several other people are in the background working out on the equipment." + ], + "bbox": [ + 176, + 372, + 635, + 395 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "- A young child is seen standing before a set of monkey bars.", + "bbox": [ + 243, + 396, + 514, + 407 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "5. Object Existence (Simple)", + "text_level": 1, + "bbox": [ + 176, + 409, + 323, + 421 + ], + "page_idx": 22 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Definition: Current location/status queries. Simple location prepositions.", + "- Examples: - Where is the tap?" + ], + "bbox": [ + 176, + 422, + 509, + 444 + ], + "page_idx": 22 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- where is the chopsticks?", + "- In what location did i see the blue tent?" + ], + "bbox": [ + 245, + 446, + 428, + 468 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "6. Object Existence (Complex)", + "text_level": 1, + "bbox": [ + 176, + 470, + 334, + 482 + ], + "page_idx": 22 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Definition: Queries about historical object positions changed by human actions, requiring temporal-action context (e.g., \"after/before [action]\").", + "- Examples: - Where was the spatula after I first used it?" + ], + "bbox": [ + 176, + 484, + 776, + 518 + ], + "page_idx": 22 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Where was the sieve before I picked it?", + "- what bolt did I pick?", + "- What mushroom did i chop" + ], + "bbox": [ + 245, + 520, + 426, + 554 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "7. 
Object Attribute", + "text_level": 1, + "bbox": [ + 176, + 556, + 274, + 568 + ], + "page_idx": 22 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Definition: Physical/abstract property inquiries. Property descriptors (color/size/material)", + "- Examples: - what material did I pick from the shelf?" + ], + "bbox": [ + 176, + 570, + 593, + 592 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "- what color is the toilet bin?", + "bbox": [ + 245, + 594, + 374, + 603 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "8. Object Counting", + "text_level": 1, + "bbox": [ + 176, + 606, + 276, + 617 + ], + "page_idx": 22 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Definition: Quantitative object presence queries. Numeric quantifiers, plural objects", + "- Examples: - how many tissue paper were on the floor?" + ], + "bbox": [ + 176, + 619, + 558, + 641 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "- how many rolls are in the tray", + "bbox": [ + 245, + 643, + 387, + 652 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "9. Object Transition", + "text_level": 1, + "bbox": [ + 176, + 656, + 282, + 666 + ], + "page_idx": 22 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Definition: State/position change confirmation. Transformation verbs, completion checks", + "- Examples: - The bulb is broken apart." + ], + "bbox": [ + 176, + 669, + 583, + 691 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "- Did I close fridge?", + "bbox": [ + 245, + 693, + 336, + 703 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "10. Environment Change", + "text_level": 1, + "bbox": [ + 176, + 705, + 307, + 715 + ], + "page_idx": 22 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Definition: Dynamic scene modifications. Transient elements, overlay content", + "- Examples: - video ends with clothes/captions scrolling down" + ], + "bbox": [ + 176, + 719, + 529, + 741 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "11. Environment State", + "text_level": 1, + "bbox": [ + 176, + 743, + 295, + 753 + ], + "page_idx": 22 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Definition: Persistent scene elements. Static overlays, permanent fixtures", + "- Examples: - Intro states 'Progression: Lisa's First Season'" + ], + "bbox": [ + 176, + 756, + 509, + 777 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "-Trend Routing Technology' logo appears", + "bbox": [ + 245, + 780, + 436, + 791 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Output Format", + "text_level": 1, + "bbox": [ + 176, + 794, + 305, + 806 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Return ONLY the exact category name from: [Human Action (Procedural), Human Action (Complex), Human Action (Simple), Human Pose, Object Existence (Simple), Object Existence (Complex), Object Attribute, Object Counting, Object Transition, Environment Change, Environment State]", + "bbox": [ + 174, + 809, + 790, + 845 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "INPUT_prompt = \"\"Given the query below, classify it into one of the categories mentioned above. 
Query: {query} Your response:", + "bbox": [ + 176, + 845, + 750, + 857 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Figure 15: Prompts for LLM used to annotate the semantics of each query on TVGBench.", + "bbox": [ + 200, + 872, + 792, + 887 + ], + "page_idx": 22 + }, + { + "type": "page_number", + "text": "23", + "bbox": [ + 488, + 935, + 506, + 946 + ], + "page_idx": 22 + } +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13377/48689f30-a9be-48a0-a682-282cc19b9dc2_model.json b/data/2025/2503_13xxx/2503.13377/48689f30-a9be-48a0-a682-282cc19b9dc2_model.json new file mode 100644 index 0000000000000000000000000000000000000000..088e68de1da875aec2f9747ed2acda4770e90adf --- /dev/null +++ b/data/2025/2503_13xxx/2503.13377/48689f30-a9be-48a0-a682-282cc19b9dc2_model.json @@ -0,0 +1,5350 @@ +[ + [ + { + "type": "aside_text", + "bbox": [ + 0.023, + 0.279, + 0.061, + 0.718 + ], + "angle": 270, + "content": "arXiv:2503.13377v3 [cs.CV] 29 Jun 2025" + }, + { + "type": "title", + "bbox": [ + 0.174, + 0.122, + 0.826, + 0.175 + ], + "angle": 0, + "content": "Time-R1: Post-Training Large Vision Language Model for Temporal Video Grounding" + }, + { + "type": "text", + "bbox": [ + 0.19, + 0.225, + 0.808, + 0.273 + ], + "angle": 0, + "content": "Ye Wang\\(^{1*}\\) Ziheng Wang\\(^{1*}\\) Boshen Xu\\(^{1*‡}\\) Yang Du\\(^{1}\\) Kejun Lin\\(^{1}\\) Zihan Xiao\\(^{3}\\) \nZihao Yue\\(^{1}\\) Jianzhong Ju\\(^{2}\\) Liang Zhang\\(^{1}\\) Dingyi Yang\\(^{1}\\) Xiangnan Fang\\(^{1}\\) Zewen He\\(^{2}\\) \nZhenbo Luo\\(^{2}\\) Wenxuan Wang\\(^{1}\\) Junqi Lin\\(^{2}\\) Jian Luan\\(^{2}\\) Qin Jin\\(^{1†}\\)" + }, + { + "type": "text", + "bbox": [ + 0.324, + 0.283, + 0.696, + 0.328 + ], + "angle": 0, + "content": "\\(^{1}\\)AIM3 Lab, Renmin University of China \\(^{2}\\)MiLM Plus, Xiaomi Inc. \\(^{3}\\)Independent Researcher Project Page: https://xuboshen.github.io/Time-R1/" + }, + { + "type": "title", + "bbox": [ + 0.46, + 0.363, + 0.538, + 0.379 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.23, + 0.396, + 0.768, + 0.631 + ], + "angle": 0, + "content": "Temporal Video Grounding (TVG), the task of locating specific video segments based on language queries, is a core challenge in long-form video understanding. While recent Large Vision-Language Models (LVLMs) have shown early promise in tackling TVG through supervised fine-tuning (SFT), their abilities to generalize remain limited. To address this, we propose a novel post-training framework that enhances the generalization capabilities of LVLMs via reinforcement learning (RL). Specifically, our contributions span three key directions: (1) Time-R1: we introduce a reasoning-guided post-training framework via RL with verifiable reward to enhance the capabilities of LVLMs on the TVG task. (2) TimeRFT: we explore data-efficient post-training strategies on our curated RL-friendly dataset, which trains the model to progressively comprehend difficult samples, leading to better generalization. (3) TVGBench: we carefully construct a small yet comprehensive benchmark for LVLM evaluation, assessing 11 types of queries and featuring balanced distributions across both videos and queries. Extensive experiments demonstrate that Time-R1 achieves state-of-the-art performance across multiple downstream datasets using only 2.5K training data, while improving its general video understanding capabilities." 
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.667, + 0.314, + 0.683 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.699, + 0.827, + 0.77 + ], + "angle": 0, + "content": "Understanding long-form videos has long been a core ambition in computer vision [14, 24, 9]. A critical step toward this goal is Temporal Video Grounding (TVG) [15, 61], which enables models to localize video segments corresponding to natural language queries (e.g., \"Find the segment where a person walks into the living room\"). Such capability is fundamental for real-world applications, including smart home assistants [55, 16, 46] and video retrieval systems on online platforms [5, 3]." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.775, + 0.828, + 0.873 + ], + "angle": 0, + "content": "Traditional TVG approaches adopt a feature-based paradigm, where pretrained models (e.g., CLIP [42], I3D [6]) extract text and video features, followed by task-specific grounding models [30, 22, 29]. However, these methods suffer from error accumulation due to imperfect pretrained features. To overcome these limitations, recent efforts have shifted toward end-to-end Large Vision-Language Models (LVLMs) [44, 60], which directly process long-form videos and text queries. Despite being pretrained on datasets \\(100 \\times\\) larger than domain-specific benchmarks [46], LVLMs (with \\(7\\mathrm{B}+\\) parameters) often underperform compared to much smaller feature-based models (e.g.," + }, + { + "type": "page_footnote", + "bbox": [ + 0.197, + 0.884, + 0.776, + 0.899 + ], + "angle": 0, + "content": "† Corresponding author: Qin Jin; * Equal contribution, listed in alphabetical order; ‡ Project lead." + }, + { + "type": "footer", + "bbox": [ + 0.172, + 0.923, + 0.316, + 0.937 + ], + "angle": 0, + "content": "Preprint. Under review." + } + ], + [ + { + "type": "image_caption", + "bbox": [ + 0.204, + 0.091, + 0.324, + 0.113 + ], + "angle": 0, + "content": "TimeRFT Training training strategy, dataset" + }, + { + "type": "image_caption", + "bbox": [ + 0.382, + 0.091, + 0.529, + 0.112 + ], + "angle": 0, + "content": "Time-R1 Framework data-efficient, effective, R1-like" + }, + { + "type": "image_caption", + "bbox": [ + 0.619, + 0.091, + 0.79, + 0.112 + ], + "angle": 0, + "content": "TVGBench Evaluation \nata, 11 types, balanced distributions" + }, + { + "type": "image", + "bbox": [ + 0.201, + 0.113, + 0.802, + 0.168 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.204, + 0.173, + 0.504, + 0.185 + ], + "angle": 0, + "content": "Improved Temporal Video Grounding (video length: 148s)" + }, + { + "type": "image", + "bbox": [ + 0.212, + 0.186, + 0.782, + 0.237 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.205, + 0.238, + 0.464, + 0.247 + ], + "angle": 0, + "content": "Question: Where is the event \"He took out cutting board\"." + }, + { + "type": "image", + "bbox": [ + 0.206, + 0.247, + 0.795, + 0.319 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.207, + 0.32, + 0.772, + 0.34 + ], + "angle": 0, + "content": "Time-R1 thinking (ours): The event \"He took out cutting board\" occurs between 13.40 and 18.20 seconds. This is when the person is seen placing the cutting board on the counter." 
+ }, + { + "type": "image_caption", + "bbox": [ + 0.204, + 0.352, + 0.436, + 0.363 + ], + "angle": 0, + "content": "Improved Short Video QA (video length: 25s)" + }, + { + "type": "image", + "bbox": [ + 0.204, + 0.367, + 0.452, + 0.394 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.205, + 0.399, + 0.427, + 0.409 + ], + "angle": 0, + "content": "Question: What letter is likely to be shown next?" + }, + { + "type": "text", + "bbox": [ + 0.206, + 0.412, + 0.23, + 0.425 + ], + "angle": 0, + "content": "(A) Y" + }, + { + "type": "image", + "bbox": [ + 0.274, + 0.411, + 0.315, + 0.427 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.368, + 0.411, + 0.409, + 0.428 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.46, + 0.35, + 0.557, + 0.359 + ], + "angle": 0, + "content": "Base model thinking:" + }, + { + "type": "text", + "bbox": [ + 0.459, + 0.36, + 0.798, + 0.394 + ], + "angle": 0, + "content": "... Given this sequence, the next logical letter would be the next letter in the alphabet after 'F', which is 'G'. The letters 'D', 'E', 'F', and 'G' are now on the table, forming the sequence 'DEFG'. Therefore, the next letter to be shown is likely to be 'G'." + }, + { + "type": "title", + "bbox": [ + 0.46, + 0.396, + 0.576, + 0.405 + ], + "angle": 0, + "content": "Time-R1 thinking (ours):" + }, + { + "type": "text", + "bbox": [ + 0.459, + 0.405, + 0.797, + 0.432 + ], + "angle": 0, + "content": "The sequence of letters shown in the video is \"D\", \"E\", \"F\", and \"G\". This suggests a pattern of showing the next letter in the alphabet. The letters are being placed in a specific order, and the next logical letter after \"G\" would be \"H\"." + }, + { + "type": "image_caption", + "bbox": [ + 0.204, + 0.444, + 0.437, + 0.455 + ], + "angle": 0, + "content": "Improved Long Video QA (video length: 59m)" + }, + { + "type": "image", + "bbox": [ + 0.202, + 0.457, + 0.449, + 0.495 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.205, + 0.497, + 0.452, + 0.518 + ], + "angle": 0, + "content": "Question: What do the expanding red lines on the map in the first few minutes of the video stand for?" + }, + { + "type": "text", + "bbox": [ + 0.205, + 0.521, + 0.32, + 0.547 + ], + "angle": 0, + "content": "(A) The Yellow River (C) Du Fu's route to Xi'an" + }, + { + "type": "text", + "bbox": [ + 0.35, + 0.521, + 0.454, + 0.544 + ], + "angle": 0, + "content": "(B) The Silk Road \n(D) The Yangtze River" + }, + { + "type": "title", + "bbox": [ + 0.46, + 0.447, + 0.557, + 0.456 + ], + "angle": 0, + "content": "Base model thinking:" + }, + { + "type": "text", + "bbox": [ + 0.459, + 0.457, + 0.798, + 0.504 + ], + "angle": 0, + "content": "... The map is used to illustrate the historical journey of Du Fu, a famous Chinese poet, as indicated by the text \"Du Fu's route to Xi'an\" and the historical context provided by the narrator. Given this analysis, the expanding red lines on the map in the first few minutes of the video stand for Du Fu's route to Xi'an.... Time-R1 thinking (ours):" + }, + { + "type": "text", + "bbox": [ + 0.46, + 0.505, + 0.798, + 0.542 + ], + "angle": 0, + "content": "... 2. The map is shown with red lines expanding from a central point, which is likely Xi'an, a major city on the Silk Road. 3. The map is labeled with \"Silk Road\" and \"The Great Silk Road,\" confirming that the red lines represent the Silk Road. ..." 
+ }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.561, + 0.825, + 0.631 + ], + "angle": 0, + "content": "Figure 1: Our contributions include a novel post-training framework for LVLMs via reinforcement learning, Time-R1; an RL fine-tuning strategy along with its associated training dataset, TimeRFT; and a new benchmark, TVGBench, for evaluating LVLMs on the TVG task. Our Time-R1 model not only achieves SoTA TVG performance, but also enhances performance on both short- and long-form multi-choice video question answering tasks." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.677, + 0.825, + 0.706 + ], + "angle": 0, + "content": "9M parameters [22]). This raises a critical question: Why do LVLMs, despite their vast pretrained knowledge, fail to excel on TVG?" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.711, + 0.825, + 0.81 + ], + "angle": 0, + "content": "We attribute the suboptimal performance of LVLMs to over-penalization of false negatives during supervised fine-tuning (SFT). For instance, when the ground truth timestamp is [2s, 4s], even when the model makes a reasonable prediction of timestamp [1.9s, 3.9s], the autoregressive loss would still be undesirably high. Such disproportionate penalties on reasonable predictions result in overfitting and poor generalization. While previous solutions have attempted to address this by introducing new timestamp tokens into the vocabulary [18, 17, 53] or by appending a regression head to predict timestamps [65], they often sacrifice the pretrained language capabilities of LLMs." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.815, + 0.827, + 0.913 + ], + "angle": 0, + "content": "Inspired by recent success in reinforcement learning (RL) for post-training LLMs [39, 1] with chain-of-thought (CoT) prompting, especially in domains with deterministic answers, such as code generation and mathematical reasoning, we explore whether RL can serve as an effective alternative for TVG. Unlike SFT, RL allows direct optimization of task-specific metrics (e.g., IoU), thereby reducing rigid penalties of autoregressive losses and encouraging plausible timestamp predictions. In this work, we present an RL-based framework, Time-R1, that effectively post-trains LVLMs for TVG and pushes the performance frontier. Our contributions include:" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.504, + 0.947 + ], + "angle": 0, + "content": "2" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.188, + 0.089, + 0.362, + 0.255 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.249, + 0.258, + 0.301, + 0.272 + ], + "angle": 0, + "content": "(a) VLP" + }, + { + "type": "image", + "bbox": [ + 0.389, + 0.088, + 0.567, + 0.256 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.451, + 0.257, + 0.502, + 0.271 + ], + "angle": 0, + "content": "(b) SFT" + }, + { + "type": "image", + "bbox": [ + 0.58, + 0.089, + 0.788, + 0.255 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.637, + 0.257, + 0.723, + 0.271 + ], + "angle": 0, + "content": "(c) RL (ours)" + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.276, + 0.828, + 0.307 + ], + "angle": 0, + "content": "Figure 2: Comparison of different approaches for the TVG task, including feature-based video-language pretraining (VLP) [37, 22], supervised fine-tuning (SFT) [41, 60], and RL (ours)." 
+ }, + { + "type": "text", + "bbox": [ + 0.171, + 0.33, + 0.827, + 0.4 + ], + "angle": 0, + "content": "- RL-based framework for temporal video grounding. We introduce Time-R1, a reasoning-enhanced post-training framework via RL with verifiable rewards, where the LVLM first generates chain-of-thought descriptions and then predicts timestamps. The post-training process is optimized using Generalized Reinforcement Policy Optimization (GRPO) with a novel reward function, incorporating both a structured template reward and a timestamp-aware tIoU reward." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.4, + 0.826, + 0.469 + ], + "angle": 0, + "content": "- Time-aware reinforcement fine-tuning. We propose TimeRFT, a reinforcement fine-tuning strategy with dynamic hard sampling, which mines hard samples on a curated dataset and progressively selects low-IoU samples for multi-epoch training. To ensure stable reasoning and reduce hallucinations, we adopt a cold-start approach to generate CoT with video captions. To support RL-friendly training, we curate an RFT dataset with difficulty annotations on the TVG task." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.469, + 0.825, + 0.538 + ], + "angle": 0, + "content": "- Comprehensive benchmark for LVLMs on TVG. Existing TVG benchmarks are designed for the large-scale evaluation of small models. Considering the inference speed bottlenecks and general-purpose role of LVLMs, we construct TVGBench, a compact yet comprehensive benchmark for TVG. We carefully balance the video distribution, query distribution, and design specific query semantics to ensure that the benchmark is well-suited for evaluating LVLMs." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.538, + 0.827, + 0.607 + ], + "angle": 0, + "content": "- State-of-the-Art results and generalization. Compared with 7B LVLMs on the temporal video grounding task, our method outperforms all prior SFT-based methods with only 2.5K training data. After fine-tuning on downstream benchmarks like Charades [46] and ActivityNet [5], it surpasses many previous feature-based approaches. Furthermore, Time-R1 also improves the general video understanding on video QA benchmarks like MVBench [26] and VideoMME [13]." + }, + { + "type": "list", + "bbox": [ + 0.171, + 0.33, + 0.827, + 0.607 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.626, + 0.331, + 0.641 + ], + "angle": 0, + "content": "2 Related Works" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.657, + 0.828, + 0.879 + ], + "angle": 0, + "content": "Temporal video grounding. The TVG task [15, 3] aims to localize temporal segments in untrimmed long videos given natural language queries. Previous works can be broadly categorized into feature-based video-language pretraining and frame-based LVLM methods, as shown in Figure 2. Feature-based methods first extract video and language features using pre-trained encoders (e.g., I3D [6], EgoVLP [29], CLIP [42], BERT [12], etc.), then build timestamp prediction modules based on multimodal fused features. Existing methods differ mainly in their design of the multimodal fusion module and timestamp prediction module. For example, SnAG [37] adopts a late fusion strategy and regresses timestamps directly in a single forward pass without proposal generation. While effective, these approaches are fundamentally limited by the quality of the pretrained features. 
Recent efforts have shifted toward end-to-end frame-based methods by fine-tuning LVLMs using SFT with autoregressive losses [27, 44, 60, 19, 52, 18, 25]. For instance, TRACE [18] treats each event as a combination of timestamp, saliency score, and caption, and fine-tunes the LVLM to generate event sequences autoregressively. However, such methods still underperform on even simple TVG benchmarks like Charades [46], often lagging behind feature-based approaches. In this work, we propose a novel RL-based post-training framework that establishes new state-of-the-art performance for LVLMs on TVG tasks, even surpassing many feature-based methods." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.884, + 0.825, + 0.914 + ], + "angle": 0, + "content": "RL in LLMs and LVLMs. RL is a foundational machine learning paradigm applied in diverse domains such as game playing [47], robotics [34], and increasingly, language and vision-language" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.504, + 0.948 + ], + "angle": 0, + "content": "3" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.171, + 0.091, + 0.827, + 0.355 + ], + "angle": 0, + "content": "models [39]. RL updates models by interacting with environments and maximizing reward signals. In recent years, RL has profoundly affected the field of LLM and LVLM post-training, which falls into two main categories: Reinforcement Learning with Human Feedback (RLHF) [40, 57] and Reinforcement Learning with Verifiable Reward (RLVR) [1, 39, 7]. Early works find that RLHF can align LLM to generate human preference data, which primarily reduces the safety risks in LLM and hallucination problems in LVLM. For example, RLHF-V [57] collects fine-grained pairs of incorrect and corrected captions and leverages direct preference optimization to optimize the model to generate correct captions, thus mitigating hallucinations. More recent works have explored RLVR in tasks with deterministic answers, which not only benefits mathematical problem solving and code generation in LLMs (e.g., DeepSeek-R1 [1]), but also enhances the generalization of LVLMs across a range of applications, such as visual grounding [32] and visual reasoning [48]. As a pioneer among open-source LLMs, DeepSeek-R1 [1] adopts GRPO to enhance reasoning capabilities by designing rule-based rewards that incorporate both reasoning templates and final answers. In the context of LVLMs, MM-Eureka [35] investigates multimodal image reasoning using GRPO, introducing an online filtering mechanism and a two-stage training strategy to stabilize the optimization process. However, existing approaches remain confined to language [1, 56], image understanding [7, 48, 32, 35], and short video understanding [64, 28]. It remains unclear whether and how reinforcement learning impacts long-form video understanding. To bridge this gap, we take a first step by introducing RLVR into LVLMs for the temporal video grounding task." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.374, + 0.273, + 0.389 + ], + "angle": 0, + "content": "3 Method" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.404, + 0.825, + 0.515 + ], + "angle": 0, + "content": "The TVG task aims to temporally localize video segments within long-form videos based on natural language queries. 
Given a video of duration \\(t\\) seconds, which is represented as a sequence of \\(T\\) frames \\(\\{x_{1},\\ldots ,x_{T}\\}\\), and a language query \\(q\\), the goal is to identify the temporal boundaries \\([t_s,t_e]\\) of the segment that best corresponds to \\(q\\), where \\(t_s,t_e\\in \\mathbb{R}^+\\). In this work, we introduce Time-R1, a framework designed to unleash the potential of LVLMs for the TVG task using RL. We first provide background on RL-based training for LLMs in Section 3.1, then detail the training procedure of TimeR1 in Section 3.2. Next, we describe specific training techniques used in TimeRFT in Section 3.3, and finally, we present the construction of our evaluation benchmark TVGBench in Section 3.4." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.531, + 0.466, + 0.546 + ], + "angle": 0, + "content": "3.1 Background of GRPO: RL for LLM" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.557, + 0.825, + 0.642 + ], + "angle": 0, + "content": "As a pioneer among open-sourced R1-style LLMs, Deepseek-R1 [1] leverages GRPO to train the policy model \\(\\pi_{\\theta}\\) (i.e., the LLM) to think before answering, making it particularly well-suited for tasks with well-defined answers, such as mathematical reasoning. In the GRPO framework, given an input question \\(p\\), the LLM samples \\(G\\) candidate responses \\(o = \\{o_1,\\dots,o_G\\}\\), and a reward function \\(r(\\cdot)\\) assigns a reward score to each response, yielding \\(\\{r(o_1),\\ldots ,r(o_G)\\}\\). GRPO encourages the LLM to generate responses that maximize a weighted sum reward \\(R(o)\\), defined by:" + }, + { + "type": "equation", + "bbox": [ + 0.328, + 0.648, + 0.825, + 0.688 + ], + "angle": 0, + "content": "\\[\nR (o) = \\sum_ {i = 1} ^ {G} \\frac {\\pi_ {\\theta} \\left(o _ {i}\\right)}{\\pi_ {\\theta_ {\\mathrm {o l d}}} \\left(o _ {i}\\right)} \\cdot \\frac {r \\left(o _ {i}\\right) - \\operatorname {m e a n} \\left(\\left\\{r \\left(o _ {i}\\right) \\right\\} _ {i = 1} ^ {G}\\right)}{\\operatorname {s t d} \\left(\\left\\{r \\left(o _ {i}\\right) \\right\\} _ {i = 1} ^ {G}\\right)} \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.696, + 0.825, + 0.752 + ], + "angle": 0, + "content": "where \\(\\pi_{\\theta}(o)\\) denotes the probability of LLM generating the response \\(o\\), and \\(\\pi_{\\theta_{\\mathrm{old}}}\\) represents the LLM parameters from a recently optimized state. To ensure training stability and avoid large deviations from the original language model behavior, the final training objective incorporates a KL-divergence regularization term [1], penalizing divergence between \\(\\pi_{\\theta}\\) and \\(\\pi_{\\mathrm{ref}}\\):" + }, + { + "type": "equation", + "bbox": [ + 0.358, + 0.758, + 0.825, + 0.78 + ], + "angle": 0, + "content": "\\[\n\\max _ {\\pi_ {\\theta}} \\mathbb {E} _ {o \\sim \\pi_ {\\theta_ {\\mathrm {o l d}}} (p)} [ R (o) - \\beta \\mathrm {D} _ {\\mathrm {K L}} (\\pi_ {\\theta} \\| \\pi_ {\\mathrm {r e f}}) ] \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.786, + 0.686, + 0.802 + ], + "angle": 0, + "content": "where \\(\\beta\\) is a scaling coefficient. We omit the clipping operation for simplicity." 
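To make the group-relative update in Eqs. (1)-(2) concrete, the following is a minimal PyTorch-style sketch, assuming summed per-response log-probabilities and scalar rewards for the G sampled responses are already available; the function name, tensor shapes, and the beta value are illustrative and not taken from the paper's implementation.

```python
import torch

def grpo_objective(logp_new, logp_old, rewards, kl_to_ref, beta=0.04):
    """Minimal sketch of the GRPO objective in Eqs. (1)-(2).

    logp_new:  (G,) summed log-probs of the G sampled responses under pi_theta
    logp_old:  (G,) summed log-probs under the behavior policy pi_theta_old
    rewards:   (G,) scalar rewards r(o_i) for each response
    kl_to_ref: scalar estimate of KL(pi_theta || pi_ref)
    beta:      KL penalty coefficient (illustrative value)
    """
    # Group-relative advantage: standardize rewards within the group of G samples.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    # Importance ratio pi_theta(o_i) / pi_theta_old(o_i); the old policy is treated as fixed.
    ratio = torch.exp(logp_new - logp_old.detach())
    # Weighted-sum reward R(o) from Eq. (1); clipping is omitted, as in the exposition above.
    weighted = (ratio * adv).sum()
    # Maximize R(o) - beta * KL  <=>  minimize the negative (Eq. (2)).
    return -(weighted - beta * kl_to_ref)
```

With, say, G = 8 sampled responses per query, `rewards` would be the per-response scores from the reward function defined in the next section.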
+ }, + { + "type": "title", + "bbox": [ + 0.171, + 0.817, + 0.53, + 0.832 + ], + "angle": 0, + "content": "3.2 Time-R1: RL for Temporal Video Grounding" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.843, + 0.827, + 0.913 + ], + "angle": 0, + "content": "Since the TVG task has defined answers and well-established evaluation metrics, RL can optimize LVLMs for task-specific performance through tailored reward design. To enhance interpretability and align with human-like reasoning, we additionally incorporate an explicit \"thinking process\" prior to timestamp prediction. This process encourages the model to produce contextualized video descriptions that support its final decision. We detail our reward modeling and training process below." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.505, + 0.947 + ], + "angle": 0, + "content": "4" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.171, + 0.092, + 0.825, + 0.15 + ], + "angle": 0, + "content": "Reward modeling. The reward \\( r_i \\) plays a crucial role in guiding the model's learning objective. To encourage effective temporal grounding with an explicit reasoning process, we design a composite reward function comprising two components: the timestamp-aware Intersection over Union (IoU) \\( r_{\\mathrm{tIoU}} \\) and the reasoning template reward \\( r_{\\mathrm{form}} \\)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.154, + 0.825, + 0.184 + ], + "angle": 0, + "content": "- Timestamp-aware IoU reward \\(r_{\\mathrm{tIoU}}(\\cdot)\\). The TVG task primarily uses IoU [58] to evaluate the quality of predicted segments against the ground-truth \\([t_s', t_e']\\), computed as:" + }, + { + "type": "equation", + "bbox": [ + 0.418, + 0.189, + 0.826, + 0.224 + ], + "angle": 0, + "content": "\\[\n\\mathrm {I o U} = \\frac {\\left[ t _ {s} , t _ {e} \\right] \\cap \\left[ t _ {s} ^ {\\prime} , t _ {e} ^ {\\prime} \\right]}{\\left[ t _ {s} , t _ {e} \\right] \\cup \\left[ t _ {s} ^ {\\prime} , t _ {e} ^ {\\prime} \\right]} \\tag {3}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.228, + 0.827, + 0.381 + ], + "angle": 0, + "content": "where \\(A \\cap B\\) and \\(A \\cup B\\) denote the union and intersection between sets A and B, respectively. Optimizing for the IoU inherently encourages the LVLM to produce predictions that fall within a permissible range of variation \\(\\epsilon\\), such that \\(t_{s \\text{ or } e}^{\\prime} - \\epsilon \\leq t_{s \\text{ or } e} \\leq t_{s \\text{ or } e}^{\\prime} + \\epsilon\\) still yields high IoUs. This encourages the LVLM to focus more on the semantic understanding of the event within possible temporal boundaries, rather than rigidly requiring exact temporal alignment like SFT. However, standard IoU may fail to accurately reflect the quality of temporal alignment in certain scenarios. For example, when the ground truth span is [0, 30] (i.e., the full video), any prediction covering more than \\(30\\%\\) of the video would result in an IoU greater than 0.3. A prediction like [10, 25] would yield an IoU of 0.5, which overestimates its quality despite incorrect timestamps. To address this issue, we introduce the timestamp-aware IoU (tIoU) as a corrective measure. 
tIoU augments the standard IoU with penalties on timestamp deviations, defined as:" + }, + { + "type": "equation", + "bbox": [ + 0.332, + 0.387, + 0.826, + 0.419 + ], + "angle": 0, + "content": "\\[\nr _ {\\mathrm {t I o U}} (o) = \\operatorname {I o U} \\cdot \\left(1 - \\frac {\\left| t _ {s} - t _ {s} ^ {\\prime} \\right|}{t}\\right) \\cdot \\left(1 - \\frac {\\left| t _ {e} - t _ {e} ^ {\\prime} \\right|}{t}\\right) \\tag {4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.424, + 0.826, + 0.492 + ], + "angle": 0, + "content": "This modification penalizes predictions that deviate from the reference timestamps relative to the video duration \\( t \\). In the earlier example, the reward value changes from 0.5 (IoU) to 0.28 (tIoU), providing a more realistic signal for learning. Overall, tIoU acts as a stricter and more informative reward signal, encouraging the LVLM to develop a deeper temporal understanding of events, rather than relying on superficial shortcuts." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.493, + 0.827, + 0.646 + ], + "angle": 0, + "content": "- Reasoning template reward \\( r_{\\mathrm{form}}(\\cdot) \\). In TVG, the video segments relevant to a textual query typically comprise only a small portion of the entire long video. For LVLMs, it is therefore suboptimal to directly predict timestamps without first engaging in a reasoning process to identify the relevant content. Instead, the model should allocate its computational capacity toward reasoning over visual and linguistic cues to better understand the temporal context before making predictions. For instance, given the query \"the man washes dishes\", reasoning that the person is likely in a kitchen can improve temporal localization. Such context-aware inference supports more accurate and semantically aligned predictions. To encourage this behavior, we introduce a template-based reasoning reward, which incentivizes the model to generate intermediate reasoning steps (structured in a predefined format) prior to timestamp localization. The reasoning template reward requires the LVLM to structure its response like \"<...<\\( t_s \\) to \\( t_e \\)>\", formulated as:" + }, + { + "type": "equation", + "bbox": [ + 0.356, + 0.653, + 0.826, + 0.687 + ], + "angle": 0, + "content": "\\[\nr _ {\\text {f o r m}} (o) = \\left\\{ \\begin{array}{l} 0, \\text {i f} o \\text {h a s w r o n g f r o m a t} \\\\ 1, \\text {i f} o \\text {h a s c o r r e c t f r o m a t} \\end{array} \\right. \\tag {5}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.172, + 0.694, + 0.445, + 0.708 + ], + "angle": 0, + "content": "The overall reward is the sum of the two:" + }, + { + "type": "equation", + "bbox": [ + 0.405, + 0.716, + 0.826, + 0.733 + ], + "angle": 0, + "content": "\\[\nr (o) = r _ {\\mathrm {t I o U}} (o) + r _ {\\text {f o r m}} (o) \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.745, + 0.827, + 0.815 + ], + "angle": 0, + "content": "GRPO training. The LVLM \\(\\mathcal{F}(\\cdot)\\) takes the video frames \\(x_{1},\\ldots ,x_{t}\\) and the language query \\(q\\) as input and generates \\(G\\) candidate responses \\(o_1,\\dots,o_G\\), where each response is computed as \\(o_i = \\mathcal{F}(x_1,\\ldots ,x_t;q)\\). The reward for each response is calculated using Equation 1, and the model is optimized with the GRPO objective in Equation 2. To focus learning on the reasoning and localization capabilities, we freeze the visual encoder and update only the parameters of the LLM during training." 
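As a minimal sketch of the reward terms in Eqs. (3)-(6), the helpers below assume the response template uses <think>...</think><answer>t_s to t_e</answer> tags, consistent with the cold-start format of Eq. (7); the regex and function names are illustrative rather than the authors' code. The last lines reproduce the worked example from the text (ground truth [0, 30] s on a 30 s video, prediction [10, 25] s).

```python
import re

def tiou_reward(pred, gt, duration):
    """Timestamp-aware IoU of Eq. (4): IoU scaled by start/end deviation penalties."""
    (ts, te), (gs, ge) = pred, gt
    inter = max(0.0, min(te, ge) - max(ts, gs))
    union = max(te, ge) - min(ts, gs)
    iou = inter / union if union > 0 else 0.0
    return iou * (1 - abs(ts - gs) / duration) * (1 - abs(te - ge) / duration)

def format_reward(response):
    """Template reward of Eq. (5): 1 if the response matches the assumed <think>/<answer> layout."""
    pattern = r"<think>.*</think>\s*<answer>\s*[\d.]+\s*to\s*[\d.]+\s*</answer>"
    return 1.0 if re.fullmatch(pattern, response.strip(), flags=re.DOTALL) else 0.0

def total_reward(response, pred, gt, duration):
    """Composite reward of Eq. (6)."""
    return tiou_reward(pred, gt, duration) + format_reward(response)

# Worked example from the text: IoU = 15/30 = 0.5, but
# tIoU = 0.5 * (1 - 10/30) * (1 - 5/30) ≈ 0.28.
print(round(tiou_reward((10, 25), (0, 30), 30), 2))  # 0.28
```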
+ }, + { + "type": "title", + "bbox": [ + 0.171, + 0.831, + 0.559, + 0.847 + ], + "angle": 0, + "content": "3.3 TimeRFT: Time-Aware RL-Friendly Fine-Tuning" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.856, + 0.827, + 0.913 + ], + "angle": 0, + "content": "Due to the high computational cost associated with RL training, we explore data-efficient strategies to reduce sample requirements. To this end, we propose TimeRFT, which incorporates time-aware, RL-friendly dataset curation and fine-tuning techniques aimed at enhancing generalization while minimizing training overhead." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.505, + 0.948 + ], + "angle": 0, + "content": "5" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.171, + 0.092, + 0.825, + 0.135 + ], + "angle": 0, + "content": "RL-friendly dataset curation. We construct the TimeRFT dataset by leveraging only TVG samples and assign a difficulty score to each sample based on the base model's performance. A small subset is then selected for subsequent RL training." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.14, + 0.825, + 0.196 + ], + "angle": 0, + "content": "- Source data collection. Our training videos are sourced from Internet video datasets including YT-Temporal [54], DiDeMo [3], QuerYD [38], InternVid [49], and HowTo100M [36]. We obtain grounding data with annotations from VTG-IT [17], TimeIT [44], TimePro [60], HTStep [2], and LongVid [27]. This process yields 339K temporal grounding samples." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.196, + 0.827, + 0.336 + ], + "angle": 0, + "content": "- RFT data filtering. We propose a data selection strategy based on training difficulty to significantly reduce training costs while preserving strong generalization performance. Models trained only on easy samples (e.g., IoU \\(\\geq 0.7\\)) tend to overfit, whereas training on overly difficult samples (e.g., IoU = 0) often suffers from sparse reward signals, making it hard for the model to receive positive feedback. To strike a balance, we select samples of moderate difficulty that are more conducive to generalization during reinforcement fine-tuning. We first estimate a difficulty score for each sample based on the performance of the base model. For grounding tasks, difficulty is quantified using the IoU between the predicted and ground-truth temporal regions. We then filter out samples that are either too easy or too hard. Specifically, we sample a subset of data from a Gaussian distribution over the IoU axis centered at 0.3, resulting in a set of 2.5K moderately difficult samples for RL training." + }, + { + "type": "list", + "bbox": [ + 0.171, + 0.14, + 0.827, + 0.336 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.34, + 0.825, + 0.41 + ], + "angle": 0, + "content": "RFT training strategy. For selected difficult samples, the model may struggle to learn them in a single pass. However, we argue that effectively mastering these challenging cases is essential for improving overall model performance. To this end, we employ a multi-epoch training approach combined with per-epoch sample filtering, allowing the model to repeatedly focus on harder samples and gradually improve its understanding." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.416, + 0.825, + 0.485 + ], + "angle": 0, + "content": "- Dynamic hard sampling. We adopt a multi-epoch training strategy coupled with per-epoch sample filtering to enhance learning from difficult examples. 
The model is trained over multiple epochs, and after each epoch, we exclude easy samples with an IoU greater than 0.7 that have become easy. This dynamic curriculum discourages overfitting on easy instances while ensuring consistent exposure to harder samples, ultimately promoting stronger generalization." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.485, + 0.825, + 0.583 + ], + "angle": 0, + "content": "- Cold start fine-tuning with few CoT data. For smaller models (e.g., 3B parameters), we observe that directly training with RL to generate CoT responses often results in reasoning steps that are either unintelligible or hallucinated, which impairs answer quality. Additionally, the length of generated reasoning during early training stages is difficult to control, leading to an unstable training process. To address these issues, we introduce a cold-start fine-tuning strategy using a small set of CoT-formatted examples that encourage grounded reasoning aligned with video content. Specifically, we guide the model to produce structured sequential captions with associated timestamps with the template as:" + }, + { + "type": "list", + "bbox": [ + 0.171, + 0.416, + 0.825, + 0.583 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.584, + 0.825, + 0.618 + ], + "angle": 0, + "content": "\\(<\\) think \\(> < t_{s_1}\\) to \\(t_{e_1}:C_1\\); \\(t_{s_2}\\) to \\(t_{e_2}:C_2 > < / \\text{think} > < \\text{answer} > t_s\\) to \\(t_e < / \\text{answer} > (7)\\) where \\(C_i\\) represent captions corresponding to video segments \\([t_{s_i}, t_{e_i}]\\), respectively." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.631, + 0.757, + 0.646 + ], + "angle": 0, + "content": "3.4 TVGBench: Evaluation Benchmark for LVLM on Temporal Video Grounding" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.656, + 0.827, + 0.755 + ], + "angle": 0, + "content": "Existing benchmarks for temporal video grounding either focus on large-scale datasets tailored for smaller models within specific domains (e.g., human activities in ActivityNet) or consist of small, limited test sets (e.g., the 2K home activity samples in Charades) typically used for LVLM evaluation due to their slower inference speed. However, these benchmarks fall short in capturing the evaluation needs of LVLMs, which, despite slower inference, exhibit strong generalization capabilities. To bridge this gap, we introduce TVGBench, a lightweight yet comprehensive evaluation benchmark specifically designed for assessing the performance of LVLMs on temporal video grounding tasks." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.76, + 0.827, + 0.803 + ], + "angle": 0, + "content": "Data sources. To ensure a comprehensive evaluation, we construct our TVGBench by curating samples from five public benchmarks with a balanced distribution of data sources: Charades-STA [46], ActivityNet-Captions [5], HiREST [59], EgoNLQ [16], and TaCoS [43]." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.808, + 0.825, + 0.838 + ], + "angle": 0, + "content": "Benchmark statistics. We carefully balance the video duration, video domain, and query center, and construct query semantics in TVGBench to construct 800 instances, as shown in Figure 3." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.843, + 0.825, + 0.884 + ], + "angle": 0, + "content": "- Distribution statistics. Video durations in the dataset have a balanced range from short clips up to 3 minutes long. 
To ensure temporal diversity, the center timestamps of queries are designed to be approximately uniformly distributed across the entire span of each video." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.884, + 0.825, + 0.913 + ], + "angle": 0, + "content": "- Query semantics. Since the original datasets do not provide semantic labels for queries, we manually define 11 semantic categories grouped under three major types: human, object, and" + }, + { + "type": "list", + "bbox": [ + 0.171, + 0.843, + 0.825, + 0.913 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.937, + 0.505, + 0.948 + ], + "angle": 0, + "content": "6" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.174, + 0.089, + 0.363, + 0.285 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.364, + 0.089, + 0.603, + 0.285 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.604, + 0.089, + 0.824, + 0.285 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.17, + 0.295, + 0.825, + 0.366 + ], + "angle": 0, + "content": "Figure 3: Statistics of TVGBench. TVGBench comprises 11 types of queries covering aspects related to humans, objects, and environments. As illustrated in the figure on the right, the distributions of query center, video length, and dataset source are designed to be as balanced as possible. This balanced construction allows for a comprehensive evaluation of model performance across different dimensions, enabling fine-grained analysis along each axis during benchmarking." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.396, + 0.825, + 0.466 + ], + "angle": 0, + "content": "environment. We leverage DeepSeek-V3 [11] to annotate the semantic type of each query and ensure approximate balance across these categories. While most categories are evenly represented, the Human Action Simple (HAS) category is slightly overrepresented due to inherent dataset bias (simple indoor actions are more common). In such cases, we prioritize achieving a balance across datasets from different domains while maintaining semantic diversity, accepting a skew in HAS." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.49, + 0.314, + 0.508 + ], + "angle": 0, + "content": "4 Experiments" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.524, + 0.825, + 0.609 + ], + "angle": 0, + "content": "We first present our experimental setup in Section 4.1. Then, we evaluate our model from three key perspectives: (1) Comparison with state-of-the-art methods in Section 4.2: We evaluate our model across multiple TVG benchmarks to assess its performance against existing approaches; (2) Ablation studies and analyses in Section 4.3: We examine the individual contributions of each component in our framework to better understand their roles in overall performance. We also compare RL and SFT strategies across TVG, short video QA, and long video QA tasks." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.629, + 0.353, + 0.645 + ], + "angle": 0, + "content": "4.1 Experimental Setup" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.657, + 0.827, + 0.837 + ], + "angle": 0, + "content": "**Benchmarks.** We evaluate our model on a wide range of benchmarks covering both temporal video grounding and general video understanding tasks, including: (1) Charades-STA [46] contains 6,672 long videos capturing indoor human activities. 
The official split for the TVG task includes 12,408 clip-query pairs for training and 3,720 for testing. (2) ActivityNet [5] comprises 20K long videos with an average of 3.65 clip-query pairs per video. Following previous work in fine-tuning setting [62, 22] for the TVG task, we use the standard dataset splits with 37,421 training, 17,505 validation, and 17,031 test samples. (3) MVBench [26] is a short video QA benchmark focused on temporal reasoning. It includes 4K QA pairs for 20 types of tasks. (4) TempCompass [31] assesses fine-grained temporal understanding with 410 short videos. We use all multi-choice QA tasks except for the video captioning task. (5) EgoSchema [33] features 5K egocentric video clips, each approximately 3 minutes long, with temporally demanding QA pairs. (6) VideoMME [13] is a general video QA benchmark covering diverse domains. It contains 2.7K QA samples over videos of varied lengths, ranging from 11 seconds to 1 hour. We use the long video split for evaluation." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.842, + 0.827, + 0.913 + ], + "angle": 0, + "content": "Implementation details. Unless otherwise specified, we use Qwen2.5-VL-7B [4] as the base model. To strike a balance between training efficiency and memory consumption, we sample video frames at 2 FPS and adaptively resize each video input to contain approximately 2.8 million pixels. For instance, a 50-second video yields 100 frames, each with a resolution of roughly \\(96 \\times 96 \\times 3\\). During the reinforcement fine-tuning phase, we train for 5 epochs using a batch size of 8 and select the final" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.504, + 0.948 + ], + "angle": 0, + "content": "7" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.089, + 0.828, + 0.145 + ], + "angle": 0, + "content": "Table 1: Performance of temporal video grounding on Charades-STA, ActivityNet, and TVGBench. The methods marked in gray* represent fine-tuning on corresponding benchmarks, while those in black indicate zero-shot settings. We compare our Time-R1 against existing 7B open-source LVLMs, as well as state-of-the-art VLP models." + }, + { + "type": "table", + "bbox": [ + 0.188, + 0.156, + 0.81, + 0.381 + ], + "angle": 0, + "content": "
| Type | Method | Charades-STA |  |  | ActivityNet |  |  | TVGBench |  |  |
|  |  | R1@0.3 | R1@0.5 | R1@0.7 | R1@0.3 | R1@0.5 | R1@0.7 | R1@0.3 | R1@0.5 | R1@0.7 |
| VLP | 2D-TAN* [63] | 57.3 | 45.8 | 27.9 | 60.4 | 43.4 | 25.0 | - | - | - |
|  | UniVTG* [30] | 72.6 | 60.2 | 38.6 | 56.1 | 43.4 | 24.3 | - | - | - |
|  | SSRN* [66] | - | 65.5 | 42.6 | - | 54.5 | 33.2 | - | - | - |
|  | SnAG* [37] | - | 64.6 | 46.2 | - | 48.6 | 30.6 | - | - | - |
|  | EaTR* [22] | - | 68.4 | 44.9 | - | 58.2 | 37.6 | - | - | - |
|  | Gemini-2.5-Pro [10] | - | - | - | - | - | - | 39.1 | 24.4 | 12.8 |
| SFT | ChatVTG [41] | 52.7 | 33.0 | 15.9 | 40.7 | 22.5 | 9.4 | - | - | - |
|  | TimeChat [44] | - | 32.2 | 13.4 | 36.2 | 20.2 | 9.5 | 22.4 | 11.9 | 5.3 |
|  | HawkEye [50] | 50.6 | 31.4 | 14.5 | 49.1 | 29.3 | 10.7 | - | - | - |
|  | VTimeLLM [21] | 51.0 | 27.5 | 11.4 | 44.0 | 27.8 | 14.3 | - | - | - |
|  | TimeSuite [60] | 69.9 | 48.7 | 24.0 | - | - | - | 31.1 | 18.0 | 8.9 |
|  | VideoChat-Flash [27] | 74.5 | 53.1 | 27.6 | - | - | - | 32.8 | 19.8 | 10.4 |
|  | TRACE [18] | - | 40.3 | 19.4 | - | 37.7 | 24.0 | 37.0 | 25.5 | 14.6 |
|  | HawkEye* [50] | 72.5 | 58.3 | 28.8 | 55.9 | 34.7 | 17.9 | - | - | - |
|  | TimeSuite* [60] | 79.4 | 67.1 | 43.0 | - | - | - | - | - | - |
| RL | Time-R1 (ours) | 78.1 | 60.8 | 35.3 | 58.6 | 39.0 | 21.4 | 41.8 | 29.4 | 16.4 |
|  | Time-R1 (ours)* | 82.8 | 72.2 | 50.1 | 73.3 | 55.6 | 34.0 | - | - | - |
" + }, + { + "type": "image", + "bbox": [ + 0.174, + 0.395, + 0.825, + 0.552 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.561, + 0.825, + 0.604 + ], + "angle": 0, + "content": "Figure 4: Comparison between post-training paradigms across various tasks, including temporal video grounding, short video QA, and long video QA. Both \"SFT\" and \"RL\" full-finetune the LLM, while \"SFT-LoRA\" denotes finetuning the LLM with LoRA [20]. The \"Base\" is Qwen2.5-VL-7B." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.631, + 0.825, + 0.659 + ], + "angle": 0, + "content": "checkpoint for evaluation. For fine-tuning on downstream benchmarks, we train for 2 epochs. All experiments are conducted on a cluster with \\(8 \\times\\) NVIDIA A100 GPUs." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.665, + 0.825, + 0.737 + ], + "angle": 0, + "content": "Evaluation metrics. For TVG, following [44, 60], we adopt the “R1@m” evaluation protocol to compare with state-of-the-art models, which computes the percentage of samples where the top-1 predicted segment has an IoU greater than a threshold \\( m \\), with \\( m \\in \\{0.3, 0.5, 0.7\\} \\). For brevity, we also adopt mIoU, which calculates the average IoU on all testing data as an alternative metric. For video QA, we report accuracy as the evaluation metric." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.752, + 0.45, + 0.767 + ], + "angle": 0, + "content": "4.2 Comparison with State-of-the-Art" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.777, + 0.825, + 0.807 + ], + "angle": 0, + "content": "We compare Time-R1 with state-of-the-art TVG methods, including both traditional video-language pre-training models (VLP) and recent large video-language models fine-tuned via SFT." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.812, + 0.826, + 0.91 + ], + "angle": 0, + "content": "Time-R1 surpasses SFT-based models in zero-shot settings. As shown in Table 1, in the zero-shot setting, Time-R1 demonstrates strong performance, outperforming SFT-based models that rely on large-scale supervision. Despite using only \\(2.5\\mathrm{K}\\) samples for RL, Time-R1 achieves leading results across multiple benchmarks. For example, on Charades-STA, Time-R1 attains an R1@0.7 score of 35.3, outperforming VideoChat-Flash (27.6) and TimeSuite (24.0). On ActivityNet, it achieves R1@0.7 score of 21.4, surpassing VTimeLLM (14.3) and TimeChat (10.7). On TVGBench, it scores 16.4, outperforming TRACE (14.6) and Gemini-2.5-Pro (12.8)." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.504, + 0.948 + ], + "angle": 0, + "content": "8" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.171, + 0.092, + 0.488, + 0.273 + ], + "angle": 0, + "content": "Time-R1* outperforms all SFT-based LVLMs and many traditional VLP-based models. Time-R1* consistently outperforms both traditional VLP-based and SFT-based models on the TVG task. On Charades-STA, it exceeds EaTR and fine-tuned TimeSuite by 3.4-7.1 percentage points across R1@0.3 to R1@0.7. Notably, Time-R1* surpasses TimeSuite despite using far fewer RL samples compared to TimeSuite's 349K SFT examples. On the more challenging ActivityNet dataset, Time-R1* also outperforms SSRN [66] and TRACE [18], achieving significant gains across key metrics." + }, + { + "type": "table_caption", + "bbox": [ + 0.495, + 0.093, + 0.827, + 0.138 + ], + "angle": 0, + "content": "Table 2: Ablation of Time-R1-7B training. 
GF, ME, and SF refer to Gaussian Filtering, Multi-Epoch training, and Sample Filtering per epoch, respectively." + }, + { + "type": "table", + "bbox": [ + 0.508, + 0.144, + 0.812, + 0.261 + ], + "angle": 0, + "content": "
| Row | tIoU GF ME SF | R1@0.3 | R1@0.5 | R1@0.7 |
| 1 | X X X X | 38.0 | 24.8 | 13.2 |
| 2 | X X X | 36.0 | 23.6 | 12.9 |
| 3 | X X X | 37.2 | 25.0 | 13.4 |
| 4 | X X X | 39.9 | 26.0 | 14.2 |
| 5 | X X | 38.4 | 25.6 | 14.1 |
| 6 | X X | 39.4 | 26.5 | 16.4 |
| 7 | X | 41.6 | 28.5 | 15.6 |
| 8 |  | 41.8 | 29.4 | 16.4 |
" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.288, + 0.32, + 0.303 + ], + "angle": 0, + "content": "4.3 Ablation Study" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.313, + 0.825, + 0.342 + ], + "angle": 0, + "content": "We conduct a detailed ablation on the Time-R1-7B model to investigate the contribution of various training strategies." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.348, + 0.825, + 0.432 + ], + "angle": 0, + "content": "Utility of TimeRFT and Time-R1 components. As shown in Table 2, both Gaussian Filtering (GF) and Multi-Epoch training (ME) individually improve performance, with ME yielding a more substantial gain, improving from R1@0.7 of 13.2 in row 1 to 14.2 in row 4. Notably, the combination of tIoU supervision and ME (Row 6) leads to a significant boost across all metrics. As more components are added, GF and ME (Row 7), followed by Sample Filtering (SF) in Row 8, the performance continues to improve, ultimately reaching R1@0.5 of 29.4 and R1@0.7 of 16.4." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.438, + 0.553, + 0.631 + ], + "angle": 0, + "content": "Generalization of RL vs. SFT. As shown in Figure 4, when both RL and SFT perform full fine-tuning of the LLM using a small amount of data, SFT significantly degrades the model's generalization ability across all tasks, whereas RL consistently preserves generalization. While SFT can alleviate this issue by adopting LoRA-based fine-tuning (SFT-LoRA), RL still demonstrates stronger generalization and additionally improves performance on video QA tasks, whereas SFT-LoRA leads to a decline. For example, on ActivityNet, RL improves the mIoU from 16.3 to 29.2, while SFT degrades it to 15.4, and SFT-LoRA improves it to 25.9. On the VideoMMe video QA benchmark, RL increases the performance from 53.0 to 54.2, whereas SFT-LoRA decreases it to 51.7." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.637, + 0.552, + 0.707 + ], + "angle": 0, + "content": "Impact of cold start. As shown in Figure 5, cold start boosts the performance of the 3B model and reduces the number of thinking tokens for both models. We attribute this to the cold start's function to suppress hallucinations, which tend to be more prevalent in weaker models." + }, + { + "type": "image", + "bbox": [ + 0.571, + 0.459, + 0.819, + 0.607 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.56, + 0.622, + 0.825, + 0.678 + ], + "angle": 0, + "content": "Figure 5: Impact of SFT-based cold start on IoU performance and thinking token count, with token counts marked by \\(\\star\\) on dashed lines." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.726, + 0.303, + 0.741 + ], + "angle": 0, + "content": "5 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.757, + 0.827, + 0.91 + ], + "angle": 0, + "content": "In this work, we present Time-R1, a reinforcement learning based post-training framework that significantly improves the generalization of Large Vision-Language Models for Temporal Video Grounding. Unlike prior methods that rely on large-scale supervised fine-tuning, Time-R1 leverages a verifiable reward signal to unlock strong temporal reasoning from pretrained models using limited data. Our contributions include: (1) Time-R1, a reasoning-guided post-training framework that enhances TVG via RL; (2) TimeRFT, a curated dataset and training strategy that fosters temporal grounding; (3) TVGBench, a small yet comprehensive benchmark for evaluating LVLMs on TVG. 
Extensive experiments show that Time-R1 achieves SoTA performance across TVG benchmarks in both zero-shot and fine-tuned settings, surpassing prior LVLMs and traditional VLP-based models, while also improving general video understanding. We hope this work inspires future directions in data-efficient and generalizable video-language understanding via reinforcement learning." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.505, + 0.948 + ], + "angle": 0, + "content": "9" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.174, + 0.09, + 0.27, + 0.106 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.116, + 0.826, + 0.142 + ], + "angle": 0, + "content": "[1] Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025. 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.147, + 0.827, + 0.186 + ], + "angle": 0, + "content": "[2] Triantafyllos Afouras, Effrosyni Mavroudi, Tushar Nagarajan, Huiyu Wang, and Lorenzo Torresani. Ht-step: Aligning instructional articles with how-to videos. Advances in Neural Information Processing Systems, 36:50310-50326, 2023. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.19, + 0.827, + 0.23 + ], + "angle": 0, + "content": "[3] Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. Localizing moments in video with natural language. In Proceedings of the IEEE international conference on computer vision, pages 5803-5812, 2017. 1, 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.234, + 0.827, + 0.296 + ], + "angle": 0, + "content": "[4] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-v1 technical report. arXiv preprint arXiv:2502.13923, 2025.7" + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.302, + 0.825, + 0.342 + ], + "angle": 0, + "content": "[5] Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 961-970, 2015. 1, 3, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.346, + 0.827, + 0.384 + ], + "angle": 0, + "content": "[6] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6299-6308, 2017. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.389, + 0.825, + 0.417 + ], + "angle": 0, + "content": "[7] Liang Chen, Lei Li, Haozhe Zhao, Yifan Song, and Vinci. R1-v: Reinforcing super generalization ability in vision-language models with less than $3. https://github.com/Deep-Agent/R1-V, 2025.4" + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.42, + 0.825, + 0.46 + ], + "angle": 0, + "content": "[8] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructlip: Towards general-purpose vision-language models with instruction tuning, 2023. 
14" + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.464, + 0.825, + 0.491 + ], + "angle": 0, + "content": "[9] Trevor Darrell and Alex Pentland. Space-time gestures. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 335-340. IEEE, 1993. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.494, + 0.825, + 0.522 + ], + "angle": 0, + "content": "[10] Google DeepMind. Gemini 2.5: Our most intelligent ai model. Google DeepMind, 2025. Model ID: gemini-2.5-pro-preview-03-25. 8, 14, 15" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.525, + 0.825, + 0.552 + ], + "angle": 0, + "content": "[11] DeepSeek-AI. Deepseek llm: Scaling open-source language models with longtermism. arXiv preprint arXiv:2401.02954, 2024. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.556, + 0.825, + 0.595 + ], + "angle": 0, + "content": "[12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171–4186, 2019. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.6, + 0.827, + 0.651 + ], + "angle": 0, + "content": "[13] Chaoyou Fu, Yuhan Dai, Yongdong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, et al. Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 3, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.656, + 0.825, + 0.682 + ], + "angle": 0, + "content": "[14] Adrien Gaidon, Zaid Harchaoui, and Cordelia Schmid. Temporal localization of actions with actons. IEEE transactions on pattern analysis and machine intelligence, 35(11):2782-2795, 2013. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.687, + 0.827, + 0.725 + ], + "angle": 0, + "content": "[15] Jiyang Gao, Chen Sun, Zhenheng Yang, and Ram Nevatia. Tall: Temporal activity localization via language query. In Proceedings of the IEEE international conference on computer vision, pages 5267-5275, 2017. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.73, + 0.827, + 0.782 + ], + "angle": 0, + "content": "[16] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18995-19012, 2022. 1, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.786, + 0.827, + 0.837 + ], + "angle": 0, + "content": "[17] Yongxin Guo, Jingyu Liu, Mingda Li, Dingxin Cheng, Xiaoying Tang, Dianbo Sui, Qingbin Liu, Xi Chen, and Kevin Zhao. Vtg-llm: Integrating timestamp knowledge into video llms for enhanced video temporal grounding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 3302–3310, 2025. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.842, + 0.825, + 0.869 + ], + "angle": 0, + "content": "[18] Yongxin Guo, Jingyu Liu, Mingda Li, Qingbin Liu, Xi Chen, and Xiaoying Tang. Trace: Temporal grounding video llm via causal event modeling. arXiv preprint arXiv:2410.05643, 2024. 
2, 3, 8, 9, 14, 15" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.873, + 0.825, + 0.912 + ], + "angle": 0, + "content": "[19] Tanveer Hannan, Md Mohaiminul Islam, Jindong Gu, Thomas Seidl, and Gedas Bertasius. Revisionlmm: Recursive vision-language model for temporal grounding in hour-long videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 3" + }, + { + "type": "list", + "bbox": [ + 0.174, + 0.116, + 0.827, + 0.912 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "10" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.092, + 0.826, + 0.12 + ], + "angle": 0, + "content": "[20] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuzhhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. *ICLR*, 1(2):3, 2022. 8, 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.123, + 0.827, + 0.163 + ], + "angle": 0, + "content": "[21] Bin Huang, Xin Wang, Hong Chen, Zihan Song, and Wenwu Zhu. Vtimellm: Empower llm to grasp video moments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14271-14280, 2024. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.167, + 0.827, + 0.207 + ], + "angle": 0, + "content": "[22] Jinhyun Jang, Jungin Park, Jin Kim, Hyeongjun Kwon, and Kwanghoon Sohn. Knowing where to focus: Event-aware transformer for video grounding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13846-13856, 2023. 1, 2, 3, 7, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.211, + 0.827, + 0.263 + ], + "angle": 0, + "content": "[23] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023. 15" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.268, + 0.825, + 0.295 + ], + "angle": 0, + "content": "[24] Ivan Laptev and Patrick Pérez. Retrieving actions in movies. In 2007 IEEE 11th International Conference on Computer Vision, pages 1-8. IEEE, 2007. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.3, + 0.827, + 0.338 + ], + "angle": 0, + "content": "[25] Jiaze Li, Yaya Shi, Zongyang Ma, Haoran Xu, Feng Cheng, Huihui Xiao, Ruiwen Kang, Fan Yang, Tingting Gao, and Di Zhang. imove: Instance-motion-aware video understanding. arXiv preprint arXiv:2502.11594, 2025.3" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.344, + 0.827, + 0.384 + ], + "angle": 0, + "content": "[26] Kunchang Li, Yali Wang, Yinan He, Yizhuo Li, Yi Wang, Yi Liu, Zun Wang, Jilan Xu, Guo Chen, Ping Luo, et al. Mvbench: A comprehensive multi-modal video understanding benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22195-22206, 2024. 3, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.388, + 0.825, + 0.428 + ], + "angle": 0, + "content": "[27] Xinhao Li, Yi Wang, Jiashuo Yu, Xiangyu Zeng, Yuhan Zhu, Haian Huang, Jianfei Gao, Kunchang Li, Yinan He, Chenting Wang, Yu Qiao, Yali Wang, and Limin Wang. Videochat-flash: Hierarchical compression for long-context video modeling. arXiv preprint arXiv:2501.00574, 2024. 
3, 6, 8, 14, 15" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.432, + 0.827, + 0.459 + ], + "angle": 0, + "content": "[28] Zhenyi Liao, Qingsong Xie, Yanhao Zhang, Zijian Kong, Haonan Lu, Zhenyu Yang, and Zhijie Deng. Improved visual-spatial reasoning via r1-zero-like training. arXiv preprint arXiv:2504.00883, 2025. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.464, + 0.827, + 0.503 + ], + "angle": 0, + "content": "[29] Kevin Qinghong Lin, Jinpeng Wang, Mattia Soldan, Michael Wray, Rui Yan, Eric Z Xu, Difei Gao, Rong-Cheng Tu, Wenzhe Zhao, Weijie Kong, et al. Egocentric video-language pretraining. Advances in Neural Information Processing Systems, 35:7575-7586, 2022. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.508, + 0.827, + 0.548 + ], + "angle": 0, + "content": "[30] Kevin Qinghong Lin, Pengchuan Zhang, Joya Chen, Shraman Pramanick, Difei Gao, Alex Jinpeng Wang, Rui Yan, and Mike Zheng Shou. Univtg: Towards unified video-language temporal grounding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2794–2804, 2023. 1, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.552, + 0.827, + 0.579 + ], + "angle": 0, + "content": "[31] Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, Lei Li, Sishuo Chen, Xu Sun, and Lu Hou. Tempcompass: Do video llms really understand videos? arXiv preprint arXiv:2403.00476, 2024. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.583, + 0.827, + 0.61 + ], + "angle": 0, + "content": "[32] Ziyu Liu, Zeyi Sun, Yuhang Zang, Xiaoyi Dong, Yuhang Cao, Haodong Duan, Dahua Lin, and Jiaqi Wang. Visual-rft: Visual reinforcement fine-tuning. arXiv preprint arXiv:2503.01785, 2025. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.615, + 0.827, + 0.653 + ], + "angle": 0, + "content": "[33] Karttikeya Mangalam, Raiymbek Akshulakov, and Jitendra Malik. Egoschema: A diagnostic benchmark for very long-form video language understanding. Advances in Neural Information Processing Systems, 36:46212-46244, 2023. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.659, + 0.825, + 0.686 + ], + "angle": 0, + "content": "[34] Gabriel B Margolis and Pulkit Agrawal. Walk these ways: Tuning robot control for generalization with multiplicity of behavior. In Conference on Robot Learning, pages 22-31. PMLR, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.69, + 0.827, + 0.73 + ], + "angle": 0, + "content": "[35] Fanqing Meng, Lingxiao Du, Zongkai Liu, Zhixiang Zhou, Quanfeng Lu, Daocheng Fu, Botian Shi, Wenhai Wang, Junjun He, Kaipeng Zhang, et al. Mm-eureka: Exploring visual aha moment with rule-based large-scale reinforcement learning. arXiv preprint arXiv:2503.07365, 2025. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.734, + 0.827, + 0.773 + ], + "angle": 0, + "content": "[36] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2630-2640, 2019. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.778, + 0.825, + 0.805 + ], + "angle": 0, + "content": "[37] Fangzhou Mu, Sicheng Mo, and Yin Li. Snag: Scalable and accurate video grounding. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 18930-18940, 2024. 
3, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.81, + 0.827, + 0.85 + ], + "angle": 0, + "content": "[38] Andreea-Maria Oncescu, Joao F Henriques, Yang Liu, Andrew Zisserman, and Samuel Albanie. Queryd: A video dataset with high-quality text and audio narrations. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2265-2269. IEEE, 2021. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.854, + 0.394, + 0.869 + ], + "angle": 0, + "content": "[39] OpenAI. Openai o1, 2024. 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.873, + 0.827, + 0.912 + ], + "angle": 0, + "content": "[40] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744, 2022. 4" + }, + { + "type": "list", + "bbox": [ + 0.174, + 0.092, + 0.827, + 0.912 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "11" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.092, + 0.826, + 0.132 + ], + "angle": 0, + "content": "[41] Mengxue Qu, Xiaodong Chen, Wu Liu, Alicia Li, and Yao Zhao. Chatvtg: Video temporal grounding via chat with video dialogue large language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1847-1856, 2024. 3, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.134, + 0.827, + 0.186 + ], + "angle": 0, + "content": "[42] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PmLR, 2021. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.189, + 0.827, + 0.229 + ], + "angle": 0, + "content": "[43] Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. Grounding action descriptions in videos. Transactions of the Association for Computational Linguistics, 1:25-36, 2013. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.232, + 0.825, + 0.273 + ], + "angle": 0, + "content": "[44] Shuhuai Ren, Linli Yao, Shicheng Li, Xu Sun, and Lu Hou. Timechat: A time-sensitive multimodal large language model for long video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14313-14323, 2024. 1, 3, 6, 8, 14, 15" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.275, + 0.825, + 0.315 + ], + "angle": 0, + "content": "[45] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024. 16" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.317, + 0.825, + 0.357 + ], + "angle": 0, + "content": "[46] Gunnar A Sigurdsson, Gúl Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. Hollywood in homes: Crowdsourcing data collection for activity understanding. In Proceedings of the European Conference on Computer Vision (ECCV), 2016. 
1, 3, 6, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.36, + 0.825, + 0.401 + ], + "angle": 0, + "content": "[47] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815, 2017. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.402, + 0.827, + 0.441 + ], + "angle": 0, + "content": "[48] Huajie Tan, Yuheng Ji, Xiaoshuai Hao, Minglan Lin, Pengwei Wang, Zhongyuan Wang, and Shanghang Zhang. Reason-rft: Reinforcement fine-tuning for visual reasoning. arXiv preprint arXiv:2503.20752, 2025.4" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.444, + 0.825, + 0.485 + ], + "angle": 0, + "content": "[49] Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinhao Li, Guo Chen, Xinyuan Chen, Yaohui Wang, et al. Internvid: A large-scale video-text dataset for multimodal understanding and generation. arXiv preprint arXiv:2307.06942, 2023. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.487, + 0.827, + 0.514 + ], + "angle": 0, + "content": "[50] Yueqian Wang, Xiaojun Meng, Jianxin Liang, Yuxuan Wang, Qun Liu, and Dongyan Zhao. Hawkeye: Training video-text llms for grounding text in videos, 2024. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.517, + 0.827, + 0.595 + ], + "angle": 0, + "content": "[51] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online, October 2020. Association for Computational Linguistics. 15" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.597, + 0.825, + 0.637 + ], + "angle": 0, + "content": "[52] Yongliang Wu, Xinting Hu, Yuyang Sun, Yizhou Zhou, Wenbo Zhu, Fengyun Rao, Bernt Schiele, and Xu Yang. Number it: Temporal grounding videos like flipping manga. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.639, + 0.827, + 0.692 + ], + "angle": 0, + "content": "[53] Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, and Cordelia Schmid. Vid2seq: Large-scale pretraining of a visual language model for dense video captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10714-10726, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.695, + 0.827, + 0.748 + ], + "angle": 0, + "content": "[54] Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, and Cordelia Schmid. Vid2seq: Large-scale pretraining of a visual language model for dense video captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10714-10726, 2023. 
6, 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.75, + 0.827, + 0.814 + ], + "angle": 0, + "content": "[55] Jingkang Yang, Shuai Liu, Hongming Guo, Yuhao Dong, Xiamengwei Zhang, Sicheng Zhang, Pengyun Wang, Zitang Zhou, Binzhu Xie, Ziyue Wang, Bei Ouyang, Zhengyu Lin, Marco Cominelli, Zhongang Cai, Yuanhan Zhang, Peiyuan Zhang, Fangzhou Hong, Joerg Widmer, Francesco Gringoli, Lei Yang, Bo Li, and Ziwei Liu. Egolife: Towards egocentric life assistant. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.818, + 0.827, + 0.857 + ], + "angle": 0, + "content": "[56] Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, et al. Dapo: An open-source llm reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025. 4, 14, 16" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.86, + 0.825, + 0.913 + ], + "angle": 0, + "content": "[57] Tianyu Yu, Yuan Yao, Haoye Zhang, Taiwen He, Yifeng Han, Ganqu Cui, Jinyi Hu, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun, et al. Rlhf-v: Towards trustworthy mllms via behavior alignment from fine-grained correctional human feedback. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13807-13816, 2024. 4" + }, + { + "type": "list", + "bbox": [ + 0.174, + 0.092, + 0.827, + 0.913 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "12" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.092, + 0.826, + 0.132 + ], + "angle": 0, + "content": "[58] Yitian Yuan, Xiaohan Lan, Xin Wang, Long Chen, Zhi Wang, and Wenwu Zhu. A closer look at temporal sentence grounding in videos: Dataset and metric. In Proceedings of the 2nd international workshop on human-centric multimedia analysis, pages 13-21, 2021. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.135, + 0.827, + 0.175 + ], + "angle": 0, + "content": "[59] Abhay Zala, Jaemin Cho, Satwik Kottur, Xilun Chen, Barlas Oguz, Yashar Mehdad, and Mohit Bansal. Hierarchical video-moment retrieval and step-captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 23056-23065, 2023. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.178, + 0.826, + 0.23 + ], + "angle": 0, + "content": "[60] Xiangyu Zeng, Kunchang Li, Chenting Wang, Xinhao Li, Tianxiang Jiang, Ziang Yan, Songze Li, Yansong Shi, Zhengrong Yue, Yi Wang, Yali Wang, Yu Qiao, and Limin Wang. Timesuite: Improving MLLMs for long video understanding via grounded tuning. In The Thirteenth International Conference on Learning Representations, 2025. 1, 3, 6, 8, 14, 15" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.233, + 0.827, + 0.272 + ], + "angle": 0, + "content": "[61] Hao Zhang, Aixin Sun, Wei Jing, and Joey Tianyi Zhou. Temporal sentence grounding in videos: A survey and future directions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(8):10443-10465, 2023. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.276, + 0.826, + 0.315 + ], + "angle": 0, + "content": "[62] Songyang Zhang, Houwen Peng, Jianlong Fu, Yijuan Lu, and Jiebo Luo. Multi-scale 2d temporal adjacency networks for moment localization with natural language. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. 
7" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.319, + 0.826, + 0.358 + ], + "angle": 0, + "content": "[63] Songyang Zhang, Houwen Peng, Jianlong Fu, and Jiebo Luo. Learning 2d temporal adjacent networks for moment localization with natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020. 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.362, + 0.826, + 0.389 + ], + "angle": 0, + "content": "[64] Xingjian Zhang, Siwei Wen, Wenjun Wu, and Lei Huang. Tinyllava-video-r1: Towards smaller lmms for video reasoning. arXiv preprint arXiv:2504.09641, 2025. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.392, + 0.826, + 0.419 + ], + "angle": 0, + "content": "[65] Henghao Zhao, Ge-Peng Ji, Rui Yan, Huan Xiong, and Zechao Li. Videoexpert: Augmented llm for temporal-sensitive video understanding. arXiv preprint arXiv:2504.07519, 2025. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.422, + 0.826, + 0.472 + ], + "angle": 0, + "content": "[66] Jiahao Zhu, Daizong Liu, Pan Zhou, Xing Di, Yu Cheng, Song Yang, Wenzheng Xu, Zichuan Xu, Yao Wan, Lichao Sun, and Zeyu Xiong. Rethinking the video sampling and reasoning strategies for temporal sentence grounding. In Findings of the Association for Computational Linguistics: EMNLP 2022, 2022. 8, 9" + }, + { + "type": "list", + "bbox": [ + 0.174, + 0.092, + 0.827, + 0.472 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "13" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.174, + 0.092, + 0.826, + 0.106 + ], + "angle": 0, + "content": "A Limitations 14" + }, + { + "type": "text", + "bbox": [ + 0.174, + 0.126, + 0.826, + 0.141 + ], + "angle": 0, + "content": "B Implementation Details 14" + }, + { + "type": "text", + "bbox": [ + 0.174, + 0.16, + 0.826, + 0.175 + ], + "angle": 0, + "content": "C Additional Analyses 15" + }, + { + "type": "text", + "bbox": [ + 0.174, + 0.195, + 0.826, + 0.209 + ], + "angle": 0, + "content": "D Ablation Studies 15" + }, + { + "type": "text", + "bbox": [ + 0.174, + 0.229, + 0.826, + 0.244 + ], + "angle": 0, + "content": "E Qualitative Result 16" + }, + { + "type": "list", + "bbox": [ + 0.174, + 0.092, + 0.826, + 0.244 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.174, + 0.266, + 0.308, + 0.281 + ], + "angle": 0, + "content": "A Limitations" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.298, + 0.827, + 0.382 + ], + "angle": 0, + "content": "Despite achieving notable improvements on the TVG task, our approach still has several limitations. First, Time-R1 suffers from slower training and inference speeds, primarily due to its large model size and reliance on autoregressive text generation. Second, to manage GPU memory consumption, we use a relatively low frame sampling rate, which may result in the loss of fine-grained motion information across frames. Finally, Time-R1 currently cannot handle ultra-long videos, limiting its applicability in scenarios such as full-length movie understanding." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.403, + 0.406, + 0.42 + ], + "angle": 0, + "content": "B Implementation Details" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.435, + 0.825, + 0.514 + ], + "angle": 0, + "content": "Details of Time-R1 framework. Inspired by DAPO [56], we adopt its token-level loss for training, rather than the sample-level loss used in GRPO. 
Apart from minor changes to the loss, all settings are identical to GRPO. Besides, we find that other techniques introduced in DAPO do not benefit the TVG task, thus aborting other techniques. We full-finetune the LLM parameters at every step, thus \\(\\frac{\\pi_{\\theta}(o_i)}{\\pi_{\\theta_{\\mathrm{old}}}(o_i)} = 1\\). The sample number \\(G\\) is set to 8. The coefficient \\(\\beta\\) is set to 0.04." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.518, + 0.827, + 0.617 + ], + "angle": 0, + "content": "Details of TimeRFT training. For RFT data filtering, we use a Gaussian distribution with a fixed variance of 0.2 while varying the mean to control sample selection. In our cold start phase, we construct 150 samples from our training data sources (e.g., YT-Temporal [54]) to fine-tune the LLM using LoRA [20], with a LoRA rank of 64 and a LoRA alpha of 128. All of our results are reported based on the final training epoch. For RL, we use a learning rate of 1e-6 with the AdamW optimizer with \\(\\beta_{1} = 0.9\\), \\(\\beta_{2} = 0.999\\), and a linear scheduler to decay the learning rate from 1e-6 to 0. We use a batch size of 8 with gradient accumulation set to 2. It requires 15 hours of training on 8 A100 GPUs." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.622, + 0.827, + 0.678 + ], + "angle": 0, + "content": "Details of our evaluation prompts. As shown in Figure 11, for temporal video grounding, the prompts used for training and testing are designed to encourage the model to reason before responding, following a template-based answer format. For VideoQA, we have two versions of prompts: one with CoT and one without CoT." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.684, + 0.825, + 0.713 + ], + "angle": 0, + "content": "Details of TVG baseline methods and implementations. We evaluate the baselines on TVGBench using their original best-performing setting, focusing primarily on video input and prompt design." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.718, + 0.825, + 0.746 + ], + "angle": 0, + "content": "- TimeChat [44] is built upon the InstructBLIP [8] architecture and introduces a video Q-former to encode video tokens. It operates at a resolution of 224 and samples 96 frames." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.747, + 0.825, + 0.787 + ], + "angle": 0, + "content": "- TRACE [18] treats each combination of timestamp, saliency score, and caption as a discrete event and enables the LVLM to autoregressively generate event sequences. It operates at a higher resolution of 336 and samples 128 frames." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.788, + 0.825, + 0.828 + ], + "angle": 0, + "content": "- TimeSuite [60] introduces a token shuffling strategy to compress long video token sequences and incorporates positional encoding to enhance visual understanding. It adopts a resolution of 224 and samples 128 frames." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.829, + 0.825, + 0.87 + ], + "angle": 0, + "content": "- VideoChat-Flash [27] proposes a progressive visual token dropping mechanism within intermediate LLM layers to compress video inputs and extend the effective context length. It uses a resolution of 448 and samples video at 1 fps, with a maximum of 512 frames." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.871, + 0.825, + 0.912 + ], + "angle": 0, + "content": "- Gemini-2.5-Pro [10]: Gemini-2.5-Pro is a state-of-the-art video understanding model capable of reasoning over videos exceeding one hour in length. 
It supports video question answering and temporal localization tasks." + }, + { + "type": "list", + "bbox": [ + 0.171, + 0.718, + 0.825, + 0.912 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "14" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.177, + 0.089, + 0.82, + 0.105 + ], + "angle": 0, + "content": "Table 3: Comparison of different approaches on TVGBench for all types. We use mIoU as metric." + }, + { + "type": "table", + "bbox": [ + 0.212, + 0.115, + 0.785, + 0.212 + ], + "angle": 0, + "content": "
Method | EC | ES | HAC | HAP | HAS | HP | OA | OC | OEC | OES | OT
TimeChat [44] | 22.3 | 32.8 | 16.6 | 9.8 | 14.6 | 35.1 | 15.0 | 9.2 | 2.4 | 18.0 | 10.2
TimeSuite [60] | 27.3 | 39.6 | 14.2 | 12.8 | 24.9 | 39.6 | 14.6 | 13.9 | 6.7 | 32.6 | 14.3
TRACE [18] | 57.1 | 66.8 | 25.9 | 17.5 | 26.5 | 45.1 | 17.8 | 22.1 | 12.5 | 36.8 | 24.9
VideoChat-Flash [27] | 38.3 | 47.2 | 12.9 | 13.9 | 27.1 | 39.4 | 14.9 | 12.7 | 6.5 | 24.3 | 12.9
Gemini-2.5-Pro [10] | 46.7 | 45.3 | 21.1 | 27.6 | 30.9 | 39.9 | 23.0 | 31.1 | 14.1 | 35.9 | 17.8
Time-R1 (ours) | 49.3 | 65.3 | 28.3 | 24.3 | 39.3 | 56.2 | 26.3 | 21.8 | 9.0 | 32.7 | 21.8
" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.239, + 0.828, + 0.325 + ], + "angle": 0, + "content": "Details of our implemented SFT baselines. We implemented two versions of SFT fine-tuning: one is full-parameter fine-tuning of the LLM (SFT), and the other is LoRA-based fine-tuning of the LLM (SFT-LoRA). For SFT-LoRA, the LoRA rank is set to 64, and the LoRA alpha is set to 128. Both configurations use the following settings: a learning rate of 2e-5, the AdamW optimizer with \\(\\beta_{1} = 0.9\\), \\(\\beta_{2} = 0.999\\), a weight decay of 0, the batch size of 8, and accumulation steps of 2. We fine-tune for 5 epochs on our 2.5K data, and use a linear scheduler to gradually decay the learning rate to 0." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.343, + 0.381, + 0.361 + ], + "angle": 0, + "content": "C Additional Analyses" + }, + { + "type": "text", + "bbox": [ + 0.169, + 0.375, + 0.825, + 0.472 + ], + "angle": 0, + "content": "In-depth comparisons of different approaches on TVGBench by semantic type. Table 3 provides a detailed performance comparison of various methods on the TVGBench across different semantic categories. Specifically, the abbreviations represent: EC (Environment Change), ES (Environment State), HAC (Human Action - Complex), HAP (Human Action - Procedural), HAS (Human Action - Simple), HP (Human Pose), OA (Object Attribute), OC (Object Counting), OEC (Object Existence - Complex), OES (Object Existence - Simple), and OT (Object Transition). Detailed definition and construction process can be found in Figure 15." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.479, + 0.827, + 0.688 + ], + "angle": 0, + "content": "Time-R1 demonstrates strong competitiveness across multiple semantic categories. First, particularly in the four tasks of HAC, HAS, HP, and OA, Time-R1 achieved the highest scores among all compared methods, showcasing its excellent ability in understanding the details of human actions and identifying object features. For example, Time-R1 achieves an mIoU of 56.2 on HP, which is 11.1 points higher than the second-best method, TRACE, with an mIoU of 45.1. On HAS, Time-R1 reaches 39.3, outperforming Gemini-2.5-Pro's 30.9 by 8.4 points. Second, in the three tasks of ES, EC, and OT, Time-R1 demonstrates strong performance comparable to the top model TRACE, with its performance being very close or immediately following. In the HAP task, Time-R1 also performs excellently, with its performance being in the same tier as Gemini-2.5-Pro. Lastly, all models still show a noticeable gap compared to Gemini in understanding complex instructions, such as in HAP, OC, and OEC. For example, in HAP, which involves procedural activity localization, Gemini achieves 27.6, while our model ranks second with a score of 24.3. In object counting, Gemini attains 31.1, substantially outperforming our model's 21.8. In summary, Time-R1 performs well on both non-human simple instructions and human-related instructions, but there is still room for improvement in complex instruction grounding and object-related grounding." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.692, + 0.829, + 0.777 + ], + "angle": 0, + "content": "Comparison of speed and accuracy between inference library transformers and vLLM. We observe that the inference speed of the implementation in the transformers [51] library is very slow. To address this, we implemented an accelerated inference version using vLLM [23] for all related 7 downstream benchmarks. 
For example, on TVGBench, the vLLM-based implementation requires only 502 seconds to infer 800 samples using 8 GPUs, whereas the transformers library implementation takes 2520 seconds. This achieves an overall speedup of \\(5 \\times\\)." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.796, + 0.351, + 0.813 + ], + "angle": 0, + "content": "D Ablation Studies" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.828, + 0.828, + 0.914 + ], + "angle": 0, + "content": "Ablation of different RFT data filtering strategies. As shown in Table 4, different data filtering strategy in the initial round affects the model's performance. First, appropriate Gaussian filtering outperforms both uniform and random filtering methods. Among the Gaussian filtering settings, a standard deviation of 0.3 yields the best results, followed by 0.5 and then 0.7. These findings suggest that incorporating moderately challenging samples during RFT helps improve the model's generalization capability more effectively than using either overly easy or extremely difficult examples." + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "15" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.187, + 0.089, + 0.48, + 0.104 + ], + "angle": 0, + "content": "Table 4: Ablation of data filtering strategies." + }, + { + "type": "table", + "bbox": [ + 0.199, + 0.115, + 0.468, + 0.187 + ], + "angle": 0, + "content": "
Method | R1@0.3 | R1@0.5 | R1@0.7 | mIoU
random | 39.4 | 26.5 | 16.4 | 27.4
gaussian (0.3) | 41.6 | 28.5 | 15.6 | 28.6
gaussian (0.5) | 40.6 | 28.2 | 16.0 | 28.3
gaussian (0.7) | 37.2 | 26.9 | 15.5 | 26.5
uniform | 40.4 | 28.5 | 15.9 | 28.3
" + }, + { + "type": "table_caption", + "bbox": [ + 0.505, + 0.089, + 0.796, + 0.103 + ], + "angle": 0, + "content": "Table 5: Ablation of KL and CoT in GRPO." + }, + { + "type": "table", + "bbox": [ + 0.504, + 0.115, + 0.795, + 0.187 + ], + "angle": 0, + "content": "
KL | CoT | R1@0.3 | R1@0.5 | R1@0.7 | mIoU
X | X | 40.4 | 29.1 | 14.9 | 28.1
✓ | X | 40.8 | 27.4 | 15.0 | 27.7
X | ✓ | 42.9 | 29.5 | 15.0 | 29.1
✓ | ✓ | 41.6 | 28.5 | 15.6 | 28.6
" + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.198, + 0.825, + 0.227 + ], + "angle": 0, + "content": "Table 6: Comparison of the token-level loss design used by DAPO [56] and the sample-level loss design used by GRPO [45]." + }, + { + "type": "table", + "bbox": [ + 0.195, + 0.238, + 0.803, + 0.293 + ], + "angle": 0, + "content": "
Loss | Charades-STA | ActivityNet | TVGBench
R1@0.3 | R1@0.5 | R1@0.7 | mIoU | R1@0.3 | R1@0.5 | R1@0.7 | mIoU | R1@0.3 | R1@0.5 | R1@0.7 | mIoU
GRPO | 76.7 | 59.8 | 34.4 | 57.0 | 55.9 | 37.1 | 20.3 | 37.8 | 40.8 | 28.0 | 16.5 | 28.4
DAPO | 77.4 | 60.0 | 34.1 | 57.2 | 56.2 | 37.4 | 20.4 | 38.0 | 41.6 | 28.5 | 15.6 | 28.6
" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.316, + 0.827, + 0.428 + ], + "angle": 0, + "content": "Ablation of KL and CoT during GRPO training. As shown in Table 5, incorporating CoT reasoning during training leads to improved performance compared to the No-CoT setting, suggesting that CoT enhances the model's temporal video grounding capabilities. When KL divergence is omitted (No-KL), performance slightly decreases under the No-CoT setting but unexpectedly improves when CoT is present. However, we find that in the No-KL+CoT setting, the model often fails to produce a thinking process, directly jumping to answers. In contrast, using KL divergence helps maintain more logical reasoning that is easier to follow. To balance performance and interpretability, we adopt a training setup that includes both KL and CoT." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.433, + 0.828, + 0.504 + ], + "angle": 0, + "content": "Comparison of tIoU and IoU during multi-epoch training. As shown in Figure 6, tIoU consistently outperforms standard IoU during both the early and late stages of training over the first 5 epochs. Notably, while tIoU steadily improves as training progresses, IoU shows a decline in performance by the fifth epoch. This highlights the advantage of using tIoU as a more stable and reliable reward for temporal video grounding." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.509, + 0.827, + 0.567 + ], + "angle": 0, + "content": "Ablation of sample filtering in multi-epoch training. As shown in Figure 7, applying sample filtering (SF) to remove simpler training samples yields consistent performance improvements across epochs. This suggests that easy samples with high IoU may introduce noise or reduce the effectiveness of learning, and filtering them helps focus the model on more informative and challenging instances." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.571, + 0.827, + 0.669 + ], + "angle": 0, + "content": "Ablation of DAPO & GRPO. The sample-level loss used by GRPO computes the loss by averaging over each individual sample. This approach leads to unequal loss contributions for tokens when dealing with CoTs of varying lengths. DAPO addresses this issue by employing a token-level loss. The underlying principle is that the token-level loss can effectively guide the model in the process of CoT generation, allowing it to learn useful patterns from CoTs of different lengths sampled during training. In Table 6, we compare these two loss designs. We empirically find that DAPO outperforms GRPO on the majority of metrics, thus, we adopt DAPO's loss design." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.674, + 0.826, + 0.732 + ], + "angle": 0, + "content": "Different Model Size. Table 7 presents a performance comparison of different model sizes. These results indicate that larger models achieve better zero-shot performance and continue to outperform smaller models after fine-tuning. These findings support the notion that scaling up model capacity enhances generalization and leads to superior results on the TVG tasks." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.75, + 0.364, + 0.767 + ], + "angle": 0, + "content": "E Qualitative Result" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.78, + 0.827, + 0.879 + ], + "angle": 0, + "content": "Case study of temporal video grounding on Charades and ActivityNet. 
As shown in Figure 8, in the example above, given a relatively complex language instruction, Time-R1 demonstrates more accurate localization than all baselines, successfully capturing the initial event \"goes back to the pink bucket\" within the timestamp, while other SoTA models like Gemini-2.5-Pro fail. In the example below, the model accurately localizes the event, excluding \"a person is lying on the couch,\" and correctly distinguishes between sitting and lying, unlike other models, which either localize only a small segment (TimeSuite and VideoChat-Flash) or the entire segment (TRACE and Gemini-2.5-Pro)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.884, + 0.827, + 0.914 + ], + "angle": 0, + "content": "Case study of short video QA on VideoMME and TempCompass. As shown in Figures 9 and 10, Time-R1 demonstrates improved performance over the base model in tasks requiring positional" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.51, + 0.948 + ], + "angle": 0, + "content": "16" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.305, + 0.089, + 0.693, + 0.104 + ], + "angle": 0, + "content": "Table 7: Performance comparison of different model sizes." + }, + { + "type": "table", + "bbox": [ + 0.178, + 0.115, + 0.821, + 0.194 + ], + "angle": 0, + "content": "
Method | Charades-STA | ActivityNet | TVGBench
R1@0.3 | R1@0.5 | R1@0.7 | mIoU | R1@0.3 | R1@0.5 | R1@0.7 | mIoU | R1@0.3 | R1@0.5 | R1@0.7 | mIoU
Time-R1-3B | 74.6 | 53.1 | 26.0 | 51.2 | 40.0 | 21.0 | 8.7 | 23.2 | 33.5 | 21.0 | 10.5 | 21.7
Time-R1-3B* | 78.7 | 64.1 | 36.9 | 59.9 | 66.8 | 46.8 | 24.7 | 46.1 | - | - | - | -
Time-R1-7B | 78.1 | 60.8 | 35.5 | 58.1 | 58.1 | 39.0 | 21.4 | 40.5 | 41.8 | 29.4 | 16.4 | 29.2
Time-R1-7B* | 82.8 | 72.2 | 50.1 | 60.9 | 73.3 | 55.6 | 34.0 | 52.1 | - | - | - | -
" + }, + { + "type": "image", + "bbox": [ + 0.191, + 0.209, + 0.48, + 0.374 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.176, + 0.387, + 0.495, + 0.417 + ], + "angle": 0, + "content": "Figure 6: Performance comparison of tIoU and IoU in multi-epoch training." + }, + { + "type": "image", + "bbox": [ + 0.517, + 0.209, + 0.808, + 0.374 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.504, + 0.387, + 0.825, + 0.417 + ], + "angle": 0, + "content": "Figure 7: Ablation of sample filtering in multi-epoch training." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.443, + 0.828, + 0.554 + ], + "angle": 0, + "content": "judgment, scene storyline reasoning, and visual reasoning. For example, in Figure 9, Time-R1 correctly identifies that a car in the video is missing its right-front wheel, a detail that the base model fails to recognize. This reflects that Time-R1 likely possesses stronger video localization capabilities, which in turn enhance its visual reasoning ability. In Figure 12, we output a CoT when answering the QA task, providing some interpretability. This example shows that Time-R1's reasoning process is more concise, whereas the base model often reasons correctly but arrives at the wrong answer. This suggests that Time-R1's reasoning may be more effective in guiding the final answer, possibly benefiting from the outcome-driven RL of GRPO." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.56, + 0.828, + 0.699 + ], + "angle": 0, + "content": "Case study of long video QA on EgoSchema and VideoMME. Figure 13 presents a long egocentric video QA example focused on summarizing task steps. In the \"Hanging the Dress\" case, the base model fails to identify all key steps, while our Time-R1 model correctly selects the answer by generating a more accurate chain-of-thought (CoT). In Figure 14, the task involves identifying the primary tools and ingredients used in the video and understanding their respective contributions. An additional example in Figure 9 involves animated scene reasoning, where Time-R1 correctly infers, based on elements like the presence of a guitar, that the heroes defeated the enemies using the power of music, whereas the base model misinterprets the cause. Across these complex reasoning tasks involving long videos, Time-R1 consistently demonstrates superior performance compared to the base model." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.705, + 0.825, + 0.734 + ], + "angle": 0, + "content": "Illustration of our prompt at training and inference time. Figure 11 presents the prompts used for the temporal video grounding and video QA tasks at both training and inference time." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.739, + 0.828, + 0.838 + ], + "angle": 0, + "content": "Illustration of our prompt to annotate query semantics on TVGBench. Figure 15 presents the prompt used to annotate query semantics in the TVGBench dataset. The prompt is designed to guide the DeepSeek LLM in classifying each input query into one of 11 predefined semantic categories. To improve annotation quality, we refer to the construction of existing benchmarks and carefully select both positive and negative examples for each semantic type. These examples are chosen to include queries that are either easy or difficult for the model to answer, helping to refine the model's understanding and improve labeling accuracy." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "17" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.21, + 0.114, + 0.394, + 0.169 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.394, + 0.114, + 0.59, + 0.169 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.59, + 0.114, + 0.763, + 0.169 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.175, + 0.174, + 0.82, + 0.198 + ], + "angle": 0, + "content": "Question: Where is the event \"The man goes back to the pink bucket to wash another dark item clothing, he wrings it out in the pink bucket, throws it in the yellow bucket, then wrings his hands together a few times while staring at the camera\"?" + }, + { + "type": "table_caption", + "bbox": [ + 0.738, + 0.199, + 0.822, + 0.208 + ], + "angle": 0, + "content": "(video length: 74s)" + }, + { + "type": "table", + "bbox": [ + 0.179, + 0.21, + 0.822, + 0.288 + ], + "angle": 0, + "content": "
Ground Truth | 33.1s → 73.5s
TimeSuite | 20.0s → 30.0s
Gemini-2.5-Pro | 57.0s
VideoChat-Flash | 0.0s → 17.2s
Trace | 54.9s
Time-R1 (ours) | 35.0s → 75.0s
" + }, + { + "type": "text", + "bbox": [ + 0.179, + 0.291, + 0.808, + 0.325 + ], + "angle": 0, + "content": "Time-R1 thinking (ours): The event \"The man goes back to the pink bucket to wash another dark item clothing, he wrings it out in the pink bucket, throws it in the yellow bucket, then wrings his hands together a few times while staring at the camera.\" occurs between 35.00 and 75.00 seconds in the video." + }, + { + "type": "image", + "bbox": [ + 0.219, + 0.335, + 0.489, + 0.388 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.49, + 0.336, + 0.786, + 0.388 + ], + "angle": 0, + "content": null + }, + { + "type": "table_caption", + "bbox": [ + 0.178, + 0.395, + 0.505, + 0.406 + ], + "angle": 0, + "content": "Question: Where is the event \"a person is seen sitting on a couch\"?" + }, + { + "type": "table_caption", + "bbox": [ + 0.735, + 0.395, + 0.819, + 0.404 + ], + "angle": 0, + "content": "(video length: 31s)" + }, + { + "type": "table", + "bbox": [ + 0.18, + 0.41, + 0.82, + 0.488 + ], + "angle": 0, + "content": "
Ground Truth | 0.0s → 12.8s
TimeSuite | 0.0s → 5.0s
Gemini-2.5-Pro | 0.0s → 30.0s
VideoChat-Flash | 0.0s → 5.0s
Trace | 0.0s → 31.6s
Time-R1 (ours) | 0.0s → 10.0s
" + }, + { + "type": "text", + "bbox": [ + 0.179, + 0.491, + 0.79, + 0.514 + ], + "angle": 0, + "content": "Time-R1 thinking (ours): The event \"a person is seen sitting on a couch\" occurs at the beginning of the video. The person is seated on a couch, holding a phone, and appears to be engaged with it. This aligns with the initial frames of the video." + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.541, + 0.825, + 0.571 + ], + "angle": 0, + "content": "Figure 8: Temporal video grounding cases from Charades and ActivityNet benchmark. Time-R1 outperforms other SoTA models." + }, + { + "type": "image", + "bbox": [ + 0.188, + 0.624, + 0.359, + 0.701 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.409, + 0.624, + 0.58, + 0.701 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.631, + 0.626, + 0.802, + 0.701 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.209, + 0.702, + 0.794, + 0.714 + ], + "angle": 0, + "content": "(video length: 38m) Question: What do heroes of legend use to defeat the enemy based on the video?" + }, + { + "type": "image", + "bbox": [ + 0.21, + 0.714, + 0.226, + 0.723 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.714, + 0.3, + 0.723 + ], + "angle": 0, + "content": "Their wisdom" + }, + { + "type": "image", + "bbox": [ + 0.211, + 0.723, + 0.226, + 0.731 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.724, + 0.318, + 0.735 + ], + "angle": 0, + "content": "Their superpower" + }, + { + "type": "image", + "bbox": [ + 0.188, + 0.737, + 0.359, + 0.812 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.466, + 0.714, + 0.541, + 0.723 + ], + "angle": 0, + "content": "(B) A big robot" + }, + { + "type": "image", + "bbox": [ + 0.466, + 0.723, + 0.482, + 0.731 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.482, + 0.724, + 0.562, + 0.735 + ], + "angle": 0, + "content": "4 Power of music" + }, + { + "type": "image", + "bbox": [ + 0.41, + 0.737, + 0.58, + 0.812 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.631, + 0.737, + 0.801, + 0.812 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.207, + 0.812, + 0.547, + 0.824 + ], + "angle": 0, + "content": "(video length: 17s) Question: What's wrong with this car?" + }, + { + "type": "image", + "bbox": [ + 0.207, + 0.824, + 0.225, + 0.833 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.824, + 0.382, + 0.834 + ], + "angle": 0, + "content": "It doesn't have a left rear wheel." + }, + { + "type": "image", + "bbox": [ + 0.207, + 0.833, + 0.225, + 0.841 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.226, + 0.834, + 0.339, + 0.845 + ], + "angle": 0, + "content": "Its headlamp is broken." + }, + { + "type": "text", + "bbox": [ + 0.497, + 0.824, + 0.683, + 0.835 + ], + "angle": 0, + "content": "(B) It doesn't have a right front wheel." + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.835, + 0.632, + 0.845 + ], + "angle": 0, + "content": "(D) Its right door is broken." 
+ }, + { + "type": "list", + "bbox": [ + 0.497, + 0.824, + 0.683, + 0.845 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.857, + 0.827, + 0.884 + ], + "angle": 0, + "content": "Figure 9: Case study on VideoMME (w/o CoT), demonstrating that Time-R1 achieves better performance than the base model." + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "18" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.199, + 0.133, + 0.371, + 0.209 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.406, + 0.133, + 0.577, + 0.209 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.615, + 0.133, + 0.786, + 0.21 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.21, + 0.691, + 0.222 + ], + "angle": 0, + "content": "(video length: 10s)Question: Which sentence better captures the essence of the video?" + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.223, + 0.581, + 0.238 + ], + "angle": 0, + "content": "(A) First, two hand are holding the clay pot and then three hands are holding it." + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.238, + 0.647, + 0.251 + ], + "angle": 0, + "content": "In the video, three hands are holding the clay pot, then two hands are holding the clay pot." + }, + { + "type": "list", + "bbox": [ + 0.187, + 0.223, + 0.647, + 0.251 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.199, + 0.252, + 0.369, + 0.327 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.406, + 0.252, + 0.576, + 0.327 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.615, + 0.252, + 0.786, + 0.327 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.327, + 0.675, + 0.34 + ], + "angle": 0, + "content": "(video length: 14s)Question: What is the sequence of actions performed by the girl?" + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.342, + 0.484, + 0.355 + ], + "angle": 0, + "content": "(A) Putting on clothes, jumping into water, taking off clothes" + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.356, + 0.482, + 0.367 + ], + "angle": 0, + "content": "(B) Jumping into water, putting on clothes, taking off clothes" + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.368, + 0.484, + 0.382 + ], + "angle": 0, + "content": "(C) Taking off clothes, putting on clothes, jumping into water" + }, + { + "type": "list", + "bbox": [ + 0.187, + 0.342, + 0.484, + 0.382 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.396, + 0.825, + 0.425 + ], + "angle": 0, + "content": "Figure 10: Case study on TempCompass (w/o CoT), demonstrating that Time-R1 achieves better performance than the base model." + }, + { + "type": "title", + "bbox": [ + 0.403, + 0.528, + 0.596, + 0.542 + ], + "angle": 0, + "content": "Temporal Video Grounding" + }, + { + "type": "title", + "bbox": [ + 0.176, + 0.548, + 0.274, + 0.56 + ], + "angle": 0, + "content": "System Prompt" + }, + { + "type": "text", + "bbox": [ + 0.176, + 0.561, + 0.357, + 0.573 + ], + "angle": 0, + "content": "You are a video analysis expert." 
+ }, + { + "type": "title", + "bbox": [ + 0.177, + 0.573, + 0.264, + 0.585 + ], + "angle": 0, + "content": "Input Prompt" + }, + { + "type": "text", + "bbox": [ + 0.175, + 0.585, + 0.816, + 0.646 + ], + "angle": 0, + "content": "To accurately pinpoint the event \"[EVENT]\" in the video, determine the precise time period of the event. Output your thought process within the tags, including analysis with either specific time ranges (xx.xx to xx.xx) in < timestep> tags. Then, provide the start and end times (in seconds, precise to two decimal places) in the format \"start time to end time\" within the tags. For example: \"12.54 to 17.83\".\"\"\"" + }, + { + "type": "title", + "bbox": [ + 0.465, + 0.655, + 0.533, + 0.668 + ], + "angle": 0, + "content": "Video QA" + }, + { + "type": "title", + "bbox": [ + 0.176, + 0.671, + 0.273, + 0.683 + ], + "angle": 0, + "content": "System Prompt" + }, + { + "type": "text", + "bbox": [ + 0.176, + 0.684, + 0.357, + 0.695 + ], + "angle": 0, + "content": "You are a video analysis expert." + }, + { + "type": "title", + "bbox": [ + 0.176, + 0.696, + 0.329, + 0.708 + ], + "angle": 0, + "content": "Input Prompt (w/o CoT)" + }, + { + "type": "text", + "bbox": [ + 0.175, + 0.709, + 0.798, + 0.758 + ], + "angle": 0, + "content": "Carefully watch the video and pay attention to the cause and sequence of events, the detail and movement of objects and the action and pose of persons. Based on your observations, select the best option that accurately addresses the question. Provide the letter of the best option wrapped in parentheses within the tags. Question: {} {}\"\"" + }, + { + "type": "title", + "bbox": [ + 0.176, + 0.758, + 0.321, + 0.77 + ], + "angle": 0, + "content": "Input Prompt (w/ CoT)" + }, + { + "type": "text", + "bbox": [ + 0.175, + 0.77, + 0.816, + 0.832 + ], + "angle": 0, + "content": "Carefully watch the video and pay attention to the cause and sequence of events, the detail and movement of objects and the action and pose of persons. Based on your observations, select the best option that accurately addresses the question. Output your thought process within the tags, including analysis with either specific timestamps (xx.xx) or time ranges (xx.xx to xx.xx) in < timestep> tags. Then, provide the letter of the best option wrapped in parentheses within the tags. Question: {} {}" + }, + { + "type": "image_caption", + "bbox": [ + 0.269, + 0.849, + 0.726, + 0.864 + ], + "angle": 0, + "content": "Figure 11: Illustration of prompts at both training and inference time." + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "19" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.175, + 0.141, + 0.388, + 0.235 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.39, + 0.141, + 0.607, + 0.235 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.61, + 0.141, + 0.824, + 0.235 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.176, + 0.241, + 0.714, + 0.256 + ], + "angle": 0, + "content": "(video length: 8s) Question: Is the pineapple being pushed forward by a person?" 
+ }, + { + "type": "image", + "bbox": [ + 0.177, + 0.258, + 0.226, + 0.278 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.361, + 0.259, + 0.415, + 0.278 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.176, + 0.296, + 0.329, + 0.31 + ], + "angle": 0, + "content": "Base model thinking:" + }, + { + "type": "text", + "bbox": [ + 0.176, + 0.313, + 0.692, + 0.327 + ], + "angle": 0, + "content": "To determine if the pineapple is being pushed forward by a person, let's analyze the video:" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.329, + 0.467, + 0.342 + ], + "angle": 0, + "content": "1. The video shows a pineapple on a pink surface." + }, + { + "type": "text", + "bbox": [ + 0.176, + 0.344, + 0.55, + 0.358 + ], + "angle": 0, + "content": "2. A hand appears in the frame and interacts with the pineapple." + }, + { + "type": "text", + "bbox": [ + 0.176, + 0.36, + 0.629, + 0.373 + ], + "angle": 0, + "content": "3. The hand lifts the pineapple, indicating that it is being moved by the person." + }, + { + "type": "list", + "bbox": [ + 0.176, + 0.329, + 0.629, + 0.373 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.176, + 0.375, + 0.622, + 0.388 + ], + "angle": 0, + "content": "Given this analysis, the pineapple is indeed being pushed forward by a person." + }, + { + "type": "title", + "bbox": [ + 0.176, + 0.394, + 0.359, + 0.408 + ], + "angle": 0, + "content": "Time-R1 thinking (ours):" + }, + { + "type": "text", + "bbox": [ + 0.175, + 0.411, + 0.817, + 0.44 + ], + "angle": 0, + "content": "The video shows a hand reaching into the frame and grasping the pineapple. The hand then lifts the pineapple, indicating that the pineapple is not being pushed forward by a person but rather being moved by the hand." + }, + { + "type": "image", + "bbox": [ + 0.175, + 0.449, + 0.391, + 0.544 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.393, + 0.449, + 0.608, + 0.544 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.61, + 0.45, + 0.824, + 0.544 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.176, + 0.552, + 0.725, + 0.567 + ], + "angle": 0, + "content": "(video length: 9s) Question: In which sequence does the man perform the actions?" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.571, + 0.452, + 0.584 + ], + "angle": 0, + "content": "(A) Gliding on the skateboard and then jumping" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.586, + 0.414, + 0.599 + ], + "angle": 0, + "content": "(B) Jumping and then tying his shoelaces" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.601, + 0.512, + 0.615 + ], + "angle": 0, + "content": "(C) Tying his shoelaces and then gliding on the skateboard" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.617, + 0.512, + 0.636 + ], + "angle": 0, + "content": "(D) Gliding on the skateboard and then tying his shoelaces" + }, + { + "type": "list", + "bbox": [ + 0.177, + 0.571, + 0.512, + 0.636 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.176, + 0.654, + 0.328, + 0.668 + ], + "angle": 0, + "content": "Base model thinking:" + }, + { + "type": "text", + "bbox": [ + 0.176, + 0.671, + 0.591, + 0.684 + ], + "angle": 0, + "content": "Let's analyze the sequence of actions performed by the man in the video:" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.687, + 0.485, + 0.7 + ], + "angle": 0, + "content": "1. 
The man is initially seen standing on a skateboard." + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.702, + 0.411, + 0.715 + ], + "angle": 0, + "content": "2. He then proceeds to tie his shoelaces." + }, + { + "type": "text", + "bbox": [ + 0.176, + 0.718, + 0.501, + 0.73 + ], + "angle": 0, + "content": "3. After tying his shoelaces, he glides on the skateboard." + }, + { + "type": "list", + "bbox": [ + 0.176, + 0.687, + 0.501, + 0.73 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.176, + 0.733, + 0.752, + 0.746 + ], + "angle": 0, + "content": "Given this sequence, the correct answer is: (D) gliding on the skateboard and then tying his shoelaces" + }, + { + "type": "title", + "bbox": [ + 0.176, + 0.752, + 0.359, + 0.766 + ], + "angle": 0, + "content": "Time-R1 thinking (ours):" + }, + { + "type": "text", + "bbox": [ + 0.174, + 0.77, + 0.818, + 0.813 + ], + "angle": 0, + "content": "The sequence of actions performed by the man in the video is as follows: first, he ties his shoelaces, and then he glides on the skateboard. This can be observed from the video where the man is seen tying his shoelaces before he starts gliding on the skateboard." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.829, + 0.826, + 0.859 + ], + "angle": 0, + "content": "Figure 12: Case study on TempCompass (w/ CoT), demonstrating that Time-R1 achieves better performance than the base model." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "20" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.176, + 0.155, + 0.379, + 0.274 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.398, + 0.156, + 0.601, + 0.273 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.622, + 0.156, + 0.824, + 0.273 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.287, + 0.822, + 0.342 + ], + "angle": 0, + "content": "(video length: 3m) Question: Summarize the steps taken by c to complete the tasks in this video. don't list every action narrated, but rather focus on major components and how they're executed." + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.349, + 0.652, + 0.365 + ], + "angle": 0, + "content": "(A) C folds the dress, places it on the ironing board, and then hangs it up." + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.367, + 0.631, + 0.381 + ], + "angle": 0, + "content": "(B) C folds the dress, places it on the ironing board, and then packs it." + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.385, + 0.641, + 0.399 + ], + "angle": 0, + "content": "(C) C folds the dress, places it on the ironing board, and then washes it." + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.401, + 0.733, + 0.417 + ], + "angle": 0, + "content": "(D) C folds the dress, places it on the ironing board, and then irons it and hangs it up." + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.419, + 0.717, + 0.435 + ], + "angle": 0, + "content": "(E) C folds the dress, places it on the ironing board, irons it, and then folds it again." 
+ }, + { + "type": "list", + "bbox": [ + 0.18, + 0.349, + 0.733, + 0.435 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.179, + 0.459, + 0.351, + 0.474 + ], + "angle": 0, + "content": "Base model thinking:" + }, + { + "type": "text", + "bbox": [ + 0.179, + 0.479, + 0.707, + 0.493 + ], + "angle": 0, + "content": "Let's analyze the video step by step to determine the correct sequence of actions:" + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.496, + 0.82, + 0.528 + ], + "angle": 0, + "content": "1. Folding the Dress: The video starts with C folding a blue dress on the ironing board. This is the first major action." + }, + { + "type": "text", + "bbox": [ + 0.179, + 0.532, + 0.815, + 0.547 + ], + "angle": 0, + "content": "2. Placing the Dress on the Ironing Board: After folding the dress, C places it on the ironing board." + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.549, + 0.743, + 0.563 + ], + "angle": 0, + "content": "3. Ironing the Dress: C then irons the dress, ensuring it is smooth and free of wrinkles." + }, + { + "type": "list", + "bbox": [ + 0.179, + 0.496, + 0.82, + 0.563 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.179, + 0.567, + 0.657, + 0.582 + ], + "angle": 0, + "content": "4. Hanging the Dress: Finally, C hangs the dress up, completing the task." + }, + { + "type": "text", + "bbox": [ + 0.179, + 0.585, + 0.385, + 0.599 + ], + "angle": 0, + "content": "Now, let's evaluate the options:" + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.603, + 0.652, + 0.618 + ], + "angle": 0, + "content": "(A) C folds the dress, places it on the ironing board, and then hangs it up." + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.62, + 0.631, + 0.635 + ], + "angle": 0, + "content": "(B) C folds the dress, places it on the ironing board, and then packs it." + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.638, + 0.641, + 0.653 + ], + "angle": 0, + "content": "(C) C folds the dress, places it on the ironing board, and then washes it." + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.655, + 0.73, + 0.67 + ], + "angle": 0, + "content": "(D) C folds the dress, places it on the ironing board, and then irons it and hangs it up." + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.673, + 0.715, + 0.706 + ], + "angle": 0, + "content": "(E) C folds the dress, places it on the ironing board, irons it, and then folds it again. The correct sequence of actions matches option (D)." + }, + { + "type": "list", + "bbox": [ + 0.18, + 0.603, + 0.73, + 0.706 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.179, + 0.73, + 0.383, + 0.746 + ], + "angle": 0, + "content": "Time-R1 thinking (ours):" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.75, + 0.823, + 0.8 + ], + "angle": 0, + "content": "The video shows C folding a dress, placing it on the ironing board, and then ironing it. The sequence of actions is clear and follows a logical order. The dress is folded, placed on the ironing board, and then ironed, which is a common sequence for completing a task like ironing a dress." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.824, + 0.828, + 0.852 + ], + "angle": 0, + "content": "Figure 13: Case study on EgoSchema (w/ CoT), demonstrating that Time-R1 achieves better performance than the base model." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.507, + 0.948 + ], + "angle": 0, + "content": "21" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.18, + 0.148, + 0.373, + 0.259 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.405, + 0.148, + 0.594, + 0.259 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.627, + 0.148, + 0.818, + 0.259 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.26, + 0.821, + 0.316 + ], + "angle": 0, + "content": "(video length: 3m) Question: Identify the primary tools and ingredients c used throughout the video, and discuss how they contributed to the overall dessert preparation process." + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.321, + 0.822, + 0.407 + ], + "angle": 0, + "content": "(A) The primary tools and ingredients c used throughout the video were a spatula, a strainer, a dessert bowl, cocoa powder, and tissue paper. the spatula was used to spread the batter in the bowl. the strainer was used to sieve the cocoa powder into the bowl. the dessert bowl was used to hold the batter. the cocoa powder was used to add flavor and color to the batter. the tissue paper was used to wipe the edges of the bowl." + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.41, + 0.82, + 0.496 + ], + "angle": 0, + "content": "(B) The primary tools and ingredients c used throughout the video were a mixing bowl, a whisk, X a baking dish, flour, sugar, eggs, and butter. the mixing bowl was used to mix the batter. the whisk was used to beat the eggs. the baking dish was used to bake the cake. the flour was used to add structure to the cake. the sugar was used to add sweetness to the cake. the eggs were used to add moisture to the cake. the butter was used to add flavor and richness to the cake." + }, + { + "type": "text", + "bbox": [ + 0.18, + 0.498, + 0.82, + 0.601 + ], + "angle": 0, + "content": "(C) The primary tools and ingredients c used throughout the video were a muffin tin, a spoon, chocolate chips, flour, sugar, eggs, and butter. the muffin tin was used to bake the muffins. the spoon was used to scoop the batter into the muffin tin. the chocolate chips were used to add flavor and color to the muffins. the flour was used to add structure to the muffins. the sugar was used to add sweetness to the muffins. the eggs were used to add moisture to the muffins. the butter was used to add flavor and richness to the muffins." + }, + { + "type": "text", + "bbox": [ + 0.179, + 0.604, + 0.82, + 0.707 + ], + "angle": 0, + "content": "(D) The primary tools and ingredients c used throughout the video were a cake pan, a spatula, chocolate frosting, flour, sugar, eggs, and butter. The cake pan was used to bake the cake. The spatula was used to spread the frosting on the cake. The chocolate frosting was used to add flavor and color to the cake. The flour was used to add structure to the cake. The sugar was used to add sweetness to the cake. The eggs were used to add moisture to the cake. The butter was used to add flavor and richness to the cake." + }, + { + "type": "text", + "bbox": [ + 0.179, + 0.71, + 0.82, + 0.812 + ], + "angle": 0, + "content": "(E) The primary tools and ingredients c used throughout the video were a mixing bowl, a whisk, a baking sheet, flour, sugar, eggs, and butter. the mixing bowl was used to mix the batter. the whisk was used to beat the eggs. the baking sheet was used to bake the cookies. the flour was used to add structure to the cookies. 
the sugar was used to add sweetness to the cookies. the eggs were used to add moisture to the cookies. the butter was used to add flavor and richness to the cookies." + }, + { + "type": "list", + "bbox": [ + 0.179, + 0.321, + 0.822, + 0.812 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.824, + 0.825, + 0.854 + ], + "angle": 0, + "content": "Figure 14: Case study on EgoSchema (w/o CoT), demonstrating that Time-R1 achieves better performance than the base model." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "22" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.177, + 0.11, + 0.236, + 0.122 + ], + "angle": 0, + "content": "## Task" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.126, + 0.62, + 0.137 + ], + "angle": 0, + "content": "Classify input queries into exactly one of the following categories based on their semantic content:" + }, + { + "type": "title", + "bbox": [ + 0.177, + 0.139, + 0.314, + 0.15 + ], + "angle": 0, + "content": "1. Human Action (Simple)" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.152, + 0.462, + 0.162 + ], + "angle": 0, + "content": "- Definition: Singular physical movements or basic interactions." + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.164, + 0.767, + 0.175 + ], + "angle": 0, + "content": "- Examples: - person opens a book over their head. - The person gets out some ginger. - who did I talk to in the shopping mall?" + }, + { + "type": "list", + "bbox": [ + 0.177, + 0.152, + 0.767, + 0.175 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.177, + 0.176, + 0.324, + 0.187 + ], + "angle": 0, + "content": "2. Human Action (Complex)" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.189, + 0.572, + 0.199 + ], + "angle": 0, + "content": "- Definition: Single continuous event with intricate components or concurrent elements." + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.201, + 0.528, + 0.212 + ], + "angle": 0, + "content": "- Examples: - He is talking while several people are using rowing machines." + }, + { + "type": "list", + "bbox": [ + 0.177, + 0.189, + 0.572, + 0.212 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.245, + 0.213, + 0.753, + 0.224 + ], + "angle": 0, + "content": "- One man wearing blue shirt wearing a jumping leg extension and another man wearing red pants play on a field." + }, + { + "type": "text", + "bbox": [ + 0.245, + 0.225, + 0.545, + 0.236 + ], + "angle": 0, + "content": "- who did I interact with when I did activity of fixing camping tent?" + }, + { + "type": "list", + "bbox": [ + 0.245, + 0.213, + 0.753, + 0.236 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.177, + 0.238, + 0.336, + 0.249 + ], + "angle": 0, + "content": "3. Human Action (procedural)" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.251, + 0.807, + 0.261 + ], + "angle": 0, + "content": "- Definition: contains multiple sequential events with explicit temporal boundaries. contains multiple actions, each with a clear start and end." + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.263, + 0.811, + 0.285 + ], + "angle": 0, + "content": "- Examples: - The person procures a condiment from the pantry, takes a spoon from the drawer which he uses to scoop it into the pan, then returns the condiment to the pantry, places the spoon in the sink and again stirs the pan." 
+ }, + { + "type": "list", + "bbox": [ + 0.177, + 0.251, + 0.811, + 0.285 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.287, + 0.779, + 0.309 + ], + "angle": 0, + "content": "- The person takes out a spoon from the drawer, scoops some sugar into the glass, stirs it with the juice, and returns the package to the pantry." + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.312, + 0.559, + 0.322 + ], + "angle": 0, + "content": "- Negative Examples: - Then the man juices some lemons in a juicer: only one action" + }, + { + "type": "text", + "bbox": [ + 0.28, + 0.324, + 0.529, + 0.334 + ], + "angle": 0, + "content": "- She gets out a cutting board and knife: only one action" + }, + { + "type": "text", + "bbox": [ + 0.28, + 0.336, + 0.502, + 0.346 + ], + "angle": 0, + "content": "- He then finishes by doing tricks: only one action" + }, + { + "type": "text", + "bbox": [ + 0.28, + 0.348, + 0.586, + 0.358 + ], + "angle": 0, + "content": "- She removes bits of shell until there is a small hole: only one action" + }, + { + "type": "list", + "bbox": [ + 0.28, + 0.324, + 0.586, + 0.358 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.177, + 0.36, + 0.257, + 0.37 + ], + "angle": 0, + "content": "4. Human Pose" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.373, + 0.637, + 0.384 + ], + "angle": 0, + "content": "- Definition: Static body positions or group configurations. Posture descriptors, positional prepositions" + }, + { + "type": "text", + "bbox": [ + 0.178, + 0.385, + 0.586, + 0.396 + ], + "angle": 0, + "content": "- Examples: - Several other people are in the background working out on the equipment." + }, + { + "type": "list", + "bbox": [ + 0.177, + 0.373, + 0.637, + 0.396 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.245, + 0.397, + 0.515, + 0.408 + ], + "angle": 0, + "content": "- A young child is seen standing before a set of monkey bars." + }, + { + "type": "title", + "bbox": [ + 0.177, + 0.41, + 0.325, + 0.422 + ], + "angle": 0, + "content": "5. Object Existence (Simple)" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.423, + 0.511, + 0.433 + ], + "angle": 0, + "content": "- Definition: Current location/status queries. Simple location prepositions." + }, + { + "type": "text", + "bbox": [ + 0.178, + 0.435, + 0.333, + 0.445 + ], + "angle": 0, + "content": "- Examples: - Where is the tap?" + }, + { + "type": "list", + "bbox": [ + 0.177, + 0.423, + 0.511, + 0.445 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.246, + 0.447, + 0.363, + 0.457 + ], + "angle": 0, + "content": "- where is the chopsticks?" + }, + { + "type": "text", + "bbox": [ + 0.246, + 0.46, + 0.429, + 0.469 + ], + "angle": 0, + "content": "- In what location did i see the blue tent?" + }, + { + "type": "list", + "bbox": [ + 0.246, + 0.447, + 0.429, + 0.469 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.177, + 0.471, + 0.335, + 0.483 + ], + "angle": 0, + "content": "6. Object Existence (Complex)" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.485, + 0.777, + 0.507 + ], + "angle": 0, + "content": "- Definition: Queries about historical object positions changed by human actions, requiring temporal-action context (e.g., \"after/before [action]\")." + }, + { + "type": "text", + "bbox": [ + 0.178, + 0.509, + 0.443, + 0.52 + ], + "angle": 0, + "content": "- Examples: - Where was the spatula after I first used it?" 
+ }, + { + "type": "list", + "bbox": [ + 0.177, + 0.485, + 0.777, + 0.52 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.246, + 0.521, + 0.428, + 0.531 + ], + "angle": 0, + "content": "- Where was the sieve before I picked it?" + }, + { + "type": "text", + "bbox": [ + 0.246, + 0.533, + 0.346, + 0.543 + ], + "angle": 0, + "content": "- what bolt did I pick?" + }, + { + "type": "text", + "bbox": [ + 0.246, + 0.545, + 0.375, + 0.555 + ], + "angle": 0, + "content": "- What mushroom did i chop" + }, + { + "type": "list", + "bbox": [ + 0.246, + 0.521, + 0.428, + 0.555 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.177, + 0.558, + 0.276, + 0.569 + ], + "angle": 0, + "content": "7. Object Attribute" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.571, + 0.594, + 0.581 + ], + "angle": 0, + "content": "- Definition: Physical/abstract property inquiries. Property descriptors (color/size/material)" + }, + { + "type": "text", + "bbox": [ + 0.178, + 0.583, + 0.43, + 0.593 + ], + "angle": 0, + "content": "- Examples: - what material did I pick from the shelf?" + }, + { + "type": "list", + "bbox": [ + 0.177, + 0.571, + 0.594, + 0.593 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.246, + 0.595, + 0.375, + 0.604 + ], + "angle": 0, + "content": "- what color is the toilet bin?" + }, + { + "type": "title", + "bbox": [ + 0.177, + 0.607, + 0.277, + 0.618 + ], + "angle": 0, + "content": "8. Object Counting" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.62, + 0.559, + 0.631 + ], + "angle": 0, + "content": "- Definition: Quantitative object presence queries. Numeric quantifiers, plural objects" + }, + { + "type": "text", + "bbox": [ + 0.178, + 0.633, + 0.442, + 0.642 + ], + "angle": 0, + "content": "- Examples: - how many tissue paper were on the floor?" + }, + { + "type": "list", + "bbox": [ + 0.177, + 0.62, + 0.559, + 0.642 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.246, + 0.644, + 0.388, + 0.654 + ], + "angle": 0, + "content": "- how many rolls are in the tray" + }, + { + "type": "title", + "bbox": [ + 0.177, + 0.657, + 0.284, + 0.667 + ], + "angle": 0, + "content": "9. Object Transition" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.67, + 0.584, + 0.68 + ], + "angle": 0, + "content": "- Definition: State/position change confirmation. Transformation verbs, completion checks" + }, + { + "type": "text", + "bbox": [ + 0.178, + 0.682, + 0.366, + 0.692 + ], + "angle": 0, + "content": "- Examples: - The bulb is broken apart." + }, + { + "type": "list", + "bbox": [ + 0.177, + 0.67, + 0.584, + 0.692 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.246, + 0.694, + 0.337, + 0.704 + ], + "angle": 0, + "content": "- Did I close fridge?" + }, + { + "type": "title", + "bbox": [ + 0.178, + 0.707, + 0.308, + 0.717 + ], + "angle": 0, + "content": "10. Environment Change" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.72, + 0.531, + 0.73 + ], + "angle": 0, + "content": "- Definition: Dynamic scene modifications. Transient elements, overlay content" + }, + { + "type": "text", + "bbox": [ + 0.178, + 0.732, + 0.458, + 0.742 + ], + "angle": 0, + "content": "- Examples: - video ends with clothes/captions scrolling down" + }, + { + "type": "list", + "bbox": [ + 0.177, + 0.72, + 0.531, + 0.742 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.177, + 0.744, + 0.296, + 0.754 + ], + "angle": 0, + "content": "11. 
Environment State" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.757, + 0.511, + 0.767 + ], + "angle": 0, + "content": "- Definition: Persistent scene elements. Static overlays, permanent fixtures" + }, + { + "type": "text", + "bbox": [ + 0.178, + 0.769, + 0.452, + 0.779 + ], + "angle": 0, + "content": "- Examples: - Intro states 'Progression: Lisa's First Season'" + }, + { + "type": "list", + "bbox": [ + 0.177, + 0.757, + 0.511, + 0.779 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.246, + 0.781, + 0.437, + 0.792 + ], + "angle": 0, + "content": "-Trend Routing Technology' logo appears" + }, + { + "type": "title", + "bbox": [ + 0.177, + 0.795, + 0.306, + 0.808 + ], + "angle": 0, + "content": "Output Format" + }, + { + "type": "text", + "bbox": [ + 0.176, + 0.81, + 0.791, + 0.846 + ], + "angle": 0, + "content": "Return ONLY the exact category name from: [Human Action (Procedural), Human Action (Complex), Human Action (Simple), Human Pose, Object Existence (Simple), Object Existence (Complex), Object Attribute, Object Counting, Object Transition, Environment Change, Environment State]" + }, + { + "type": "text", + "bbox": [ + 0.177, + 0.847, + 0.751, + 0.858 + ], + "angle": 0, + "content": "INPUT_prompt = \"\"Given the query below, classify it into one of the categories mentioned above. Query: {query} Your response:" + }, + { + "type": "text", + "bbox": [ + 0.202, + 0.873, + 0.793, + 0.888 + ], + "angle": 0, + "content": "Figure 15: Prompts for LLM used to annotate the semantics of each query on TVGBench." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "23" + } + ] +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13377/48689f30-a9be-48a0-a682-282cc19b9dc2_origin.pdf b/data/2025/2503_13xxx/2503.13377/48689f30-a9be-48a0-a682-282cc19b9dc2_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..86feb744e0cfd1aa336632bdd0cdb6077e70ccaa --- /dev/null +++ b/data/2025/2503_13xxx/2503.13377/48689f30-a9be-48a0-a682-282cc19b9dc2_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:530716976b130bbdd34ea7da590f9663a1e77405bbcc8a6eeccaf58d2de08fea +size 4081235 diff --git a/data/2025/2503_13xxx/2503.13377/full.md b/data/2025/2503_13xxx/2503.13377/full.md new file mode 100644 index 0000000000000000000000000000000000000000..05282462305bbd577d1942512bee0972756dc9ea --- /dev/null +++ b/data/2025/2503_13xxx/2503.13377/full.md @@ -0,0 +1,716 @@ +# Time-R1: Post-Training Large Vision Language Model for Temporal Video Grounding + +Ye Wang $^{1*}$ Ziheng Wang $^{1*}$ Boshen Xu $^{1*‡}$ Yang Du $^{1}$ Kejun Lin $^{1}$ Zihan Xiao $^{3}$ +Zihao Yue $^{1}$ Jianzhong Ju $^{2}$ Liang Zhang $^{1}$ Dingyi Yang $^{1}$ Xiangnan Fang $^{1}$ Zewen He $^{2}$ +Zhenbo Luo $^{2}$ Wenxuan Wang $^{1}$ Junqi Lin $^{2}$ Jian Luan $^{2}$ Qin Jin $^{1†}$ + +$^{1}$ AIM3 Lab, Renmin University of China $^{2}$ MiLM Plus, Xiaomi Inc. $^{3}$ Independent Researcher Project Page: https://xuboshen.github.io/Time-R1/ + +# Abstract + +Temporal Video Grounding (TVG), the task of locating specific video segments based on language queries, is a core challenge in long-form video understanding. While recent Large Vision-Language Models (LVLMs) have shown early promise in tackling TVG through supervised fine-tuning (SFT), their abilities to generalize remain limited. 
To address this, we propose a novel post-training framework that enhances the generalization capabilities of LVLMs via reinforcement learning (RL). Specifically, our contributions span three key directions: (1) Time-R1: we introduce a reasoning-guided post-training framework via RL with verifiable reward to enhance the capabilities of LVLMs on the TVG task. (2) TimeRFT: we explore data-efficient post-training strategies on our curated RL-friendly dataset, which trains the model to progressively comprehend difficult samples, leading to better generalization. (3) TVGBench: we carefully construct a small yet comprehensive benchmark for LVLM evaluation, assessing 11 types of queries and featuring balanced distributions across both videos and queries. Extensive experiments demonstrate that Time-R1 achieves state-of-the-art performance across multiple downstream datasets using only 2.5K training data, while improving its general video understanding capabilities. + +# 1 Introduction + +Understanding long-form videos has long been a core ambition in computer vision [14, 24, 9]. A critical step toward this goal is Temporal Video Grounding (TVG) [15, 61], which enables models to localize video segments corresponding to natural language queries (e.g., "Find the segment where a person walks into the living room"). Such capability is fundamental for real-world applications, including smart home assistants [55, 16, 46] and video retrieval systems on online platforms [5, 3]. + +Traditional TVG approaches adopt a feature-based paradigm, where pretrained models (e.g., CLIP [42], I3D [6]) extract text and video features, followed by task-specific grounding models [30, 22, 29]. However, these methods suffer from error accumulation due to imperfect pretrained features. To overcome these limitations, recent efforts have shifted toward end-to-end Large Vision-Language Models (LVLMs) [44, 60], which directly process long-form videos and text queries. Despite being pretrained on datasets $100 \times$ larger than domain-specific benchmarks [46], LVLMs (with $7\mathrm{B}+$ parameters) often underperform compared to much smaller feature-based models (e.g., + +![](images/3d410d229d3a500cf3af047c76b60ce63002c798ae8c6e3f3a040537d00828a3.jpg) +TimeRFT Training training strategy, dataset +Time-R1 Framework data-efficient, effective, R1-like +TVGBench Evaluation +ata, 11 types, balanced distributions + +![](images/68a0bcca670a2e1de605d4cbe5acb8cf4520384516660cd871afe03ef7bcde10.jpg) +Improved Temporal Video Grounding (video length: 148s) + +Question: Where is the event "He took out cutting board". +![](images/ddbcfed6eab7989e24c9d085a94496876fb255a141cb877b193ccc30f3ed1fd4.jpg) +Time-R1 thinking (ours): The event "He took out cutting board" occurs between 13.40 and 18.20 seconds. This is when the person is seen placing the cutting board on the counter. + +![](images/ab0e03c7fcd44d74e7111cfd85238e02d3b90545d29358fa22e32d3e526193b5.jpg) +Improved Short Video QA (video length: 25s) + +(A) Y + +![](images/11b7721a4fc8aae3ea951446ef33c233ace8be182dd0eff3f8fe2e9fd8f666f6.jpg) +Question: What letter is likely to be shown next? + +![](images/045217fdd83a82b32d557cbe28bd70f580c2da2c89b5005e6eb6f4b289633e50.jpg) + +# Base model thinking: + +... Given this sequence, the next logical letter would be the next letter in the alphabet after 'F', which is 'G'. The letters 'D', 'E', 'F', and 'G' are now on the table, forming the sequence 'DEFG'. Therefore, the next letter to be shown is likely to be 'G'. 
+ +# Time-R1 thinking (ours): + +The sequence of letters shown in the video is "D", "E", "F", and "G". This suggests a pattern of showing the next letter in the alphabet. The letters are being placed in a specific order, and the next logical letter after "G" would be "H". + +![](images/41a71ed56113a83ea742f50f68837de75341378355cfe16e118d39d734ebc0ab.jpg) +Improved Long Video QA (video length: 59m) +Figure 1: Our contributions include a novel post-training framework for LVLMs via reinforcement learning, Time-R1; an RL fine-tuning strategy along with its associated training dataset, TimeRFT; and a new benchmark, TVGBench, for evaluating LVLMs on the TVG task. Our Time-R1 model not only achieves SoTA TVG performance, but also enhances performance on both short- and long-form multi-choice video question answering tasks. + +Question: What do the expanding red lines on the map in the first few minutes of the video stand for? + +(A) The Yellow River (C) Du Fu's route to Xi'an + +(B) The Silk Road +(D) The Yangtze River + +# Base model thinking: + +... The map is used to illustrate the historical journey of Du Fu, a famous Chinese poet, as indicated by the text "Du Fu's route to Xi'an" and the historical context provided by the narrator. Given this analysis, the expanding red lines on the map in the first few minutes of the video stand for Du Fu's route to Xi'an.... Time-R1 thinking (ours): + +... 2. The map is shown with red lines expanding from a central point, which is likely Xi'an, a major city on the Silk Road. 3. The map is labeled with "Silk Road" and "The Great Silk Road," confirming that the red lines represent the Silk Road. ... + +9M parameters [22]). This raises a critical question: Why do LVLMs, despite their vast pretrained knowledge, fail to excel on TVG? + +We attribute the suboptimal performance of LVLMs to over-penalization of false negatives during supervised fine-tuning (SFT). For instance, when the ground truth timestamp is [2s, 4s], even when the model makes a reasonable prediction of timestamp [1.9s, 3.9s], the autoregressive loss would still be undesirably high. Such disproportionate penalties on reasonable predictions result in overfitting and poor generalization. While previous solutions have attempted to address this by introducing new timestamp tokens into the vocabulary [18, 17, 53] or by appending a regression head to predict timestamps [65], they often sacrifice the pretrained language capabilities of LLMs. + +Inspired by recent success in reinforcement learning (RL) for post-training LLMs [39, 1] with chain-of-thought (CoT) prompting, especially in domains with deterministic answers, such as code generation and mathematical reasoning, we explore whether RL can serve as an effective alternative for TVG. Unlike SFT, RL allows direct optimization of task-specific metrics (e.g., IoU), thereby reducing rigid penalties of autoregressive losses and encouraging plausible timestamp predictions. In this work, we present an RL-based framework, Time-R1, that effectively post-trains LVLMs for TVG and pushes the performance frontier. 
Our contributions include: + +![](images/fe508b93644a4c1a251811d6357c99d02cdd5eae3ffec63d502390bd25cac1fb.jpg) +(a) VLP + +![](images/baa3e64aeacdf876500af8b7229d74eb3ddc81327f265d60a53c62ca18c4a6ea.jpg) +(b) SFT + +![](images/2cd0c3c3be979c379e8d23cc83ebce767a4565f06d233336e9ec0239188547cd.jpg) +(c) RL (ours) +Figure 2: Comparison of different approaches for the TVG task, including feature-based video-language pretraining (VLP) [37, 22], supervised fine-tuning (SFT) [41, 60], and RL (ours). + +- RL-based framework for temporal video grounding. We introduce Time-R1, a reasoning-enhanced post-training framework via RL with verifiable rewards, where the LVLM first generates chain-of-thought descriptions and then predicts timestamps. The post-training process is optimized using Generalized Reinforcement Policy Optimization (GRPO) with a novel reward function, incorporating both a structured template reward and a timestamp-aware tIoU reward. +- Time-aware reinforcement fine-tuning. We propose TimeRFT, a reinforcement fine-tuning strategy with dynamic hard sampling, which mines hard samples on a curated dataset and progressively selects low-IoU samples for multi-epoch training. To ensure stable reasoning and reduce hallucinations, we adopt a cold-start approach to generate CoT with video captions. To support RL-friendly training, we curate an RFT dataset with difficulty annotations on the TVG task. +- Comprehensive benchmark for LVLMs on TVG. Existing TVG benchmarks are designed for the large-scale evaluation of small models. Considering the inference speed bottlenecks and general-purpose role of LVLMs, we construct TVGBench, a compact yet comprehensive benchmark for TVG. We carefully balance the video distribution, query distribution, and design specific query semantics to ensure that the benchmark is well-suited for evaluating LVLMs. +- State-of-the-Art results and generalization. Compared with 7B LVLMs on the temporal video grounding task, our method outperforms all prior SFT-based methods with only 2.5K training data. After fine-tuning on downstream benchmarks like Charades [46] and ActivityNet [5], it surpasses many previous feature-based approaches. Furthermore, Time-R1 also improves the general video understanding on video QA benchmarks like MVBench [26] and VideoMME [13]. + +# 2 Related Works + +Temporal video grounding. The TVG task [15, 3] aims to localize temporal segments in untrimmed long videos given natural language queries. Previous works can be broadly categorized into feature-based video-language pretraining and frame-based LVLM methods, as shown in Figure 2. Feature-based methods first extract video and language features using pre-trained encoders (e.g., I3D [6], EgoVLP [29], CLIP [42], BERT [12], etc.), then build timestamp prediction modules based on multimodal fused features. Existing methods differ mainly in their design of the multimodal fusion module and timestamp prediction module. For example, SnAG [37] adopts a late fusion strategy and regresses timestamps directly in a single forward pass without proposal generation. While effective, these approaches are fundamentally limited by the quality of the pretrained features. Recent efforts have shifted toward end-to-end frame-based methods by fine-tuning LVLMs using SFT with autoregressive losses [27, 44, 60, 19, 52, 18, 25]. For instance, TRACE [18] treats each event as a combination of timestamp, saliency score, and caption, and fine-tunes the LVLM to generate event sequences autoregressively. 
However, such methods still underperform on even simple TVG benchmarks like Charades [46], often lagging behind feature-based approaches. In this work, we propose a novel RL-based post-training framework that establishes new state-of-the-art performance for LVLMs on TVG tasks, even surpassing many feature-based methods. + +RL in LLMs and LVLMs. RL is a foundational machine learning paradigm applied in diverse domains such as game playing [47], robotics [34], and increasingly, language and vision-language + +models [39]. RL updates models by interacting with environments and maximizing reward signals. In recent years, RL has profoundly affected the field of LLM and LVLM post-training, which falls into two main categories: Reinforcement Learning with Human Feedback (RLHF) [40, 57] and Reinforcement Learning with Verifiable Reward (RLVR) [1, 39, 7]. Early works find that RLHF can align LLM to generate human preference data, which primarily reduces the safety risks in LLM and hallucination problems in LVLM. For example, RLHF-V [57] collects fine-grained pairs of incorrect and corrected captions and leverages direct preference optimization to optimize the model to generate correct captions, thus mitigating hallucinations. More recent works have explored RLVR in tasks with deterministic answers, which not only benefits mathematical problem solving and code generation in LLMs (e.g., DeepSeek-R1 [1]), but also enhances the generalization of LVLMs across a range of applications, such as visual grounding [32] and visual reasoning [48]. As a pioneer among open-source LLMs, DeepSeek-R1 [1] adopts GRPO to enhance reasoning capabilities by designing rule-based rewards that incorporate both reasoning templates and final answers. In the context of LVLMs, MM-Eureka [35] investigates multimodal image reasoning using GRPO, introducing an online filtering mechanism and a two-stage training strategy to stabilize the optimization process. However, existing approaches remain confined to language [1, 56], image understanding [7, 48, 32, 35], and short video understanding [64, 28]. It remains unclear whether and how reinforcement learning impacts long-form video understanding. To bridge this gap, we take a first step by introducing RLVR into LVLMs for the temporal video grounding task. + +# 3 Method + +The TVG task aims to temporally localize video segments within long-form videos based on natural language queries. Given a video of duration $t$ seconds, which is represented as a sequence of $T$ frames $\{x_{1},\ldots ,x_{T}\}$ , and a language query $q$ , the goal is to identify the temporal boundaries $[t_s,t_e]$ of the segment that best corresponds to $q$ , where $t_s,t_e\in \mathbb{R}^+$ . In this work, we introduce Time-R1, a framework designed to unleash the potential of LVLMs for the TVG task using RL. We first provide background on RL-based training for LLMs in Section 3.1, then detail the training procedure of TimeR1 in Section 3.2. Next, we describe specific training techniques used in TimeRFT in Section 3.3, and finally, we present the construction of our evaluation benchmark TVGBench in Section 3.4. + +# 3.1 Background of GRPO: RL for LLM + +As a pioneer among open-sourced R1-style LLMs, Deepseek-R1 [1] leverages GRPO to train the policy model $\pi_{\theta}$ (i.e., the LLM) to think before answering, making it particularly well-suited for tasks with well-defined answers, such as mathematical reasoning. 
In the GRPO framework, given an input question $p$, the LLM samples $G$ candidate responses $o = \{o_1,\dots,o_G\}$, and a reward function $r(\cdot)$ assigns a reward score to each response, yielding $\{r(o_1),\ldots,r(o_G)\}$. GRPO encourages the LLM to generate responses that maximize a weighted sum reward $R(o)$, defined by:

$$
R(o) = \sum_{i=1}^{G} \frac{\pi_{\theta}(o_i)}{\pi_{\theta_{\mathrm{old}}}(o_i)} \cdot \frac{r(o_i) - \operatorname{mean}\left(\{r(o_i)\}_{i=1}^{G}\right)}{\operatorname{std}\left(\{r(o_i)\}_{i=1}^{G}\right)} \tag{1}
$$

where $\pi_{\theta}(o)$ denotes the probability of the LLM generating the response $o$, and $\pi_{\theta_{\mathrm{old}}}$ represents the LLM parameters from a recently optimized state. To ensure training stability and avoid large deviations from the original language model behavior, the final training objective incorporates a KL-divergence regularization term [1], penalizing divergence between $\pi_{\theta}$ and $\pi_{\mathrm{ref}}$:

$$
\max_{\pi_{\theta}} \; \mathbb{E}_{o \sim \pi_{\theta_{\mathrm{old}}}(p)}\left[ R(o) - \beta\, \mathrm{D}_{\mathrm{KL}}(\pi_{\theta} \,\|\, \pi_{\mathrm{ref}}) \right] \tag{2}
$$

where $\beta$ is a scaling coefficient. We omit the clipping operation for simplicity.

# 3.2 Time-R1: RL for Temporal Video Grounding

Since the TVG task has well-defined answers and well-established evaluation metrics, RL can optimize LVLMs for task-specific performance through tailored reward design. To enhance interpretability and align with human-like reasoning, we additionally incorporate an explicit "thinking process" prior to timestamp prediction. This process encourages the model to produce contextualized video descriptions that support its final decision. We detail our reward modeling and training process below.

Reward modeling. The reward $r_i$ plays a crucial role in guiding the model's learning objective. To encourage effective temporal grounding with an explicit reasoning process, we design a composite reward function comprising two components: the timestamp-aware Intersection over Union (IoU) reward $r_{\mathrm{tIoU}}$ and the reasoning template reward $r_{\mathrm{form}}$.

- Timestamp-aware IoU reward $r_{\mathrm{tIoU}}(\cdot)$. The TVG task primarily uses IoU [58] to evaluate the quality of predicted segments against the ground truth $[t_s', t_e']$, computed as:

$$
\mathrm{IoU} = \frac{\left|[t_s, t_e] \cap [t_s', t_e']\right|}{\left|[t_s, t_e] \cup [t_s', t_e']\right|} \tag{3}
$$

where $|A \cap B|$ and $|A \cup B|$ denote the lengths of the intersection and union of intervals $A$ and $B$, respectively. Optimizing for the IoU inherently encourages the LVLM to produce predictions that fall within a permissible range of variation $\epsilon$, such that $t_{s/e}' - \epsilon \leq t_{s/e} \leq t_{s/e}' + \epsilon$ still yields a high IoU. This encourages the LVLM to focus more on the semantic understanding of the event within plausible temporal boundaries, rather than rigidly requiring exact temporal alignment as SFT does. However, standard IoU may fail to accurately reflect the quality of temporal alignment in certain scenarios. For example, when the ground-truth span is $[0, 30]$ (i.e., the full video), any prediction covering more than $30\%$ of the video would result in an IoU greater than 0.3. A prediction like $[10, 25]$ would yield an IoU of 0.5, which overestimates its quality despite incorrect timestamps. To address this issue, we introduce the timestamp-aware IoU (tIoU) as a corrective measure. tIoU augments the standard IoU with penalties on timestamp deviations, defined as:

$$
r_{\mathrm{tIoU}}(o) = \mathrm{IoU} \cdot \left(1 - \frac{\left| t_s - t_s' \right|}{t}\right) \cdot \left(1 - \frac{\left| t_e - t_e' \right|}{t}\right) \tag{4}
$$

This modification penalizes predictions that deviate from the reference timestamps relative to the video duration $t$. In the earlier example, the reward value drops from 0.5 (IoU) to 0.28 (tIoU), providing a more realistic signal for learning. Overall, tIoU acts as a stricter and more informative reward signal, encouraging the LVLM to develop a deeper temporal understanding of events rather than relying on superficial shortcuts.

- Reasoning template reward $r_{\mathrm{form}}(\cdot)$. In TVG, the video segments relevant to a textual query typically comprise only a small portion of the entire long video. For LVLMs, it is therefore suboptimal to directly predict timestamps without first engaging in a reasoning process to identify the relevant content. Instead, the model should allocate its computational capacity toward reasoning over visual and linguistic cues to better understand the temporal context before making predictions. For instance, given the query "the man washes dishes", reasoning that the person is likely in a kitchen can improve temporal localization. Such context-aware inference supports more accurate and semantically aligned predictions. To encourage this behavior, we introduce a template-based reasoning reward, which incentivizes the model to generate intermediate reasoning steps (structured in a predefined format) prior to timestamp localization. The reasoning template reward requires the LVLM to structure its response as "<think>...</think><answer>$t_s$ to $t_e$</answer>", formulated as:

$$
r_{\text{form}}(o) = \begin{cases} 0, & \text{if } o \text{ has the wrong format} \\ 1, & \text{if } o \text{ has the correct format} \end{cases} \tag{5}
$$

The overall reward is the sum of the two:

$$
r(o) = r_{\mathrm{tIoU}}(o) + r_{\text{form}}(o) \tag{6}
$$

GRPO training. The LVLM $\mathcal{F}(\cdot)$ takes the video frames $x_{1},\ldots,x_{T}$ and the language query $q$ as input and generates $G$ candidate responses $o_1,\dots,o_G$, where each response is computed as $o_i = \mathcal{F}(x_1,\ldots,x_T;q)$. The reward for each response is computed with Equation 6 and aggregated via Equation 1, and the model is optimized with the GRPO objective in Equation 2. To focus learning on the reasoning and localization capabilities, we freeze the visual encoder and update only the parameters of the LLM during training.
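To make the reward design above concrete, the following is a minimal Python sketch of the composite reward in Equations (3)-(6) together with the group-normalized weighting of Equation (1). It is an illustration under stated assumptions rather than the authors' released code: the response-parsing regular expression, the function names, and the small numerical epsilon are our own choices.

```python
import re
import statistics

def tiou_reward(pred, gt, duration):
    """Timestamp-aware IoU reward (Eqs. 3-4): interval IoU scaled by penalties
    on start/end deviations relative to the video duration."""
    (ps, pe), (gs, ge) = pred, gt
    inter = max(0.0, min(pe, ge) - max(ps, gs))
    union = (pe - ps) + (ge - gs) - inter
    iou = inter / union if union > 0 else 0.0
    return iou * (1 - abs(ps - gs) / duration) * (1 - abs(pe - ge) / duration)

# Assumed response template: "<think>...</think><answer>13.4 to 18.2</answer>"
ANSWER_RE = re.compile(
    r"<think>.*</think>\s*<answer>\s*([\d.]+)\s+to\s+([\d.]+)\s*</answer>", re.S)

def format_reward(response):
    """Reasoning template reward (Eq. 5): 1 if the think/answer template is followed."""
    return 1.0 if ANSWER_RE.search(response) else 0.0

def total_reward(response, gt, duration):
    """Composite reward (Eq. 6); the tIoU term is 0 when no answer can be parsed."""
    m = ANSWER_RE.search(response)
    r_tiou = tiou_reward((float(m.group(1)), float(m.group(2))), gt, duration) if m else 0.0
    return r_tiou + format_reward(response)

def group_advantages(rewards, eps=1e-6):
    """Group-normalized weights used in the GRPO-style objective of Eq. (1):
    (r_i - mean) / std over the G sampled responses."""
    mean, std = statistics.mean(rewards), statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

if __name__ == "__main__":
    gt, duration = (0.0, 30.0), 30.0
    print(round(tiou_reward((10.0, 25.0), gt, duration), 2))  # 0.28, the example above
    responses = [
        "<think>The person is at the kitchen counter early on.</think><answer>10 to 25</answer>",
        "The segment is roughly 10-25 seconds.",  # wrong format, so reward is 0
    ]
    rewards = [total_reward(r, gt, duration) for r in responses]
    print(rewards, group_advantages(rewards))
```

In an actual training loop, these per-response rewards would be computed for each group of sampled rollouts and plugged into the policy-gradient update with the KL regularization of Equation (2).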
# 3.3 TimeRFT: Time-Aware RL-Friendly Fine-Tuning

Due to the high computational cost associated with RL training, we explore data-efficient strategies to reduce sample requirements. To this end, we propose TimeRFT, which incorporates time-aware, RL-friendly dataset curation and fine-tuning techniques aimed at enhancing generalization while minimizing training overhead.

RL-friendly dataset curation. We construct the TimeRFT dataset by leveraging only TVG samples and assign a difficulty score to each sample based on the base model's performance. A small subset is then selected for subsequent RL training.

- Source data collection. Our training videos are sourced from Internet video datasets including YT-Temporal [54], DiDeMo [3], QuerYD [38], InternVid [49], and HowTo100M [36]. We obtain grounding data with annotations from VTG-IT [17], TimeIT [44], TimePro [60], HTStep [2], and LongVid [27]. This process yields 339K temporal grounding samples.
- RFT data filtering. We propose a data selection strategy based on training difficulty to significantly reduce training costs while preserving strong generalization performance. Models trained only on easy samples (e.g., IoU $\geq 0.7$) tend to overfit, whereas training on overly difficult samples (e.g., IoU = 0) often suffers from sparse reward signals, making it hard for the model to receive positive feedback. To strike a balance, we select samples of moderate difficulty that are more conducive to generalization during reinforcement fine-tuning. We first estimate a difficulty score for each sample based on the performance of the base model. For grounding tasks, difficulty is quantified using the IoU between the predicted and ground-truth temporal regions. We then filter out samples that are either too easy or too hard. Specifically, we sample a subset of data from a Gaussian distribution over the IoU axis centered at 0.3, resulting in a set of 2.5K moderately difficult samples for RL training.

RFT training strategy. For selected difficult samples, the model may struggle to learn them in a single pass. However, we argue that effectively mastering these challenging cases is essential for improving overall model performance. To this end, we employ a multi-epoch training approach combined with per-epoch sample filtering, allowing the model to repeatedly focus on harder samples and gradually improve its understanding.

- Dynamic hard sampling. We adopt a multi-epoch training strategy coupled with per-epoch sample filtering to enhance learning from difficult examples. The model is trained over multiple epochs, and after each epoch, we exclude samples that have become easy, i.e., those with an IoU greater than 0.7. This dynamic curriculum discourages overfitting on easy instances while ensuring consistent exposure to harder samples, ultimately promoting stronger generalization.
- Cold-start fine-tuning with a small amount of CoT data. For smaller models (e.g., 3B parameters), we observe that directly training with RL to generate CoT responses often results in reasoning steps that are either unintelligible or hallucinated, which impairs answer quality. Additionally, the length of generated reasoning during early training stages is difficult to control, leading to an unstable training process. To address these issues, we introduce a cold-start fine-tuning strategy using a small set of CoT-formatted examples that encourage grounded reasoning aligned with video content. Specifically, we guide the model to produce structured sequential captions with associated timestamps, following the template:

$$
\text{<think>}\ t_{s_1} \text{ to } t_{e_1}\!: C_1;\ t_{s_2} \text{ to } t_{e_2}\!: C_2\ \text{</think>}\ \text{<answer>}\ t_s \text{ to } t_e\ \text{</answer>} \tag{7}
$$

where $C_i$ denotes the caption corresponding to the video segment $[t_{s_i}, t_{e_i}]$.
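A minimal sketch of the TimeRFT curation and scheduling logic described above is given below. It is illustrative only: the sample-record fields, function names, and the weighted-sampling trick are our own assumptions, while the Gaussian center of 0.3, the 2.5K subset size, and the 0.7 filtering threshold follow the text.

```python
import math
import random

def gaussian_weight(iou, mu=0.3, sigma=0.2):
    """Unnormalized Gaussian weight over the base model's IoU (difficulty) axis."""
    return math.exp(-((iou - mu) ** 2) / (2 * sigma ** 2))

def select_rft_subset(samples, k=2500, mu=0.3, sigma=0.2, seed=0):
    """RFT data filtering: draw k moderately difficult samples, favoring those whose
    base-model IoU lies near mu, so trivially easy and near-impossible cases are
    down-weighted. Each sample is a dict with a precomputed 'base_iou' field."""
    rng = random.Random(seed)
    def sampling_key(s):
        # Efraimidis-Spirakis key for weighted sampling without replacement.
        return rng.random() ** (1.0 / gaussian_weight(s["base_iou"], mu, sigma))
    return sorted(samples, key=sampling_key, reverse=True)[:k]

def drop_solved_samples(samples, epoch_ious, threshold=0.7):
    """Dynamic hard sampling: after each epoch, discard samples the current policy
    already solves (IoU > threshold), so later epochs focus on harder cases."""
    return [s for s, iou in zip(samples, epoch_ious) if iou <= threshold]

def cold_start_target(events, answer):
    """Format a cold-start SFT target in the style of Eq. (7), where `events` is a
    list of (t_s, t_e, caption) tuples and `answer` is the final (t_s, t_e) span."""
    think = "; ".join(f"{ts:.1f} to {te:.1f}: {cap}" for ts, te, cap in events)
    return f"<think>{think}</think><answer>{answer[0]:.1f} to {answer[1]:.1f}</answer>"

if __name__ == "__main__":
    pool = [{"id": i, "base_iou": random.random()} for i in range(100000)]
    subset = select_rft_subset(pool)
    print(len(subset), sum(s["base_iou"] for s in subset) / len(subset))
    print(cold_start_target([(13.4, 18.2, "He places a cutting board on the counter")],
                            (13.4, 18.2)))
```

In actual training, the per-epoch IoUs passed to the filtering step would come from rolling out the current policy on each remaining sample rather than from the base model.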
# 3.4 TVGBench: Evaluation Benchmark for LVLM on Temporal Video Grounding

Existing benchmarks for temporal video grounding either focus on large-scale datasets tailored for smaller models within specific domains (e.g., human activities in ActivityNet) or consist of small, limited test sets (e.g., the 2K home activity samples in Charades) typically used for LVLM evaluation due to their slower inference speed. However, these benchmarks fall short in capturing the evaluation needs of LVLMs, which, despite slower inference, exhibit strong generalization capabilities. To bridge this gap, we introduce TVGBench, a lightweight yet comprehensive evaluation benchmark specifically designed for assessing the performance of LVLMs on temporal video grounding tasks.

Data sources. To ensure a comprehensive evaluation, we construct our TVGBench by curating samples from five public benchmarks with a balanced distribution of data sources: Charades-STA [46], ActivityNet-Captions [5], HiREST [59], EgoNLQ [16], and TaCoS [43].

Benchmark statistics. We carefully balance the video duration, video domain, and query center, and design the query semantics in TVGBench, resulting in 800 instances, as shown in Figure 3.

- Distribution statistics. Video durations in the dataset are balanced, ranging from short clips up to 3 minutes. To ensure temporal diversity, the center timestamps of queries are designed to be approximately uniformly distributed across the entire span of each video.
- Query semantics. Since the original datasets do not provide semantic labels for queries, we manually define 11 semantic categories grouped under three major types: human, object, and environment. We leverage DeepSeek-V3 [11] to annotate the semantic type of each query and ensure approximate balance across these categories. While most categories are evenly represented, the Human Action Simple (HAS) category is slightly overrepresented due to inherent dataset bias (simple indoor actions are more common). In such cases, we prioritize achieving a balance across datasets from different domains while maintaining semantic diversity, accepting a skew in HAS.

![](images/49e9a7fef9722f608faf4d9fd45ba26622fb2a14c87bc1378173e16c8e2e9f1d.jpg)
Figure 3: Statistics of TVGBench. TVGBench comprises 11 types of queries covering aspects related to humans, objects, and environments. As illustrated in the figure on the right, the distributions of query center, video length, and dataset source are designed to be as balanced as possible. This balanced construction allows for a comprehensive evaluation of model performance across different dimensions, enabling fine-grained analysis along each axis during benchmarking.

![](images/8ec672d067af753645e301703b46ab27cdeeb397f1273a933f68de77089f7d74.jpg)

![](images/ae5ab713fcc4ca382be19915fba333312c734d0076d80ef1075081d5a192af17.jpg)

# 4 Experiments

We first present our experimental setup in Section 4.1. Then, we evaluate our model from two key perspectives: (1) Comparison with state-of-the-art methods in Section 4.2: We evaluate our model across multiple TVG benchmarks to assess its performance against existing approaches; (2) Ablation studies and analyses in Section 4.3: We examine the individual contributions of each component in our framework to better understand their roles in overall performance. We also compare RL and SFT strategies across TVG, short video QA, and long video QA tasks.
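To make the balanced construction of TVGBench described in Section 3.4 concrete, the following toy sketch stratifies candidate clip-query pairs over (source dataset, normalized query-center bin) cells and samples each cell roughly evenly. It is not the actual construction pipeline: the field names, bin count, and per-cell quota are assumptions, and steps such as the DeepSeek-V3 semantic annotation and manual curation are omitted.

```python
import random
from collections import defaultdict

def query_center(t_s, t_e, duration):
    """Normalized center of the ground-truth span, in [0, 1]."""
    return ((t_s + t_e) / 2.0) / duration

def stratified_benchmark(candidates, total=800, n_center_bins=5, seed=0):
    """Spread the benchmark roughly evenly over (source, query-center bin) cells.
    Each candidate is a dict with 'source', 't_s', 't_e', and 'duration' fields."""
    rng = random.Random(seed)
    cells = defaultdict(list)
    for c in candidates:
        center = query_center(c["t_s"], c["t_e"], c["duration"])
        b = min(int(center * n_center_bins), n_center_bins - 1)
        cells[(c["source"], b)].append(c)
    per_cell = max(1, total // len(cells))
    picked = []
    for members in cells.values():
        rng.shuffle(members)
        picked.extend(members[:per_cell])
    rng.shuffle(picked)
    return picked[:total]

if __name__ == "__main__":
    sources = ["Charades-STA", "ActivityNet-Captions", "HiREST", "EgoNLQ", "TaCoS"]
    candidates = []
    for _ in range(20000):
        dur = random.uniform(10, 180)
        t_s = random.uniform(0, dur * 0.9)
        t_e = random.uniform(t_s, dur)
        candidates.append({"source": random.choice(sources),
                           "t_s": t_s, "t_e": t_e, "duration": dur})
    print(len(stratified_benchmark(candidates)))
```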
+ +# 4.1 Experimental Setup + +**Benchmarks.** We evaluate our model on a wide range of benchmarks covering both temporal video grounding and general video understanding tasks, including: (1) Charades-STA [46] contains 6,672 long videos capturing indoor human activities. The official split for the TVG task includes 12,408 clip-query pairs for training and 3,720 for testing. (2) ActivityNet [5] comprises 20K long videos with an average of 3.65 clip-query pairs per video. Following previous work in fine-tuning setting [62, 22] for the TVG task, we use the standard dataset splits with 37,421 training, 17,505 validation, and 17,031 test samples. (3) MVBench [26] is a short video QA benchmark focused on temporal reasoning. It includes 4K QA pairs for 20 types of tasks. (4) TempCompass [31] assesses fine-grained temporal understanding with 410 short videos. We use all multi-choice QA tasks except for the video captioning task. (5) EgoSchema [33] features 5K egocentric video clips, each approximately 3 minutes long, with temporally demanding QA pairs. (6) VideoMME [13] is a general video QA benchmark covering diverse domains. It contains 2.7K QA samples over videos of varied lengths, ranging from 11 seconds to 1 hour. We use the long video split for evaluation. + +Implementation details. Unless otherwise specified, we use Qwen2.5-VL-7B [4] as the base model. To strike a balance between training efficiency and memory consumption, we sample video frames at 2 FPS and adaptively resize each video input to contain approximately 2.8 million pixels. For instance, a 50-second video yields 100 frames, each with a resolution of roughly $96 \times 96 \times 3$ . During the reinforcement fine-tuning phase, we train for 5 epochs using a batch size of 8 and select the final + +Table 1: Performance of temporal video grounding on Charades-STA, ActivityNet, and TVGBench. The methods marked in gray* represent fine-tuning on corresponding benchmarks, while those in black indicate zero-shot settings. We compare our Time-R1 against existing 7B open-source LVLMs, as well as state-of-the-art VLP models. + +
| Type | Method | Charades-STA R1@0.3 | Charades-STA R1@0.5 | Charades-STA R1@0.7 | ActivityNet R1@0.3 | ActivityNet R1@0.5 | ActivityNet R1@0.7 | TVGBench R1@0.3 | TVGBench R1@0.5 | TVGBench R1@0.7 |
|------|--------|---------------------|---------------------|---------------------|--------------------|--------------------|--------------------|-----------------|-----------------|-----------------|
| VLP | 2D-TAN* [63] | 57.3 | 45.8 | 27.9 | 60.4 | 43.4 | 25.0 | - | - | - |
| | UniVTG* [30] | 72.6 | 60.2 | 38.6 | 56.1 | 43.4 | 24.3 | - | - | - |
| | SSRN* [66] | - | 65.5 | 42.6 | - | 54.5 | 33.2 | - | - | - |
| | SnAG* [37] | - | 64.6 | 46.2 | - | 48.6 | 30.6 | - | - | - |
| | EaTR* [22] | - | 68.4 | 44.9 | - | 58.2 | 37.6 | - | - | - |
| | Gemini-2.5-Pro [10] | - | - | - | - | - | - | 39.1 | 24.4 | 12.8 |
| SFT | ChatVTG [41] | 52.7 | 33.0 | 15.9 | 40.7 | 22.5 | 9.4 | - | - | - |
| | TimeChat [44] | - | 32.2 | 13.4 | 36.2 | 20.2 | 9.5 | 22.4 | 11.9 | 5.3 |
| | HawkEye [50] | 50.6 | 31.4 | 14.5 | 49.1 | 29.3 | 10.7 | - | - | - |
| | VTimeLLM [21] | 51.0 | 27.5 | 11.4 | 44.0 | 27.8 | 14.3 | - | - | - |
| | TimeSuite [60] | 69.9 | 48.7 | 24.0 | - | - | - | 31.1 | 18.0 | 8.9 |
| | VideoChat-Flash [27] | 74.5 | 53.1 | 27.6 | - | - | - | 32.8 | 19.8 | 10.4 |
| | TRACE [18] | - | 40.3 | 19.4 | - | 37.7 | 24.0 | 37.0 | 25.5 | 14.6 |
| | HawkEye* [50] | 72.5 | 58.3 | 28.8 | 55.9 | 34.7 | 17.9 | - | - | - |
| | TimeSuite* [60] | 79.4 | 67.1 | 43.0 | - | - | - | - | - | - |
| RL | Time-R1 (ours) | 78.1 | 60.8 | 35.3 | 58.6 | 39.0 | 21.4 | 41.8 | 29.4 | 16.4 |
| | Time-R1 (ours)* | 82.8 | 72.2 | 50.1 | 73.3 | 55.6 | 34.0 | - | - | - |
+ +![](images/873721d52378b8f0aedc1559ae1943111394a4e815f0185d8745fbb2e288dd0d.jpg) +Figure 4: Comparison between post-training paradigms across various tasks, including temporal video grounding, short video QA, and long video QA. Both "SFT" and "RL" full-finetune the LLM, while "SFT-LoRA" denotes finetuning the LLM with LoRA [20]. The "Base" is Qwen2.5-VL-7B. + +checkpoint for evaluation. For fine-tuning on downstream benchmarks, we train for 2 epochs. All experiments are conducted on a cluster with $8 \times$ NVIDIA A100 GPUs. + +Evaluation metrics. For TVG, following [44, 60], we adopt the “R1@m” evaluation protocol to compare with state-of-the-art models, which computes the percentage of samples where the top-1 predicted segment has an IoU greater than a threshold $m$ , with $m \in \{0.3, 0.5, 0.7\}$ . For brevity, we also adopt mIoU, which calculates the average IoU on all testing data as an alternative metric. For video QA, we report accuracy as the evaluation metric. + +# 4.2 Comparison with State-of-the-Art + +We compare Time-R1 with state-of-the-art TVG methods, including both traditional video-language pre-training models (VLP) and recent large video-language models fine-tuned via SFT. + +Time-R1 surpasses SFT-based models in zero-shot settings. As shown in Table 1, in the zero-shot setting, Time-R1 demonstrates strong performance, outperforming SFT-based models that rely on large-scale supervision. Despite using only $2.5\mathrm{K}$ samples for RL, Time-R1 achieves leading results across multiple benchmarks. For example, on Charades-STA, Time-R1 attains an R1@0.7 score of 35.3, outperforming VideoChat-Flash (27.6) and TimeSuite (24.0). On ActivityNet, it achieves R1@0.7 score of 21.4, surpassing VTimeLLM (14.3) and TimeChat (10.7). On TVGBench, it scores 16.4, outperforming TRACE (14.6) and Gemini-2.5-Pro (12.8). + +Time-R1* outperforms all SFT-based LVLMs and many traditional VLP-based models. Time-R1* consistently outperforms both traditional VLP-based and SFT-based models on the TVG task. On Charades-STA, it exceeds EaTR and fine-tuned TimeSuite by 3.4-7.1 percentage points across R1@0.3 to R1@0.7. Notably, Time-R1* surpasses TimeSuite despite using far fewer RL samples compared to TimeSuite's 349K SFT examples. On the more challenging ActivityNet dataset, Time-R1* also outperforms SSRN [66] and TRACE [18], achieving significant gains across key metrics. + +Table 2: Ablation of Time-R1-7B training. GF, ME, SF refers to Gaussian Filtering, Multi-EPOCH, and Sample Filtering per epoch, respectively. + +
| Row | tIoU | GF | ME | SF | TVGBench R1@0.3 | TVGBench R1@0.5 | TVGBench R1@0.7 |
|-----|------|----|----|----|-----------------|-----------------|-----------------|
| 1 | ✗ | ✗ | ✗ | ✗ | 38.0 | 24.8 | 13.2 |
| 2 | ✓ | ✗ | ✗ | ✗ | 36.0 | 23.6 | 12.9 |
| 3 | ✗ | ✓ | ✗ | ✗ | 37.2 | 25.0 | 13.4 |
| 4 | ✗ | ✗ | ✓ | ✗ | 39.9 | 26.0 | 14.2 |
| 5 | ✓ | ✓ | ✗ | ✗ | 38.4 | 25.6 | 14.1 |
| 6 | ✓ | ✗ | ✓ | ✗ | 39.4 | 26.5 | 16.4 |
| 7 | ✓ | ✓ | ✓ | ✗ | 41.6 | 28.5 | 15.6 |
| 8 | ✓ | ✓ | ✓ | ✓ | 41.8 | 29.4 | 16.4 |
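For reference, the R1@m scores reported in Tables 1 and 2, and the mIoU metric used elsewhere in this section, follow the protocol described in Section 4.1; a minimal sketch (assuming the standard definitions, not the authors' evaluation code) is given below.

```python
def interval_iou(pred, gt):
    """Temporal IoU between two (start, end) segments."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def tvg_metrics(predictions, ground_truths, thresholds=(0.3, 0.5, 0.7)):
    """R1@m: percentage of samples whose top-1 prediction has IoU greater than m.
    mIoU: average IoU over all test samples."""
    ious = [interval_iou(p, g) for p, g in zip(predictions, ground_truths)]
    r1 = {f"R1@{m}": 100.0 * sum(iou > m for iou in ious) / len(ious) for m in thresholds}
    miou = 100.0 * sum(ious) / len(ious)
    return r1, miou

if __name__ == "__main__":
    preds = [(13.4, 18.2), (2.0, 9.0)]
    gts = [(13.0, 18.0), (5.0, 9.5)]
    print(tvg_metrics(preds, gts))
```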
+ +# 4.3 Ablation Study + +We conduct a detailed ablation on the Time-R1-7B model to investigate the contribution of various training strategies. + +Utility of TimeRFT and Time-R1 components. As shown in Table 2, both Gaussian Filtering (GF) and Multi-Epoch training (ME) individually improve performance, with ME yielding a more substantial gain, improving from R1@0.7 of 13.2 in row 1 to 14.2 in row 4. Notably, the combination of tIoU supervision and ME (Row 6) leads to a significant boost across all metrics. As more components are added, GF and ME (Row 7), followed by Sample Filtering (SF) in Row 8, the performance continues to improve, ultimately reaching R1@0.5 of 29.4 and R1@0.7 of 16.4. + +Generalization of RL vs. SFT. As shown in Figure 4, when both RL and SFT perform full fine-tuning of the LLM using a small amount of data, SFT significantly degrades the model's generalization ability across all tasks, whereas RL consistently preserves generalization. While SFT can alleviate this issue by adopting LoRA-based fine-tuning (SFT-LoRA), RL still demonstrates stronger generalization and additionally improves performance on video QA tasks, whereas SFT-LoRA leads to a decline. For example, on ActivityNet, RL improves the mIoU from 16.3 to 29.2, while SFT degrades it to 15.4, and SFT-LoRA improves it to 25.9. On the VideoMMe video QA benchmark, RL increases the performance from 53.0 to 54.2, whereas SFT-LoRA decreases it to 51.7. + +Impact of cold start. As shown in Figure 5, cold start boosts the performance of the 3B model and reduces the number of thinking tokens for both models. We attribute this to the cold start's function to suppress hallucinations, which tend to be more prevalent in weaker models. + +![](images/9024b0a012288071a2c51ac867ec685fd8c7521c85e00d3d26347c3a064034e0.jpg) +Figure 5: Impact of SFT-based cold start on IoU performance and thinking token count, with token counts marked by $\star$ on dashed lines. + +# 5 Conclusion + +In this work, we present Time-R1, a reinforcement learning based post-training framework that significantly improves the generalization of Large Vision-Language Models for Temporal Video Grounding. Unlike prior methods that rely on large-scale supervised fine-tuning, Time-R1 leverages a verifiable reward signal to unlock strong temporal reasoning from pretrained models using limited data. Our contributions include: (1) Time-R1, a reasoning-guided post-training framework that enhances TVG via RL; (2) TimeRFT, a curated dataset and training strategy that fosters temporal grounding; (3) TVGBench, a small yet comprehensive benchmark for evaluating LVLMs on TVG. Extensive experiments show that Time-R1 achieves SoTA performance across TVG benchmarks in both zero-shot and fine-tuned settings, surpassing prior LVLMs and traditional VLP-based models, while also improving general video understanding. We hope this work inspires future directions in data-efficient and generalizable video-language understanding via reinforcement learning. + +# References + +[1] Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025. 2, 4 +[2] Triantafyllos Afouras, Effrosyni Mavroudi, Tushar Nagarajan, Huiyu Wang, and Lorenzo Torresani. Ht-step: Aligning instructional articles with how-to videos. Advances in Neural Information Processing Systems, 36:50310-50326, 2023. 6 +[3] Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. 
Localizing moments in video with natural language. In Proceedings of the IEEE international conference on computer vision, pages 5803-5812, 2017. 1, 3, 6 +[4] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-v1 technical report. arXiv preprint arXiv:2502.13923, 2025.7 +[5] Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 961-970, 2015. 1, 3, 6, 7 +[6] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6299-6308, 2017. 1, 3 +[7] Liang Chen, Lei Li, Haozhe Zhao, Yifan Song, and Vinci. R1-v: Reinforcing super generalization ability in vision-language models with less than $3. https://github.com/Deep-Agent/R1-V, 2025.4 +[8] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructlip: Towards general-purpose vision-language models with instruction tuning, 2023. 14 +[9] Trevor Darrell and Alex Pentland. Space-time gestures. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 335-340. IEEE, 1993. 1 +[10] Google DeepMind. Gemini 2.5: Our most intelligent ai model. Google DeepMind, 2025. Model ID: gemini-2.5-pro-preview-03-25. 8, 14, 15 +[11] DeepSeek-AI. Deepseek llm: Scaling open-source language models with longtermism. arXiv preprint arXiv:2401.02954, 2024. 7 +[12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171–4186, 2019. 3 +[13] Chaoyou Fu, Yuhan Dai, Yongdong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, et al. Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 3, 7 +[14] Adrien Gaidon, Zaid Harchaoui, and Cordelia Schmid. Temporal localization of actions with actons. IEEE transactions on pattern analysis and machine intelligence, 35(11):2782-2795, 2013. 1 +[15] Jiyang Gao, Chen Sun, Zhenheng Yang, and Ram Nevatia. Tall: Temporal activity localization via language query. In Proceedings of the IEEE international conference on computer vision, pages 5267-5275, 2017. 1, 3 +[16] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18995-19012, 2022. 1, 6 +[17] Yongxin Guo, Jingyu Liu, Mingda Li, Dingxin Cheng, Xiaoying Tang, Dianbo Sui, Qingbin Liu, Xi Chen, and Kevin Zhao. Vtg-llm: Integrating timestamp knowledge into video llms for enhanced video temporal grounding. 
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 3302–3310, 2025. 2, 6 +[18] Yongxin Guo, Jingyu Liu, Mingda Li, Qingbin Liu, Xi Chen, and Xiaoying Tang. Trace: Temporal grounding video llm via causal event modeling. arXiv preprint arXiv:2410.05643, 2024. 2, 3, 8, 9, 14, 15 +[19] Tanveer Hannan, Md Mohaiminul Islam, Jindong Gu, Thomas Seidl, and Gedas Bertasius. Revisionlmm: Recursive vision-language model for temporal grounding in hour-long videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 3 + +[20] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuzhhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. *ICLR*, 1(2):3, 2022. 8, 14 +[21] Bin Huang, Xin Wang, Hong Chen, Zihan Song, and Wenwu Zhu. Vtimellm: Empower llm to grasp video moments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14271-14280, 2024. 8 +[22] Jinhyun Jang, Jungin Park, Jin Kim, Hyeongjun Kwon, and Kwanghoon Sohn. Knowing where to focus: Event-aware transformer for video grounding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13846-13856, 2023. 1, 2, 3, 7, 8 +[23] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023. 15 +[24] Ivan Laptev and Patrick Pérez. Retrieving actions in movies. In 2007 IEEE 11th International Conference on Computer Vision, pages 1-8. IEEE, 2007. 1 +[25] Jiaze Li, Yaya Shi, Zongyang Ma, Haoran Xu, Feng Cheng, Huihui Xiao, Ruiwen Kang, Fan Yang, Tingting Gao, and Di Zhang. imove: Instance-motion-aware video understanding. arXiv preprint arXiv:2502.11594, 2025.3 +[26] Kunchang Li, Yali Wang, Yinan He, Yizhuo Li, Yi Wang, Yi Liu, Zun Wang, Jilan Xu, Guo Chen, Ping Luo, et al. Mvbench: A comprehensive multi-modal video understanding benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22195-22206, 2024. 3, 7 +[27] Xinhao Li, Yi Wang, Jiashuo Yu, Xiangyu Zeng, Yuhan Zhu, Haian Huang, Jianfei Gao, Kunchang Li, Yinan He, Chenting Wang, Yu Qiao, Yali Wang, and Limin Wang. Videochat-flash: Hierarchical compression for long-context video modeling. arXiv preprint arXiv:2501.00574, 2024. 3, 6, 8, 14, 15 +[28] Zhenyi Liao, Qingsong Xie, Yanhao Zhang, Zijian Kong, Haonan Lu, Zhenyu Yang, and Zhijie Deng. Improved visual-spatial reasoning via r1-zero-like training. arXiv preprint arXiv:2504.00883, 2025. 4 +[29] Kevin Qinghong Lin, Jinpeng Wang, Mattia Soldan, Michael Wray, Rui Yan, Eric Z Xu, Difei Gao, Rong-Cheng Tu, Wenzhe Zhao, Weijie Kong, et al. Egocentric video-language pretraining. Advances in Neural Information Processing Systems, 35:7575-7586, 2022. 1, 3 +[30] Kevin Qinghong Lin, Pengchuan Zhang, Joya Chen, Shraman Pramanick, Difei Gao, Alex Jinpeng Wang, Rui Yan, and Mike Zheng Shou. Univtg: Towards unified video-language temporal grounding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2794–2804, 2023. 1, 8 +[31] Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, Lei Li, Sishuo Chen, Xu Sun, and Lu Hou. Tempcompass: Do video llms really understand videos? arXiv preprint arXiv:2403.00476, 2024. 
7 +[32] Ziyu Liu, Zeyi Sun, Yuhang Zang, Xiaoyi Dong, Yuhang Cao, Haodong Duan, Dahua Lin, and Jiaqi Wang. Visual-rft: Visual reinforcement fine-tuning. arXiv preprint arXiv:2503.01785, 2025. 4 +[33] Karttikeya Mangalam, Raiymbek Akshulakov, and Jitendra Malik. Egoschema: A diagnostic benchmark for very long-form video language understanding. Advances in Neural Information Processing Systems, 36:46212-46244, 2023. 7 +[34] Gabriel B Margolis and Pulkit Agrawal. Walk these ways: Tuning robot control for generalization with multiplicity of behavior. In Conference on Robot Learning, pages 22-31. PMLR, 2023. 3 +[35] Fanqing Meng, Lingxiao Du, Zongkai Liu, Zhixiang Zhou, Quanfeng Lu, Daocheng Fu, Botian Shi, Wenhai Wang, Junjun He, Kaipeng Zhang, et al. Mm-eureka: Exploring visual aha moment with rule-based large-scale reinforcement learning. arXiv preprint arXiv:2503.07365, 2025. 4 +[36] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2630-2640, 2019. 6 +[37] Fangzhou Mu, Sicheng Mo, and Yin Li. Snag: Scalable and accurate video grounding. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 18930-18940, 2024. 3, 8 +[38] Andreea-Maria Oncescu, Joao F Henriques, Yang Liu, Andrew Zisserman, and Samuel Albanie. Queryd: A video dataset with high-quality text and audio narrations. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2265-2269. IEEE, 2021. 6 +[39] OpenAI. Openai o1, 2024. 2, 4 +[40] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744, 2022. 4 + +[41] Mengxue Qu, Xiaodong Chen, Wu Liu, Alicia Li, and Yao Zhao. Chatvtg: Video temporal grounding via chat with video dialogue large language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1847-1856, 2024. 3, 8 +[42] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PmLR, 2021. 1, 3 +[43] Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. Grounding action descriptions in videos. Transactions of the Association for Computational Linguistics, 1:25-36, 2013. 6 +[44] Shuhuai Ren, Linli Yao, Shicheng Li, Xu Sun, and Lu Hou. Timechat: A time-sensitive multimodal large language model for long video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14313-14323, 2024. 1, 3, 6, 8, 14, 15 +[45] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024. 16 +[46] Gunnar A Sigurdsson, Gúl Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. 
Hollywood in homes: Crowdsourcing data collection for activity understanding. In Proceedings of the European Conference on Computer Vision (ECCV), 2016. 1, 3, 6, 7 +[47] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815, 2017. 3 +[48] Huajie Tan, Yuheng Ji, Xiaoshuai Hao, Minglan Lin, Pengwei Wang, Zhongyuan Wang, and Shanghang Zhang. Reason-rft: Reinforcement fine-tuning for visual reasoning. arXiv preprint arXiv:2503.20752, 2025.4 +[49] Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinhao Li, Guo Chen, Xinyuan Chen, Yaohui Wang, et al. Internvid: A large-scale video-text dataset for multimodal understanding and generation. arXiv preprint arXiv:2307.06942, 2023. 6 +[50] Yueqian Wang, Xiaojun Meng, Jianxin Liang, Yuxuan Wang, Qun Liu, and Dongyan Zhao. Hawkeye: Training video-text llms for grounding text in videos, 2024. 8 +[51] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online, October 2020. Association for Computational Linguistics. 15 +[52] Yongliang Wu, Xinting Hu, Yuyang Sun, Yizhou Zhou, Wenbo Zhu, Fengyun Rao, Bernt Schiele, and Xu Yang. Number it: Temporal grounding videos like flipping manga. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 3 +[53] Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, and Cordelia Schmid. Vid2seq: Large-scale pretraining of a visual language model for dense video captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10714-10726, 2023. 2 +[54] Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, and Cordelia Schmid. Vid2seq: Large-scale pretraining of a visual language model for dense video captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10714-10726, 2023. 6, 14 +[55] Jingkang Yang, Shuai Liu, Hongming Guo, Yuhao Dong, Xiamengwei Zhang, Sicheng Zhang, Pengyun Wang, Zitang Zhou, Binzhu Xie, Ziyue Wang, Bei Ouyang, Zhengyu Lin, Marco Cominelli, Zhongang Cai, Yuanhan Zhang, Peiyuan Zhang, Fangzhou Hong, Joerg Widmer, Francesco Gringoli, Lei Yang, Bo Li, and Ziwei Liu. Egolife: Towards egocentric life assistant. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 1 +[56] Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, et al. Dapo: An open-source llm reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025. 4, 14, 16 +[57] Tianyu Yu, Yuan Yao, Haoye Zhang, Taiwen He, Yifeng Han, Ganqu Cui, Jinyi Hu, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun, et al. Rlhf-v: Towards trustworthy mllms via behavior alignment from fine-grained correctional human feedback. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13807-13816, 2024. 4 + +[58] Yitian Yuan, Xiaohan Lan, Xin Wang, Long Chen, Zhi Wang, and Wenwu Zhu. A closer look at temporal sentence grounding in videos: Dataset and metric. In Proceedings of the 2nd international workshop on human-centric multimedia analysis, pages 13-21, 2021. 5 +[59] Abhay Zala, Jaemin Cho, Satwik Kottur, Xilun Chen, Barlas Oguz, Yashar Mehdad, and Mohit Bansal. Hierarchical video-moment retrieval and step-captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 23056-23065, 2023. 6 +[60] Xiangyu Zeng, Kunchang Li, Chenting Wang, Xinhao Li, Tianxiang Jiang, Ziang Yan, Songze Li, Yansong Shi, Zhengrong Yue, Yi Wang, Yali Wang, Yu Qiao, and Limin Wang. Timesuite: Improving MLLMs for long video understanding via grounded tuning. In The Thirteenth International Conference on Learning Representations, 2025. 1, 3, 6, 8, 14, 15 +[61] Hao Zhang, Aixin Sun, Wei Jing, and Joey Tianyi Zhou. Temporal sentence grounding in videos: A survey and future directions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(8):10443-10465, 2023. 1 +[62] Songyang Zhang, Houwen Peng, Jianlong Fu, Yijuan Lu, and Jiebo Luo. Multi-scale 2d temporal adjacency networks for moment localization with natural language. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. 7 +[63] Songyang Zhang, Houwen Peng, Jianlong Fu, and Jiebo Luo. Learning 2d temporal adjacent networks for moment localization with natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020. 8 +[64] Xingjian Zhang, Siwei Wen, Wenjun Wu, and Lei Huang. Tinyllava-video-r1: Towards smaller lmms for video reasoning. arXiv preprint arXiv:2504.09641, 2025. 4 +[65] Henghao Zhao, Ge-Peng Ji, Rui Yan, Huan Xiong, and Zechao Li. Videoexpert: Augmented llm for temporal-sensitive video understanding. arXiv preprint arXiv:2504.07519, 2025. 2 +[66] Jiahao Zhu, Daizong Liu, Pan Zhou, Xing Di, Yu Cheng, Song Yang, Wenzheng Xu, Zichuan Xu, Yao Wan, Lichao Sun, and Zeyu Xiong. Rethinking the video sampling and reasoning strategies for temporal sentence grounding. In Findings of the Association for Computational Linguistics: EMNLP 2022, 2022. 8, 9 + +A Limitations 14 +B Implementation Details 14 +C Additional Analyses 15 +D Ablation Studies 15 +E Qualitative Result 16 + +# A Limitations + +Despite achieving notable improvements on the TVG task, our approach still has several limitations. First, Time-R1 suffers from slower training and inference speeds, primarily due to its large model size and reliance on autoregressive text generation. Second, to manage GPU memory consumption, we use a relatively low frame sampling rate, which may result in the loss of fine-grained motion information across frames. Finally, Time-R1 currently cannot handle ultra-long videos, limiting its applicability in scenarios such as full-length movie understanding. + +# B Implementation Details + +Details of Time-R1 framework. Inspired by DAPO [56], we adopt its token-level loss for training, rather than the sample-level loss used in GRPO. Apart from minor changes to the loss, all settings are identical to GRPO. Besides, we find that other techniques introduced in DAPO do not benefit the TVG task, thus aborting other techniques. We full-finetune the LLM parameters at every step, thus $\frac{\pi_{\theta}(o_i)}{\pi_{\theta_{\mathrm{old}}}(o_i)} = 1$ . 
The sample number $G$ is set to 8. The coefficient $\beta$ is set to 0.04. + +Details of TimeRFT training. For RFT data filtering, we use a Gaussian distribution with a fixed variance of 0.2 while varying the mean to control sample selection. In our cold start phase, we construct 150 samples from our training data sources (e.g., YT-Temporal [54]) to fine-tune the LLM using LoRA [20], with a LoRA rank of 64 and a LoRA alpha of 128. All of our results are reported based on the final training epoch. For RL, we use a learning rate of 1e-6 with the AdamW optimizer with $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ , and a linear scheduler to decay the learning rate from 1e-6 to 0. We use a batch size of 8 with gradient accumulation set to 2. It requires 15 hours of training on 8 A100 GPUs. + +Details of our evaluation prompts. As shown in Figure 11, for temporal video grounding, the prompts used for training and testing are designed to encourage the model to reason before responding, following a template-based answer format. For VideoQA, we have two versions of prompts: one with CoT and one without CoT. + +Details of TVG baseline methods and implementations. We evaluate the baselines on TVGBench using their original best-performing setting, focusing primarily on video input and prompt design. + +- TimeChat [44] is built upon the InstructBLIP [8] architecture and introduces a video Q-former to encode video tokens. It operates at a resolution of 224 and samples 96 frames. +- TRACE [18] treats each combination of timestamp, saliency score, and caption as a discrete event and enables the LVLM to autoregressively generate event sequences. It operates at a higher resolution of 336 and samples 128 frames. +- TimeSuite [60] introduces a token shuffling strategy to compress long video token sequences and incorporates positional encoding to enhance visual understanding. It adopts a resolution of 224 and samples 128 frames. +- VideoChat-Flash [27] proposes a progressive visual token dropping mechanism within intermediate LLM layers to compress video inputs and extend the effective context length. It uses a resolution of 448 and samples video at 1 fps, with a maximum of 512 frames. +- Gemini-2.5-Pro [10]: Gemini-2.5-Pro is a state-of-the-art video understanding model capable of reasoning over videos exceeding one hour in length. It supports video question answering and temporal localization tasks. + +Table 3: Comparison of different approaches on TVGBench for all types. We use mIoU as metric. + +
| Method | EC | ES | HAC | HAP | HAS | HP | OA | OC | OEC | OES | OT |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TimeChat [44] | 22.3 | 32.8 | 16.6 | 9.8 | 14.6 | 35.1 | 15.0 | 9.2 | 2.4 | 18.0 | 10.2 |
| TimeSuite [60] | 27.3 | 39.6 | 14.2 | 12.8 | 24.9 | 39.6 | 14.6 | 13.9 | 6.7 | 32.6 | 14.3 |
| TRACE [18] | 57.1 | 66.8 | 25.9 | 17.5 | 26.5 | 45.1 | 17.8 | 22.1 | 12.5 | 36.8 | 24.9 |
| VideoChat-Flash [27] | 38.3 | 47.2 | 12.9 | 13.9 | 27.1 | 39.4 | 14.9 | 12.7 | 6.5 | 24.3 | 12.9 |
| Gemini-2.5-Pro [10] | 46.7 | 45.3 | 21.1 | 27.6 | 30.9 | 39.9 | 23.0 | 31.1 | 14.1 | 35.9 | 17.8 |
| Time-R1 (ours) | 49.3 | 65.3 | 28.3 | 24.3 | 39.3 | 56.2 | 26.3 | 21.8 | 9.0 | 32.7 | 21.8 |
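For clarity, the mIoU entries in Table 3 are averages of the temporal IoU between predicted and ground-truth segments within each semantic category. A minimal sketch of that computation is given below; the `category`, `pred`, and `gt` field names are illustrative rather than taken from the released evaluation code.

```python
from collections import defaultdict

def temporal_iou(pred, gt):
    """IoU between two (start, end) segments given in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def miou_by_category(samples):
    """samples: iterable of dicts with 'category', 'pred', 'gt' keys (illustrative schema)."""
    buckets = defaultdict(list)
    for s in samples:
        buckets[s["category"]].append(temporal_iou(s["pred"], s["gt"]))
    # report per-category means as percentages, as in Table 3
    return {c: 100.0 * sum(v) / len(v) for c, v in buckets.items()}
```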
+ +Details of our implemented SFT baselines. We implemented two versions of SFT fine-tuning: one is full-parameter fine-tuning of the LLM (SFT), and the other is LoRA-based fine-tuning of the LLM (SFT-LoRA). For SFT-LoRA, the LoRA rank is set to 64, and the LoRA alpha is set to 128. Both configurations use the following settings: a learning rate of 2e-5, the AdamW optimizer with $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ , a weight decay of 0, the batch size of 8, and accumulation steps of 2. We fine-tune for 5 epochs on our 2.5K data, and use a linear scheduler to gradually decay the learning rate to 0. + +# C Additional Analyses + +In-depth comparisons of different approaches on TVGBench by semantic type. Table 3 provides a detailed performance comparison of various methods on the TVGBench across different semantic categories. Specifically, the abbreviations represent: EC (Environment Change), ES (Environment State), HAC (Human Action - Complex), HAP (Human Action - Procedural), HAS (Human Action - Simple), HP (Human Pose), OA (Object Attribute), OC (Object Counting), OEC (Object Existence - Complex), OES (Object Existence - Simple), and OT (Object Transition). Detailed definition and construction process can be found in Figure 15. + +Time-R1 demonstrates strong competitiveness across multiple semantic categories. First, particularly in the four tasks of HAC, HAS, HP, and OA, Time-R1 achieved the highest scores among all compared methods, showcasing its excellent ability in understanding the details of human actions and identifying object features. For example, Time-R1 achieves an mIoU of 56.2 on HP, which is 11.1 points higher than the second-best method, TRACE, with an mIoU of 45.1. On HAS, Time-R1 reaches 39.3, outperforming Gemini-2.5-Pro's 30.9 by 8.4 points. Second, in the three tasks of ES, EC, and OT, Time-R1 demonstrates strong performance comparable to the top model TRACE, with its performance being very close or immediately following. In the HAP task, Time-R1 also performs excellently, with its performance being in the same tier as Gemini-2.5-Pro. Lastly, all models still show a noticeable gap compared to Gemini in understanding complex instructions, such as in HAP, OC, and OEC. For example, in HAP, which involves procedural activity localization, Gemini achieves 27.6, while our model ranks second with a score of 24.3. In object counting, Gemini attains 31.1, substantially outperforming our model's 21.8. In summary, Time-R1 performs well on both non-human simple instructions and human-related instructions, but there is still room for improvement in complex instruction grounding and object-related grounding. + +Comparison of speed and accuracy between inference library transformers and vLLM. We observe that the inference speed of the implementation in the transformers [51] library is very slow. To address this, we implemented an accelerated inference version using vLLM [23] for all related 7 downstream benchmarks. For example, on TVGBench, the vLLM-based implementation requires only 502 seconds to infer 800 samples using 8 GPUs, whereas the transformers library implementation takes 2520 seconds. This achieves an overall speedup of $5 \times$ . + +# D Ablation Studies + +Ablation of different RFT data filtering strategies. As shown in Table 4, different data filtering strategy in the initial round affects the model's performance. First, appropriate Gaussian filtering outperforms both uniform and random filtering methods. 
Among the Gaussian filtering settings, a mean of 0.3 yields the best results, followed by 0.5 and then 0.7. These findings suggest that incorporating moderately challenging samples during RFT improves the model's generalization more effectively than using either overly easy or extremely difficult examples.

Table 4: Ablation of data filtering strategies.
| Method | R1@0.3 | R1@0.5 | R1@0.7 | mIoU |
| --- | --- | --- | --- | --- |
| random | 39.4 | 26.5 | 16.4 | 27.4 |
| gaussian (0.3) | 41.6 | 28.5 | 15.6 | 28.6 |
| gaussian (0.5) | 40.6 | 28.2 | 16.0 | 28.3 |
| gaussian (0.7) | 37.2 | 26.9 | 15.5 | 26.5 |
| uniform | 40.4 | 28.5 | 15.9 | 28.3 |
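To make the Gaussian strategy in Table 4 concrete, the sketch below shows one plausible implementation of the selection rule described above: each candidate is weighted by a Gaussian centered on a target difficulty (the 0.3 / 0.5 / 0.7 settings), using the fixed spread of 0.2 quoted in the implementation details as the sigma parameter. The precomputed `iou` field and the weighted sampling with NumPy are illustrative assumptions, not the released training code.

```python
import numpy as np

def gaussian_filter(samples, mu, sigma=0.2, keep=2500, seed=0):
    """Select an RFT subset whose difficulty clusters around mu.

    samples: list of dicts, each carrying a precomputed 'iou' score that measures
    how well the base model already grounds the sample (hypothetical field name).
    """
    ious = np.array([s["iou"] for s in samples], dtype=float)
    weights = np.exp(-((ious - mu) ** 2) / (2.0 * sigma ** 2))
    probs = weights / weights.sum()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(samples), size=min(keep, len(samples)), replace=False, p=probs)
    return [samples[i] for i in idx]

# The best-performing row of Table 4 would correspond to mu = 0.3 under this reading.
```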
+ +Table 5: Ablation of KL and CoT in GRPO. + +
| KL | CoT | R1@0.3 | R1@0.5 | R1@0.7 | mIoU |
| --- | --- | --- | --- | --- | --- |
| ✗ | ✗ | 40.4 | 29.1 | 14.9 | 28.1 |
| ✓ | ✗ | 40.8 | 27.4 | 15.0 | 27.7 |
| ✗ | ✓ | 42.9 | 29.5 | 15.0 | 29.1 |
| ✓ | ✓ | 41.6 | 28.5 | 15.6 | 28.6 |
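Table 5 ablates the KL term of the GRPO-style objective. For reference, the sketch below shows how a group-relative advantage and a KL penalty with the coefficient $\beta = 0.04$ quoted above would typically enter the per-token loss; it is a generic GRPO-style formulation with illustrative tensor names, not the project's actual training code.

```python
import torch

def group_advantages(rewards, eps=1e-6):
    """rewards: (G,) rewards for the G responses sampled from one prompt (G = 8 here)."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def per_token_loss(logp_new, logp_old, logp_ref, advantages, beta=0.04, clip_eps=0.2):
    """logp_*: (B, T) per-token log-probs; advantages: (B,) broadcast over tokens."""
    ratio = torch.exp(logp_new - logp_old)  # stated to equal 1 under full fine-tuning at every step
    adv = advantages.unsqueeze(1)
    surrogate = torch.min(ratio * adv, torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv)
    # k3-style estimator of KL(pi_new || pi_ref); dropped entirely in the "no-KL" rows of Table 5
    kl = torch.exp(logp_ref - logp_new) - (logp_ref - logp_new) - 1
    return -(surrogate - beta * kl)  # shape (B, T), reduced as compared in Table 6
```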
+ +Table 6: Comparison of the token-level loss design used by DAPO [56] and the sample-level loss design used by GRPO [45]. + +
| Loss | Charades-STA (R1@0.3 / R1@0.5 / R1@0.7 / mIoU) | ActivityNet (R1@0.3 / R1@0.5 / R1@0.7 / mIoU) | TVGBench (R1@0.3 / R1@0.5 / R1@0.7 / mIoU) |
| --- | --- | --- | --- |
| GRPO | 76.7 / 59.8 / 34.4 / 57.0 | 55.9 / 37.1 / 20.3 / 37.8 | 40.8 / 28.0 / 16.5 / 28.4 |
| DAPO | 77.4 / 60.0 / 34.1 / 57.2 | 56.2 / 37.4 / 20.4 / 38.0 | 41.6 / 28.5 / 15.6 / 28.6 |
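The only difference probed in Table 6 is how that per-token loss is reduced. A minimal sketch of the two reductions, assuming a (B, T) per-token loss tensor and a 0/1 response mask (both torch tensors), is:

```python
def sample_level_loss(per_token, mask):
    """GRPO-style: average inside each response first, then across responses,
    so long chains-of-thought contribute no more than short ones."""
    per_seq = (per_token * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
    return per_seq.mean()

def token_level_loss(per_token, mask):
    """DAPO-style: every valid token weighs equally across the whole batch;
    this is the variant adopted for training here."""
    return (per_token * mask).sum() / mask.sum().clamp(min=1)
```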
+ +Ablation of KL and CoT during GRPO training. As shown in Table 5, incorporating CoT reasoning during training leads to improved performance compared to the No-CoT setting, suggesting that CoT enhances the model's temporal video grounding capabilities. When KL divergence is omitted (No-KL), performance slightly decreases under the No-CoT setting but unexpectedly improves when CoT is present. However, we find that in the No-KL+CoT setting, the model often fails to produce a thinking process, directly jumping to answers. In contrast, using KL divergence helps maintain more logical reasoning that is easier to follow. To balance performance and interpretability, we adopt a training setup that includes both KL and CoT. + +Comparison of tIoU and IoU during multi-epoch training. As shown in Figure 6, tIoU consistently outperforms standard IoU during both the early and late stages of training over the first 5 epochs. Notably, while tIoU steadily improves as training progresses, IoU shows a decline in performance by the fifth epoch. This highlights the advantage of using tIoU as a more stable and reliable reward for temporal video grounding. + +Ablation of sample filtering in multi-epoch training. As shown in Figure 7, applying sample filtering (SF) to remove simpler training samples yields consistent performance improvements across epochs. This suggests that easy samples with high IoU may introduce noise or reduce the effectiveness of learning, and filtering them helps focus the model on more informative and challenging instances. + +Ablation of DAPO & GRPO. The sample-level loss used by GRPO computes the loss by averaging over each individual sample. This approach leads to unequal loss contributions for tokens when dealing with CoTs of varying lengths. DAPO addresses this issue by employing a token-level loss. The underlying principle is that the token-level loss can effectively guide the model in the process of CoT generation, allowing it to learn useful patterns from CoTs of different lengths sampled during training. In Table 6, we compare these two loss designs. We empirically find that DAPO outperforms GRPO on the majority of metrics, thus, we adopt DAPO's loss design. + +Different Model Size. Table 7 presents a performance comparison of different model sizes. These results indicate that larger models achieve better zero-shot performance and continue to outperform smaller models after fine-tuning. These findings support the notion that scaling up model capacity enhances generalization and leads to superior results on the TVG tasks. + +# E Qualitative Result + +Case study of temporal video grounding on Charades and ActivityNet. As shown in Figure 8, in the example above, given a relatively complex language instruction, Time-R1 demonstrates more accurate localization than all baselines, successfully capturing the initial event "goes back to the pink bucket" within the timestamp, while other SoTA models like Gemini-2.5-Pro fail. In the example below, the model accurately localizes the event, excluding "a person is lying on the couch," and correctly distinguishes between sitting and lying, unlike other models, which either localize only a small segment (TimeSuite and VideoChat-Flash) or the entire segment (TRACE and Gemini-2.5-Pro). + +Case study of short video QA on VideoMME and TempCompass. As shown in Figures 9 and 10, Time-R1 demonstrates improved performance over the base model in tasks requiring positional + +Table 7: Performance comparison of different model sizes. + +
| Method | Charades-STA (R1@0.3 / R1@0.5 / R1@0.7 / mIoU) | ActivityNet (R1@0.3 / R1@0.5 / R1@0.7 / mIoU) | TVGBench (R1@0.3 / R1@0.5 / R1@0.7 / mIoU) |
| --- | --- | --- | --- |
| Time-R1-3B | 74.6 / 53.1 / 26.0 / 51.2 | 40.0 / 21.0 / 8.7 / 23.2 | 33.5 / 21.0 / 10.5 / 21.7 |
| Time-R1-3B* | 78.7 / 64.1 / 36.9 / 59.9 | 66.8 / 46.8 / 24.7 / 46.1 | - / - / - / - |
| Time-R1-7B | 78.1 / 60.8 / 35.5 / 58.1 | 58.1 / 39.0 / 21.4 / 40.5 | 41.8 / 29.4 / 16.4 / 29.2 |
| Time-R1-7B* | 82.8 / 72.2 / 50.1 / 60.9 | 73.3 / 55.6 / 34.0 / 52.1 | - / - / - / - |
+ +![](images/274d87af0a8919a0a742cf4bccd701b192201dc967697683fb1144c7bb9411a0.jpg) +Figure 6: Performance comparison of tIoU and IoU in multi-epoch training. + +![](images/17ec16600700266ab49664ef8b1d7c88648951c7538d40c898495f116d77d77e.jpg) +Figure 7: Ablation of sample filtering in multi-epoch training. + +judgment, scene storyline reasoning, and visual reasoning. For example, in Figure 9, Time-R1 correctly identifies that a car in the video is missing its right-front wheel, a detail that the base model fails to recognize. This reflects that Time-R1 likely possesses stronger video localization capabilities, which in turn enhance its visual reasoning ability. In Figure 12, we output a CoT when answering the QA task, providing some interpretability. This example shows that Time-R1's reasoning process is more concise, whereas the base model often reasons correctly but arrives at the wrong answer. This suggests that Time-R1's reasoning may be more effective in guiding the final answer, possibly benefiting from the outcome-driven RL of GRPO. + +Case study of long video QA on EgoSchema and VideoMME. Figure 13 presents a long egocentric video QA example focused on summarizing task steps. In the "Hanging the Dress" case, the base model fails to identify all key steps, while our Time-R1 model correctly selects the answer by generating a more accurate chain-of-thought (CoT). In Figure 14, the task involves identifying the primary tools and ingredients used in the video and understanding their respective contributions. An additional example in Figure 9 involves animated scene reasoning, where Time-R1 correctly infers, based on elements like the presence of a guitar, that the heroes defeated the enemies using the power of music, whereas the base model misinterprets the cause. Across these complex reasoning tasks involving long videos, Time-R1 consistently demonstrates superior performance compared to the base model. + +Illustration of our prompt at training and inference time. Figure 11 presents the prompts used for the temporal video grounding and video QA tasks at both training and inference time. + +Illustration of our prompt to annotate query semantics on TVGBench. Figure 15 presents the prompt used to annotate query semantics in the TVGBench dataset. The prompt is designed to guide the DeepSeek LLM in classifying each input query into one of 11 predefined semantic categories. To improve annotation quality, we refer to the construction of existing benchmarks and carefully select both positive and negative examples for each semantic type. These examples are chosen to include queries that are either easy or difficult for the model to answer, helping to refine the model's understanding and improve labeling accuracy. + +![](images/4fedb324bb47101ec49cfc416e6525aa7ae990795778f2de0a5f618594a37ceb.jpg) + +![](images/5c83fb854d03c1084b8147fd2f0d207efd3035914b8e01683b1b2570e88116f5.jpg) + +![](images/1eeca61a89d9ee552f580381979f06a57a5db62e051689c4db45a8e866a87540.jpg) + +Question: Where is the event "The man goes back to the pink bucket to wash another dark item clothing, he wrings it out in the pink bucket, throws it in the yellow bucket, then wrings his hands together a few times while staring at the camera"? + +(video length: 74s) + +
| Method | Time span |
| --- | --- |
| Ground Truth | 33.1s → 73.5s |
| TimeSuite | 20.0s → 30.0s |
| Gemini-2.5-Pro | 57.0s |
| VideoChat-Flash | 0.0s → 17.2s |
| Trace | 54.9s |
| Time-R1 (ours) | 35.0s → 75.0s |
+ +Time-R1 thinking (ours): The event "The man goes back to the pink bucket to wash another dark item clothing, he wrings it out in the pink bucket, throws it in the yellow bucket, then wrings his hands together a few times while staring at the camera." occurs between 35.00 and 75.00 seconds in the video. + +![](images/9968b6c45f4adeda2cacdb485fed70f4a7bc0887931405f750f4da5ed4176b15.jpg) +Figure 8: Temporal video grounding cases from Charades and ActivityNet benchmark. Time-R1 outperforms other SoTA models. + +![](images/985cd9a1289f0cb3f62a495e911ced0c0cd918a5cdd7fc8cbd83d6f3cdbfeae1.jpg) + +Question: Where is the event "a person is seen sitting on a couch"? +(video length: 31s) + +
| Method | Time span |
| --- | --- |
| Ground Truth | 0.0s → 12.8s |
| TimeSuite | 0.0s → 5.0s |
| Gemini-2.5-Pro | 0.0s → 30.0s |
| VideoChat-Flash | 0.0s → 5.0s |
| Trace | 0.0s → 31.6s |
| Time-R1 (ours) | 0.0s → 10.0s |
+ +Time-R1 thinking (ours): The event "a person is seen sitting on a couch" occurs at the beginning of the video. The person is seated on a couch, holding a phone, and appears to be engaged with it. This aligns with the initial frames of the video. + +![](images/973df124e00e64d66c24e68d6bbc28517340ef9054637a51a8b5356881a59b55.jpg) + +![](images/faec5a13c0067d9b642efba9369da44a4764c136a36678a932e8dec98cf64bc5.jpg) + +![](images/7263b40507e658da86f0c681610219abe6e1979ce94d39e0bea81772100821c5.jpg) + +(video length: 38m) Question: What do heroes of legend use to defeat the enemy based on the video? + +![](images/218347b27395c7ef9db786d9def3ca073d718495208f3fd53493ff1d52eb2704.jpg) + +Their wisdom + +![](images/f84c05262d317e51bd08a7be8e23d18718f79e49c0dc3cc7341e7a68b8d61ef2.jpg) + +Their superpower + +![](images/b3b0e585c0db345043984495ff909736d2924ea3bd232b0d7c13c7bbb8198251.jpg) +Figure 9: Case study on VideoMME (w/o CoT), demonstrating that Time-R1 achieves better performance than the base model. + +(B) A big robot + +![](images/09928d2c575f65fba1f3260fc7ce829e74cec6bcf7107f4b746c46c51ba5284e.jpg) + +4 Power of music + +![](images/12d8fce88e4faed9a6d0609ccef584422d1473a3d74f01c1339a1abef3ffeb4d.jpg) + +![](images/d309857026d6d143f04db4e4465e618ec7bb434a45191c8731c880da1c0f21fe.jpg) + +(video length: 17s) Question: What's wrong with this car? + +![](images/95b3560e9f81ee0ecbf781caf46006fe72d005eca8cfe18df5624de0505a0dc9.jpg) + +It doesn't have a left rear wheel. + +![](images/4a4078e7b7aee98137f556a4e4cde6e7b54c3cfb6912441b8a5d32a36feed0e8.jpg) + +Its headlamp is broken. + +(B) It doesn't have a right front wheel. +(D) Its right door is broken. + +![](images/19eed5f5babcebf1b58cafcfcdc600fe94c6be4b8245ccf23ee8d61f4a6deb8e.jpg) +Figure 10: Case study on TempCompass (w/o CoT), demonstrating that Time-R1 achieves better performance than the base model. + +![](images/3909e72e1f638024ea6b20d4b16e72de1e32666727cf506471db760bb6c9fc8b.jpg) + +![](images/1cbc539d50fd51452839d0189ad47f5bacbad57e73f8ffc6a087f0e5bbfdf5ba.jpg) + +(video length: 10s)Question: Which sentence better captures the essence of the video? + +(A) First, two hand are holding the clay pot and then three hands are holding it. +In the video, three hands are holding the clay pot, then two hands are holding the clay pot. + +![](images/d75c15a560b56adadfafa9355f8b6b0aaad45eebf96fa76dc432641d2e2bf666.jpg) +Figure 11: Illustration of prompts at both training and inference time. + +![](images/f8ca333393071a5a60e1bc1552cf64f7116c991ea0edeeef783b03299fc0fe47.jpg) + +![](images/980926ad5d91bec503d9caba1d831d0c388eaffc5efd5b99e3790c6840b603f2.jpg) + +(video length: 14s)Question: What is the sequence of actions performed by the girl? + +(A) Putting on clothes, jumping into water, taking off clothes +(B) Jumping into water, putting on clothes, taking off clothes +(C) Taking off clothes, putting on clothes, jumping into water + +# Temporal Video Grounding + +# System Prompt + +You are a video analysis expert. + +# Input Prompt + +To accurately pinpoint the event "[EVENT]" in the video, determine the precise time period of the event. Output your thought process within the tags, including analysis with either specific time ranges (xx.xx to xx.xx) in < timestep> tags. Then, provide the start and end times (in seconds, precise to two decimal places) in the format "start time to end time" within the tags. For example: "12.54 to 17.83".""" + +# Video QA + +# System Prompt + +You are a video analysis expert. 
+ +# Input Prompt (w/o CoT) + +Carefully watch the video and pay attention to the cause and sequence of events, the detail and movement of objects and the action and pose of persons. Based on your observations, select the best option that accurately addresses the question. Provide the letter of the best option wrapped in parentheses within the tags. Question: {} {}"" + +# Input Prompt (w/ CoT) + +Carefully watch the video and pay attention to the cause and sequence of events, the detail and movement of objects and the action and pose of persons. Based on your observations, select the best option that accurately addresses the question. Output your thought process within the tags, including analysis with either specific timestamps (xx.xx) or time ranges (xx.xx to xx.xx) in < timestep> tags. Then, provide the letter of the best option wrapped in parentheses within the tags. Question: {} {} + +![](images/8bbda051c7ce9bd5fe9b6cb397b3b363d50c32c6adc1866bcc17300fdf32b49d.jpg) + +![](images/8a70b3af9b1cef5820dbbe2185acb3e04c5212fbd35dc7b48b92d939b2140b3e.jpg) + +![](images/b02c0aecb373fffd3ad6fe4410fe80d7d12502b93d13f4db9fc258d37fa09dd4.jpg) + +(video length: 8s) Question: Is the pineapple being pushed forward by a person? + +![](images/fab3078a8b7c583cae33c013f97fbc814dca78d0c61a8b95825374e61adbf8ef.jpg) + +![](images/94f7d103b06790159b96771fa16bcda15048fc523193a7bdd6ea2c99b0d987f2.jpg) + +# Base model thinking: + +To determine if the pineapple is being pushed forward by a person, let's analyze the video: + +1. The video shows a pineapple on a pink surface. +2. A hand appears in the frame and interacts with the pineapple. +3. The hand lifts the pineapple, indicating that it is being moved by the person. + +Given this analysis, the pineapple is indeed being pushed forward by a person. + +# Time-R1 thinking (ours): + +The video shows a hand reaching into the frame and grasping the pineapple. The hand then lifts the pineapple, indicating that the pineapple is not being pushed forward by a person but rather being moved by the hand. + +![](images/f869e0da1398553a7a9496e63122abdaa9511f576a3870693cbfead808c33cd5.jpg) + +![](images/aa73db1e7f126f232846a4a713ba79a6e8528ec63d50a068a539b99f93b8343e.jpg) + +![](images/a342e247ff90b415a7531500e076e1acdf562b8cb12a3d127ca2b196c21df3c5.jpg) + +(video length: 9s) Question: In which sequence does the man perform the actions? + +(A) Gliding on the skateboard and then jumping +(B) Jumping and then tying his shoelaces +(C) Tying his shoelaces and then gliding on the skateboard +(D) Gliding on the skateboard and then tying his shoelaces + +# Base model thinking: + +Let's analyze the sequence of actions performed by the man in the video: + +1. The man is initially seen standing on a skateboard. +2. He then proceeds to tie his shoelaces. +3. After tying his shoelaces, he glides on the skateboard. + +Given this sequence, the correct answer is: (D) gliding on the skateboard and then tying his shoelaces + +# Time-R1 thinking (ours): + +The sequence of actions performed by the man in the video is as follows: first, he ties his shoelaces, and then he glides on the skateboard. This can be observed from the video where the man is seen tying his shoelaces before he starts gliding on the skateboard. + +Figure 12: Case study on TempCompass (w/ CoT), demonstrating that Time-R1 achieves better performance than the base model. 
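The grounding prompt in Figure 11 instructs the model to answer with a plain "start time to end time" string. A small, hypothetical parser for that format, together with the IoU-style reward it would feed, could look as follows; the regular expression and the zero reward for unparseable answers are illustrative assumptions, not the released reward code.

```python
import re

ANSWER_RE = re.compile(r"(\d+(?:\.\d+)?)\s*to\s*(\d+(?:\.\d+)?)")

def parse_span(answer):
    """Extract a (start, end) pair from text such as '12.54 to 17.83'."""
    m = ANSWER_RE.search(answer)
    if m is None:
        return None
    start, end = float(m.group(1)), float(m.group(2))
    return (start, end) if end >= start else (end, start)

def grounding_reward(answer, gt):
    """IoU-based reward against the ground-truth (start, end); unparseable answers get 0."""
    span = parse_span(answer)
    if span is None:
        return 0.0
    inter = max(0.0, min(span[1], gt[1]) - max(span[0], gt[0]))
    union = (span[1] - span[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0
```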
+ +![](images/81279d7eae18fb1bce4cbee47004116069ae0b3e42a55e520c6dc6cf7e80f94e.jpg) + +![](images/4c0320e18d7afaecdd60f0655f830871000e4ec263ba8222ed59654b57aea321.jpg) + +![](images/53c6db539aaaa3e755ac6983de5dc5d897ec26c76ca462857240666e21e22f9b.jpg) + +(video length: 3m) Question: Summarize the steps taken by c to complete the tasks in this video. don't list every action narrated, but rather focus on major components and how they're executed. + +(A) C folds the dress, places it on the ironing board, and then hangs it up. +(B) C folds the dress, places it on the ironing board, and then packs it. +(C) C folds the dress, places it on the ironing board, and then washes it. +(D) C folds the dress, places it on the ironing board, and then irons it and hangs it up. +(E) C folds the dress, places it on the ironing board, irons it, and then folds it again. + +# Base model thinking: + +Let's analyze the video step by step to determine the correct sequence of actions: + +1. Folding the Dress: The video starts with C folding a blue dress on the ironing board. This is the first major action. +2. Placing the Dress on the Ironing Board: After folding the dress, C places it on the ironing board. +3. Ironing the Dress: C then irons the dress, ensuring it is smooth and free of wrinkles. + +4. Hanging the Dress: Finally, C hangs the dress up, completing the task. + +Now, let's evaluate the options: + +(A) C folds the dress, places it on the ironing board, and then hangs it up. +(B) C folds the dress, places it on the ironing board, and then packs it. +(C) C folds the dress, places it on the ironing board, and then washes it. +(D) C folds the dress, places it on the ironing board, and then irons it and hangs it up. +(E) C folds the dress, places it on the ironing board, irons it, and then folds it again. The correct sequence of actions matches option (D). + +# Time-R1 thinking (ours): + +The video shows C folding a dress, placing it on the ironing board, and then ironing it. The sequence of actions is clear and follows a logical order. The dress is folded, placed on the ironing board, and then ironed, which is a common sequence for completing a task like ironing a dress. + +Figure 13: Case study on EgoSchema (w/ CoT), demonstrating that Time-R1 achieves better performance than the base model. + +![](images/265232fafe8299073c4c7c4ff4a77c04143193438124ab075d3728dc10ab911d.jpg) +Figure 14: Case study on EgoSchema (w/o CoT), demonstrating that Time-R1 achieves better performance than the base model. + +![](images/328bb94b3b32a66cd62b1ed91525cc8163cc93fb06159121e58a9d4a4cb19b48.jpg) + +![](images/14920b340d6c0ba3c4d19e69ffea4b668b0f967e1acf371880dfc3a8a72bc09d.jpg) + +(video length: 3m) Question: Identify the primary tools and ingredients c used throughout the video, and discuss how they contributed to the overall dessert preparation process. + +(A) The primary tools and ingredients c used throughout the video were a spatula, a strainer, a dessert bowl, cocoa powder, and tissue paper. the spatula was used to spread the batter in the bowl. the strainer was used to sieve the cocoa powder into the bowl. the dessert bowl was used to hold the batter. the cocoa powder was used to add flavor and color to the batter. the tissue paper was used to wipe the edges of the bowl. +(B) The primary tools and ingredients c used throughout the video were a mixing bowl, a whisk, X a baking dish, flour, sugar, eggs, and butter. the mixing bowl was used to mix the batter. the whisk was used to beat the eggs. 
the baking dish was used to bake the cake. the flour was used to add structure to the cake. the sugar was used to add sweetness to the cake. the eggs were used to add moisture to the cake. the butter was used to add flavor and richness to the cake. +(C) The primary tools and ingredients c used throughout the video were a muffin tin, a spoon, chocolate chips, flour, sugar, eggs, and butter. the muffin tin was used to bake the muffins. the spoon was used to scoop the batter into the muffin tin. the chocolate chips were used to add flavor and color to the muffins. the flour was used to add structure to the muffins. the sugar was used to add sweetness to the muffins. the eggs were used to add moisture to the muffins. the butter was used to add flavor and richness to the muffins. +(D) The primary tools and ingredients c used throughout the video were a cake pan, a spatula, chocolate frosting, flour, sugar, eggs, and butter. The cake pan was used to bake the cake. The spatula was used to spread the frosting on the cake. The chocolate frosting was used to add flavor and color to the cake. The flour was used to add structure to the cake. The sugar was used to add sweetness to the cake. The eggs were used to add moisture to the cake. The butter was used to add flavor and richness to the cake. +(E) The primary tools and ingredients c used throughout the video were a mixing bowl, a whisk, a baking sheet, flour, sugar, eggs, and butter. the mixing bowl was used to mix the batter. the whisk was used to beat the eggs. the baking sheet was used to bake the cookies. the flour was used to add structure to the cookies. the sugar was used to add sweetness to the cookies. the eggs were used to add moisture to the cookies. the butter was used to add flavor and richness to the cookies. + +# ## Task + +Classify input queries into exactly one of the following categories based on their semantic content: + +# 1. Human Action (Simple) + +- Definition: Singular physical movements or basic interactions. +- Examples: - person opens a book over their head. - The person gets out some ginger. - who did I talk to in the shopping mall? + +# 2. Human Action (Complex) + +- Definition: Single continuous event with intricate components or concurrent elements. +- Examples: - He is talking while several people are using rowing machines. + +- One man wearing blue shirt wearing a jumping leg extension and another man wearing red pants play on a field. +- who did I interact with when I did activity of fixing camping tent? + +# 3. Human Action (procedural) + +- Definition: contains multiple sequential events with explicit temporal boundaries. contains multiple actions, each with a clear start and end. +- Examples: - The person procures a condiment from the pantry, takes a spoon from the drawer which he uses to scoop it into the pan, then returns the condiment to the pantry, places the spoon in the sink and again stirs the pan. + +- The person takes out a spoon from the drawer, scoops some sugar into the glass, stirs it with the juice, and returns the package to the pantry. + +- Negative Examples: - Then the man juices some lemons in a juicer: only one action + +- She gets out a cutting board and knife: only one action +- He then finishes by doing tricks: only one action +- She removes bits of shell until there is a small hole: only one action + +# 4. Human Pose + +- Definition: Static body positions or group configurations. 
Posture descriptors, positional prepositions +- Examples: - Several other people are in the background working out on the equipment. + +- A young child is seen standing before a set of monkey bars. + +# 5. Object Existence (Simple) + +- Definition: Current location/status queries. Simple location prepositions. +- Examples: - Where is the tap? + +- where is the chopsticks? +- In what location did i see the blue tent? + +# 6. Object Existence (Complex) + +- Definition: Queries about historical object positions changed by human actions, requiring temporal-action context (e.g., "after/before [action]"). +- Examples: - Where was the spatula after I first used it? + +- Where was the sieve before I picked it? +- what bolt did I pick? +- What mushroom did i chop + +# 7. Object Attribute + +- Definition: Physical/abstract property inquiries. Property descriptors (color/size/material) +- Examples: - what material did I pick from the shelf? + +- what color is the toilet bin? + +# 8. Object Counting + +- Definition: Quantitative object presence queries. Numeric quantifiers, plural objects +- Examples: - how many tissue paper were on the floor? + +- how many rolls are in the tray + +# 9. Object Transition + +- Definition: State/position change confirmation. Transformation verbs, completion checks +- Examples: - The bulb is broken apart. + +- Did I close fridge? + +# 10. Environment Change + +- Definition: Dynamic scene modifications. Transient elements, overlay content +- Examples: - video ends with clothes/captions scrolling down + +# 11. Environment State + +- Definition: Persistent scene elements. Static overlays, permanent fixtures +- Examples: - Intro states 'Progression: Lisa's First Season' + +-Trend Routing Technology' logo appears + +# Output Format + +Return ONLY the exact category name from: [Human Action (Procedural), Human Action (Complex), Human Action (Simple), Human Pose, Object Existence (Simple), Object Existence (Complex), Object Attribute, Object Counting, Object Transition, Environment Change, Environment State] + +INPUT_prompt = ""Given the query below, classify it into one of the categories mentioned above. Query: {query} Your response: + +Figure 15: Prompts for LLM used to annotate the semantics of each query on TVGBench. 
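For completeness, the sketch below shows one way the annotation prompt above could be driven programmatically; `call_llm` is a placeholder for whichever DeepSeek client is actually used, and the retry and fallback logic is an assumption rather than the authors' annotation pipeline.

```python
CATEGORIES = [
    "Human Action (Procedural)", "Human Action (Complex)", "Human Action (Simple)",
    "Human Pose", "Object Existence (Simple)", "Object Existence (Complex)",
    "Object Attribute", "Object Counting", "Object Transition",
    "Environment Change", "Environment State",
]

INPUT_PROMPT = (
    "Given the query below, classify it into one of the categories mentioned above. "
    "Query: {query} Your response:"
)

def annotate_query(query, call_llm, system_prompt, max_retries=3):
    """Ask the LLM for exactly one category name and validate it against the allowed list."""
    for _ in range(max_retries):
        raw = call_llm(system_prompt, INPUT_PROMPT.format(query=query)).strip()
        for category in CATEGORIES:
            if category.lower() == raw.lower():
                return category
    return "Unlabeled"  # fallback when no valid category is returned (illustrative choice)
```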
\ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13377/images.zip b/data/2025/2503_13xxx/2503.13377/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6e10125d0874aa5e20c37f3d2c97d7145fc4624a --- /dev/null +++ b/data/2025/2503_13xxx/2503.13377/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2cab935b45a76b83197c91cb35b8824c90c8f0083ce9de30f8515a98e4f2e15f +size 908273 diff --git a/data/2025/2503_13xxx/2503.13377/layout.json b/data/2025/2503_13xxx/2503.13377/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..7c80edfc7cc4159cbfb93925fd5fc8c285f242f0 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13377/layout.json @@ -0,0 +1,17929 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 106, + 96, + 505, + 138 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 96, + 505, + 138 + ], + "spans": [ + { + "bbox": [ + 106, + 96, + 505, + 138 + ], + "type": "text", + "content": "Time-R1: Post-Training Large Vision Language Model for Temporal Video Grounding" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "spans": [ + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "text", + "content": "Ye Wang" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "inline_equation", + "content": "^{1*}" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "text", + "content": " Ziheng Wang" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "inline_equation", + "content": "^{1*}" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "text", + "content": " Boshen Xu" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "inline_equation", + "content": "^{1*‡}" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "text", + "content": " Yang Du" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "text", + "content": " Kejun Lin" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "text", + "content": " Zihan Xiao" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "text", + "content": " \nZihao Yue" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "text", + "content": " Jianzhong Ju" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "text", + "content": " Liang Zhang" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "text", + "content": " Dingyi Yang" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "text", + "content": " Xiangnan Fang" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": 
"text", + "content": " Zewen He" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "text", + "content": " \nZhenbo Luo" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "text", + "content": " Wenxuan Wang" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "text", + "content": " Junqi Lin" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "text", + "content": " Jian Luan" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "text", + "content": " Qin Jin" + }, + { + "bbox": [ + 116, + 178, + 494, + 216 + ], + "type": "inline_equation", + "content": "^{1†}" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 198, + 224, + 425, + 259 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 198, + 224, + 425, + 259 + ], + "spans": [ + { + "bbox": [ + 198, + 224, + 425, + 259 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 198, + 224, + 425, + 259 + ], + "type": "text", + "content": "AIM3 Lab, Renmin University of China " + }, + { + "bbox": [ + 198, + 224, + 425, + 259 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 198, + 224, + 425, + 259 + ], + "type": "text", + "content": "MiLM Plus, Xiaomi Inc. " + }, + { + "bbox": [ + 198, + 224, + 425, + 259 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 198, + 224, + 425, + 259 + ], + "type": "text", + "content": "Independent Researcher Project Page: https://xuboshen.github.io/Time-R1/" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 281, + 287, + 329, + 300 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 281, + 287, + 329, + 300 + ], + "spans": [ + { + "bbox": [ + 281, + 287, + 329, + 300 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 140, + 313, + 470, + 499 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 313, + 470, + 499 + ], + "spans": [ + { + "bbox": [ + 140, + 313, + 470, + 499 + ], + "type": "text", + "content": "Temporal Video Grounding (TVG), the task of locating specific video segments based on language queries, is a core challenge in long-form video understanding. While recent Large Vision-Language Models (LVLMs) have shown early promise in tackling TVG through supervised fine-tuning (SFT), their abilities to generalize remain limited. To address this, we propose a novel post-training framework that enhances the generalization capabilities of LVLMs via reinforcement learning (RL). Specifically, our contributions span three key directions: (1) Time-R1: we introduce a reasoning-guided post-training framework via RL with verifiable reward to enhance the capabilities of LVLMs on the TVG task. (2) TimeRFT: we explore data-efficient post-training strategies on our curated RL-friendly dataset, which trains the model to progressively comprehend difficult samples, leading to better generalization. 
(3) TVGBench: we carefully construct a small yet comprehensive benchmark for LVLM evaluation, assessing 11 types of queries and featuring balanced distributions across both videos and queries. Extensive experiments demonstrate that Time-R1 achieves state-of-the-art performance across multiple downstream datasets using only 2.5K training data, while improving its general video understanding capabilities." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 528, + 192, + 540 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 528, + 192, + 540 + ], + "spans": [ + { + "bbox": [ + 105, + 528, + 192, + 540 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 553, + 506, + 609 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 553, + 506, + 609 + ], + "spans": [ + { + "bbox": [ + 104, + 553, + 506, + 609 + ], + "type": "text", + "content": "Understanding long-form videos has long been a core ambition in computer vision [14, 24, 9]. A critical step toward this goal is Temporal Video Grounding (TVG) [15, 61], which enables models to localize video segments corresponding to natural language queries (e.g., \"Find the segment where a person walks into the living room\"). Such capability is fundamental for real-world applications, including smart home assistants [55, 16, 46] and video retrieval systems on online platforms [5, 3]." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 613, + 506, + 691 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 613, + 506, + 691 + ], + "spans": [ + { + "bbox": [ + 104, + 613, + 506, + 691 + ], + "type": "text", + "content": "Traditional TVG approaches adopt a feature-based paradigm, where pretrained models (e.g., CLIP [42], I3D [6]) extract text and video features, followed by task-specific grounding models [30, 22, 29]. However, these methods suffer from error accumulation due to imperfect pretrained features. To overcome these limitations, recent efforts have shifted toward end-to-end Large Vision-Language Models (LVLMs) [44, 60], which directly process long-form videos and text queries. Despite being pretrained on datasets " + }, + { + "bbox": [ + 104, + 613, + 506, + 691 + ], + "type": "inline_equation", + "content": "100 \\times" + }, + { + "bbox": [ + 104, + 613, + 506, + 691 + ], + "type": "text", + "content": " larger than domain-specific benchmarks [46], LVLMs (with " + }, + { + "bbox": [ + 104, + 613, + 506, + 691 + ], + "type": "inline_equation", + "content": "7\\mathrm{B}+" + }, + { + "bbox": [ + 104, + 613, + 506, + 691 + ], + "type": "text", + "content": " parameters) often underperform compared to much smaller feature-based models (e.g.," + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 14, + 220, + 37, + 568 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 14, + 220, + 37, + 568 + ], + "spans": [ + { + "bbox": [ + 14, + 220, + 37, + 568 + ], + "type": "text", + "content": "arXiv:2503.13377v3 [cs.CV] 29 Jun 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 120, + 700, + 474, + 712 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 120, + 700, + 474, + 712 + ], + "spans": [ + { + "bbox": [ + 120, + 700, + 474, + 712 + ], + "type": "text", + "content": "† Corresponding author: Qin Jin; * Equal contribution, listed in alphabetical order; ‡ Project lead." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 731, + 193, + 742 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 731, + 193, + 742 + ], + "spans": [ + { + "bbox": [ + 105, + 731, + 193, + 742 + ], + "type": "text", + "content": "Preprint. Under review." + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 123, + 89, + 490, + 133 + ], + "blocks": [ + { + "bbox": [ + 124, + 72, + 198, + 89 + ], + "lines": [ + { + "bbox": [ + 124, + 72, + 198, + 89 + ], + "spans": [ + { + "bbox": [ + 124, + 72, + 198, + 89 + ], + "type": "text", + "content": "TimeRFT Training training strategy, dataset" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 233, + 72, + 323, + 88 + ], + "lines": [ + { + "bbox": [ + 233, + 72, + 323, + 88 + ], + "spans": [ + { + "bbox": [ + 233, + 72, + 323, + 88 + ], + "type": "text", + "content": "Time-R1 Framework data-efficient, effective, R1-like" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 378, + 72, + 483, + 88 + ], + "lines": [ + { + "bbox": [ + 378, + 72, + 483, + 88 + ], + "spans": [ + { + "bbox": [ + 378, + 72, + 483, + 88 + ], + "type": "text", + "content": "TVGBench Evaluation \nata, 11 types, balanced distributions" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 123, + 89, + 490, + 133 + ], + "lines": [ + { + "bbox": [ + 123, + 89, + 490, + 133 + ], + "spans": [ + { + "bbox": [ + 123, + 89, + 490, + 133 + ], + "type": "image", + "image_path": "3d410d229d3a500cf3af047c76b60ce63002c798ae8c6e3f3a040537d00828a3.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 129, + 147, + 478, + 187 + ], + "blocks": [ + { + "bbox": [ + 124, + 137, + 308, + 146 + ], + "lines": [ + { + "bbox": [ + 124, + 137, + 308, + 146 + ], + "spans": [ + { + "bbox": [ + 124, + 137, + 308, + 146 + ], + "type": "text", + "content": "Improved Temporal Video Grounding (video length: 148s)" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 129, + 147, + 478, + 187 + ], + "lines": [ + { + "bbox": [ + 129, + 147, + 478, + 187 + ], + "spans": [ + { + "bbox": [ + 129, + 147, + 478, + 187 + ], + "type": "image", + "image_path": "68a0bcca670a2e1de605d4cbe5acb8cf4520384516660cd871afe03ef7bcde10.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 126, + 195, + 486, + 252 + ], + "blocks": [ + { + "bbox": [ + 125, + 188, + 283, + 195 + ], + "lines": [ + { + "bbox": [ + 125, + 188, + 283, + 195 + ], + "spans": [ + { + "bbox": [ + 125, + 188, + 283, + 195 + ], + "type": "text", + "content": "Question: Where is the event \"He took out cutting board\"." 
+ } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 126, + 195, + 486, + 252 + ], + "lines": [ + { + "bbox": [ + 126, + 195, + 486, + 252 + ], + "spans": [ + { + "bbox": [ + 126, + 195, + 486, + 252 + ], + "type": "image", + "image_path": "ddbcfed6eab7989e24c9d085a94496876fb255a141cb877b193ccc30f3ed1fd4.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 126, + 253, + 472, + 269 + ], + "lines": [ + { + "bbox": [ + 126, + 253, + 472, + 269 + ], + "spans": [ + { + "bbox": [ + 126, + 253, + 472, + 269 + ], + "type": "text", + "content": "Time-R1 thinking (ours): The event \"He took out cutting board\" occurs between 13.40 and 18.20 seconds. This is when the person is seen placing the cutting board on the counter." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 124, + 290, + 276, + 312 + ], + "blocks": [ + { + "bbox": [ + 124, + 278, + 266, + 287 + ], + "lines": [ + { + "bbox": [ + 124, + 278, + 266, + 287 + ], + "spans": [ + { + "bbox": [ + 124, + 278, + 266, + 287 + ], + "type": "text", + "content": "Improved Short Video QA (video length: 25s)" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 124, + 290, + 276, + 312 + ], + "lines": [ + { + "bbox": [ + 124, + 290, + 276, + 312 + ], + "spans": [ + { + "bbox": [ + 124, + 290, + 276, + 312 + ], + "type": "image", + "image_path": "ab0e03c7fcd44d74e7111cfd85238e02d3b90545d29358fa22e32d3e526193b5.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "bbox": [ + 126, + 326, + 140, + 336 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 126, + 326, + 140, + 336 + ], + "spans": [ + { + "bbox": [ + 126, + 326, + 140, + 336 + ], + "type": "text", + "content": "(A) Y" + } + ] + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 167, + 325, + 192, + 338 + ], + "blocks": [ + { + "bbox": [ + 125, + 316, + 261, + 323 + ], + "lines": [ + { + "bbox": [ + 125, + 316, + 261, + 323 + ], + "spans": [ + { + "bbox": [ + 125, + 316, + 261, + 323 + ], + "type": "text", + "content": "Question: What letter is likely to be shown next?" 
+ } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 167, + 325, + 192, + 338 + ], + "lines": [ + { + "bbox": [ + 167, + 325, + 192, + 338 + ], + "spans": [ + { + "bbox": [ + 167, + 325, + 192, + 338 + ], + "type": "image", + "image_path": "11b7721a4fc8aae3ea951446ef33c233ace8be182dd0eff3f8fe2e9fd8f666f6.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 225, + 325, + 250, + 338 + ], + "blocks": [ + { + "bbox": [ + 225, + 325, + 250, + 338 + ], + "lines": [ + { + "bbox": [ + 225, + 325, + 250, + 338 + ], + "spans": [ + { + "bbox": [ + 225, + 325, + 250, + 338 + ], + "type": "image", + "image_path": "045217fdd83a82b32d557cbe28bd70f580c2da2c89b5005e6eb6f4b289633e50.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "bbox": [ + 281, + 277, + 340, + 284 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 281, + 277, + 340, + 284 + ], + "spans": [ + { + "bbox": [ + 281, + 277, + 340, + 284 + ], + "type": "text", + "content": "Base model thinking:" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 280, + 285, + 488, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 280, + 285, + 488, + 312 + ], + "spans": [ + { + "bbox": [ + 280, + 285, + 488, + 312 + ], + "type": "text", + "content": "... Given this sequence, the next logical letter would be the next letter in the alphabet after 'F', which is 'G'. The letters 'D', 'E', 'F', and 'G' are now on the table, forming the sequence 'DEFG'. Therefore, the next letter to be shown is likely to be 'G'." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 281, + 313, + 352, + 320 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 281, + 313, + 352, + 320 + ], + "spans": [ + { + "bbox": [ + 281, + 313, + 352, + 320 + ], + "type": "text", + "content": "Time-R1 thinking (ours):" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 280, + 320, + 487, + 342 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 280, + 320, + 487, + 342 + ], + "spans": [ + { + "bbox": [ + 280, + 320, + 487, + 342 + ], + "type": "text", + "content": "The sequence of letters shown in the video is \"D\", \"E\", \"F\", and \"G\". This suggests a pattern of showing the next letter in the alphabet. The letters are being placed in a specific order, and the next logical letter after \"G\" would be \"H\"." 
+ } + ] + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 123, + 361, + 274, + 392 + ], + "blocks": [ + { + "bbox": [ + 124, + 351, + 267, + 360 + ], + "lines": [ + { + "bbox": [ + 124, + 351, + 267, + 360 + ], + "spans": [ + { + "bbox": [ + 124, + 351, + 267, + 360 + ], + "type": "text", + "content": "Improved Long Video QA (video length: 59m)" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 123, + 361, + 274, + 392 + ], + "lines": [ + { + "bbox": [ + 123, + 361, + 274, + 392 + ], + "spans": [ + { + "bbox": [ + 123, + 361, + 274, + 392 + ], + "type": "image", + "image_path": "41a71ed56113a83ea742f50f68837de75341378355cfe16e118d39d734ebc0ab.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 444, + 504, + 499 + ], + "lines": [ + { + "bbox": [ + 104, + 444, + 504, + 499 + ], + "spans": [ + { + "bbox": [ + 104, + 444, + 504, + 499 + ], + "type": "text", + "content": "Figure 1: Our contributions include a novel post-training framework for LVLMs via reinforcement learning, Time-R1; an RL fine-tuning strategy along with its associated training dataset, TimeRFT; and a new benchmark, TVGBench, for evaluating LVLMs on the TVG task. Our Time-R1 model not only achieves SoTA TVG performance, but also enhances performance on both short- and long-form multi-choice video question answering tasks." + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_caption" + } + ], + "index": 20 + }, + { + "bbox": [ + 125, + 393, + 276, + 410 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 125, + 393, + 276, + 410 + ], + "spans": [ + { + "bbox": [ + 125, + 393, + 276, + 410 + ], + "type": "text", + "content": "Question: What do the expanding red lines on the map in the first few minutes of the video stand for?" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 125, + 412, + 195, + 433 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 125, + 412, + 195, + 433 + ], + "spans": [ + { + "bbox": [ + 125, + 412, + 195, + 433 + ], + "type": "text", + "content": "(A) The Yellow River (C) Du Fu's route to Xi'an" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 214, + 412, + 277, + 430 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 214, + 412, + 277, + 430 + ], + "spans": [ + { + "bbox": [ + 214, + 412, + 277, + 430 + ], + "type": "text", + "content": "(B) The Silk Road \n(D) The Yangtze River" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 281, + 354, + 340, + 361 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 281, + 354, + 340, + 361 + ], + "spans": [ + { + "bbox": [ + 281, + 354, + 340, + 361 + ], + "type": "text", + "content": "Base model thinking:" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 280, + 361, + 488, + 399 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 280, + 361, + 488, + 399 + ], + "spans": [ + { + "bbox": [ + 280, + 361, + 488, + 399 + ], + "type": "text", + "content": "... The map is used to illustrate the historical journey of Du Fu, a famous Chinese poet, as indicated by the text \"Du Fu's route to Xi'an\" and the historical context provided by the narrator. Given this analysis, the expanding red lines on the map in the first few minutes of the video stand for Du Fu's route to Xi'an.... 
Time-R1 thinking (ours):" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 281, + 399, + 488, + 429 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 281, + 399, + 488, + 429 + ], + "spans": [ + { + "bbox": [ + 281, + 399, + 488, + 429 + ], + "type": "text", + "content": "... 2. The map is shown with red lines expanding from a central point, which is likely Xi'an, a major city on the Silk Road. 3. The map is labeled with \"Silk Road\" and \"The Great Silk Road,\" confirming that the red lines represent the Silk Road. ..." + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 104, + 536, + 504, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 536, + 504, + 559 + ], + "spans": [ + { + "bbox": [ + 104, + 536, + 504, + 559 + ], + "type": "text", + "content": "9M parameters [22]). This raises a critical question: Why do LVLMs, despite their vast pretrained knowledge, fail to excel on TVG?" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 104, + 563, + 504, + 641 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 563, + 504, + 641 + ], + "spans": [ + { + "bbox": [ + 104, + 563, + 504, + 641 + ], + "type": "text", + "content": "We attribute the suboptimal performance of LVLMs to over-penalization of false negatives during supervised fine-tuning (SFT). For instance, when the ground truth timestamp is [2s, 4s], even when the model makes a reasonable prediction of timestamp [1.9s, 3.9s], the autoregressive loss would still be undesirably high. Such disproportionate penalties on reasonable predictions result in overfitting and poor generalization. While previous solutions have attempted to address this by introducing new timestamp tokens into the vocabulary [18, 17, 53] or by appending a regression head to predict timestamps [65], they often sacrifice the pretrained language capabilities of LLMs." + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 104, + 645, + 506, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 645, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 645, + 506, + 723 + ], + "type": "text", + "content": "Inspired by recent success in reinforcement learning (RL) for post-training LLMs [39, 1] with chain-of-thought (CoT) prompting, especially in domains with deterministic answers, such as code generation and mathematical reasoning, we explore whether RL can serve as an effective alternative for TVG. Unlike SFT, RL allows direct optimization of task-specific metrics (e.g., IoU), thereby reducing rigid penalties of autoregressive losses and encouraging plausible timestamp predictions. In this work, we present an RL-based framework, Time-R1, that effectively post-trains LVLMs for TVG and pushes the performance frontier. 
Our contributions include:" + } + ] + } + ], + "index": 30 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 31 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 115, + 70, + 221, + 201 + ], + "blocks": [ + { + "bbox": [ + 115, + 70, + 221, + 201 + ], + "lines": [ + { + "bbox": [ + 115, + 70, + 221, + 201 + ], + "spans": [ + { + "bbox": [ + 115, + 70, + 221, + 201 + ], + "type": "image", + "image_path": "fe508b93644a4c1a251811d6357c99d02cdd5eae3ffec63d502390bd25cac1fb.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 152, + 204, + 184, + 215 + ], + "lines": [ + { + "bbox": [ + 152, + 204, + 184, + 215 + ], + "spans": [ + { + "bbox": [ + 152, + 204, + 184, + 215 + ], + "type": "text", + "content": "(a) VLP" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 238, + 69, + 347, + 202 + ], + "blocks": [ + { + "bbox": [ + 238, + 69, + 347, + 202 + ], + "lines": [ + { + "bbox": [ + 238, + 69, + 347, + 202 + ], + "spans": [ + { + "bbox": [ + 238, + 69, + 347, + 202 + ], + "type": "image", + "image_path": "baa3e64aeacdf876500af8b7229d74eb3ddc81327f265d60a53c62ca18c4a6ea.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 276, + 203, + 307, + 214 + ], + "lines": [ + { + "bbox": [ + 276, + 203, + 307, + 214 + ], + "spans": [ + { + "bbox": [ + 276, + 203, + 307, + 214 + ], + "type": "text", + "content": "(b) SFT" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 354, + 70, + 482, + 201 + ], + "blocks": [ + { + "bbox": [ + 354, + 70, + 482, + 201 + ], + "lines": [ + { + "bbox": [ + 354, + 70, + 482, + 201 + ], + "spans": [ + { + "bbox": [ + 354, + 70, + 482, + 201 + ], + "type": "image", + "image_path": "2cd0c3c3be979c379e8d23cc83ebce767a4565f06d233336e9ec0239188547cd.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 389, + 203, + 442, + 214 + ], + "lines": [ + { + "bbox": [ + 389, + 203, + 442, + 214 + ], + "spans": [ + { + "bbox": [ + 389, + 203, + 442, + 214 + ], + "type": "text", + "content": "(c) RL (ours)" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 104, + 218, + 506, + 243 + ], + "lines": [ + { + "bbox": [ + 104, + 218, + 506, + 243 + ], + "spans": [ + { + "bbox": [ + 104, + 218, + 506, + 243 + ], + "type": "text", + "content": "Figure 2: Comparison of different approaches for the TVG task, including feature-based video-language pretraining (VLP) [37, 22], supervised fine-tuning (SFT) [41, 60], and RL (ours)." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 261, + 506, + 480 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 104, + 261, + 506, + 316 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 261, + 506, + 316 + ], + "spans": [ + { + "bbox": [ + 104, + 261, + 506, + 316 + ], + "type": "text", + "content": "- RL-based framework for temporal video grounding. 
We introduce Time-R1, a reasoning-enhanced post-training framework via RL with verifiable rewards, where the LVLM first generates chain-of-thought descriptions and then predicts timestamps. The post-training process is optimized using Generalized Reinforcement Policy Optimization (GRPO) with a novel reward function, incorporating both a structured template reward and a timestamp-aware tIoU reward." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 316, + 505, + 371 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 316, + 505, + 371 + ], + "spans": [ + { + "bbox": [ + 104, + 316, + 505, + 371 + ], + "type": "text", + "content": "- Time-aware reinforcement fine-tuning. We propose TimeRFT, a reinforcement fine-tuning strategy with dynamic hard sampling, which mines hard samples on a curated dataset and progressively selects low-IoU samples for multi-epoch training. To ensure stable reasoning and reduce hallucinations, we adopt a cold-start approach to generate CoT with video captions. To support RL-friendly training, we curate an RFT dataset with difficulty annotations on the TVG task." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 371, + 504, + 426 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 371, + 504, + 426 + ], + "spans": [ + { + "bbox": [ + 104, + 371, + 504, + 426 + ], + "type": "text", + "content": "- Comprehensive benchmark for LVLMs on TVG. Existing TVG benchmarks are designed for the large-scale evaluation of small models. Considering the inference speed bottlenecks and general-purpose role of LVLMs, we construct TVGBench, a compact yet comprehensive benchmark for TVG. We carefully balance the video distribution, query distribution, and design specific query semantics to ensure that the benchmark is well-suited for evaluating LVLMs." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 426, + 506, + 480 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 426, + 506, + 480 + ], + "spans": [ + { + "bbox": [ + 104, + 426, + 506, + 480 + ], + "type": "text", + "content": "- State-of-the-Art results and generalization. Compared with 7B LVLMs on the temporal video grounding task, our method outperforms all prior SFT-based methods with only 2.5K training data. After fine-tuning on downstream benchmarks like Charades [46] and ActivityNet [5], it surpasses many previous feature-based approaches. Furthermore, Time-R1 also improves the general video understanding on video QA benchmarks like MVBench [26] and VideoMME [13]." + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 105, + 495, + 202, + 507 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 495, + 202, + 507 + ], + "spans": [ + { + "bbox": [ + 105, + 495, + 202, + 507 + ], + "type": "text", + "content": "2 Related Works" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 520, + 506, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 520, + 506, + 696 + ], + "spans": [ + { + "bbox": [ + 104, + 520, + 506, + 696 + ], + "type": "text", + "content": "Temporal video grounding. The TVG task [15, 3] aims to localize temporal segments in untrimmed long videos given natural language queries. Previous works can be broadly categorized into feature-based video-language pretraining and frame-based LVLM methods, as shown in Figure 2. 
Feature-based methods first extract video and language features using pre-trained encoders (e.g., I3D [6], EgoVLP [29], CLIP [42], BERT [12], etc.), then build timestamp prediction modules based on multimodal fused features. Existing methods differ mainly in their design of the multimodal fusion module and timestamp prediction module. For example, SnAG [37] adopts a late fusion strategy and regresses timestamps directly in a single forward pass without proposal generation. While effective, these approaches are fundamentally limited by the quality of the pretrained features. Recent efforts have shifted toward end-to-end frame-based methods by fine-tuning LVLMs using SFT with autoregressive losses [27, 44, 60, 19, 52, 18, 25]. For instance, TRACE [18] treats each event as a combination of timestamp, saliency score, and caption, and fine-tunes the LVLM to generate event sequences autoregressively. However, such methods still underperform on even simple TVG benchmarks like Charades [46], often lagging behind feature-based approaches. In this work, we propose a novel RL-based post-training framework that establishes new state-of-the-art performance for LVLMs on TVG tasks, even surpassing many feature-based methods." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 700, + 504, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 700, + 504, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 700, + 504, + 723 + ], + "type": "text", + "content": "RL in LLMs and LVLMs. RL is a foundational machine learning paradigm applied in diverse domains such as game playing [47], robotics [34], and increasingly, language and vision-language" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 72, + 506, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 72, + 506, + 281 + ], + "spans": [ + { + "bbox": [ + 104, + 72, + 506, + 281 + ], + "type": "text", + "content": "models [39]. RL updates models by interacting with environments and maximizing reward signals. In recent years, RL has profoundly affected the field of LLM and LVLM post-training, which falls into two main categories: Reinforcement Learning with Human Feedback (RLHF) [40, 57] and Reinforcement Learning with Verifiable Reward (RLVR) [1, 39, 7]. Early works find that RLHF can align LLM to generate human preference data, which primarily reduces the safety risks in LLM and hallucination problems in LVLM. For example, RLHF-V [57] collects fine-grained pairs of incorrect and corrected captions and leverages direct preference optimization to optimize the model to generate correct captions, thus mitigating hallucinations. More recent works have explored RLVR in tasks with deterministic answers, which not only benefits mathematical problem solving and code generation in LLMs (e.g., DeepSeek-R1 [1]), but also enhances the generalization of LVLMs across a range of applications, such as visual grounding [32] and visual reasoning [48]. 
As a pioneer among open-source LLMs, DeepSeek-R1 [1] adopts GRPO to enhance reasoning capabilities by designing rule-based rewards that incorporate both reasoning templates and final answers. In the context of LVLMs, MM-Eureka [35] investigates multimodal image reasoning using GRPO, introducing an online filtering mechanism and a two-stage training strategy to stabilize the optimization process. However, existing approaches remain confined to language [1, 56], image understanding [7, 48, 32, 35], and short video understanding [64, 28]. It remains unclear whether and how reinforcement learning impacts long-form video understanding. To bridge this gap, we take a first step by introducing RLVR into LVLMs for the temporal video grounding task." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 105, + 296, + 167, + 308 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 296, + 167, + 308 + ], + "spans": [ + { + "bbox": [ + 105, + 296, + 167, + 308 + ], + "type": "text", + "content": "3 Method" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 319, + 504, + 407 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 319, + 504, + 407 + ], + "spans": [ + { + "bbox": [ + 104, + 319, + 504, + 407 + ], + "type": "text", + "content": "The TVG task aims to temporally localize video segments within long-form videos based on natural language queries. Given a video of duration " + }, + { + "bbox": [ + 104, + 319, + 504, + 407 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 104, + 319, + 504, + 407 + ], + "type": "text", + "content": " seconds, which is represented as a sequence of " + }, + { + "bbox": [ + 104, + 319, + 504, + 407 + ], + "type": "inline_equation", + "content": "T" + }, + { + "bbox": [ + 104, + 319, + 504, + 407 + ], + "type": "text", + "content": " frames " + }, + { + "bbox": [ + 104, + 319, + 504, + 407 + ], + "type": "inline_equation", + "content": "\\{x_{1},\\ldots ,x_{T}\\}" + }, + { + "bbox": [ + 104, + 319, + 504, + 407 + ], + "type": "text", + "content": ", and a language query " + }, + { + "bbox": [ + 104, + 319, + 504, + 407 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 104, + 319, + 504, + 407 + ], + "type": "text", + "content": ", the goal is to identify the temporal boundaries " + }, + { + "bbox": [ + 104, + 319, + 504, + 407 + ], + "type": "inline_equation", + "content": "[t_s,t_e]" + }, + { + "bbox": [ + 104, + 319, + 504, + 407 + ], + "type": "text", + "content": " of the segment that best corresponds to " + }, + { + "bbox": [ + 104, + 319, + 504, + 407 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 104, + 319, + 504, + 407 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 104, + 319, + 504, + 407 + ], + "type": "inline_equation", + "content": "t_s,t_e\\in \\mathbb{R}^+" + }, + { + "bbox": [ + 104, + 319, + 504, + 407 + ], + "type": "text", + "content": ". In this work, we introduce Time-R1, a framework designed to unleash the potential of LVLMs for the TVG task using RL. We first provide background on RL-based training for LLMs in Section 3.1, then detail the training procedure of TimeR1 in Section 3.2. Next, we describe specific training techniques used in TimeRFT in Section 3.3, and finally, we present the construction of our evaluation benchmark TVGBench in Section 3.4." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 420, + 285, + 432 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 420, + 285, + 432 + ], + "spans": [ + { + "bbox": [ + 105, + 420, + 285, + 432 + ], + "type": "text", + "content": "3.1 Background of GRPO: RL for LLM" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 441, + 504, + 508 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 441, + 504, + 508 + ], + "spans": [ + { + "bbox": [ + 104, + 441, + 504, + 508 + ], + "type": "text", + "content": "As a pioneer among open-sourced R1-style LLMs, Deepseek-R1 [1] leverages GRPO to train the policy model " + }, + { + "bbox": [ + 104, + 441, + 504, + 508 + ], + "type": "inline_equation", + "content": "\\pi_{\\theta}" + }, + { + "bbox": [ + 104, + 441, + 504, + 508 + ], + "type": "text", + "content": " (i.e., the LLM) to think before answering, making it particularly well-suited for tasks with well-defined answers, such as mathematical reasoning. In the GRPO framework, given an input question " + }, + { + "bbox": [ + 104, + 441, + 504, + 508 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 104, + 441, + 504, + 508 + ], + "type": "text", + "content": ", the LLM samples " + }, + { + "bbox": [ + 104, + 441, + 504, + 508 + ], + "type": "inline_equation", + "content": "G" + }, + { + "bbox": [ + 104, + 441, + 504, + 508 + ], + "type": "text", + "content": " candidate responses " + }, + { + "bbox": [ + 104, + 441, + 504, + 508 + ], + "type": "inline_equation", + "content": "o = \\{o_1,\\dots,o_G\\}" + }, + { + "bbox": [ + 104, + 441, + 504, + 508 + ], + "type": "text", + "content": ", and a reward function " + }, + { + "bbox": [ + 104, + 441, + 504, + 508 + ], + "type": "inline_equation", + "content": "r(\\cdot)" + }, + { + "bbox": [ + 104, + 441, + 504, + 508 + ], + "type": "text", + "content": " assigns a reward score to each response, yielding " + }, + { + "bbox": [ + 104, + 441, + 504, + 508 + ], + "type": "inline_equation", + "content": "\\{r(o_1),\\ldots ,r(o_G)\\}" + }, + { + "bbox": [ + 104, + 441, + 504, + 508 + ], + "type": "text", + "content": ". 
GRPO encourages the LLM to generate responses that maximize a weighted sum reward " + }, + { + "bbox": [ + 104, + 441, + 504, + 508 + ], + "type": "inline_equation", + "content": "R(o)" + }, + { + "bbox": [ + 104, + 441, + 504, + 508 + ], + "type": "text", + "content": ", defined by:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 200, + 513, + 504, + 544 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 200, + 513, + 504, + 544 + ], + "spans": [ + { + "bbox": [ + 200, + 513, + 504, + 544 + ], + "type": "interline_equation", + "content": "R (o) = \\sum_ {i = 1} ^ {G} \\frac {\\pi_ {\\theta} \\left(o _ {i}\\right)}{\\pi_ {\\theta_ {\\mathrm {o l d}}} \\left(o _ {i}\\right)} \\cdot \\frac {r \\left(o _ {i}\\right) - \\operatorname {m e a n} \\left(\\left\\{r \\left(o _ {i}\\right) \\right\\} _ {i = 1} ^ {G}\\right)}{\\operatorname {s t d} \\left(\\left\\{r \\left(o _ {i}\\right) \\right\\} _ {i = 1} ^ {G}\\right)} \\tag {1}", + "image_path": "373839842f827f90325d5fd063c5990915421a1b97fc27bf663fbbd6b9716b6c.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 551, + 504, + 595 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 551, + 504, + 595 + ], + "spans": [ + { + "bbox": [ + 104, + 551, + 504, + 595 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 551, + 504, + 595 + ], + "type": "inline_equation", + "content": "\\pi_{\\theta}(o)" + }, + { + "bbox": [ + 104, + 551, + 504, + 595 + ], + "type": "text", + "content": " denotes the probability of LLM generating the response " + }, + { + "bbox": [ + 104, + 551, + 504, + 595 + ], + "type": "inline_equation", + "content": "o" + }, + { + "bbox": [ + 104, + 551, + 504, + 595 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 104, + 551, + 504, + 595 + ], + "type": "inline_equation", + "content": "\\pi_{\\theta_{\\mathrm{old}}}" + }, + { + "bbox": [ + 104, + 551, + 504, + 595 + ], + "type": "text", + "content": " represents the LLM parameters from a recently optimized state. 
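The group-relative weighting in Equation (1) is easy to state in a few lines of code. The following is a minimal numeric sketch, not the authors' implementation: function and variable names are illustrative, and in practice the probability ratios would come from the current and old policy log-probabilities.

```python
import math

def group_relative_weight(rewards, ratios):
    """Sketch of Equation (1): standardize each response's reward within its
    group of G samples and weight it by the ratio pi_theta(o_i) / pi_old(o_i)."""
    g = len(rewards)
    mean = sum(rewards) / g
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / g) or 1.0  # avoid division by zero
    return sum(ratio * (r - mean) / std for ratio, r in zip(ratios, rewards))

# Toy group of G = 4 sampled responses: above-average rewards contribute
# positive group-relative advantages, below-average ones negative.
rewards = [1.6, 0.9, 0.3, 1.2]        # e.g., r_tIoU + r_form per response
ratios = [1.02, 0.98, 1.00, 1.01]     # pi_theta / pi_theta_old per response
print(round(group_relative_weight(rewards, ratios), 3))
# The full objective (Equation 2) additionally subtracts beta * KL(pi_theta || pi_ref),
# which is omitted from this sketch.
```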
To ensure training stability and avoid large deviations from the original language model behavior, the final training objective incorporates a KL-divergence regularization term [1], penalizing divergence between " + }, + { + "bbox": [ + 104, + 551, + 504, + 595 + ], + "type": "inline_equation", + "content": "\\pi_{\\theta}" + }, + { + "bbox": [ + 104, + 551, + 504, + 595 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 551, + 504, + 595 + ], + "type": "inline_equation", + "content": "\\pi_{\\mathrm{ref}}" + }, + { + "bbox": [ + 104, + 551, + 504, + 595 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 219, + 600, + 504, + 617 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 219, + 600, + 504, + 617 + ], + "spans": [ + { + "bbox": [ + 219, + 600, + 504, + 617 + ], + "type": "interline_equation", + "content": "\\max _ {\\pi_ {\\theta}} \\mathbb {E} _ {o \\sim \\pi_ {\\theta_ {\\mathrm {o l d}}} (p)} [ R (o) - \\beta \\mathrm {D} _ {\\mathrm {K L}} (\\pi_ {\\theta} \\| \\pi_ {\\mathrm {r e f}}) ] \\tag {2}", + "image_path": "f739ce47dba11cda080e0b6264b4eb0f546f81a13d3c34b65f2bdde2ab65573b.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 622, + 419, + 635 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 622, + 419, + 635 + ], + "spans": [ + { + "bbox": [ + 104, + 622, + 419, + 635 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 622, + 419, + 635 + ], + "type": "inline_equation", + "content": "\\beta" + }, + { + "bbox": [ + 104, + 622, + 419, + 635 + ], + "type": "text", + "content": " is a scaling coefficient. We omit the clipping operation for simplicity." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 647, + 324, + 658 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 647, + 324, + 658 + ], + "spans": [ + { + "bbox": [ + 104, + 647, + 324, + 658 + ], + "type": "text", + "content": "3.2 Time-R1: RL for Temporal Video Grounding" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 667, + 506, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 667, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 667, + 506, + 723 + ], + "type": "text", + "content": "Since the TVG task has defined answers and well-established evaluation metrics, RL can optimize LVLMs for task-specific performance through tailored reward design. To enhance interpretability and align with human-like reasoning, we additionally incorporate an explicit \"thinking process\" prior to timestamp prediction. This process encourages the model to produce contextualized video descriptions that support its final decision. We detail our reward modeling and training process below." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "spans": [ + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "text", + "content": "Reward modeling. 
The reward " + }, + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "inline_equation", + "content": "r_i" + }, + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "text", + "content": " plays a crucial role in guiding the model's learning objective. To encourage effective temporal grounding with an explicit reasoning process, we design a composite reward function comprising two components: the timestamp-aware Intersection over Union (IoU) " + }, + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "inline_equation", + "content": "r_{\\mathrm{tIoU}}" + }, + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "text", + "content": " and the reasoning template reward " + }, + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "inline_equation", + "content": "r_{\\mathrm{form}}" + }, + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 121, + 504, + 145 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 121, + 504, + 145 + ], + "spans": [ + { + "bbox": [ + 104, + 121, + 504, + 145 + ], + "type": "text", + "content": "- Timestamp-aware IoU reward " + }, + { + "bbox": [ + 104, + 121, + 504, + 145 + ], + "type": "inline_equation", + "content": "r_{\\mathrm{tIoU}}(\\cdot)" + }, + { + "bbox": [ + 104, + 121, + 504, + 145 + ], + "type": "text", + "content": ". The TVG task primarily uses IoU [58] to evaluate the quality of predicted segments against the ground-truth " + }, + { + "bbox": [ + 104, + 121, + 504, + 145 + ], + "type": "inline_equation", + "content": "[t_s', t_e']" + }, + { + "bbox": [ + 104, + 121, + 504, + 145 + ], + "type": "text", + "content": ", computed as:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 255, + 149, + 505, + 177 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 149, + 505, + 177 + ], + "spans": [ + { + "bbox": [ + 255, + 149, + 505, + 177 + ], + "type": "interline_equation", + "content": "\\mathrm {I o U} = \\frac {\\left[ t _ {s} , t _ {e} \\right] \\cap \\left[ t _ {s} ^ {\\prime} , t _ {e} ^ {\\prime} \\right]}{\\left[ t _ {s} , t _ {e} \\right] \\cup \\left[ t _ {s} ^ {\\prime} , t _ {e} ^ {\\prime} \\right]} \\tag {3}", + "image_path": "c8cd777715d39a3a6dc02bdc1008d5b6e365e2dbd28350b32781ae7f3b8125df.jpg" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 180, + 506, + 301 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 180, + 506, + 301 + ], + "spans": [ + { + "bbox": [ + 104, + 180, + 506, + 301 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 180, + 506, + 301 + ], + "type": "inline_equation", + "content": "A \\cap B" + }, + { + "bbox": [ + 104, + 180, + 506, + 301 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 180, + 506, + 301 + ], + "type": "inline_equation", + "content": "A \\cup B" + }, + { + "bbox": [ + 104, + 180, + 506, + 301 + ], + "type": "text", + "content": " denote the union and intersection between sets A and B, respectively. 
Optimizing for the IoU inherently encourages the LVLM to produce predictions that fall within a permissible range of variation " + }, + { + "bbox": [ + 104, + 180, + 506, + 301 + ], + "type": "inline_equation", + "content": "\\epsilon" + }, + { + "bbox": [ + 104, + 180, + 506, + 301 + ], + "type": "text", + "content": ", such that " + }, + { + "bbox": [ + 104, + 180, + 506, + 301 + ], + "type": "inline_equation", + "content": "t_{s \\text{ or } e}^{\\prime} - \\epsilon \\leq t_{s \\text{ or } e} \\leq t_{s \\text{ or } e}^{\\prime} + \\epsilon" + }, + { + "bbox": [ + 104, + 180, + 506, + 301 + ], + "type": "text", + "content": " still yields high IoUs. This encourages the LVLM to focus more on the semantic understanding of the event within possible temporal boundaries, rather than rigidly requiring exact temporal alignment like SFT. However, standard IoU may fail to accurately reflect the quality of temporal alignment in certain scenarios. For example, when the ground truth span is [0, 30] (i.e., the full video), any prediction covering more than " + }, + { + "bbox": [ + 104, + 180, + 506, + 301 + ], + "type": "inline_equation", + "content": "30\\%" + }, + { + "bbox": [ + 104, + 180, + 506, + 301 + ], + "type": "text", + "content": " of the video would result in an IoU greater than 0.3. A prediction like [10, 25] would yield an IoU of 0.5, which overestimates its quality despite incorrect timestamps. To address this issue, we introduce the timestamp-aware IoU (tIoU) as a corrective measure. tIoU augments the standard IoU with penalties on timestamp deviations, defined as:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 203, + 306, + 505, + 331 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 306, + 505, + 331 + ], + "spans": [ + { + "bbox": [ + 203, + 306, + 505, + 331 + ], + "type": "interline_equation", + "content": "r _ {\\mathrm {t I o U}} (o) = \\operatorname {I o U} \\cdot \\left(1 - \\frac {\\left| t _ {s} - t _ {s} ^ {\\prime} \\right|}{t}\\right) \\cdot \\left(1 - \\frac {\\left| t _ {e} - t _ {e} ^ {\\prime} \\right|}{t}\\right) \\tag {4}", + "image_path": "bc4280879d66ed7dde4226b29fd7527cf9dda9914c529a486faec93b4e9b6281.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 335, + 505, + 389 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 335, + 505, + 389 + ], + "spans": [ + { + "bbox": [ + 104, + 335, + 505, + 389 + ], + "type": "text", + "content": "This modification penalizes predictions that deviate from the reference timestamps relative to the video duration " + }, + { + "bbox": [ + 104, + 335, + 505, + 389 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 104, + 335, + 505, + 389 + ], + "type": "text", + "content": ". In the earlier example, the reward value changes from 0.5 (IoU) to 0.28 (tIoU), providing a more realistic signal for learning. Overall, tIoU acts as a stricter and more informative reward signal, encouraging the LVLM to develop a deeper temporal understanding of events, rather than relying on superficial shortcuts." 
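The two quantities defined in Equations (3) and (4) can be checked with a few lines of code. This is a minimal sketch with illustrative names; it reproduces the worked example above, where a [10, 25] prediction against a [0, 30] ground truth on a 30-second video scores IoU 0.5 but tIoU of roughly 0.28.

```python
def iou(pred, gt):
    """Equation (3): temporal intersection over union of two segments."""
    (ps, pe), (gs, ge) = pred, gt
    inter = max(0.0, min(pe, ge) - max(ps, gs))
    union = (pe - ps) + (ge - gs) - inter
    return inter / union if union > 0 else 0.0

def tiou(pred, gt, duration):
    """Equation (4): IoU discounted by start/end deviations relative to the
    video duration t, so coarse predictions on long ground-truth spans are
    no longer over-rewarded."""
    (ps, pe), (gs, ge) = pred, gt
    return (iou(pred, gt)
            * (1 - abs(ps - gs) / duration)
            * (1 - abs(pe - ge) / duration))

# Worked example from the text: ground truth [0, 30] on a 30 s video,
# prediction [10, 25] -> IoU = 0.5 but tIoU ~= 0.28.
print(iou((10, 25), (0, 30)))                  # 0.5
print(round(tiou((10, 25), (0, 30), 30), 2))   # 0.28
```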
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 390, + 506, + 511 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 390, + 506, + 511 + ], + "spans": [ + { + "bbox": [ + 104, + 390, + 506, + 511 + ], + "type": "text", + "content": "- Reasoning template reward " + }, + { + "bbox": [ + 104, + 390, + 506, + 511 + ], + "type": "inline_equation", + "content": "r_{\\mathrm{form}}(\\cdot)" + }, + { + "bbox": [ + 104, + 390, + 506, + 511 + ], + "type": "text", + "content": ". In TVG, the video segments relevant to a textual query typically comprise only a small portion of the entire long video. For LVLMs, it is therefore suboptimal to directly predict timestamps without first engaging in a reasoning process to identify the relevant content. Instead, the model should allocate its computational capacity toward reasoning over visual and linguistic cues to better understand the temporal context before making predictions. For instance, given the query \"the man washes dishes\", reasoning that the person is likely in a kitchen can improve temporal localization. Such context-aware inference supports more accurate and semantically aligned predictions. To encourage this behavior, we introduce a template-based reasoning reward, which incentivizes the model to generate intermediate reasoning steps (structured in a predefined format) prior to timestamp localization. The reasoning template reward requires the LVLM to structure its response like \"<...<" + }, + { + "bbox": [ + 104, + 390, + 506, + 511 + ], + "type": "inline_equation", + "content": "t_s" + }, + { + "bbox": [ + 104, + 390, + 506, + 511 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 104, + 390, + 506, + 511 + ], + "type": "inline_equation", + "content": "t_e" + }, + { + "bbox": [ + 104, + 390, + 506, + 511 + ], + "type": "text", + "content": ">\", formulated as:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 217, + 517, + 505, + 544 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 217, + 517, + 505, + 544 + ], + "spans": [ + { + "bbox": [ + 217, + 517, + 505, + 544 + ], + "type": "interline_equation", + "content": "r _ {\\text {f o r m}} (o) = \\left\\{ \\begin{array}{l} 0, \\text {i f} o \\text {h a s w r o n g f r o m a t} \\\\ 1, \\text {i f} o \\text {h a s c o r r e c t f r o m a t} \\end{array} \\right. \\tag {5}", + "image_path": "d91bc35c9d1d2b3f9bdb6bd935909d6fe958ca16790361828393bd6e0fe007c1.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 549, + 272, + 560 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 549, + 272, + 560 + ], + "spans": [ + { + "bbox": [ + 105, + 549, + 272, + 560 + ], + "type": "text", + "content": "The overall reward is the sum of the two:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 247, + 567, + 505, + 580 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 247, + 567, + 505, + 580 + ], + "spans": [ + { + "bbox": [ + 247, + 567, + 505, + 580 + ], + "type": "interline_equation", + "content": "r (o) = r _ {\\mathrm {t I o U}} (o) + r _ {\\text {f o r m}} (o) \\tag {6}", + "image_path": "06ce8287bb442f82e8b1af33f7460843bfe111515417e43044a9218c92cef40a.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 590, + 506, + 645 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 590, + 506, + 645 + ], + "spans": [ + { + "bbox": [ + 104, + 590, + 506, + 645 + ], + "type": "text", + "content": "GRPO training. 
The LVLM " + }, + { + "bbox": [ + 104, + 590, + 506, + 645 + ], + "type": "inline_equation", + "content": "\\mathcal{F}(\\cdot)" + }, + { + "bbox": [ + 104, + 590, + 506, + 645 + ], + "type": "text", + "content": " takes the video frames " + }, + { + "bbox": [ + 104, + 590, + 506, + 645 + ], + "type": "inline_equation", + "content": "x_{1},\\ldots ,x_{t}" + }, + { + "bbox": [ + 104, + 590, + 506, + 645 + ], + "type": "text", + "content": " and the language query " + }, + { + "bbox": [ + 104, + 590, + 506, + 645 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 104, + 590, + 506, + 645 + ], + "type": "text", + "content": " as input and generates " + }, + { + "bbox": [ + 104, + 590, + 506, + 645 + ], + "type": "inline_equation", + "content": "G" + }, + { + "bbox": [ + 104, + 590, + 506, + 645 + ], + "type": "text", + "content": " candidate responses " + }, + { + "bbox": [ + 104, + 590, + 506, + 645 + ], + "type": "inline_equation", + "content": "o_1,\\dots,o_G" + }, + { + "bbox": [ + 104, + 590, + 506, + 645 + ], + "type": "text", + "content": ", where each response is computed as " + }, + { + "bbox": [ + 104, + 590, + 506, + 645 + ], + "type": "inline_equation", + "content": "o_i = \\mathcal{F}(x_1,\\ldots ,x_t;q)" + }, + { + "bbox": [ + 104, + 590, + 506, + 645 + ], + "type": "text", + "content": ". The reward for each response is calculated using Equation 1, and the model is optimized with the GRPO objective in Equation 2. To focus learning on the reasoning and localization capabilities, we freeze the visual encoder and update only the parameters of the LLM during training." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 658, + 342, + 670 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 658, + 342, + 670 + ], + "spans": [ + { + "bbox": [ + 104, + 658, + 342, + 670 + ], + "type": "text", + "content": "3.3 TimeRFT: Time-Aware RL-Friendly Fine-Tuning" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 677, + 506, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 677, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 677, + 506, + 723 + ], + "type": "text", + "content": "Due to the high computational cost associated with RL training, we explore data-efficient strategies to reduce sample requirements. To this end, we propose TimeRFT, which incorporates time-aware, RL-friendly dataset curation and fine-tuning techniques aimed at enhancing generalization while minimizing training overhead." + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 72, + 504, + 106 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 72, + 504, + 106 + ], + "spans": [ + { + "bbox": [ + 104, + 72, + 504, + 106 + ], + "type": "text", + "content": "RL-friendly dataset curation. We construct the TimeRFT dataset by leveraging only TVG samples and assign a difficulty score to each sample based on the base model's performance. A small subset is then selected for subsequent RL training." 
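The curation and sampling steps spelled out in the bullets below boil down to scoring each sample by the base model's IoU, keeping a moderately difficult subset, and pruning newly easy samples after each epoch. A rough sketch follows; only the Gaussian center of 0.3, the 0.7 exclusion threshold, and the 2.5K subset size come from the text, while the Gaussian width, the sampling-with-replacement shortcut, and all names are assumptions.

```python
import math
import random

def gaussian_weight(iou, center=0.3, sigma=0.15):
    # The 0.3 center is from the text; sigma is an assumed width.
    return math.exp(-((iou - center) ** 2) / (2 * sigma ** 2))

def select_rft_subset(samples, base_ious, k=2500, seed=0):
    """Draw a moderately difficult subset by weighting samples with a Gaussian
    over the base model's IoU (sampling with replacement for brevity; a real
    pipeline would deduplicate or sample without replacement)."""
    rng = random.Random(seed)
    weights = [gaussian_weight(iou) for iou in base_ious]
    return rng.choices(samples, weights=weights, k=min(k, len(samples)))

def drop_solved(samples, current_ious, threshold=0.7):
    """Per-epoch dynamic hard sampling: remove samples the model now localizes
    with IoU above the threshold so later epochs focus on harder cases."""
    return [s for s, iou in zip(samples, current_ious) if iou <= threshold]

# Tiny demo with ten toy samples and their base-model IoUs.
pool = list(range(10))
ious = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
print(select_rft_subset(pool, ious, k=5))
```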
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 110, + 506, + 266 + ], + "type": "list", + "angle": 0, + "index": 3, + "blocks": [ + { + "bbox": [ + 104, + 110, + 504, + 155 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 110, + 504, + 155 + ], + "spans": [ + { + "bbox": [ + 104, + 110, + 504, + 155 + ], + "type": "text", + "content": "- Source data collection. Our training videos are sourced from Internet video datasets including YT-Temporal [54], DiDeMo [3], QuerYD [38], InternVid [49], and HowTo100M [36]. We obtain grounding data with annotations from VTG-IT [17], TimeIT [44], TimePro [60], HTStep [2], and LongVid [27]. This process yields 339K temporal grounding samples." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 155, + 506, + 266 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 155, + 506, + 266 + ], + "spans": [ + { + "bbox": [ + 104, + 155, + 506, + 266 + ], + "type": "text", + "content": "- RFT data filtering. We propose a data selection strategy based on training difficulty to significantly reduce training costs while preserving strong generalization performance. Models trained only on easy samples (e.g., IoU " + }, + { + "bbox": [ + 104, + 155, + 506, + 266 + ], + "type": "inline_equation", + "content": "\\geq 0.7" + }, + { + "bbox": [ + 104, + 155, + 506, + 266 + ], + "type": "text", + "content": ") tend to overfit, whereas training on overly difficult samples (e.g., IoU = 0) often suffers from sparse reward signals, making it hard for the model to receive positive feedback. To strike a balance, we select samples of moderate difficulty that are more conducive to generalization during reinforcement fine-tuning. We first estimate a difficulty score for each sample based on the performance of the base model. For grounding tasks, difficulty is quantified using the IoU between the predicted and ground-truth temporal regions. We then filter out samples that are either too easy or too hard. Specifically, we sample a subset of data from a Gaussian distribution over the IoU axis centered at 0.3, resulting in a set of 2.5K moderately difficult samples for RL training." + } + ] + } + ], + "index": 2 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 104, + 269, + 504, + 324 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 269, + 504, + 324 + ], + "spans": [ + { + "bbox": [ + 104, + 269, + 504, + 324 + ], + "type": "text", + "content": "RFT training strategy. For selected difficult samples, the model may struggle to learn them in a single pass. However, we argue that effectively mastering these challenging cases is essential for improving overall model performance. To this end, we employ a multi-epoch training approach combined with per-epoch sample filtering, allowing the model to repeatedly focus on harder samples and gradually improve its understanding." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 329, + 504, + 461 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 104, + 329, + 504, + 384 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 329, + 504, + 384 + ], + "spans": [ + { + "bbox": [ + 104, + 329, + 504, + 384 + ], + "type": "text", + "content": "- Dynamic hard sampling. We adopt a multi-epoch training strategy coupled with per-epoch sample filtering to enhance learning from difficult examples. 
The model is trained over multiple epochs, and after each epoch, we exclude easy samples with an IoU greater than 0.7 that have become easy. This dynamic curriculum discourages overfitting on easy instances while ensuring consistent exposure to harder samples, ultimately promoting stronger generalization." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 384, + 504, + 461 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 384, + 504, + 461 + ], + "spans": [ + { + "bbox": [ + 104, + 384, + 504, + 461 + ], + "type": "text", + "content": "- Cold start fine-tuning with few CoT data. For smaller models (e.g., 3B parameters), we observe that directly training with RL to generate CoT responses often results in reasoning steps that are either unintelligible or hallucinated, which impairs answer quality. Additionally, the length of generated reasoning during early training stages is difficult to control, leading to an unstable training process. To address these issues, we introduce a cold-start fine-tuning strategy using a small set of CoT-formatted examples that encourage grounded reasoning aligned with video content. Specifically, we guide the model to produce structured sequential captions with associated timestamps with the template as:" + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "spans": [ + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "type": "inline_equation", + "content": "<" + }, + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "type": "text", + "content": " think " + }, + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "type": "inline_equation", + "content": "> < t_{s_1}" + }, + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "type": "inline_equation", + "content": "t_{e_1}:C_1" + }, + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "type": "text", + "content": "; " + }, + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "type": "inline_equation", + "content": "t_{s_2}" + }, + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "type": "inline_equation", + "content": "t_{e_2}:C_2 > < / \\text{think} > < \\text{answer} > t_s" + }, + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "type": "inline_equation", + "content": "t_e < / \\text{answer} > (7)" + }, + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "type": "inline_equation", + "content": "C_i" + }, + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "type": "text", + "content": " represent captions corresponding to video segments " + }, + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "type": "inline_equation", + "content": "[t_{s_i}, t_{e_i}]" + }, + { + "bbox": [ + 104, + 462, + 504, + 489 + ], + "type": "text", + "content": ", respectively." 
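Equation (5) only asks whether a response follows the expected template, and Equation (6) adds that binary term to the tIoU reward. Below is a self-contained sketch of how such a format check and the timestamp parsing might look; the tag strings, the regular expression, and the choice to return zero reward when parsing fails are assumptions based on the template above, not the authors' code.

```python
import re

# Assumed response pattern, following the "<think> ... </think><answer> t_s to t_e </answer>"
# template shown above; the exact tags and regex are illustrative assumptions.
ANSWER_RE = re.compile(
    r"<think>.*</think>\s*<answer>\s*([\d.]+)\s*to\s*([\d.]+)\s*</answer>\s*$",
    re.DOTALL,
)

def _tiou(pred, gt, duration):
    """Compact copy of the Equation (4) sketch so this snippet stands alone."""
    (ps, pe), (gs, ge) = pred, gt
    inter = max(0.0, min(pe, ge) - max(ps, gs))
    union = (pe - ps) + (ge - gs) - inter
    iou = inter / union if union > 0 else 0.0
    return iou * (1 - abs(ps - gs) / duration) * (1 - abs(pe - ge) / duration)

def total_reward(response, gt, duration):
    """Equations (5)-(6): r = r_tIoU + r_form, where r_form = 1 only when the
    response follows the template and the timestamps can be parsed."""
    m = ANSWER_RE.search(response)
    if m is None:
        return 0.0  # wrong format: no format reward and no usable timestamps
    pred = (float(m.group(1)), float(m.group(2)))
    return _tiou(pred, gt, duration) + 1.0

demo = "<think>The person reaches the kitchen around 12 s.</think><answer>12.0 to 18.5</answer>"
print(round(total_reward(demo, gt=(11.0, 19.0), duration=60.0), 3))  # ~1.792
```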
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 499, + 463, + 511 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 499, + 463, + 511 + ], + "spans": [ + { + "bbox": [ + 105, + 499, + 463, + 511 + ], + "type": "text", + "content": "3.4 TVGBench: Evaluation Benchmark for LVLM on Temporal Video Grounding" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 519, + 506, + 597 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 519, + 506, + 597 + ], + "spans": [ + { + "bbox": [ + 104, + 519, + 506, + 597 + ], + "type": "text", + "content": "Existing benchmarks for temporal video grounding either focus on large-scale datasets tailored for smaller models within specific domains (e.g., human activities in ActivityNet) or consist of small, limited test sets (e.g., the 2K home activity samples in Charades) typically used for LVLM evaluation due to their slower inference speed. However, these benchmarks fall short in capturing the evaluation needs of LVLMs, which, despite slower inference, exhibit strong generalization capabilities. To bridge this gap, we introduce TVGBench, a lightweight yet comprehensive evaluation benchmark specifically designed for assessing the performance of LVLMs on temporal video grounding tasks." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 601, + 506, + 635 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 601, + 506, + 635 + ], + "spans": [ + { + "bbox": [ + 104, + 601, + 506, + 635 + ], + "type": "text", + "content": "Data sources. To ensure a comprehensive evaluation, we construct our TVGBench by curating samples from five public benchmarks with a balanced distribution of data sources: Charades-STA [46], ActivityNet-Captions [5], HiREST [59], EgoNLQ [16], and TaCoS [43]." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 639, + 504, + 663 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 639, + 504, + 663 + ], + "spans": [ + { + "bbox": [ + 104, + 639, + 504, + 663 + ], + "type": "text", + "content": "Benchmark statistics. We carefully balance the video duration, video domain, and query center, and construct query semantics in TVGBench to construct 800 instances, as shown in Figure 3." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 667, + 504, + 723 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 104, + 667, + 504, + 700 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 667, + 504, + 700 + ], + "spans": [ + { + "bbox": [ + 104, + 667, + 504, + 700 + ], + "type": "text", + "content": "- Distribution statistics. Video durations in the dataset have a balanced range from short clips up to 3 minutes long. To ensure temporal diversity, the center timestamps of queries are designed to be approximately uniformly distributed across the entire span of each video." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 700, + 504, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 700, + 504, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 700, + 504, + 723 + ], + "type": "text", + "content": "- Query semantics. 
Since the original datasets do not provide semantic labels for queries, we manually define 11 semantic categories grouped under three major types: human, object, and" + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 106, + 70, + 222, + 225 + ], + "blocks": [ + { + "bbox": [ + 106, + 70, + 222, + 225 + ], + "lines": [ + { + "bbox": [ + 106, + 70, + 222, + 225 + ], + "spans": [ + { + "bbox": [ + 106, + 70, + 222, + 225 + ], + "type": "image", + "image_path": "49e9a7fef9722f608faf4d9fd45ba26622fb2a14c87bc1378173e16c8e2e9f1d.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 233, + 504, + 289 + ], + "lines": [ + { + "bbox": [ + 104, + 233, + 504, + 289 + ], + "spans": [ + { + "bbox": [ + 104, + 233, + 504, + 289 + ], + "type": "text", + "content": "Figure 3: Statistics of TVGBench. TVGBench comprises 11 types of queries covering aspects related to humans, objects, and environments. As illustrated in the figure on the right, the distributions of query center, video length, and dataset source are designed to be as balanced as possible. This balanced construction allows for a comprehensive evaluation of model performance across different dimensions, enabling fine-grained analysis along each axis during benchmarking." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 222, + 70, + 369, + 225 + ], + "blocks": [ + { + "bbox": [ + 222, + 70, + 369, + 225 + ], + "lines": [ + { + "bbox": [ + 222, + 70, + 369, + 225 + ], + "spans": [ + { + "bbox": [ + 222, + 70, + 369, + 225 + ], + "type": "image", + "image_path": "8ec672d067af753645e301703b46ab27cdeeb397f1273a933f68de77089f7d74.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 369, + 70, + 504, + 225 + ], + "blocks": [ + { + "bbox": [ + 369, + 70, + 504, + 225 + ], + "lines": [ + { + "bbox": [ + 369, + 70, + 504, + 225 + ], + "spans": [ + { + "bbox": [ + 369, + 70, + 504, + 225 + ], + "type": "image", + "image_path": "ae5ab713fcc4ca382be19915fba333312c734d0076d80ef1075081d5a192af17.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 313, + 504, + 369 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 313, + 504, + 369 + ], + "spans": [ + { + "bbox": [ + 104, + 313, + 504, + 369 + ], + "type": "text", + "content": "environment. We leverage DeepSeek-V3 [11] to annotate the semantic type of each query and ensure approximate balance across these categories. While most categories are evenly represented, the Human Action Simple (HAS) category is slightly overrepresented due to inherent dataset bias (simple indoor actions are more common). In such cases, we prioritize achieving a balance across datasets from different domains while maintaining semantic diversity, accepting a skew in HAS." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 388, + 192, + 402 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 388, + 192, + 402 + ], + "spans": [ + { + "bbox": [ + 105, + 388, + 192, + 402 + ], + "type": "text", + "content": "4 Experiments" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 415, + 504, + 482 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 415, + 504, + 482 + ], + "spans": [ + { + "bbox": [ + 104, + 415, + 504, + 482 + ], + "type": "text", + "content": "We first present our experimental setup in Section 4.1. Then, we evaluate our model from three key perspectives: (1) Comparison with state-of-the-art methods in Section 4.2: We evaluate our model across multiple TVG benchmarks to assess its performance against existing approaches; (2) Ablation studies and analyses in Section 4.3: We examine the individual contributions of each component in our framework to better understand their roles in overall performance. We also compare RL and SFT strategies across TVG, short video QA, and long video QA tasks." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 498, + 216, + 510 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 498, + 216, + 510 + ], + "spans": [ + { + "bbox": [ + 105, + 498, + 216, + 510 + ], + "type": "text", + "content": "4.1 Experimental Setup" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 520, + 506, + 662 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 520, + 506, + 662 + ], + "spans": [ + { + "bbox": [ + 104, + 520, + 506, + 662 + ], + "type": "text", + "content": "**Benchmarks.** We evaluate our model on a wide range of benchmarks covering both temporal video grounding and general video understanding tasks, including: (1) Charades-STA [46] contains 6,672 long videos capturing indoor human activities. The official split for the TVG task includes 12,408 clip-query pairs for training and 3,720 for testing. (2) ActivityNet [5] comprises 20K long videos with an average of 3.65 clip-query pairs per video. Following previous work in fine-tuning setting [62, 22] for the TVG task, we use the standard dataset splits with 37,421 training, 17,505 validation, and 17,031 test samples. (3) MVBench [26] is a short video QA benchmark focused on temporal reasoning. It includes 4K QA pairs for 20 types of tasks. (4) TempCompass [31] assesses fine-grained temporal understanding with 410 short videos. We use all multi-choice QA tasks except for the video captioning task. (5) EgoSchema [33] features 5K egocentric video clips, each approximately 3 minutes long, with temporally demanding QA pairs. (6) VideoMME [13] is a general video QA benchmark covering diverse domains. It contains 2.7K QA samples over videos of varied lengths, ranging from 11 seconds to 1 hour. We use the long video split for evaluation." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 666, + 506, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 666, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 666, + 506, + 723 + ], + "type": "text", + "content": "Implementation details. Unless otherwise specified, we use Qwen2.5-VL-7B [4] as the base model. To strike a balance between training efficiency and memory consumption, we sample video frames at 2 FPS and adaptively resize each video input to contain approximately 2.8 million pixels. 
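The sentence that follows gives the concrete numbers for a 50-second clip; the arithmetic behind them treats the 2.8-million figure as a per-video budget shared across all sampled frames and counting the three colour channels, which is the reading consistent with the 96 x 96 x 3 example. A small sketch with illustrative names:

```python
import math

FPS = 2                 # frame sampling rate used for training inputs
PIXEL_BUDGET = 2.8e6    # approximate per-video budget of H * W * 3 values summed over frames

def adaptive_resolution(duration_s):
    """Return (num_frames, side) so that num_frames * side * side * 3 stays near the budget."""
    num_frames = int(duration_s * FPS)
    side = int(math.sqrt(PIXEL_BUDGET / (num_frames * 3)))
    return num_frames, side

# A 50-second video: 100 frames at roughly 96 x 96 x 3, matching the example in the text.
print(adaptive_resolution(50))   # (100, 96)
```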
For instance, a 50-second video yields 100 frames, each with a resolution of roughly " + }, + { + "bbox": [ + 104, + 666, + 506, + 723 + ], + "type": "inline_equation", + "content": "96 \\times 96 \\times 3" + }, + { + "bbox": [ + 104, + 666, + 506, + 723 + ], + "type": "text", + "content": ". During the reinforcement fine-tuning phase, we train for 5 epochs using a batch size of 8 and select the final" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 115, + 123, + 495, + 301 + ], + "blocks": [ + { + "bbox": [ + 104, + 70, + 506, + 114 + ], + "lines": [ + { + "bbox": [ + 104, + 70, + 506, + 114 + ], + "spans": [ + { + "bbox": [ + 104, + 70, + 506, + 114 + ], + "type": "text", + "content": "Table 1: Performance of temporal video grounding on Charades-STA, ActivityNet, and TVGBench. The methods marked in gray* represent fine-tuning on corresponding benchmarks, while those in black indicate zero-shot settings. We compare our Time-R1 against existing 7B open-source LVLMs, as well as state-of-the-art VLP models." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 115, + 123, + 495, + 301 + ], + "lines": [ + { + "bbox": [ + 115, + 123, + 495, + 301 + ], + "spans": [ + { + "bbox": [ + 115, + 123, + 495, + 301 + ], + "type": "table", + "html": "
TypeMethodCharades-STAActivityNetTVGBench
R1@0.3R1@0.5R1@0.7R1@0.3R1@0.5R1@0.7R1@0.3R1@0.5R1@0.7
VLP2D-TAN* [63]57.345.827.960.443.425.0---
UniVTG* [30]72.660.238.656.143.424.3---
SSRN* [66]-65.542.6-54.533.2---
SnAG* [37]-64.646.2-48.630.6---
EaTR* [22]-68.444.9-58.237.6---
Gemini-2.5-Pro [10]------39.124.412.8
SFTChatVTG [41]52.733.015.940.722.59.4---
TimeChat [44]-32.213.436.220.29.522.411.95.3
HawkEye [50]50.631.414.549.129.310.7---
VTimeLLM [21]51.027.511.444.027.814.3---
TimeSuite [60]69.948.724.0---31.118.08.9
VideoChat-Flash [27]74.553.127.6---32.819.810.4
TRACE [18]-40.319.4-37.724.037.025.514.6
HawkEye* [50]72.558.328.855.934.717.9---
TimeSuite* [60]79.467.143.0------
RLTime-R1 (ours)78.160.835.358.639.021.441.829.416.4
Time-R1 (ours)*82.872.250.173.355.634.0---
", + "image_path": "275ffc7437d3a78479fde2a1b21389a3e2fa2483334aa6f8d08a7214dd4b07ae.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 106, + 312, + 504, + 437 + ], + "blocks": [ + { + "bbox": [ + 106, + 312, + 504, + 437 + ], + "lines": [ + { + "bbox": [ + 106, + 312, + 504, + 437 + ], + "spans": [ + { + "bbox": [ + 106, + 312, + 504, + 437 + ], + "type": "image", + "image_path": "873721d52378b8f0aedc1559ae1943111394a4e815f0185d8745fbb2e288dd0d.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 444, + 504, + 478 + ], + "lines": [ + { + "bbox": [ + 104, + 444, + 504, + 478 + ], + "spans": [ + { + "bbox": [ + 104, + 444, + 504, + 478 + ], + "type": "text", + "content": "Figure 4: Comparison between post-training paradigms across various tasks, including temporal video grounding, short video QA, and long video QA. Both \"SFT\" and \"RL\" full-finetune the LLM, while \"SFT-LoRA\" denotes finetuning the LLM with LoRA [20]. The \"Base\" is Qwen2.5-VL-7B." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 499, + 504, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 499, + 504, + 521 + ], + "spans": [ + { + "bbox": [ + 104, + 499, + 504, + 521 + ], + "type": "text", + "content": "checkpoint for evaluation. For fine-tuning on downstream benchmarks, we train for 2 epochs. All experiments are conducted on a cluster with " + }, + { + "bbox": [ + 104, + 499, + 504, + 521 + ], + "type": "inline_equation", + "content": "8 \\times" + }, + { + "bbox": [ + 104, + 499, + 504, + 521 + ], + "type": "text", + "content": " NVIDIA A100 GPUs." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 526, + 504, + 583 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 526, + 504, + 583 + ], + "spans": [ + { + "bbox": [ + 104, + 526, + 504, + 583 + ], + "type": "text", + "content": "Evaluation metrics. For TVG, following [44, 60], we adopt the “R1@m” evaluation protocol to compare with state-of-the-art models, which computes the percentage of samples where the top-1 predicted segment has an IoU greater than a threshold " + }, + { + "bbox": [ + 104, + 526, + 504, + 583 + ], + "type": "inline_equation", + "content": "m" + }, + { + "bbox": [ + 104, + 526, + 504, + 583 + ], + "type": "text", + "content": ", with " + }, + { + "bbox": [ + 104, + 526, + 504, + 583 + ], + "type": "inline_equation", + "content": "m \\in \\{0.3, 0.5, 0.7\\}" + }, + { + "bbox": [ + 104, + 526, + 504, + 583 + ], + "type": "text", + "content": ". For brevity, we also adopt mIoU, which calculates the average IoU on all testing data as an alternative metric. For video QA, we report accuracy as the evaluation metric." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 595, + 275, + 607 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 595, + 275, + 607 + ], + "spans": [ + { + "bbox": [ + 105, + 595, + 275, + 607 + ], + "type": "text", + "content": "4.2 Comparison with State-of-the-Art" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 615, + 504, + 639 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 615, + 504, + 639 + ], + "spans": [ + { + "bbox": [ + 104, + 615, + 504, + 639 + ], + "type": "text", + "content": "We compare Time-R1 with state-of-the-art TVG methods, including both traditional video-language pre-training models (VLP) and recent large video-language models fine-tuned via SFT." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 643, + 505, + 720 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 643, + 505, + 720 + ], + "spans": [ + { + "bbox": [ + 104, + 643, + 505, + 720 + ], + "type": "text", + "content": "Time-R1 surpasses SFT-based models in zero-shot settings. As shown in Table 1, in the zero-shot setting, Time-R1 demonstrates strong performance, outperforming SFT-based models that rely on large-scale supervision. Despite using only " + }, + { + "bbox": [ + 104, + 643, + 505, + 720 + ], + "type": "inline_equation", + "content": "2.5\\mathrm{K}" + }, + { + "bbox": [ + 104, + 643, + 505, + 720 + ], + "type": "text", + "content": " samples for RL, Time-R1 achieves leading results across multiple benchmarks. For example, on Charades-STA, Time-R1 attains an R1@0.7 score of 35.3, outperforming VideoChat-Flash (27.6) and TimeSuite (24.0). On ActivityNet, it achieves R1@0.7 score of 21.4, surpassing VTimeLLM (14.3) and TimeChat (10.7). On TVGBench, it scores 16.4, outperforming TRACE (14.6) and Gemini-2.5-Pro (12.8)." + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 72, + 298, + 216 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 72, + 298, + 216 + ], + "spans": [ + { + "bbox": [ + 104, + 72, + 298, + 216 + ], + "type": "text", + "content": "Time-R1* outperforms all SFT-based LVLMs and many traditional VLP-based models. Time-R1* consistently outperforms both traditional VLP-based and SFT-based models on the TVG task. On Charades-STA, it exceeds EaTR and fine-tuned TimeSuite by 3.4-7.1 percentage points across R1@0.3 to R1@0.7. Notably, Time-R1* surpasses TimeSuite despite using far fewer RL samples compared to TimeSuite's 349K SFT examples. On the more challenging ActivityNet dataset, Time-R1* also outperforms SSRN [66] and TRACE [18], achieving significant gains across key metrics." + } + ] + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 310, + 114, + 496, + 206 + ], + "blocks": [ + { + "bbox": [ + 302, + 73, + 506, + 109 + ], + "lines": [ + { + "bbox": [ + 302, + 73, + 506, + 109 + ], + "spans": [ + { + "bbox": [ + 302, + 73, + 506, + 109 + ], + "type": "text", + "content": "Table 2: Ablation of Time-R1-7B training. GF, ME, SF refers to Gaussian Filtering, Multi-EPOCH, and Sample Filtering per epoch, respectively." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 310, + 114, + 496, + 206 + ], + "lines": [ + { + "bbox": [ + 310, + 114, + 496, + 206 + ], + "spans": [ + { + "bbox": [ + 310, + 114, + 496, + 206 + ], + "type": "table", + "html": "
tIoUGFMESFTVGBench
R1@0.3R1@0.5R1@0.7
1XXXX38.024.813.2
2XXX36.023.612.9
3XXX37.225.013.4
4XXX39.926.014.2
5XX38.425.614.1
6XX39.426.516.4
7X41.628.515.6
841.829.416.4
", + "image_path": "d42802b4d80f26ee7294fbd12a8a7ce704f35b47311d772edd6be46bf317bb0a.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 228, + 195, + 239 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 228, + 195, + 239 + ], + "spans": [ + { + "bbox": [ + 105, + 228, + 195, + 239 + ], + "type": "text", + "content": "4.3 Ablation Study" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 247, + 504, + 270 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 247, + 504, + 270 + ], + "spans": [ + { + "bbox": [ + 104, + 247, + 504, + 270 + ], + "type": "text", + "content": "We conduct a detailed ablation on the Time-R1-7B model to investigate the contribution of various training strategies." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 275, + 504, + 342 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 275, + 504, + 342 + ], + "spans": [ + { + "bbox": [ + 104, + 275, + 504, + 342 + ], + "type": "text", + "content": "Utility of TimeRFT and Time-R1 components. As shown in Table 2, both Gaussian Filtering (GF) and Multi-Epoch training (ME) individually improve performance, with ME yielding a more substantial gain, improving from R1@0.7 of 13.2 in row 1 to 14.2 in row 4. Notably, the combination of tIoU supervision and ME (Row 6) leads to a significant boost across all metrics. As more components are added, GF and ME (Row 7), followed by Sample Filtering (SF) in Row 8, the performance continues to improve, ultimately reaching R1@0.5 of 29.4 and R1@0.7 of 16.4." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 346, + 338, + 499 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 346, + 338, + 499 + ], + "spans": [ + { + "bbox": [ + 104, + 346, + 338, + 499 + ], + "type": "text", + "content": "Generalization of RL vs. SFT. As shown in Figure 4, when both RL and SFT perform full fine-tuning of the LLM using a small amount of data, SFT significantly degrades the model's generalization ability across all tasks, whereas RL consistently preserves generalization. While SFT can alleviate this issue by adopting LoRA-based fine-tuning (SFT-LoRA), RL still demonstrates stronger generalization and additionally improves performance on video QA tasks, whereas SFT-LoRA leads to a decline. For example, on ActivityNet, RL improves the mIoU from 16.3 to 29.2, while SFT degrades it to 15.4, and SFT-LoRA improves it to 25.9. On the VideoMMe video QA benchmark, RL increases the performance from 53.0 to 54.2, whereas SFT-LoRA decreases it to 51.7." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 504, + 337, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 504, + 337, + 559 + ], + "spans": [ + { + "bbox": [ + 104, + 504, + 337, + 559 + ], + "type": "text", + "content": "Impact of cold start. As shown in Figure 5, cold start boosts the performance of the 3B model and reduces the number of thinking tokens for both models. We attribute this to the cold start's function to suppress hallucinations, which tend to be more prevalent in weaker models." 
+ } + ] + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 349, + 363, + 501, + 480 + ], + "blocks": [ + { + "bbox": [ + 349, + 363, + 501, + 480 + ], + "lines": [ + { + "bbox": [ + 349, + 363, + 501, + 480 + ], + "spans": [ + { + "bbox": [ + 349, + 363, + 501, + 480 + ], + "type": "image", + "image_path": "9024b0a012288071a2c51ac867ec685fd8c7521c85e00d3d26347c3a064034e0.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 342, + 492, + 504, + 536 + ], + "lines": [ + { + "bbox": [ + 342, + 492, + 504, + 536 + ], + "spans": [ + { + "bbox": [ + 342, + 492, + 504, + 536 + ], + "type": "text", + "content": "Figure 5: Impact of SFT-based cold start on IoU performance and thinking token count, with token counts marked by " + }, + { + "bbox": [ + 342, + 492, + 504, + 536 + ], + "type": "inline_equation", + "content": "\\star" + }, + { + "bbox": [ + 342, + 492, + 504, + 536 + ], + "type": "text", + "content": " on dashed lines." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 574, + 185, + 586 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 574, + 185, + 586 + ], + "spans": [ + { + "bbox": [ + 105, + 574, + 185, + 586 + ], + "type": "text", + "content": "5 Conclusion" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 599, + 506, + 720 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 599, + 506, + 720 + ], + "spans": [ + { + "bbox": [ + 104, + 599, + 506, + 720 + ], + "type": "text", + "content": "In this work, we present Time-R1, a reinforcement learning based post-training framework that significantly improves the generalization of Large Vision-Language Models for Temporal Video Grounding. Unlike prior methods that rely on large-scale supervised fine-tuning, Time-R1 leverages a verifiable reward signal to unlock strong temporal reasoning from pretrained models using limited data. Our contributions include: (1) Time-R1, a reasoning-guided post-training framework that enhances TVG via RL; (2) TimeRFT, a curated dataset and training strategy that fosters temporal grounding; (3) TVGBench, a small yet comprehensive benchmark for evaluating LVLMs on TVG. Extensive experiments show that Time-R1 achieves SoTA performance across TVG benchmarks in both zero-shot and fine-tuned settings, surpassing prior LVLMs and traditional VLP-based models, while also improving general video understanding. We hope this work inspires future directions in data-efficient and generalizable video-language understanding via reinforcement learning." 
+ } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 71, + 165, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 71, + 165, + 83 + ], + "spans": [ + { + "bbox": [ + 106, + 71, + 165, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 106, + 91, + 506, + 722 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 111, + 91, + 505, + 112 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 91, + 505, + 112 + ], + "spans": [ + { + "bbox": [ + 111, + 91, + 505, + 112 + ], + "type": "text", + "content": "[1] Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025. 2, 4" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 111, + 116, + 506, + 147 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 116, + 506, + 147 + ], + "spans": [ + { + "bbox": [ + 111, + 116, + 506, + 147 + ], + "type": "text", + "content": "[2] Triantafyllos Afouras, Effrosyni Mavroudi, Tushar Nagarajan, Huiyu Wang, and Lorenzo Torresani. Ht-step: Aligning instructional articles with how-to videos. Advances in Neural Information Processing Systems, 36:50310-50326, 2023. 6" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 111, + 150, + 506, + 182 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 150, + 506, + 182 + ], + "spans": [ + { + "bbox": [ + 111, + 150, + 506, + 182 + ], + "type": "text", + "content": "[3] Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. Localizing moments in video with natural language. In Proceedings of the IEEE international conference on computer vision, pages 5803-5812, 2017. 1, 3, 6" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 111, + 185, + 506, + 234 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 185, + 506, + 234 + ], + "spans": [ + { + "bbox": [ + 111, + 185, + 506, + 234 + ], + "type": "text", + "content": "[4] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-v1 technical report. arXiv preprint arXiv:2502.13923, 2025.7" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 111, + 239, + 504, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 239, + 504, + 270 + ], + "spans": [ + { + "bbox": [ + 111, + 239, + 504, + 270 + ], + "type": "text", + "content": "[5] Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 961-970, 2015. 
1, 3, 6, 7" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 111, + 274, + 506, + 304 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 274, + 506, + 304 + ], + "spans": [ + { + "bbox": [ + 111, + 274, + 506, + 304 + ], + "type": "text", + "content": "[6] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6299-6308, 2017. 1, 3" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 111, + 308, + 504, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 308, + 504, + 330 + ], + "spans": [ + { + "bbox": [ + 111, + 308, + 504, + 330 + ], + "type": "text", + "content": "[7] Liang Chen, Lei Li, Haozhe Zhao, Yifan Song, and Vinci. R1-v: Reinforcing super generalization ability in vision-language models with less than $3. https://github.com/Deep-Agent/R1-V, 2025.4" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 111, + 332, + 504, + 364 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 332, + 504, + 364 + ], + "spans": [ + { + "bbox": [ + 111, + 332, + 504, + 364 + ], + "type": "text", + "content": "[8] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructlip: Towards general-purpose vision-language models with instruction tuning, 2023. 14" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 111, + 367, + 504, + 388 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 367, + 504, + 388 + ], + "spans": [ + { + "bbox": [ + 111, + 367, + 504, + 388 + ], + "type": "text", + "content": "[9] Trevor Darrell and Alex Pentland. Space-time gestures. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 335-340. IEEE, 1993. 1" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 106, + 391, + 504, + 413 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 391, + 504, + 413 + ], + "spans": [ + { + "bbox": [ + 106, + 391, + 504, + 413 + ], + "type": "text", + "content": "[10] Google DeepMind. Gemini 2.5: Our most intelligent ai model. Google DeepMind, 2025. Model ID: gemini-2.5-pro-preview-03-25. 8, 14, 15" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 106, + 415, + 504, + 437 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 415, + 504, + 437 + ], + "spans": [ + { + "bbox": [ + 106, + 415, + 504, + 437 + ], + "type": "text", + "content": "[11] DeepSeek-AI. Deepseek llm: Scaling open-source language models with longtermism. arXiv preprint arXiv:2401.02954, 2024. 7" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 106, + 440, + 504, + 471 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 440, + 504, + 471 + ], + "spans": [ + { + "bbox": [ + 106, + 440, + 504, + 471 + ], + "type": "text", + "content": "[12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171–4186, 2019. 
3" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 106, + 475, + 506, + 515 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 475, + 506, + 515 + ], + "spans": [ + { + "bbox": [ + 106, + 475, + 506, + 515 + ], + "type": "text", + "content": "[13] Chaoyou Fu, Yuhan Dai, Yongdong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, et al. Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 3, 7" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 106, + 519, + 504, + 540 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 519, + 504, + 540 + ], + "spans": [ + { + "bbox": [ + 106, + 519, + 504, + 540 + ], + "type": "text", + "content": "[14] Adrien Gaidon, Zaid Harchaoui, and Cordelia Schmid. Temporal localization of actions with actons. IEEE transactions on pattern analysis and machine intelligence, 35(11):2782-2795, 2013. 1" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 106, + 544, + 506, + 574 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 544, + 506, + 574 + ], + "spans": [ + { + "bbox": [ + 106, + 544, + 506, + 574 + ], + "type": "text", + "content": "[15] Jiyang Gao, Chen Sun, Zhenheng Yang, and Ram Nevatia. Tall: Temporal activity localization via language query. In Proceedings of the IEEE international conference on computer vision, pages 5267-5275, 2017. 1, 3" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 106, + 578, + 506, + 619 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 578, + 506, + 619 + ], + "spans": [ + { + "bbox": [ + 106, + 578, + 506, + 619 + ], + "type": "text", + "content": "[16] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18995-19012, 2022. 1, 6" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 106, + 622, + 506, + 662 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 622, + 506, + 662 + ], + "spans": [ + { + "bbox": [ + 106, + 622, + 506, + 662 + ], + "type": "text", + "content": "[17] Yongxin Guo, Jingyu Liu, Mingda Li, Dingxin Cheng, Xiaoying Tang, Dianbo Sui, Qingbin Liu, Xi Chen, and Kevin Zhao. Vtg-llm: Integrating timestamp knowledge into video llms for enhanced video temporal grounding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 3302–3310, 2025. 2, 6" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 106, + 666, + 504, + 688 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 666, + 504, + 688 + ], + "spans": [ + { + "bbox": [ + 106, + 666, + 504, + 688 + ], + "type": "text", + "content": "[18] Yongxin Guo, Jingyu Liu, Mingda Li, Qingbin Liu, Xi Chen, and Xiaoying Tang. Trace: Temporal grounding video llm via causal event modeling. arXiv preprint arXiv:2410.05643, 2024. 
2, 3, 8, 9, 14, 15" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 106, + 691, + 504, + 722 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 691, + 504, + 722 + ], + "spans": [ + { + "bbox": [ + 106, + 691, + 504, + 722 + ], + "type": "text", + "content": "[19] Tanveer Hannan, Md Mohaiminul Islam, Jindong Gu, Thomas Seidl, and Gedas Bertasius. Revisionlmm: Recursive vision-language model for temporal grounding in hour-long videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 3" + } + ] + } + ], + "index": 19 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 72, + 506, + 722 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 106, + 72, + 505, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 72, + 505, + 95 + ], + "spans": [ + { + "bbox": [ + 106, + 72, + 505, + 95 + ], + "type": "text", + "content": "[20] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuzhhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. *ICLR*, 1(2):3, 2022. 8, 14" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 106, + 97, + 506, + 129 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 97, + 506, + 129 + ], + "spans": [ + { + "bbox": [ + 106, + 97, + 506, + 129 + ], + "type": "text", + "content": "[21] Bin Huang, Xin Wang, Hong Chen, Zihan Song, and Wenwu Zhu. Vtimellm: Empower llm to grasp video moments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14271-14280, 2024. 8" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 132, + 506, + 163 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 132, + 506, + 163 + ], + "spans": [ + { + "bbox": [ + 107, + 132, + 506, + 163 + ], + "type": "text", + "content": "[22] Jinhyun Jang, Jungin Park, Jin Kim, Hyeongjun Kwon, and Kwanghoon Sohn. Knowing where to focus: Event-aware transformer for video grounding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13846-13856, 2023. 1, 2, 3, 7, 8" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 106, + 167, + 506, + 208 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 167, + 506, + 208 + ], + "spans": [ + { + "bbox": [ + 106, + 167, + 506, + 208 + ], + "type": "text", + "content": "[23] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023. 15" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 107, + 212, + 504, + 233 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 212, + 504, + 233 + ], + "spans": [ + { + "bbox": [ + 107, + 212, + 504, + 233 + ], + "type": "text", + "content": "[24] Ivan Laptev and Patrick Pérez. Retrieving actions in movies. 
In 2007 IEEE 11th International Conference on Computer Vision, pages 1-8. IEEE, 2007. 1" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 237, + 506, + 267 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 237, + 506, + 267 + ], + "spans": [ + { + "bbox": [ + 107, + 237, + 506, + 267 + ], + "type": "text", + "content": "[25] Jiaze Li, Yaya Shi, Zongyang Ma, Haoran Xu, Feng Cheng, Huihui Xiao, Ruiwen Kang, Fan Yang, Tingting Gao, and Di Zhang. imove: Instance-motion-aware video understanding. arXiv preprint arXiv:2502.11594, 2025.3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 272, + 506, + 304 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 272, + 506, + 304 + ], + "spans": [ + { + "bbox": [ + 107, + 272, + 506, + 304 + ], + "type": "text", + "content": "[26] Kunchang Li, Yali Wang, Yinan He, Yizhuo Li, Yi Wang, Yi Liu, Zun Wang, Jilan Xu, Guo Chen, Ping Luo, et al. Mvbench: A comprehensive multi-modal video understanding benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22195-22206, 2024. 3, 7" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 307, + 504, + 338 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 307, + 504, + 338 + ], + "spans": [ + { + "bbox": [ + 107, + 307, + 504, + 338 + ], + "type": "text", + "content": "[27] Xinhao Li, Yi Wang, Jiashuo Yu, Xiangyu Zeng, Yuhan Zhu, Haian Huang, Jianfei Gao, Kunchang Li, Yinan He, Chenting Wang, Yu Qiao, Yali Wang, and Limin Wang. Videochat-flash: Hierarchical compression for long-context video modeling. arXiv preprint arXiv:2501.00574, 2024. 3, 6, 8, 14, 15" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 342, + 506, + 363 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 342, + 506, + 363 + ], + "spans": [ + { + "bbox": [ + 107, + 342, + 506, + 363 + ], + "type": "text", + "content": "[28] Zhenyi Liao, Qingsong Xie, Yanhao Zhang, Zijian Kong, Haonan Lu, Zhenyu Yang, and Zhijie Deng. Improved visual-spatial reasoning via r1-zero-like training. arXiv preprint arXiv:2504.00883, 2025. 4" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 107, + 367, + 506, + 398 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 367, + 506, + 398 + ], + "spans": [ + { + "bbox": [ + 107, + 367, + 506, + 398 + ], + "type": "text", + "content": "[29] Kevin Qinghong Lin, Jinpeng Wang, Mattia Soldan, Michael Wray, Rui Yan, Eric Z Xu, Difei Gao, Rong-Cheng Tu, Wenzhe Zhao, Weijie Kong, et al. Egocentric video-language pretraining. Advances in Neural Information Processing Systems, 35:7575-7586, 2022. 1, 3" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 107, + 402, + 506, + 434 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 402, + 506, + 434 + ], + "spans": [ + { + "bbox": [ + 107, + 402, + 506, + 434 + ], + "type": "text", + "content": "[30] Kevin Qinghong Lin, Pengchuan Zhang, Joya Chen, Shraman Pramanick, Difei Gao, Alex Jinpeng Wang, Rui Yan, and Mike Zheng Shou. Univtg: Towards unified video-language temporal grounding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2794–2804, 2023. 
1, 8" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 107, + 437, + 506, + 458 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 437, + 506, + 458 + ], + "spans": [ + { + "bbox": [ + 107, + 437, + 506, + 458 + ], + "type": "text", + "content": "[31] Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, Lei Li, Sishuo Chen, Xu Sun, and Lu Hou. Tempcompass: Do video llms really understand videos? arXiv preprint arXiv:2403.00476, 2024. 7" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 107, + 461, + 506, + 483 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 461, + 506, + 483 + ], + "spans": [ + { + "bbox": [ + 107, + 461, + 506, + 483 + ], + "type": "text", + "content": "[32] Ziyu Liu, Zeyi Sun, Yuhang Zang, Xiaoyi Dong, Yuhang Cao, Haodong Duan, Dahua Lin, and Jiaqi Wang. Visual-rft: Visual reinforcement fine-tuning. arXiv preprint arXiv:2503.01785, 2025. 4" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 107, + 487, + 506, + 517 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 487, + 506, + 517 + ], + "spans": [ + { + "bbox": [ + 107, + 487, + 506, + 517 + ], + "type": "text", + "content": "[33] Karttikeya Mangalam, Raiymbek Akshulakov, and Jitendra Malik. Egoschema: A diagnostic benchmark for very long-form video language understanding. Advances in Neural Information Processing Systems, 36:46212-46244, 2023. 7" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 107, + 521, + 504, + 543 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 521, + 504, + 543 + ], + "spans": [ + { + "bbox": [ + 107, + 521, + 504, + 543 + ], + "type": "text", + "content": "[34] Gabriel B Margolis and Pulkit Agrawal. Walk these ways: Tuning robot control for generalization with multiplicity of behavior. In Conference on Robot Learning, pages 22-31. PMLR, 2023. 3" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 107, + 546, + 506, + 578 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 546, + 506, + 578 + ], + "spans": [ + { + "bbox": [ + 107, + 546, + 506, + 578 + ], + "type": "text", + "content": "[35] Fanqing Meng, Lingxiao Du, Zongkai Liu, Zhixiang Zhou, Quanfeng Lu, Daocheng Fu, Botian Shi, Wenhai Wang, Junjun He, Kaipeng Zhang, et al. Mm-eureka: Exploring visual aha moment with rule-based large-scale reinforcement learning. arXiv preprint arXiv:2503.07365, 2025. 4" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 107, + 581, + 506, + 612 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 581, + 506, + 612 + ], + "spans": [ + { + "bbox": [ + 107, + 581, + 506, + 612 + ], + "type": "text", + "content": "[36] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In Proceedings of the IEEE/CVF international conference on computer vision, pages 2630-2640, 2019. 6" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 107, + 616, + 504, + 637 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 616, + 504, + 637 + ], + "spans": [ + { + "bbox": [ + 107, + 616, + 504, + 637 + ], + "type": "text", + "content": "[37] Fangzhou Mu, Sicheng Mo, and Yin Li. Snag: Scalable and accurate video grounding. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 18930-18940, 2024. 
3, 8" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 107, + 641, + 506, + 673 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 641, + 506, + 673 + ], + "spans": [ + { + "bbox": [ + 107, + 641, + 506, + 673 + ], + "type": "text", + "content": "[38] Andreea-Maria Oncescu, Joao F Henriques, Yang Liu, Andrew Zisserman, and Samuel Albanie. Queryd: A video dataset with high-quality text and audio narrations. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2265-2269. IEEE, 2021. 6" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 107, + 676, + 241, + 688 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 676, + 241, + 688 + ], + "spans": [ + { + "bbox": [ + 107, + 676, + 241, + 688 + ], + "type": "text", + "content": "[39] OpenAI. Openai o1, 2024. 2, 4" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 107, + 691, + 506, + 722 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 691, + 506, + 722 + ], + "spans": [ + { + "bbox": [ + 107, + 691, + 506, + 722 + ], + "type": "text", + "content": "[40] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744, 2022. 4" + } + ] + } + ], + "index": 20 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 72, + 506, + 723 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 106, + 72, + 505, + 104 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 72, + 505, + 104 + ], + "spans": [ + { + "bbox": [ + 106, + 72, + 505, + 104 + ], + "type": "text", + "content": "[41] Mengxue Qu, Xiaodong Chen, Wu Liu, Alicia Li, and Yao Zhao. Chatvtg: Video temporal grounding via chat with video dialogue large language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1847-1856, 2024. 3, 8" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 106, + 106, + 506, + 147 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 106, + 506, + 147 + ], + "spans": [ + { + "bbox": [ + 106, + 106, + 506, + 147 + ], + "type": "text", + "content": "[42] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PmLR, 2021. 1, 3" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 149, + 506, + 181 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 149, + 506, + 181 + ], + "spans": [ + { + "bbox": [ + 107, + 149, + 506, + 181 + ], + "type": "text", + "content": "[43] Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. Grounding action descriptions in videos. 
Transactions of the Association for Computational Linguistics, 1:25-36, 2013. 6" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 183, + 504, + 216 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 183, + 504, + 216 + ], + "spans": [ + { + "bbox": [ + 107, + 183, + 504, + 216 + ], + "type": "text", + "content": "[44] Shuhuai Ren, Linli Yao, Shicheng Li, Xu Sun, and Lu Hou. Timechat: A time-sensitive multimodal large language model for long video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14313-14323, 2024. 1, 3, 6, 8, 14, 15" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 107, + 217, + 504, + 249 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 217, + 504, + 249 + ], + "spans": [ + { + "bbox": [ + 107, + 217, + 504, + 249 + ], + "type": "text", + "content": "[45] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024. 16" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 251, + 504, + 282 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 251, + 504, + 282 + ], + "spans": [ + { + "bbox": [ + 107, + 251, + 504, + 282 + ], + "type": "text", + "content": "[46] Gunnar A Sigurdsson, Gúl Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. Hollywood in homes: Crowdsourcing data collection for activity understanding. In Proceedings of the European Conference on Computer Vision (ECCV), 2016. 1, 3, 6, 7" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 285, + 504, + 317 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 285, + 504, + 317 + ], + "spans": [ + { + "bbox": [ + 107, + 285, + 504, + 317 + ], + "type": "text", + "content": "[47] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815, 2017. 3" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 318, + 506, + 349 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 318, + 506, + 349 + ], + "spans": [ + { + "bbox": [ + 107, + 318, + 506, + 349 + ], + "type": "text", + "content": "[48] Huajie Tan, Yuheng Ji, Xiaoshuai Hao, Minglan Lin, Pengwei Wang, Zhongyuan Wang, and Shanghang Zhang. Reason-rft: Reinforcement fine-tuning for visual reasoning. arXiv preprint arXiv:2503.20752, 2025.4" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 351, + 504, + 384 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 351, + 504, + 384 + ], + "spans": [ + { + "bbox": [ + 107, + 351, + 504, + 384 + ], + "type": "text", + "content": "[49] Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinhao Li, Guo Chen, Xinyuan Chen, Yaohui Wang, et al. Internvid: A large-scale video-text dataset for multimodal understanding and generation. arXiv preprint arXiv:2307.06942, 2023. 
6" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 107, + 385, + 506, + 407 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 385, + 506, + 407 + ], + "spans": [ + { + "bbox": [ + 107, + 385, + 506, + 407 + ], + "type": "text", + "content": "[50] Yueqian Wang, Xiaojun Meng, Jianxin Liang, Yuxuan Wang, Qun Liu, and Dongyan Zhao. Hawkeye: Training video-text llms for grounding text in videos, 2024. 8" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 107, + 409, + 506, + 471 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 409, + 506, + 471 + ], + "spans": [ + { + "bbox": [ + 107, + 409, + 506, + 471 + ], + "type": "text", + "content": "[51] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online, October 2020. Association for Computational Linguistics. 15" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 107, + 472, + 504, + 504 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 472, + 504, + 504 + ], + "spans": [ + { + "bbox": [ + 107, + 472, + 504, + 504 + ], + "type": "text", + "content": "[52] Yongliang Wu, Xinting Hu, Yuyang Sun, Yizhou Zhou, Wenbo Zhu, Fengyun Rao, Bernt Schiele, and Xu Yang. Number it: Temporal grounding videos like flipping manga. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 3" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 107, + 506, + 506, + 548 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 506, + 506, + 548 + ], + "spans": [ + { + "bbox": [ + 107, + 506, + 506, + 548 + ], + "type": "text", + "content": "[53] Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, and Cordelia Schmid. Vid2seq: Large-scale pretraining of a visual language model for dense video captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10714-10726, 2023. 2" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 107, + 550, + 506, + 592 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 550, + 506, + 592 + ], + "spans": [ + { + "bbox": [ + 107, + 550, + 506, + 592 + ], + "type": "text", + "content": "[54] Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, and Cordelia Schmid. Vid2seq: Large-scale pretraining of a visual language model for dense video captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10714-10726, 2023. 
6, 14" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 107, + 594, + 506, + 644 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 594, + 506, + 644 + ], + "spans": [ + { + "bbox": [ + 107, + 594, + 506, + 644 + ], + "type": "text", + "content": "[55] Jingkang Yang, Shuai Liu, Hongming Guo, Yuhao Dong, Xiamengwei Zhang, Sicheng Zhang, Pengyun Wang, Zitang Zhou, Binzhu Xie, Ziyue Wang, Bei Ouyang, Zhengyu Lin, Marco Cominelli, Zhongang Cai, Yuanhan Zhang, Peiyuan Zhang, Fangzhou Hong, Joerg Widmer, Francesco Gringoli, Lei Yang, Bo Li, and Ziwei Liu. Egolife: Towards egocentric life assistant. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 1" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 107, + 647, + 506, + 678 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 647, + 506, + 678 + ], + "spans": [ + { + "bbox": [ + 107, + 647, + 506, + 678 + ], + "type": "text", + "content": "[56] Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, et al. Dapo: An open-source llm reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025. 4, 14, 16" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 107, + 681, + 504, + 723 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 681, + 504, + 723 + ], + "spans": [ + { + "bbox": [ + 107, + 681, + 504, + 723 + ], + "type": "text", + "content": "[57] Tianyu Yu, Yuan Yao, Haoye Zhang, Taiwen He, Yifeng Han, Ganqu Cui, Jinyi Hu, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun, et al. Rlhf-v: Towards trustworthy mllms via behavior alignment from fine-grained correctional human feedback. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13807-13816, 2024. 4" + } + ] + } + ], + "index": 16 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 72, + 506, + 373 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 106, + 72, + 505, + 104 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 72, + 505, + 104 + ], + "spans": [ + { + "bbox": [ + 106, + 72, + 505, + 104 + ], + "type": "text", + "content": "[58] Yitian Yuan, Xiaohan Lan, Xin Wang, Long Chen, Zhi Wang, and Wenwu Zhu. A closer look at temporal sentence grounding in videos: Dataset and metric. In Proceedings of the 2nd international workshop on human-centric multimedia analysis, pages 13-21, 2021. 5" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 106, + 106, + 506, + 138 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 106, + 506, + 138 + ], + "spans": [ + { + "bbox": [ + 106, + 106, + 506, + 138 + ], + "type": "text", + "content": "[59] Abhay Zala, Jaemin Cho, Satwik Kottur, Xilun Chen, Barlas Oguz, Yashar Mehdad, and Mohit Bansal. Hierarchical video-moment retrieval and step-captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 23056-23065, 2023. 
6" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 106, + 140, + 505, + 182 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 140, + 505, + 182 + ], + "spans": [ + { + "bbox": [ + 106, + 140, + 505, + 182 + ], + "type": "text", + "content": "[60] Xiangyu Zeng, Kunchang Li, Chenting Wang, Xinhao Li, Tianxiang Jiang, Ziang Yan, Songze Li, Yansong Shi, Zhengrong Yue, Yi Wang, Yali Wang, Yu Qiao, and Limin Wang. Timesuite: Improving MLLMs for long video understanding via grounded tuning. In The Thirteenth International Conference on Learning Representations, 2025. 1, 3, 6, 8, 14, 15" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 106, + 184, + 506, + 215 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 184, + 506, + 215 + ], + "spans": [ + { + "bbox": [ + 106, + 184, + 506, + 215 + ], + "type": "text", + "content": "[61] Hao Zhang, Aixin Sun, Wei Jing, and Joey Tianyi Zhou. Temporal sentence grounding in videos: A survey and future directions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(8):10443-10465, 2023. 1" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 106, + 218, + 505, + 249 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 218, + 505, + 249 + ], + "spans": [ + { + "bbox": [ + 106, + 218, + 505, + 249 + ], + "type": "text", + "content": "[62] Songyang Zhang, Houwen Peng, Jianlong Fu, Yijuan Lu, and Jiebo Luo. Multi-scale 2d temporal adjacency networks for moment localization with natural language. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. 7" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 106, + 252, + 505, + 283 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 252, + 505, + 283 + ], + "spans": [ + { + "bbox": [ + 106, + 252, + 505, + 283 + ], + "type": "text", + "content": "[63] Songyang Zhang, Houwen Peng, Jianlong Fu, and Jiebo Luo. Learning 2d temporal adjacent networks for moment localization with natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020. 8" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 106, + 286, + 505, + 308 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 286, + 505, + 308 + ], + "spans": [ + { + "bbox": [ + 106, + 286, + 505, + 308 + ], + "type": "text", + "content": "[64] Xingjian Zhang, Siwei Wen, Wenjun Wu, and Lei Huang. Tinyllava-video-r1: Towards smaller lmms for video reasoning. arXiv preprint arXiv:2504.09641, 2025. 4" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 106, + 310, + 505, + 331 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 310, + 505, + 331 + ], + "spans": [ + { + "bbox": [ + 106, + 310, + 505, + 331 + ], + "type": "text", + "content": "[65] Henghao Zhao, Ge-Peng Ji, Rui Yan, Huan Xiong, and Zechao Li. Videoexpert: Augmented llm for temporal-sensitive video understanding. arXiv preprint arXiv:2504.07519, 2025. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 106, + 334, + 505, + 373 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 334, + 505, + 373 + ], + "spans": [ + { + "bbox": [ + 106, + 334, + 505, + 373 + ], + "type": "text", + "content": "[66] Jiahao Zhu, Daizong Liu, Pan Zhou, Xing Di, Yu Cheng, Song Yang, Wenzheng Xu, Zichuan Xu, Yao Wan, Lichao Sun, and Zeyu Xiong. Rethinking the video sampling and reasoning strategies for temporal sentence grounding. 
In Findings of the Association for Computational Linguistics: EMNLP 2022, 2022. 8, 9" + } + ] + } + ], + "index": 8 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 72, + 505, + 193 + ], + "type": "list", + "angle": 0, + "index": 5, + "blocks": [ + { + "bbox": [ + 106, + 72, + 505, + 83 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 72, + 505, + 83 + ], + "spans": [ + { + "bbox": [ + 106, + 72, + 505, + 83 + ], + "type": "text", + "content": "A Limitations 14" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 106, + 99, + 505, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 99, + 505, + 111 + ], + "spans": [ + { + "bbox": [ + 106, + 99, + 505, + 111 + ], + "type": "text", + "content": "B Implementation Details 14" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 106, + 126, + 505, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 126, + 505, + 138 + ], + "spans": [ + { + "bbox": [ + 106, + 126, + 505, + 138 + ], + "type": "text", + "content": "C Additional Analyses 15" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 106, + 154, + 505, + 165 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 154, + 505, + 165 + ], + "spans": [ + { + "bbox": [ + 106, + 154, + 505, + 165 + ], + "type": "text", + "content": "D Ablation Studies 15" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 106, + 181, + 505, + 193 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 181, + 505, + 193 + ], + "spans": [ + { + "bbox": [ + 106, + 181, + 505, + 193 + ], + "type": "text", + "content": "E Qualitative Result 16" + } + ] + } + ], + "index": 4 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 106, + 210, + 188, + 222 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 210, + 188, + 222 + ], + "spans": [ + { + "bbox": [ + 106, + 210, + 188, + 222 + ], + "type": "text", + "content": "A Limitations" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 236, + 506, + 302 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 236, + 506, + 302 + ], + "spans": [ + { + "bbox": [ + 104, + 236, + 506, + 302 + ], + "type": "text", + "content": "Despite achieving notable improvements on the TVG task, our approach still has several limitations. First, Time-R1 suffers from slower training and inference speeds, primarily due to its large model size and reliance on autoregressive text generation. Second, to manage GPU memory consumption, we use a relatively low frame sampling rate, which may result in the loss of fine-grained motion information across frames. Finally, Time-R1 currently cannot handle ultra-long videos, limiting its applicability in scenarios such as full-length movie understanding." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 319, + 248, + 332 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 319, + 248, + 332 + ], + "spans": [ + { + "bbox": [ + 105, + 319, + 248, + 332 + ], + "type": "text", + "content": "B Implementation Details" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 344, + 504, + 407 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 344, + 504, + 407 + ], + "spans": [ + { + "bbox": [ + 104, + 344, + 504, + 407 + ], + "type": "text", + "content": "Details of Time-R1 framework. Inspired by DAPO [56], we adopt its token-level loss for training, rather than the sample-level loss used in GRPO. Apart from minor changes to the loss, all settings are identical to GRPO. Besides, we find that other techniques introduced in DAPO do not benefit the TVG task, thus aborting other techniques. We full-finetune the LLM parameters at every step, thus " + }, + { + "bbox": [ + 104, + 344, + 504, + 407 + ], + "type": "inline_equation", + "content": "\\frac{\\pi_{\\theta}(o_i)}{\\pi_{\\theta_{\\mathrm{old}}}(o_i)} = 1" + }, + { + "bbox": [ + 104, + 344, + 504, + 407 + ], + "type": "text", + "content": ". The sample number " + }, + { + "bbox": [ + 104, + 344, + 504, + 407 + ], + "type": "inline_equation", + "content": "G" + }, + { + "bbox": [ + 104, + 344, + 504, + 407 + ], + "type": "text", + "content": " is set to 8. The coefficient " + }, + { + "bbox": [ + 104, + 344, + 504, + 407 + ], + "type": "inline_equation", + "content": "\\beta" + }, + { + "bbox": [ + 104, + 344, + 504, + 407 + ], + "type": "text", + "content": " is set to 0.04." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 410, + 506, + 488 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 410, + 506, + 488 + ], + "spans": [ + { + "bbox": [ + 104, + 410, + 506, + 488 + ], + "type": "text", + "content": "Details of TimeRFT training. For RFT data filtering, we use a Gaussian distribution with a fixed variance of 0.2 while varying the mean to control sample selection. In our cold start phase, we construct 150 samples from our training data sources (e.g., YT-Temporal [54]) to fine-tune the LLM using LoRA [20], with a LoRA rank of 64 and a LoRA alpha of 128. All of our results are reported based on the final training epoch. For RL, we use a learning rate of 1e-6 with the AdamW optimizer with " + }, + { + "bbox": [ + 104, + 410, + 506, + 488 + ], + "type": "inline_equation", + "content": "\\beta_{1} = 0.9" + }, + { + "bbox": [ + 104, + 410, + 506, + 488 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 104, + 410, + 506, + 488 + ], + "type": "inline_equation", + "content": "\\beta_{2} = 0.999" + }, + { + "bbox": [ + 104, + 410, + 506, + 488 + ], + "type": "text", + "content": ", and a linear scheduler to decay the learning rate from 1e-6 to 0. We use a batch size of 8 with gradient accumulation set to 2. It requires 15 hours of training on 8 A100 GPUs." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 492, + 506, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 492, + 506, + 536 + ], + "spans": [ + { + "bbox": [ + 104, + 492, + 506, + 536 + ], + "type": "text", + "content": "Details of our evaluation prompts. As shown in Figure 11, for temporal video grounding, the prompts used for training and testing are designed to encourage the model to reason before responding, following a template-based answer format. 
For VideoQA, we have two versions of prompts: one with CoT and one without CoT." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 541, + 504, + 564 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 541, + 504, + 564 + ], + "spans": [ + { + "bbox": [ + 104, + 541, + 504, + 564 + ], + "type": "text", + "content": "Details of TVG baseline methods and implementations. We evaluate the baselines on TVGBench using their original best-performing setting, focusing primarily on video input and prompt design." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 568, + 504, + 722 + ], + "type": "list", + "angle": 0, + "index": 18, + "blocks": [ + { + "bbox": [ + 104, + 568, + 504, + 590 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 568, + 504, + 590 + ], + "spans": [ + { + "bbox": [ + 104, + 568, + 504, + 590 + ], + "type": "text", + "content": "- TimeChat [44] is built upon the InstructBLIP [8] architecture and introduces a video Q-former to encode video tokens. It operates at a resolution of 224 and samples 96 frames." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 591, + 504, + 623 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 591, + 504, + 623 + ], + "spans": [ + { + "bbox": [ + 104, + 591, + 504, + 623 + ], + "type": "text", + "content": "- TRACE [18] treats each combination of timestamp, saliency score, and caption as a discrete event and enables the LVLM to autoregressively generate event sequences. It operates at a higher resolution of 336 and samples 128 frames." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 104, + 624, + 504, + 655 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 624, + 504, + 655 + ], + "spans": [ + { + "bbox": [ + 104, + 624, + 504, + 655 + ], + "type": "text", + "content": "- TimeSuite [60] introduces a token shuffling strategy to compress long video token sequences and incorporates positional encoding to enhance visual understanding. It adopts a resolution of 224 and samples 128 frames." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 104, + 656, + 504, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 656, + 504, + 689 + ], + "spans": [ + { + "bbox": [ + 104, + 656, + 504, + 689 + ], + "type": "text", + "content": "- VideoChat-Flash [27] proposes a progressive visual token dropping mechanism within intermediate LLM layers to compress video inputs and extend the effective context length. It uses a resolution of 448 and samples video at 1 fps, with a maximum of 512 frames." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 104, + 689, + 504, + 722 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 689, + 504, + 722 + ], + "spans": [ + { + "bbox": [ + 104, + 689, + 504, + 722 + ], + "type": "text", + "content": "- Gemini-2.5-Pro [10]: Gemini-2.5-Pro is a state-of-the-art video understanding model capable of reasoning over videos exceeding one hour in length. It supports video question answering and temporal localization tasks." 
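The training settings listed above under "Details of Time-R1 framework" (G = 8 rollouts per query, an importance ratio fixed to 1 because the policy is refreshed every step, a token-level DAPO-style loss [56], and a coefficient β of 0.04) can be summarized in a short sketch. The PyTorch-style fragment below is only an illustration under those stated assumptions; the tIoU-based reward shaping, format checks, and the β-weighted term are omitted, and none of the names come from the released implementation.

    import torch

    def group_relative_advantages(rewards):
        # rewards: tensor of shape (G,), e.g., the tIoU of each rollout's predicted segment
        return (rewards - rewards.mean()) / (rewards.std() + 1e-6)

    def token_level_loss(logprobs, mask, rewards):
        # logprobs: (G, T) per-token log-probs of the generated tokens under the current policy
        # mask:     (G, T) 1 for response tokens, 0 for padding
        # rewards:  (G,)   one scalar reward per rollout
        # With the importance ratio equal to 1, the clipped surrogate has the same
        # gradient as advantage-weighted log-likelihood, aggregated over all response
        # tokens in the group (token level) rather than per sample.
        adv = group_relative_advantages(rewards).unsqueeze(1)   # (G, 1)
        weighted = adv * logprobs * mask                        # (G, T)
        return -weighted.sum() / mask.sum().clamp(min=1)

Averaging over every response token in the group, rather than averaging each sample first, is the token-level aggregation that the text above contrasts with the GRPO sample-level loss.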
+ } + ] + } + ], + "index": 17 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 129, + 91, + 480, + 167 + ], + "blocks": [ + { + "bbox": [ + 108, + 70, + 501, + 83 + ], + "lines": [ + { + "bbox": [ + 108, + 70, + 501, + 83 + ], + "spans": [ + { + "bbox": [ + 108, + 70, + 501, + 83 + ], + "type": "text", + "content": "Table 3: Comparison of different approaches on TVGBench for all types. We use mIoU as metric." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 129, + 91, + 480, + 167 + ], + "lines": [ + { + "bbox": [ + 129, + 91, + 480, + 167 + ], + "spans": [ + { + "bbox": [ + 129, + 91, + 480, + 167 + ], + "type": "table", + "html": "
MethodECESHACHAPHASHPOAOCOECOESOT
TimeChat [44]22.332.816.69.814.635.115.09.22.418.010.2
TimeSuite [60]27.339.614.212.824.939.614.613.96.732.614.3
TRACE [18]57.166.825.917.526.545.117.822.112.536.824.9
VideoChat-Flash [27]38.347.212.913.927.139.414.912.76.524.312.9
Gemini-2.5-Pro [10]46.745.321.127.630.939.923.031.114.135.917.8
Time-R1 (ours)49.365.328.324.339.356.226.321.89.032.721.8
", + "image_path": "7d5c59021322c58df8bf320c59bec90239a153488bbb8d77d708ca5cadba2593.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 189, + 506, + 257 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 189, + 506, + 257 + ], + "spans": [ + { + "bbox": [ + 104, + 189, + 506, + 257 + ], + "type": "text", + "content": "Details of our implemented SFT baselines. We implemented two versions of SFT fine-tuning: one is full-parameter fine-tuning of the LLM (SFT), and the other is LoRA-based fine-tuning of the LLM (SFT-LoRA). For SFT-LoRA, the LoRA rank is set to 64, and the LoRA alpha is set to 128. Both configurations use the following settings: a learning rate of 2e-5, the AdamW optimizer with " + }, + { + "bbox": [ + 104, + 189, + 506, + 257 + ], + "type": "inline_equation", + "content": "\\beta_{1} = 0.9" + }, + { + "bbox": [ + 104, + 189, + 506, + 257 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 104, + 189, + 506, + 257 + ], + "type": "inline_equation", + "content": "\\beta_{2} = 0.999" + }, + { + "bbox": [ + 104, + 189, + 506, + 257 + ], + "type": "text", + "content": ", a weight decay of 0, the batch size of 8, and accumulation steps of 2. We fine-tune for 5 epochs on our 2.5K data, and use a linear scheduler to gradually decay the learning rate to 0." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 271, + 233, + 285 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 271, + 233, + 285 + ], + "spans": [ + { + "bbox": [ + 105, + 271, + 233, + 285 + ], + "type": "text", + "content": "C Additional Analyses" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 103, + 297, + 504, + 373 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 103, + 297, + 504, + 373 + ], + "spans": [ + { + "bbox": [ + 103, + 297, + 504, + 373 + ], + "type": "text", + "content": "In-depth comparisons of different approaches on TVGBench by semantic type. Table 3 provides a detailed performance comparison of various methods on the TVGBench across different semantic categories. Specifically, the abbreviations represent: EC (Environment Change), ES (Environment State), HAC (Human Action - Complex), HAP (Human Action - Procedural), HAS (Human Action - Simple), HP (Human Pose), OA (Object Attribute), OC (Object Counting), OEC (Object Existence - Complex), OES (Object Existence - Simple), and OT (Object Transition). Detailed definition and construction process can be found in Figure 15." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 379, + 506, + 544 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 379, + 506, + 544 + ], + "spans": [ + { + "bbox": [ + 104, + 379, + 506, + 544 + ], + "type": "text", + "content": "Time-R1 demonstrates strong competitiveness across multiple semantic categories. First, particularly in the four tasks of HAC, HAS, HP, and OA, Time-R1 achieved the highest scores among all compared methods, showcasing its excellent ability in understanding the details of human actions and identifying object features. For example, Time-R1 achieves an mIoU of 56.2 on HP, which is 11.1 points higher than the second-best method, TRACE, with an mIoU of 45.1. On HAS, Time-R1 reaches 39.3, outperforming Gemini-2.5-Pro's 30.9 by 8.4 points. 
Second, in the three tasks of ES, EC, and OT, Time-R1 demonstrates strong performance comparable to the top model TRACE, with its performance being very close or immediately following. In the HAP task, Time-R1 also performs excellently, with its performance being in the same tier as Gemini-2.5-Pro. Lastly, all models still show a noticeable gap compared to Gemini in understanding complex instructions, such as in HAP, OC, and OEC. For example, in HAP, which involves procedural activity localization, Gemini achieves 27.6, while our model ranks second with a score of 24.3. In object counting, Gemini attains 31.1, substantially outperforming our model's 21.8. In summary, Time-R1 performs well on both non-human simple instructions and human-related instructions, but there is still room for improvement in complex instruction grounding and object-related grounding." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 548, + 507, + 615 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 548, + 507, + 615 + ], + "spans": [ + { + "bbox": [ + 104, + 548, + 507, + 615 + ], + "type": "text", + "content": "Comparison of speed and accuracy between inference library transformers and vLLM. We observe that the inference speed of the implementation in the transformers [51] library is very slow. To address this, we implemented an accelerated inference version using vLLM [23] for all related 7 downstream benchmarks. For example, on TVGBench, the vLLM-based implementation requires only 502 seconds to infer 800 samples using 8 GPUs, whereas the transformers library implementation takes 2520 seconds. This achieves an overall speedup of " + }, + { + "bbox": [ + 104, + 548, + 507, + 615 + ], + "type": "inline_equation", + "content": "5 \\times" + }, + { + "bbox": [ + 104, + 548, + 507, + 615 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 630, + 214, + 643 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 630, + 214, + 643 + ], + "spans": [ + { + "bbox": [ + 105, + 630, + 214, + 643 + ], + "type": "text", + "content": "D Ablation Studies" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 655, + 506, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 655, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 655, + 506, + 723 + ], + "type": "text", + "content": "Ablation of different RFT data filtering strategies. As shown in Table 4, different data filtering strategy in the initial round affects the model's performance. First, appropriate Gaussian filtering outperforms both uniform and random filtering methods. Among the Gaussian filtering settings, a standard deviation of 0.3 yields the best results, followed by 0.5 and then 0.7. These findings suggest that incorporating moderately challenging samples during RFT helps improve the model's generalization capability more effectively than using either overly easy or extremely difficult examples." 
+ } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 121, + 91, + 286, + 148 + ], + "blocks": [ + { + "bbox": [ + 114, + 70, + 293, + 82 + ], + "lines": [ + { + "bbox": [ + 114, + 70, + 293, + 82 + ], + "spans": [ + { + "bbox": [ + 114, + 70, + 293, + 82 + ], + "type": "text", + "content": "Table 4: Ablation of data filtering strategies." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 121, + 91, + 286, + 148 + ], + "lines": [ + { + "bbox": [ + 121, + 91, + 286, + 148 + ], + "spans": [ + { + "bbox": [ + 121, + 91, + 286, + 148 + ], + "type": "table", + "html": "
MethodR1@0.3R1@0.5R1@0.7mIoU
random39.426.516.427.4
gaussian (0.3)41.628.515.628.6
gaussian (0.5)40.628.216.028.3
gaussian (0.7)37.226.915.526.5
uniform40.428.515.928.3
", + "image_path": "4eb42206cc115a134940c0c8f85bc3e915eb90a9e76df46500c7ac57c7b4b3f3.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 308, + 91, + 486, + 148 + ], + "blocks": [ + { + "bbox": [ + 309, + 70, + 487, + 81 + ], + "lines": [ + { + "bbox": [ + 309, + 70, + 487, + 81 + ], + "spans": [ + { + "bbox": [ + 309, + 70, + 487, + 81 + ], + "type": "text", + "content": "Table 5: Ablation of KL and CoT in GRPO." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 308, + 91, + 486, + 148 + ], + "lines": [ + { + "bbox": [ + 308, + 91, + 486, + 148 + ], + "spans": [ + { + "bbox": [ + 308, + 91, + 486, + 148 + ], + "type": "table", + "html": "
KLCoTR1@0.3R1@0.5R1@0.7mIoU
XX40.429.114.928.1
X40.827.415.027.7
X42.929.515.029.1
41.628.515.628.6
", + "image_path": "4724931c4e723d0c856e0f37d2c791ca8206227a3c166a359bca6ead1211f9ba.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "type": "table", + "bbox": [ + 119, + 188, + 491, + 232 + ], + "blocks": [ + { + "bbox": [ + 104, + 156, + 504, + 179 + ], + "lines": [ + { + "bbox": [ + 104, + 156, + 504, + 179 + ], + "spans": [ + { + "bbox": [ + 104, + 156, + 504, + 179 + ], + "type": "text", + "content": "Table 6: Comparison of the token-level loss design used by DAPO [56] and the sample-level loss design used by GRPO [45]." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 119, + 188, + 491, + 232 + ], + "lines": [ + { + "bbox": [ + 119, + 188, + 491, + 232 + ], + "spans": [ + { + "bbox": [ + 119, + 188, + 491, + 232 + ], + "type": "table", + "html": "
LossCharades-STAActivityNetTVGBench
R1@0.3R1@0.5R1@0.7mIoUR1@0.3R1@0.5R1@0.7mIoUR1@0.3R1@0.5R1@0.7mIoU
GRPO76.759.834.457.055.937.120.337.840.828.016.528.4
DAPO77.460.034.157.256.237.420.438.041.628.515.628.6
", + "image_path": "e8c5e7da854f9c4f13a4726598b761fb62b9f195447c1d30fe0c3c9fb10d30fd.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 250, + 506, + 338 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 250, + 506, + 338 + ], + "spans": [ + { + "bbox": [ + 104, + 250, + 506, + 338 + ], + "type": "text", + "content": "Ablation of KL and CoT during GRPO training. As shown in Table 5, incorporating CoT reasoning during training leads to improved performance compared to the No-CoT setting, suggesting that CoT enhances the model's temporal video grounding capabilities. When KL divergence is omitted (No-KL), performance slightly decreases under the No-CoT setting but unexpectedly improves when CoT is present. However, we find that in the No-KL+CoT setting, the model often fails to produce a thinking process, directly jumping to answers. In contrast, using KL divergence helps maintain more logical reasoning that is easier to follow. To balance performance and interpretability, we adopt a training setup that includes both KL and CoT." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 342, + 506, + 399 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 342, + 506, + 399 + ], + "spans": [ + { + "bbox": [ + 104, + 342, + 506, + 399 + ], + "type": "text", + "content": "Comparison of tIoU and IoU during multi-epoch training. As shown in Figure 6, tIoU consistently outperforms standard IoU during both the early and late stages of training over the first 5 epochs. Notably, while tIoU steadily improves as training progresses, IoU shows a decline in performance by the fifth epoch. This highlights the advantage of using tIoU as a more stable and reliable reward for temporal video grounding." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 403, + 506, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 403, + 506, + 449 + ], + "spans": [ + { + "bbox": [ + 104, + 403, + 506, + 449 + ], + "type": "text", + "content": "Ablation of sample filtering in multi-epoch training. As shown in Figure 7, applying sample filtering (SF) to remove simpler training samples yields consistent performance improvements across epochs. This suggests that easy samples with high IoU may introduce noise or reduce the effectiveness of learning, and filtering them helps focus the model on more informative and challenging instances." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 452, + 506, + 529 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 452, + 506, + 529 + ], + "spans": [ + { + "bbox": [ + 104, + 452, + 506, + 529 + ], + "type": "text", + "content": "Ablation of DAPO & GRPO. The sample-level loss used by GRPO computes the loss by averaging over each individual sample. This approach leads to unequal loss contributions for tokens when dealing with CoTs of varying lengths. DAPO addresses this issue by employing a token-level loss. The underlying principle is that the token-level loss can effectively guide the model in the process of CoT generation, allowing it to learn useful patterns from CoTs of different lengths sampled during training. In Table 6, we compare these two loss designs. We empirically find that DAPO outperforms GRPO on the majority of metrics, thus, we adopt DAPO's loss design." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 533, + 505, + 579 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 533, + 505, + 579 + ], + "spans": [ + { + "bbox": [ + 104, + 533, + 505, + 579 + ], + "type": "text", + "content": "Different Model Size. Table 7 presents a performance comparison of different model sizes. These results indicate that larger models achieve better zero-shot performance and continue to outperform smaller models after fine-tuning. These findings support the notion that scaling up model capacity enhances generalization and leads to superior results on the TVG tasks." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 594, + 222, + 607 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 594, + 222, + 607 + ], + "spans": [ + { + "bbox": [ + 105, + 594, + 222, + 607 + ], + "type": "text", + "content": "E Qualitative Result" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 617, + 506, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 617, + 506, + 696 + ], + "spans": [ + { + "bbox": [ + 104, + 617, + 506, + 696 + ], + "type": "text", + "content": "Case study of temporal video grounding on Charades and ActivityNet. As shown in Figure 8, in the example above, given a relatively complex language instruction, Time-R1 demonstrates more accurate localization than all baselines, successfully capturing the initial event \"goes back to the pink bucket\" within the timestamp, while other SoTA models like Gemini-2.5-Pro fail. In the example below, the model accurately localizes the event, excluding \"a person is lying on the couch,\" and correctly distinguishes between sitting and lying, unlike other models, which either localize only a small segment (TimeSuite and VideoChat-Flash) or the entire segment (TRACE and Gemini-2.5-Pro)." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 700, + 506, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 700, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 700, + 506, + 723 + ], + "type": "text", + "content": "Case study of short video QA on VideoMME and TempCompass. As shown in Figures 9 and 10, Time-R1 demonstrates improved performance over the base model in tasks requiring positional" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 312, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 312, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 312, + 750 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 108, + 91, + 502, + 153 + ], + "blocks": [ + { + "bbox": [ + 186, + 70, + 424, + 82 + ], + "lines": [ + { + "bbox": [ + 186, + 70, + 424, + 82 + ], + "spans": [ + { + "bbox": [ + 186, + 70, + 424, + 82 + ], + "type": "text", + "content": "Table 7: Performance comparison of different model sizes." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 108, + 91, + 502, + 153 + ], + "lines": [ + { + "bbox": [ + 108, + 91, + 502, + 153 + ], + "spans": [ + { + "bbox": [ + 108, + 91, + 502, + 153 + ], + "type": "table", + "html": "
MethodCharades-STAActivityNetTVGBench
R1@0.3R1@0.5R1@0.7mIoUR1@0.3R1@0.5R1@0.7mIoUR1@0.3R1@0.5R1@0.7mIoU
Time-R1-3B74.653.126.051.240.021.08.723.233.521.010.521.7
Time-R1-3B*78.764.136.959.966.846.824.746.1----
Time-R1-7B78.160.835.558.158.139.021.440.541.829.416.429.2
Time-R1-7B*82.872.250.160.973.355.634.052.1----
", + "image_path": "063fc5fb30f8f6a713569dbca0b207199a4a5c97244821d721e1a617c522cdda.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 116, + 165, + 293, + 296 + ], + "blocks": [ + { + "bbox": [ + 116, + 165, + 293, + 296 + ], + "lines": [ + { + "bbox": [ + 116, + 165, + 293, + 296 + ], + "spans": [ + { + "bbox": [ + 116, + 165, + 293, + 296 + ], + "type": "image", + "image_path": "274d87af0a8919a0a742cf4bccd701b192201dc967697683fb1144c7bb9411a0.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 107, + 306, + 302, + 330 + ], + "lines": [ + { + "bbox": [ + 107, + 306, + 302, + 330 + ], + "spans": [ + { + "bbox": [ + 107, + 306, + 302, + 330 + ], + "type": "text", + "content": "Figure 6: Performance comparison of tIoU and IoU in multi-epoch training." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 316, + 165, + 494, + 296 + ], + "blocks": [ + { + "bbox": [ + 316, + 165, + 494, + 296 + ], + "lines": [ + { + "bbox": [ + 316, + 165, + 494, + 296 + ], + "spans": [ + { + "bbox": [ + 316, + 165, + 494, + 296 + ], + "type": "image", + "image_path": "17ec16600700266ab49664ef8b1d7c88648951c7538d40c898495f116d77d77e.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 308, + 306, + 504, + 330 + ], + "lines": [ + { + "bbox": [ + 308, + 306, + 504, + 330 + ], + "spans": [ + { + "bbox": [ + 308, + 306, + 504, + 330 + ], + "type": "text", + "content": "Figure 7: Ablation of sample filtering in multi-epoch training." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 350, + 506, + 438 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 350, + 506, + 438 + ], + "spans": [ + { + "bbox": [ + 104, + 350, + 506, + 438 + ], + "type": "text", + "content": "judgment, scene storyline reasoning, and visual reasoning. For example, in Figure 9, Time-R1 correctly identifies that a car in the video is missing its right-front wheel, a detail that the base model fails to recognize. This reflects that Time-R1 likely possesses stronger video localization capabilities, which in turn enhance its visual reasoning ability. In Figure 12, we output a CoT when answering the QA task, providing some interpretability. This example shows that Time-R1's reasoning process is more concise, whereas the base model often reasons correctly but arrives at the wrong answer. This suggests that Time-R1's reasoning may be more effective in guiding the final answer, possibly benefiting from the outcome-driven RL of GRPO." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 443, + 506, + 553 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 443, + 506, + 553 + ], + "spans": [ + { + "bbox": [ + 104, + 443, + 506, + 553 + ], + "type": "text", + "content": "Case study of long video QA on EgoSchema and VideoMME. Figure 13 presents a long egocentric video QA example focused on summarizing task steps. In the \"Hanging the Dress\" case, the base model fails to identify all key steps, while our Time-R1 model correctly selects the answer by generating a more accurate chain-of-thought (CoT). In Figure 14, the task involves identifying the primary tools and ingredients used in the video and understanding their respective contributions. 
An additional example in Figure 9 involves animated scene reasoning, where Time-R1 correctly infers, based on elements like the presence of a guitar, that the heroes defeated the enemies using the power of music, whereas the base model misinterprets the cause. Across these complex reasoning tasks involving long videos, Time-R1 consistently demonstrates superior performance compared to the base model." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 558, + 504, + 581 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 558, + 504, + 581 + ], + "spans": [ + { + "bbox": [ + 104, + 558, + 504, + 581 + ], + "type": "text", + "content": "Illustration of our prompt at training and inference time. Figure 11 presents the prompts used for the temporal video grounding and video QA tasks at both training and inference time." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 585, + 506, + 663 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 585, + 506, + 663 + ], + "spans": [ + { + "bbox": [ + 104, + 585, + 506, + 663 + ], + "type": "text", + "content": "Illustration of our prompt to annotate query semantics on TVGBench. Figure 15 presents the prompt used to annotate query semantics in the TVGBench dataset. The prompt is designed to guide the DeepSeek LLM in classifying each input query into one of 11 predefined semantic categories. To improve annotation quality, we refer to the construction of existing benchmarks and carefully select both positive and negative examples for each semantic type. These examples are chosen to include queries that are either easy or difficult for the model to answer, helping to refine the model's understanding and improve labeling accuracy." + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 128, + 90, + 241, + 133 + ], + "blocks": [ + { + "bbox": [ + 128, + 90, + 241, + 133 + ], + "lines": [ + { + "bbox": [ + 128, + 90, + 241, + 133 + ], + "spans": [ + { + "bbox": [ + 128, + 90, + 241, + 133 + ], + "type": "image", + "image_path": "4fedb324bb47101ec49cfc416e6525aa7ae990795778f2de0a5f618594a37ceb.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 241, + 90, + 361, + 133 + ], + "blocks": [ + { + "bbox": [ + 241, + 90, + 361, + 133 + ], + "lines": [ + { + "bbox": [ + 241, + 90, + 361, + 133 + ], + "spans": [ + { + "bbox": [ + 241, + 90, + 361, + 133 + ], + "type": "image", + "image_path": "5c83fb854d03c1084b8147fd2f0d207efd3035914b8e01683b1b2570e88116f5.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 361, + 90, + 466, + 133 + ], + "blocks": [ + { + "bbox": [ + 361, + 90, + 466, + 133 + ], + "lines": [ + { + "bbox": [ + 361, + 90, + 466, + 133 + ], + "spans": [ + { + "bbox": [ + 361, + 90, + 466, + 133 + ], + "type": "image", + "image_path": "1eeca61a89d9ee552f580381979f06a57a5db62e051689c4db45a8e866a87540.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 
137, + 501, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 137, + 501, + 156 + ], + "spans": [ + { + "bbox": [ + 107, + 137, + 501, + 156 + ], + "type": "text", + "content": "Question: Where is the event \"The man goes back to the pink bucket to wash another dark item clothing, he wrings it out in the pink bucket, throws it in the yellow bucket, then wrings his hands together a few times while staring at the camera\"?" + } + ] + } + ], + "index": 3 + }, + { + "type": "table", + "bbox": [ + 109, + 166, + 503, + 228 + ], + "blocks": [ + { + "bbox": [ + 451, + 157, + 503, + 164 + ], + "lines": [ + { + "bbox": [ + 451, + 157, + 503, + 164 + ], + "spans": [ + { + "bbox": [ + 451, + 157, + 503, + 164 + ], + "type": "text", + "content": "(video length: 74s)" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 109, + 166, + 503, + 228 + ], + "lines": [ + { + "bbox": [ + 109, + 166, + 503, + 228 + ], + "spans": [ + { + "bbox": [ + 109, + 166, + 503, + 228 + ], + "type": "table", + "html": "
Ground Truth33.1s→ 73.5s
TimeSuite20.0s→ 30.0s
Gemini-2.5-Pro57.0s
VideoChat-Flash0.0s→ 17.2s
Trace54.9s
Time-R1 (ours)35.0s→ 75.0s
", + "image_path": "e2cfcdbce816debcd62e90696e4a91f43b2b2807b46e8227fa64b5280fe4e669.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 109, + 230, + 494, + 257 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 109, + 230, + 494, + 257 + ], + "spans": [ + { + "bbox": [ + 109, + 230, + 494, + 257 + ], + "type": "text", + "content": "Time-R1 thinking (ours): The event \"The man goes back to the pink bucket to wash another dark item clothing, he wrings it out in the pink bucket, throws it in the yellow bucket, then wrings his hands together a few times while staring at the camera.\" occurs between 35.00 and 75.00 seconds in the video." + } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 134, + 265, + 299, + 307 + ], + "blocks": [ + { + "bbox": [ + 134, + 265, + 299, + 307 + ], + "lines": [ + { + "bbox": [ + 134, + 265, + 299, + 307 + ], + "spans": [ + { + "bbox": [ + 134, + 265, + 299, + 307 + ], + "type": "image", + "image_path": "9968b6c45f4adeda2cacdb485fed70f4a7bc0887931405f750f4da5ed4176b15.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 428, + 504, + 452 + ], + "lines": [ + { + "bbox": [ + 104, + 428, + 504, + 452 + ], + "spans": [ + { + "bbox": [ + 104, + 428, + 504, + 452 + ], + "type": "text", + "content": "Figure 8: Temporal video grounding cases from Charades and ActivityNet benchmark. Time-R1 outperforms other SoTA models." + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 299, + 266, + 481, + 307 + ], + "blocks": [ + { + "bbox": [ + 299, + 266, + 481, + 307 + ], + "lines": [ + { + "bbox": [ + 299, + 266, + 481, + 307 + ], + "spans": [ + { + "bbox": [ + 299, + 266, + 481, + 307 + ], + "type": "image", + "image_path": "985cd9a1289f0cb3f62a495e911ced0c0cd918a5cdd7fc8cbd83d6f3cdbfeae1.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "table", + "bbox": [ + 110, + 324, + 501, + 386 + ], + "blocks": [ + { + "bbox": [ + 108, + 312, + 309, + 321 + ], + "lines": [ + { + "bbox": [ + 108, + 312, + 309, + 321 + ], + "spans": [ + { + "bbox": [ + 108, + 312, + 309, + 321 + ], + "type": "text", + "content": "Question: Where is the event \"a person is seen sitting on a couch\"?" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 449, + 312, + 501, + 319 + ], + "lines": [ + { + "bbox": [ + 449, + 312, + 501, + 319 + ], + "spans": [ + { + "bbox": [ + 449, + 312, + 501, + 319 + ], + "type": "text", + "content": "(video length: 31s)" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 110, + 324, + 501, + 386 + ], + "lines": [ + { + "bbox": [ + 110, + 324, + 501, + 386 + ], + "spans": [ + { + "bbox": [ + 110, + 324, + 501, + 386 + ], + "type": "table", + "html": "
Ground Truth0.0s←12.8s
TimeSuite0.0s←5.0s
Gemini-2.5-Pro0.0s←30.0s
VideoChat-Flash0.0s←5.0s
Trace0.0s←31.6s
Time-R1 (ours)0.0s←10.0s
", + "image_path": "29395b9f4c424bce5807ab10a9ce77f0ecf37d2a5de35000268c70bbfc270982.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "table_body" + } + ], + "index": 11 + }, + { + "bbox": [ + 109, + 388, + 483, + 407 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 109, + 388, + 483, + 407 + ], + "spans": [ + { + "bbox": [ + 109, + 388, + 483, + 407 + ], + "type": "text", + "content": "Time-R1 thinking (ours): The event \"a person is seen sitting on a couch\" occurs at the beginning of the video. The person is seated on a couch, holding a phone, and appears to be engaged with it. This aligns with the initial frames of the video." + } + ] + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 115, + 494, + 219, + 555 + ], + "blocks": [ + { + "bbox": [ + 115, + 494, + 219, + 555 + ], + "lines": [ + { + "bbox": [ + 115, + 494, + 219, + 555 + ], + "spans": [ + { + "bbox": [ + 115, + 494, + 219, + 555 + ], + "type": "image", + "image_path": "973df124e00e64d66c24e68d6bbc28517340ef9054637a51a8b5356881a59b55.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 250, + 494, + 354, + 555 + ], + "blocks": [ + { + "bbox": [ + 250, + 494, + 354, + 555 + ], + "lines": [ + { + "bbox": [ + 250, + 494, + 354, + 555 + ], + "spans": [ + { + "bbox": [ + 250, + 494, + 354, + 555 + ], + "type": "image", + "image_path": "faec5a13c0067d9b642efba9369da44a4764c136a36678a932e8dec98cf64bc5.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 386, + 495, + 490, + 555 + ], + "blocks": [ + { + "bbox": [ + 386, + 495, + 490, + 555 + ], + "lines": [ + { + "bbox": [ + 386, + 495, + 490, + 555 + ], + "spans": [ + { + "bbox": [ + 386, + 495, + 490, + 555 + ], + "type": "image", + "image_path": "7263b40507e658da86f0c681610219abe6e1979ce94d39e0bea81772100821c5.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + } + ], + "index": 16 + }, + { + "bbox": [ + 127, + 555, + 485, + 565 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 555, + 485, + 565 + ], + "spans": [ + { + "bbox": [ + 127, + 555, + 485, + 565 + ], + "type": "text", + "content": "(video length: 38m) Question: What do heroes of legend use to defeat the enemy based on the video?" 
+ } + ] + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 128, + 565, + 138, + 572 + ], + "blocks": [ + { + "bbox": [ + 128, + 565, + 138, + 572 + ], + "lines": [ + { + "bbox": [ + 128, + 565, + 138, + 572 + ], + "spans": [ + { + "bbox": [ + 128, + 565, + 138, + 572 + ], + "type": "image", + "image_path": "218347b27395c7ef9db786d9def3ca073d718495208f3fd53493ff1d52eb2704.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + } + ], + "index": 18 + }, + { + "bbox": [ + 138, + 565, + 183, + 572 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 565, + 183, + 572 + ], + "spans": [ + { + "bbox": [ + 138, + 565, + 183, + 572 + ], + "type": "text", + "content": "Their wisdom" + } + ] + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 129, + 572, + 138, + 578 + ], + "blocks": [ + { + "bbox": [ + 129, + 572, + 138, + 578 + ], + "lines": [ + { + "bbox": [ + 129, + 572, + 138, + 578 + ], + "spans": [ + { + "bbox": [ + 129, + 572, + 138, + 578 + ], + "type": "image", + "image_path": "f84c05262d317e51bd08a7be8e23d18718f79e49c0dc3cc7341e7a68b8d61ef2.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + } + ], + "index": 20 + }, + { + "bbox": [ + 138, + 573, + 194, + 582 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 573, + 194, + 582 + ], + "spans": [ + { + "bbox": [ + 138, + 573, + 194, + 582 + ], + "type": "text", + "content": "Their superpower" + } + ] + } + ], + "index": 21 + }, + { + "type": "image", + "bbox": [ + 115, + 583, + 219, + 643 + ], + "blocks": [ + { + "bbox": [ + 115, + 583, + 219, + 643 + ], + "lines": [ + { + "bbox": [ + 115, + 583, + 219, + 643 + ], + "spans": [ + { + "bbox": [ + 115, + 583, + 219, + 643 + ], + "type": "image", + "image_path": "b3b0e585c0db345043984495ff909736d2924ea3bd232b0d7c13c7bbb8198251.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 678, + 506, + 700 + ], + "lines": [ + { + "bbox": [ + 104, + 678, + 506, + 700 + ], + "spans": [ + { + "bbox": [ + 104, + 678, + 506, + 700 + ], + "type": "text", + "content": "Figure 9: Case study on VideoMME (w/o CoT), demonstrating that Time-R1 achieves better performance than the base model." 
+ } + ] + } + ], + "index": 36, + "angle": 0, + "type": "image_caption" + } + ], + "index": 22 + }, + { + "bbox": [ + 285, + 565, + 331, + 572 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 285, + 565, + 331, + 572 + ], + "spans": [ + { + "bbox": [ + 285, + 565, + 331, + 572 + ], + "type": "text", + "content": "(B) A big robot" + } + ] + } + ], + "index": 23 + }, + { + "type": "image", + "bbox": [ + 285, + 572, + 294, + 578 + ], + "blocks": [ + { + "bbox": [ + 285, + 572, + 294, + 578 + ], + "lines": [ + { + "bbox": [ + 285, + 572, + 294, + 578 + ], + "spans": [ + { + "bbox": [ + 285, + 572, + 294, + 578 + ], + "type": "image", + "image_path": "09928d2c575f65fba1f3260fc7ce829e74cec6bcf7107f4b746c46c51ba5284e.jpg" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_body" + } + ], + "index": 24 + }, + { + "bbox": [ + 294, + 573, + 343, + 582 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 294, + 573, + 343, + 582 + ], + "spans": [ + { + "bbox": [ + 294, + 573, + 343, + 582 + ], + "type": "text", + "content": "4 Power of music" + } + ] + } + ], + "index": 25 + }, + { + "type": "image", + "bbox": [ + 250, + 583, + 354, + 643 + ], + "blocks": [ + { + "bbox": [ + 250, + 583, + 354, + 643 + ], + "lines": [ + { + "bbox": [ + 250, + 583, + 354, + 643 + ], + "spans": [ + { + "bbox": [ + 250, + 583, + 354, + 643 + ], + "type": "image", + "image_path": "12d8fce88e4faed9a6d0609ccef584422d1473a3d74f01c1339a1abef3ffeb4d.jpg" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_body" + } + ], + "index": 26 + }, + { + "type": "image", + "bbox": [ + 386, + 583, + 490, + 643 + ], + "blocks": [ + { + "bbox": [ + 386, + 583, + 490, + 643 + ], + "lines": [ + { + "bbox": [ + 386, + 583, + 490, + 643 + ], + "spans": [ + { + "bbox": [ + 386, + 583, + 490, + 643 + ], + "type": "image", + "image_path": "d309857026d6d143f04db4e4465e618ec7bb434a45191c8731c880da1c0f21fe.jpg" + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_body" + } + ], + "index": 27 + }, + { + "bbox": [ + 126, + 643, + 334, + 652 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 126, + 643, + 334, + 652 + ], + "spans": [ + { + "bbox": [ + 126, + 643, + 334, + 652 + ], + "type": "text", + "content": "(video length: 17s) Question: What's wrong with this car?" + } + ] + } + ], + "index": 28 + }, + { + "type": "image", + "bbox": [ + 126, + 652, + 137, + 659 + ], + "blocks": [ + { + "bbox": [ + 126, + 652, + 137, + 659 + ], + "lines": [ + { + "bbox": [ + 126, + 652, + 137, + 659 + ], + "spans": [ + { + "bbox": [ + 126, + 652, + 137, + 659 + ], + "type": "image", + "image_path": "95b3560e9f81ee0ecbf781caf46006fe72d005eca8cfe18df5624de0505a0dc9.jpg" + } + ] + } + ], + "index": 29, + "angle": 0, + "type": "image_body" + } + ], + "index": 29 + }, + { + "bbox": [ + 138, + 652, + 233, + 660 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 652, + 233, + 660 + ], + "spans": [ + { + "bbox": [ + 138, + 652, + 233, + 660 + ], + "type": "text", + "content": "It doesn't have a left rear wheel." 
+ } + ] + } + ], + "index": 30 + }, + { + "type": "image", + "bbox": [ + 126, + 659, + 137, + 666 + ], + "blocks": [ + { + "bbox": [ + 126, + 659, + 137, + 666 + ], + "lines": [ + { + "bbox": [ + 126, + 659, + 137, + 666 + ], + "spans": [ + { + "bbox": [ + 126, + 659, + 137, + 666 + ], + "type": "image", + "image_path": "4a4078e7b7aee98137f556a4e4cde6e7b54c3cfb6912441b8a5d32a36feed0e8.jpg" + } + ] + } + ], + "index": 31, + "angle": 0, + "type": "image_body" + } + ], + "index": 31 + }, + { + "bbox": [ + 138, + 660, + 207, + 669 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 660, + 207, + 669 + ], + "spans": [ + { + "bbox": [ + 138, + 660, + 207, + 669 + ], + "type": "text", + "content": "Its headlamp is broken." + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 304, + 652, + 417, + 669 + ], + "type": "list", + "angle": 0, + "index": 35, + "blocks": [ + { + "bbox": [ + 304, + 652, + 417, + 661 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 652, + 417, + 661 + ], + "spans": [ + { + "bbox": [ + 304, + 652, + 417, + 661 + ], + "type": "text", + "content": "(B) It doesn't have a right front wheel." + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 304, + 661, + 386, + 669 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 661, + 386, + 669 + ], + "spans": [ + { + "bbox": [ + 304, + 661, + 386, + 669 + ], + "type": "text", + "content": "(D) Its right door is broken." + } + ] + } + ], + "index": 34 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "text", + "content": "18" + } + ] + } + ], + "index": 37 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 121, + 105, + 227, + 165 + ], + "blocks": [ + { + "bbox": [ + 121, + 105, + 227, + 165 + ], + "lines": [ + { + "bbox": [ + 121, + 105, + 227, + 165 + ], + "spans": [ + { + "bbox": [ + 121, + 105, + 227, + 165 + ], + "type": "image", + "image_path": "19eed5f5babcebf1b58cafcfcdc600fe94c6be4b8245ccf23ee8d61f4a6deb8e.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 313, + 504, + 336 + ], + "lines": [ + { + "bbox": [ + 104, + 313, + 504, + 336 + ], + "spans": [ + { + "bbox": [ + 104, + 313, + 504, + 336 + ], + "type": "text", + "content": "Figure 10: Case study on TempCompass (w/o CoT), demonstrating that Time-R1 achieves better performance than the base model." 
+ } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 248, + 105, + 353, + 165 + ], + "blocks": [ + { + "bbox": [ + 248, + 105, + 353, + 165 + ], + "lines": [ + { + "bbox": [ + 248, + 105, + 353, + 165 + ], + "spans": [ + { + "bbox": [ + 248, + 105, + 353, + 165 + ], + "type": "image", + "image_path": "3909e72e1f638024ea6b20d4b16e72de1e32666727cf506471db760bb6c9fc8b.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 376, + 105, + 481, + 166 + ], + "blocks": [ + { + "bbox": [ + 376, + 105, + 481, + 166 + ], + "lines": [ + { + "bbox": [ + 376, + 105, + 481, + 166 + ], + "spans": [ + { + "bbox": [ + 376, + 105, + 481, + 166 + ], + "type": "image", + "image_path": "1cbc539d50fd51452839d0189ad47f5bacbad57e73f8ffc6a087f0e5bbfdf5ba.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 114, + 166, + 422, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 166, + 422, + 175 + ], + "spans": [ + { + "bbox": [ + 114, + 166, + 422, + 175 + ], + "type": "text", + "content": "(video length: 10s)Question: Which sentence better captures the essence of the video?" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 114, + 176, + 395, + 198 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 114, + 176, + 355, + 188 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 176, + 355, + 188 + ], + "spans": [ + { + "bbox": [ + 114, + 176, + 355, + 188 + ], + "type": "text", + "content": "(A) First, two hand are holding the clay pot and then three hands are holding it." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 114, + 188, + 395, + 198 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 188, + 395, + 198 + ], + "spans": [ + { + "bbox": [ + 114, + 188, + 395, + 198 + ], + "type": "text", + "content": "In the video, three hands are holding the clay pot, then two hands are holding the clay pot." + } + ] + } + ], + "index": 5 + } + ], + "sub_type": "text" + }, + { + "type": "image", + "bbox": [ + 121, + 199, + 225, + 258 + ], + "blocks": [ + { + "bbox": [ + 121, + 199, + 225, + 258 + ], + "lines": [ + { + "bbox": [ + 121, + 199, + 225, + 258 + ], + "spans": [ + { + "bbox": [ + 121, + 199, + 225, + 258 + ], + "type": "image", + "image_path": "d75c15a560b56adadfafa9355f8b6b0aaad45eebf96fa76dc432641d2e2bf666.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 164, + 672, + 444, + 684 + ], + "lines": [ + { + "bbox": [ + 164, + 672, + 444, + 684 + ], + "spans": [ + { + "bbox": [ + 164, + 672, + 444, + 684 + ], + "type": "text", + "content": "Figure 11: Illustration of prompts at both training and inference time." 
+ } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 248, + 199, + 352, + 258 + ], + "blocks": [ + { + "bbox": [ + 248, + 199, + 352, + 258 + ], + "lines": [ + { + "bbox": [ + 248, + 199, + 352, + 258 + ], + "spans": [ + { + "bbox": [ + 248, + 199, + 352, + 258 + ], + "type": "image", + "image_path": "f8ca333393071a5a60e1bc1552cf64f7116c991ea0edeeef783b03299fc0fe47.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 376, + 199, + 481, + 258 + ], + "blocks": [ + { + "bbox": [ + 376, + 199, + 481, + 258 + ], + "lines": [ + { + "bbox": [ + 376, + 199, + 481, + 258 + ], + "spans": [ + { + "bbox": [ + 376, + 199, + 481, + 258 + ], + "type": "image", + "image_path": "980926ad5d91bec503d9caba1d831d0c388eaffc5efd5b99e3790c6840b603f2.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 114, + 258, + 413, + 269 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 258, + 413, + 269 + ], + "spans": [ + { + "bbox": [ + 114, + 258, + 413, + 269 + ], + "type": "text", + "content": "(video length: 14s)Question: What is the sequence of actions performed by the girl?" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 114, + 270, + 296, + 302 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 114, + 270, + 296, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 270, + 296, + 281 + ], + "spans": [ + { + "bbox": [ + 114, + 270, + 296, + 281 + ], + "type": "text", + "content": "(A) Putting on clothes, jumping into water, taking off clothes" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 114, + 281, + 294, + 290 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 281, + 294, + 290 + ], + "spans": [ + { + "bbox": [ + 114, + 281, + 294, + 290 + ], + "type": "text", + "content": "(B) Jumping into water, putting on clothes, taking off clothes" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 114, + 291, + 296, + 302 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 291, + 296, + 302 + ], + "spans": [ + { + "bbox": [ + 114, + 291, + 296, + 302 + ], + "type": "text", + "content": "(C) Taking off clothes, putting on clothes, jumping into water" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 246, + 418, + 364, + 429 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 418, + 364, + 429 + ], + "spans": [ + { + "bbox": [ + 246, + 418, + 364, + 429 + ], + "type": "text", + "content": "Temporal Video Grounding" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 107, + 434, + 167, + 443 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 434, + 167, + 443 + ], + "spans": [ + { + "bbox": [ + 107, + 434, + 167, + 443 + ], + "type": "text", + "content": "System Prompt" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 107, + 444, + 218, + 453 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 444, + 218, + 453 + ], + "spans": [ + { + "bbox": [ + 107, + 444, + 218, + 453 + ], + "type": "text", + "content": "You are a video analysis expert." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 108, + 453, + 161, + 463 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 453, + 161, + 463 + ], + "spans": [ + { + "bbox": [ + 108, + 453, + 161, + 463 + ], + "type": "text", + "content": "Input Prompt" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 107, + 463, + 499, + 511 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 463, + 499, + 511 + ], + "spans": [ + { + "bbox": [ + 107, + 463, + 499, + 511 + ], + "type": "text", + "content": "To accurately pinpoint the event \"[EVENT]\" in the video, determine the precise time period of the event. Output your thought process within the tags, including analysis with either specific time ranges (xx.xx to xx.xx) in < timestep> tags. Then, provide the start and end times (in seconds, precise to two decimal places) in the format \"start time to end time\" within the tags. For example: \"12.54 to 17.83\".\"\"\"" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 284, + 518, + 326, + 529 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 284, + 518, + 326, + 529 + ], + "spans": [ + { + "bbox": [ + 284, + 518, + 326, + 529 + ], + "type": "text", + "content": "Video QA" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 107, + 531, + 167, + 540 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 531, + 167, + 540 + ], + "spans": [ + { + "bbox": [ + 107, + 531, + 167, + 540 + ], + "type": "text", + "content": "System Prompt" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 107, + 541, + 218, + 550 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 541, + 218, + 550 + ], + "spans": [ + { + "bbox": [ + 107, + 541, + 218, + 550 + ], + "type": "text", + "content": "You are a video analysis expert." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 107, + 551, + 201, + 560 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 551, + 201, + 560 + ], + "spans": [ + { + "bbox": [ + 107, + 551, + 201, + 560 + ], + "type": "text", + "content": "Input Prompt (w/o CoT)" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 107, + 561, + 488, + 600 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 561, + 488, + 600 + ], + "spans": [ + { + "bbox": [ + 107, + 561, + 488, + 600 + ], + "type": "text", + "content": "Carefully watch the video and pay attention to the cause and sequence of events, the detail and movement of objects and the action and pose of persons. Based on your observations, select the best option that accurately addresses the question. Provide the letter of the best option wrapped in parentheses within the tags. Question: {} {}\"\"" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 107, + 600, + 196, + 609 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 600, + 196, + 609 + ], + "spans": [ + { + "bbox": [ + 107, + 600, + 196, + 609 + ], + "type": "text", + "content": "Input Prompt (w/ CoT)" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 107, + 609, + 499, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 609, + 499, + 658 + ], + "spans": [ + { + "bbox": [ + 107, + 609, + 499, + 658 + ], + "type": "text", + "content": "Carefully watch the video and pay attention to the cause and sequence of events, the detail and movement of objects and the action and pose of persons. 
Based on your observations, select the best option that accurately addresses the question. Output your thought process within the tags, including analysis with either specific timestamps (xx.xx) or time ranges (xx.xx to xx.xx) in < timestep> tags. Then, provide the letter of the best option wrapped in parentheses within the tags. Question: {} {}" + } + ] + } + ], + "index": 27 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "19" + } + ] + } + ], + "index": 29 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 107, + 111, + 237, + 186 + ], + "blocks": [ + { + "bbox": [ + 107, + 111, + 237, + 186 + ], + "lines": [ + { + "bbox": [ + 107, + 111, + 237, + 186 + ], + "spans": [ + { + "bbox": [ + 107, + 111, + 237, + 186 + ], + "type": "image", + "image_path": "8bbda051c7ce9bd5fe9b6cb397b3b363d50c32c6adc1866bcc17300fdf32b49d.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 238, + 111, + 371, + 186 + ], + "blocks": [ + { + "bbox": [ + 238, + 111, + 371, + 186 + ], + "lines": [ + { + "bbox": [ + 238, + 111, + 371, + 186 + ], + "spans": [ + { + "bbox": [ + 238, + 111, + 371, + 186 + ], + "type": "image", + "image_path": "8a70b3af9b1cef5820dbbe2185acb3e04c5212fbd35dc7b48b92d939b2140b3e.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 373, + 111, + 504, + 186 + ], + "blocks": [ + { + "bbox": [ + 373, + 111, + 504, + 186 + ], + "lines": [ + { + "bbox": [ + 373, + 111, + 504, + 186 + ], + "spans": [ + { + "bbox": [ + 373, + 111, + 504, + 186 + ], + "type": "image", + "image_path": "b02c0aecb373fffd3ad6fe4410fe80d7d12502b93d13f4db9fc258d37fa09dd4.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 190, + 436, + 202 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 190, + 436, + 202 + ], + "spans": [ + { + "bbox": [ + 107, + 190, + 436, + 202 + ], + "type": "text", + "content": "(video length: 8s) Question: Is the pineapple being pushed forward by a person?" 
+ } + ] + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 108, + 204, + 138, + 220 + ], + "blocks": [ + { + "bbox": [ + 108, + 204, + 138, + 220 + ], + "lines": [ + { + "bbox": [ + 108, + 204, + 138, + 220 + ], + "spans": [ + { + "bbox": [ + 108, + 204, + 138, + 220 + ], + "type": "image", + "image_path": "fab3078a8b7c583cae33c013f97fbc814dca78d0c61a8b95825374e61adbf8ef.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 220, + 205, + 253, + 220 + ], + "blocks": [ + { + "bbox": [ + 220, + 205, + 253, + 220 + ], + "lines": [ + { + "bbox": [ + 220, + 205, + 253, + 220 + ], + "spans": [ + { + "bbox": [ + 220, + 205, + 253, + 220 + ], + "type": "image", + "image_path": "94f7d103b06790159b96771fa16bcda15048fc523193a7bdd6ea2c99b0d987f2.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 234, + 201, + 245 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 234, + 201, + 245 + ], + "spans": [ + { + "bbox": [ + 107, + 234, + 201, + 245 + ], + "type": "text", + "content": "Base model thinking:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 247, + 423, + 258 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 247, + 423, + 258 + ], + "spans": [ + { + "bbox": [ + 107, + 247, + 423, + 258 + ], + "type": "text", + "content": "To determine if the pineapple is being pushed forward by a person, let's analyze the video:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 260, + 384, + 295 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 108, + 260, + 285, + 270 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 260, + 285, + 270 + ], + "spans": [ + { + "bbox": [ + 108, + 260, + 285, + 270 + ], + "type": "text", + "content": "1. The video shows a pineapple on a pink surface." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 107, + 272, + 336, + 283 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 272, + 336, + 283 + ], + "spans": [ + { + "bbox": [ + 107, + 272, + 336, + 283 + ], + "type": "text", + "content": "2. A hand appears in the frame and interacts with the pineapple." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 107, + 285, + 384, + 295 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 285, + 384, + 295 + ], + "spans": [ + { + "bbox": [ + 107, + 285, + 384, + 295 + ], + "type": "text", + "content": "3. The hand lifts the pineapple, indicating that it is being moved by the person." + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 107, + 297, + 380, + 307 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 297, + 380, + 307 + ], + "spans": [ + { + "bbox": [ + 107, + 297, + 380, + 307 + ], + "type": "text", + "content": "Given this analysis, the pineapple is indeed being pushed forward by a person." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 107, + 312, + 219, + 323 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 312, + 219, + 323 + ], + "spans": [ + { + "bbox": [ + 107, + 312, + 219, + 323 + ], + "type": "text", + "content": "Time-R1 thinking (ours):" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 107, + 325, + 500, + 348 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 325, + 500, + 348 + ], + "spans": [ + { + "bbox": [ + 107, + 325, + 500, + 348 + ], + "type": "text", + "content": "The video shows a hand reaching into the frame and grasping the pineapple. The hand then lifts the pineapple, indicating that the pineapple is not being pushed forward by a person but rather being moved by the hand." + } + ] + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 107, + 355, + 239, + 430 + ], + "blocks": [ + { + "bbox": [ + 107, + 355, + 239, + 430 + ], + "lines": [ + { + "bbox": [ + 107, + 355, + 239, + 430 + ], + "spans": [ + { + "bbox": [ + 107, + 355, + 239, + 430 + ], + "type": "image", + "image_path": "f869e0da1398553a7a9496e63122abdaa9511f576a3870693cbfead808c33cd5.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 240, + 355, + 372, + 430 + ], + "blocks": [ + { + "bbox": [ + 240, + 355, + 372, + 430 + ], + "lines": [ + { + "bbox": [ + 240, + 355, + 372, + 430 + ], + "spans": [ + { + "bbox": [ + 240, + 355, + 372, + 430 + ], + "type": "image", + "image_path": "aa73db1e7f126f232846a4a713ba79a6e8528ec63d50a068a539b99f93b8343e.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 373, + 356, + 504, + 430 + ], + "blocks": [ + { + "bbox": [ + 373, + 356, + 504, + 430 + ], + "lines": [ + { + "bbox": [ + 373, + 356, + 504, + 430 + ], + "spans": [ + { + "bbox": [ + 373, + 356, + 504, + 430 + ], + "type": "image", + "image_path": "a342e247ff90b415a7531500e076e1acdf562b8cb12a3d127ca2b196c21df3c5.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + } + ], + "index": 17 + }, + { + "bbox": [ + 107, + 437, + 443, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 437, + 443, + 449 + ], + "spans": [ + { + "bbox": [ + 107, + 437, + 443, + 449 + ], + "type": "text", + "content": "(video length: 9s) Question: In which sequence does the man perform the actions?" 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 108, + 452, + 313, + 503 + ], + "type": "list", + "angle": 0, + "index": 23, + "blocks": [ + { + "bbox": [ + 108, + 452, + 276, + 462 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 452, + 276, + 462 + ], + "spans": [ + { + "bbox": [ + 108, + 452, + 276, + 462 + ], + "type": "text", + "content": "(A) Gliding on the skateboard and then jumping" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 108, + 464, + 253, + 474 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 464, + 253, + 474 + ], + "spans": [ + { + "bbox": [ + 108, + 464, + 253, + 474 + ], + "type": "text", + "content": "(B) Jumping and then tying his shoelaces" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 108, + 475, + 313, + 487 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 475, + 313, + 487 + ], + "spans": [ + { + "bbox": [ + 108, + 475, + 313, + 487 + ], + "type": "text", + "content": "(C) Tying his shoelaces and then gliding on the skateboard" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 108, + 488, + 313, + 503 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 488, + 313, + 503 + ], + "spans": [ + { + "bbox": [ + 108, + 488, + 313, + 503 + ], + "type": "text", + "content": "(D) Gliding on the skateboard and then tying his shoelaces" + } + ] + } + ], + "index": 22 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 107, + 517, + 200, + 529 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 517, + 200, + 529 + ], + "spans": [ + { + "bbox": [ + 107, + 517, + 200, + 529 + ], + "type": "text", + "content": "Base model thinking:" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 107, + 531, + 361, + 541 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 531, + 361, + 541 + ], + "spans": [ + { + "bbox": [ + 107, + 531, + 361, + 541 + ], + "type": "text", + "content": "Let's analyze the sequence of actions performed by the man in the video:" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 107, + 544, + 306, + 578 + ], + "type": "list", + "angle": 0, + "index": 29, + "blocks": [ + { + "bbox": [ + 108, + 544, + 296, + 554 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 544, + 296, + 554 + ], + "spans": [ + { + "bbox": [ + 108, + 544, + 296, + 554 + ], + "type": "text", + "content": "1. The man is initially seen standing on a skateboard." + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 108, + 555, + 251, + 566 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 555, + 251, + 566 + ], + "spans": [ + { + "bbox": [ + 108, + 555, + 251, + 566 + ], + "type": "text", + "content": "2. He then proceeds to tie his shoelaces." + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 107, + 568, + 306, + 578 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 568, + 306, + 578 + ], + "spans": [ + { + "bbox": [ + 107, + 568, + 306, + 578 + ], + "type": "text", + "content": "3. After tying his shoelaces, he glides on the skateboard." 
+ } + ] + } + ], + "index": 28 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 107, + 580, + 460, + 590 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 580, + 460, + 590 + ], + "spans": [ + { + "bbox": [ + 107, + 580, + 460, + 590 + ], + "type": "text", + "content": "Given this sequence, the correct answer is: (D) gliding on the skateboard and then tying his shoelaces" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 107, + 595, + 219, + 606 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 595, + 219, + 606 + ], + "spans": [ + { + "bbox": [ + 107, + 595, + 219, + 606 + ], + "type": "text", + "content": "Time-R1 thinking (ours):" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 106, + 609, + 500, + 643 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 609, + 500, + 643 + ], + "spans": [ + { + "bbox": [ + 106, + 609, + 500, + 643 + ], + "type": "text", + "content": "The sequence of actions performed by the man in the video is as follows: first, he ties his shoelaces, and then he glides on the skateboard. This can be observed from the video where the man is seen tying his shoelaces before he starts gliding on the skateboard." + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 104, + 656, + 505, + 680 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 656, + 505, + 680 + ], + "spans": [ + { + "bbox": [ + 104, + 656, + 505, + 680 + ], + "type": "text", + "content": "Figure 12: Case study on TempCompass (w/ CoT), demonstrating that Time-R1 achieves better performance than the base model." + } + ] + } + ], + "index": 33 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "text", + "content": "20" + } + ] + } + ], + "index": 34 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 107, + 122, + 231, + 217 + ], + "blocks": [ + { + "bbox": [ + 107, + 122, + 231, + 217 + ], + "lines": [ + { + "bbox": [ + 107, + 122, + 231, + 217 + ], + "spans": [ + { + "bbox": [ + 107, + 122, + 231, + 217 + ], + "type": "image", + "image_path": "81279d7eae18fb1bce4cbee47004116069ae0b3e42a55e520c6dc6cf7e80f94e.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 243, + 123, + 367, + 216 + ], + "blocks": [ + { + "bbox": [ + 243, + 123, + 367, + 216 + ], + "lines": [ + { + "bbox": [ + 243, + 123, + 367, + 216 + ], + "spans": [ + { + "bbox": [ + 243, + 123, + 367, + 216 + ], + "type": "image", + "image_path": "4c0320e18d7afaecdd60f0655f830871000e4ec263ba8222ed59654b57aea321.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 380, + 123, + 504, + 216 + ], + "blocks": [ + { + "bbox": [ + 380, + 123, + 504, + 216 + ], + "lines": [ + { + "bbox": [ + 380, + 123, + 504, + 216 + ], + "spans": [ + { + "bbox": [ + 380, + 123, + 504, + 216 + ], + "type": "image", + "image_path": "53c6db539aaaa3e755ac6983de5dc5d897ec26c76ca462857240666e21e22f9b.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 108, + 227, + 503, + 270 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 227, + 503, + 270 + ], 
+ "spans": [ + { + "bbox": [ + 108, + 227, + 503, + 270 + ], + "type": "text", + "content": "(video length: 3m) Question: Summarize the steps taken by c to complete the tasks in this video. don't list every action narrated, but rather focus on major components and how they're executed." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 110, + 276, + 448, + 344 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 110, + 276, + 399, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 276, + 399, + 289 + ], + "spans": [ + { + "bbox": [ + 110, + 276, + 399, + 289 + ], + "type": "text", + "content": "(A) C folds the dress, places it on the ironing board, and then hangs it up." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 110, + 290, + 386, + 301 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 290, + 386, + 301 + ], + "spans": [ + { + "bbox": [ + 110, + 290, + 386, + 301 + ], + "type": "text", + "content": "(B) C folds the dress, places it on the ironing board, and then packs it." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 110, + 304, + 392, + 316 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 304, + 392, + 316 + ], + "spans": [ + { + "bbox": [ + 110, + 304, + 392, + 316 + ], + "type": "text", + "content": "(C) C folds the dress, places it on the ironing board, and then washes it." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 110, + 317, + 448, + 330 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 317, + 448, + 330 + ], + "spans": [ + { + "bbox": [ + 110, + 317, + 448, + 330 + ], + "type": "text", + "content": "(D) C folds the dress, places it on the ironing board, and then irons it and hangs it up." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 110, + 331, + 438, + 344 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 331, + 438, + 344 + ], + "spans": [ + { + "bbox": [ + 110, + 331, + 438, + 344 + ], + "type": "text", + "content": "(E) C folds the dress, places it on the ironing board, irons it, and then folds it again." + } + ] + } + ], + "index": 8 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 109, + 363, + 214, + 375 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 109, + 363, + 214, + 375 + ], + "spans": [ + { + "bbox": [ + 109, + 363, + 214, + 375 + ], + "type": "text", + "content": "Base model thinking:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 109, + 379, + 432, + 390 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 109, + 379, + 432, + 390 + ], + "spans": [ + { + "bbox": [ + 109, + 379, + 432, + 390 + ], + "type": "text", + "content": "Let's analyze the video step by step to determine the correct sequence of actions:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 109, + 392, + 501, + 445 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 110, + 392, + 501, + 418 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 392, + 501, + 418 + ], + "spans": [ + { + "bbox": [ + 110, + 392, + 501, + 418 + ], + "type": "text", + "content": "1. Folding the Dress: The video starts with C folding a blue dress on the ironing board. This is the first major action." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 109, + 421, + 498, + 433 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 109, + 421, + 498, + 433 + ], + "spans": [ + { + "bbox": [ + 109, + 421, + 498, + 433 + ], + "type": "text", + "content": "2. Placing the Dress on the Ironing Board: After folding the dress, C places it on the ironing board." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 110, + 434, + 454, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 434, + 454, + 445 + ], + "spans": [ + { + "bbox": [ + 110, + 434, + 454, + 445 + ], + "type": "text", + "content": "3. Ironing the Dress: C then irons the dress, ensuring it is smooth and free of wrinkles." + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 109, + 449, + 402, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 109, + 449, + 402, + 460 + ], + "spans": [ + { + "bbox": [ + 109, + 449, + 402, + 460 + ], + "type": "text", + "content": "4. Hanging the Dress: Finally, C hangs the dress up, completing the task." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 109, + 463, + 235, + 474 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 109, + 463, + 235, + 474 + ], + "spans": [ + { + "bbox": [ + 109, + 463, + 235, + 474 + ], + "type": "text", + "content": "Now, let's evaluate the options:" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 110, + 477, + 446, + 559 + ], + "type": "list", + "angle": 0, + "index": 23, + "blocks": [ + { + "bbox": [ + 110, + 477, + 399, + 489 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 477, + 399, + 489 + ], + "spans": [ + { + "bbox": [ + 110, + 477, + 399, + 489 + ], + "type": "text", + "content": "(A) C folds the dress, places it on the ironing board, and then hangs it up." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 110, + 491, + 386, + 502 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 491, + 386, + 502 + ], + "spans": [ + { + "bbox": [ + 110, + 491, + 386, + 502 + ], + "type": "text", + "content": "(B) C folds the dress, places it on the ironing board, and then packs it." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 110, + 505, + 392, + 517 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 505, + 392, + 517 + ], + "spans": [ + { + "bbox": [ + 110, + 505, + 392, + 517 + ], + "type": "text", + "content": "(C) C folds the dress, places it on the ironing board, and then washes it." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 110, + 518, + 446, + 530 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 518, + 446, + 530 + ], + "spans": [ + { + "bbox": [ + 110, + 518, + 446, + 530 + ], + "type": "text", + "content": "(D) C folds the dress, places it on the ironing board, and then irons it and hangs it up." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 110, + 533, + 437, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 533, + 437, + 559 + ], + "spans": [ + { + "bbox": [ + 110, + 533, + 437, + 559 + ], + "type": "text", + "content": "(E) C folds the dress, places it on the ironing board, irons it, and then folds it again. The correct sequence of actions matches option (D)." 
+ } + ] + } + ], + "index": 22 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 109, + 578, + 234, + 590 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 109, + 578, + 234, + 590 + ], + "spans": [ + { + "bbox": [ + 109, + 578, + 234, + 590 + ], + "type": "text", + "content": "Time-R1 thinking (ours):" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 108, + 594, + 503, + 633 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 594, + 503, + 633 + ], + "spans": [ + { + "bbox": [ + 108, + 594, + 503, + 633 + ], + "type": "text", + "content": "The video shows C folding a dress, placing it on the ironing board, and then ironing it. The sequence of actions is clear and follows a logical order. The dress is folded, placed on the ironing board, and then ironed, which is a common sequence for completing a task like ironing a dress." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 104, + 652, + 506, + 674 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 652, + 506, + 674 + ], + "spans": [ + { + "bbox": [ + 104, + 652, + 506, + 674 + ], + "type": "text", + "content": "Figure 13: Case study on EgoSchema (w/ CoT), demonstrating that Time-R1 achieves better performance than the base model." + } + ] + } + ], + "index": 26 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "text", + "content": "21" + } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 20 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 110, + 117, + 228, + 205 + ], + "blocks": [ + { + "bbox": [ + 110, + 117, + 228, + 205 + ], + "lines": [ + { + "bbox": [ + 110, + 117, + 228, + 205 + ], + "spans": [ + { + "bbox": [ + 110, + 117, + 228, + 205 + ], + "type": "image", + "image_path": "265232fafe8299073c4c7c4ff4a77c04143193438124ab075d3728dc10ab911d.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 652, + 504, + 676 + ], + "lines": [ + { + "bbox": [ + 104, + 652, + 504, + 676 + ], + "spans": [ + { + "bbox": [ + 104, + 652, + 504, + 676 + ], + "type": "text", + "content": "Figure 14: Case study on EgoSchema (w/o CoT), demonstrating that Time-R1 achieves better performance than the base model." 
+ } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 247, + 117, + 363, + 205 + ], + "blocks": [ + { + "bbox": [ + 247, + 117, + 363, + 205 + ], + "lines": [ + { + "bbox": [ + 247, + 117, + 363, + 205 + ], + "spans": [ + { + "bbox": [ + 247, + 117, + 363, + 205 + ], + "type": "image", + "image_path": "328bb94b3b32a66cd62b1ed91525cc8163cc93fb06159121e58a9d4a4cb19b48.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 383, + 117, + 500, + 205 + ], + "blocks": [ + { + "bbox": [ + 383, + 117, + 500, + 205 + ], + "lines": [ + { + "bbox": [ + 383, + 117, + 500, + 205 + ], + "spans": [ + { + "bbox": [ + 383, + 117, + 500, + 205 + ], + "type": "image", + "image_path": "14920b340d6c0ba3c4d19e69ffea4b668b0f967e1acf371880dfc3a8a72bc09d.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 108, + 205, + 502, + 250 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 205, + 502, + 250 + ], + "spans": [ + { + "bbox": [ + 108, + 205, + 502, + 250 + ], + "type": "text", + "content": "(video length: 3m) Question: Identify the primary tools and ingredients c used throughout the video, and discuss how they contributed to the overall dessert preparation process." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 109, + 254, + 503, + 643 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 110, + 254, + 503, + 322 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 254, + 503, + 322 + ], + "spans": [ + { + "bbox": [ + 110, + 254, + 503, + 322 + ], + "type": "text", + "content": "(A) The primary tools and ingredients c used throughout the video were a spatula, a strainer, a dessert bowl, cocoa powder, and tissue paper. the spatula was used to spread the batter in the bowl. the strainer was used to sieve the cocoa powder into the bowl. the dessert bowl was used to hold the batter. the cocoa powder was used to add flavor and color to the batter. the tissue paper was used to wipe the edges of the bowl." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 110, + 324, + 501, + 392 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 324, + 501, + 392 + ], + "spans": [ + { + "bbox": [ + 110, + 324, + 501, + 392 + ], + "type": "text", + "content": "(B) The primary tools and ingredients c used throughout the video were a mixing bowl, a whisk, X a baking dish, flour, sugar, eggs, and butter. the mixing bowl was used to mix the batter. the whisk was used to beat the eggs. the baking dish was used to bake the cake. the flour was used to add structure to the cake. the sugar was used to add sweetness to the cake. the eggs were used to add moisture to the cake. the butter was used to add flavor and richness to the cake." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 110, + 394, + 501, + 475 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 394, + 501, + 475 + ], + "spans": [ + { + "bbox": [ + 110, + 394, + 501, + 475 + ], + "type": "text", + "content": "(C) The primary tools and ingredients c used throughout the video were a muffin tin, a spoon, chocolate chips, flour, sugar, eggs, and butter. the muffin tin was used to bake the muffins. the spoon was used to scoop the batter into the muffin tin. the chocolate chips were used to add flavor and color to the muffins. 
the flour was used to add structure to the muffins. the sugar was used to add sweetness to the muffins. the eggs were used to add moisture to the muffins. the butter was used to add flavor and richness to the muffins." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 109, + 478, + 501, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 109, + 478, + 501, + 559 + ], + "spans": [ + { + "bbox": [ + 109, + 478, + 501, + 559 + ], + "type": "text", + "content": "(D) The primary tools and ingredients c used throughout the video were a cake pan, a spatula, chocolate frosting, flour, sugar, eggs, and butter. The cake pan was used to bake the cake. The spatula was used to spread the frosting on the cake. The chocolate frosting was used to add flavor and color to the cake. The flour was used to add structure to the cake. The sugar was used to add sweetness to the cake. The eggs were used to add moisture to the cake. The butter was used to add flavor and richness to the cake." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 109, + 562, + 501, + 643 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 109, + 562, + 501, + 643 + ], + "spans": [ + { + "bbox": [ + 109, + 562, + 501, + 643 + ], + "type": "text", + "content": "(E) The primary tools and ingredients c used throughout the video were a mixing bowl, a whisk, a baking sheet, flour, sugar, eggs, and butter. the mixing bowl was used to mix the batter. the whisk was used to beat the eggs. the baking sheet was used to bake the cookies. the flour was used to add structure to the cookies. the sugar was used to add sweetness to the cookies. the eggs were used to add moisture to the cookies. the butter was used to add flavor and richness to the cookies." + } + ] + } + ], + "index": 8 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 311, + 750 + ], + "type": "text", + "content": "22" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 21 + }, + { + "para_blocks": [ + { + "bbox": [ + 108, + 87, + 144, + 96 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 87, + 144, + 96 + ], + "spans": [ + { + "bbox": [ + 108, + 87, + 144, + 96 + ], + "type": "text", + "content": "## Task" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 108, + 99, + 379, + 108 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 99, + 379, + 108 + ], + "spans": [ + { + "bbox": [ + 108, + 99, + 379, + 108 + ], + "type": "text", + "content": "Classify input queries into exactly one of the following categories based on their semantic content:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 108, + 110, + 192, + 118 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 110, + 192, + 118 + ], + "spans": [ + { + "bbox": [ + 108, + 110, + 192, + 118 + ], + "type": "text", + "content": "1. 
Human Action (Simple)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 108, + 120, + 469, + 138 + ], + "type": "list", + "angle": 0, + "index": 5, + "blocks": [ + { + "bbox": [ + 108, + 120, + 282, + 128 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 120, + 282, + 128 + ], + "spans": [ + { + "bbox": [ + 108, + 120, + 282, + 128 + ], + "type": "text", + "content": "- Definition: Singular physical movements or basic interactions." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 108, + 129, + 469, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 129, + 469, + 138 + ], + "spans": [ + { + "bbox": [ + 108, + 129, + 469, + 138 + ], + "type": "text", + "content": "- Examples: - person opens a book over their head. - The person gets out some ginger. - who did I talk to in the shopping mall?" + } + ] + } + ], + "index": 4 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 108, + 139, + 198, + 148 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 139, + 198, + 148 + ], + "spans": [ + { + "bbox": [ + 108, + 139, + 198, + 148 + ], + "type": "text", + "content": "2. Human Action (Complex)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 108, + 149, + 350, + 167 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 108, + 149, + 350, + 157 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 149, + 350, + 157 + ], + "spans": [ + { + "bbox": [ + 108, + 149, + 350, + 157 + ], + "type": "text", + "content": "- Definition: Single continuous event with intricate components or concurrent elements." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 108, + 159, + 323, + 167 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 159, + 323, + 167 + ], + "spans": [ + { + "bbox": [ + 108, + 159, + 323, + 167 + ], + "type": "text", + "content": "- Examples: - He is talking while several people are using rowing machines." + } + ] + } + ], + "index": 8 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 149, + 168, + 460, + 186 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 149, + 168, + 460, + 177 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 149, + 168, + 460, + 177 + ], + "spans": [ + { + "bbox": [ + 149, + 168, + 460, + 177 + ], + "type": "text", + "content": "- One man wearing blue shirt wearing a jumping leg extension and another man wearing red pants play on a field." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 149, + 178, + 333, + 186 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 149, + 178, + 333, + 186 + ], + "spans": [ + { + "bbox": [ + 149, + 178, + 333, + 186 + ], + "type": "text", + "content": "- who did I interact with when I did activity of fixing camping tent?" + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 108, + 188, + 205, + 197 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 188, + 205, + 197 + ], + "spans": [ + { + "bbox": [ + 108, + 188, + 205, + 197 + ], + "type": "text", + "content": "3. 
Human Action (procedural)" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 108, + 198, + 496, + 225 + ], + "type": "list", + "angle": 0, + "index": 16, + "blocks": [ + { + "bbox": [ + 108, + 198, + 493, + 206 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 198, + 493, + 206 + ], + "spans": [ + { + "bbox": [ + 108, + 198, + 493, + 206 + ], + "type": "text", + "content": "- Definition: contains multiple sequential events with explicit temporal boundaries. contains multiple actions, each with a clear start and end." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 108, + 208, + 496, + 225 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 208, + 496, + 225 + ], + "spans": [ + { + "bbox": [ + 108, + 208, + 496, + 225 + ], + "type": "text", + "content": "- Examples: - The person procures a condiment from the pantry, takes a spoon from the drawer which he uses to scoop it into the pan, then returns the condiment to the pantry, places the spoon in the sink and again stirs the pan." + } + ] + } + ], + "index": 15 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 108, + 227, + 476, + 244 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 227, + 476, + 244 + ], + "spans": [ + { + "bbox": [ + 108, + 227, + 476, + 244 + ], + "type": "text", + "content": "- The person takes out a spoon from the drawer, scoops some sugar into the glass, stirs it with the juice, and returns the package to the pantry." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 108, + 247, + 342, + 255 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 247, + 342, + 255 + ], + "spans": [ + { + "bbox": [ + 108, + 247, + 342, + 255 + ], + "type": "text", + "content": "- Negative Examples: - Then the man juices some lemons in a juicer: only one action" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 171, + 256, + 358, + 283 + ], + "type": "list", + "angle": 0, + "index": 22, + "blocks": [ + { + "bbox": [ + 171, + 256, + 323, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 171, + 256, + 323, + 264 + ], + "spans": [ + { + "bbox": [ + 171, + 256, + 323, + 264 + ], + "type": "text", + "content": "- She gets out a cutting board and knife: only one action" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 171, + 266, + 307, + 274 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 171, + 266, + 307, + 274 + ], + "spans": [ + { + "bbox": [ + 171, + 266, + 307, + 274 + ], + "type": "text", + "content": "- He then finishes by doing tricks: only one action" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 171, + 275, + 358, + 283 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 171, + 275, + 358, + 283 + ], + "spans": [ + { + "bbox": [ + 171, + 275, + 358, + 283 + ], + "type": "text", + "content": "- She removes bits of shell until there is a small hole: only one action" + } + ] + } + ], + "index": 21 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 108, + 285, + 157, + 293 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 285, + 157, + 293 + ], + "spans": [ + { + "bbox": [ + 108, + 285, + 157, + 293 + ], + "type": "text", + "content": "4. 
Human Pose" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 108, + 295, + 389, + 313 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 108, + 295, + 389, + 304 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 295, + 389, + 304 + ], + "spans": [ + { + "bbox": [ + 108, + 295, + 389, + 304 + ], + "type": "text", + "content": "- Definition: Static body positions or group configurations. Posture descriptors, positional prepositions" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 108, + 304, + 358, + 313 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 304, + 358, + 313 + ], + "spans": [ + { + "bbox": [ + 108, + 304, + 358, + 313 + ], + "type": "text", + "content": "- Examples: - Several other people are in the background working out on the equipment." + } + ] + } + ], + "index": 25 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 149, + 314, + 315, + 323 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 149, + 314, + 315, + 323 + ], + "spans": [ + { + "bbox": [ + 149, + 314, + 315, + 323 + ], + "type": "text", + "content": "- A young child is seen standing before a set of monkey bars." + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 108, + 324, + 198, + 334 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 324, + 198, + 334 + ], + "spans": [ + { + "bbox": [ + 108, + 324, + 198, + 334 + ], + "type": "text", + "content": "5. Object Existence (Simple)" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 108, + 335, + 312, + 352 + ], + "type": "list", + "angle": 0, + "index": 31, + "blocks": [ + { + "bbox": [ + 108, + 335, + 312, + 342 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 335, + 312, + 342 + ], + "spans": [ + { + "bbox": [ + 108, + 335, + 312, + 342 + ], + "type": "text", + "content": "- Definition: Current location/status queries. Simple location prepositions." + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 108, + 344, + 203, + 352 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 344, + 203, + 352 + ], + "spans": [ + { + "bbox": [ + 108, + 344, + 203, + 352 + ], + "type": "text", + "content": "- Examples: - Where is the tap?" + } + ] + } + ], + "index": 30 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 150, + 354, + 262, + 371 + ], + "type": "list", + "angle": 0, + "index": 34, + "blocks": [ + { + "bbox": [ + 150, + 354, + 222, + 361 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 150, + 354, + 222, + 361 + ], + "spans": [ + { + "bbox": [ + 150, + 354, + 222, + 361 + ], + "type": "text", + "content": "- where is the chopsticks?" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 150, + 364, + 262, + 371 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 150, + 364, + 262, + 371 + ], + "spans": [ + { + "bbox": [ + 150, + 364, + 262, + 371 + ], + "type": "text", + "content": "- In what location did i see the blue tent?" + } + ] + } + ], + "index": 33 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 108, + 373, + 205, + 382 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 373, + 205, + 382 + ], + "spans": [ + { + "bbox": [ + 108, + 373, + 205, + 382 + ], + "type": "text", + "content": "6. 
Object Existence (Complex)" + } + ] + } + ], + "index": 35 + }, + { + "bbox": [ + 108, + 384, + 475, + 411 + ], + "type": "list", + "angle": 0, + "index": 38, + "blocks": [ + { + "bbox": [ + 108, + 384, + 475, + 401 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 384, + 475, + 401 + ], + "spans": [ + { + "bbox": [ + 108, + 384, + 475, + 401 + ], + "type": "text", + "content": "- Definition: Queries about historical object positions changed by human actions, requiring temporal-action context (e.g., \"after/before [action]\")." + } + ] + } + ], + "index": 36 + }, + { + "bbox": [ + 108, + 403, + 271, + 411 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 403, + 271, + 411 + ], + "spans": [ + { + "bbox": [ + 108, + 403, + 271, + 411 + ], + "type": "text", + "content": "- Examples: - Where was the spatula after I first used it?" + } + ] + } + ], + "index": 37 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 150, + 412, + 261, + 439 + ], + "type": "list", + "angle": 0, + "index": 42, + "blocks": [ + { + "bbox": [ + 150, + 412, + 261, + 420 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 150, + 412, + 261, + 420 + ], + "spans": [ + { + "bbox": [ + 150, + 412, + 261, + 420 + ], + "type": "text", + "content": "- Where was the sieve before I picked it?" + } + ] + } + ], + "index": 39 + }, + { + "bbox": [ + 150, + 422, + 211, + 430 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 150, + 422, + 211, + 430 + ], + "spans": [ + { + "bbox": [ + 150, + 422, + 211, + 430 + ], + "type": "text", + "content": "- what bolt did I pick?" + } + ] + } + ], + "index": 40 + }, + { + "bbox": [ + 150, + 431, + 229, + 439 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 150, + 431, + 229, + 439 + ], + "spans": [ + { + "bbox": [ + 150, + 431, + 229, + 439 + ], + "type": "text", + "content": "- What mushroom did i chop" + } + ] + } + ], + "index": 41 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 108, + 441, + 168, + 450 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 441, + 168, + 450 + ], + "spans": [ + { + "bbox": [ + 108, + 441, + 168, + 450 + ], + "type": "text", + "content": "7. Object Attribute" + } + ] + } + ], + "index": 43 + }, + { + "bbox": [ + 108, + 452, + 363, + 469 + ], + "type": "list", + "angle": 0, + "index": 46, + "blocks": [ + { + "bbox": [ + 108, + 452, + 363, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 452, + 363, + 460 + ], + "spans": [ + { + "bbox": [ + 108, + 452, + 363, + 460 + ], + "type": "text", + "content": "- Definition: Physical/abstract property inquiries. Property descriptors (color/size/material)" + } + ] + } + ], + "index": 44 + }, + { + "bbox": [ + 108, + 461, + 263, + 469 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 461, + 263, + 469 + ], + "spans": [ + { + "bbox": [ + 108, + 461, + 263, + 469 + ], + "type": "text", + "content": "- Examples: - what material did I pick from the shelf?" + } + ] + } + ], + "index": 45 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 150, + 471, + 229, + 478 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 150, + 471, + 229, + 478 + ], + "spans": [ + { + "bbox": [ + 150, + 471, + 229, + 478 + ], + "type": "text", + "content": "- what color is the toilet bin?" 
+ } + ] + } + ], + "index": 47 + }, + { + "bbox": [ + 108, + 480, + 169, + 489 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 480, + 169, + 489 + ], + "spans": [ + { + "bbox": [ + 108, + 480, + 169, + 489 + ], + "type": "text", + "content": "8. Object Counting" + } + ] + } + ], + "index": 48 + }, + { + "bbox": [ + 108, + 491, + 342, + 508 + ], + "type": "list", + "angle": 0, + "index": 51, + "blocks": [ + { + "bbox": [ + 108, + 491, + 342, + 499 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 491, + 342, + 499 + ], + "spans": [ + { + "bbox": [ + 108, + 491, + 342, + 499 + ], + "type": "text", + "content": "- Definition: Quantitative object presence queries. Numeric quantifiers, plural objects" + } + ] + } + ], + "index": 49 + }, + { + "bbox": [ + 108, + 501, + 270, + 508 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 501, + 270, + 508 + ], + "spans": [ + { + "bbox": [ + 108, + 501, + 270, + 508 + ], + "type": "text", + "content": "- Examples: - how many tissue paper were on the floor?" + } + ] + } + ], + "index": 50 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 150, + 510, + 237, + 517 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 150, + 510, + 237, + 517 + ], + "spans": [ + { + "bbox": [ + 150, + 510, + 237, + 517 + ], + "type": "text", + "content": "- how many rolls are in the tray" + } + ] + } + ], + "index": 52 + }, + { + "bbox": [ + 108, + 520, + 173, + 528 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 520, + 173, + 528 + ], + "spans": [ + { + "bbox": [ + 108, + 520, + 173, + 528 + ], + "type": "text", + "content": "9. Object Transition" + } + ] + } + ], + "index": 53 + }, + { + "bbox": [ + 108, + 530, + 357, + 548 + ], + "type": "list", + "angle": 0, + "index": 56, + "blocks": [ + { + "bbox": [ + 108, + 530, + 357, + 538 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 530, + 357, + 538 + ], + "spans": [ + { + "bbox": [ + 108, + 530, + 357, + 538 + ], + "type": "text", + "content": "- Definition: State/position change confirmation. Transformation verbs, completion checks" + } + ] + } + ], + "index": 54 + }, + { + "bbox": [ + 108, + 540, + 223, + 548 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 540, + 223, + 548 + ], + "spans": [ + { + "bbox": [ + 108, + 540, + 223, + 548 + ], + "type": "text", + "content": "- Examples: - The bulb is broken apart." + } + ] + } + ], + "index": 55 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 150, + 549, + 206, + 557 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 150, + 549, + 206, + 557 + ], + "spans": [ + { + "bbox": [ + 150, + 549, + 206, + 557 + ], + "type": "text", + "content": "- Did I close fridge?" + } + ] + } + ], + "index": 57 + }, + { + "bbox": [ + 108, + 559, + 188, + 567 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 559, + 188, + 567 + ], + "spans": [ + { + "bbox": [ + 108, + 559, + 188, + 567 + ], + "type": "text", + "content": "10. Environment Change" + } + ] + } + ], + "index": 58 + }, + { + "bbox": [ + 108, + 570, + 324, + 587 + ], + "type": "list", + "angle": 0, + "index": 61, + "blocks": [ + { + "bbox": [ + 108, + 570, + 324, + 578 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 570, + 324, + 578 + ], + "spans": [ + { + "bbox": [ + 108, + 570, + 324, + 578 + ], + "type": "text", + "content": "- Definition: Dynamic scene modifications. 
Transient elements, overlay content" + } + ] + } + ], + "index": 59 + }, + { + "bbox": [ + 108, + 579, + 280, + 587 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 579, + 280, + 587 + ], + "spans": [ + { + "bbox": [ + 108, + 579, + 280, + 587 + ], + "type": "text", + "content": "- Examples: - video ends with clothes/captions scrolling down" + } + ] + } + ], + "index": 60 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 108, + 589, + 181, + 597 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 589, + 181, + 597 + ], + "spans": [ + { + "bbox": [ + 108, + 589, + 181, + 597 + ], + "type": "text", + "content": "11. Environment State" + } + ] + } + ], + "index": 62 + }, + { + "bbox": [ + 108, + 599, + 312, + 616 + ], + "type": "list", + "angle": 0, + "index": 65, + "blocks": [ + { + "bbox": [ + 108, + 599, + 312, + 607 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 599, + 312, + 607 + ], + "spans": [ + { + "bbox": [ + 108, + 599, + 312, + 607 + ], + "type": "text", + "content": "- Definition: Persistent scene elements. Static overlays, permanent fixtures" + } + ] + } + ], + "index": 63 + }, + { + "bbox": [ + 108, + 609, + 276, + 616 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 609, + 276, + 616 + ], + "spans": [ + { + "bbox": [ + 108, + 609, + 276, + 616 + ], + "type": "text", + "content": "- Examples: - Intro states 'Progression: Lisa's First Season'" + } + ] + } + ], + "index": 64 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 150, + 618, + 267, + 627 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 150, + 618, + 267, + 627 + ], + "spans": [ + { + "bbox": [ + 150, + 618, + 267, + 627 + ], + "type": "text", + "content": "-Trend Routing Technology' logo appears" + } + ] + } + ], + "index": 66 + }, + { + "bbox": [ + 108, + 629, + 187, + 639 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 629, + 187, + 639 + ], + "spans": [ + { + "bbox": [ + 108, + 629, + 187, + 639 + ], + "type": "text", + "content": "Output Format" + } + ] + } + ], + "index": 67 + }, + { + "bbox": [ + 107, + 641, + 484, + 670 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 641, + 484, + 670 + ], + "spans": [ + { + "bbox": [ + 107, + 641, + 484, + 670 + ], + "type": "text", + "content": "Return ONLY the exact category name from: [Human Action (Procedural), Human Action (Complex), Human Action (Simple), Human Pose, Object Existence (Simple), Object Existence (Complex), Object Attribute, Object Counting, Object Transition, Environment Change, Environment State]" + } + ] + } + ], + "index": 68 + }, + { + "bbox": [ + 108, + 670, + 459, + 679 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 670, + 459, + 679 + ], + "spans": [ + { + "bbox": [ + 108, + 670, + 459, + 679 + ], + "type": "text", + "content": "INPUT_prompt = \"\"Given the query below, classify it into one of the categories mentioned above. Query: {query} Your response:" + } + ] + } + ], + "index": 69 + }, + { + "bbox": [ + 123, + 691, + 485, + 703 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 123, + 691, + 485, + 703 + ], + "spans": [ + { + "bbox": [ + 123, + 691, + 485, + 703 + ], + "type": "text", + "content": "Figure 15: Prompts for LLM used to annotate the semantics of each query on TVGBench." 
+ } + ] + } + ], + "index": 70 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 299, + 741, + 310, + 750 + ], + "type": "text", + "content": "23" + } + ] + } + ], + "index": 71 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 22 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13399/3216c4a9-6622-4f8f-9268-b62c8b700c4e_content_list.json b/data/2025/2503_13xxx/2503.13399/3216c4a9-6622-4f8f-9268-b62c8b700c4e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..d1d9a60134f8cd93beee0e7f63000a697bd724cc --- /dev/null +++ b/data/2025/2503_13xxx/2503.13399/3216c4a9-6622-4f8f-9268-b62c8b700c4e_content_list.json @@ -0,0 +1,7281 @@ +[ + { + "type": "text", + "text": "MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research", + "text_level": 1, + "bbox": [ + 133, + 128, + 864, + 174 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "James Burgess*¹ Jeffrey J Nirschl*¹ Laura Bravo-Sánchez¹,*", + "bbox": [ + 236, + 203, + 756, + 220 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Alejandro Lozano $^{1}$ Sanket Rajan Gupte $^{1}$ Jesus G. Galaz-Montoya $^{1}$ Yuhui Zhang $^{1}$", + "bbox": [ + 156, + 220, + 839, + 239 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Yuchang Su² Disha Bhowmik³ Zachary Coman³ Sarina M. Hasan⁴", + "bbox": [ + 212, + 239, + 782, + 255 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Alexandra Johannesson $^{5}$ William D. Leineweber $^{1}$ Malvika G Nair $^{3}$ Ridhi Yarlagadda $^{3}$", + "bbox": [ + 130, + 256, + 866, + 273 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Connor Zuraski $^{1}$ Wah Chiu $^{1}$ Sarah Cohen $^{3}$ Jan N. 
Hansen $^{1}$ Manuel D Leonetti $^{6}$", + "bbox": [ + 156, + 273, + 839, + 290 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Chad Liu6 Emma Lundberg1,5,6 Serena Yeung-Levy1,6", + "bbox": [ + 267, + 291, + 723, + 310 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1Stanford University, 2Tsinghua University, 3University of North Carolina at Chapel Hill,", + "bbox": [ + 202, + 318, + 792, + 334 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{4}$ Princeton University, $^{5}$ KTH Royal Institute of Technology, $^{6}$ Chan Zuckerberg Biohub Network", + "bbox": [ + 183, + 334, + 813, + 349 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 ) Expert Visual Understanding", + "text_level": 1, + "bbox": [ + 109, + 369, + 323, + 383 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "What is unusual about the result?", + "bbox": [ + 112, + 393, + 328, + 407 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Perception", + "bbox": [ + 187, + 414, + 259, + 426 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "\"How is the Seipin localized within the endoplasmic reticulum (ER)?\"", + "bbox": [ + 94, + 441, + 204, + 484 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/48fb0b015fe336e7f1f2ee20f79fa8128697bbf493c68df1a3c102ba11f79ade.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 209, + 438, + 349, + 492 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "\"Do cells treated with BafA1 (left) express more p26 compared to control (right)?\"", + "bbox": [ + 102, + 513, + 205, + 565 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/fbf6225fd6cc5a03ce08245eb457e6ac241f62677aa68ba0b8663ce128f2d34a.jpg", + "image_caption": [ + "Figure 1. A scientific experimentation workflow drives discovery: researchers analyze experiments, develop hypotheses, and design further experiments to test their ideas. We release MicroVQA, a visual question answering (VQA) benchmark to test these three tasks in the context of biological microscopy. Each of the 1,042 samples is created by a biology expert, and transformed into a multiple choice question (MCQ)." 
+ ], + "image_footnote": [], + "bbox": [ + 215, + 503, + 341, + 583 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "2 ) Hypothesis generation", + "text_level": 1, + "bbox": [ + 385, + 369, + 584, + 383 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Why does this happen in my experiment?", + "bbox": [ + 370, + 396, + 630, + 410 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Assessment", + "bbox": [ + 462, + 412, + 544, + 422 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "\"Which mechanism might explain why ASLV particles show signs of merging in a CryoEM image?\"", + "bbox": [ + 367, + 436, + 506, + 489 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "\"Which glial cell is likely responsible for this abnormal reticular fiber pattern?\"", + "bbox": [ + 379, + 518, + 506, + 560 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/743ab2730b296e0179253c4096d078e836c67aeb0542c24381b8e5557e9ff0ed.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 519, + 435, + 622, + 508 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/d49b3c5b889aacafd67f16612e1e0234e4bbf363b0baafdfa4c53e1eb51a055a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 521, + 510, + 622, + 580 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "3 Experimental proposal", + "text_level": 1, + "bbox": [ + 663, + 369, + 864, + 383 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "How do I test my hypothesis?", + "bbox": [ + 683, + 395, + 870, + 407 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Action", + "bbox": [ + 756, + 412, + 799, + 422 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "\"What experiment could you perform to test if CCNB1 protein levels relate to cell cycle stages in human A-431 cells?\"", + "bbox": [ + 650, + 438, + 792, + 489 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/49ee27fd732ed49af49867b6852d6dc9224ef77778edb08746e4310eb67393ad.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 805, + 433, + 903, + 507 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "\"What experimental change can increase the likelihood of achieving a high-resolution structure?\"", + "bbox": [ + 669, + 513, + 792, + 565 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/69d02fafbdedd7b8e5a3996c1814917433fee933752157a05313b1d02141cd0c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 807, + 508, + 893, + 547 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/41c5c5c6b79455a8e5bed80c85cbc8336ec8c76d066a5f1bc1a77fb088e9a319.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 807, + 549, + 893, + 579 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 250, + 648, + 325, + 662 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Scientific research demands sophisticated reasoning over multimodal data, a challenge especially prevalent in biology. Despite recent advances in multimodal large language models (MLLMs) for AI-assisted research, existing multimodal reasoning benchmarks only target up to college-level difficulty, while research-level benchmarks emphasize lower-level perception, falling short of the complex multimodal reasoning needed for scientific discovery. 
To bridge this gap, we introduce MicroVQA, a visual-question answering (VQA) benchmark designed to assess three reasoning capabilities vital in research workflows: expert image understanding, hypothesis generation, and experiment pro", + "bbox": [ + 89, + 681, + 483, + 863 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "posal. MicroVQA consists of 1,042 multiple-choice questions (MCQs) curated by biology experts across diverse microscopy modalities, ensuring VQA samples represent real scientific practice. In constructing the benchmark, we find that standard MCQ generation methods induce language shortcuts, motivating a new two-stage pipeline: an optimized LLM prompt structures question-answer pairs into MCQs; then, an agent-based 'RefineBot' updates them to remove shortcuts. Benchmarking on state-of-the-art MLLMs reveal a peak performance of $53\\%$ ; models with smaller LLMs only slightly underperform top models, suggesting that language-based reasoning is less challenging than multimodal reasoning; and tuning with scientific articles enhances performance. Expert analysis of chain-of-thought responses shows that perception errors are the most frequent, followed by knowledge errors and then overgen", + "bbox": [ + 511, + 650, + 906, + 892 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2503.13399v1 [cs.CV] 17 Mar 2025", + "bbox": [ + 22, + 260, + 57, + 705 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "Correspondence {jmhb, jnirschl, lmbravo}@stanford.edu", + "bbox": [ + 89, + 875, + 464, + 887 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*These authors contributed equally", + "bbox": [ + 93, + 887, + 279, + 900 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "1", + "bbox": [ + 493, + 925, + 501, + 935 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "eralization errors. These insights highlight the challenges in multimodal scientific reasoning, showing MicroVQA is a valuable resource advancing AI-driven biomedical research. MicroVQA is available here, project here.", + "bbox": [ + 89, + 90, + 483, + 154 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 89, + 166, + 222, + 181 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "The transformative potential of AI in scientific discovery lies in its ability to interpret and reason over complex, multimodal data while integrating specialist domain knowledge. Recently, multimodal large language models (MLLMs) have demonstrated notable capabilities in data comprehension, recall, and multi-step reasoning across various modalities on exam-style questions [1, 3, 6, 49, 69, 81]. This progress has generated enthusiasm for scientific AI applications, ranging from multimodal chat AI assistants [33, 70, 85, 96] to AI agents capable of orchestrating complex experiments [12, 16, 26, 55]. However, while MLLMs perform well on general exam tasks, they often lack the specialized, contextual reasoning needed for real scientific challenges, where tasks require advanced image-based reasoning, analysis, and hypothesis-driven experimentation.", + "bbox": [ + 89, + 191, + 482, + 417 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In recent years, efforts to apply scientific MLLMs across a range of domains have laid an essential groundwork. 
Advances have been achieved in image understanding [41, 43, 47, 50, 57], literature analysis [41, 47, 66, 77], and problem-solving at school and early college levels [53, 87]. In applied settings, AI has facilitated scientific tasks in biology [28, 60, 63, 68], chemistry [12, 38, 55], software [63, 73], and diagnostic pathology and radiology [52]. However, there has been less progress on multimodal tasks requiring reasoning in research-level settings. The gap is due in part to a lack of multimodal benchmarks for scientific reasoning – unlike college-level tasks where existing exams can form benchmarks, research tasks require expert curation.", + "bbox": [ + 89, + 417, + 482, + 613 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Microscopy provides a compelling use case for developing such a benchmark. It requires more than pattern recognition or factual recall – it demands the ability to synthesize visual information with experimental context, formulate hypotheses, and propose experimental follow-ups. To evaluate these advanced capabilities, we introduce MicroVQA, a visual-question answering (VQA) benchmark tailored for multimodal scientific reasoning. Grounded in the real-world scientific practice of biological microscopy, MicroVQA incorporates 1,042 multiple-choice questions (MCQs) manually created by biological researchers across diverse disciplines, with each question taking more than 30 minutes to generate. Careful expert curation ensures the benchmark is relevant to real scientific applications.", + "bbox": [ + 89, + 614, + 482, + 824 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "MicroVQA is designed to evaluate three key reasoning tasks crucial to scientific inquiry: (1) expert image understanding - the ability to recognize scientifically relevant features and integrate contextual information about sample preparation; (2) hypothesis generation - the formulation", + "bbox": [ + 89, + 825, + 482, + 901 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "of scientifically plausible explanations from experimental data; and (3) experiment proposal – the ability to suggest further experiments that could validate or refute hypotheses. These capabilities form a common workflow in scientific experimentation. They demand both abductive reasoning – inferring the best explanation from multiple possibilities – and deductive reasoning – moving from general principles to specific conclusions [81].", + "bbox": [ + 511, + 90, + 903, + 210 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We tested standard methods to map open scientific VQA samples to multiple-choice questions (MCQs), but found that they induce language shortcuts – the MCQs failed to truly test multimodal abilities. To ensure question quality and difficulty, we developed a two-stage pipeline for MCQ generation: first, an optimized LLM prompt structures QA pairs into well-formatted MCQs; then, a novel agent-based 'RefineBot' system increases question difficulty by rewriting questions without language shortcuts. This ensures that MicroVQA's MCQs better test true scientific capabilities, rather than simple pattern recognition.", + "bbox": [ + 511, + 212, + 903, + 378 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We benchmarked frontier MLLMs on MicroVQA, finding that the best performers achieved a modest peak of $53\\%$ , which shows a significant gap between current models and expert-level scientific reasoning. 
The variation between different MLLMs was small; notably, smaller LLMs only slightly underperform their larger counterparts, suggesting that the key challenge for our MCQs is not language-only reasoning, but multimodal reasoning or knowledge. We also find that finetuning MLLMs on scientific publications improves MicroVQA performance. Finally, a group of biomedical researchers performed a detailed qualitative analysis into MLLM failure modes, uncovering three major failure modes: Expert perception is the most common, followed by knowledge errors, and then overgeneralization reasoning errors. This further supports that visual reasoning is a key challenge in MicroVQA.", + "bbox": [ + 511, + 378, + 903, + 619 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In summary, our contributions are as follows:", + "bbox": [ + 532, + 619, + 834, + 633 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We introduce MicroVQA, a benchmark for multimodal reasoning in biological microscopy. We defined three key tasks for scientific research - expert image understanding, hypothesis generation, and experiment proposal. Then, expert researchers curated 1,042 diverse VQA samples.", + "- We develop a novel two-stage MCQ generation pipeline to address the challenge of creating MCQs from long-format scientific questions without language shortcuts.", + "- We provide quantitative and qualitative benchmarking of frontier MLLMs, highlighting areas for improvement." + ], + "bbox": [ + 513, + 635, + 903, + 786 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related work", + "text_level": 1, + "bbox": [ + 513, + 799, + 651, + 814 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "MLLM reasoning benchmarks While perception focuses on identifying an image's visual features, visual reasoning often integrates prior knowledge to derive new conclusions using logical inference strategies like deduction, induction, abduction, and analogical reasoning [81].", + "bbox": [ + 511, + 825, + 903, + 901 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 493, + 924, + 504, + 935 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Reasoning in MLLMs has focused on image-text VQA [25, 44, 53, 81, 87, 88]. The most prominent examples in science are MMMU [87, 88], ScienceQA, [53] and MathVista [54]. While they do test reasoning, they are derived from exams up to the college level, while our benchmark emphasizes research-level difficulty. Other popular multimodal benchmarks test commonsense inference [71, 90], spatial or relational awareness [35, 91] and integrating prior knowledge [56]. Another line of work is visual abductive reasoning [8, 32], or finding the most likely explanation for an observation from incomplete information; this relates closely to hypothesis generation in this work.", + "bbox": [ + 89, + 90, + 483, + 272 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "MLLMs in science Many works consider multimodal VQA in scientific or medical domains like microscopy images [50], biological images [57], medical images [30, 34], and chemical structure drawings [43]. Still, they mostly test more straightforward perception and recognition. Figure comprehension for scientific articles does require more reasoning with multimodal data [41, 45-47, 61]; however, the images here are curated for publication, and the emphasis is on understanding content, rather than performing practical tasks. 
In contrast, our benchmark includes realistic experimental images and tests more advanced reasoning such as hypothesis generation and experiment planning. Scientific tasks have received far more attention for language-only LLMs [9, 15, 22, 37, 41, 58, 62, 63, 66, 67, 73, 80, 95], with GPQA a notable effort targeting PhD-level multi-step reasoning [62], but these benchmarks do not test multimodal reasoning, which is important in many scientific areas [26]. In terms of models, most MLLMs in the biomedical domain target clinical tasks [34, 42, 59, 65, 75, 92].",
Dataset feature | Value
Total questions | 1,042
Multi-image questions | 423
Avg. MCQ question length | 66
Avg. MCQ answer length | 15
Avg. raw question length | 158
Avg. raw answer length | 52
Unique image sets | 255
Image Modalities | Light, Fluoro, Electron
Image Scales | Tissue, Cell, Subcell, Atomic
Organisms | 31
Research areas | 33
Expert question creators | 12
Time to create 1 question | 30-40 mins
Time to quality check 1 MCQ | 5 mins
", + "bbox": [ + 524, + 114, + 895, + 306 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/03d2e6afce9ebf9ce549a1902ba73c25c37b17096a0491c3439a9eed68299682.jpg", + "image_caption": [ + "Figure 2. MicroVQA taxonomy of sub-tasks." + ], + "image_footnote": [], + "bbox": [ + 516, + 324, + 893, + 436 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "scientific reasoning by tackling two major evaluation challenges. First, defining tasks is challenging because scientific research encompasses many complex activities [33] – we therefore define three important tasks important to biomedical discovery. Second, VQA samples cannot be readily sourced from existing exams or classification datasets – we therefore assembled a team of biologists to curate challenging reasoning problems. In Sec. 3.1, we provide an overview of the MicroVQA benchmark; Sec. 3.2 details the key tasks; and finally, Sec. 3 demonstrates that MicroVQA is at the forefront of measuring research-level reasoning in MLLMs. In the later Sec. 4, we describe our approach for generating multiple-choice questions.", + "bbox": [ + 511, + 484, + 906, + 681 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Overview of MicroVQA", + "text_level": 1, + "bbox": [ + 511, + 694, + 733, + 710 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "MicroVQA is a comprehensive dataset of 1,042 VQA triplets, manually curated by a team of expert biological researchers from diverse fields, and key attributes are summarized in Tab. 1. These expert-crafted questions test reasoning across three key tasks: expert understanding, hypothesis generation, and experiment proposal, and we additionally provide a taxonomy of sub-tasks in Sec. 3. The questions cover a broad spectrum, ranging from biological issues, such as \"What is unusual about the cell shape?\" - to technical imaging challenges like \"Is this feature due to low signal-to-noise ratio (SNR) or is it real?\". We provide the 'raw' questions and answers written by experts, which", + "bbox": [ + 511, + 719, + 906, + 900 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 493, + 924, + 503, + 935 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "are often long and nuanced, along with converted multiple-choice questions (MCQs) suitable for MLLM evaluation.", + "bbox": [ + 89, + 90, + 482, + 119 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The images encompass the most common microscopy modalities used in human biology: brightfield, fluorescence, and electron microscopy. The sample types span the full range of microscopic scales – tissue, cellular, subcellular, and atomic – and emphasize organisms relevant to human biology and medically motivated tasks, namely human and mouse. Over $60\\%$ of the samples have multiple images because comparison is essential in microscopy research, and because multichannel fluorescence images cannot be represented in standard RGB. All these attributes are provided as metadata tags to facilitate deeper error analysis, thereby enhancing MicroVQA's value as a resource for advancing AI capabilities in scientific research.", + "bbox": [ + 88, + 121, + 482, + 316 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2. 
MicroVQA scientific reasoning tasks", + "text_level": 1, + "bbox": [ + 89, + 327, + 406, + 343 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To define specific reasoning tasks aimed at advancing biomedical research, we established the following criteria: (1) the tasks should use image-text inputs and text outputs suitable for MLLMs, (2) they should require higher-order reasoning beyond simple image processing, and (3) they should emphasize core experimental activities such as experimentation, analysis, and action [33], rather than auxiliary tasks like literature review or writing. The tasks were developed through interviews with nine co-author PIs and postdocs specializing in microscopy research (Appendix D). We identified three essential capabilities that we now introduce, with examples shown in Sec. 3.", + "bbox": [ + 89, + 349, + 483, + 531 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Expert visual understanding After performing an experiment, scientists must interpret data within its experimental context. In biological microscopy, this involves identifying patterns like protein distributions or cell morphologies, while also assessing technical aspects such as artifacts and noise levels. The task requires MLLMs to perform anomaly detection and image comparison, going beyond simple perception in two key ways: analysis must consider sample preparation context, and expert knowledge is needed to evaluate biological features and technical artifacts.", + "bbox": [ + 89, + 550, + 483, + 702 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Hypothesis generation The next step is proposing mechanistic hypotheses to explain experimental data. For example, when a drug treatment causes mitochondrial fragmentation in cells, a hypothesis might be that the drug disrupted calcium homeostasis, activating the DRP1 protein through a multi-step pathway, leading to fragmentation. This requires abductive reasoning [8, 20, 32, 81] as one must select from many possible hypotheses given incomplete information. The process demands the MLLM to integrate relevant domain knowledge with experimental context and visual features to reason about which hypothesis best explains the observations.", + "bbox": [ + 89, + 719, + 483, + 900 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Table 2. Comparing scientific multimodal benchmarks close to MicroVQA for in terms of reasoning level or difficulty beyond college level. We show difficulty level, scientific domain, dataset source, and size. Compared to others, MicroVQA either has higher difficulty level, or it has higher reasoning level (which is established in Fig. 3). Compared to others at the same reasoning level, namely MMMU-Pro, it has similar size.", + "bbox": [ + 511, + 89, + 906, + 186 + ], + "page_idx": 3 + }, + { + "type": "table", + "img_path": "images/74382198ca3b9ec195519c7bd834f9055a7627c18103536daa7765928d1e68d7.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Benchmark | Difficulty level | Domain | Source | Size
MicroVQA | research | microscopy | expert-curated | 1,042
MMSci [47] | research | science | paper figures | 7,132
LabBench [41] | research | biology | webQA | 181
PathVQA [30] | graduate | pathology | textbooks | 16.3k
OmnimedVQA* [34] | graduate | medical | classification dataset | 127.9k
Microbench [50] | graduate | microscopy | classification dataset | 17.2k
MMMU [87] | undergraduate | general | textbooks, webQA | 11k
MMMU-Pro [88] | undergraduate | general | MCQ dataset | 1,730
ScienceQA [53] | grade-school | science | exams | 16.8k
", + "bbox": [ + 514, + 196, + 906, + 318 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/5295c1b7d26a09eff3d6c5cf0c423b626244b32e9626088d654ec8fc1a9df192.jpg", + "image_caption": [ + "Figure 3. Composition of scientific MLLM benchmarks regarding estimated Bloom's taxonomy [11]. Higher levels are more cognitively challenging. MicroVQA has more questions at higher levels compared to other benchmarks, for example, MMMU [87] and ScienceQA [53], while perception-driven medical benchmarks like OmniMedVQA are at lower levels." + ], + "image_footnote": [], + "bbox": [ + 516, + 335, + 906, + 656 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Experiment proposal The final step is designing experiments to validate hypotheses. In microscopy, this often involves selecting appropriate assays and controls, requiring knowledge of suitable protocols and deductive reasoning about whether proposed experiments will provide evidence for or against the hypothesis. This task also may involve adjusting a particular experiment to address technical issues, like how to ensure a good signal-to-noise ratio,", + "bbox": [ + 511, + 780, + 908, + 902 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 493, + 925, + 504, + 935 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "and this requires both image understanding and knowledge of protocols.", + "bbox": [ + 89, + 90, + 480, + 119 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In Appendix F.7.1, we show an example question in each task with an expert-annotated 'reasoning trace' - it shows the type of reasoning required for typical questions.", + "bbox": [ + 89, + 121, + 480, + 167 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.3. Analysis of MicroVQA benchmark", + "text_level": 1, + "bbox": [ + 89, + 178, + 395, + 193 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Scientific discovery requires complex reasoning beyond basic perception and knowledge, and it requires expertise beyond school-level education. Here we show that MicroVQA addresses a gap that prior multimodal scientific benchmarks do not test high level reasoning at research-level difficulty.", + "bbox": [ + 89, + 200, + 480, + 291 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To assess reasoning levels in MLLM benchmarks, we use Bloom's taxonomy [11], which hierarchically classifies cognitive difficulty from recall to evaluation. We fine-tune GPT-4o to classify Bloom's taxonomy on related MLLM benchmarks with results in Fig. 3; in Tab. 2 we additionally show key attributes of the same benchmarks. Fig. 3 shows that benchmarks having research-level and graduate-level difficulty are dominated by lower reasoning questions (levels 1-2), while our MicroVQA has higher level reasoning (levels 3-4). This is not surprising since OmnimedVQA and MicroBench derive questions from prior classification datasets, while our task definition and data collection approach does reflect high-level reasoning.", + "bbox": [ + 89, + 292, + 482, + 488 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "On the other hand, some benchmarks have higher Bloom's reasoning level closer to MicroVQA - MMMU, MMMU-Pro, and Science-QA - but their difficulty level is lower - undergraduate or grade-school. 
The most comparable benchmark in terms of reasoning level is the undergraduate-level MMMU-Pro, whose 1,730 questions are comparable to our 1,042; this indicates that for reasoning-intensive benchmarks, sizes beyond 1,000 questions are already at the high end.",
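The Bloom's-level comparison in Fig. 3 relies on classifying each benchmark question by cognitive level. The sketch below is a rough illustration of that kind of classification; the `ask_llm` helper and the prompt wording are assumptions, whereas the paper uses a fine-tuned GPT-4o classifier.

```python
BLOOM_LEVELS = ["1 Recall", "2 Comprehension", "3 Application",
                "4 Analysis", "5 Synthesis", "6 Evaluation"]

def estimate_bloom_level(question: str, options: list, ask_llm) -> str:
    """Ask a classifier LLM for the Bloom's taxonomy level of one MCQ.

    ask_llm(prompt: str) -> str is a placeholder for whichever LLM client is
    available; the paper instead fine-tunes GPT-4o for this classification.
    """
    prompt = (
        "Classify the cognitive level of this multiple-choice question using "
        f"Bloom's taxonomy ({'; '.join(BLOOM_LEVELS)}).\n"
        f"Question: {question}\nOptions: {options}\n"
        "Answer with the level name only."
    )
    return ask_llm(prompt).strip()
```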
Many distractors can be easily eliminated from general biology knowledge alone, or because they are too vague compared to the correct option. We hypothesize that MCQ and distractor generation is challenging when the target answer string is long and the subject matter is specialized, which suggests that other scientific benchmarks may encounter similar issues. Overall, this motivates a more involved approach to distractor generation.",
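The no-image ablation described above can be reproduced with a simple check: answer each MCQ from the text alone and flag questions that are still answered correctly. This is a minimal sketch under stated assumptions; `answer_mcq` stands in for whatever text-only LLM call is used and is not an API from the paper.

```python
def find_language_shortcuts(mcqs, answer_mcq):
    """Flag MCQs that an LLM answers correctly *without* seeing the image.

    mcqs: list of dicts with 'question', 'options', 'answer_index'.
    answer_mcq(question, options) -> int: placeholder for a text-only LLM call.
    """
    mcqs = list(mcqs)
    shortcut_ids = []
    for i, mcq in enumerate(mcqs):
        predicted = answer_mcq(mcq["question"], mcq["options"])
        if predicted == mcq["answer_index"]:
            shortcut_ids.append(i)  # solvable from text alone -> likely a language shortcut
    rate = len(shortcut_ids) / max(len(mcqs), 1)
    print(f"No-image accuracy: {rate:.1%}")  # the paper reports ~93% before refinement
    return shortcut_ids
```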
The LLM summarizes its solution strategy into a reflection text: in this case, that the distractor was implausible based on knowledge that the stated process", + "bbox": [ + 89, + 688, + 483, + 901 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "does not not occur inside the virus, and so does not match the question. It passes the reflection to the rewriter LLM agent. The rewriter revises the question stem and generates new distractors to invalidate the identified strategies: in this example it may create distractors referring to processes that do occur inside a virus. To prevent significant changes to the question-answer pair over iterations, we employ an LLM checker to ensure semantic consistency with the original pair. The revised MCQ is returned to the evaluator; if it still answers correctly, the loop continues. The process stops if the rewriter fails the checker or after $n$ iterations. If it fails, RefineBot can be rerun with a different seed, often succeeding—a form of inference-time scaling [13]. All agents are GPT-4o-0806, but to mitigate potential bias against 4o in final evaluation, we also use Claude-3.5-Sonnet-0620 as the evaluator agent; we assess biases in the experiments section.", + "bbox": [ + 511, + 537, + 908, + 792 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Final MCQ quality check Since MCQ generation involves LLM processing, we need to verify the correctness of the final answer. Each MCQ is manually reviewed by the same expert who created that particular question (between 80 and 120 per person). For any question with issues, that expert makes minimal changes to the question correct it.", + "bbox": [ + 511, + 809, + 910, + 902 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 493, + 925, + 504, + 936 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5. Experiments", + "text_level": 1, + "bbox": [ + 89, + 89, + 223, + 107 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.1. Benchmarking MLLMs with MicroVQA", + "text_level": 1, + "bbox": [ + 89, + 113, + 439, + 131 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We evaluate state-of-the art multimodal large language models (MLLMs) on the MicroVQA benchmark. We include open and closed models spanning the categories: reasoning, large, small, and medical - medical is the closest domain to microscopy with specialist MLLMs. We utilize standard chain-of-thought prompting [88] (deetails in Appendix F). We report mean accuracy in multiple-choice VQA for each tasks Tab. 3. We also perform and analyze no-image Appendix F.5.", + "bbox": [ + 89, + 137, + 483, + 273 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "MicroVQA is challenging for all MLLMs Our evaluation reveals a substantial gap between current MLLMs and the upper bound, with the leading model, o1, at 52.8. There is surprisingly little variation in performance between models, with most closed and open models all scoring above 40, with the lowest score (Llama-3.2-11b) likely due to instruction-following issues. A human baseline (Appendix F.3) scores only 50.3 – this shows that biology experts are specialized to subdomains, and are challenged by different subdomains. In fact this demonstrates that experts could benefit from MLLMs that can solve MicroVQA's tasks. The performance variation across the 3 tasks is small – the highest score is 56.4 on 'expert visual understanding (V)', compared to the highest overall of 52.8. 
The 'hypothesis generation' task is the hardest for all models, and the gap is largest for smaller models: for example, the gap between hypothesis generation and expert visual understanding is 3.7 for the strongest large model but 10.7 for the strongest small model.",
Category | Model | Overall | V | H | E
reasoning | o1 [36] | 52.8 | 55.4 | 50.2 | 53.0
large models | *Claude-3.5-Sonnet [6] | 51.7 | 54.1 | 50.2 | 50.4
large models | Gemini-Pro-1.5 [69] | 51.1 | 52.0 | 50.2 | 50.9
large models | Pixtral-Large [2] | 49.8 | 50.8 | 49.5 | 48.7
large models | Grok-2-Vision [84] | 48.4 | 50.3 | 46.4 | 48.7
large models | Qwen-2-VL-72b-Instruct [79] | 47.5 | 49.2 | 45.7 | 47.8
large models | VILA1.5-40b [48] | 47.5 | 47.2 | 47.9 | 47.4
large models | *GPT-4o [1] | 45.6 | 48.7 | 43.1 | 44.8
large models | Llama-3.1-Nemotron-70b-Instruct [83] | 44.2 | 44.9 | 43.3 | 44.8
large models | Llama-3.2-90b-Vision-Instruct [21] | 42.4 | 44.9 | 42.1 | 38.7
small models | Qwen-2-VL-7b [79] | 48.8 | 54.1 | 43.3 | 49.6
small models | Claude-3.5-Haiku [6] | 47.1 | 48.0 | 43.8 | 51.7
small models | Gemini-Flash-1.5-8b [69] | 46.7 | 48.7 | 43.6 | 49.1
small models | GPT-4o-mini [1] | 46.2 | 48.5 | 43.6 | 47.0
small models | Pixtral-12b [2] | 45.6 | 46.9 | 44.8 | 44.8
small models | VILA1.5-13b [48] | 41.8 | 41.8 | 47.5 | 40.9
small models | Llama-3.2-11b-Vision-Instruct [21] | 30.3 | 32.4 | 29.3 | 28.7
medical | LLaVA-Med-Mistral-7B [42] | 43.0 | 37.3 | 47.1 | 41.6
medical | †LLaVA-Mistral-7B [49] | 39.8 | 31.6 | 43.1 | 37.1
- | Random | 22.0 | 21.9 | 21.8 | 21.9
- | Human | 50.3 | 52.7 | 47.5 | 51.4
", + "bbox": [ + 514, + 170, + 903, + 429 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/0d276a6c3025cbc820c010979d7949d8cc8c3b10e99164ba5034c627462ec952.jpg", + "table_caption": [ + "Table 4. Ablation study on MicroVQA MCQ generation stages (shown in Fig. 4). Accuracy is high because MCQs have shortcuts (Sec. 4.1) after 'Stage 1' exam alignment, but is lower after 'Stage 2'. Final column is the relative decrease in accuracy. Models with * were used in Stage 2 generation and have the biggest accuracy drops (bolded). They are grouped with different models from the same provider in **." + ], + "table_footnote": [], + "table_body": "
Model | Stage 1 | Stage 2 | Relative drop (%)
*GPT-4o | 79.7 | 46.8 | -41.2
**GPT-4o-mini | 75.6 | 46.2 | -39.0
*Claude-3.5-Sonnet | 82.2 | 51.7 | -37.1
**Claude-3.5-Haiku | 77.3 | 47.1 | -39.0
o1 | 81.6 | 52.8 | -35.3
Pixtral-Large | 80.1 | 49.8 | -37.8
Gemini-Pro-1.5 | 79.1 | 51.1 | -35.4
", + "bbox": [ + 537, + 551, + 883, + 659 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "for microscopy research in particular, LLaVA-Med is fine-tuned on scientific articles from PubMed that include data types overlapping with MicroVQA - especially (tissue) pathology. In Tab. 3, we compare LLaVA-Med against it's base model, LLaVA-Mistral-7B and find overall stronger performance by 4.5 points. This suggests two clear opportunities for improving MicroVQA performance. The first is to instruct-tune with a dataset that better aligns with the MicroVQA data domain: not only tissue pathology, but also fluorescence and electron microscopy, which is available in scientific articles [93]. The second is to simply begin training from a stronger open model like Pixtral. Beyond that, there is more to explore in MLLM specialization, such as tuning the base image encoder for microscopy [51].", + "bbox": [ + 511, + 689, + 906, + 902 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 493, + 924, + 504, + 935 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/fb7ae61d2c764b73e9ea659d9fa49d8dfbcb46b282473ec901f7169f1e274c0a.jpg", + "image_caption": [ + "Figure 5. Performance by sub-task and Bloom's level for best models: Gemini-1.5-Pro (closed source), VILA1.5-40B (open-source), and LlaVA-Med-Mistral-7B (medical)." + ], + "image_footnote": [], + "bbox": [ + 133, + 93, + 444, + 313 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "RefineBot MCQ generation is very effective but introduces small model biases We introduced a new method for refining MCQs to make them more challenging - RefineBot. Tab. 4 shows that between stage 1 and stage 2 (before and after RefineBot), the relative accuracy for all models drops by between $35\\%$ and $42\\%$ . Firstly, this validates that RefineBot is a valuable tool for making benchmarks harder. However, the most significant drops are for models used by RefineBot (GPT-4o & Claude-3.5-Sonnet, and their smaller versions, GPT-4o-mini & Claude-3-Haiku). The adversarial process introduced a small bias against these models (and we hypothesise the bias would be higher if using only one model instead of two). Despite making fair evaluation slightly more challenging, our use of frontier LLMs in the refinement process is well-motivated, as these models are best positioned to generate challenging, high-quality MCQs that probe the boundaries of current capabilities. Moreover, the bias seems modest - less than 10 points of relative drop - while the benefit is a huge increase in difficulty across all test MLLMs, as seen in Tab. 4.", + "bbox": [ + 91, + 382, + 482, + 684 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Metadata-based analysis supports deeper error analysis Fig. 5 shows error rates based on attribute tag. In Appendix F we explore results further - namely that multi-image reasoning has surprisingly good performance, higher level Bloom's questions are harder, and (not shown) rare image modalities perform worse.", + "bbox": [ + 89, + 702, + 482, + 792 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5.2. Error analysis on MicroVQA", + "text_level": 1, + "bbox": [ + 89, + 801, + 352, + 816 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "To understand model errors, three biomedical experts reviewed the Chain-of-Thought (CoT) responses of 30 samples on Claude 3.5 Sonnet, with each review requiring at least 45 mins of careful analysis. 
Errors were classified into a single major category, and minor contributing errors were also",
More generally, we hope that different application domains – biomedicine, chemistry, materials science – will share methods toward the common goal of building scientific AI systems.",
arXiv preprint arXiv:2407.21787, 2024. 6",
3, 8", + "[28] Juraj Gottweis, Wei-Hung Weng, Alexander Daryin, Tao Tu, Anil Palepu, Petar Sirkovic, Artiom Myaskovsky, Felix Weissenberger, Keran Rong, Ryutaro Tanno, et al. Towards an ai co-scientist. arXiv preprint arXiv:2502.18864, 2025. 2", + "[29] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904-6913, 2017. 3, 13, 18", + "[30] Xuehai He, Yichen Zhang, Luntian Mou, Eric Xing, and Pengtao Xie. Pathvqa: 30000+ questions for medical visual question answering. arXiv preprint arXiv:2003.10286, 2020. 3, 4", + "[31] Marco Y Hein, Duo Peng, Verina Todorova, Frank McCarthy, Kibeom Kim, Chad Liu, Laura Savy, Camille Januel, Rodrigo Baltazar-Nunez, Madhurya Sekhar, et al. Global organelle profiling reveals subcellular localization and remodeling at proteome scale. Cell, 188(4):1137-1155, 2025. 7", + "[32] Jack Hessel, Jena D Hwang, Jae Sung Park, Rowan Zellers, Chandra Bhagavatula, Anna Rohrbach, Kate Saenko, and Yejin Choi. The abduction of sherlock holmes: A dataset for visual abductive reasoning. In European Conference on Computer Vision, pages 558-575. Springer, 2022. 3, 4", + "[33] Tom Hope, Doug Downey, Daniel S Weld, Oren Etzioni, and Eric Horvitz. A computational inflection for scientific discovery. Communications of the ACM, 66(8):62-73, 2023. 2, 3, 4, 6", + "[34] Yutao Hu, Tianbin Li, Quanfeng Lu, Wenqi Shao, Junjun He, Yu Qiao, and Ping Luo. Omnimedvqa: A new large-scale comprehensive evaluation benchmark for medical lvm. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22170-22183, 2024. 3, 4, 2, 7", + "[35] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6700-6709, 2019. 3", + "[36] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024. 7", + "[37] Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W Cohen, and Xinghua Lu. Pubmedqa: A dataset for biomedical research question answering. arXiv preprint arXiv:1909.06146, 2019. 3", + "[38] Yeonghun Kang and Jihan Kim. ChatMOF: an artificial intelligence system for predicting and generating metal-organic frameworks using large language models. Nat. Commun., 15 (1):4705, 2024. 2" + ], + "bbox": [ + 91, + 90, + 482, + 900 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[39] Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive NLP. arXiv preprint arXiv:2212.14024, 2022. 6", + "[40] Omar Khattab, Arnav Singhvi, Paridhi Maheshwari, Zhiyuan Zhang, Keshav Santhanam, Sri Vardhamanan, Saiful Haq, Ashutosh Sharma, Thomas T. Joshi, Hanna Moazam, Heather Miller, Matei Zaharia, and Christopher Potts. Dspy: Compiling declarative language model calls into self-improving pipelines. 
arXiv preprint arXiv:2310.03714, 2023. 6",
Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507-2521, 2022. 2, 3, 4", + "[54] Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255, 2023. 3", + "[55] Andres M. Bran, Sam Cox, Oliver Schilter, Carlo Baldassari, Andrew D White, and Philippe Schwaller. Augmenting large language models with chemistry tools. Nature Machine Intelligence, pages 1-11, 2024. 2", + "[56] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pages 3195-3204, 2019. 3", + "[57] M Maruf, Arka Daw, Kazi Sajeed Mehrab, Harish Babu Manogaran, Abhilash Neog, Medha Sawhney, Mridul Khurana, James P Balhoff, Yasin Bakis, Bahadir Altintas, et al. Vlm4bio: A benchmark dataset to evaluate pretrained vision-language models for trait discovery from biological images. arXiv preprint arXiv:2408.16176, 2024. 2, 3", + "[58] Adrian Mirza, Nawaf Alampara, Sreekanth Kunchapu, Benedict Emoekabu, Aswanth Krishnan, Mara Wilhelmi, Macjonathan Okereke, Juliane Eberhardt, Amir Mohammad Elahi, Maximilian Greiner, et al. Are large language models superhuman chemists? arXiv preprint arXiv:2404.01475, 2024.3", + "[59] Michael Moor, Oishi Banerjee, Zahra Shakeri Hossein Abad, Harlan M Krumholz, Jure Leskovec, Eric J Topol, and Pranav Rajpurkar. Foundation models for generalist medical artificial intelligence. Nature, 616(7956):259-265, 2023. 3", + "[60] Siddharth Narayanan, James D Braza, Ryan-Rhys Griffiths, Manu Ponnapati, Albert Bou, Jon Laurent, Ori Kabeli, Geemi Wellawatte, Sam Cox, Samuel G Rodriques, et al. Aviary: training language agents on challenging scientific tasks. arXiv preprint arXiv:2412.21154, 2024. 2", + "[61] Shraman Pramanick, Rama Chellappa, and Subhashini Venugopalan. Spiqa: A dataset for multimodal question answering on scientific papers. arXiv preprint arXiv:2407.09413, 2024. 3" + ], + "bbox": [ + 91, + 90, + 482, + 898 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[62] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. arXiv preprint arXiv:2311.12022, 2023. 3", + "[63] Loic A Royer. Omega—harnessing the power of large language models for bioimage analysis. nature methods, 21(8): 1371-1373, 2024. 2, 3", + "[64] Billings Melissa S, DeRuchie Kristine, Hussie Kieran, Kulesher Allison, Merrell Jacquelyn, Morales Amy, Panaguaa Miguel A, Sherlock Jennifer, Swygert Kimberly, and Tyson Julie. Nbme item-writing guide: Constructing written test questions for the health sciences, 2021. 5, 8", + "[65] Khaled Saab, Tao Tu, Wei-Hung Weng, Ryutaro Tanno, David Stutz, Ellery Wulczyn, Fan Zhang, Tim Strother, Chunjong Park, Elahe Vedadi, et al. Capabilities of gemini models in medicine. arXiv preprint arXiv:2404.18416, 2024.3", + "[66] Michael D Skarlinski, Sam Cox, Jon M Laurent, James D Braza, Michaela Hinks, Michael J Hammerling, Manvitha Ponnapati, Samuel G Rodriques, and Andrew D White. Language agents achieve superhuman synthesis of scientific knowledge. 
arXiv preprint arXiv:2409.13740, 2024. 2, 3", + "[67] Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, and Kai Yu. Scieval: A multi-level large language model evaluation benchmark for scientific research. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 19053-19061, 2024. 3", + "[68] Kyle Swanson, Wesley Wu, Nash L Bulaong, John E Pak, and James Zou. The virtual lab: Ai agents design new sarscov-2 nanobodies with experimental validation. *bioRxiv*, pages 2024–11, 2024. 2", + "[69] Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023. 2, 7", + "[70] Surendrabikram Thapa and Surabhi Adhikari. Chatgpt, bard, and large language models for biomedical research: opportunities and pitfalls. Annals of biomedical engineering, 51 (12):2647-2651, 2023. 2", + "[71] Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, and Candace Ross. Winoground: Probing vision and language models for visio-linguistic compositionality. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5238–5248, 2022. 3", + "[72] Peter J Thul, Lovisa Åkesson, Mikaela Wiking, Diana Mahdessian, Aikaterini Geladaki, Hammou Ait Blal, Tove Alm, Anna Asplund, Lars Björk, Lisa M Breckels, et al. A subcellular map of the human proteome. Science, 356(6340): eaal3321, 2017. 7", + "[73] Minyang Tian, Luyu Gao, Shizhuo Dylan Zhang, Xinan Chen, Cunwei Fan, Xuefei Guo, Roland Haas, Pan Ji, Kittithat Krongchon, Yao Li, et al. Scicode: A research coding benchmark curated by scientists. arXiv preprint arXiv:2407.13168, 2024. 2, 3" + ], + "bbox": [ + 516, + 92, + 903, + 898 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 490, + 924, + 506, + 936 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[74] Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, et al. Cambrian1: A fully open, vision-centric exploration of multimodal llms. arXiv preprint arXiv:2406.16860, 2024. 3, 18", + "[75] Tao Tu, Shekoofeh Azizi, Danny Driess, Mike Schaekermann, Mohamed Amin, Pi-Chuan Chang, Andrew Carroll, Charles Lau, Ryutaro Tanno, Ira Ktena, et al. Towards generalist biomedical ai. NEJM AI, 1(3):Aioa2300138, 2024. 3", + "[76] Matheus P Viana, Jianxu Chen, Theo A Knijnenburg, Ritvik Vasan, Calysta Yan, Joy E Arakaki, Matte Bailey, Ben Berry, Antoine Borensztejn, Eva M Brown, et al. Integrated intracellular organization and its variations in human ips cells. Nature, 613(7943):345-354, 2023. 7", + "[77] David Wadden, Kejian Shi, Jacob Morrison, Aakanksha Naik, Shruti Singh, Nitzan Barzilay, Kyle Lo, Tom Hope, Luca Soldaini, Shannon Zejiang Shen, et al. Scirff: A resource to enhance language model instruction-following over scientific literature. arXiv preprint arXiv:2406.07835, 2024. 2", + "[78] Fei Wang, Xingyu Fu, James Y Huang, Zekun Li, Qin Liu, Xiaogeng Liu, Mingyu Derek Ma, Nan Xu, Wenxuan Zhou, Kai Zhang, et al. Muirbench: A comprehensive benchmark for robust multi-image understanding. arXiv preprint arXiv:2406.09411, 2024. 17", + "[79] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. 
Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. 7", + "[80] Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R Loomba, Shichang Zhang, Yizhou Sun, and Wei Wang. Scibench: Evaluating college-level scientific problem-solving abilities of large language models. arXiv preprint arXiv:2307.10635, 2023. 3", + "[81] Yiqi Wang, Wentao Chen, Xiaotian Han, Xudong Lin, Haiteng Zhao, Yongfei Liu, Bohan Zhai, Jianbo Yuan, Quanzeng You, and Hongxia Yang. Exploring the reasoning abilities of multimodal large language models (mllms): A comprehensive survey on emerging trends in multimodal reasoning. arXiv preprint arXiv:2401.06805, 2024. 2, 3, 4", + "[82] Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, et al. Mmlu-pro: A more robust and challenging multi-task language understanding benchmark. arXiv preprint arXiv:2406.01574, 2024. 7", + "[83] Zhilin Wang, Alexander Bukharin, Olivier Delalleau, Daniel Egert, Gerald Shen, Jiaqi Zeng, Oleksii Kuchaiev, and Yi Dong. Helpsteer2-preference: Complementing ratings with preferences. arXiv preprint arXiv:2410.01257, 2024. 7", + "[84] xAI. Bringing grok to everyone, 2024. Accessed: 2025-03-13. 7", + "[85] Tong Xie, Yuwei Wan, Wei Huang, Yufei Zhou, Yixuan Liu, Qingyuan Linghu, Shaozhou Wang, Chunyu Kit, Clara Grazian, Wenjie Zhang, et al. Large language models as master key: unlocking the secrets of materials science with gpt. arXiv preprint arXiv:2304.02213, 2023. 2" + ], + "bbox": [ + 91, + 92, + 480, + 900 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[86] Kaining Ying, Fanqing Meng, Jin Wang, Zhiqian Li, Han Lin, Yue Yang, Hao Zhang, Wenbo Zhang, Yuqi Lin, Shuo Liu, et al. Mmt-bench: A comprehensive multimodal benchmark for evaluating large vision-language models towards multitask agi. arXiv preprint arXiv:2404.16006, 2024. 2", + "[87] Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9556-9567, 2024. 2, 3, 4, 7", + "[88] Xiang Yue, Tianyu Zheng, Yuansheng Ni, Yubo Wang, Kai Zhang, Shengbang Tong, Yuxuan Sun, Ming Yin, Botao Yu, Ge Zhang, et al. Mmmu-pro: A more robust multidiscipline multimodal understanding benchmark. arXiv preprint arXiv:2409.02813, 2024. 3, 4, 7, 15", + "[89] Nikki B Zaidi, Charles Hwang, Sara Scott, Stefanie Stallard, Joel Purkiss, and Michael Hortsch. Climbing bloom's taxonomy pyramid: Lessons from a graduate histology course. Anat. Sci. Educ., 10(5):456-464, 2017. 5, 8", + "[90] Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6720-6731, 2019. 3", + "[91] Chi Zhang, Feng Gao, Baoxiong Jia, Yixin Zhu, and Song-Chun Zhu. Raven: A dataset for relational and analogical visual reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5317-5327, 2019. 3", + "[92] Kai Zhang, Rong Zhou, Eashan Adhikarla, Zhiling Yan, Yixin Liu, Jun Yu, Zhengliang Liu, Xun Chen, Brian D Davison, Hui Ren, et al. 
A generalist vision-language foundation model for diverse biomedical tasks. Nature Medicine, pages 1-13, 2024. 3", + "[93] Sheng Zhang, Yanbo Xu, Naoto Usuyama, Hanwen Xu, Jaspreet Bagga, Robert Tinn, Sam Preston, Rajesh Rao, Mu Wei, Naveen Valluri, et al. Biomedclip: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs. arXiv preprint arXiv:2303.00915, 2023. 7, 3, 17", + "[94] Yuhui Zhang, Yuchang Su, Yiming Liu, Xiaohan Wang, James Burgess, Elaine Sui, Chenyu Wang, Josiah Akiliu, Alejandro Lozano, Anjiang Wei, et al. Automated generation of challenging multiple-choice questions for vision language model evaluation. arXiv preprint arXiv:2501.03225, 2025. 3", + "[95] Haiteng Zhao, Chang Ma, FangZhi Xu, Lingpeng Kong, and Zhi-Hong Deng. Biomaze: Benchmarking and enhancing large language models for biological pathway reasoning. arXiv preprint arXiv:2502.16660, 2025. 3", + "[96] Zhiling Zheng, Oufan Zhang, Ha L Nguyen, Nakul Rampal, Ali H Alawadhi, Zichao Rong, Teresa Head-Gordon, Christian Borgs, Jennifer T Chayes, and Omar M Yaghi. Chatgpt research group for optimizing the crystallinity of mofs and cofs. ACS Central Science, 9(11):2161-2170, 2023. 2" + ], + "bbox": [ + 516, + 92, + 903, + 878 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 490, + 924, + 508, + 936 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research", + "text_level": 1, + "bbox": [ + 133, + 85, + 867, + 128 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Supplementary Material", + "bbox": [ + 380, + 141, + 614, + 162 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Appendices A-C include acknowledgments, author contributions, ethics, and discussion of limitations and future work. The remaining sections correspond to sections in the paper. Appendix D is benchmark details from the main Sec. 3. Appendix E is MCQ generation details from the main Sec. 4. Appendix F is the experiments details from the main Sec. 5.", + "bbox": [ + 89, + 189, + 906, + 236 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Table of Contents", + "text_level": 1, + "bbox": [ + 91, + 252, + 272, + 270 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A Acknowledgments and author contributions 1", + "B Ethics 2", + "C Limitations and future work 2", + "D Benchmark details 3" + ], + "bbox": [ + 130, + 279, + 867, + 375 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "D.1. Accessing MicroVQA benchmark 3", + "D.2 Dataset Schema and Structure 3", + "D.3 Training contamination mitigation 3", + "D.4 Benchmark attribute details 4", + "D.5 Related benchmarks 4", + "D.6 Bloom's taxonomy 5", + "D.7.Methodology for proposing tasks 6", + "D.8 Guidance for question creators 7", + "D.9 Microscopy image representations 7" + ], + "bbox": [ + 153, + 378, + 864, + 520 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "E.MCQ generation details 7", + "bbox": [ + 132, + 534, + 867, + 547 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "E.1. Question collection process details 7", + "E.2. Experiment on naive MCQ generation 7", + "E.3. Stage 1 MCQ generation (exam alignment) 8", + "E.4. 
Stage 2 MCQ generation (RefineBot) details 8", + "E.5.MCQ generation results: stage 1 exam alignment 13", + "E.6.MCQ generation results: stage 2 RefineBot 13" + ], + "bbox": [ + 153, + 550, + 864, + 643 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "F. Experiments details 15", + "bbox": [ + 132, + 657, + 867, + 671 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "F.1. Evaluation prompts 15", + "F.2. Model details 16", + "F.3. Human baseline on MicroVQA 17", + "F.4. More experiments on attribute-level information 17", + "F.5. Language shortcut ablations 17", + "F.6. Error analysis 20", + "F.7. Reasoning trace analysis 31" + ], + "bbox": [ + 153, + 674, + 864, + 784 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "A. Acknowledgments and author contributions", + "text_level": 1, + "bbox": [ + 89, + 814, + 488, + 832 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Acknowledgements We thank Christian Kindermann for feedback on motivation for benchmarking and reasoning taxonomies; Siddharth Doshi for feedback on motivation for applications in scientific chat systems; and Krisjanis Mazurs for feedback on structuring user interviews. We also thank Assaf Zaritsky, Andrew S Moore, Pedro Guedes Dias, Angus Toland, Helen Healy, and Andrew Kassianos, and Loic Royer for discussions on defining the VQA tasks.", + "bbox": [ + 89, + 839, + 906, + 902 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "1", + "bbox": [ + 493, + 924, + 503, + 935 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Funding acknowledgements We gratefully acknowledge funding from HAI Hoffman-Yee (E.L., S.Y-L., J.M.H.B.), Schmidt Futures (E.L.), the Bridge2AI Program (NIH Common Fund; OT2 OD032742; E.L.), Goran Gustafsson Foundation and Knut and Alice Wallenberg Foundation (KAW 2021.0346; E.L.) as well as to MU (HPA). S.C. received funding from NIH/NIGMS R35GM133460 and CZI A23-0264-001. J.N.H. was supported by a Postdoctoral Fellowship from EMBO Postdoctoral Fellowship (ALTF 556-2022).", + "bbox": [ + 89, + 90, + 906, + 165 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Author contributions", + "text_level": 1, + "bbox": [ + 109, + 167, + 263, + 179 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Project conception: JB, JN, LBS, AL, SYL", + "- Defining the VQA tasks: JB, JN, JGG, WC, SC, JNH, CL, SYL", + "- Managing benchmark collection processes: JN, JB", + "- Creating benchmark questions: DB, ZC, SMH, AJ, WDL, MGN, RY, CZ, JNH, CL", + "- MCQ conversion pipeline: JB, JN, LBS", + "- Evaluations: JB, SRG, YS", + "- Qualitative error analysis: JN, JGG, CZ", + "- Quantitative error analysis: LBS", + "- Paper writing and figures: JB, JN, LBS", + "Supervision: SYL, EL, CL, MDL, CNH, SC, WC" + ], + "bbox": [ + 89, + 181, + 651, + 332 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "B. Ethics", + "text_level": 1, + "bbox": [ + 89, + 345, + 171, + 359 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Ethical use of biomedical data: MicroVQA was developed with a commitment to ethical practices in handling biomedical research data. All microscopy images included in the dataset were either original, unpublished data or sourced from open-access articles (e.g., CC-BY licensed) published after January 2024. The dataset does not include patient-identifiable information, and MicroVQA adheres to applicable privacy and research ethics guidelines. 
The dataset does not constitute human subjects research.", + "bbox": [ + 88, + 371, + 906, + 446 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Potential societal impacts: Biomedical AI models, including those trained on MicroVQA, have the potential to influence research and healthcare significantly, both positively and negatively. While these models can accelerate scientific discovery, they may also perpetuate biases in training data or result in uneven performance across demographic or biological groups. To mitigate these risks, MicroVQA includes metadata annotations to support the analysis of potential biases and emphasizes diverse data sourcing. We will engage with the research community to address emerging ethical concerns.", + "bbox": [ + 88, + 446, + 905, + 522 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Data licensing and usage: MicroVQA is distributed under the Creative Commons Attribution-ShareAlike 4.0 International (https://creativecommons.org/licenses/by-sa/4.0/) license to promote transparency and collaboration in the research community. This license reflects our efforts to balance open access with the ethical use of data, enabling both academic and commercial applications of the dataset while adhering to original license requirements. Full licensing details are available at https://creativecommons.org/licenses/by-sa/4.0/.", + "bbox": [ + 88, + 522, + 905, + 598 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "C. Limitations and future work", + "text_level": 1, + "bbox": [ + 89, + 611, + 357, + 627 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "While MicroVQA advances research-level reasoning benchmark significantly, we can identify a number of limitations and avenues for future work.", + "bbox": [ + 88, + 637, + 905, + 667 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Open vs closed evaluation Like most VQA benchmarks, we used multiple-choice (MCQ) evaluation. However downstream applications will obviously operate in an open setting, without options and requiring more detail. MLLM research does have some open benchmarks [86], though impartial evaluation is a significant challenge. Our early experiments with open prompting suggest that models like GPT-4o tend to give very vague answers, and tend to depend strongly on the text part of the input prompt.", + "bbox": [ + 88, + 685, + 905, + 762 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Dataset scale Our final dataset had 1042 samples, which is smaller than some other VQA benchmarks like MicroBench [50] and OmniMedVQA [34]. MicroVQA's scale sufficient to compare overall scores between models, however statistical power starts to become limiting when trying to compare different models on subsets of the dataset. How does this compare to other benchmarks? As we can see from Tab. 2 and Fig. 3, the larger datasets tend to be those with lower-level reasoning according to Bloom's taxonomy. Their samples tend to be derived automatically from metadata tags or classification labels. Benchmarks with comparably high-level reasoning, like MMMU-Pro, have a similar scale to ours (1,700). In the language-only domain, the research-level reasoning benchmark GPQA has 448 samples. 
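As a rough, back-of-envelope illustration of this statistical-power point (the 50% accuracy value below is only a convenient reference, not a reported result), the binomial standard error of an accuracy estimate roughly triples when moving from the full benchmark to a 100-question subset:

import math

def accuracy_standard_error(p, n):
    """Binomial standard error of an accuracy estimate p measured on n questions."""
    return math.sqrt(p * (1.0 - p) / n)

for n in (1042, 100):
    se = accuracy_standard_error(0.5, n)
    print(f"n={n}: standard error ~ {se:.3f} ({100 * se:.1f} percentage points)")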
This trend reflects a clear, practical limitation: it is very time-consuming for humans to create difficult reasoning questions, and the pool of capable experts is small.", + "bbox": [ + 88, + 779, + 906, + 902 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 493, + 924, + 504, + 935 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Breadth of dataset attributes in microscopy We aimed for MicroVQA to cover a broad range of the microscopy discipline; for example, we have samples from all four relevant scales - tissues, cells, subcellular, atomic. However, each human annotator is an expert in a subset of microscopy and biology, so there are practical limits on dataset coverage. We, therefore, chose to focus on the most common imaging modalities - brightfield, fluorescence, electron - while skipping less common modalities like Raman spectroscopy. In terms of sample types, we emphasized those relevant to humans - human and mouse - while having only a few samples from rare organisms like bottlenose dolphin.", + "bbox": [ + 88, + 90, + 906, + 183 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Coverage of scientific reasoning Looking even more broadly, the motivation of this work is to advance reasoning in scientific research generally. However, we use microscopy in particular. This is because of a clear trade-off in breadth vs depth. Our dataset strikes a good balance, given that microscopy represents a large portion of visual data in biomedicine and biology. Future work can reuse our framework to generate reasoning VQA benchmarks in other fields of visual biomedicine and biology, such as in medicine [34] or ecology [93].", + "bbox": [ + 88, + 199, + 906, + 276 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Resolving MCQ shortcuts A key contribution in this paper is the RefineBot for generating multiple choice questions (MCQs) that are too challenging to solve using text-only shortcuts. Our approach used the outputs of existing MLLMs as signal to make distractors harder. However, this cannot guarantee the removal of shortcuts; more generally, harder distractors may exist. Therefore, the next generation of LLMs may be able to identify and exploit new shortcuts. If that occurs, we can run RefineBot again using the new LLM, creating MicroVQA-2. This strategy could be used for other VQA evaluations.", + "bbox": [ + 88, + 291, + 906, + 369 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "D. Benchmark details", + "text_level": 1, + "bbox": [ + 89, + 378, + 277, + 393 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "D.1. Accessing MicroVQA benchmark", + "text_level": 1, + "bbox": [ + 89, + 402, + 388, + 419 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "MicroVQA is an expert-curated benchmark for multimodal scientific reasoning, specifically in microscopy-based biological research. The dataset comprises 1,042 multiple-choice questions (MCQs) created by biological researchers to represent authentic scientific practice. Each MCQ is paired with an associated microscopy image and metadata describing the experimental context. A demonstration subset of 28 instances is available for initial exploration.", + "bbox": [ + 88, + 426, + 905, + 487 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "The MicroVQA dataset is publicly accessible on HuggingFace at https://huggingface.co/datasets/jmhb/microvqa. 
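As a minimal loading sketch (the split name "train" is an assumption; field names follow the schema in Figure 6), the benchmark can be pulled with the Hugging Face datasets library:

from datasets import load_dataset

# Dataset id from the URL above; the split name is assumed and may differ.
microvqa = load_dataset("jmhb/microvqa", split="train")
# or stream instead of downloading: load_dataset("jmhb/microvqa", split="train", streaming=True)
print(len(microvqa))            # number of MCQ instances
example = microvqa[0]
print(example["questions"])     # question field from the schema in Figure 6
example["image"].show()         # decoded as a PIL image by datasets.Image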
It is distributed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license, allowing reuse and share-alike redistribution with attribution. The dataset is available in Apache Arrow and Parquet formats for efficient data handling. These formats support fast serialization, memory mapping, and streaming from the web, facilitating seamless integration into machine learning pipelines.", + "bbox": [ + 88, + 487, + 905, + 561 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "D.2. Dataset Schema and Structure", + "text_level": 1, + "bbox": [ + 89, + 569, + 364, + 584 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "The dataset schema defines the fields included for each data instance. Figure 6 provides a schema summary, while a sample data instance is detailed below 7.", + "bbox": [ + 88, + 592, + 905, + 622 + ], + "page_idx": 14 + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [], + "code_body": "{ \"image_id\": datasets.Value(\"string\"), \"image\": datasets.Image(decode=True, id=None), \"label\": datasets.ClassLabel(), \"label_name\": datasets.Value(\"string\"), \"domain\": datasets.Value(\"string\"), \"subdomain\": datasets.Value(\"string\"), \"modality\": datasets.Value(\"string\"), \"submodality\": datasets.Value(\"string\"), \"stain\": datasets.Value(\"string\"), \"microns_perpixel\": datasets.Value(\"string\"), \"questions\": datasets.Value(\"string\"), }", + "guess_lang": "python", + "bbox": [ + 232, + 631, + 767, + 773 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Figure 6. Example data schema.", + "bbox": [ + 398, + 794, + 596, + 809 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "D.3. Training contamination mitigation", + "text_level": 1, + "bbox": [ + 89, + 830, + 397, + 848 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "To mitigate contamination, all data in MicroVQA were sourced from original, unpublished microscopy images or open-access articles (e.g., CC-BY-licensed) published after January 2024. Additionally, canary strings were embedded in the dataset to detect potential contamination during fine-tuning or model evaluation.", + "bbox": [ + 88, + 854, + 905, + 901 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 493, + 924, + 503, + 935 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/db454ed8bb095d61fcba6351b941e5ce35de9a752d37024f4308c9dae146a736.jpg", + "image_caption": [ + "Figure 7. Example data instance." + ], + "image_footnote": [], + "bbox": [ + 181, + 87, + 812, + 402 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "D.4. Benchmark attribute details", + "text_level": 1, + "bbox": [ + 89, + 464, + 346, + 479 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "The MicroVQA benchmark is designed to evaluate multimodal reasoning capabilities in biomedical research contexts. The dataset incorporates diverse attributes such as organism, research subject, imaging modalities, and biological length scales (image scales) to ensure a comprehensive assessment1. These metadata not only enhance the scientific relevance but also enable analysis of reasoning across different biological and microscopy contexts. 
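As a small, illustrative sketch (field names taken from the schema in Figure 6; not part of the released code), these attribute annotations can be tallied directly to support per-attribute analysis:

from collections import Counter
from datasets import load_dataset

microvqa = load_dataset("jmhb/microvqa", split="train")  # split name assumed
by_modality = Counter(ex["modality"] for ex in microvqa)
by_domain = Counter(ex["domain"] for ex in microvqa)
print(by_modality.most_common())
print(by_domain.most_common())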
This section describes the methodology used to define, compute, and annotate these key attributes.", + "bbox": [ + 88, + 487, + 903, + 563 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "The benchmark comprises 1,042 total questions, including 423 questions containing multiple images combined into panels. A collage of sample images is shown in 25. The average MCQ token lengths were computed using the o200k_base tokenizer, with separate calculations for the raw input and formatted multiple-choice questions. Imaging modalities are broadly categorized into light, fluorescence, and electron microscopy. Biological length scales span four levels: tissue (millimeter scale), cellular (micrometer scale), subcellular (nanometer scale), and atomic (angstrom scale), enabling reasoning across diverse structural hierarchies.", + "bbox": [ + 88, + 564, + 903, + 652 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "To annotate attributes such as organism, specimen, and research subject, we manually reviewed random samples from the dataset and created a taxonomy to cover the most likely categories. A biomedical domain expert manually labeled these categories for 50 raw input question-answer pairs and performed initial prompt engineering. We used DSPy to optimize the prompt and few-shot examples on the labeled dataset. The input context included the taxonomy of organisms8 and research subjects 9.", + "bbox": [ + 88, + 655, + 903, + 728 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "These annotations were the gold standard to guide DSPy prompt engineering for automated attribute labeling on the full dataset. Prompt instructions included fallback rules (e.g., tagging \"None\" for indeterminate attributes) to ensure accuracy and minimize hallucination. The optimized pipeline annotated all questions while ensuring the validity of the dataset attributes.", + "bbox": [ + 88, + 729, + 903, + 776 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "D.5. Related benchmarks", + "text_level": 1, + "bbox": [ + 89, + 786, + 289, + 800 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "In section 3.3 and Tab. 2 we compare MicroVQA to related scientific multimodal benchmarks, focusing on its intersection of high-difficulty and advanced reasoning questions. Beyond these key attributes, we examine the sources of the questions. Most benchmarks rely on existing question repositories, such as exams, web-based QA datasets, textbooks, or other MCQ datasets. This approach bypasses the need for MCQ generation methods, as the questions and options are pre-tested and curated, making these benchmarks more representative of established tasks. Another common strategy involves reformating classification datasets into MCQs using question templates, which increases task diversity but often limits scenario unique-", + "bbox": [ + 88, + 809, + 906, + 902 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 493, + 924, + 503, + 935 + ], + "page_idx": 15 + }, + { + "type": "image", + "img_path": "images/62b824bbb820fb6c64cfa94694ffb4e53ede1cd5e46aaac7ebc2b76b55093535.jpg", + "image_caption": [ + "Figure 8. Examples of taxonomy classes used as context to LLM to assign an organism to a question. A YAML file with the full taxonomy will be released with the code." + ], + "image_footnote": [], + "bbox": [ + 181, + 88, + 812, + 483 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "ness and language variability. 
In contrast, MicroVQA is built by directly querying experts for challenging, domain-specific research questions. By starting with long form questions and answers, our approach requires an innovative MCQ generation method, ensuring uniquely tailored questions that push the boundaries of domain-specific reasoning and knowledge assessment.", + "bbox": [ + 88, + 560, + 906, + 619 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "D.6. Bloom's taxonomy", + "text_level": 1, + "bbox": [ + 89, + 633, + 274, + 648 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Bloom's Taxonomy is a model for evaluating cognitive skill levels in educational assessments. It has six hierarchical levels of learning: recall, understand/comprehend, apply, analyze, evaluate, and create. Multiple-choice questions can test five of Bloom's levels but not the sixth level, \"create\". These levels provide a structured framework for assessing lower-order and higher-order thinking skills, which are essential for designing robust benchmarks like MicroVQA.", + "bbox": [ + 88, + 657, + 906, + 717 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Initial attempts to use LLMs to classify Bloom's levels for biomedical visual question answering (VQA) benchmarks revealed a systematic overestimation of cognitive skill levels. This discrepancy limits the naive use LLMs for automated Bloom's classification, perhaps due to the nuances of applying Bloom's taxonomy in domain-specific contexts. To ensure accurate classification of Bloom's level, a physician-scientist with NBME training in item writing and familiarity with Bloom's classification manually labeled Bloom's level for 200 examples from public MCQ questions, MicroVQA, MicroBench, and ScienceQA. These examples were used to fine-tune GPT-4o-mini (gpt-4o-mini-2024-07-18) to classify Bloom's level and align with expert judgments.", + "bbox": [ + 88, + 718, + 905, + 824 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "The fine-tuned Bloom's classifier was applied to MicroVQA and numerous public science/biomedical benchmarks to assess the cognitive skills evaluated by each question. Each question was mapped to one of Bloom's levels, providing insights into the distribution of cognitive tasks across the benchmark. This analysis revealed that while existing benchmarks predominantly tested lower-order skills (e.g., recall and comprehension), MicroVQA has a greater emphasis on questions that require higher-order reasoning (e.g., analyzing, evaluating), filling a critical gap in multimodal scientific reasoning", + "bbox": [ + 88, + 825, + 905, + 902 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 493, + 924, + 503, + 935 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/c8a86566ed0b521b3e2535d57c8324cfca644cb74345c31456c9beb0e3e4a4d9.jpg", + "image_caption": [ + "Figure 9. Examples of taxonomy classes used as context to LLM to assign a research subject to a question. A YAML file with the full taxonomy will be released with the code." + ], + "image_footnote": [], + "bbox": [ + 205, + 88, + 790, + 494 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "assessments.", + "bbox": [ + 89, + 575, + 178, + 589 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "D.7. Methodology for proposing tasks", + "text_level": 1, + "bbox": [ + 89, + 608, + 382, + 625 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Here, we expand on the task proposal methodology introduced in Sec. 3.2. 
To define specific reasoning tasks aimed at advancing biomedical research, we conducted a comprehensive expert consultation process. Through structured interviews with nine Principal Investigators and postdocs specializing in microscopy research, including participants from both academia and industry, we developed a framework for tasks that would meaningfully integrate multimodal large language models (MLLMs) into microscopy research workflows.", + "bbox": [ + 89, + 635, + 906, + 710 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "The task selection process followed specific criteria: tasks must utilize image-text inputs and text outputs suitable for MLLMs; they should require higher-order reasoning beyond simple image processing; and they should emphasize core experimental activities (experimentation, analysis, and action) rather than auxiliary tasks like literature review or writing [33].", + "bbox": [ + 89, + 713, + 905, + 773 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Our methodology involved a two-phase interview process. The first phase consisted of initial one-hour individual sessions with eight research groups featuring open-ended, unstructured discussions about key challenges in microscopy experiments, potential applications of MLLMs, current workflows and unmet needs, and possible AI system integration points. One week later, we then conducted follow-up interviews to consolidate findings and build consensus.", + "bbox": [ + 89, + 776, + 905, + 837 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "To avoid anchoring biases, we intentionally withheld predefined categories during initial interviews. This approach enabled unbiased task identification and allowed us to capture a broad spectrum of potential use cases. Through this iterative process, we identified three essential capabilities that align with the scientific life cycle for microscopy experiments: advanced image interpretation, hypothesis generation, and experimental design/hypothesis testing.", + "bbox": [ + 89, + 839, + 905, + 902 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 493, + 925, + 504, + 936 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "D.8. Guidance for question creators", + "text_level": 1, + "bbox": [ + 89, + 90, + 369, + 107 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Question creators were given detailed guidance as pdfs, that we share at https://github.com/jmhb0/microvqa under folder benchmark/human.collection. During the submission process, we completed two rounds of quality control to ensure difficulty and alignment to the prompts, as described in Sec. 4.1.", + "bbox": [ + 88, + 113, + 906, + 159 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "D.9. Microscopy image representations", + "text_level": 1, + "bbox": [ + 89, + 167, + 393, + 184 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "MLLMs are trained primarily on three-channel RGB images, however microscopy images are more diverse. To ensure alignment with how common MLLMs process images, we required that each image artifact for a single question was represented as no more than six RGB images. Where multiple images were needed, they could be multiple image files, or a single image that is concatenated. 
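The sketch below (an assumed helper, not part of the benchmark tooling) shows one way to turn a multichannel microscopy stack into either separate grayscale RGB panels or a single concatenated image, within the six-image limit described above:

import numpy as np
from PIL import Image

def to_rgb_panels(stack, concatenate=False):
    """stack: array of shape (C, H, W) with C <= 6 channels; returns PIL RGB images."""
    panels = []
    for channel in stack:
        lo, hi = float(channel.min()), float(channel.max())
        if hi > lo:
            scaled = ((channel - lo) / (hi - lo) * 255.0).astype(np.uint8)
        else:
            scaled = np.zeros(channel.shape, dtype=np.uint8)
        panels.append(Image.fromarray(scaled).convert("RGB"))
    if not concatenate:
        return panels
    # Concatenate the panels side by side into one image file.
    canvas = Image.new("RGB", (sum(p.width for p in panels), panels[0].height))
    x = 0
    for p in panels:
        canvas.paste(p, (x, 0))
        x += p.width
    return [canvas]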
For videos, users could submit up to six frames; for 3d, they could submit slices or z-projections.", + "bbox": [ + 88, + 191, + 906, + 251 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "An important difference is multichannel images. Many fluorescence microscopy images are multichannel, where the image collection ensures that each channel represents a certain structure. For example channel 0 may be nucleus, and channel 1 may be mitochondria. There are a few approaches. Some questions represent this situation as separate grayscale images for each channel. Others show a 'composite image', which is a single image where one each channel is mapped to one of the RGB channels; e.g. a common visualization will put nucleus in blue and mitochondria in green. Some questions do both: grayscale images for each channel, along with a composite image merging them.", + "bbox": [ + 88, + 252, + 906, + 340 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Finally, we allowed screenshots from research papers or image software, since this is a realistic way that future LLM-users would use AI tools.", + "bbox": [ + 89, + 342, + 906, + 371 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "E. MCQ generation details", + "text_level": 1, + "bbox": [ + 89, + 383, + 320, + 402 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "E.1. Question collection process details", + "text_level": 1, + "bbox": [ + 89, + 409, + 392, + 425 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Data collection used a structured submission form shared with collaborators. The form required users to provide the following:", + "bbox": [ + 88, + 433, + 905, + 463 + ], + "page_idx": 18 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Image or Image Set: Users uploaded microscopy images, optionally including multiple images for comparative questions.", + "- Experiment Details: A brief description of the experimental setup, imaging modalities, and any relevant biological context.", + "- Question Text: Each question was aligned with one of three predefined use cases: identifying interesting features, explaining mechanisms, or proposing next steps.", + "- Answer and Distractors: Contributors provided a correct answer and biologically plausible incorrect answers." + ], + "bbox": [ + 89, + 463, + 903, + 537 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Quality control was performed in multiple rounds. To familiarize users with the task and data requirements, each user submitted an initial form with 4-6 questions for manual quality control and feedback. Feedback was provided to users on improving question specificity, ensuring the questions required image interpretation rather than relying solely on general knowledge. Contributors were encouraged to refine their submissions based on the feedback to better align with the tasks and goals of the MicroVQA benchmark.", + "bbox": [ + 88, + 539, + 906, + 613 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Users were encouraged to submit data from multiple sources, imaging modalities (e.g., fluorescence, electron microscopy), and biological scales (e.g., tissue, cellular, subcellular) to promote diversity. Contributors could upload microscopy images from their experiments or use CC-BY publicly available images. Common image sources included preprints or publications released after January 2024 (to mitigate contamination), or from established image repositories [14, 17, 18, 31, 72, 76]. 
This ensured that all images were either original or sourced from open-license publications, enabling their use in an open-source benchmark.", + "bbox": [ + 88, + 614, + 905, + 704 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "E.2. Experiment on naive MCQ generation", + "text_level": 1, + "bbox": [ + 89, + 713, + 423, + 729 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "In Sec. 4.2, we discuss how the naive approach to MCQ generation – simply zero-shot prompting GPT – leads to MCQs that are very easy for frontier models to solve. Which ultimately motivated our two-stage MCQ generation approach. Here, we discuss those experiments in more detail. We create a naive version of MicroVQA using a standard prompt [34, 82]:", + "bbox": [ + 88, + 736, + 906, + 782 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "You are an expert in molecular and cell biology, and in microscopy.", + "bbox": [ + 106, + 801, + 570, + 815 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "I will give you an original biology-related question and its answer, your task is to rephrase an equivalent question with identical answer. The question related to an image, and we don't show the image.", + "bbox": [ + 106, + 821, + 844, + 843 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Meanwhile, I want to transfer this QA-pair into a multi-choice question. Please generate 5 incorrect options to construct the candidate options.", + "bbox": [ + 106, + 843, + 870, + 864 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "{QUESTION}", + "bbox": [ + 109, + 872, + 194, + 885 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 493, + 924, + 504, + 935 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "{ANSWER}", + "bbox": [ + 109, + 103, + 179, + 116 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Then, we evaluated this naive MicroVQA on the top closed source models with the image, and to study the over-reliance on language shortcuts, we evaluated a text-only version. In the text-only setting, we add the following to the prompt: \"If an image is mentioned, ignore this information and try your best to answer the question\". Tab. 5 shows the results of this experiment.", + "bbox": [ + 89, + 135, + 906, + 195 + ], + "page_idx": 19 + }, + { + "type": "table", + "img_path": "images/980068988f10ec7a5b60d0e0617e1b40223c65b30e0affe08933c1187678d70c.jpg", + "table_caption": [ + "Table 5. Performance on MicroVQA MCQs with naive MCQ generation. We report overall VQA and VQA without the image." + ], + "table_footnote": [], + "table_body": "
Model | VQA | VQA-no-image
GPT-4o | 85.1 | 82.7
Claude-3.5-Sonnet | 91.4 | 88.4
Gemini-1.5-Pro | 88.5 | 82.4
", + "bbox": [ + 361, + 233, + 635, + 292 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "All models score over $80\\%$ on questions that are from a specialized domain. We hypothesize that this reflects a problem with the non-trivial task of MCQ generation from raw VQAs for several reasons. First, the high 'VQA-no-image' scores across all models suggest that selecting the correct option does not require image understanding, despite the raw VQA samples being designed to incorporate image-based reasoning. An alternative explanation is that the questions themselves may be too easy for advanced models. Second, we test this alternate explanation by performing open evaluation on a subset of the raw VQA queries-- that is, we queried GPT-4o with the questions but without the options. Here, the models rarely gave good responses, even after careful prompting. This suggests that models are leveraging test-taking strategies to eliminate distractors [27]. Third, we examined the chain-of-thought responses in the VQA-no-image case, which suggested the models often eliminate distractors based on information from the text question. Therefore, we conclude that there are issues with the improper generation of the MCQs using the naive strategy.", + "bbox": [ + 88, + 306, + 906, + 455 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "What exactly are the problems with MCQ generation? Qualitatively, the types of errors are the same as those we discuss in Appendix E.6, so we refer readers to that section to get a deeper understanding.", + "bbox": [ + 88, + 455, + 905, + 488 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "E.3. Stage 1 MCQ generation (exam alignment)", + "text_level": 1, + "bbox": [ + 89, + 494, + 457, + 512 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "The first stage transformed raw, open-ended questions and long-form answers into well-structured MCQs aligned with established principles of biomedical exam design. A physician-scientist trained in medical board exam item writing initiated the process by reviewing biomedical educational literature, NBME guidelines [64], and Bloom's taxonomy for assessing cognitive skills [5, 19, 89]. The expert manually converted 50 raw question-answer pairs into one-best-answer MCQs with biologically relevant distractors using the literature review and their prior training. This manual effort ensured that the MCQs preserved the original question's intent and scientific validity while adhering to educational standards. Distractors were crafted to require reasoning and prevent reliance on superficial cues like word association or general knowledge, thus maintaining the depth and complexity of the original content.", + "bbox": [ + 88, + 518, + 905, + 638 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "To scale this process, the expert designed an initial LLM prompt for automated Stage 1 MCQ generation. This prompt and the manually curated training data were input for DSPy-based prompt optimization. Using the MIPROv2 optimizer, the prompt was refined within a Chain of Thought (CoT) framework combined with retrieval-augmented generation (RAG) pipelines powered by o1-mini. Optimization was guided by a custom metric evaluated by a separate LLM judge (o1-mini). The judge assessed each revised MCQ across three criteria: (1) semantic similarity to the original question and answer, (2) adherence to NBME formatting standards, and (3) minimization of extraneous information that could provide unintended clues. 
The optimization metric was weighted to prioritize preserving the original meaning and scientific validity, with lesser emphasis on formatting and extraneous text. Additional implementation details are provided in the source code.", + "bbox": [ + 88, + 640, + 905, + 775 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "E.4. Stage 2 MCQ generation (RefineBot) details", + "text_level": 1, + "bbox": [ + 89, + 782, + 467, + 800 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "The goal of RefineBot is to take MCQs that are too easy for MLLMs - especially where they are leveraging language-only shortcuts - and to rewrite them to make them more challenging. It is the third panel in Fig. 4. For more details on what types of improvements are achieved, see Appendix E.6. The complete code for RefineBot is released.", + "bbox": [ + 88, + 806, + 905, + 853 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "A single RefineBot iteration Below, we show the pseudo-code for a single-question revision. The input is a question string, a list of options, and the index of the correct answer in the options. It returns a tuple (code, MCQ) where code is a", + "bbox": [ + 88, + 869, + 905, + 902 + ], + "page_idx": 19 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 493, + 925, + 503, + 935 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "string on whether the run was successful, and MCQ is either the revised multiple choice question if the rewrite was successful or otherwise None.", + "bbox": [ + 89, + 90, + 906, + 119 + ], + "page_idx": 20 + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [], + "code_body": "def revise_mcq(question: str, choices: list, correctidx: int, max_iters: int = 5):\n # Store original version\n original = {'question': question, 'choices': choices, 'correctidx': correctidx}\n reflections = []\nfor iter in range(max_iters):\n # Try answering without image\n result = evaluate_without_image(question, choices, correctidx)\nif not result.can_answer:\n code = 'SUCCEEDNESS_CHANGES' if iter == 0 else 'SUCCEED_REWRITE'\nreturn code, (question, choices, correctidx)\nif iter == max_iters - 1:\n return 'FAIL ITERATIONS', None\n# Reflect on strategies used to solve\nreflection = analyze WHY_answerable(result)\nreflections += reflection\n# Rewrite based on all reflections from all iterations\nnew_version = rewrite_question(reflections, original)\n# Verify the rewrite didn't change the meaning meaning preserved\nif not check_meaning_preserved(original, new_version):\n return 'FAIL_REWRITE', None\n# Update current version\nquestion = new_version['question']\nchoices = new_version['choices']\ncorrectidx = new_version['correctidx']\nreturn 'FAIL_UNKNOWN', None", + "guess_lang": "python", + "bbox": [ + 94, + 125, + 910, + 531 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "This process is also described in the main text. One detail to note: the rewrite_question function receives the reflections from all prior iterations, not just the current one. In this way, the rewriting function can consider all possible strategies that have been identified. All LLM calls are to gpt-4o-2024-08-06.", + "bbox": [ + 89, + 537, + 906, + 582 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "To better illustrate the method, we step through a single example. This way, we can show the prompts and understand responses.", + "bbox": [ + 89, + 583, + 906, + 612 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "1. 
The evaluation of a question with the evaluate_without_image prompt and response:", + "bbox": [ + 91, + 613, + 720, + 628 + ], + "page_idx": 20 + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [], + "code_body": "Prompt \nThe following question is supposed to be paired with an image. We will not provide the image, so answer to the best of your ability. \nIn an electron microscopy image of a mouse primary embryonic cortical neuron expressing full-length human mHTT (BACHD), \"eye-ball\" shaped features are observed in the top left corner that appear anomalous. What is the most likely cause of these features? Think step by step and then output the answer in the format of \"The answer is (X)\" at the end.. \n(a) Aggregation of human mHTT protein within the neuron. \n(b) Ice contamination from liquid nitrogen during sample preparation. \n(c) Formation of intracellular vesicles during neuron development. \n(d) Artifacts arising from excessive heavy metal staining. \nResponse (target answer is b) \nTo determine the cause of the \"eye-ball\" shaped features observed in the electron microscopy image, we can analyze each option step by step: \n(a) Aggregation of human mHTT protein within the neuron: Aggregations of proteins can indeed appear as anomalous structures under electron microscopy. However, these typically have irregular shapes and may not consistently resemble \"eye-ball\" shapes. \n(b) Ice contamination from liquid nitrogen during sample preparation: Ice contamination can occur during sample preparation, particularly if samples are not handled properly when flash-frozen. This can lead to", + "guess_lang": "txt", + "bbox": [ + 109, + 632, + 910, + 901 + ], + "page_idx": 20 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 493, + 924, + 504, + 936 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "anomalous features appearing in microscopy images. These features might appear as circular or irregular shapes, potentially looking like \"eye-balls.\"", + "bbox": [ + 163, + 94, + 875, + 116 + ], + "page_idx": 21 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(c) Formation of intracellular vesicles during neuron development: Intracellular vesicles are normal structures within neurons and other cells. Although they can appear during development, they are typically well characterized and documented, and are less likely to be described as anomalous in the context given.", + "(d) Artifacts arising from excessive heavy metal staining: Over staining can indeed produce artifacts in electron microscopy images. While staining artifacts are plausible, they might not specifically result in \"eye-ball\" shapes unless the staining process consistently targets specific round structures." 
+ ], + "bbox": [ + 135, + 125, + 903, + 196 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Considering the nature of the descriptors (\"eye-ball\" shaped, anomalous), option (b) ice contamination is often a source of unexpected features in microscopy images that could fit the appearance described, as ice can form circular shapes.", + "bbox": [ + 133, + 204, + 898, + 236 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Therefore, the most likely cause of these features is (b) Ice contamination from liquid nitrogen during sample preparation.", + "bbox": [ + 133, + 244, + 890, + 267 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "The answer is (b).", + "bbox": [ + 133, + 275, + 259, + 285 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "2. If the answer is incorrect, then the question is already hard enough, and we can exit. Otherwise, we call the analyze WHY Answerable function, which reflects on how this answer was achieved. It continues in the same conversation:", + "bbox": [ + 89, + 297, + 906, + 340 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "User", + "bbox": [ + 133, + 359, + 165, + 369 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "That is correct.", + "bbox": [ + 135, + 381, + 246, + 391 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Explain how you were able to answer the question without access to the image - only the question_stem and choices.", + "bbox": [ + 135, + 400, + 857, + 421 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "What strategies did you use?", + "bbox": [ + 135, + 421, + 330, + 431 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Then, briefly summarize these \"language-only strategies\" into a short list.", + "bbox": [ + 135, + 441, + 648, + 452 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Finally, say whether your reasoning was closer to:", + "bbox": [ + 135, + 460, + 478, + 472 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "- 'deduction': you answered with high confidence because you had enough information.", + "bbox": [ + 135, + 472, + 710, + 482 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "- 'abduction': you answered with lower confidence you made an informed guess.", + "bbox": [ + 135, + 482, + 661, + 492 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Be concise in your final response.", + "bbox": [ + 135, + 501, + 369, + 512 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Assistant", + "bbox": [ + 135, + 531, + 199, + 541 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "To answer the question without the image, I relied entirely on analyzing the given information and the plausible characteristics of each option. Here's a breakdown of my strategy:", + "bbox": [ + 135, + 551, + 836, + 573 + ], + "page_idx": 21 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. **Keyword and Context Analysis**: I focused on key descriptors such as \"eye-ball\" shaped features and their being \"anomalous\" in an electron microscopy image of neurons expressing human mHTT. This guided me to consider characteristics commonly seen under electron microscopy.", + "2. **Knowledge of the Subject Matter**: I applied general knowledge about electron microscopy artifacts and protein behavior. This helped me weigh the likelihood of each option based on what typically causes specific image features.", + "3. 
**Process of Elimination**: By understanding the nature of each option-including potential artifacts or structures (such as vesicles or ice) in microscopy-I systematically assessed the validity of each scenario in relation to the described features.", + "4. **Logical Inference**: I inferred which scenario most commonly results in anomalies resembling \"eye-ball\" features, favoring the simplest explanation." + ], + "bbox": [ + 135, + 582, + 895, + 724 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Summary of \"language only strategies\":", + "bbox": [ + 135, + 733, + 395, + 744 + ], + "page_idx": 21 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Analyze keywords and context", + "- Apply relevant subject matter knowledge", + "- Use process of elimination", + "- Make logical inferences" + ], + "bbox": [ + 135, + 744, + 418, + 784 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "The reasoning was closer to 'abduction' as I made an informed guess based on patterns and typical causes, given that I lacked direct visual confirmation.", + "bbox": [ + 135, + 792, + 898, + 814 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "3. Next, we call rewrite_question. Here, the templated variable {{conversations}} has the full conversations produced by the function analyze WHY_answerable in the previous call and from all other iterations. The {{n chatting}} just indicates how many conversations there are included:", + "bbox": [ + 89, + 825, + 906, + 871 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Below, I will display {{n chatting}} chat conversations between a 'user' and an LLM 'assistant'.", + "bbox": [ + 133, + 878, + 766, + 888 + ], + "page_idx": 21 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 490, + 924, + 508, + 936 + ], + "page_idx": 21 + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [ + "The output of this function is a JSON and is:" + ], + "code_body": "In each conversation\n- a user asks the assistant to answer a multichoice VQA question, however they do not provide the image. They only get the question_stem and choices.\n- the question_stem and correct answer are similar to other conversations, however the distractors are different\n- the assistant then answers correctly.\n- the user then asks the assistant to explain how it answered the question with only the text.\n- the assistant then summarizes what strategy they used to answer the question.\nAltogether, these conversations give examples of language shortcuts that need to be avoided when constructing the question_stem and choices.\nHere are the conversations:\n{{conversations}}\nYour task is to rewrite the question_stem and choices so that a different LLM 'assistant' cannot use the language-only strategies that were identified in these past conversations.\nInclude an 'explanation' about why your new set of distractors are better.\nYour revised choices should include the correct answer at the 'correct_index'.\nYour revised question_stem and choices should not significantly change the meaning of the question and correct answer.\nYou are free to change the distractors a lot to achieve this task.\nInclude {{nchoices}} choices.\nThe choices should not be made too different in length and detail to the original question stem and choices.\nNote that the question stem may contain important cues that cannot be removed. E.g. 
if a question asks about the \"green stain\" in an image, you cannot change it to \"stain\" because it introduces ambiguity.\nThe choices should not include the letters in front like (a), (b), ...\nReturn a json", + "guess_lang": "txt", + "bbox": [ + 116, + 94, + 908, + 401 + ], + "page_idx": 22 + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [], + "code_body": "{ \"question_stem\": \"In an electron microscopy image of a neuron expressing human mHTT (BACHD) protein, distinct, spherical features are prominent and seem to be atypical. What is the most probable explanation for these features?\", \"choices\": [ \"Concentration of cellular waste products leading to distinct morphological anomalies\", \"Formation of air bubbles introduced during sample immobilization processes\", \"Ice contamination from liquid nitrogen during sample preparation\", \"Inclusion bodies formed by aggregated proteins expressing during stress response\", \"Cellular morphological changes during mitotic division\" ], \"correct_index\": 2, \"explanation\": \"The revised options introduce plausible scientific phenomena that could also result in spherical or distinct anomalies. This set of distractors is more varied and grounded in legitimate alternative explanations without leading the responder towards the correct answer through language patterns. For example, air bubbles and protein inclusion bodies might mimic sphere-like structures. Additionally, by adding an unrelated phenomenon like cellular changes during division, the set confuses the language pattern strategy previously used. Thus, the correct choice of \\(\" Ice\\) contamination from liquid nitrogen\"\\( \" stands out less and necessitates either visual confirmation or real understanding of sample preparation outcomes to identify correctly.\"", + "guess_lang": "json", + "bbox": [ + 116, + 433, + 908, + 652 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Notice that the response includes an explanation of why it made the changes, and the changes relate to what was in the reflection, adding interpretability.", + "bbox": [ + 109, + 662, + 906, + 691 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "4. Finally, we check that the rewrite has not changed the semantic meaning of the question and answer.", + "bbox": [ + 89, + 691, + 769, + 708 + ], + "page_idx": 22 + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [], + "code_body": "Prompt \n\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\* \nBelow are two question-answer pairs. \nThe question-answer pairs are part of VQA triplets, and both pairs use the same image. \nAre these question-answer pairs semantically equivalent? \nOr are they significantly different? \nGive a true/false and also an explanation. \nQUESTION 1: \nIn an electron microscopy image of a mouse primary embryonic cortical neuron expressing full-length human mHTT (BACHD), \"eye-ball\" shaped features are observed in the top left corner that appear anomalous. What is the most likely cause of these features? 
\nANSWER 1: \nIce contamination from liquid nitrogen during sample preparation \nQUESTION 2:", + "guess_lang": "txt", + "bbox": [ + 133, + 717, + 898, + 898 + ], + "page_idx": 22 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 490, + 924, + 506, + 936 + ], + "page_idx": 22 + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [], + "code_body": "In an electron microscopy image of a neuron expressing human mHTT (BACHD) protein, distinct, spherical features are prominent and seem to be atypical. What is the most probable explanation for these features? \nANSWER 2: \nIce contamination from liquid nitrogen during sample preparation \n\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*", + "guess_lang": "txt", + "bbox": [ + 109, + 93, + 910, + 271 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "5. If the meaning check step fails, then we exit with a fail as we notice that once the meaning deviates it will become more distorted with further iterations. Otherwise, we update the MCQ, and start the loop again. Where the first function call in the new loop will check whether this revised MCQ is now too hard or needs to be rewritten.", + "bbox": [ + 89, + 277, + 906, + 324 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Calibrating the preservation of meaning Since there is a risk that the rewriter will change the semantic meaning of the question and answer, it is crucial that the check_meaning_preserved is effective. For topics requiring domain expertise, this requires prompt engineering with a domain expert providing feedback. This is one of the key human bottlenecks in tuning the prompts in this system.", + "bbox": [ + 88, + 343, + 906, + 405 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Other sources of MCQ invalidation While the check_meaning_preserved function checks that the question and answer is correct, an additional risk is that the rewrite introduces distractors that are more correct than the target answer. In our manual review evaluations (we reviewed all 1,042 MCQs), this was very rare. However, if this were a significant problem in future systems, it may be addressable by tuning the rewriter bot, with an expert human evaluator in the prompt engineering loop.", + "bbox": [ + 88, + 422, + 906, + 500 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Extending RefineBot beyond language shortcuts We constructed RefineBot specifically to address the issue of language shortcuts – where the question is answerable without using the image. This is appropriate because we work with vision-centric VQA, meaning the questions should require image interpretation. However, the framework should work in other settings – namely language-only QA. This would require adjusting the prompts slightly. For example, the current reflection prompt asks how the question was successfully evaluated without using language; also, the current rewriter prompt directs the LLM to address language shortcuts in its rewrite. 
Therefore, adapting Refinebot to language-only MCQ should make these prompts more general.", + "bbox": [ + 88, + 518, + 905, + 625 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "On the other hand, the method could also be updated for the more general VQA setting – to improve questions that do not have language shortcuts, but are easy for some other reason. Here, the evaluator and reflector agents, which are currently LLMs, could be switched for an MLLM, which also ingests the image (though this would be expensive for current state-of-the-art models). We hypothesize that current MLLMs are so weak at image perception, that this is not necessary for MicroVQA – that simply removing language shortcuts will suffice for creating a challenging benchmark. As MLLMs improve at perceiving microscopy images, this general VQA RefineBot could become more useful for ensuring that the benchmark remains challenging.", + "bbox": [ + 88, + 625, + 906, + 731 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Repeated RefineBot iterations A single run of RefineBot may exit as a FAIL, either because the rewrite changed the meaning of the question, or because the maximum number of iterations was reached. However, RefineBot can give very different outputs when changing the random seed, and we find that simply running again with a different random seed can lead to a successful run. Therefore, if the bot fails, we rerun it.", + "bbox": [ + 88, + 750, + 903, + 809 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "If an MCQ exited RefineBot successfully, then the evaluator LLM was fooled by the question (got the answer incorrect). Yet, the question could still be suboptimal in detectable ways: the LLM could be capable of narrowing the MCQ down to two options, but have guessed the wrong option; moreover, this question could be easier for different models. To ensure it is difficult enough overall, we require that the MCQ is answered incorrectly for two random seeds from two different models: gpt-4o-2024-08-06 and claude-3-5-sonnet-20241022. If not, then it fails, and we run the RefineBot again with a different random seed.", + "bbox": [ + 88, + 810, + 906, + 902 + ], + "page_idx": 23 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 490, + 924, + 508, + 936 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "We discuss in the experiments Sec. 5.1 how this process leads to a small evaluation bias against the models used as evaluators. That section shows that despite the bias, this process makes the MCQs generally harder for all models. Future versions of RefineBot could mitigate this by including a larger family of evaluator models.", + "bbox": [ + 89, + 90, + 903, + 135 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Finally, some MCQs fail to pass the RefineBot check. In these cases, we simply keep the stage 1 MCQ.", + "bbox": [ + 107, + 137, + 789, + 152 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "E.5. MCQ generation results: stage 1 exam alignment", + "text_level": 1, + "bbox": [ + 89, + 166, + 504, + 181 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "The Stage 1 pipeline produced one-best-answer MCQs that preserved the content, testing focus, and validity of the expert-generated question-answer pairs. It generated an initial set of distractors following best practices outlined in educational and NBME guidelines. 
A key design goal of Stage 1 was to preserve the original question content without altering or introducing new information. Secondary objectives included ensuring proper formatting according to NBME guidelines and minimizing information that might provide clues to the correct answer without reference to the image.", + "bbox": [ + 88, + 191, + 905, + 267 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "While the pipeline ensured alignment with these goals, preserving the original content occasionally resulted in MCQs containing extra information, such as image descriptions or specific biological terms (i.e., gene or protein name), that could inadvertently help narrow the options. For example, stage 1 question stems sometimes included protein names, disease descriptions, or descriptions of image features, which could reduce the challenge. Figure 11 illustrates Stage 1 outputs that are appropriately formatted but may preserve details from the original input, making questions less challenging. While the distractors were biologically plausible, some options were overly easy to eliminate, reducing the challenge. These limitations are addressed and refined in Stage 2.", + "bbox": [ + 88, + 268, + 905, + 372 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "The o1-mini-2024-09-12 LLM judge, employed during DSPy optimization, compared the Stage 1 MCQs to the original inputs to evaluate (1) semantic similarity and (2) adherence to MCQ formatting. The results showed that $96\\%$ of questions preserved the original meaning, and $100\\%$ adhered to NBME formatting guidelines. Manual analysis of the $4\\%$ labeled as at least partially dissimilar revealed that these cases typically involved open-ended original questions being reformulated into closed-ended MCQs. This transformation, a necessary step for creating one-best-answer MCQs, was not a failure but rather an expected outcome of Stage 1. In rare instances where the original submission included multiple correct answers, the output of Stage 1 retained one correct answer and excluded others to comply with the one-best-answer format. The LLM judge flagged these cases as partially dissimilar but these changes were essential for adhering to the MCQ structure.", + "bbox": [ + 88, + 375, + 905, + 496 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "A common issue in MCQ design is that correct answers tend to be longer and more detailed than distractors. To assess this, we computed the ratio of tokens in incorrect options to those in the correct answer, where values $\\leq 1$ indicate longer correct answers, and values $\\geq 1$ suggest shorter correct answers. Stage 1 MCQs showed a ratio of 0.87, indicating that correct answers were only slightly longer than distractors. By comparison, naive MCQ generation yielded a ratio of 0.46, meaning correct answers were nearly twice as long as distractors, making them easier to identify. Additionally, naive MCQ generation preserved the original meaning in only $89\\%$ of cases and met NBME formatting guidelines in just $70\\%$ , highlighting the efficacy of the Stage 1 process at preserving information and ensuring proper MCQ formatting.", + "bbox": [ + 88, + 497, + 903, + 603 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "E.6. 
MCQ generation results: stage 2 RefineBot", + "text_level": 1, + "bbox": [ + 89, + 616, + 460, + 633 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "The second stage of our MCQ generation method focuses on increasing the difficulty of the questions and options. The key idea is that we want to remove language shortcuts – information in the MCQ that allows answering the question without access to the image. To aid in understanding, we construct a toy example in Fig. 10 with the three types of language shortcuts that we identify. The questions should be paired with a stain that is mitochondria, and we want the VQA to require actually inferring that from the image. In shortcut 1, the text ‘gives away’ the image content so it’s trivial to answer the question. In shortcut 2: The LLM generates implausible or weak distractors. In shortcut 3: ‘Language bias’, is a known problem in VQA [29].", + "bbox": [ + 88, + 641, + 905, + 746 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Moving to real MCQs, in Fig. 4 we illustrate the transformation of a question at each stage, highlighting issues and corresponding improvements. While stage 1 produces exam-style questions that preserve the QA objective, the wording of the questions and distractors often enable MLLMs to exploit shortcuts rather than demonstrate knowledge. To address this, stage 2 (RefineBot) employs a chain-of-thought process. We analyze additional examples of improvements achieved in stage 2, showing how RefineBot improves both the style and content. The key strategies correspond to the three shortcut types that we identified:", + "bbox": [ + 88, + 748, + 905, + 837 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "1. (Visual giveaway) Over-description of the image eliminates the need for image interpretation. For example, in Fig. 11 (middle row), stage 1 includes details that reveal the specifics of the image, allowing the model to propose common issues without analyzing the image. Stage 2 replaces these specifics with general terms in the question stem.", + "bbox": [ + 138, + 839, + 903, + 898 + ], + "page_idx": 24 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 490, + 924, + 506, + 935 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Language shortcut 1: Visual giveaway", + "text_level": 1, + "bbox": [ + 217, + 93, + 339, + 117 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "Q: The green TOMM20 outer mitochondrial membrane protein localizes where?", + "bbox": [ + 184, + 125, + 364, + 160 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "A) Mitochondria", + "bbox": [ + 184, + 170, + 277, + 181 + ], + "page_idx": 25 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "B) Nucleus", + "C) ER", + "D) Golgi" + ], + "bbox": [ + 186, + 181, + 251, + 217 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "Why? 
The 'M' in \"TOMM20\" stands for 'mitochondria', so that's enough to answer the question.", + "bbox": [ + 181, + 239, + 372, + 275 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "Language shortcut 2: Weak distractors", + "text_level": 1, + "bbox": [ + 437, + 93, + 557, + 116 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "Q: What is the localization of the green stain?", + "bbox": [ + 405, + 131, + 576, + 155 + ], + "page_idx": 25 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A) Mitochondria", + "B) Butane", + "C) Quokka", + "D) Coatf" + ], + "bbox": [ + 405, + 164, + 496, + 210 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "Why? The LLM generates distractors that are not plausible, and can easily be eliminated", + "bbox": [ + 401, + 239, + 576, + 275 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "Language shortcut 3: Language bias", + "text_level": 1, + "bbox": [ + 656, + 93, + 777, + 117 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "Q: For a study on Parkinson's disease, we stain a target green. Where does it localize?", + "bbox": [ + 622, + 125, + 800, + 160 + ], + "page_idx": 25 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A) Mitochondria", + "B) Nucleus", + "C) ER", + "D) Golgi" + ], + "bbox": [ + 622, + 170, + 717, + 217 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "Why? 'Mitochondria' is the most plausible answer because the question mentions Parkinson's disease, which frequently studies mitochondrial dysfunction", + "bbox": [ + 620, + 227, + 799, + 286 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "Figure 10. Three types of language shortcut relevant to MicroVQA. The target VQA has an image that is fluorescence microscopy stained with TOMM20 which would show a pattern consistent with visualizing mitochondria.", + "bbox": [ + 89, + 301, + 906, + 330 + ], + "page_idx": 25 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "2. (Weak distractors) Distractors unrelated to the specifics of the scenario described by the question. In Fig. 11 (bottom row), stage 2 increases distractor complexity by incorporating elements tied to the question's context, making them less predictable.", + "3. (Language bias) Overly precise wording in the question stem: Precise phrasing can make it easy to identify the correct answer through pattern matching. For example, Fig. 11 (top row), changing 'eye-ball shaped' to 'spherical' retains the question's intent while creating more challenging distractors referencing plausible alternative processes." + ], + "bbox": [ + 138, + 357, + 906, + 450 + ], + "page_idx": 25 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 490, + 924, + 508, + 936 + ], + "page_idx": 25 + }, + { + "type": "image", + "img_path": "images/be6415061000edb79f7e653656032f75e763b7c4fe5f97d82e4c52b46e24c788.jpg", + "image_caption": [ + "Figure 11. Examples of changes to questions and options between stage 1 and stage 2 (RefineBot) of our MCQ generation method. In red elements that need to be improved and in green improvements." + ], + "image_footnote": [], + "bbox": [ + 89, + 87, + 903, + 642 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "F. Experiments details", + "text_level": 1, + "bbox": [ + 89, + 710, + 284, + 728 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "F.1. 
Evaluation prompts", + "text_level": 1, + "bbox": [ + 89, + 737, + 282, + 753 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "All prompts except for LLaVA and LLaVA-Med used the chain-of-thought prompt template from the MMMU-Pro code [88]:", + "bbox": [ + 89, + 762, + 906, + 792 + ], + "page_idx": 26 + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [], + "code_body": "The following is a multiple choice question (with answers). Think step by step and then output the answer in the format of $\"The$ answer is $(\\mathrm{X})\\backslash \"$ at the end. {{QUESTION}}} Options: {{CHOICES}}", + "guess_lang": "txt", + "bbox": [ + 107, + 816, + 767, + 891 + ], + "page_idx": 26 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 490, + 924, + 508, + 936 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "An example complete question is:", + "text_level": 1, + "bbox": [ + 109, + 92, + 336, + 106 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "The following is a multiple choice question (with answers).", + "bbox": [ + 107, + 128, + 514, + 141 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "Think step by step and then output the answer in the format of \"The answer is (X)\" at the end.", + "bbox": [ + 107, + 141, + 754, + 151 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "A fluorescence microscopy image displays cultured mouse primary neurons stained with a specific marker localized to mitochondria. The fluorescence signal exhibits an uneven distribution across the neuronal cell, and there is an absence of signal within the nucleus. What is the most probable cause for the uneven distribution of the mitochondrial signal?", + "bbox": [ + 107, + 160, + 883, + 202 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "Options:", + "bbox": [ + 109, + 210, + 166, + 220 + ], + "page_idx": 27 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(1): Mitochondrial transport along axonal and dendritic processes ensures uneven distribution based on cellular energy demands.", + "(2): Differential expression of mitochondrial proteins leads to variable fluorescence signal across the neuron.", + "(3): Mitochondrial biogenesis occurs at specific neuronal regions, causing localized fluorescence accumulation.", + "(4): Mitochondrial anchoring to the cytoskeleton restricts their movement, resulting in uneven distribution.", + "(5): Mitochondrial dynamics, including fission and fusion, allow mitochondria to move and distribute throughout the neuron." + ], + "bbox": [ + 125, + 222, + 885, + 291 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "The answer is then extracted with the regex pattern: answer is $(?([0-9])$ ?", + "bbox": [ + 109, + 319, + 684, + 333 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "For LLaVA-Med, we added an extra parsing strategy. This was because we observed that while LLaVA-Med often gave reasonable responses to questions, it failed to follow instructions well. First, it did not follow the CoT prompt consistently to think step by step. It also didn't use the right answer formatting: \"The answer is (X) \\\". 
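For reference, the default answer-extraction step described above reduces to a single regular-expression search over the model response. The following is a minimal sketch, assuming the pattern is essentially "answer is \(?([0-9])\)?"; the helper name and exact pattern are illustrative reconstructions, not the released evaluation code.

import re

# Assumed reconstruction of the extraction described above: take the last
# "answer is (X)" occurrence in the response and return the option number.
ANSWER_PATTERN = re.compile(r"answer is \(?([0-9])\)?", re.IGNORECASE)

def extract_answer(response: str):
    matches = ANSWER_PATTERN.findall(response)
    return int(matches[-1]) if matches else None

# e.g. extract_answer("... The answer is (3)") returns 3.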
Since we were more interested in biomedical reasoning than in adherence to VQA formatting, we decided to handle this model differently.", + "bbox": [ + 88, + 335, + 905, + 395 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "Specifically, to encourage CoT reasoning, we fixed the prefix of the response to:", + "bbox": [ + 109, + 397, + 637, + 411 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "Response: Let's think through this step by step and evaluate each option..", + "bbox": [ + 107, + 412, + 857, + 426 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "Then we added a suffix to the end:", + "bbox": [ + 89, + 426, + 318, + 440 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "After carefully considering all the options, the most likely answer is (1). Then we continued generation, and did regular regex parsing to get the final answer.", + "bbox": [ + 89, + 441, + 851, + 472 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "We also find that a few models make minor but consistent errors when formatting the answer, and we elect to fix them. For example, x-ai/grok-2-vision-1212 commonly adds two asterisks before the answer, like \"The answer is **(4)**\" instead of \"The answer is (4)\" (GPT-4o also did this sometimes). Some other common errors that we saw but did not handle (because it would be difficult to do rigorously) were shown by meta-llama/llama-3.2-11b-vision-instruct: the response would write one of the MCQ options but not state its number.", + "bbox": [ + 88, + 473, + 905, + 547 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "F.2. Model details", + "text_level": 1, + "bbox": [ + 89, + 561, + 230, + 575 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "Our main benchmarking experiments in Tab. 3 refer to models that have multiple versions. The exact model versions used for inference are shown in Tab. 6.", + "bbox": [ + 88, + 585, + 905, + 614 + ], + "page_idx": 27 + }, + { + "type": "table", + "img_path": "images/ed6c38f64fb2524c7e826b1f14a467709fb078eff5a02a3d354453b7495b1d70.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Model Name | API Endpoint | Source & Details
o1 | o1-2024-12-17 | OpenAI API
Claude-3.5-Sonnet* | anthropic/claude-3.5-sonnet-20240620 | Openrouter API
Gemini-Pro-1.5 | google/gemini-pro-1.5 | Openrouter API
Pixtral-Large | mistralai/pixtral-large-2411 | Openrouter API
Grok-2-Vision | x-ai/grok-2-vision-1212 | Openrouter API
Qwen-2-vl-72b-Instruct | qwen/qwen-2-vl-72b-instruct | Openrouter API
VILA1.5-40b | VILA1.5-40b | HuggingFace, local inference
GPT-4o | gpt-4o-2024-08-06 | OpenAI API
Llama-3.1-Nemotron-70b-Instruct | nvidia/llama-3.1-nemotron-70b-instruct | Openrouter API
Llama-3.2-90b-Vision-Instruct | meta-llama/llama-3.2-90b-vision-instruct | Openrouter API
Qwen-2-VL-7b | qwen/qwen-2-vl-7b-instruct | Openrouter API
Claude-3.5-Haiku | anthropic/claude-3.5-haiku | Openrouter API
Gemini-Flash-1.5-8b | google/gemini-flash-1.5-8b | Openrouter API
GPT-4o-mini | gpt-4o-mini-2024-07-18 | OpenAI API
Pixtral-12b | mistralai/pixtral-12b | Openrouter API
VILA1.5-13b | VILA1.5-13b | HuggingFace, local inference
Llama-3.2-11b-vision-instruct | meta-llama/llama-3.2-11b-vision-instruct | Openrouter API
LLaVA-Med-Mistral-7B | LLaVA-Med-Mistral-7B | HuggingFace, local inference
LLaVA-Mistral-7B | LLaVA-Mistral-7B | HuggingFace, local inference
", + "bbox": [ + 142, + 627, + 852, + 873 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "Table 6. AI model API endpoints and their sources", + "bbox": [ + 346, + 882, + 650, + 896 + ], + "page_idx": 27 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 488, + 924, + 509, + 936 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "For LLaVA and LLaVA-med, although the version number 1.6 is different from LLaVA-Med with 1.5, we do believe that LLaVA-Med was either trained from this LLaVA version, or from one that is very similar. They have identical architecture, and the version is very close; it is possible the LLaVA-Med copied the number incorrectly. The LLaVA-Med codebase or github issues does not clarify this at the time of writing.", + "bbox": [ + 88, + 90, + 906, + 151 + ], + "page_idx": 28 + }, + { + "type": "text", + "text": "Note that for OpenRouter function calls to Anthropic, the date is not included in the string. We believe they use timestamps for older model versions, but not for the latest version. So reproducing it may require checking if the model has been updated.", + "bbox": [ + 88, + 152, + 906, + 184 + ], + "page_idx": 28 + }, + { + "type": "text", + "text": "F.3. Human baseline on MicroVQA", + "text_level": 1, + "bbox": [ + 89, + 194, + 366, + 210 + ], + "page_idx": 28 + }, + { + "type": "text", + "text": "To perform a human baseline, we recruited co-authors who created questions. The conditions were:", + "bbox": [ + 89, + 218, + 743, + 233 + ], + "page_idx": 28 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Questions were provided in a google form.", + "- Participants did not view any question that they created.", + "- Internet access was allowed, but LLM access was not. This is because realistic biological practice is knowledge-intensive, and may require looking up facts or references.", + "- Time limit was 3 minutes per question. They were free to take as many breaks as they wanted.", + "- They answered the multiple choice question without further written justification." + ], + "bbox": [ + 89, + 238, + 903, + 329 + ], + "page_idx": 28 + }, + { + "type": "text", + "text": "These conditions – 3 minutes and internet access – give sufficient time for an expert to make a reasonable attempt. Since the score was far below $100\\%$ , we claim that these questions could be of value to real experts as an LLM chat assistant. Each participant only took a subset of the questions (about 100 each). The final score is the aggregate.", + "bbox": [ + 88, + 333, + 906, + 378 + ], + "page_idx": 28 + }, + { + "type": "text", + "text": "F.4. More experiments on attribute-level information", + "text_level": 1, + "bbox": [ + 89, + 390, + 500, + 406 + ], + "page_idx": 28 + }, + { + "type": "image", + "img_path": "images/ebc6e3c6cd413d919dbcc1c3973a6968c02e37ed4f14c33c578aea3801c93a6c.jpg", + "image_caption": [ + "Figure 12. Performance by image modality type for the best models: Gemini-1.5-Pro (closed source), VILA1.5-40B (open-source), and LlaVA-Med-Mistral-7B (medical)." + ], + "image_footnote": [], + "bbox": [ + 367, + 430, + 643, + 654 + ], + "page_idx": 28 + }, + { + "type": "text", + "text": "We described our metadata features in Sec. 3, and we can now leverage them to benchmark models on different attributes. Fig. 5 (left) shows all sub-tasks, and shows that 'compare image sets', which is common in biology to compare 'treatment' and 'control' groups. 
All models perform relatively well, despite multi-image reasoning being understudied in MLLMs [78]. Fig. 5 (right) shows that for Gemini and LLaVa-Med, higher Bloom's levels - corresponding to more advanced reasoning - leads to worse performance. This supports the intuitive result that higher-level reasoning is more challenging - although VILA does not follow this trend. A final result in Fig. 12 shows that all models performed worse on cryo-electron microscopy images than brightfield or fluorescence microscopy. This may be due to cryo-EM's challenging features (low contrast, low signal-to-noise ratio) and their under-representation in pretraining datasets [93].", + "bbox": [ + 88, + 713, + 906, + 835 + ], + "page_idx": 28 + }, + { + "type": "text", + "text": "F.5. Language shortcut ablations", + "text_level": 1, + "bbox": [ + 89, + 845, + 346, + 863 + ], + "page_idx": 28 + }, + { + "type": "text", + "text": "In Tab. 4 we do language shortcut ablations for some top-performing models and some lower-performing models - both no-image and choices-only ablations.", + "bbox": [ + 88, + 869, + 906, + 902 + ], + "page_idx": 28 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 490, + 924, + 508, + 936 + ], + "page_idx": 28 + }, + { + "type": "table", + "img_path": "images/2b63a3f4b0c5d1729ff388f508d13cf7d089a92cb9958a9b0d21a860aa55dd42.jpg", + "table_caption": [ + "Table 7. Ablation study on MicroVQA MCQs. Column 2 is overall accuracy. Cols 3-4 are the no-image ablation accuracy and drop compared to overall accuracy. Cols 5-6 are the choices-only ablation accuracy and drop compared to overall accuracy. See the text for discussion." + ], + "table_footnote": [], + "table_body": "
Model | Overall Accuracy | No-image Accuracy | No-image Drop | Choices-only Accuracy | Choices-only Drop
o1 | 52.8 | 49.2 | -3.6 | 37.7 | -15.1
Claude-3.5-Sonnet | 51.7 | 46.0 | -5.8 | 44.0 | -7.7
Gemini-Pro-1.5 | 51.1 | 47.2 | -3.8 | 36.8 | -14.3
Pixtral-Large | 49.8 | 46.3 | -3.6 | 36.7 | -13.1
Grok-2-Vision | 48.4 | 46.3 | -2.1 | 40.5 | -7.9
GPT-4o-mini | 46.2 | 44.2 | -1.9 | 34.3 | -11.9
Pixtral-12b | 45.6 | 43.7 | -1.9 | 31.8 | -13.8
Random | 22.0 | - | - | - | -
", + "bbox": [ + 235, + 142, + 761, + 305 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "The no-image ablation is a common test in VQA evaluations[29, 74]. The MLLM only receives the text prompt, and we add the following sentence to the prompt template:", + "bbox": [ + 89, + 332, + 906, + 362 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "If an image is mentioned ignore this information and try your best to answer the question.", + "bbox": [ + 89, + 364, + 883, + 391 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "The choices-only ablation has been explored in (non-visual) question answering [7]. The model does not receive the image or the question and must guess an answer from only the choices. We add this sentence to the prompt template:", + "bbox": [ + 89, + 393, + 903, + 422 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "However I will not give you the question text or the images, I will only give you the choices, so please try your best to answer the question.", + "bbox": [ + 89, + 424, + 893, + 454 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Both ablations lower the scores, but they are still above random. While this is initially surprising, it is actually consistent with other benchmarks. A key challenge in VQA benchmarks is for questions to be \"vision-centric\": that the question requires using the image features to answer. We still have very strong evidence that MicroVQA is vision-centric, which we discuss below.", + "bbox": [ + 89, + 455, + 905, + 513 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "First, let's recall the language shortcuts by considering a VQA sample with an image of a green field, with the question \"what color is the grass\".", + "bbox": [ + 89, + 516, + 903, + 546 + ], + "page_idx": 29 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- 'Language bias' is well-known for enabling models to make educated guesses about MCQs without access to the image [29]. For the question \"what color is the grass?\", then without the image, \"green\" is a likely guess, although there are other possible answers, like \"brown\".", + "- 'Visual giveaway' means there are too many image details in the question, for example if the question were \"what color is the green grass?\" This removes any need to look at the image.", + "- 'Weak distractors' gives other question choices that are easily eliminated. For example, for question \"what color is the grass?\", a bad distractors would be \"pink\". This also removes any need to look at the image." + ], + "bbox": [ + 89, + 547, + 903, + 652 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "(In Appendix E.6, we also discussed language shortcuts with biology examples as it related to MCQ construction.) Now we argue that MicroVQA is still vision-centric despite the ablation results.", + "bbox": [ + 89, + 654, + 903, + 684 + ], + "page_idx": 29 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Language bias allows cheating on vision-centric questions If answerable due to a 'language bias', the question may still require perception. A model may guess that the grass is green, but it is only a guess - the grass really could be brown [29]. Therefore correct answers under the 'no-image' ablation may still be vision-centric. On the other hand problems with 'visual giveaway' and 'weak distractor' shortcuts really do render the question not-vision-centric. 
The RefineBot method we employed in MCQ generation was designed to target language shortcut removal. Anecdotally, our qualitative analysis (from Appendix E.6) found that RefineBot tends to focus on fixing the visual giveaway and weak distractor shortcuts. Ideally we would perform a quantitative analysis to identify which language shortcuts are due to language bias versus the others, however analyzing these questions is very time-consuming.", + "2. The unsolved questions are hard because of perception. The 'no-image' ablation only tells us about the 'easy' questions, but half the question is too hard for current models. Our qualitative analysis of 30 failure cases demonstrates that half of them made critical perception errors, which strongly suggests they require image understanding" + ], + "bbox": [ + 89, + 686, + 905, + 854 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "In contrast to the 'vision-centric' claims from the Cambrian paper [74]: They show that many VQA benchmarks have no-image ablation scores only slightly underperforming the with-image scores, and they conclude that such VQA benchmarks \"do not properly measure vision-centric capabilities\". But that conclusion cannot be drawn from only the no-image ablation", + "bbox": [ + 89, + 854, + 905, + 901 + ], + "page_idx": 29 + }, + { + "type": "page_number", + "text": "18", + "bbox": [ + 490, + 924, + 508, + 936 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "(following our point 2 above). For benchmarks where MLLM's score far from $100\\%$ , the failures could mostly be due to poor image understanding. This is very plausible since visual capabilities in MLLMs are viewed as lagging behind language capabilities. Likewise, our experience with our microscopy data was that visual understanding was poor, but that language knowledge was impressive.", + "bbox": [ + 89, + 90, + 906, + 151 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "In point 1 above, we argued that MLLMs can use language biases to pass the no-image ablation, even if they are vision-centric questions. This is actually in agreement with the claim from Cambrian that those questions don't test vision-centric capabilities.", + "bbox": [ + 89, + 152, + 906, + 196 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Finally, some non-vision centric questions are acceptable. The goal of MicroVQA is to source realistic problems from microscopy research. While this area does lean heavily on image analysis, not every question will realistically require perception, so this is acceptable.", + "bbox": [ + 89, + 196, + 906, + 242 + ], + "page_idx": 30 + }, + { + "type": "page_number", + "text": "19", + "bbox": [ + 490, + 924, + 508, + 936 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "F.6. Error analysis", + "text_level": 1, + "bbox": [ + 89, + 90, + 236, + 104 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "Here we show samples from the manual qualitative error analysis (details in the main paper). 
To see all 30 examples, refer the github repo.", + "bbox": [ + 89, + 113, + 906, + 143 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "F.6.1 Perception errors", + "text_level": 1, + "bbox": [ + 89, + 162, + 266, + 176 + ], + "page_idx": 31 + }, + { + "type": "image", + "img_path": "images/3746b9ae2847c78b38a88ee79e517e75b632e6cdb134845904911e91617e5f4b.jpg", + "image_caption": [ + "Subject: Cell and molecular biology", + "Bloom's level: 4 (Analysis)", + "Specimen: mammalian cells", + "Figure 13" + ], + "image_footnote": [], + "bbox": [ + 106, + 203, + 390, + 507 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "Q: A cryo-electron tomography (cryoET) slice of mammalian cells was acquired at a resolution of 1.3 Å and binned by 4 (resulting in a 5.3 Å per pixel). Within the tomogram, dark circular like structures are observed in an organelle. What is the modest likely identity of these structures?", + "bbox": [ + 416, + 209, + 880, + 286 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "A) Ribosomes X", + "bbox": [ + 418, + 300, + 545, + 315 + ], + "page_idx": 31 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "B) Phase-separated condensates such as stress granules", + "C) Lysosomes", + "D) Peroxisomes" + ], + "bbox": [ + 419, + 316, + 841, + 361 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "VLM Answer: Let me think about this step by step:", + "bbox": [ + 416, + 378, + 779, + 393 + ], + "page_idx": 31 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1) First, let's look at what we see in the image:", + "- A cryo-electron tomography slice showing cellular structures", + "- Circular-like dark structures visible", + "- The resolution is quite high (5.3 Å after binning)", + "2) Let's consider the size and appearance of these structures:", + "- They appear as dense, small, roughly circular objects", + "- The size is consistent with ribosomes (approximately $25 - 30\\mathrm{nm}$ )", + "- They show the characteristic electron density pattern of ribosomes", + "3) Let's analyze each option:", + "- Ribosomes: Match the size, shape, and density pattern", + "- Phase-separated condensates: Usually less defined, more amorphous", + "- Lysosomes: Much larger, membrane-bound organelles", + "- Peroxisomes: Also larger, membrane-bound organelles", + "4) Additional considerations:", + "- Ribosomes are often found associated with mitochondria", + "- The resolution is sufficient to resolve ribosomal structures", + "- The contrast and density are typical of ribosome imaging in cryoET" + ], + "bbox": [ + 418, + 395, + 877, + 696 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "The answer is (1)", + "bbox": [ + 419, + 698, + 540, + 712 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "Error type: Perception", + "text_level": 1, + "bbox": [ + 419, + 733, + 584, + 747 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "Error reason: These are granules characterized by", + "bbox": [ + 419, + 748, + 774, + 763 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "heterogeneous, electron-dense aggregates. 
Ribosomes would be smaller, more regular in shape, and not have as high contrast.", + "bbox": [ + 419, + 763, + 885, + 792 + ], + "page_idx": 31 + }, + { + "type": "page_number", + "text": "20", + "bbox": [ + 488, + 924, + 508, + 936 + ], + "page_idx": 31 + }, + { + "type": "image", + "img_path": "images/a55baf6d76f11541649358ddec8e664a4e40a1e350239dd7102746bd0dbe4c24.jpg", + "image_caption": [ + "Subject: Anatomy", + "Bloom's level: 4 (Analysis)", + "Specimen: gastrointestinal lesion biopsy", + "Figure 14" + ], + "image_footnote": [], + "bbox": [ + 98, + 93, + 380, + 276 + ], + "page_idx": 32 + }, + { + "type": "text", + "text": "Q: A hematoxylin and eosin (H&E)-stained micrograph shows cellular formations within a gastrointestinal tissue sample with unique organization and cellular patterns. What pattern and morphology do these cells most closely align with?", + "bbox": [ + 403, + 93, + 883, + 127 + ], + "page_idx": 32 + }, + { + "type": "text", + "text": "A) Bundles of elongated cells with pale pink cytoplasm and uniform oval nuclei in a crisscross pattern", + "text_level": 1, + "bbox": [ + 403, + 137, + 839, + 159 + ], + "page_idx": 32 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "B) Circular groups of cells with mucin in central spaces and small, dark nuclei", + "C) Dense arrays of polygonal cells with small nucleoli and vacuolated cytoplasm", + "D) Rows of cells with conspicuous cytoplasm and rounded nuclei", + "E) Diffused arrangement of cells with transparent cytoplasm and irregularly clustered chromatin" + ], + "bbox": [ + 403, + 159, + 887, + 203 + ], + "page_idx": 32 + }, + { + "type": "text", + "text": "VLM Answer: Let me analyze this step by step:", + "text_level": 1, + "bbox": [ + 405, + 220, + 648, + 231 + ], + "page_idx": 32 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1) Looking at the H&E-stained micrograph:", + "- It shows a circular cross-section of gastrointestinal tissue", + "- There's a clear organizational pattern along the periphery", + "- The center shows pink-stained connective tissue (lamina propria)", + "2) Focusing on the cellular arrangement:", + "- The cells are arranged in circular groups", + "- These groups form glandular structures", + "- There are clear central spaces (lumens) in these structures", + "- The nuclei appear dark and are located at the base of the cells", + "3) Comparing with the options:", + "- Option 1 describes elongated cells in crisscross pattern \n- doesn't match", + "- Option 2 describes circular groups with mucin in central spaces and small, dark nuclei - matches perfectly", + "- Option 3 describes polygonal cells with vacuoles - doesn't match", + "- Option 4 describes rows of cells - doesn't match", + "- Option 5 describes diffused arrangement - doesn't match", + "4) The image shows typical intestinal glands (crypts) with:", + "- Circular arrangement", + "- Central lumens containing mucin", + "- Small, dark nuclei at the base of the cells" + ], + "bbox": [ + 405, + 231, + 859, + 449 + ], + "page_idx": 32 + }, + { + "type": "text", + "text": "The answer is (2)", + "bbox": [ + 405, + 450, + 493, + 460 + ], + "page_idx": 32 + }, + { + "type": "text", + "text": "Error type: Perception", + "text_level": 1, + "bbox": [ + 401, + 479, + 519, + 489 + ], + "page_idx": 32 + }, + { + "type": "text", + "text": "Error reason: No. The intestinal glands are not the lesion. 
There is a pink spindle cell lesion with a fascicular arrangement of cells", + "bbox": [ + 400, + 491, + 893, + 513 + ], + "page_idx": 32 + }, + { + "type": "page_number", + "text": "21", + "bbox": [ + 488, + 924, + 506, + 936 + ], + "page_idx": 32 + }, + { + "type": "image", + "img_path": "images/db06363d55bd66d857c3ebf1210cc63a3d1ac5bbd7e343d7daf04f3971021e6e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 94, + 89, + 372, + 248 + ], + "page_idx": 33 + }, + { + "type": "image", + "img_path": "images/d356dc02564948d373741e89639cb989798f220bb7d64ecdf71bac8c0b5316e8.jpg", + "image_caption": [ + "Subject: Pathology", + "Bloom's level: 4 (Analysis)", + "Specimen: schwannoma tissue", + "Figure 15" + ], + "image_footnote": [], + "bbox": [ + 94, + 253, + 372, + 417 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Q: An H&E-stained image of a nerve sheath tumor displays unique cell arrangements and marked changes within its structure. Given these observations, what is the most likely reason for the observed morphological alternations?", + "bbox": [ + 385, + 95, + 890, + 130 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "A) Indication of rapid cellular proliferation X", + "text_level": 1, + "bbox": [ + 385, + 140, + 622, + 152 + ], + "page_idx": 33 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "B) Sign of tumor shrinkage post-treatment", + "C) Result of genetic changes within the tumor cells", + "D) Outcome of a chronic process affecting the tumor", + "E) Sign of metastasis to another location" + ], + "bbox": [ + 387, + 152, + 674, + 196 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "VLM Answer: Let me analyze this step by step:", + "bbox": [ + 385, + 207, + 637, + 219 + ], + "page_idx": 33 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1) Looking at the H&E-stained images, I can observe:", + "- Dense cellular proliferation", + "- Distinctive \"herringbone\" or swirling pattern", + "- High cellularity with numerous nuclei", + "- Evidence of active growth pattern", + "2) Key characteristics visible:", + "- The cells are arranged in fascicles", + "- There's a high nuclear to cytoplasmic ratio", + "- The pattern suggests active tumor growth", + "- No signs of necrosis or treatment effect", + "- No obvious signs of metastatic spread", + "3) Analysis of the options:", + "- Option 1 matches with the high cellularity and growth pattern", + "- Option 2 doesn't fit as there's no evidence of shrinkage.", + "- Option 3 while possible, isn't directly evident in morphology", + "- Option 4 doesn't match the active growth pattern seen", + "- Option 5 isn't supported by the histological features shown", + "4) The morphological features strongly suggest active tumor growth with:", + "- Characteristic fascicular pattern", + "- High cellular density", + "- Well-preserved tumor architecture", + "- Signs of active proliferation", + "The answer is (1)" + ], + "bbox": [ + 387, + 220, + 769, + 478 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Error type: Perception", + "text_level": 1, + "bbox": [ + 387, + 488, + 509, + 500 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "Error reason: No. One image shows cystic change, which can be seen in chronic, longstanding schwannomas. 
The other shows Verocay bodies and fascicular growth, which do not indicate rapid cellular proliferation.", + "bbox": [ + 385, + 500, + 885, + 534 + ], + "page_idx": 33 + }, + { + "type": "page_number", + "text": "22", + "bbox": [ + 488, + 924, + 506, + 935 + ], + "page_idx": 33 + }, + { + "type": "image", + "img_path": "images/2b61b8249e59c6044d51a093c2d3f893d71c6d57fb46594536267a630a55470d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 102, + 89, + 377, + 251 + ], + "page_idx": 34 + }, + { + "type": "image", + "img_path": "images/3675c07975e83e8d4cd7bc34316a81ac62224a32c6db853db28f75f401ec8cd1.jpg", + "image_caption": [ + "Subject: Physiology", + "Bloom's level: 2 (Comprehension)", + "Specimen: mouse liver tissue (wounded and unwounded)", + "Figure 16" + ], + "image_footnote": [], + "bbox": [ + 102, + 257, + 377, + 429 + ], + "page_idx": 34 + }, + { + "type": "text", + "text": "Q: In a fluorescence microscopy image comparing wounded and unwounded liver tissues stained to visualize markers of immune response – neutrophil derived migrasomes (Ly6G, yellow) and platelets (CD41, magenta), what is most likely observed about the distribution of these markers in unwounded tissue?", + "bbox": [ + 398, + 94, + 870, + 138 + ], + "page_idx": 34 + }, + { + "type": "text", + "text": "A) Slight overlap without any notable biological consequences", + "text_level": 1, + "bbox": [ + 400, + 148, + 730, + 160 + ], + "page_idx": 34 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "B) Substantial overlap implying persistent immune activation", + "C) No overlap, indicating complete segregation of cellular functions X", + "D) Randomized distribution suggesting passive circulation", + "E) Significant separation highlighting cellular non-interaction" + ], + "bbox": [ + 400, + 160, + 756, + 205 + ], + "page_idx": 34 + }, + { + "type": "text", + "text": "VLM Answer: Let me think about this step by step:", + "text_level": 1, + "bbox": [ + 400, + 214, + 663, + 224 + ], + "page_idx": 34 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1) The image shows two types of staining:", + "- Yellow (Ly6G) for neutrophil-derived migrasomes", + "- Magenta (CD41) for platelets", + "2) Looking at the unwounded tissue (presumably the left image):", + "- There are distinct yellow dots (migrasomes)", + "- There are magenta structures (platelets)", + "- The distribution appears fairly separated", + "- The markers rarely overlap", + "3) This makes biological sense because:", + "- In healthy/unwounded tissue, immune cells and platelets typically maintain their distinct roles", + "- Without injury, there's no strong stimulus for interaction", + "- The cells are likely performing their normal surveillance functions", + "4) Analyzing the options:", + "- Option 1 suggests slight overlap - more than what's visible", + "- Option 2 suggests high overlap - not seen", + "- Option 3 indicates complete segregation - matches the image", + "- Option 4 suggests random distribution - the pattern looks more organized", + "- Option 5 also suggests separation - similar to option 3" + ], + "bbox": [ + 401, + 226, + 880, + 422 + ], + "page_idx": 34 + }, + { + "type": "text", + "text": "5) Option 3 best matches what we observe: \"No overlap, indicating complete segregation of cellular functions\"", + "bbox": [ + 400, + 424, + 872, + 444 + ], + "page_idx": 34 + }, + { + "type": "text", + "text": "The answer is (3)", + "bbox": [ + 401, + 445, + 490, + 455 + ], + "page_idx": 34 + }, + { + "type": 
"text", + "text": "Error type: Perception", + "text_level": 1, + "bbox": [ + 398, + 465, + 517, + 477 + ], + "page_idx": 34 + }, + { + "type": "text", + "text": "Error reason: The markers rarely overlap, but there is very focal colocalization in some puncta.", + "bbox": [ + 397, + 478, + 877, + 489 + ], + "page_idx": 34 + }, + { + "type": "text", + "text": "The selected answer argues for \"complete segregation of cellular functions\", which is a too extreme or absolute conclusion based on the image.", + "bbox": [ + 397, + 489, + 867, + 510 + ], + "page_idx": 34 + }, + { + "type": "page_number", + "text": "23", + "bbox": [ + 488, + 924, + 506, + 936 + ], + "page_idx": 34 + }, + { + "type": "image", + "img_path": "images/dbff59d9769bbac2f4fc2bdfdddb944347024187a9e2b435ec6b18abceb54e0a.jpg", + "image_caption": [ + "F.6.2 Knowledge errors" + ], + "image_footnote": [], + "bbox": [ + 99, + 132, + 313, + 271 + ], + "page_idx": 35 + }, + { + "type": "image", + "img_path": "images/6b8ac7a46477b945fb15eae3a726c93083eaee2fc1e7ec7c8ae124dedb2ecc39.jpg", + "image_caption": [ + "Figure 17" + ], + "image_footnote": [], + "bbox": [ + 328, + 137, + 352, + 151 + ], + "page_idx": 35 + }, + { + "type": "text", + "text": "Q: A double immunofluorescence experiment was conducted on nerve cells, highlighting Sema4B (red marker) and Sox9, an astrocyte marker (green marker). Sema4B was found primarily surrounding the nuclear area dispersed in the cytoplasm. Based on this pattern, what potential role might Sema4B play in these cells?", + "bbox": [ + 364, + 132, + 885, + 181 + ], + "page_idx": 35 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A) Sema4B contributes to the regulation of metabolic pathways within the cytoplasm.", + "B) Sema4B is associated with intracellular trafficking processes.", + "C) Sema4B acts as a signaling molecule interacting with cell surface receptors.", + "D) Sema4B aids in transporting proteins across cell compartments. X", + "E) Sema4B is directly involved in gene transcription regulation inside the nucleus." 
+ ], + "bbox": [ + 364, + 191, + 833, + 250 + ], + "page_idx": 35 + }, + { + "type": "text", + "text": "grewes", + "text_level": 1, + "bbox": [ + 308, + 315, + 331, + 372 + ], + "page_idx": 35 + }, + { + "type": "text", + "text": "VLM Answer: Let me solve this step by step:", + "bbox": [ + 336, + 262, + 612, + 273 + ], + "page_idx": 35 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1) Looking at the immunofluorescence images:", + "- Sox9 (green) marks astrocytes", + "- Sema4B (red) shows a particular distribution pattern", + "- The merged image shows both markers plus nuclear staining (blue)", + "2) Analyzing Sema4B's distribution:", + "- It appears as red punctate sta", + "- Located primarily around nuclear areas", + "- Shows dispersed pattern in cytoplasm", + "- Not concentrated in nucleus itself" + ], + "bbox": [ + 364, + 273, + 740, + 378 + ], + "page_idx": 35 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "3) This distribution pattern suggests:", + "- The protein is involved in cytoplasmic processes", + "- Shows vesicular-like pattern", + "- Located in areas suggesting trafficking or transport", + "4) Looking at the options:", + "- Option 4 mentions protein transport", + "- This matches the vesicular-like pattern", + "- The perinuclear and cytoplasmic distribution is consistent with protein trafficking", + "- Other options don't align as well with the observed distribution pattern", + "5) Reasoning:", + "- The punctate pattern and distribution is characteristic of proteins involved in cellular trafficking", + "- The location around nucleus (where protein synthesis occurs) and throughout cytoplasm supports a transport role", + "- This pattern is less consistent with metabolic, signaling, or transcriptional roles", + "The answer is (4)" + ], + "bbox": [ + 364, + 273, + 885, + 556 + ], + "page_idx": 35 + }, + { + "type": "text", + "text": "Subject: Neurobiology Bloom's level: 3 (Applica Specimen: Nerve cells", + "bbox": [ + 101, + 522, + 243, + 556 + ], + "page_idx": 35 + }, + { + "type": "text", + "text": "Error type: Misconception (knowledge)", + "text_level": 1, + "bbox": [ + 362, + 569, + 584, + 580 + ], + "page_idx": 35 + }, + { + "type": "text", + "text": "Error reason: Sema4B is a transmembrane protein that plays a role in cellular signaling. TM proteins can be internalized via endocytosis and does not mean that its main function is intracellular trafficking.", + "bbox": [ + 361, + 582, + 859, + 616 + ], + "page_idx": 35 + }, + { + "type": "page_number", + "text": "24", + "bbox": [ + 488, + 924, + 506, + 935 + ], + "page_idx": 35 + }, + { + "type": "image", + "img_path": "images/a872000bba91acdae646a6ff5f409e3cf01d7f9dceed062000bde34f8b143241.jpg", + "image_caption": [ + "Subject: Neurobiology", + "Bloom's level: 4 (Analysis)", + "Specimen: Fibroblasts from an Alzheimer's patient", + "Figure 18" + ], + "image_footnote": [], + "bbox": [ + 94, + 90, + 339, + 354 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "Q: A cryo-Electron Tomography image of fibroblasts from a 70-year-old patient with Alzheimer's disease shows three mitochondria gathered together. 
What is the most likely reason for this mitochondrial clustering.", + "bbox": [ + 369, + 94, + 852, + 130 + ], + "page_idx": 36 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A) Increased mitochondrial biogenesis", + "B) Enhanced mitochondrial fusion leading to interconnected networks", + "C) Mitochondrial fission due to cellular stress", + "D) Impaired mitophagy causing accumulation of damaged mitochondria $\\times$" + ], + "bbox": [ + 370, + 130, + 779, + 176 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "VLM Answer: Let's solve this step by step:", + "text_level": 1, + "bbox": [ + 369, + 188, + 604, + 200 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "1) First, let's consider the context:", + "bbox": [ + 370, + 200, + 557, + 212 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "- This is from a 70-year-old patient with Alzheimer's disease", + "bbox": [ + 370, + 213, + 696, + 223 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "- Alzheimer's is known to affect cellular function, including mitochondrial health", + "bbox": [ + 370, + 224, + 795, + 234 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "- We're observing clustered mitochondria in the image", + "bbox": [ + 370, + 236, + 665, + 247 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "2) Analyzing each option:", + "bbox": [ + 370, + 248, + 509, + 258 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "- Option 1 (Increased biogenesis): Would show new, healthy mitochondria, not clustering", + "bbox": [ + 370, + 260, + 851, + 271 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "- Option 2 (Enhanced fusion): Would show elongated, connected mitochondria", + "bbox": [ + 370, + 272, + 797, + 282 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "- Option 3 (Mitochondrial fission): Would show smaller, fragmented mitochondria", + "bbox": [ + 370, + 282, + 805, + 294 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "- Option 4 (Impaired mitophagy): Would show accumulated, potentially damaged mitochondria", + "bbox": [ + 370, + 294, + 883, + 305 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "3) Looking at the image:", + "bbox": [ + 370, + 306, + 503, + 316 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "- We see multiple mitochondria clustered together", + "bbox": [ + 370, + 318, + 643, + 328 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "- They appear to be intact but gathered in groups", + "bbox": [ + 370, + 329, + 640, + 340 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "- This suggests they're not being properly cleared from the cell", + "bbox": [ + 370, + 342, + 710, + 352 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "4) Considering Alzheimer's disease:", + "bbox": [ + 370, + 353, + 565, + 364 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "- It's known to impair cellular cleanup mechanisms", + "bbox": [ + 370, + 364, + 645, + 375 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "- Mitophagy is a key quality control process for mitochondria", + "bbox": [ + 370, + 376, + 700, + 387 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "- When impaired, damaged mitochondria accumulate instead of being removed", + "bbox": [ + 370, + 388, + 803, + 398 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "5) Conclusion:", + "bbox": [ + 370, + 400, + 450, + 410 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": 
"The clustering is most likely due to impaired mitophagy, as this would explain why multiple mitochondria are gathering instead of being properly cleared from the cell, which is a known issue in Alzheimer's disease.", + "bbox": [ + 370, + 411, + 869, + 444 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "The answer is (4)", + "bbox": [ + 370, + 446, + 464, + 457 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "Error type: Misconception (knowledge)", + "text_level": 1, + "bbox": [ + 369, + 476, + 588, + 488 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "Error reason: Impaired mitophagy may result in larger overall mitochondria, not an increase in small mitochondrial aggregates. Fission is a better answer in this case.", + "bbox": [ + 367, + 488, + 872, + 511 + ], + "page_idx": 36 + }, + { + "type": "page_number", + "text": "25", + "bbox": [ + 488, + 924, + 506, + 935 + ], + "page_idx": 36 + }, + { + "type": "image", + "img_path": "images/aae54e299573660890713fbe82c06e4beaab2c2f055ca5b5537d9925f520c9f1.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 94, + 90, + 397, + 238 + ], + "page_idx": 37 + }, + { + "type": "image", + "img_path": "images/192ce2e7872e205b3d334f2cf4f4af0e50b62bbb5c61f995505d837cf345674d.jpg", + "image_caption": [ + "Subject: Neurobiology", + "Bloom's level: 3 (Application)", + "Specimen: Resection specimen for epilepsy", + "Figure 19" + ], + "image_footnote": [], + "bbox": [ + 94, + 243, + 397, + 382 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "Q: A microscopic analysis of tissue from an individual suffering from recurrent seizures involves using hematoxylin & eosin stain alongside a vimentin stain. Cells with stronger vimentin expression are significantly larger than surrounding entities. 
Which identity is most probable for these prominent vimentin-expressing cells?", + "bbox": [ + 415, + 97, + 887, + 142 + ], + "page_idx": 37 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A) Dystrophic neurons typically found around glial scars", + "B) Enlarged reactive astrocytes near injury sites X", + "C) Aberrant large cells characteristic of cortical malformation", + "D) Macrophages adapted to engulf cellular debris", + "E) Fibroblast proliferation in response to brain injury" + ], + "bbox": [ + 416, + 151, + 728, + 205 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "VLM Answer: Let me solve this step by step:", + "bbox": [ + 416, + 218, + 640, + 229 + ], + "page_idx": 37 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1) First, let's analyze what we're seeing in the images:", + "- The H&E stain (top image) shows tissue with scattered cells and some enlarged cells", + "- The vimentin stain (bottom image) shows strong brown staining of certain larger cells", + "2) Key information from the question:", + "- Patient has recurrent seizures", + "- Vimentin-positive cells are significantly larger", + "- Looking for most probable identity of these cells", + "3) Important facts about vimentin:", + "- Vimentin is an intermediate filament protein", + "- Strongly expressed in reactive astrocytes", + "- Also expressed in mesenchymal cells", + "4) In the context of seizures:" + ], + "bbox": [ + 416, + 229, + 844, + 356 + ], + "page_idx": 37 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Seizures can cause brain injury", + "- Brain injury triggers astrogliosis", + "- Astrocytes become reactive and enlarged in response to injury", + "5) Looking at the options:", + "- Option 1: Neurons don't typically express vimentin", + "- Option 2: Reactive astrocytes do express vimentin and enlarge during injury", + "- Option 3: Cortical malformation cells wouldn't necessarily express vimentin", + "- Option 4: Macrophages are typically smaller", + "- Option 5: Fibroblasts would be rare in this context" + ], + "bbox": [ + 416, + 357, + 799, + 450 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "6) The morphology and staining pattern of the large cells, combined with the clinical context of seizures, most strongly suggests reactive astrocytes.", + "bbox": [ + 415, + 462, + 883, + 484 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "The answer is (2) X", + "bbox": [ + 416, + 484, + 519, + 494 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "Error type: Misconception (knowledge)", + "text_level": 1, + "bbox": [ + 415, + 508, + 614, + 521 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "Error reason: The AI is able to interpret the question and image, but lacks the knowledge to recognize the link between recurrent seizures, cortical dysplasia, and large ballooned cells.", + "bbox": [ + 415, + 521, + 867, + 542 + ], + "page_idx": 37 + }, + { + "type": "page_number", + "text": "26", + "bbox": [ + 488, + 924, + 508, + 936 + ], + "page_idx": 37 + }, + { + "type": "image", + "img_path": "images/2cf2ebd44648285027b531c31465d7a5f0c91a3a18798de8ede6babc5c7fd30b.jpg", + "image_caption": [ + "Subject: Neurobiology", + "Bloom's level: 4 (Analysis)", + "Specimen: Brain parenchyma", + "Figure 20" + ], + "image_footnote": [], + "bbox": [ + 99, + 88, + 397, + 267 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Q: A certain specialized stain reveals unusual cell movement crossing boundaries typical 
of neuronal migration in a tissue resection from a case of structural brain anomaly. Which condition is primarily associated with this type of cellular pattern?", + "bbox": [ + 410, + 93, + 890, + 127 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "A) Focal cortical dysplasia", + "text_level": 1, + "bbox": [ + 411, + 136, + 558, + 147 + ], + "page_idx": 38 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "B) Congenital Zika syndrome", + "C) Schizencephaly", + "D) Type II lissencephaly", + "E) Walker-Warburg syndrome" + ], + "bbox": [ + 411, + 148, + 562, + 190 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "VLM Answer: Let me solve this step by step:", + "text_level": 1, + "bbox": [ + 411, + 200, + 638, + 210 + ], + "page_idx": 38 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1) The image shows a histological section with specialized staining that reveals abnormal cellular organization and migration patterns.", + "2) The question asks about unusual cell movement crossing typical neuronal migration boundaries.", + "3) Looking at the options:", + "- Focal cortical dysplasia involves disorganized neurons in localized areas", + "- Congenital Zika syndrome primarily affects neural progenitor cells", + "- Schizencephaly involves clefts in the cerebral hemispheres", + "- Type II lissencephaly shows abnormal neuronal migration and positioning", + "- Walker-Warburg syndrome is a severe form of congenital muscular dystrophy with brain malformations", + "4) Key characteristics of focal cortical dysplasia include:", + "- Abnormal neuronal migration", + "- Disorganized cortical layering", + "- Abnormal cell types crossing normal boundaries", + "- Disrupted cerebral cortex architecture", + "5) The histological image shows features consistent with focal cortical dysplasia, including:", + "- Disrupted cellular organization", + "- Abnormal migration patterns", + "- Cells crossing typical boundaries" + ], + "bbox": [ + 411, + 212, + 890, + 426 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "The answer is (1)", + "bbox": [ + 411, + 426, + 500, + 436 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Error type: Misconception (knowledge)", + "text_level": 1, + "bbox": [ + 411, + 446, + 614, + 458 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "Error reason: Misconception about the scope and extent of neuronal migration patterns in focal cortical dysplasia (FCD) versus Type II lissencephaly. 
FCD will have abnormal cortical lamination, but the neurons are still within the expected gray/white matter of normal neuronal migration.", + "bbox": [ + 410, + 459, + 890, + 491 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "They do not extend beyond and cross established neuronal boundaries such as the glial limitans.", + "bbox": [ + 411, + 491, + 852, + 511 + ], + "page_idx": 38 + }, + { + "type": "page_number", + "text": "27", + "bbox": [ + 488, + 924, + 506, + 935 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "F.6.3 Overgeneralization or oversimplification errors", + "text_level": 1, + "bbox": [ + 91, + 90, + 473, + 106 + ], + "page_idx": 39 + }, + { + "type": "image", + "img_path": "images/fc99b16469e74d1b69925f4088361f7f2819b1b98350cca0e2d9301b71143abe.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 93, + 133, + 374, + 295 + ], + "page_idx": 39 + }, + { + "type": "image", + "img_path": "images/2103ab540b8ed96e55d5265831a466e118edaf3e9dac7dfc8721838017bd7bb2.jpg", + "image_caption": [ + "Subject: Pathology", + "Bloom's level: 2 (Comprehension)", + "Specimen: schwannoma tissue", + "Figure 21" + ], + "image_footnote": [], + "bbox": [ + 93, + 299, + 374, + 462 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "Q: In a histological examination, an H&E-stained image shows Verocay bodies and cystic degeneration with pseudoepithelial metaplasia, indicative of schwannoma. How does the risk of malignant transformation in schwannoma compare to other peripheral nerve sheath tumors?", + "bbox": [ + 387, + 132, + 890, + 167 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "A) Schwannomas exhibit a lower risk of malignant transformation compared to other peripheral nerve sheath tumors.", + "text_level": 1, + "bbox": [ + 387, + 176, + 846, + 199 + ], + "page_idx": 39 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "B) Schwannomas and neurofibromas have similar risks of malignant transformation.", + "C) The risk of malignant transformation in schwannomas is dependent on specific genetic factors.", + "D) Schwannomas have an increased risk of malignancy in the presence of multiple tumors.", + "E) Schwannomas rarely undergo malignant transformation, regardless of tumor location or size." 
+ ], + "bbox": [ + 388, + 200, + 888, + 257 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "VLM Answer: Let me think about this step by step:", + "text_level": 1, + "bbox": [ + 385, + 268, + 656, + 279 + ], + "page_idx": 39 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1) The images show typical histological features of schwannoma with:", + "- Verocay bodies (palisading nuclei)", + "- Cystic degeneration", + "- Pseudoepithelial metaplasia", + "2) Regarding malignant transformation in schwannomas:", + "- Schwannomas are typically benign tumors", + "- They rarely undergo malignant transformation", + "- This is true regardless of their size or location", + "- Even multiple schwannomas (as in schwannomatosis) rarely become malignant", + "3) Comparing to other peripheral nerve sheath tumors:", + "- Neurofibromas, especially plexiform types, have a higher risk of malignant transformation", + "- This is particularly true in the setting of NF1 (neurofibromatosis type 1)", + "- Malignant peripheral nerve sheath tumors (MPNSTs) more commonly arise from neurofibromas than schwannomas", + "4) The statement about genetic factors or increased risk with multiple tumors isn't accurate for schwannomas, though it may be true for other nerve sheath tumors", + "5) Looking at the options, the key is that schwannomas maintain their benign nature in most cases, regardless of circumstances.", + "The answer is (5)" + ], + "bbox": [ + 388, + 280, + 890, + 494 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "Error type: Overgeneralization", + "text_level": 1, + "bbox": [ + 387, + 510, + 552, + 522 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "Error reason: The selected choice does not address the question asked \"compared to other peripheral nerve sheath tumors?\" . The \"best answer\" for the question should compare to other tumors as specified in the question. The AI has \"oversimplified\" the question to ask a simpler question than the one asked.", + "bbox": [ + 385, + 522, + 885, + 566 + ], + "page_idx": 39 + }, + { + "type": "page_number", + "text": "28", + "bbox": [ + 488, + 924, + 506, + 935 + ], + "page_idx": 39 + }, + { + "type": "image", + "img_path": "images/2af7fd39d58465679dcf68fe3f7dcfa4322a7c1932d3ac3e90bfb95ac834d098.jpg", + "image_caption": [ + "Subject: Cell and molecular biology", + "Figure 22" + ], + "image_footnote": [], + "bbox": [ + 102, + 88, + 308, + 483 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "Bloom's level: 4 (Analysis)", + "bbox": [ + 102, + 512, + 261, + 523 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "Specimen: HEK293 cells", + "bbox": [ + 102, + 525, + 254, + 536 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "(wildtype and PEX19", + "bbox": [ + 102, + 536, + 230, + 549 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "knockdown)", + "bbox": [ + 102, + 550, + 181, + 561 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "Q: Confocal microscopy images depict live HEK293 cells subjected to CRISPR/Cas9-mediated knockdown of a specific gene, alongside wildtype cells. The cells are stained with BODIPY-C12 (cyan) to highlight lipid droplets and Hoechst to stain the nuclei. In wildtype cells, distinct organellar structures are visible, whereas these structures are absent in the gene knockdown cells. 
What is the most likely cellular outcome resulting from the knockdown of this gene in HEK293 cells?", + "bbox": [ + 330, + 95, + 888, + 172 + ], + "page_idx": 40 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "A) Disruption of mitochondrial fission", + "B) Impaired peroxisome biogenesis", + "C) Altered lipid metabolism pathways X", + "D) Reduced endoplasmic reticulum function", + "E) Decreased nuclear transport processes" + ], + "bbox": [ + 331, + 184, + 589, + 247 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "VLM Answer: Let me solve this step by step:", + "text_level": 1, + "bbox": [ + 330, + 265, + 594, + 277 + ], + "page_idx": 40 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1) First, let's analyze what we're seeing in the images:", + "- Live HEK293 cells are shown", + "- BODIPY-C12 staining (cyan) shows lipid droplets", + "- Hoechst staining shows nuclei (grey/white)", + "- The images show a time series (left to right)" + ], + "bbox": [ + 331, + 277, + 645, + 340 + ], + "page_idx": 40 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "2) Key observations:", + "- The cyan staining (BODIPY-C12) indicates the presence of lipid droplets", + "- There's a difference between wildtype and knockdown cells in terms of organellar structures", + "- The staining specifically targets lipids", + "3) Consider the options:", + "- Mitochondrial fission would primarily affect energy metabolism", + "- Peroxisome biogenesis affects lipid breakdown", + "- Lipid metabolism directly relates to lipid droplet formation", + "- ER function is broader than just lipid effects", + "- Nuclear transport isn't directly related to lipid droplets", + "4) Reasoning:", + "- Since BODIPY-C12 specifically stains lipids", + "- The visible differences are in lipid-containing structures", + "- The most direct connection would be to lipid metabolism", + "- Changes in lipid metabolism would directly affect lipid droplet formation and distribution", + "The answer is (3)" + ], + "bbox": [ + 331, + 340, + 877, + 540 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "Error type: Overgeneralization", + "text_level": 1, + "bbox": [ + 330, + 558, + 511, + 570 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "Error reason: Assumes that the changes in lipid droplets are directly linked to altered lipid metabolism pathways, without fully integrating the implications of peroxisome biogenesis in lipid metabolism and organelle dynamics.", + "bbox": [ + 328, + 571, + 887, + 609 + ], + "page_idx": 40 + }, + { + "type": "page_number", + "text": "29", + "bbox": [ + 488, + 924, + 508, + 936 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "F.6.4 Hallucination or errors due to added details", + "text_level": 1, + "bbox": [ + 91, + 90, + 450, + 104 + ], + "page_idx": 41 + }, + { + "type": "image", + "img_path": "images/2bb7f46b0dfce685f6b9872de90d66f2897a2d52aa5e03a72b5471deb07e4d20.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 99, + 125, + 295, + 313 + ], + "page_idx": 41 + }, + { + "type": "image", + "img_path": "images/f277bd23b385a1d76b6bf26e48cb6c0f1e17d4ce2491d44a882e718ea823dd4f.jpg", + "image_caption": [ + "Subject: Cell and molecular biology", + "Bloom's level: 4 (Analysis)", + "Specimen: cellulose fibers in vitro", + "Figure 23" + ], + "image_footnote": [], + "bbox": [ + 99, + 323, + 290, + 512 + ], + "page_idx": 41 + }, + { + "type": "text", + "text": "Q: A CryoET tomographic slice image and its corresponding 
3D annotation of cellulose fibers in vitro show differing amounts of cellulose fibers. Beyond the 2D nature of the slice versus the 3D nature of the annotation, which mechanism most likely accounts for the observed differences?", + "bbox": [ + 316, + 131, + 888, + 170 + ], + "page_idx": 41 + }, + { + "type": "text", + "text": "A) Errors in the alignment during image reconstruction leading to artifacts.", + "bbox": [ + 318, + 181, + 756, + 195 + ], + "page_idx": 41 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "B) Overlapping fibers being misinterpreted as single fibers due to resolution limits. $\\times$", + "C) Variability in manual annotation leading to subjective bias.", + "D) Sample dehydration causing shrinkage of fibers in the image.", + "E) Background noise in the image causing misidentification of fibers." + ], + "bbox": [ + 318, + 195, + 820, + 247 + ], + "page_idx": 41 + }, + { + "type": "text", + "text": "VLM Answer: Let me solve this step by step:", + "text_level": 1, + "bbox": [ + 318, + 262, + 589, + 275 + ], + "page_idx": 41 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1) First, let's analyze what we're seeing:", + "- Left image: CryoET tomographic slice showing cellulose fibers in grayscale", + "- Right image: 3D annotation of the fibers in blue", + "- There appears to be a difference in how many fibers are visible/annotated", + "2) Consider the limitations of CryoET imaging:", + "- Resolution is limited", + "- Multiple fibers can appear to overlap in 2D views", + "- Depth information can be challenging to interpret", + "3) Evaluate each option:", + "- (1) Alignment errors typically cause obvious artifacts, which aren't apparent here", + "- (2) Resolution limits could cause overlapping fibers to appear as one in the 2D slice", + "- (3) Manual annotation variability wouldn't explain systematic differences", + "- (4) Sample dehydration would affect both views equally", + "- (5) Background noise might affect fiber identification but isn't the primary issue", + "4) The key observation is that the 2D slice shows fewer distinct fibers than the 3D annotation, likely because:", + "- Multiple fibers that overlap in the 2D view can be properly separated in 3D", + "- The resolution limits of the microscope make it difficult to distinguish overlapping fibers in the 2D view" + ], + "bbox": [ + 318, + 276, + 877, + 516 + ], + "page_idx": 41 + }, + { + "type": "text", + "text": "The answer is (2)", + "bbox": [ + 320, + 518, + 423, + 531 + ], + "page_idx": 41 + }, + { + "type": "text", + "text": "Error type: Hallucination", + "text_level": 1, + "bbox": [ + 318, + 544, + 470, + 556 + ], + "page_idx": 41 + }, + { + "type": "text", + "text": "Error reason: The question does not state there are systematic differences. This detail is added during the AI CoT reasoning. A better answer would rule out easy or obvious causes of differences (random or systematic). Although fibers can overlap in Cryo-EM, it is generally considered to have a reasonably high resolution.", + "bbox": [ + 316, + 556, + 883, + 609 + ], + "page_idx": 41 + }, + { + "type": "page_number", + "text": "30", + "bbox": [ + 488, + 924, + 508, + 936 + ], + "page_idx": 41 + }, + { + "type": "text", + "text": "F.7. 
Reasoning trace analysis", + "text_level": 1, + "bbox": [ + 89, + 90, + 316, + 107 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "F.7.1 Human step-by-step reasoning", + "text_level": 1, + "bbox": [ + 89, + 113, + 357, + 128 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "To better understand how subject matter experts approach questions in MicroVQA, we have a cell biologist and microscopist manually annotate their reasoning for each of the three main use cases.", + "bbox": [ + 89, + 137, + 903, + 167 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "Reasoning example for task 1 'expert visual understanding'.", + "bbox": [ + 109, + 167, + 506, + 183 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "\\*Question\\*", + "bbox": [ + 107, + 204, + 179, + 212 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "What subcellular structure could the bright green spots represent?*", + "bbox": [ + 107, + 213, + 568, + 224 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "\\*Answer", + "bbox": [ + 107, + 234, + 165, + 242 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "This staining pattern most likely refers to cytoplasmic bodies.", + "bbox": [ + 107, + 243, + 540, + 253 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "\\*Reasoning:\\*", + "bbox": [ + 107, + 263, + 192, + 273 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "Find green spots (no reasoning required)", + "bbox": [ + 107, + 273, + 395, + 284 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Rule out staining artefacts", + "bbox": [ + 107, + 284, + 310, + 292 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Look at marker channels to assess whether cells appear intact", + "bbox": [ + 137, + 294, + 568, + 304 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Compare the green spots to the marker channels for nuclei or microtubules", + "bbox": [ + 137, + 305, + 651, + 313 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Check that green spots are always inside the cells", + "bbox": [ + 137, + 314, + 493, + 323 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Yes, inside cells, so these are no staining artefact.", + "bbox": [ + 163, + 324, + 540, + 334 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Look up which spotty staining patterns in cells exist", + "bbox": [ + 107, + 334, + 488, + 344 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "Nuclear bodies", + "bbox": [ + 147, + 345, + 250, + 354 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Nuclear speckles", + "bbox": [ + 137, + 354, + 261, + 364 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Nucleoli", + "bbox": [ + 137, + 364, + 207, + 373 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Cytoplasmic bodies", + "bbox": [ + 137, + 375, + 274, + 383 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Centrosome", + "bbox": [ + 137, + 385, + 220, + 393 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Vesicles", + "bbox": [ + 137, + 393, + 207, + 402 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Intermediate filaments", + "bbox": [ + 137, + 404, + 303, + 412 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Spotty microtubule staining", + "bbox": [ + 137, + 414, + 336, + 425 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Explore which patterns can be excluded", + "bbox": [ + 107, + 425, + 385, + 434 + ], 
+ "page_idx": 42 + }, + { + "type": "text", + "text": "- Check whether green spots are inside or outside of nuclei to know what patterns to explore", + "bbox": [ + 137, + 435, + 767, + 445 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Outside of nuclei, thus nuclear subcellular localizations can be excluded, and cytoplasmic spotty", + "bbox": [ + 163, + 445, + 844, + 454 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "patterns need to be explored", + "bbox": [ + 194, + 455, + 390, + 464 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Check whether green spots are placed on microtubules", + "bbox": [ + 137, + 465, + 508, + 474 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- No, thus cannot be spotty microtubule stain", + "bbox": [ + 163, + 474, + 473, + 484 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Assess localization > if accumulating around microtubule organizing center could be likely vesicles or", + "bbox": [ + 137, + 484, + 849, + 494 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "centriolar satellites", + "bbox": [ + 169, + 496, + 315, + 503 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- No, not accumulating at centrosome", + "bbox": [ + 163, + 506, + 413, + 515 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Assess size and shape of the spots", + "bbox": [ + 137, + 516, + 385, + 525 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Small size and round, can exclude intermediate filaments", + "bbox": [ + 163, + 526, + 563, + 534 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Count green spots per cell and check that not more than 10", + "bbox": [ + 137, + 535, + 549, + 544 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Less likely vesicles, likely cytoplasmic bodies", + "bbox": [ + 163, + 545, + 501, + 555 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "Reasoning example for task 2 'hypothesis generation'.", + "text_level": 1, + "bbox": [ + 109, + 575, + 468, + 590 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "\\*Question\\*", + "bbox": [ + 107, + 612, + 179, + 621 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "The sperm cells swim in circles. 
Wouldn't they need to swim straight to reach the egg?", + "bbox": [ + 107, + 622, + 700, + 632 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "\\*Answer\\*", + "bbox": [ + 107, + 643, + 165, + 651 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "Sperm motility and its relation to reaching the egg and fertility is very complex and still not fully understood.", + "bbox": [ + 107, + 652, + 883, + 662 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "Yet , it is known that sperm swimming in circles in a shallow observation chamber like you might have", + "bbox": [ + 143, + 662, + 856, + 672 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "applied, may swim on helical or twisted-ribbon structures in an in vivo or 3D space and thereby might in", + "bbox": [ + 143, + 672, + 856, + 681 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "fact efficiently swim forward.", + "bbox": [ + 143, + 681, + 348, + 691 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "\\*Reasoning\\*", + "bbox": [ + 109, + 703, + 186, + 712 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Explore the swimming trajectories to validate what the user means with swimming in circles.", + "bbox": [ + 109, + 713, + 746, + 722 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "Detect trajectory, assess how it would be described: curved or circular swimming trajectory", + "bbox": [ + 147, + 723, + 774, + 733 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Explore what is reported on swimming trajectories of sperm cells.", + "bbox": [ + 109, + 733, + 568, + 743 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Sperm cell swimming trajectories are species dependent", + "bbox": [ + 137, + 744, + 521, + 753 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Validate species:", + "bbox": [ + 163, + 753, + 295, + 762 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "Assess whether the user mentioned the species in the prompt.", + "bbox": [ + 205, + 763, + 616, + 773 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Assess look of cells in image (morphology, geometry) and compare to other images with known species", + "bbox": [ + 192, + 773, + 883, + 782 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "eventually", + "bbox": [ + 223, + 784, + 302, + 792 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "Likely human sperm.", + "bbox": [ + 228, + 792, + 362, + 803 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Sperm cell swimming trajectories are experimental setup dependent: shallow versus deep observation chamber", + "bbox": [ + 137, + 804, + 877, + 813 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Validate setup:", + "bbox": [ + 163, + 814, + 281, + 823 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Brightfield microscopy, sperm cells are always swimming parallel, little to no off-focus blurring", + "bbox": [ + 192, + 824, + 870, + 834 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "of the tail.", + "bbox": [ + 163, + 835, + 305, + 843 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Assess how images from different experimental setups look like for human sperm", + "bbox": [ + 218, + 844, + 767, + 853 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "- Likely shallow cover-slip-based observation chamber", + "bbox": [ + 246, + 854, + 611, + 863 + ], + "page_idx": 42 + }, + { + "type": "text", + 
"text": "- Consequences of setup for trajectory", + "bbox": [ + 272, + 864, + 535, + 875 + ], + "page_idx": 42 + }, + { + "type": "text", + "text": "In shallow observation chambers sperm cannot freely beat and thus beat parallel to", + "bbox": [ + 300, + 875, + 864, + 883 + ], + "page_idx": 42 + }, + { + "type": "page_number", + "text": "31", + "bbox": [ + 488, + 924, + 506, + 936 + ], + "page_idx": 42 + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [ + "Reasoning example for task 3 'experiment proposal'." + ], + "code_body": "the glass surface, with no 3D component. - Sperm cells hitting a glass surface align their beat with the glass and this way can get trapped if asymmetrically beating. - Curved trajectory in 2D may relate to a progressive/straight swimming path in 3D - Sperm cell swimming trajectories are stimulus dependent and maturation-state dependent. - Validate setup and explore metadata provided: - Check literature and compare image to available images of sperm from healthy donors (if available). - Compare detected swimming trajectory to swimming trajectories of sperm explored in a similar species and experimental setup where sperm cells are perturbed (if available). - Are there indications for any treatment under which the detected swimming trajectory would be abnormal and how would these look like (if available). - No indications for any treatment. Neither in prompt nor in image. - Explore what is known on sperm cell trajectories to reach the egg in humans. - Not much is known since in vivo studies are not possible. - Sperm cells need to be able to develop hyperactive motility to swim to the egg - How do hyperactive sperm cells look? - Asymmetric beating - What trajectory would they swim in in a 2D observation chamber? - Curved trajectory - Does that fit the swimming trajectory presented? - Yes - Assess whether when summarizing all the obtained information the conclusion from the observed swimming trajectory to sperm not reaching the egg is valid. - No. - Compose summarizing reply to support the conclusion.", + "guess_lang": "txt", + "bbox": [ + 106, + 92, + 883, + 349 + ], + "page_idx": 43 + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [], + "code_body": "\\*Question\\* Can you suggest a simple experiment to find out whether centrosomal multiplication is occurring here? \\*Answer\\* One option could be to co-stain the EDC3 antibody with a centrosomal marker like PCNT or CEP250. 
\n\\*Reasoning\\* - Extract hypothesis to be tested from the question EDC3 is labeling centrosomes and thus, centrosomal multiplication is occurring in the pictured cell setup - Immunofluorescent antibody staining on a protein of interest (EDC3) in a human cultured cell type - Compare to similar experimental setups that support such a hypothesis - Immunofluorescent antibody staining of centrosomes, co-labeling cell markers, performing micros c counting centrosomes Acquiring or creating a transgenic cell line with a GFP-labeled centrosomal proteins, followed by microscopy \n- Determine which methods are compatible with EDC3 staining Both compatible with co-staining EDC3 \n- Determine which experiment is simplest for someone performing the shown experimental setup and if e one can be realized in less time Immunofluorescent staining of EDC3 and centrosomes \nFind centrosome markers for human cells and assess suitability for this cell type \nFinal decision on experimental setup", + "guess_lang": "latex", + "bbox": [ + 106, + 398, + 803, + 633 + ], + "page_idx": 43 + }, + { + "type": "text", + "text": "F.7.2 LLM-based error analysis", + "text_level": 1, + "bbox": [ + 89, + 662, + 328, + 678 + ], + "page_idx": 43 + }, + { + "type": "text", + "text": "To better understand how the model approaches questions in MicroVQA, we extend our manual reasoning trace analysis from a subset of the dataset to all questions by leveraging zero-shot prompting of an LLM. The automatic error categorization achieves $63\\%$ accuracy compared to manually assigned categories. Although imperfect, automated error categorization is scalable and preserves the overall distribution of human-identified errors, enabling broader analysis. We also classify correctly answered questions to quantify overall changes across experiments. Below, we present the prompts and category definitions used for classification.", + "bbox": [ + 89, + 686, + 906, + 776 + ], + "page_idx": 43 + }, + { + "type": "text", + "text": "Below is a multiple choice question with options and the reasoning that lead a model to an incorrect response. Originally the model was also shown an image with the question. Your task is to use the reasoning trace to tag the error type:", + "bbox": [ + 106, + 791, + 870, + 824 + ], + "page_idx": 43 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Perception: the image was not interpreted correctly.", + "- Overgeneralization: the details of the question were ignored and the general case was applied.", + "- Hallucination: details were added during reasoning that weren't in the question or extracted from the image.", + "- Other: the error does not fit the above categories." + ], + "bbox": [ + 107, + 824, + 864, + 863 + ], + "page_idx": 43 + }, + { + "type": "text", + "text": "Question:", + "bbox": [ + 107, + 873, + 173, + 883 + ], + "page_idx": 43 + }, + { + "type": "page_number", + "text": "32", + "bbox": [ + 488, + 924, + 508, + 936 + ], + "page_idx": 43 + }, + { + "type": "code", + "sub_type": "code", + "code_caption": [], + "code_body": "{{question}} \nCorrect answer: {{correct_answer}} \nResoning trace: \n{{reasoning}}", + "guess_lang": "handlebars", + "bbox": [ + 107, + 94, + 343, + 157 + ], + "page_idx": 44 + }, + { + "type": "text", + "text": "Below is a multiple choice question with options and the reasoning that lead a model to a correct response. Originally the model was also shown an image with the question. 
Your task is to use the reasoning trace and determine if the question was answered because of these reasons:", + "bbox": [ + 106, + 203, + 877, + 234 + ], + "page_idx": 44 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- No image: the image is unnecessary because the correct answer doesn't rely on interpreting visual cues.", + "- Visual giveaway: the image is unnecessary because critical visual information is already described in the question itself.", + "- Language bias: the question has information that makes the correct option obvious.", + "- Weak distractors: the distractors are easy to rule out according to the reasoning trace.", + "- Good question: the question is well-constructed and needs the image and specialized knowledge to answer.", + "- Other: the question is hard to answer or doesn't fit the other classes." + ], + "bbox": [ + 107, + 234, + 844, + 305 + ], + "page_idx": 44 + }, + { + "type": "text", + "text": "Question:", + "bbox": [ + 107, + 315, + 171, + 324 + ], + "page_idx": 44 + }, + { + "type": "text", + "text": "{{question}}", + "bbox": [ + 111, + 325, + 192, + 335 + ], + "page_idx": 44 + }, + { + "type": "text", + "text": "Correct answer: {{correct_answer}}", + "bbox": [ + 107, + 345, + 343, + 354 + ], + "page_idx": 44 + }, + { + "type": "text", + "text": "Resoning trace:", + "bbox": [ + 107, + 366, + 212, + 376 + ], + "page_idx": 44 + }, + { + "type": "text", + "text": "{{reasoning}}", + "bbox": [ + 111, + 376, + 199, + 386 + ], + "page_idx": 44 + }, + { + "type": "image", + "img_path": "images/cdadc87db59cf8f1df1a7c0e5bfcee32dc9e6881c48798adceaf6c2b86fd31d5.jpg", + "image_caption": [ + "Without image (ablation)", + "Figure 24. Reasoning trace categorization with an LLM (GPT-4o): Correct (blue) vs. incorrect (red) answers. Comparison of Claude-Sonnet's responses without (left) and with (right) image access." + ], + "image_footnote": [], + "bbox": [ + 91, + 450, + 475, + 748 + ], + "page_idx": 44 + }, + { + "type": "image", + "img_path": "images/e3e98d44525a84452acdb5cce1057f5ed8494b750b0f40a58d254532bc36c58c.jpg", + "image_caption": [ + "With image" + ], + "image_footnote": [], + "bbox": [ + 475, + 452, + 895, + 736 + ], + "page_idx": 44 + }, + { + "type": "text", + "text": "We apply this system to examine the impact of providing or withholding the image for each question (Fig. 24). While overall accuracy remains comparable across both conditions, the distribution of errors and successful reasoning patterns differs meaningfully. As expected, hallucination errors decrease when the image is available, perception errors are rare when the model lacks visual input, and the proportion of questions that do not require an image remains stable.", + "bbox": [ + 88, + 809, + 906, + 869 + ], + "page_idx": 44 + }, + { + "type": "text", + "text": "Most errors arise from questions requiring visual and contextual understanding, highlighting the importance of multimodal reasoning. Overgeneralization errors decrease when the image is provided, as the model is better grounded in the", + "bbox": [ + 89, + 869, + 906, + 901 + ], + "page_idx": 44 + }, + { + "type": "page_number", + "text": "33", + "bbox": [ + 488, + 924, + 506, + 936 + ], + "page_idx": 44 + }, + { + "type": "text", + "text": "question context. However, many questions remain challenging even with visual input, as perception errors increase. 
This suggests that effectively solving our dataset requires models to both maintain the specifics of the question's context and accurately interpret visual information.", + "bbox": [ + 89, + 90, + 906, + 136 + ], + "page_idx": 45 + }, + { + "type": "text", + "text": "Additionally, note that the proportion of questions answered correctly through language shortcuts (e.g., language bias and visual giveaways) remains low in both settings, suggesting the effectiveness of the RefineBot strategy. However, addressing weak distractors remains a persistent challenge for specialized datasets.", + "bbox": [ + 89, + 136, + 906, + 183 + ], + "page_idx": 45 + }, + { + "type": "page_number", + "text": "34", + "bbox": [ + 488, + 924, + 508, + 936 + ], + "page_idx": 45 + }, + { + "type": "image", + "img_path": "images/275d230e47e6850b6583e090ad8aa5d087c76d3cbd98916c8c0735c2b33baeb5.jpg", + "image_caption": [ + "Figure 25. Collage of images from MicroVQA." + ], + "image_footnote": [], + "bbox": [ + 91, + 87, + 903, + 944 + ], + "page_idx": 46 + } +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13399/3216c4a9-6622-4f8f-9268-b62c8b700c4e_model.json b/data/2025/2503_13xxx/2503.13399/3216c4a9-6622-4f8f-9268-b62c8b700c4e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..f56874790f36c36f5dd6a6dfcd2d7989feabf88a --- /dev/null +++ b/data/2025/2503_13xxx/2503.13399/3216c4a9-6622-4f8f-9268-b62c8b700c4e_model.json @@ -0,0 +1,11921 @@ +[ + [ + { + "type": "aside_text", + "bbox": [ + 0.023, + 0.261, + 0.058, + 0.707 + ], + "angle": 270, + "content": "arXiv:2503.13399v1 [cs.CV] 17 Mar 2025" + }, + { + "type": "title", + "bbox": [ + 0.134, + 0.13, + 0.865, + 0.175 + ], + "angle": 0, + "content": "MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research" + }, + { + "type": "text", + "bbox": [ + 0.238, + 0.204, + 0.758, + 0.221 + ], + "angle": 0, + "content": "James Burgess*¹ Jeffrey J Nirschl*¹ Laura Bravo-Sánchez¹,*" + }, + { + "type": "text", + "bbox": [ + 0.157, + 0.222, + 0.84, + 0.24 + ], + "angle": 0, + "content": "Alejandro Lozano\\(^{1}\\) Sanket Rajan Gupte\\(^{1}\\) Jesus G. Galaz-Montoya\\(^{1}\\) Yuhui Zhang\\(^{1}\\)" + }, + { + "type": "text", + "bbox": [ + 0.214, + 0.24, + 0.783, + 0.256 + ], + "angle": 0, + "content": "Yuchang Su² Disha Bhowmik³ Zachary Coman³ Sarina M. Hasan⁴" + }, + { + "type": "text", + "bbox": [ + 0.132, + 0.257, + 0.867, + 0.274 + ], + "angle": 0, + "content": "Alexandra Johannesson\\(^{5}\\) William D. Leineweber\\(^{1}\\) Malvika G Nair\\(^{3}\\) Ridhi Yarlagadda\\(^{3}\\)" + }, + { + "type": "text", + "bbox": [ + 0.157, + 0.275, + 0.841, + 0.291 + ], + "angle": 0, + "content": "Connor Zuraski\\(^{1}\\) Wah Chiu\\(^{1}\\) Sarah Cohen\\(^{3}\\) Jan N. 
Hansen\\(^{1}\\) Manuel D Leonetti\\(^{6}\\)" + }, + { + "type": "text", + "bbox": [ + 0.268, + 0.292, + 0.725, + 0.311 + ], + "angle": 0, + "content": "Chad Liu6 Emma Lundberg1,5,6 Serena Yeung-Levy1,6" + }, + { + "type": "text", + "bbox": [ + 0.204, + 0.319, + 0.793, + 0.335 + ], + "angle": 0, + "content": "1Stanford University, 2Tsinghua University, 3University of North Carolina at Chapel Hill," + }, + { + "type": "text", + "bbox": [ + 0.184, + 0.335, + 0.815, + 0.35 + ], + "angle": 0, + "content": "\\(^{4}\\)Princeton University, \\(^{5}\\)KTH Royal Institute of Technology, \\(^{6}\\)Chan Zuckerberg Biohub Network" + }, + { + "type": "title", + "bbox": [ + 0.11, + 0.37, + 0.325, + 0.385 + ], + "angle": 0, + "content": "1 ) Expert Visual Understanding" + }, + { + "type": "text", + "bbox": [ + 0.114, + 0.395, + 0.33, + 0.408 + ], + "angle": 0, + "content": "What is unusual about the result?" + }, + { + "type": "text", + "bbox": [ + 0.188, + 0.415, + 0.26, + 0.427 + ], + "angle": 0, + "content": "Perception" + }, + { + "type": "text", + "bbox": [ + 0.096, + 0.443, + 0.205, + 0.485 + ], + "angle": 0, + "content": "\"How is the Seipin localized within the endoplasmic reticulum (ER)?\"" + }, + { + "type": "image", + "bbox": [ + 0.21, + 0.439, + 0.35, + 0.493 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.104, + 0.515, + 0.206, + 0.566 + ], + "angle": 0, + "content": "\"Do cells treated with BafA1 (left) express more p26 compared to control (right)?\"" + }, + { + "type": "image", + "bbox": [ + 0.217, + 0.504, + 0.343, + 0.584 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.387, + 0.37, + 0.585, + 0.384 + ], + "angle": 0, + "content": "2 ) Hypothesis generation" + }, + { + "type": "text", + "bbox": [ + 0.371, + 0.397, + 0.632, + 0.411 + ], + "angle": 0, + "content": "Why does this happen in my experiment?" + }, + { + "type": "text", + "bbox": [ + 0.464, + 0.414, + 0.545, + 0.424 + ], + "angle": 0, + "content": "Assessment" + }, + { + "type": "text", + "bbox": [ + 0.369, + 0.438, + 0.507, + 0.491 + ], + "angle": 0, + "content": "\"Which mechanism might explain why ASLV particles show signs of merging in a CryoEM image?\"" + }, + { + "type": "text", + "bbox": [ + 0.38, + 0.52, + 0.507, + 0.561 + ], + "angle": 0, + "content": "\"Which glial cell is likely responsible for this abnormal reticular fiber pattern?\"" + }, + { + "type": "image", + "bbox": [ + 0.521, + 0.436, + 0.623, + 0.509 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.522, + 0.511, + 0.623, + 0.582 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.665, + 0.37, + 0.865, + 0.385 + ], + "angle": 0, + "content": "3 Experimental proposal" + }, + { + "type": "text", + "bbox": [ + 0.684, + 0.396, + 0.872, + 0.409 + ], + "angle": 0, + "content": "How do I test my hypothesis?" 
+ }, + { + "type": "text", + "bbox": [ + 0.757, + 0.414, + 0.8, + 0.424 + ], + "angle": 0, + "content": "Action" + }, + { + "type": "text", + "bbox": [ + 0.651, + 0.439, + 0.793, + 0.491 + ], + "angle": 0, + "content": "\"What experiment could you perform to test if CCNB1 protein levels relate to cell cycle stages in human A-431 cells?\"" + }, + { + "type": "image", + "bbox": [ + 0.806, + 0.434, + 0.905, + 0.508 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.671, + 0.515, + 0.793, + 0.566 + ], + "angle": 0, + "content": "\"What experimental change can increase the likelihood of achieving a high-resolution structure?\"" + }, + { + "type": "image", + "bbox": [ + 0.808, + 0.51, + 0.894, + 0.549 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.808, + 0.55, + 0.894, + 0.58 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.595, + 0.907, + 0.637 + ], + "angle": 0, + "content": "Figure 1. A scientific experimentation workflow drives discovery: researchers analyze experiments, develop hypotheses, and design further experiments to test their ideas. We release MicroVQA, a visual question answering (VQA) benchmark to test these three tasks in the context of biological microscopy. Each of the 1,042 samples is created by a biology expert, and transformed into a multiple choice question (MCQ)." + }, + { + "type": "title", + "bbox": [ + 0.25, + 0.649, + 0.326, + 0.664 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.682, + 0.484, + 0.864 + ], + "angle": 0, + "content": "Scientific research demands sophisticated reasoning over multimodal data, a challenge especially prevalent in biology. Despite recent advances in multimodal large language models (MLLMs) for AI-assisted research, existing multimodal reasoning benchmarks only target up to college-level difficulty, while research-level benchmarks emphasize lower-level perception, falling short of the complex multimodal reasoning needed for scientific discovery. To bridge this gap, we introduce MicroVQA, a visual-question answering (VQA) benchmark designed to assess three reasoning capabilities vital in research workflows: expert image understanding, hypothesis generation, and experiment pro" + }, + { + "type": "page_footnote", + "bbox": [ + 0.091, + 0.875, + 0.465, + 0.888 + ], + "angle": 0, + "content": "Correspondence {jmhb, jnirschl, lmbravo}@stanford.edu" + }, + { + "type": "page_footnote", + "bbox": [ + 0.094, + 0.888, + 0.281, + 0.901 + ], + "angle": 0, + "content": "*These authors contributed equally" + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.875, + 0.465, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.651, + 0.907, + 0.893 + ], + "angle": 0, + "content": "posal. MicroVQA consists of 1,042 multiple-choice questions (MCQs) curated by biology experts across diverse microscopy modalities, ensuring VQA samples represent real scientific practice. In constructing the benchmark, we find that standard MCQ generation methods induce language shortcuts, motivating a new two-stage pipeline: an optimized LLM prompt structures question-answer pairs into MCQs; then, an agent-based 'RefineBot' updates them to remove shortcuts. 
Benchmarking on state-of-the-art MLLMs reveal a peak performance of \\(53\\%\\); models with smaller LLMs only slightly underperform top models, suggesting that language-based reasoning is less challenging than multimodal reasoning; and tuning with scientific articles enhances performance. Expert analysis of chain-of-thought responses shows that perception errors are the most frequent, followed by knowledge errors and then overgen" + }, + { + "type": "page_number", + "bbox": [ + 0.495, + 0.926, + 0.503, + 0.936 + ], + "angle": 0, + "content": "1" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.485, + 0.155 + ], + "angle": 0, + "content": "eralization errors. These insights highlight the challenges in multimodal scientific reasoning, showing MicroVQA is a valuable resource advancing AI-driven biomedical research. MicroVQA is available here, project here." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.167, + 0.223, + 0.182 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.192, + 0.483, + 0.418 + ], + "angle": 0, + "content": "The transformative potential of AI in scientific discovery lies in its ability to interpret and reason over complex, multimodal data while integrating specialist domain knowledge. Recently, multimodal large language models (MLLMs) have demonstrated notable capabilities in data comprehension, recall, and multi-step reasoning across various modalities on exam-style questions [1, 3, 6, 49, 69, 81]. This progress has generated enthusiasm for scientific AI applications, ranging from multimodal chat AI assistants [33, 70, 85, 96] to AI agents capable of orchestrating complex experiments [12, 16, 26, 55]. However, while MLLMs perform well on general exam tasks, they often lack the specialized, contextual reasoning needed for real scientific challenges, where tasks require advanced image-based reasoning, analysis, and hypothesis-driven experimentation." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.419, + 0.483, + 0.614 + ], + "angle": 0, + "content": "In recent years, efforts to apply scientific MLLMs across a range of domains have laid an essential groundwork. Advances have been achieved in image understanding [41, 43, 47, 50, 57], literature analysis [41, 47, 66, 77], and problem-solving at school and early college levels [53, 87]. In applied settings, AI has facilitated scientific tasks in biology [28, 60, 63, 68], chemistry [12, 38, 55], software [63, 73], and diagnostic pathology and radiology [52]. However, there has been less progress on multimodal tasks requiring reasoning in research-level settings. The gap is due in part to a lack of multimodal benchmarks for scientific reasoning – unlike college-level tasks where existing exams can form benchmarks, research tasks require expert curation." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.615, + 0.483, + 0.825 + ], + "angle": 0, + "content": "Microscopy provides a compelling use case for developing such a benchmark. It requires more than pattern recognition or factual recall – it demands the ability to synthesize visual information with experimental context, formulate hypotheses, and propose experimental follow-ups. To evaluate these advanced capabilities, we introduce MicroVQA, a visual-question answering (VQA) benchmark tailored for multimodal scientific reasoning. 
Grounded in the real-world scientific practice of biological microscopy, MicroVQA incorporates 1,042 multiple-choice questions (MCQs) manually created by biological researchers across diverse disciplines, with each question taking more than 30 minutes to generate. Careful expert curation ensures the benchmark is relevant to real scientific applications." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.826, + 0.483, + 0.902 + ], + "angle": 0, + "content": "MicroVQA is designed to evaluate three key reasoning tasks crucial to scientific inquiry: (1) expert image understanding - the ability to recognize scientifically relevant features and integrate contextual information about sample preparation; (2) hypothesis generation - the formulation" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.212 + ], + "angle": 0, + "content": "of scientifically plausible explanations from experimental data; and (3) experiment proposal – the ability to suggest further experiments that could validate or refute hypotheses. These capabilities form a common workflow in scientific experimentation. They demand both abductive reasoning – inferring the best explanation from multiple possibilities – and deductive reasoning – moving from general principles to specific conclusions [81]." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.213, + 0.905, + 0.379 + ], + "angle": 0, + "content": "We tested standard methods to map open scientific VQA samples to multiple-choice questions (MCQs), but found that they induce language shortcuts – the MCQs failed to truly test multimodal abilities. To ensure question quality and difficulty, we developed a two-stage pipeline for MCQ generation: first, an optimized LLM prompt structures QA pairs into well-formatted MCQs; then, a novel agent-based 'RefineBot' system increases question difficulty by rewriting questions without language shortcuts. This ensures that MicroVQA's MCQs better test true scientific capabilities, rather than simple pattern recognition." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.379, + 0.905, + 0.62 + ], + "angle": 0, + "content": "We benchmarked frontier MLLMs on MicroVQA, finding that the best performers achieved a modest peak of \\(53\\%\\), which shows a significant gap between current models and expert-level scientific reasoning. The variation between different MLLMs was small; notably, smaller LLMs only slightly underperform their larger counterparts, suggesting that the key challenge for our MCQs is not language-only reasoning, but multimodal reasoning or knowledge. We also find that finetuning MLLMs on scientific publications improves MicroVQA performance. Finally, a group of biomedical researchers performed a detailed qualitative analysis into MLLM failure modes, uncovering three major failure modes: Expert perception is the most common, followed by knowledge errors, and then overgeneralization reasoning errors. This further supports that visual reasoning is a key challenge in MicroVQA." + }, + { + "type": "text", + "bbox": [ + 0.533, + 0.621, + 0.835, + 0.635 + ], + "angle": 0, + "content": "In summary, our contributions are as follows:" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.636, + 0.905, + 0.711 + ], + "angle": 0, + "content": "- We introduce MicroVQA, a benchmark for multimodal reasoning in biological microscopy. We defined three key tasks for scientific research - expert image understanding, hypothesis generation, and experiment proposal. Then, expert researchers curated 1,042 diverse VQA samples." 
+ }, + { + "type": "text", + "bbox": [ + 0.514, + 0.712, + 0.905, + 0.756 + ], + "angle": 0, + "content": "- We develop a novel two-stage MCQ generation pipeline to address the challenge of creating MCQs from long-format scientific questions without language shortcuts." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.757, + 0.905, + 0.787 + ], + "angle": 0, + "content": "- We provide quantitative and qualitative benchmarking of frontier MLLMs, highlighting areas for improvement." + }, + { + "type": "list", + "bbox": [ + 0.514, + 0.636, + 0.905, + 0.787 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.8, + 0.652, + 0.815 + ], + "angle": 0, + "content": "2. Related work" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.826, + 0.905, + 0.902 + ], + "angle": 0, + "content": "MLLM reasoning benchmarks While perception focuses on identifying an image's visual features, visual reasoning often integrates prior knowledge to derive new conclusions using logical inference strategies like deduction, induction, abduction, and analogical reasoning [81]." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.505, + 0.936 + ], + "angle": 0, + "content": "2" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.485, + 0.273 + ], + "angle": 0, + "content": "Reasoning in MLLMs has focused on image-text VQA [25, 44, 53, 81, 87, 88]. The most prominent examples in science are MMMU [87, 88], ScienceQA, [53] and MathVista [54]. While they do test reasoning, they are derived from exams up to the college level, while our benchmark emphasizes research-level difficulty. Other popular multimodal benchmarks test commonsense inference [71, 90], spatial or relational awareness [35, 91] and integrating prior knowledge [56]. Another line of work is visual abductive reasoning [8, 32], or finding the most likely explanation for an observation from incomplete information; this relates closely to hypothesis generation in this work." + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.294, + 0.485, + 0.583 + ], + "angle": 0, + "content": "MLLMs in science Many works consider multimodal VQA in scientific or medical domains like microscopy images [50], biological images [57], medical images [30, 34], and chemical structure drawings [43]. Still, they mostly test more straightforward perception and recognition. Figure comprehension for scientific articles does require more reasoning with multimodal data [41, 45-47, 61]; however, the images here are curated for publication, and the emphasis is on understanding content, rather than performing practical tasks. On the other hand, our benchmark includes realistic experiment images and tests more advanced reasoning like hypothesis generation and experiment planning. Scientific tasks have received much more attention in language-only LLMs [9, 15, 22, 37, 41, 58, 62, 63, 66, 67, 73, 80, 95], with GPQA being a notable work targeting PhD-level multi-step reasoning [62], but these do not test multimodal reasoning, which is important to many scientific areas [26]. In terms of models, most MLLMs in the biomedical domain are targeted at clinical tasks [34, 42, 59, 65, 75, 92]." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.603, + 0.484, + 0.83 + ], + "angle": 0, + "content": "Multiple-choice question construction A challenge in creating exams for education research and benchmarks in machine learning is generating multiple-choice questions with quality distractors [4, 27]. 
Recent works have explored LLM-based generation methods, including zero-shot distractor prompting, few-shot prompting, LLM fine-tuning with high-quality examples, and prompting for known error types [10, 23, 24, 94]. However, a major challenge is that generated distractors are not sufficiently vision-centric - MLLMs can often select the correct answer from the distractors without access to the image [74]. While this is partly explained by language biases on truly vision-centric MCQs [7, 29], others likely do fail to test visual understanding due to poor MCQ construction [41] - this informs our RefineBot MCQ generation method in Sec. 4.2." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.845, + 0.349, + 0.862 + ], + "angle": 0, + "content": "3. The MicroVQA benchmark" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.871, + 0.484, + 0.901 + ], + "angle": 0, + "content": "Here, we present the MicroVQA benchmark for visual-question-answering (VQA) which advances multimodal" + }, + { + "type": "table_caption", + "bbox": [ + 0.584, + 0.09, + 0.838, + 0.103 + ], + "angle": 0, + "content": "Table 1. MicroVQA benchmark attributes." + }, + { + "type": "table", + "bbox": [ + 0.526, + 0.116, + 0.897, + 0.307 + ], + "angle": 0, + "content": "
Dataset feature | Value
Total questions | 1,042
Multi-image questions | 423
Avg. MCQ question length | 66
Avg. MCQ answer length | 15
Avg. raw question length | 158
Avg. raw answer length | 52
Unique image sets | 255
Image Modalities | Light, Fluoro, Electron
Image Scales | Tissue, Cell, Subcell, Atomic
Organisms | 31
Research areas | 33
Expert question creators | 12
Time to create 1 question | 30-40 mins
Time to quality check 1 MCQ | 5 mins
" + }, + { + "type": "image", + "bbox": [ + 0.517, + 0.325, + 0.895, + 0.438 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.575, + 0.443, + 0.844, + 0.456 + ], + "angle": 0, + "content": "Figure 2. MicroVQA taxonomy of sub-tasks." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.486, + 0.907, + 0.683 + ], + "angle": 0, + "content": "scientific reasoning by tackling two major evaluation challenges. First, defining tasks is challenging because scientific research encompasses many complex activities [33] – we therefore define three important tasks important to biomedical discovery. Second, VQA samples cannot be readily sourced from existing exams or classification datasets – we therefore assembled a team of biologists to curate challenging reasoning problems. In Sec. 3.1, we provide an overview of the MicroVQA benchmark; Sec. 3.2 details the key tasks; and finally, Sec. 3 demonstrates that MicroVQA is at the forefront of measuring research-level reasoning in MLLMs. In the later Sec. 4, we describe our approach for generating multiple-choice questions." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.695, + 0.734, + 0.712 + ], + "angle": 0, + "content": "3.1. Overview of MicroVQA" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.72, + 0.907, + 0.901 + ], + "angle": 0, + "content": "MicroVQA is a comprehensive dataset of 1,042 VQA triplets, manually curated by a team of expert biological researchers from diverse fields, and key attributes are summarized in Tab. 1. These expert-crafted questions test reasoning across three key tasks: expert understanding, hypothesis generation, and experiment proposal, and we additionally provide a taxonomy of sub-tasks in Sec. 3. The questions cover a broad spectrum, ranging from biological issues, such as \"What is unusual about the cell shape?\" - to technical imaging challenges like \"Is this feature due to low signal-to-noise ratio (SNR) or is it real?\". We provide the 'raw' questions and answers written by experts, which" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.504, + 0.936 + ], + "angle": 0, + "content": "3" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.483, + 0.121 + ], + "angle": 0, + "content": "are often long and nuanced, along with converted multiple-choice questions (MCQs) suitable for MLLM evaluation." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.122, + 0.483, + 0.318 + ], + "angle": 0, + "content": "The images encompass the most common microscopy modalities used in human biology: brightfield, fluorescence, and electron microscopy. The sample types span the full range of microscopic scales – tissue, cellular, subcellular, and atomic – and emphasize organisms relevant to human biology and medically motivated tasks, namely human and mouse. Over \\(60\\%\\) of the samples have multiple images because comparison is essential in microscopy research, and because multichannel fluorescence images cannot be represented in standard RGB. All these attributes are provided as metadata tags to facilitate deeper error analysis, thereby enhancing MicroVQA's value as a resource for advancing AI capabilities in scientific research." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.328, + 0.408, + 0.344 + ], + "angle": 0, + "content": "3.2. 
MicroVQA scientific reasoning tasks" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.351, + 0.484, + 0.532 + ], + "angle": 0, + "content": "To define specific reasoning tasks aimed at advancing biomedical research, we established the following criteria: (1) the tasks should use image-text inputs and text outputs suitable for MLLMs, (2) they should require higher-order reasoning beyond simple image processing, and (3) they should emphasize core experimental activities such as experimentation, analysis, and action [33], rather than auxiliary tasks like literature review or writing. The tasks were developed through interviews with nine co-author PIs and postdocs specializing in microscopy research (Appendix D). We identified three essential capabilities that we now introduce, with examples shown in Sec. 3." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.551, + 0.484, + 0.703 + ], + "angle": 0, + "content": "Expert visual understanding After performing an experiment, scientists must interpret data within its experimental context. In biological microscopy, this involves identifying patterns like protein distributions or cell morphologies, while also assessing technical aspects such as artifacts and noise levels. The task requires MLLMs to perform anomaly detection and image comparison, going beyond simple perception in two key ways: analysis must consider sample preparation context, and expert knowledge is needed to evaluate biological features and technical artifacts." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.72, + 0.484, + 0.901 + ], + "angle": 0, + "content": "Hypothesis generation The next step is proposing mechanistic hypotheses to explain experimental data. For example, when a drug treatment causes mitochondrial fragmentation in cells, a hypothesis might be that the drug disrupted calcium homeostasis, activating the DRP1 protein through a multi-step pathway, leading to fragmentation. This requires abductive reasoning [8, 20, 32, 81] as one must select from many possible hypotheses given incomplete information. The process demands the MLLM to integrate relevant domain knowledge with experimental context and visual features to reason about which hypothesis best explains the observations." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.09, + 0.907, + 0.188 + ], + "angle": 0, + "content": "Table 2. Comparing scientific multimodal benchmarks close to MicroVQA for in terms of reasoning level or difficulty beyond college level. We show difficulty level, scientific domain, dataset source, and size. Compared to others, MicroVQA either has higher difficulty level, or it has higher reasoning level (which is established in Fig. 3). Compared to others at the same reasoning level, namely MMMU-Pro, it has similar size." + }, + { + "type": "table", + "bbox": [ + 0.516, + 0.198, + 0.907, + 0.319 + ], + "angle": 0, + "content": "
Benchmark | Difficulty level | Domain | Source | Size
MicroVQA | research | microscopy | expert-curated | 1,042
MMSci [47] | research | science | paper figures | 7132
LabBench [41] | research | biology | webQA | 181
PathVQA [30] | graduate | pathology | textbooks | 16.3k
OmnimedVQA* [34] | graduate | medical | classification dataset | 127.9k
Microbench [50] | graduate | microscopy | classification dataset | 17.2k
MMMU [87] | undergraduate | general | textbooks, webQA | 11k
MMMU Pro [88] | undergraduate | general | MCQ dataset | 1,730
Science QA [53] | grade-school | science | exams | 16.8k
" + }, + { + "type": "image", + "bbox": [ + 0.517, + 0.336, + 0.907, + 0.657 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.512, + 0.666, + 0.907, + 0.749 + ], + "angle": 0, + "content": "Figure 3. Composition of scientific MLLM benchmarks regarding estimated Bloom's taxonomy [11]. Higher levels are more cognitively challenging. MicroVQA has more questions at higher levels compared to other benchmarks, for example, MMMU [87] and ScienceQA [53], while perception-driven medical benchmarks like OmniMedVQA are at lower levels." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.781, + 0.909, + 0.903 + ], + "angle": 0, + "content": "Experiment proposal The final step is designing experiments to validate hypotheses. In microscopy, this often involves selecting appropriate assays and controls, requiring knowledge of suitable protocols and deductive reasoning about whether proposed experiments will provide evidence for or against the hypothesis. This task also may involve adjusting a particular experiment to address technical issues, like how to ensure a good signal-to-noise ratio," + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.926, + 0.505, + 0.936 + ], + "angle": 0, + "content": "4" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.482, + 0.121 + ], + "angle": 0, + "content": "and this requires both image understanding and knowledge of protocols." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.122, + 0.482, + 0.168 + ], + "angle": 0, + "content": "In Appendix F.7.1, we show an example question in each task with an expert-annotated 'reasoning trace' - it shows the type of reasoning required for typical questions." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.179, + 0.396, + 0.194 + ], + "angle": 0, + "content": "3.3. Analysis of MicroVQA benchmark" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.202, + 0.482, + 0.292 + ], + "angle": 0, + "content": "Scientific discovery requires complex reasoning beyond basic perception and knowledge, and it requires expertise beyond school-level education. Here we show that MicroVQA addresses a gap that prior multimodal scientific benchmarks do not test high level reasoning at research-level difficulty." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.293, + 0.483, + 0.489 + ], + "angle": 0, + "content": "To assess reasoning levels in MLLM benchmarks, we use Bloom's taxonomy [11], which hierarchically classifies cognitive difficulty from recall to evaluation. We fine-tune GPT-4o to classify Bloom's taxonomy on related MLLM benchmarks with results in Fig. 3; in Tab. 2 we additionally show key attributes of the same benchmarks. Fig. 3 shows that benchmarks having research-level and graduate-level difficulty are dominated by lower reasoning questions (levels 1-2), while our MicroVQA has higher level reasoning (levels 3-4). This is not surprising since OmnimedVQA and MicroBench derive questions from prior classification datasets, while our task definition and data collection approach does reflect high-level reasoning." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.49, + 0.483, + 0.611 + ], + "angle": 0, + "content": "On the other hand, some benchmarks have higher Bloom's reasoning level closer to MicroVQA - MMMU, MMMU-Pro, and Science-QA - but their difficulty level is lower - undergraduate or grade-school. 
The most comparable benchmark in terms of reasoning is the undergraduate-level MMMU-Pro, whose 1,730 questions are of similar scale to our 1,042; for reasoning-intensive benchmarks, which must be expert-curated rather than scraped from existing resources, sizes beyond 1,000 are substantial."
Many distractors can be eliminated from general biology knowledge alone, or because they are vaguer than the correct option. We hypothesize that MCQ and distractor generation is difficult when the target answer is long and the subject matter is specialized, which suggests that other scientific benchmarks may face similar issues. Overall, this motivates a more involved approach to distractor generation."
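As a rough illustration of the language-shortcut check described above (not the paper's exact pipeline), one can re-evaluate each MCQ with the image withheld; persistently high text-only accuracy flags shortcut-prone distractors. The model name and the `mcqs` record layout below are assumptions made for this sketch.

```python
# Hedged sketch: text-only ("no-image") evaluation to surface language shortcuts in MCQs.
# Assumes `mcqs` is a list of dicts with hypothetical keys: "question", "options" (list), "answer" (index).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def text_only_accuracy(mcqs, model="gpt-4o"):
    letters = "ABCDE"
    correct = 0
    for mcq in mcqs:
        options = "\n".join(f"{letters[i]}. {opt}" for i, opt in enumerate(mcq["options"]))
        prompt = (
            "Answer the multiple-choice question. No image is provided.\n"
            f"{mcq['question']}\n{options}\n"
            "Reply with a single letter."
        )
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content.strip()
        if reply[:1].upper() == letters[mcq["answer"]]:
            correct += 1
    return correct / len(mcqs)

# A text-only score far above chance (e.g., the ~93% reported before refinement)
# indicates the distractors can be eliminated without looking at the image.
```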
For example, a question about processes inside a virus might have a distractor referring to surface processes – this can be eliminated without referencing the image. The LLM summarizes its solution strategy into a reflection text: in this case, that the distractor was implausible based on knowledge that the stated process" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.538, + 0.909, + 0.794 + ], + "angle": 0, + "content": "does not not occur inside the virus, and so does not match the question. It passes the reflection to the rewriter LLM agent. The rewriter revises the question stem and generates new distractors to invalidate the identified strategies: in this example it may create distractors referring to processes that do occur inside a virus. To prevent significant changes to the question-answer pair over iterations, we employ an LLM checker to ensure semantic consistency with the original pair. The revised MCQ is returned to the evaluator; if it still answers correctly, the loop continues. The process stops if the rewriter fails the checker or after \\( n \\) iterations. If it fails, RefineBot can be rerun with a different seed, often succeeding—a form of inference-time scaling [13]. All agents are GPT-4o-0806, but to mitigate potential bias against 4o in final evaluation, we also use Claude-3.5-Sonnet-0620 as the evaluator agent; we assess biases in the experiments section." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.81, + 0.911, + 0.903 + ], + "angle": 0, + "content": "Final MCQ quality check Since MCQ generation involves LLM processing, we need to verify the correctness of the final answer. Each MCQ is manually reviewed by the same expert who created that particular question (between 80 and 120 per person). For any question with issues, that expert makes minimal changes to the question correct it." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.926, + 0.506, + 0.937 + ], + "angle": 0, + "content": "6" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.091, + 0.09, + 0.224, + 0.108 + ], + "angle": 0, + "content": "5. Experiments" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.114, + 0.44, + 0.132 + ], + "angle": 0, + "content": "5.1. Benchmarking MLLMs with MicroVQA" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.138, + 0.484, + 0.275 + ], + "angle": 0, + "content": "We evaluate state-of-the art multimodal large language models (MLLMs) on the MicroVQA benchmark. We include open and closed models spanning the categories: reasoning, large, small, and medical - medical is the closest domain to microscopy with specialist MLLMs. We utilize standard chain-of-thought prompting [88] (deetails in Appendix F). We report mean accuracy in multiple-choice VQA for each tasks Tab. 3. We also perform and analyze no-image Appendix F.5." + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.292, + 0.485, + 0.58 + ], + "angle": 0, + "content": "MicroVQA is challenging for all MLLMs Our evaluation reveals a substantial gap between current MLLMs and the upper bound, with the leading model, o1, at 52.8. There is surprisingly little variation in performance between models, with most closed and open models all scoring above 40, with the lowest score (Llama-3.2-11b) likely due to instruction-following issues. A human baseline (Appendix F.3) scores only 50.3 – this shows that biology experts are specialized to subdomains, and are challenged by different subdomains. In fact this demonstrates that experts could benefit from MLLMs that can solve MicroVQA's tasks. 
The performance variation across the three tasks is small: the highest per-task score is 56.4 on 'expert visual understanding (V)', compared with the highest overall score of 52.8. The 'hypothesis generation' task is the hardest for all models, and the gap is largest for smaller models. For example, the gap between hypothesis generation and expert visual understanding is 3.7 for the strongest large model, but 10.7 for the strongest small model."
Model | Overall | V | H | E
R | o1 [36] | 52.8 | 55.4 | 50.2 | 53.0
large models | *Claude-3.5-Sonnet [6] | 51.7 | 54.1 | 50.2 | 50.4
Gemini-Pro-1.5 [69] | 51.1 | 52.0 | 50.2 | 50.9
Pixtral-Large [2] | 49.8 | 50.8 | 49.5 | 48.7
Grok-2-Vision [84] | 48.4 | 50.3 | 46.4 | 48.7
Qwen-2-vl-72b-Instruct [79] | 47.5 | 49.2 | 45.7 | 47.8
VILA1.5-40b [48] | 47.5 | 47.2 | 47.9 | 47.4
*GPT-4o [1] | 45.6 | 48.7 | 43.1 | 44.8
Llama-3.1-Nemotron-70b-Instruct [83] | 44.2 | 44.9 | 43.3 | 44.8
Llama-3.2-90b-Vision-Instruct [21] | 42.4 | 44.9 | 42.1 | 38.7
small models | Qwen-2-VL-7b [79] | 48.8 | 54.1 | 43.3 | 49.6
Claude-3.5-Haiku [6] | 47.1 | 48.0 | 43.8 | 51.7
Gemini-Flash-1.5-8b [69] | 46.7 | 48.7 | 43.6 | 49.1
GPT-4o-mini [1] | 46.2 | 48.5 | 43.6 | 47.0
Pixtral-12b [2] | 45.6 | 46.9 | 44.8 | 44.8
VILA1.5-13b [48] | 41.8 | 41.8 | 47.5 | 40.9
Llama-3.2-11b-vision-instruct [21] | 30.3 | 32.4 | 29.3 | 28.7
medical | LLaVA-Med-Mistral-7B [42] | 43.0 | 37.3 | 47.1 | 41.6
†LLaVA-Mistral-7B [49] | 39.8 | 31.6 | 43.1 | 37.1
Random | 22.0 | 21.9 | 21.8 | 21.9
Human | 50.3 | 52.7 | 47.5 | 51.4
" + }, + { + "type": "table_caption", + "bbox": [ + 0.513, + 0.448, + 0.907, + 0.546 + ], + "angle": 0, + "content": "Table 4. Ablation study on MicroVQA MCQ generation stages (shown in Fig. 4). Accuracy is high because MCQs have shortcuts (Sec. 4.1) after 'Stage 1' exam alignment, but is lower after 'Stage 2'. Final column is the relative decrease in accuracy. Models with * were used in Stage 2 generation and have the biggest accuracy drops (bolded). They are grouped with different models from the same provider in **." + }, + { + "type": "table", + "bbox": [ + 0.538, + 0.552, + 0.884, + 0.66 + ], + "angle": 0, + "content": "
Model | Stage 1 | Stage 2 | Relative drop
*GPT-4o | 79.7 | 46.8 | -41.2
**GPT-4o-mini | 75.6 | 46.2 | -39.0
*Claude-3.5-Sonnet | 82.2 | 51.7 | -37.1
**Claude-3.5-Haiku | 77.3 | 47.1 | -39.0
o1 | 81.6 | 52.8 | -35.3
Pixtral-Large | 80.1 | 49.8 | -37.8
Gemini-Pro-1.5 | 79.1 | 51.1 | -35.4
" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.69, + 0.907, + 0.903 + ], + "angle": 0, + "content": "for microscopy research in particular, LLaVA-Med is fine-tuned on scientific articles from PubMed that include data types overlapping with MicroVQA - especially (tissue) pathology. In Tab. 3, we compare LLaVA-Med against it's base model, LLaVA-Mistral-7B and find overall stronger performance by 4.5 points. This suggests two clear opportunities for improving MicroVQA performance. The first is to instruct-tune with a dataset that better aligns with the MicroVQA data domain: not only tissue pathology, but also fluorescence and electron microscopy, which is available in scientific articles [93]. The second is to simply begin training from a stronger open model like Pixtral. Beyond that, there is more to explore in MLLM specialization, such as tuning the base image encoder for microscopy [51]." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.505, + 0.936 + ], + "angle": 0, + "content": "7" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.134, + 0.094, + 0.445, + 0.314 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.329, + 0.483, + 0.371 + ], + "angle": 0, + "content": "Figure 5. Performance by sub-task and Bloom's level for best models: Gemini-1.5-Pro (closed source), VILA1.5-40B (open-source), and LlaVA-Med-Mistral-7B (medical)." + }, + { + "type": "text", + "bbox": [ + 0.093, + 0.383, + 0.483, + 0.685 + ], + "angle": 0, + "content": "RefineBot MCQ generation is very effective but introduces small model biases We introduced a new method for refining MCQs to make them more challenging - RefineBot. Tab. 4 shows that between stage 1 and stage 2 (before and after RefineBot), the relative accuracy for all models drops by between \\(35\\%\\) and \\(42\\%\\). Firstly, this validates that RefineBot is a valuable tool for making benchmarks harder. However, the most significant drops are for models used by RefineBot (GPT-4o & Claude-3.5-Sonnet, and their smaller versions, GPT-4o-mini & Claude-3-Haiku). The adversarial process introduced a small bias against these models (and we hypothesise the bias would be higher if using only one model instead of two). Despite making fair evaluation slightly more challenging, our use of frontier LLMs in the refinement process is well-motivated, as these models are best positioned to generate challenging, high-quality MCQs that probe the boundaries of current capabilities. Moreover, the bias seems modest - less than 10 points of relative drop - while the benefit is a huge increase in difficulty across all test MLLMs, as seen in Tab. 4." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.703, + 0.483, + 0.794 + ], + "angle": 0, + "content": "Metadata-based analysis supports deeper error analysis Fig. 5 shows error rates based on attribute tag. In Appendix F we explore results further - namely that multi-image reasoning has surprisingly good performance, higher level Bloom's questions are harder, and (not shown) rare image modalities perform worse." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.802, + 0.353, + 0.818 + ], + "angle": 0, + "content": "5.2. Error analysis on MicroVQA" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.825, + 0.483, + 0.902 + ], + "angle": 0, + "content": "To understand model errors, three biomedical experts reviewed the Chain-of-Thought (CoT) responses of 30 samples on Claude 3.5 Sonnet, with each review requiring at least 45 mins of careful analysis. 
Errors were classified into a major category, and minor contributing errors were also"
+ }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.505, + 0.936 + ], + "angle": 0, + "content": "8" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.093, + 0.09, + 0.188, + 0.106 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.115, + 0.484, + 0.184 + ], + "angle": 0, + "content": "[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 2, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.187, + 0.484, + 0.255 + ], + "angle": 0, + "content": "[2] Pravesh Agrawal, Szymon Antoniak, Emma Bou Hanna, Devendra Chaplot, Jessica Chudnovsky, Saurabh Garg, Theophile Gervet, Soham Ghosh, Amélie Héliiou, Paul Jacob, et al. Pixtral 12b. arXiv preprint arXiv:2410.07073, 2024. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.258, + 0.483, + 0.34 + ], + "angle": 0, + "content": "[3] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in neural information processing systems, 35:23716-23736, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.343, + 0.483, + 0.398 + ], + "angle": 0, + "content": "[4] Elaf Alhazmi, Quan Z Sheng, Wei Emma Zhang, Munazza Zaib, and Ahoud Alhazmi. Distractor generation for multiple-choice questions: A survey of methods, datasets, and evaluation. arXiv preprint arXiv:2402.01512, 2024. 3, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.401, + 0.485, + 0.455 + ], + "angle": 0, + "content": "[5] Lorin W. Anderson and David R. Krathwohl, editors. A Taxonomy for Learning, Teaching, and Assessing. A Revision of Bloom's Taxonomy of Educational Objectives. Allyn & Bacon, New York, 2 edition, 2001. 5, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.458, + 0.482, + 0.485 + ], + "angle": 0, + "content": "[6] Anthropic. The claude 3 model family: Opus, sonnet, haiku. Model card, Anthropic, 2024. 2, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.488, + 0.483, + 0.543 + ], + "angle": 0, + "content": "[7] Nishant Balepur, Abhilasha Ravichander, and Rachel Rudinger. Artifacts or abduction: How do llms answer multiple-choice questions without the question? arXiv preprint arXiv:2402.12483, 2024. 3, 18" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.545, + 0.483, + 0.613 + ], + "angle": 0, + "content": "[8] Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen-tau Yih, and Yejin Choi. Abductive commonsense reasoning. arXiv preprint arXiv:1908.05739, 2019. 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.616, + 0.483, + 0.699 + ], + "angle": 0, + "content": "[9] Owen Bianchi, Maya Willey, Chelsea X Alvarado, Benjamin Danek, Marzieh Khani, Nicole Kuznetsov, Anant Dadu, Syed Shah, Mathew J Koretsky, Mary B Makarious, et al. Cardbiomedbench: A benchmark for evaluating large language model performance in biomedical research. bioRxiv, pages 2025-01, 2025. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.702, + 0.483, + 0.757 + ], + "angle": 0, + "content": "[10] Semere Kiros Bitew, Johannes Deleu, Chris Develder, and Thomas Demeester. 
Distractor generation for multiple-choice questions with predictive prompting and large language models. arXiv preprint arXiv:2307.16338, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.759, + 0.482, + 0.813 + ], + "angle": 0, + "content": "[11] Benjamin S. Bloom. Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: The Cognitive Domain. David McKay Co Inc, New York, 1956. 4, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.816, + 0.483, + 0.857 + ], + "angle": 0, + "content": "[12] Daniil A Boiko, Robert MacKnight, Ben Kline, and Gabe Gomes. Autonomous chemical research with large language models. Nature, 624(7992):570-578, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.859, + 0.483, + 0.901 + ], + "angle": 0, + "content": "[13] Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.115, + 0.485, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.548, + 0.093, + 0.905, + 0.119 + ], + "angle": 0, + "content": "repeated sampling. arXiv preprint arXiv:2407.21787, 2024.6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.122, + 0.905, + 0.19 + ], + "angle": 0, + "content": "[14] James Burgess, Jeffrey J Nirschl, Maria-Clara Zanellati, Alejandro Lozano, Sarah Cohen, and Serena Yeung-Levy. Orientation-invariant autoencoders learn robust representations for shape profiling of cells and organelles. Nature Communications, 15(1):1022, 2024. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.193, + 0.905, + 0.261 + ], + "angle": 0, + "content": "[15] Hengxing Cai, Xiaochen Cai, Junhan Chang, Sihang Li, Lin Yao, Changxin Wang, Zhifeng Gao, Yongge Li, Mujie Lin, Shuwen Yang, et al. Sciassess: Benchmarking llm proficiency in scientific literature analysis. arXiv preprint arXiv:2403.01976, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.264, + 0.905, + 0.304 + ], + "angle": 0, + "content": "[16] Anne E Carpenter, Beth A Cimini, and Kevin W Eliceiri. Smart microscopes of the future. nature methods, 20(7):962-964, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.307, + 0.905, + 0.375 + ], + "angle": 0, + "content": "[17] Nathan H Cho, Keith C Cheveralls, Andreas-David Brunner, Kibeom Kim, André C Michaelis, Preethi Raghavan, Hirofumi Kobayashi, Laura Savy, Jason Y Li, Hera Canaj, et al. Opencell: Endogenous tagging for the cartography of human cellular organization. Science, 375(6585):eabi6983, 2022. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.378, + 0.905, + 0.459 + ], + "angle": 0, + "content": "[18] Kenneth Clark, Bruce Vendt, Kirk Smith, John Freymann, Justin Kirby, Paul Koppel, Stephen Moore, Stanley Phillips, David Maffitt, Michael Pringle, et al. The cancer imaging archive (tcia): maintaining and operating a public information repository. Journal of digital imaging, 26:1045-1057, 2013. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.462, + 0.905, + 0.516 + ], + "angle": 0, + "content": "[19] Alison Crowe, Clarissa Dirks, and Mary Pat Wenderoth. Biology in bloom: implementing bloom's taxonomy to enhance student learning in biology. CBE Life Sci. Educ., 7 (4):368-381, 2008. 5, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.519, + 0.75, + 0.532 + ], + "angle": 0, + "content": "[20] Igor Douven. Abduction, 2011. 
4" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.535, + 0.905, + 0.601 + ], + "angle": 0, + "content": "[21] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.605, + 0.905, + 0.673 + ], + "angle": 0, + "content": "[22] Kehua Feng, Keyan Ding, Weijie Wang, Xiang Zhuang, Zeyuan Wang, Ming Qin, Yu Zhao, Jianhua Yao, Qiang Zhang, and Huajun Chen. Sciknoweval: Evaluating multilevel scientific knowledge of large language models. arXiv preprint arXiv:2406.09098, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.676, + 0.905, + 0.744 + ], + "angle": 0, + "content": "[23] Wanyong Feng, Jaewook Lee, Hunter McNichols, Alexander Scarlatos, Digory Smith, Simon Woodhead, Nancy Otero Ornelas, and Andrew Lan. Exploring automated distractor generation for math multiple-choice questions via large language models. arXiv preprint arXiv:2404.02124, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.747, + 0.905, + 0.801 + ], + "angle": 0, + "content": "[24] Nigel Fernandez, Alexander Scarlatos, Simon Woodhead, and Andrew Lan. Divert: Distractor generation with variational errors represented as text for math multiple-choice questions. arXiv preprint arXiv:2406.19356, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.803, + 0.905, + 0.87 + ], + "angle": 0, + "content": "[25] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, et al. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.874, + 0.905, + 0.901 + ], + "angle": 0, + "content": "[26] Shanghua Gao, Ada Fang, Yepeng Huang, Valentina Giunchiglia, Ayush Noori, Jonathan Richard Schwarz, Yasha" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.505, + 0.937 + ], + "angle": 0, + "content": "9" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.125, + 0.092, + 0.483, + 0.134 + ], + "angle": 0, + "content": "Ektefaie, Jovana Kondic, and Marinka Zitnik. Empowering biomedical discovery with ai agents. arXiv preprint arXiv:2404.02831, 2024. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.135, + 0.483, + 0.19 + ], + "angle": 0, + "content": "[27] Mark J Gierl, Okan Bulut, Qi Guo, and Xinxin Zhang. Developing, analyzing, and using distractors for multiple-choice tests in education: A comprehensive review. Review of educational research, 87(6):1082-1116, 2017. 3, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.19, + 0.482, + 0.247 + ], + "angle": 0, + "content": "[28] Juraj Gottweis, Wei-Hung Weng, Alexander Daryin, Tao Tu, Anil Palepu, Petar Sirkovic, Artiom Myaskovsky, Felix Weissenberger, Keran Rong, Ryutaro Tanno, et al. Towards an ai co-scientist. arXiv preprint arXiv:2502.18864, 2025. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.247, + 0.482, + 0.328 + ], + "angle": 0, + "content": "[29] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904-6913, 2017. 3, 13, 18" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.33, + 0.482, + 0.384 + ], + "angle": 0, + "content": "[30] Xuehai He, Yichen Zhang, Luntian Mou, Eric Xing, and Pengtao Xie. Pathvqa: 30000+ questions for medical visual question answering. arXiv preprint arXiv:2003.10286, 2020. 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.385, + 0.482, + 0.455 + ], + "angle": 0, + "content": "[31] Marco Y Hein, Duo Peng, Verina Todorova, Frank McCarthy, Kibeom Kim, Chad Liu, Laura Savy, Camille Januel, Rodrigo Baltazar-Nunez, Madhurya Sekhar, et al. Global organelle profiling reveals subcellular localization and remodeling at proteome scale. Cell, 188(4):1137-1155, 2025. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.455, + 0.482, + 0.525 + ], + "angle": 0, + "content": "[32] Jack Hessel, Jena D Hwang, Jae Sung Park, Rowan Zellers, Chandra Bhagavatula, Anna Rohrbach, Kate Saenko, and Yejin Choi. The abduction of sherlock holmes: A dataset for visual abductive reasoning. In European Conference on Computer Vision, pages 558-575. Springer, 2022. 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.525, + 0.482, + 0.58 + ], + "angle": 0, + "content": "[33] Tom Hope, Doug Downey, Daniel S Weld, Oren Etzioni, and Eric Horvitz. A computational inflection for scientific discovery. Communications of the ACM, 66(8):62-73, 2023. 2, 3, 4, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.58, + 0.482, + 0.663 + ], + "angle": 0, + "content": "[34] Yutao Hu, Tianbin Li, Quanfeng Lu, Wenqi Shao, Junjun He, Yu Qiao, and Ping Luo. Omnimedvqa: A new large-scale comprehensive evaluation benchmark for medical lvm. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22170-22183, 2024. 3, 4, 2, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.664, + 0.482, + 0.734 + ], + "angle": 0, + "content": "[35] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6700-6709, 2019. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.734, + 0.482, + 0.789 + ], + "angle": 0, + "content": "[36] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.789, + 0.482, + 0.845 + ], + "angle": 0, + "content": "[37] Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W Cohen, and Xinghua Lu. Pubmedqa: A dataset for biomedical research question answering. arXiv preprint arXiv:1909.06146, 2019. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.845, + 0.482, + 0.901 + ], + "angle": 0, + "content": "[38] Yeonghun Kang and Jihan Kim. ChatMOF: an artificial intelligence system for predicting and generating metal-organic frameworks using large language models. Nat. Commun., 15 (1):4705, 2024. 2" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.483, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.092, + 0.906, + 0.161 + ], + "angle": 0, + "content": "[39] Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. 
Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive NLP. arXiv preprint arXiv:2212.14024, 2022. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.162, + 0.906, + 0.26 + ], + "angle": 0, + "content": "[40] Omar Khattab, Arnav Singhvi, Paridhi Maheshwari, Zhiyuan Zhang, Keshav Santhanam, Sri Vardhamanan, Saiful Haq, Ashutosh Sharma, Thomas T. Joshi, Hanna Moazam, Heather Miller, Matei Zaharia, and Christopher Potts. Dspy: Compiling declarative language model calls into self-improving pipelines. arXiv preprint arXiv:2310.03714, 2023.6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.261, + 0.906, + 0.345 + ], + "angle": 0, + "content": "[41] Jon M Laurent, Joseph D Janizek, Michael Ruzo, Michaela M Hinks, Michael J Hammerling, Siddharth Narayanan, Manvitha Ponnapati, Andrew D White, and Samuel G Rodriques. Lab-bench: Measuring capabilities of language models for biology research. arXiv preprint arXiv:2407.10362, 2024. 2, 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.346, + 0.906, + 0.416 + ], + "angle": 0, + "content": "[42] Chunyuan Li, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, and Jianfeng Gao. Llava-med: Training a large language-and-vision assistant for biomedicine in one day. Advances in Neural Information Processing Systems, 36, 2024. 3, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.416, + 0.906, + 0.473 + ], + "angle": 0, + "content": "[43] Junxian Li, Di Zhang, Xunzhi Wang, Zeying Hao, Jingdi Lei, Qian Tan, Cai Zhou, Wei Liu, Yaotian Yang, Xinrui Xiong, et al. Chemvlm: Exploring the power of multimodal large language models in chemistry area. CoRR, 2024. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.473, + 0.906, + 0.517 + ], + "angle": 0, + "content": "[44] Lin Li, Guikun Chen, Hanrong Shi, Jun Xiao, and Long Chen. A survey on multimodal benchmarks: In the era of large ai models. arXiv preprint arXiv:2409.18142, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.517, + 0.906, + 0.574 + ], + "angle": 0, + "content": "[45] Lei Li, Yuqi Wang, Runxin Xu, Peiyi Wang, Xiachong Feng, Lingpeng Kong, and Qi Liu. Multimodal arxiv: A dataset for improving scientific comprehension of large vision-language models. arXiv preprint arXiv:2403.00231, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.574, + 0.906, + 0.617 + ], + "angle": 0, + "content": "[46] Shengzhi Li and Nima Tajbakhsh. Scigraphqa: A large-scale synthetic multi-turn question-answering dataset for scientific graphs. arXiv preprint arXiv:2308.03349, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.617, + 0.906, + 0.687 + ], + "angle": 0, + "content": "[47] Zekun Li, Xianjun Yang, Kyuri Choi, Wanrong Zhu, Ryan Hsieh, HyeonJung Kim, Jin Hyuk Lim, Sungyoung Ji, Byungju Lee, Xifeng Yan, et al. Mmsci: A multimodal multi-discipline dataset for phd-level scientific comprehension. arXiv preprint arXiv:2407.04903, 2024. 2, 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.687, + 0.906, + 0.757 + ], + "angle": 0, + "content": "[48] Ji Lin, Hongxu Yin, Wei Ping, Pavlo Molchanov, Mohammad Shoeybi, and Song Han. Vila: On pre-training for visual language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26689-26699, 2024. 
7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.757, + 0.906, + 0.801 + ], + "angle": 0, + "content": "[49] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36, 2024. 2, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.801, + 0.906, + 0.872 + ], + "angle": 0, + "content": "[50] Alejandro Lozano, Jeffrey Nirschl, James Burgess, Sanket Rajan Gupte, Yuhui Zhang, Alyssa Unell, and Serena Yeung. Micro-bench: A microscopy benchmark for vision-language understanding. Advances in Neural Information Processing Systems, 37:30670-30685, 2024. 2, 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.872, + 0.906, + 0.903 + ], + "angle": 0, + "content": "[51] Alejandro Lozano, Min Woo Sun, James Burgess, Liangyu Chen, Jeffrey J Nirschl, Jeffrey Gu, Ivan Lopez, Josiah" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.092, + 0.906, + 0.903 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.925, + 0.509, + 0.937 + ], + "angle": 0, + "content": "10" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.125, + 0.092, + 0.482, + 0.147 + ], + "angle": 0, + "content": "Aklilu, Austin Wolfgang Katzer, Collin Chiu, et al. Biomedica: An open biomedical image-caption archive, dataset, and vision-language models derived from scientific literature. arXiv preprint arXiv:2501.07171, 2025. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.149, + 0.483, + 0.258 + ], + "angle": 0, + "content": "[52] Ming Y \"Lu, Bowen Chen, Drew F K Williamson, Richard J Chen, Melissa Zhao, Aaron K Chow, Kenji Ikemura, Ahrong Kim, Dimitra Pouli, Ankush Patel, Amr Soliman, Chengkuan Chen, Tong Ding, Judy J Wang, Georg Gerber, Ivy Liang, Long Phi Le, Anil V Parwani, Luca L Weishaupt, and Faisal\" Mahmood. \"a multimodal generative AI copilot for human pathology\". \"Nature\", 634(8033):466-473\", 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.26, + 0.482, + 0.342 + ], + "angle": 0, + "content": "[53] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507-2521, 2022. 2, 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.344, + 0.483, + 0.412 + ], + "angle": 0, + "content": "[54] Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.414, + 0.483, + 0.468 + ], + "angle": 0, + "content": "[55] Andres M. Bran, Sam Cox, Oliver Schilter, Carlo Baldassari, Andrew D White, and Philippe Schwaller. Augmenting large language models with chemistry tools. Nature Machine Intelligence, pages 1-11, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.47, + 0.483, + 0.538 + ], + "angle": 0, + "content": "[56] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pages 3195-3204, 2019. 
3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.539, + 0.483, + 0.621 + ], + "angle": 0, + "content": "[57] M Maruf, Arka Daw, Kazi Sajeed Mehrab, Harish Babu Manogaran, Abhilash Neog, Medha Sawhney, Mridul Khurana, James P Balhoff, Yasin Bakis, Bahadir Altintas, et al. Vlm4bio: A benchmark dataset to evaluate pretrained vision-language models for trait discovery from biological images. arXiv preprint arXiv:2408.16176, 2024. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.623, + 0.483, + 0.704 + ], + "angle": 0, + "content": "[58] Adrian Mirza, Nawaf Alampara, Sreekanth Kunchapu, Benedict Emoekabu, Aswanth Krishnan, Mara Wilhelmi, Macjonathan Okereke, Juliane Eberhardt, Amir Mohammad Elahi, Maximilian Greiner, et al. Are large language models superhuman chemists? arXiv preprint arXiv:2404.01475, 2024.3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.707, + 0.483, + 0.773 + ], + "angle": 0, + "content": "[59] Michael Moor, Oishi Banerjee, Zahra Shakeri Hossein Abad, Harlan M Krumholz, Jure Leskovec, Eric J Topol, and Pranav Rajpurkar. Foundation models for generalist medical artificial intelligence. Nature, 616(7956):259-265, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.776, + 0.483, + 0.845 + ], + "angle": 0, + "content": "[60] Siddharth Narayanan, James D Braza, Ryan-Rhys Griffiths, Manu Ponnapati, Albert Bou, Jon Laurent, Ori Kabeli, Geemi Wellawatte, Sam Cox, Samuel G Rodriques, et al. Aviary: training language agents on challenging scientific tasks. arXiv preprint arXiv:2412.21154, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.846, + 0.483, + 0.9 + ], + "angle": 0, + "content": "[61] Shraman Pramanick, Rama Chellappa, and Subhashini Venugopalan. Spiqa: A dataset for multimodal question answering on scientific papers. arXiv preprint arXiv:2407.09413, 2024. 3" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.483, + 0.9 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.161 + ], + "angle": 0, + "content": "[62] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. arXiv preprint arXiv:2311.12022, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.164, + 0.905, + 0.205 + ], + "angle": 0, + "content": "[63] Loic A Royer. Omega—harnessing the power of large language models for bioimage analysis. nature methods, 21(8): 1371-1373, 2024. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.207, + 0.905, + 0.276 + ], + "angle": 0, + "content": "[64] Billings Melissa S, DeRuchie Kristine, Hussie Kieran, Kulesher Allison, Merrell Jacquelyn, Morales Amy, Panaguaa Miguel A, Sherlock Jennifer, Swygert Kimberly, and Tyson Julie. Nbme item-writing guide: Constructing written test questions for the health sciences, 2021. 5, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.278, + 0.905, + 0.346 + ], + "angle": 0, + "content": "[65] Khaled Saab, Tao Tu, Wei-Hung Weng, Ryutaro Tanno, David Stutz, Ellery Wulczyn, Fan Zhang, Tim Strother, Chunjong Park, Elahe Vedadi, et al. Capabilities of gemini models in medicine. 
arXiv preprint arXiv:2404.18416, 2024. 3"
arXiv preprint arXiv:2406.16860, 2024. 3, 18" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.162, + 0.482, + 0.232 + ], + "angle": 0, + "content": "[75] Tao Tu, Shekoofeh Azizi, Danny Driess, Mike Schaekermann, Mohamed Amin, Pi-Chuan Chang, Andrew Carroll, Charles Lau, Ryutaro Tanno, Ira Ktena, et al. Towards generalist biomedical ai. NEJM AI, 1(3):Aioa2300138, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.233, + 0.482, + 0.3 + ], + "angle": 0, + "content": "[76] Matheus P Viana, Jianxu Chen, Theo A Knijnenburg, Ritvik Vasan, Calysta Yan, Joy E Arakaki, Matte Bailey, Ben Berry, Antoine Borensztejn, Eva M Brown, et al. Integrated intracellular organization and its variations in human ips cells. Nature, 613(7943):345-354, 2023. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.302, + 0.482, + 0.383 + ], + "angle": 0, + "content": "[77] David Wadden, Kejian Shi, Jacob Morrison, Aakanksha Naik, Shruti Singh, Nitzan Barzilay, Kyle Lo, Tom Hope, Luca Soldaini, Shannon Zejiang Shen, et al. Scirff: A resource to enhance language model instruction-following over scientific literature. arXiv preprint arXiv:2406.07835, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.385, + 0.482, + 0.453 + ], + "angle": 0, + "content": "[78] Fei Wang, Xingyu Fu, James Y Huang, Zekun Li, Qin Liu, Xiaogeng Liu, Mingyu Derek Ma, Nan Xu, Wenxuan Zhou, Kai Zhang, et al. Muirbench: A comprehensive benchmark for robust multi-image understanding. arXiv preprint arXiv:2406.09411, 2024. 17" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.455, + 0.482, + 0.523 + ], + "angle": 0, + "content": "[79] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.525, + 0.482, + 0.594 + ], + "angle": 0, + "content": "[80] Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R Loomba, Shichang Zhang, Yizhou Sun, and Wei Wang. Scibench: Evaluating college-level scientific problem-solving abilities of large language models. arXiv preprint arXiv:2307.10635, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.595, + 0.482, + 0.678 + ], + "angle": 0, + "content": "[81] Yiqi Wang, Wentao Chen, Xiaotian Han, Xudong Lin, Haiteng Zhao, Yongfei Liu, Bohan Zhai, Jianbo Yuan, Quanzeng You, and Hongxia Yang. Exploring the reasoning abilities of multimodal large language models (mllms): A comprehensive survey on emerging trends in multimodal reasoning. arXiv preprint arXiv:2401.06805, 2024. 2, 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.679, + 0.482, + 0.747 + ], + "angle": 0, + "content": "[82] Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, et al. Mmlu-pro: A more robust and challenging multi-task language understanding benchmark. arXiv preprint arXiv:2406.01574, 2024. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.748, + 0.482, + 0.803 + ], + "angle": 0, + "content": "[83] Zhilin Wang, Alexander Bukharin, Olivier Delalleau, Daniel Egert, Gerald Shen, Jiaqi Zeng, Oleksii Kuchaiev, and Yi Dong. Helpsteer2-preference: Complementing ratings with preferences. arXiv preprint arXiv:2410.01257, 2024. 
7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.804, + 0.482, + 0.83 + ], + "angle": 0, + "content": "[84] xAI. Bringing grok to everyone, 2024. Accessed: 2025-03-13. 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.832, + 0.482, + 0.901 + ], + "angle": 0, + "content": "[85] Tong Xie, Yuwei Wan, Wei Huang, Yufei Zhou, Yixuan Liu, Qingyuan Linghu, Shaozhou Wang, Chunyu Kit, Clara Grazian, Wenjie Zhang, et al. Large language models as master key: unlocking the secrets of materials science with gpt. arXiv preprint arXiv:2304.02213, 2023. 2" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.093, + 0.482, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.162 + ], + "angle": 0, + "content": "[86] Kaining Ying, Fanqing Meng, Jin Wang, Zhiqian Li, Han Lin, Yue Yang, Hao Zhang, Wenbo Zhang, Yuqi Lin, Shuo Liu, et al. Mmt-bench: A comprehensive multimodal benchmark for evaluating large vision-language models towards multitask agi. arXiv preprint arXiv:2404.16006, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.163, + 0.905, + 0.259 + ], + "angle": 0, + "content": "[87] Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9556-9567, 2024. 2, 3, 4, 7" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.261, + 0.905, + 0.33 + ], + "angle": 0, + "content": "[88] Xiang Yue, Tianyu Zheng, Yuansheng Ni, Yubo Wang, Kai Zhang, Shengbang Tong, Yuxuan Sun, Ming Yin, Botao Yu, Ge Zhang, et al. Mmmu-pro: A more robust multidiscipline multimodal understanding benchmark. arXiv preprint arXiv:2409.02813, 2024. 3, 4, 7, 15" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.332, + 0.905, + 0.386 + ], + "angle": 0, + "content": "[89] Nikki B Zaidi, Charles Hwang, Sara Scott, Stefanie Stallard, Joel Purkiss, and Michael Hortsch. Climbing bloom's taxonomy pyramid: Lessons from a graduate histology course. Anat. Sci. Educ., 10(5):456-464, 2017. 5, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.388, + 0.905, + 0.456 + ], + "angle": 0, + "content": "[90] Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6720-6731, 2019. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.459, + 0.905, + 0.527 + ], + "angle": 0, + "content": "[91] Chi Zhang, Feng Gao, Baoxiong Jia, Yixin Zhu, and Song-Chun Zhu. Raven: A dataset for relational and analogical visual reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5317-5327, 2019. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.529, + 0.905, + 0.597 + ], + "angle": 0, + "content": "[92] Kai Zhang, Rong Zhou, Eashan Adhikarla, Zhiling Yan, Yixin Liu, Jun Yu, Zhengliang Liu, Xun Chen, Brian D Davison, Hui Ren, et al. A generalist vision-language foundation model for diverse biomedical tasks. Nature Medicine, pages 1-13, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.6, + 0.905, + 0.681 + ], + "angle": 0, + "content": "[93] Sheng Zhang, Yanbo Xu, Naoto Usuyama, Hanwen Xu, Jaspreet Bagga, Robert Tinn, Sam Preston, Rajesh Rao, Mu Wei, Naveen Valluri, et al. 
Biomedclip: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs. arXiv preprint arXiv:2303.00915, 2023. 7, 3, 17" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.684, + 0.905, + 0.753 + ], + "angle": 0, + "content": "[94] Yuhui Zhang, Yuchang Su, Yiming Liu, Xiaohan Wang, James Burgess, Elaine Sui, Chenyu Wang, Josiah Akiliu, Alejandro Lozano, Anjiang Wei, et al. Automated generation of challenging multiple-choice questions for vision language model evaluation. arXiv preprint arXiv:2501.03225, 2025. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.755, + 0.905, + 0.81 + ], + "angle": 0, + "content": "[95] Haiteng Zhao, Chang Ma, FangZhi Xu, Lingpeng Kong, and Zhi-Hong Deng. Biomaze: Benchmarking and enhancing large language models for biological pathway reasoning. arXiv preprint arXiv:2502.16660, 2025. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.811, + 0.905, + 0.88 + ], + "angle": 0, + "content": "[96] Zhiling Zheng, Oufan Zhang, Ha L Nguyen, Nakul Rampal, Ali H Alawadhi, Zichao Rong, Teresa Head-Gordon, Christian Borgs, Jennifer T Chayes, and Omar M Yaghi. Chatgpt research group for optimizing the crystallinity of mofs and cofs. ACS Central Science, 9(11):2161-2170, 2023. 2" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.88 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.925, + 0.509, + 0.937 + ], + "angle": 0, + "content": "12" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.134, + 0.086, + 0.868, + 0.13 + ], + "angle": 0, + "content": "MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research" + }, + { + "type": "text", + "bbox": [ + 0.382, + 0.142, + 0.615, + 0.163 + ], + "angle": 0, + "content": "Supplementary Material" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.19, + 0.907, + 0.237 + ], + "angle": 0, + "content": "Appendices A-C include acknowledgments, author contributions, ethics, and discussion of limitations and future work. The remaining sections correspond to sections in the paper. Appendix D is benchmark details from the main Sec. 3. Appendix E is MCQ generation details from the main Sec. 4. Appendix F is the experiments details from the main Sec. 5." + }, + { + "type": "title", + "bbox": [ + 0.092, + 0.253, + 0.273, + 0.271 + ], + "angle": 0, + "content": "Table of Contents" + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.28, + 0.868, + 0.295 + ], + "angle": 0, + "content": "A Acknowledgments and author contributions 1" + }, + { + "type": "text", + "bbox": [ + 0.132, + 0.308, + 0.868, + 0.321 + ], + "angle": 0, + "content": "B Ethics 2" + }, + { + "type": "text", + "bbox": [ + 0.132, + 0.335, + 0.868, + 0.348 + ], + "angle": 0, + "content": "C Limitations and future work 2" + }, + { + "type": "text", + "bbox": [ + 0.132, + 0.362, + 0.868, + 0.375 + ], + "angle": 0, + "content": "D Benchmark details 3" + }, + { + "type": "list", + "bbox": [ + 0.132, + 0.28, + 0.868, + 0.375 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.155, + 0.379, + 0.865, + 0.392 + ], + "angle": 0, + "content": "D.1. 
Accessing MicroVQA benchmark 3" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.395, + 0.865, + 0.407 + ], + "angle": 0, + "content": "D.2 Dataset Schema and Structure 3" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.412, + 0.865, + 0.424 + ], + "angle": 0, + "content": "D.3 Training contamination mitigation 3" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.428, + 0.865, + 0.44 + ], + "angle": 0, + "content": "D.4 Benchmark attribute details 4" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.444, + 0.865, + 0.455 + ], + "angle": 0, + "content": "D.5 Related benchmarks 4" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.46, + 0.865, + 0.472 + ], + "angle": 0, + "content": "D.6 Bloom's taxonomy 5" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.476, + 0.865, + 0.489 + ], + "angle": 0, + "content": "D.7.Methodology for proposing tasks 6" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.492, + 0.865, + 0.504 + ], + "angle": 0, + "content": "D.8 Guidance for question creators 7" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.508, + 0.865, + 0.521 + ], + "angle": 0, + "content": "D.9 Microscopy image representations 7" + }, + { + "type": "list", + "bbox": [ + 0.154, + 0.379, + 0.865, + 0.521 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.535, + 0.868, + 0.548 + ], + "angle": 0, + "content": "E.MCQ generation details 7" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.551, + 0.865, + 0.565 + ], + "angle": 0, + "content": "E.1. Question collection process details 7" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.568, + 0.865, + 0.58 + ], + "angle": 0, + "content": "E.2. Experiment on naive MCQ generation 7" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.583, + 0.865, + 0.597 + ], + "angle": 0, + "content": "E.3. Stage 1 MCQ generation (exam alignment) 8" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.6, + 0.865, + 0.613 + ], + "angle": 0, + "content": "E.4. Stage 2 MCQ generation (RefineBot) details 8" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.616, + 0.865, + 0.629 + ], + "angle": 0, + "content": "E.5.MCQ generation results: stage 1 exam alignment 13" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.632, + 0.865, + 0.645 + ], + "angle": 0, + "content": "E.6.MCQ generation results: stage 2 RefineBot 13" + }, + { + "type": "list", + "bbox": [ + 0.154, + 0.551, + 0.865, + 0.645 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.659, + 0.868, + 0.672 + ], + "angle": 0, + "content": "F. Experiments details 15" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.675, + 0.865, + 0.688 + ], + "angle": 0, + "content": "F.1. Evaluation prompts 15" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.692, + 0.865, + 0.704 + ], + "angle": 0, + "content": "F.2. Model details 16" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.708, + 0.865, + 0.72 + ], + "angle": 0, + "content": "F.3. Human baseline on MicroVQA 17" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.724, + 0.865, + 0.736 + ], + "angle": 0, + "content": "F.4. More experiments on attribute-level information 17" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.74, + 0.865, + 0.753 + ], + "angle": 0, + "content": "F.5. Language shortcut ablations 17" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.756, + 0.865, + 0.768 + ], + "angle": 0, + "content": "F.6. Error analysis 20" + }, + { + "type": "text", + "bbox": [ + 0.154, + 0.772, + 0.865, + 0.785 + ], + "angle": 0, + "content": "F.7. 
Reasoning trace analysis 31" + }, + { + "type": "list", + "bbox": [ + 0.154, + 0.675, + 0.865, + 0.785 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.815, + 0.489, + 0.833 + ], + "angle": 0, + "content": "A. Acknowledgments and author contributions" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.84, + 0.907, + 0.903 + ], + "angle": 0, + "content": "Acknowledgements We thank Christian Kindermann for feedback on motivation for benchmarking and reasoning taxonomies; Siddharth Doshi for feedback on motivation for applications in scientific chat systems; and Krisjanis Mazurs for feedback on structuring user interviews. We also thank Assaf Zaritsky, Andrew S Moore, Pedro Guedes Dias, Angus Toland, Helen Healy, and Andrew Kassianos, and Loic Royer for discussions on defining the VQA tasks." + }, + { + "type": "page_number", + "bbox": [ + 0.495, + 0.925, + 0.504, + 0.936 + ], + "angle": 0, + "content": "1" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.907, + 0.166 + ], + "angle": 0, + "content": "Funding acknowledgements We gratefully acknowledge funding from HAI Hoffman-Yee (E.L., S.Y-L., J.M.H.B.), Schmidt Futures (E.L.), the Bridge2AI Program (NIH Common Fund; OT2 OD032742; E.L.), Goran Gustafsson Foundation and Knut and Alice Wallenberg Foundation (KAW 2021.0346; E.L.) as well as to MU (HPA). S.C. received funding from NIH/NIGMS R35GM133460 and CZI A23-0264-001. J.N.H. was supported by a Postdoctoral Fellowship from EMBO Postdoctoral Fellowship (ALTF 556-2022)." + }, + { + "type": "title", + "bbox": [ + 0.11, + 0.168, + 0.264, + 0.18 + ], + "angle": 0, + "content": "Author contributions" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.183, + 0.392, + 0.197 + ], + "angle": 0, + "content": "- Project conception: JB, JN, LBS, AL, SYL" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.197, + 0.526, + 0.212 + ], + "angle": 0, + "content": "- Defining the VQA tasks: JB, JN, JGG, WC, SC, JNH, CL, SYL" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.213, + 0.441, + 0.227 + ], + "angle": 0, + "content": "- Managing benchmark collection processes: JN, JB" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.228, + 0.653, + 0.242 + ], + "angle": 0, + "content": "- Creating benchmark questions: DB, ZC, SMH, AJ, WDL, MGN, RY, CZ, JNH, CL" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.243, + 0.37, + 0.257 + ], + "angle": 0, + "content": "- MCQ conversion pipeline: JB, JN, LBS" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.258, + 0.283, + 0.271 + ], + "angle": 0, + "content": "- Evaluations: JB, SRG, YS" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.273, + 0.37, + 0.287 + ], + "angle": 0, + "content": "- Qualitative error analysis: JN, JGG, CZ" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.289, + 0.322, + 0.302 + ], + "angle": 0, + "content": "- Quantitative error analysis: LBS" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.304, + 0.366, + 0.318 + ], + "angle": 0, + "content": "- Paper writing and figures: JB, JN, LBS" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.319, + 0.434, + 0.333 + ], + "angle": 0, + "content": "Supervision: SYL, EL, CL, MDL, CNH, SC, WC" + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.183, + 0.653, + 0.333 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.346, + 0.172, + 0.361 + ], + "angle": 0, + "content": "B. 
Ethics" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.372, + 0.907, + 0.447 + ], + "angle": 0, + "content": "Ethical use of biomedical data: MicroVQA was developed with a commitment to ethical practices in handling biomedical research data. All microscopy images included in the dataset were either original, unpublished data or sourced from open-access articles (e.g., CC-BY licensed) published after January 2024. The dataset does not include patient-identifiable information, and MicroVQA adheres to applicable privacy and research ethics guidelines. The dataset does not constitute human subjects research." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.448, + 0.906, + 0.523 + ], + "angle": 0, + "content": "Potential societal impacts: Biomedical AI models, including those trained on MicroVQA, have the potential to influence research and healthcare significantly, both positively and negatively. While these models can accelerate scientific discovery, they may also perpetuate biases in training data or result in uneven performance across demographic or biological groups. To mitigate these risks, MicroVQA includes metadata annotations to support the analysis of potential biases and emphasizes diverse data sourcing. We will engage with the research community to address emerging ethical concerns." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.523, + 0.906, + 0.599 + ], + "angle": 0, + "content": "Data licensing and usage: MicroVQA is distributed under the Creative Commons Attribution-ShareAlike 4.0 International (https://creativecommons.org/licenses/by-sa/4.0/) license to promote transparency and collaboration in the research community. This license reflects our efforts to balance open access with the ethical use of data, enabling both academic and commercial applications of the dataset while adhering to original license requirements. Full licensing details are available at https://creativecommons.org/licenses/by-sa/4.0/." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.612, + 0.358, + 0.628 + ], + "angle": 0, + "content": "C. Limitations and future work" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.638, + 0.906, + 0.668 + ], + "angle": 0, + "content": "While MicroVQA advances research-level reasoning benchmark significantly, we can identify a number of limitations and avenues for future work." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.686, + 0.906, + 0.763 + ], + "angle": 0, + "content": "Open vs closed evaluation Like most VQA benchmarks, we used multiple-choice (MCQ) evaluation. However downstream applications will obviously operate in an open setting, without options and requiring more detail. MLLM research does have some open benchmarks [86], though impartial evaluation is a significant challenge. Our early experiments with open prompting suggest that models like GPT-4o tend to give very vague answers, and tend to depend strongly on the text part of the input prompt." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.78, + 0.907, + 0.903 + ], + "angle": 0, + "content": "Dataset scale Our final dataset had 1042 samples, which is smaller than some other VQA benchmarks like MicroBench [50] and OmniMedVQA [34]. MicroVQA's scale sufficient to compare overall scores between models, however statistical power starts to become limiting when trying to compare different models on subsets of the dataset. How does this compare to other benchmarks? As we can see from Tab. 2 and Fig. 
3, the larger datasets tend to be those with lower-level reasoning according to Bloom's taxonomy. Their samples tend to be derived automatically from metadata tags or classification labels. Benchmarks with comparably high-level reasoning, like MMMU-Pro, have a similar scale to ours (1,700). In the language-only domain, the research-level reasoning benchmark GPQA has 448 samples. This trend reflects a clear, practical limitation: it is very time-consuming for humans to create difficult reasoning questions, and the pool of capable experts is small." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.505, + 0.936 + ], + "angle": 0, + "content": "2" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.089, + 0.092, + 0.907, + 0.184 + ], + "angle": 0, + "content": "Breadth of dataset attributes in microscopy We aimed for MicroVQA to cover a broad range of the microscopy discipline; for example, we have samples from all four relevant scales - tissues, cells, subcellular, atomic. However, each human annotator is an expert in a subset of microscopy and biology, so there are practical limits on dataset coverage. We, therefore, chose to focus on the most common imaging modalities - brightfield, fluorescence, electron - while skipping less common modalities like Raman spectroscopy. In terms of sample types, we emphasized those relevant to humans - human and mouse - while having only a few samples from rare organisms like bottlenose dolphin." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.2, + 0.907, + 0.277 + ], + "angle": 0, + "content": "Coverage of scientific reasoning Looking even more broadly, the motivation of this work is to advance reasoning in scientific research generally. However, we use microscopy in particular. This is because of a clear trade-off in breadth vs depth. Our dataset strikes a good balance, given that microscopy represents a large portion of visual data in biomedicine and biology. Future work can reuse our framework to generate reasoning VQA benchmarks in other fields of visual biomedicine and biology, such as in medicine [34] or ecology [93]." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.292, + 0.907, + 0.37 + ], + "angle": 0, + "content": "Resolving MCQ shortcuts A key contribution in this paper is the RefineBot for generating multiple choice questions (MCQs) that are too challenging to solve using text-only shortcuts. Our approach used the outputs of existing MLLMs as signal to make distractors harder. However, this cannot guarantee the removal of shortcuts; more generally, harder distractors may exist. Therefore, the next generation of LLMs may be able to identify and exploit new shortcuts. If that occurs, we can run RefineBot again using the new LLM, creating MicroVQA-2. This strategy could be used for other VQA evaluations." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.379, + 0.279, + 0.395 + ], + "angle": 0, + "content": "D. Benchmark details" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.404, + 0.389, + 0.42 + ], + "angle": 0, + "content": "D.1. Accessing MicroVQA benchmark" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.427, + 0.906, + 0.488 + ], + "angle": 0, + "content": "MicroVQA is an expert-curated benchmark for multimodal scientific reasoning, specifically in microscopy-based biological research. The dataset comprises 1,042 multiple-choice questions (MCQs) created by biological researchers to represent authentic scientific practice. 
Each MCQ is paired with an associated microscopy image and metadata describing the experimental context. A demonstration subset of 28 instances is available for initial exploration." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.488, + 0.906, + 0.563 + ], + "angle": 0, + "content": "The MicroVQA dataset is publicly accessible on HuggingFace at https://huggingface.co/datasets/jmhb/microvqa. It is distributed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license, allowing reuse and share-alike redistribution with attribution. The dataset is available in Apache Arrow and Parquet formats for efficient data handling. These formats support fast serialization, memory mapping, and streaming from the web, facilitating seamless integration into machine learning pipelines." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.57, + 0.365, + 0.585 + ], + "angle": 0, + "content": "D.2. Dataset Schema and Structure" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.593, + 0.906, + 0.623 + ], + "angle": 0, + "content": "The dataset schema defines the fields included for each data instance. Figure 6 provides a schema summary, while a sample data instance is detailed below 7." + }, + { + "type": "code", + "bbox": [ + 0.233, + 0.632, + 0.768, + 0.774 + ], + "angle": 0, + "content": "{ \"image_id\": datasets.Value(\"string\"), \"image\": datasets.Image(decode=True, id=None), \"label\": datasets.ClassLabel(), \"label_name\": datasets.Value(\"string\"), \"domain\": datasets.Value(\"string\"), \"subdomain\": datasets.Value(\"string\"), \"modality\": datasets.Value(\"string\"), \"submodality\": datasets.Value(\"string\"), \"stain\": datasets.Value(\"string\"), \"microns_perpixel\": datasets.Value(\"string\"), \"questions\": datasets.Value(\"string\"), }" + }, + { + "type": "image_caption", + "bbox": [ + 0.4, + 0.795, + 0.597, + 0.81 + ], + "angle": 0, + "content": "Figure 6. Example data schema." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.832, + 0.398, + 0.849 + ], + "angle": 0, + "content": "D.3. Training contamination mitigation" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.856, + 0.906, + 0.902 + ], + "angle": 0, + "content": "To mitigate contamination, all data in MicroVQA were sourced from original, unpublished microscopy images or open-access articles (e.g., CC-BY-licensed) published after January 2024. Additionally, canary strings were embedded in the dataset to detect potential contamination during fine-tuning or model evaluation." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.504, + 0.936 + ], + "angle": 0, + "content": "3" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.182, + 0.088, + 0.813, + 0.403 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.399, + 0.424, + 0.597, + 0.438 + ], + "angle": 0, + "content": "Figure 7. Example data instance." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.465, + 0.347, + 0.48 + ], + "angle": 0, + "content": "D.4. Benchmark attribute details" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.488, + 0.904, + 0.564 + ], + "angle": 0, + "content": "The MicroVQA benchmark is designed to evaluate multimodal reasoning capabilities in biomedical research contexts. The dataset incorporates diverse attributes such as organism, research subject, imaging modalities, and biological length scales (image scales) to ensure a comprehensive assessment1. 
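As a quick illustration of the access path described in D.1, a minimal sketch for loading the benchmark with the Hugging Face `datasets` library; the repository id is the one given in D.1, while the split name "train" is an assumption about the released layout:

```python
from datasets import load_dataset

# Repository id from D.1; "train" split is an assumption about the released layout.
ds = load_dataset("jmhb/microvqa", split="train")

print(len(ds))         # expected to be 1,042 MCQs
example = ds[0]
print(example.keys())  # fields follow the schema in Figure 6 (image, questions, modality, ...)
```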
These metadata not only enhance the scientific relevance but also enable analysis of reasoning across different biological and microscopy contexts. This section describes the methodology used to define, compute, and annotate these key attributes." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.565, + 0.904, + 0.653 + ], + "angle": 0, + "content": "The benchmark comprises 1,042 total questions, including 423 questions containing multiple images combined into panels. A collage of sample images is shown in 25. The average MCQ token lengths were computed using the o200k_base tokenizer, with separate calculations for the raw input and formatted multiple-choice questions. Imaging modalities are broadly categorized into light, fluorescence, and electron microscopy. Biological length scales span four levels: tissue (millimeter scale), cellular (micrometer scale), subcellular (nanometer scale), and atomic (angstrom scale), enabling reasoning across diverse structural hierarchies." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.656, + 0.904, + 0.729 + ], + "angle": 0, + "content": "To annotate attributes such as organism, specimen, and research subject, we manually reviewed random samples from the dataset and created a taxonomy to cover the most likely categories. A biomedical domain expert manually labeled these categories for 50 raw input question-answer pairs and performed initial prompt engineering. We used DSPy to optimize the prompt and few-shot examples on the labeled dataset. The input context included the taxonomy of organisms8 and research subjects 9." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.731, + 0.904, + 0.777 + ], + "angle": 0, + "content": "These annotations were the gold standard to guide DSPy prompt engineering for automated attribute labeling on the full dataset. Prompt instructions included fallback rules (e.g., tagging \"None\" for indeterminate attributes) to ensure accuracy and minimize hallucination. The optimized pipeline annotated all questions while ensuring the validity of the dataset attributes." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.787, + 0.29, + 0.801 + ], + "angle": 0, + "content": "D.5. Related benchmarks" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.81, + 0.907, + 0.903 + ], + "angle": 0, + "content": "In section 3.3 and Tab. 2 we compare MicroVQA to related scientific multimodal benchmarks, focusing on its intersection of high-difficulty and advanced reasoning questions. Beyond these key attributes, we examine the sources of the questions. Most benchmarks rely on existing question repositories, such as exams, web-based QA datasets, textbooks, or other MCQ datasets. This approach bypasses the need for MCQ generation methods, as the questions and options are pre-tested and curated, making these benchmarks more representative of established tasks. Another common strategy involves reformating classification datasets into MCQs using question templates, which increases task diversity but often limits scenario unique-" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.504, + 0.936 + ], + "angle": 0, + "content": "4" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.182, + 0.089, + 0.813, + 0.484 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.505, + 0.907, + 0.531 + ], + "angle": 0, + "content": "Figure 8. Examples of taxonomy classes used as context to LLM to assign an organism to a question. A YAML file with the full taxonomy will be released with the code." 
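The average MCQ token lengths reported in D.4 were computed with the o200k_base tokenizer; a minimal sketch of that computation, assuming the `tiktoken` package and a hypothetical list of formatted MCQ strings, is:

```python
import tiktoken

# o200k_base is the encoding named in D.4 for the token-length statistics.
enc = tiktoken.get_encoding("o200k_base")

def mean_token_length(texts: list[str]) -> float:
    """Average token count over a list of strings (raw inputs or formatted MCQs)."""
    return sum(len(enc.encode(t)) for t in texts) / len(texts)

# Hypothetical usage: separate averages for the raw inputs and the formatted MCQs.
# avg_raw = mean_token_length(raw_questions)
# avg_mcq = mean_token_length(formatted_mcqs)
```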
+ }, + { + "type": "text", + "bbox": [ + 0.089, + 0.561, + 0.907, + 0.621 + ], + "angle": 0, + "content": "ness and language variability. In contrast, MicroVQA is built by directly querying experts for challenging, domain-specific research questions. By starting with long form questions and answers, our approach requires an innovative MCQ generation method, ensuring uniquely tailored questions that push the boundaries of domain-specific reasoning and knowledge assessment." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.634, + 0.275, + 0.65 + ], + "angle": 0, + "content": "D.6. Bloom's taxonomy" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.658, + 0.907, + 0.718 + ], + "angle": 0, + "content": "Bloom's Taxonomy is a model for evaluating cognitive skill levels in educational assessments. It has six hierarchical levels of learning: recall, understand/comprehend, apply, analyze, evaluate, and create. Multiple-choice questions can test five of Bloom's levels but not the sixth level, \"create\". These levels provide a structured framework for assessing lower-order and higher-order thinking skills, which are essential for designing robust benchmarks like MicroVQA." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.719, + 0.906, + 0.825 + ], + "angle": 0, + "content": "Initial attempts to use LLMs to classify Bloom's levels for biomedical visual question answering (VQA) benchmarks revealed a systematic overestimation of cognitive skill levels. This discrepancy limits the naive use LLMs for automated Bloom's classification, perhaps due to the nuances of applying Bloom's taxonomy in domain-specific contexts. To ensure accurate classification of Bloom's level, a physician-scientist with NBME training in item writing and familiarity with Bloom's classification manually labeled Bloom's level for 200 examples from public MCQ questions, MicroVQA, MicroBench, and ScienceQA. These examples were used to fine-tune GPT-4o-mini (gpt-4o-mini-2024-07-18) to classify Bloom's level and align with expert judgments." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.826, + 0.906, + 0.903 + ], + "angle": 0, + "content": "The fine-tuned Bloom's classifier was applied to MicroVQA and numerous public science/biomedical benchmarks to assess the cognitive skills evaluated by each question. Each question was mapped to one of Bloom's levels, providing insights into the distribution of cognitive tasks across the benchmark. This analysis revealed that while existing benchmarks predominantly tested lower-order skills (e.g., recall and comprehension), MicroVQA has a greater emphasis on questions that require higher-order reasoning (e.g., analyzing, evaluating), filling a critical gap in multimodal scientific reasoning" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.504, + 0.936 + ], + "angle": 0, + "content": "5" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.207, + 0.089, + 0.791, + 0.495 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.515, + 0.907, + 0.543 + ], + "angle": 0, + "content": "Figure 9. Examples of taxonomy classes used as context to LLM to assign a research subject to a question. A YAML file with the full taxonomy will be released with the code." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.577, + 0.179, + 0.59 + ], + "angle": 0, + "content": "assessments." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.609, + 0.383, + 0.626 + ], + "angle": 0, + "content": "D.7. 
Methodology for proposing tasks" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.636, + 0.907, + 0.711 + ], + "angle": 0, + "content": "Here, we expand on the task proposal methodology introduced in Sec. 3.2. To define specific reasoning tasks aimed at advancing biomedical research, we conducted a comprehensive expert consultation process. Through structured interviews with nine Principal Investigators and postdocs specializing in microscopy research, including participants from both academia and industry, we developed a framework for tasks that would meaningfully integrate multimodal large language models (MLLMs) into microscopy research workflows." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.714, + 0.906, + 0.775 + ], + "angle": 0, + "content": "The task selection process followed specific criteria: tasks must utilize image-text inputs and text outputs suitable for MLLMs; they should require higher-order reasoning beyond simple image processing; and they should emphasize core experimental activities (experimentation, analysis, and action) rather than auxiliary tasks like literature review or writing [33]." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.777, + 0.906, + 0.838 + ], + "angle": 0, + "content": "Our methodology involved a two-phase interview process. The first phase consisted of initial one-hour individual sessions with eight research groups featuring open-ended, unstructured discussions about key challenges in microscopy experiments, potential applications of MLLMs, current workflows and unmet needs, and possible AI system integration points. One week later, we then conducted follow-up interviews to consolidate findings and build consensus." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.84, + 0.906, + 0.903 + ], + "angle": 0, + "content": "To avoid anchoring biases, we intentionally withheld predefined categories during initial interviews. This approach enabled unbiased task identification and allowed us to capture a broad spectrum of potential use cases. Through this iterative process, we identified three essential capabilities that align with the scientific life cycle for microscopy experiments: advanced image interpretation, hypothesis generation, and experimental design/hypothesis testing." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.926, + 0.506, + 0.937 + ], + "angle": 0, + "content": "6" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.091, + 0.091, + 0.37, + 0.108 + ], + "angle": 0, + "content": "D.8. Guidance for question creators" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.114, + 0.908, + 0.16 + ], + "angle": 0, + "content": "Question creators were given detailed guidance as pdfs, that we share at https://github.com/jmhb0/microvqa under folder benchmark/human.collection. During the submission process, we completed two rounds of quality control to ensure difficulty and alignment to the prompts, as described in Sec. 4.1." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.169, + 0.395, + 0.185 + ], + "angle": 0, + "content": "D.9. Microscopy image representations" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.192, + 0.907, + 0.252 + ], + "angle": 0, + "content": "MLLMs are trained primarily on three-channel RGB images, however microscopy images are more diverse. To ensure alignment with how common MLLMs process images, we required that each image artifact for a single question was represented as no more than six RGB images. 
Where multiple images were needed, they could be multiple image files, or a single image that is concatenated. For videos, users could submit up to six frames; for 3d, they could submit slices or z-projections." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.253, + 0.908, + 0.342 + ], + "angle": 0, + "content": "An important difference is multichannel images. Many fluorescence microscopy images are multichannel, where the image collection ensures that each channel represents a certain structure. For example channel 0 may be nucleus, and channel 1 may be mitochondria. There are a few approaches. Some questions represent this situation as separate grayscale images for each channel. Others show a 'composite image', which is a single image where one each channel is mapped to one of the RGB channels; e.g. a common visualization will put nucleus in blue and mitochondria in green. Some questions do both: grayscale images for each channel, along with a composite image merging them." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.343, + 0.907, + 0.372 + ], + "angle": 0, + "content": "Finally, we allowed screenshots from research papers or image software, since this is a realistic way that future LLM-users would use AI tools." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.385, + 0.321, + 0.403 + ], + "angle": 0, + "content": "E. MCQ generation details" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.41, + 0.393, + 0.426 + ], + "angle": 0, + "content": "E.1. Question collection process details" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.434, + 0.906, + 0.464 + ], + "angle": 0, + "content": "Data collection used a structured submission form shared with collaborators. The form required users to provide the following:" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.464, + 0.903, + 0.479 + ], + "angle": 0, + "content": "- Image or Image Set: Users uploaded microscopy images, optionally including multiple images for comparative questions." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.48, + 0.904, + 0.494 + ], + "angle": 0, + "content": "- Experiment Details: A brief description of the experimental setup, imaging modalities, and any relevant biological context." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.495, + 0.904, + 0.525 + ], + "angle": 0, + "content": "- Question Text: Each question was aligned with one of three predefined use cases: identifying interesting features, explaining mechanisms, or proposing next steps." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.525, + 0.821, + 0.539 + ], + "angle": 0, + "content": "- Answer and Distractors: Contributors provided a correct answer and biologically plausible incorrect answers." + }, + { + "type": "list", + "bbox": [ + 0.091, + 0.464, + 0.904, + 0.539 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.54, + 0.908, + 0.614 + ], + "angle": 0, + "content": "Quality control was performed in multiple rounds. To familiarize users with the task and data requirements, each user submitted an initial form with 4-6 questions for manual quality control and feedback. Feedback was provided to users on improving question specificity, ensuring the questions required image interpretation rather than relying solely on general knowledge. Contributors were encouraged to refine their submissions based on the feedback to better align with the tasks and goals of the MicroVQA benchmark." 
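To make the composite-image convention from D.9 concrete, here is a minimal sketch, assuming a NumPy channels-first stack and a hypothetical channel-to-color assignment such as nucleus in blue and mitochondria in green:

```python
import numpy as np

def to_composite(stack: np.ndarray, channel_to_rgb: dict[int, int]) -> np.ndarray:
    """Map a multichannel microscopy stack of shape (C, H, W) to one RGB composite.

    channel_to_rgb maps a channel index to an RGB plane index, e.g. {0: 2, 1: 1}
    places channel 0 (nucleus) in blue and channel 1 (mitochondria) in green.
    """
    _, h, w = stack.shape
    rgb = np.zeros((h, w, 3), dtype=np.float32)
    for ch, plane in channel_to_rgb.items():
        channel = stack[ch].astype(np.float32)
        # Rescale each channel to [0, 1] before writing it into its RGB plane.
        span = channel.max() - channel.min()
        rgb[..., plane] = (channel - channel.min()) / (span + 1e-8)
    return rgb

# Hypothetical usage for a two-channel fluorescence image:
# composite = to_composite(image_stack, {0: 2, 1: 1})
```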
+ }, + { + "type": "text", + "bbox": [ + 0.089, + 0.615, + 0.906, + 0.705 + ], + "angle": 0, + "content": "Users were encouraged to submit data from multiple sources, imaging modalities (e.g., fluorescence, electron microscopy), and biological scales (e.g., tissue, cellular, subcellular) to promote diversity. Contributors could upload microscopy images from their experiments or use CC-BY publicly available images. Common image sources included preprints or publications released after January 2024 (to mitigate contamination), or from established image repositories [14, 17, 18, 31, 72, 76]. This ensured that all images were either original or sourced from open-license publications, enabling their use in an open-source benchmark." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.714, + 0.424, + 0.73 + ], + "angle": 0, + "content": "E.2. Experiment on naive MCQ generation" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.737, + 0.908, + 0.784 + ], + "angle": 0, + "content": "In Sec. 4.2, we discuss how the naive approach to MCQ generation – simply zero-shot prompting GPT – leads to MCQs that are very easy for frontier models to solve. Which ultimately motivated our two-stage MCQ generation approach. Here, we discuss those experiments in more detail. We create a naive version of MicroVQA using a standard prompt [34, 82]:" + }, + { + "type": "text", + "bbox": [ + 0.107, + 0.803, + 0.571, + 0.816 + ], + "angle": 0, + "content": "You are an expert in molecular and cell biology, and in microscopy." + }, + { + "type": "text", + "bbox": [ + 0.107, + 0.823, + 0.845, + 0.844 + ], + "angle": 0, + "content": "I will give you an original biology-related question and its answer, your task is to rephrase an equivalent question with identical answer. The question related to an image, and we don't show the image." + }, + { + "type": "text", + "bbox": [ + 0.107, + 0.844, + 0.872, + 0.865 + ], + "angle": 0, + "content": "Meanwhile, I want to transfer this QA-pair into a multi-choice question. Please generate 5 incorrect options to construct the candidate options." + }, + { + "type": "text", + "bbox": [ + 0.11, + 0.873, + 0.195, + 0.886 + ], + "angle": 0, + "content": "{QUESTION}" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.505, + 0.936 + ], + "angle": 0, + "content": "7" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.111, + 0.104, + 0.18, + 0.117 + ], + "angle": 0, + "content": "{ANSWER}" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.136, + 0.907, + 0.196 + ], + "angle": 0, + "content": "Then, we evaluated this naive MicroVQA on the top closed source models with the image, and to study the over-reliance on language shortcuts, we evaluated a text-only version. In the text-only setting, we add the following to the prompt: \"If an image is mentioned, ignore this information and try your best to answer the question\". Tab. 5 shows the results of this experiment." + }, + { + "type": "table_caption", + "bbox": [ + 0.123, + 0.208, + 0.873, + 0.223 + ], + "angle": 0, + "content": "Table 5. Performance on MicroVQA MCQs with naive MCQ generation. We report overall VQA and VQA without the image." + }, + { + "type": "table", + "bbox": [ + 0.362, + 0.234, + 0.637, + 0.293 + ], + "angle": 0, + "content": "
| Model | VQA | VQA-no-image |
| --- | --- | --- |
| GPT-4o | 85.1 | 82.7 |
| Claude-3.5-Sonnet | 91.4 | 88.4 |
| Gemini-1.5-Pro | 88.5 | 82.4 |
" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.307, + 0.907, + 0.457 + ], + "angle": 0, + "content": "All models score over \\(80\\%\\) on questions that are from a specialized domain. We hypothesize that this reflects a problem with the non-trivial task of MCQ generation from raw VQAs for several reasons. First, the high 'VQA-no-image' scores across all models suggest that selecting the correct option does not require image understanding, despite the raw VQA samples being designed to incorporate image-based reasoning. An alternative explanation is that the questions themselves may be too easy for advanced models. Second, we test this alternate explanation by performing open evaluation on a subset of the raw VQA queries-- that is, we queried GPT-4o with the questions but without the options. Here, the models rarely gave good responses, even after careful prompting. This suggests that models are leveraging test-taking strategies to eliminate distractors [27]. Third, we examined the chain-of-thought responses in the VQA-no-image case, which suggested the models often eliminate distractors based on information from the text question. Therefore, we conclude that there are issues with the improper generation of the MCQs using the naive strategy." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.457, + 0.906, + 0.489 + ], + "angle": 0, + "content": "What exactly are the problems with MCQ generation? Qualitatively, the types of errors are the same as those we discuss in Appendix E.6, so we refer readers to that section to get a deeper understanding." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.496, + 0.459, + 0.513 + ], + "angle": 0, + "content": "E.3. Stage 1 MCQ generation (exam alignment)" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.52, + 0.906, + 0.64 + ], + "angle": 0, + "content": "The first stage transformed raw, open-ended questions and long-form answers into well-structured MCQs aligned with established principles of biomedical exam design. A physician-scientist trained in medical board exam item writing initiated the process by reviewing biomedical educational literature, NBME guidelines [64], and Bloom's taxonomy for assessing cognitive skills [5, 19, 89]. The expert manually converted 50 raw question-answer pairs into one-best-answer MCQs with biologically relevant distractors using the literature review and their prior training. This manual effort ensured that the MCQs preserved the original question's intent and scientific validity while adhering to educational standards. Distractors were crafted to require reasoning and prevent reliance on superficial cues like word association or general knowledge, thus maintaining the depth and complexity of the original content." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.641, + 0.906, + 0.776 + ], + "angle": 0, + "content": "To scale this process, the expert designed an initial LLM prompt for automated Stage 1 MCQ generation. This prompt and the manually curated training data were input for DSPy-based prompt optimization. Using the MIPROv2 optimizer, the prompt was refined within a Chain of Thought (CoT) framework combined with retrieval-augmented generation (RAG) pipelines powered by o1-mini. Optimization was guided by a custom metric evaluated by a separate LLM judge (o1-mini). 
The judge assessed each revised MCQ across three criteria: (1) semantic similarity to the original question and answer, (2) adherence to NBME formatting standards, and (3) minimization of extraneous information that could provide unintended clues. The optimization metric was weighted to prioritize preserving the original meaning and scientific validity, with lesser emphasis on formatting and extraneous text. Additional implementation details are provided in the source code." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.784, + 0.468, + 0.801 + ], + "angle": 0, + "content": "E.4. Stage 2 MCQ generation (RefineBot) details" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.808, + 0.906, + 0.854 + ], + "angle": 0, + "content": "The goal of RefineBot is to take MCQs that are too easy for MLLMs - especially where they are leveraging language-only shortcuts - and to rewrite them to make them more challenging. It is the third panel in Fig. 4. For more details on what types of improvements are achieved, see Appendix E.6. The complete code for RefineBot is released." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.871, + 0.906, + 0.903 + ], + "angle": 0, + "content": "A single RefineBot iteration Below, we show the pseudo-code for a single-question revision. The input is a question string, a list of options, and the index of the correct answer in the options. It returns a tuple (code, MCQ) where code is a" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.926, + 0.504, + 0.936 + ], + "angle": 0, + "content": "8" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.907, + 0.121 + ], + "angle": 0, + "content": "string on whether the run was successful, and MCQ is either the revised multiple choice question if the rewrite was successful or otherwise None." + }, + { + "type": "code", + "bbox": [ + 0.095, + 0.126, + 0.911, + 0.532 + ], + "angle": 0, + "content": "def revise_mcq(question: str, choices: list, correctidx: int, max_iters: int = 5):\n # Store original version\n original = {'question': question, 'choices': choices, 'correctidx': correctidx}\n reflections = []\nfor iter in range(max_iters):\n # Try answering without image\n result = evaluate_without_image(question, choices, correctidx)\nif not result.can_answer:\n code = 'SUCCEEDNESS_CHANGES' if iter == 0 else 'SUCCEED_REWRITE'\nreturn code, (question, choices, correctidx)\nif iter == max_iters - 1:\n return 'FAIL ITERATIONS', None\n# Reflect on strategies used to solve\nreflection = analyze WHY_answerable(result)\nreflections += reflection\n# Rewrite based on all reflections from all iterations\nnew_version = rewrite_question(reflections, original)\n# Verify the rewrite didn't change the meaning meaning preserved\nif not check_meaning_preserved(original, new_version):\n return 'FAIL_REWRITE', None\n# Update current version\nquestion = new_version['question']\nchoices = new_version['choices']\ncorrectidx = new_version['correctidx']\nreturn 'FAIL_UNKNOWN', None" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.538, + 0.907, + 0.583 + ], + "angle": 0, + "content": "This process is also described in the main text. One detail to note: the rewrite_question function receives the reflections from all prior iterations, not just the current one. In this way, the rewriting function can consider all possible strategies that have been identified. All LLM calls are to gpt-4o-2024-08-06." 
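For concreteness, a restatement of the `revise_mcq` pseudo-code above in runnable form. The four helpers (`evaluate_without_image`, `analyze_why_answerable`, `rewrite_question`, `check_meaning_preserved`) are placeholders for the gpt-4o-2024-08-06 prompt calls described in the surrounding text, and the return-code strings are approximations taken from the pseudo-code:

```python
def revise_mcq(question: str, choices: list[str], correct_idx: int, max_iters: int = 5):
    """One RefineBot run: test an MCQ without the image and rewrite it until the
    text-only solver fails, or an iteration / meaning-preservation check fails."""
    original = {"question": question, "choices": choices, "correct_idx": correct_idx}
    reflections = []
    for it in range(max_iters):
        # 1. Ask the LLM to answer using only the question stem and choices (no image).
        result = evaluate_without_image(question, choices, correct_idx)
        if not result.can_answer:
            # Already hard enough: untouched on the first pass, or successfully rewritten.
            code = "SUCCESS_NO_CHANGES" if it == 0 else "SUCCESS_REWRITE"
            return code, (question, choices, correct_idx)
        if it == max_iters - 1:
            return "FAIL_ITERATIONS", None
        # 2. Ask the LLM to reflect on the language-only strategies it used.
        reflections += analyze_why_answerable(result)
        # 3. Rewrite the stem and distractors using the reflections from all iterations so far.
        new_version = rewrite_question(reflections, original)
        # 4. Reject the rewrite if it changed the semantic meaning of the question/answer.
        if not check_meaning_preserved(original, new_version):
            return "FAIL_REWRITE", None
        question = new_version["question"]
        choices = new_version["choices"]
        correct_idx = new_version["correct_idx"]
    return "FAIL_UNKNOWN", None
```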
+ }, + { + "type": "text", + "bbox": [ + 0.09, + 0.584, + 0.907, + 0.613 + ], + "angle": 0, + "content": "To better illustrate the method, we step through a single example. This way, we can show the prompts and understand responses." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.614, + 0.722, + 0.629 + ], + "angle": 0, + "content": "1. The evaluation of a question with the evaluate_without_image prompt and response:" + }, + { + "type": "code", + "bbox": [ + 0.111, + 0.633, + 0.911, + 0.902 + ], + "angle": 0, + "content": "Prompt \nThe following question is supposed to be paired with an image. We will not provide the image, so answer to the best of your ability. \nIn an electron microscopy image of a mouse primary embryonic cortical neuron expressing full-length human mHTT (BACHD), \"eye-ball\" shaped features are observed in the top left corner that appear anomalous. What is the most likely cause of these features? Think step by step and then output the answer in the format of \"The answer is (X)\" at the end.. \n(a) Aggregation of human mHTT protein within the neuron. \n(b) Ice contamination from liquid nitrogen during sample preparation. \n(c) Formation of intracellular vesicles during neuron development. \n(d) Artifacts arising from excessive heavy metal staining. \nResponse (target answer is b) \nTo determine the cause of the \"eye-ball\" shaped features observed in the electron microscopy image, we can analyze each option step by step: \n(a) Aggregation of human mHTT protein within the neuron: Aggregations of proteins can indeed appear as anomalous structures under electron microscopy. However, these typically have irregular shapes and may not consistently resemble \"eye-ball\" shapes. \n(b) Ice contamination from liquid nitrogen during sample preparation: Ice contamination can occur during sample preparation, particularly if samples are not handled properly when flash-frozen. This can lead to" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.506, + 0.937 + ], + "angle": 0, + "content": "9" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.165, + 0.095, + 0.877, + 0.117 + ], + "angle": 0, + "content": "anomalous features appearing in microscopy images. These features might appear as circular or irregular shapes, potentially looking like \"eye-balls.\"" + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.125, + 0.899, + 0.157 + ], + "angle": 0, + "content": "(c) Formation of intracellular vesicles during neuron development: Intracellular vesicles are normal structures within neurons and other cells. Although they can appear during development, they are typically well characterized and documented, and are less likely to be described as anomalous in the context given." + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.165, + 0.904, + 0.197 + ], + "angle": 0, + "content": "(d) Artifacts arising from excessive heavy metal staining: Over staining can indeed produce artifacts in electron microscopy images. While staining artifacts are plausible, they might not specifically result in \"eye-ball\" shapes unless the staining process consistently targets specific round structures." 
+ }, + { + "type": "list", + "bbox": [ + 0.136, + 0.125, + 0.904, + 0.197 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.135, + 0.205, + 0.899, + 0.237 + ], + "angle": 0, + "content": "Considering the nature of the descriptors (\"eye-ball\" shaped, anomalous), option (b) ice contamination is often a source of unexpected features in microscopy images that could fit the appearance described, as ice can form circular shapes." + }, + { + "type": "text", + "bbox": [ + 0.135, + 0.246, + 0.892, + 0.268 + ], + "angle": 0, + "content": "Therefore, the most likely cause of these features is (b) Ice contamination from liquid nitrogen during sample preparation." + }, + { + "type": "text", + "bbox": [ + 0.135, + 0.276, + 0.261, + 0.286 + ], + "angle": 0, + "content": "The answer is (b)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.298, + 0.907, + 0.341 + ], + "angle": 0, + "content": "2. If the answer is incorrect, then the question is already hard enough, and we can exit. Otherwise, we call the analyze WHY Answerable function, which reflects on how this answer was achieved. It continues in the same conversation:" + }, + { + "type": "text", + "bbox": [ + 0.135, + 0.361, + 0.166, + 0.371 + ], + "angle": 0, + "content": "User" + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.382, + 0.247, + 0.392 + ], + "angle": 0, + "content": "That is correct." + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.401, + 0.858, + 0.422 + ], + "angle": 0, + "content": "Explain how you were able to answer the question without access to the image - only the question_stem and choices." + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.422, + 0.331, + 0.433 + ], + "angle": 0, + "content": "What strategies did you use?" + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.442, + 0.65, + 0.453 + ], + "angle": 0, + "content": "Then, briefly summarize these \"language-only strategies\" into a short list." + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.462, + 0.48, + 0.473 + ], + "angle": 0, + "content": "Finally, say whether your reasoning was closer to:" + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.473, + 0.712, + 0.483 + ], + "angle": 0, + "content": "- 'deduction': you answered with high confidence because you had enough information." + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.483, + 0.663, + 0.493 + ], + "angle": 0, + "content": "- 'abduction': you answered with lower confidence you made an informed guess." + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.502, + 0.37, + 0.513 + ], + "angle": 0, + "content": "Be concise in your final response." + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.532, + 0.2, + 0.542 + ], + "angle": 0, + "content": "Assistant" + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.552, + 0.837, + 0.574 + ], + "angle": 0, + "content": "To answer the question without the image, I relied entirely on analyzing the given information and the plausible characteristics of each option. Here's a breakdown of my strategy:" + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.583, + 0.892, + 0.614 + ], + "angle": 0, + "content": "1. **Keyword and Context Analysis**: I focused on key descriptors such as \"eye-ball\" shaped features and their being \"anomalous\" in an electron microscopy image of neurons expressing human mHTT. This guided me to consider characteristics commonly seen under electron microscopy." + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.623, + 0.872, + 0.654 + ], + "angle": 0, + "content": "2. 
**Knowledge of the Subject Matter**: I applied general knowledge about electron microscopy artifacts and protein behavior. This helped me weigh the likelihood of each option based on what typically causes specific image features." + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.663, + 0.897, + 0.694 + ], + "angle": 0, + "content": "3. **Process of Elimination**: By understanding the nature of each option-including potential artifacts or structures (such as vesicles or ice) in microscopy-I systematically assessed the validity of each scenario in relation to the described features." + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.703, + 0.877, + 0.725 + ], + "angle": 0, + "content": "4. **Logical Inference**: I inferred which scenario most commonly results in anomalies resembling \"eye-ball\" features, favoring the simplest explanation." + }, + { + "type": "list", + "bbox": [ + 0.136, + 0.583, + 0.897, + 0.725 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.734, + 0.397, + 0.745 + ], + "angle": 0, + "content": "Summary of \"language only strategies\":" + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.745, + 0.344, + 0.754 + ], + "angle": 0, + "content": "- Analyze keywords and context" + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.755, + 0.419, + 0.765 + ], + "angle": 0, + "content": "- Apply relevant subject matter knowledge" + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.765, + 0.33, + 0.775 + ], + "angle": 0, + "content": "- Use process of elimination" + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.775, + 0.31, + 0.785 + ], + "angle": 0, + "content": "- Make logical inferences" + }, + { + "type": "list", + "bbox": [ + 0.136, + 0.745, + 0.419, + 0.785 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.136, + 0.794, + 0.899, + 0.815 + ], + "angle": 0, + "content": "The reasoning was closer to 'abduction' as I made an informed guess based on patterns and typical causes, given that I lacked direct visual confirmation." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.826, + 0.907, + 0.872 + ], + "angle": 0, + "content": "3. Next, we call rewrite_question. Here, the templated variable {{conversations}} has the full conversations produced by the function analyze WHY_answerable in the previous call and from all other iterations. The {{n chatting}} just indicates how many conversations there are included:" + }, + { + "type": "text", + "bbox": [ + 0.134, + 0.879, + 0.767, + 0.89 + ], + "angle": 0, + "content": "Below, I will display {{n chatting}} chat conversations between a 'user' and an LLM 'assistant'." + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.925, + 0.509, + 0.937 + ], + "angle": 0, + "content": "10" + } + ], + [ + { + "type": "code", + "bbox": [ + 0.117, + 0.095, + 0.909, + 0.402 + ], + "angle": 0, + "content": "In each conversation\n- a user asks the assistant to answer a multichoice VQA question, however they do not provide the image. 
They only get the question_stem and choices.\n- the question_stem and correct answer are similar to other conversations, however the distractors are different\n- the assistant then answers correctly.\n- the user then asks the assistant to explain how it answered the question with only the text.\n- the assistant then summarizes what strategy they used to answer the question.\nAltogether, these conversations give examples of language shortcuts that need to be avoided when constructing the question_stem and choices.\nHere are the conversations:\n{{conversations}}\nYour task is to rewrite the question_stem and choices so that a different LLM 'assistant' cannot use the language-only strategies that were identified in these past conversations.\nInclude an 'explanation' about why your new set of distractors are better.\nYour revised choices should include the correct answer at the 'correct_index'.\nYour revised question_stem and choices should not significantly change the meaning of the question and correct answer.\nYou are free to change the distractors a lot to achieve this task.\nInclude {{nchoices}} choices.\nThe choices should not be made too different in length and detail to the original question stem and choices.\nNote that the question stem may contain important cues that cannot be removed. E.g. if a question asks about the \"green stain\" in an image, you cannot change it to \"stain\" because it introduces ambiguity.\nThe choices should not include the letters in front like (a), (b), ...\nReturn a json" + }, + { + "type": "code_caption", + "bbox": [ + 0.113, + 0.411, + 0.407, + 0.426 + ], + "angle": 0, + "content": "The output of this function is a JSON and is:" + }, + { + "type": "code", + "bbox": [ + 0.117, + 0.434, + 0.909, + 0.653 + ], + "angle": 0, + "content": "{ \"question_stem\": \"In an electron microscopy image of a neuron expressing human mHTT (BACHD) protein, distinct, spherical features are prominent and seem to be atypical. What is the most probable explanation for these features?\", \"choices\": [ \"Concentration of cellular waste products leading to distinct morphological anomalies\", \"Formation of air bubbles introduced during sample immobilization processes\", \"Ice contamination from liquid nitrogen during sample preparation\", \"Inclusion bodies formed by aggregated proteins expressing during stress response\", \"Cellular morphological changes during mitotic division\" ], \"correct_index\": 2, \"explanation\": \"The revised options introduce plausible scientific phenomena that could also result in spherical or distinct anomalies. This set of distractors is more varied and grounded in legitimate alternative explanations without leading the responder towards the correct answer through language patterns. For example, air bubbles and protein inclusion bodies might mimic sphere-like structures. Additionally, by adding an unrelated phenomenon like cellular changes during division, the set confuses the language pattern strategy previously used. Thus, the correct choice of \\(\" Ice\\) contamination from liquid nitrogen\"\\( \" stands out less and necessitates either visual confirmation or real understanding of sample preparation outcomes to identify correctly.\"" + }, + { + "type": "text", + "bbox": [ + 0.11, + 0.663, + 0.907, + 0.693 + ], + "angle": 0, + "content": "Notice that the response includes an explanation of why it made the changes, and the changes relate to what was in the reflection, adding interpretability." 
+ }, + { + "type": "text", + "bbox": [ + 0.091, + 0.693, + 0.771, + 0.709 + ], + "angle": 0, + "content": "4. Finally, we check that the rewrite has not changed the semantic meaning of the question and answer." + }, + { + "type": "code", + "bbox": [ + 0.134, + 0.718, + 0.9, + 0.9 + ], + "angle": 0, + "content": "Prompt \n\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\* \nBelow are two question-answer pairs. \nThe question-answer pairs are part of VQA triplets, and both pairs use the same image. \nAre these question-answer pairs semantically equivalent? \nOr are they significantly different? \nGive a true/false and also an explanation. \nQUESTION 1: \nIn an electron microscopy image of a mouse primary embryonic cortical neuron expressing full-length human mHTT (BACHD), \"eye-ball\" shaped features are observed in the top left corner that appear anomalous. What is the most likely cause of these features? \nANSWER 1: \nIce contamination from liquid nitrogen during sample preparation \nQUESTION 2:" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.925, + 0.508, + 0.937 + ], + "angle": 0, + "content": "11" + } + ], + [ + { + "type": "code", + "bbox": [ + 0.111, + 0.094, + 0.911, + 0.272 + ], + "angle": 0, + "content": "In an electron microscopy image of a neuron expressing human mHTT (BACHD) protein, distinct, spherical features are prominent and seem to be atypical. What is the most probable explanation for these features? \nANSWER 2: \nIce contamination from liquid nitrogen during sample preparation \n\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.279, + 0.907, + 0.325 + ], + "angle": 0, + "content": "5. If the meaning check step fails, then we exit with a fail as we notice that once the meaning deviates it will become more distorted with further iterations. Otherwise, we update the MCQ, and start the loop again. Where the first function call in the new loop will check whether this revised MCQ is now too hard or needs to be rewritten." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.344, + 0.907, + 0.406 + ], + "angle": 0, + "content": "Calibrating the preservation of meaning Since there is a risk that the rewriter will change the semantic meaning of the question and answer, it is crucial that the check_meaning_preserved is effective. For topics requiring domain expertise, this requires prompt engineering with a domain expert providing feedback. This is one of the key human bottlenecks in tuning the prompts in this system." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.424, + 0.907, + 0.501 + ], + "angle": 0, + "content": "Other sources of MCQ invalidation While the check_meaning_preserved function checks that the question and answer is correct, an additional risk is that the rewrite introduces distractors that are more correct than the target answer. In our manual review evaluations (we reviewed all 1,042 MCQs), this was very rare. 
However, if this were a significant problem in future systems, it may be addressable by tuning the rewriter bot, with an expert human evaluator in the prompt engineering loop." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.519, + 0.906, + 0.625 + ], + "angle": 0, + "content": "Extending RefineBot beyond language shortcuts We constructed RefineBot specifically to address the issue of language shortcuts – where the question is answerable without using the image. This is appropriate because we work with vision-centric VQA, meaning the questions should require image interpretation. However, the framework should work in other settings – namely language-only QA. This would require adjusting the prompts slightly. For example, the current reflection prompt asks how the question was successfully evaluated without using language; also, the current rewriter prompt directs the LLM to address language shortcuts in its rewrite. Therefore, adapting Refinebot to language-only MCQ should make these prompts more general." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.625, + 0.907, + 0.732 + ], + "angle": 0, + "content": "On the other hand, the method could also be updated for the more general VQA setting – to improve questions that do not have language shortcuts, but are easy for some other reason. Here, the evaluator and reflector agents, which are currently LLMs, could be switched for an MLLM, which also ingests the image (though this would be expensive for current state-of-the-art models). We hypothesize that current MLLMs are so weak at image perception, that this is not necessary for MicroVQA – that simply removing language shortcuts will suffice for creating a challenging benchmark. As MLLMs improve at perceiving microscopy images, this general VQA RefineBot could become more useful for ensuring that the benchmark remains challenging." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.75, + 0.905, + 0.81 + ], + "angle": 0, + "content": "Repeated RefineBot iterations A single run of RefineBot may exit as a FAIL, either because the rewrite changed the meaning of the question, or because the maximum number of iterations was reached. However, RefineBot can give very different outputs when changing the random seed, and we find that simply running again with a different random seed can lead to a successful run. Therefore, if the bot fails, we rerun it." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.811, + 0.907, + 0.903 + ], + "angle": 0, + "content": "If an MCQ exited RefineBot successfully, then the evaluator LLM was fooled by the question (got the answer incorrect). Yet, the question could still be suboptimal in detectable ways: the LLM could be capable of narrowing the MCQ down to two options, but have guessed the wrong option; moreover, this question could be easier for different models. To ensure it is difficult enough overall, we require that the MCQ is answered incorrectly for two random seeds from two different models: gpt-4o-2024-08-06 and claude-3-5-sonnet-20241022. If not, then it fails, and we run the RefineBot again with a different random seed." + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.925, + 0.509, + 0.937 + ], + "angle": 0, + "content": "12" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.904, + 0.136 + ], + "angle": 0, + "content": "We discuss in the experiments Sec. 5.1 how this process leads to a small evaluation bias against the models used as evaluators. 
That section shows that despite the bias, this process makes the MCQs generally harder for all models. Future versions of RefineBot could mitigate this by including a larger family of evaluator models." + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.138, + 0.79, + 0.153 + ], + "angle": 0, + "content": "Finally, some MCQs fail to pass the RefineBot check. In these cases, we simply keep the stage 1 MCQ." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.167, + 0.506, + 0.183 + ], + "angle": 0, + "content": "E.5. MCQ generation results: stage 1 exam alignment" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.192, + 0.906, + 0.268 + ], + "angle": 0, + "content": "The Stage 1 pipeline produced one-best-answer MCQs that preserved the content, testing focus, and validity of the expert-generated question-answer pairs. It generated an initial set of distractors following best practices outlined in educational and NBME guidelines. A key design goal of Stage 1 was to preserve the original question content without altering or introducing new information. Secondary objectives included ensuring proper formatting according to NBME guidelines and minimizing information that might provide clues to the correct answer without reference to the image." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.269, + 0.906, + 0.373 + ], + "angle": 0, + "content": "While the pipeline ensured alignment with these goals, preserving the original content occasionally resulted in MCQs containing extra information, such as image descriptions or specific biological terms (i.e., gene or protein name), that could inadvertently help narrow the options. For example, stage 1 question stems sometimes included protein names, disease descriptions, or descriptions of image features, which could reduce the challenge. Figure 11 illustrates Stage 1 outputs that are appropriately formatted but may preserve details from the original input, making questions less challenging. While the distractors were biologically plausible, some options were overly easy to eliminate, reducing the challenge. These limitations are addressed and refined in Stage 2." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.376, + 0.906, + 0.497 + ], + "angle": 0, + "content": "The o1-mini-2024-09-12 LLM judge, employed during DSPy optimization, compared the Stage 1 MCQs to the original inputs to evaluate (1) semantic similarity and (2) adherence to MCQ formatting. The results showed that \\(96\\%\\) of questions preserved the original meaning, and \\(100\\%\\) adhered to NBME formatting guidelines. Manual analysis of the \\(4\\%\\) labeled as at least partially dissimilar revealed that these cases typically involved open-ended original questions being reformulated into closed-ended MCQs. This transformation, a necessary step for creating one-best-answer MCQs, was not a failure but rather an expected outcome of Stage 1. In rare instances where the original submission included multiple correct answers, the output of Stage 1 retained one correct answer and excluded others to comply with the one-best-answer format. The LLM judge flagged these cases as partially dissimilar but these changes were essential for adhering to the MCQ structure." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.498, + 0.904, + 0.604 + ], + "angle": 0, + "content": "A common issue in MCQ design is that correct answers tend to be longer and more detailed than distractors. 
To assess this, we computed the ratio of tokens in incorrect options to those in the correct answer, where values \\(\\leq 1\\) indicate longer correct answers, and values \\(\\geq 1\\) suggest shorter correct answers. Stage 1 MCQs showed a ratio of 0.87, indicating that correct answers were only slightly longer than distractors. By comparison, naive MCQ generation yielded a ratio of 0.46, meaning correct answers were nearly twice as long as distractors, making them easier to identify. Additionally, naive MCQ generation preserved the original meaning in only \\(89\\%\\) of cases and met NBME formatting guidelines in just \\(70\\%\\), highlighting the efficacy of the Stage 1 process at preserving information and ensuring proper MCQ formatting." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.617, + 0.461, + 0.634 + ], + "angle": 0, + "content": "E.6. MCQ generation results: stage 2 RefineBot" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.642, + 0.906, + 0.747 + ], + "angle": 0, + "content": "The second stage of our MCQ generation method focuses on increasing the difficulty of the questions and options. The key idea is that we want to remove language shortcuts – information in the MCQ that allows answering the question without access to the image. To aid in understanding, we construct a toy example in Fig. 10 with the three types of language shortcuts that we identify. The questions should be paired with a stain that is mitochondria, and we want the VQA to require actually inferring that from the image. In shortcut 1, the text ‘gives away’ the image content so it’s trivial to answer the question. In shortcut 2: The LLM generates implausible or weak distractors. In shortcut 3: ‘Language bias’, is a known problem in VQA [29]." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.749, + 0.906, + 0.838 + ], + "angle": 0, + "content": "Moving to real MCQs, in Fig. 4 we illustrate the transformation of a question at each stage, highlighting issues and corresponding improvements. While stage 1 produces exam-style questions that preserve the QA objective, the wording of the questions and distractors often enable MLLMs to exploit shortcuts rather than demonstrate knowledge. To address this, stage 2 (RefineBot) employs a chain-of-thought process. We analyze additional examples of improvements achieved in stage 2, showing how RefineBot improves both the style and content. The key strategies correspond to the three shortcut types that we identified:" + }, + { + "type": "text", + "bbox": [ + 0.14, + 0.84, + 0.905, + 0.9 + ], + "angle": 0, + "content": "1. (Visual giveaway) Over-description of the image eliminates the need for image interpretation. For example, in Fig. 11 (middle row), stage 1 includes details that reveal the specifics of the image, allowing the model to propose common issues without analyzing the image. Stage 2 replaces these specifics with general terms in the question stem." + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.925, + 0.508, + 0.936 + ], + "angle": 0, + "content": "13" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.218, + 0.094, + 0.34, + 0.118 + ], + "angle": 0, + "content": "Language shortcut 1: Visual giveaway" + }, + { + "type": "text", + "bbox": [ + 0.186, + 0.126, + 0.365, + 0.161 + ], + "angle": 0, + "content": "Q: The green TOMM20 outer mitochondrial membrane protein localizes where?" 
+ }, + { + "type": "text", + "bbox": [ + 0.186, + 0.171, + 0.278, + 0.182 + ], + "angle": 0, + "content": "A) Mitochondria" + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.182, + 0.253, + 0.194 + ], + "angle": 0, + "content": "B) Nucleus" + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.194, + 0.227, + 0.205 + ], + "angle": 0, + "content": "C) ER" + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.205, + 0.237, + 0.218 + ], + "angle": 0, + "content": "D) Golgi" + }, + { + "type": "list", + "bbox": [ + 0.187, + 0.182, + 0.253, + 0.218 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.24, + 0.374, + 0.276 + ], + "angle": 0, + "content": "Why? The 'M' in \"TOMM20\" stands for 'mitochondria', so that's enough to answer the question." + }, + { + "type": "title", + "bbox": [ + 0.439, + 0.094, + 0.558, + 0.117 + ], + "angle": 0, + "content": "Language shortcut 2: Weak distractors" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.132, + 0.578, + 0.156 + ], + "angle": 0, + "content": "Q: What is the localization of the green stain?" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.165, + 0.498, + 0.177 + ], + "angle": 0, + "content": "A) Mitochondria" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.178, + 0.468, + 0.19 + ], + "angle": 0, + "content": "B) Butane" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.19, + 0.472, + 0.2 + ], + "angle": 0, + "content": "C) Quokka" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.201, + 0.458, + 0.212 + ], + "angle": 0, + "content": "D) Coatf" + }, + { + "type": "list", + "bbox": [ + 0.406, + 0.165, + 0.498, + 0.212 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.24, + 0.578, + 0.276 + ], + "angle": 0, + "content": "Why? The LLM generates distractors that are not plausible, and can easily be eliminated" + }, + { + "type": "title", + "bbox": [ + 0.658, + 0.094, + 0.778, + 0.118 + ], + "angle": 0, + "content": "Language shortcut 3: Language bias" + }, + { + "type": "text", + "bbox": [ + 0.624, + 0.126, + 0.801, + 0.161 + ], + "angle": 0, + "content": "Q: For a study on Parkinson's disease, we stain a target green. Where does it localize?" + }, + { + "type": "text", + "bbox": [ + 0.624, + 0.171, + 0.718, + 0.183 + ], + "angle": 0, + "content": "A) Mitochondria" + }, + { + "type": "text", + "bbox": [ + 0.624, + 0.183, + 0.692, + 0.194 + ], + "angle": 0, + "content": "B) Nucleus" + }, + { + "type": "text", + "bbox": [ + 0.624, + 0.195, + 0.666, + 0.205 + ], + "angle": 0, + "content": "C) ER" + }, + { + "type": "text", + "bbox": [ + 0.624, + 0.205, + 0.676, + 0.218 + ], + "angle": 0, + "content": "D) Golgi" + }, + { + "type": "list", + "bbox": [ + 0.624, + 0.171, + 0.718, + 0.218 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.621, + 0.228, + 0.8, + 0.287 + ], + "angle": 0, + "content": "Why? 'Mitochondria' is the most plausible answer because the question mentions Parkinson's disease, which frequently studies mitochondrial dysfunction" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.303, + 0.907, + 0.332 + ], + "angle": 0, + "content": "Figure 10. Three types of language shortcut relevant to MicroVQA. The target VQA has an image that is fluorescence microscopy stained with TOMM20 which would show a pattern consistent with visualizing mitochondria." + }, + { + "type": "text", + "bbox": [ + 0.14, + 0.358, + 0.907, + 0.403 + ], + "angle": 0, + "content": "2. 
(Weak distractors) Distractors unrelated to the specifics of the scenario described by the question. In Fig. 11 (bottom row), stage 2 increases distractor complexity by incorporating elements tied to the question's context, making them less predictable." + }, + { + "type": "text", + "bbox": [ + 0.14, + 0.403, + 0.907, + 0.451 + ], + "angle": 0, + "content": "3. (Language bias) Overly precise wording in the question stem: Precise phrasing can make it easy to identify the correct answer through pattern matching. For example, in Fig. 11 (top row), changing 'eye-ball shaped' to 'spherical' retains the question's intent while creating more challenging distractors referencing plausible alternative processes." + }, + { + "type": "list", + "bbox": [ + 0.14, + 0.358, + 0.907, + 0.451 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.925, + 0.509, + 0.937 + ], + "angle": 0, + "content": "14" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.091, + 0.088, + 0.905, + 0.643 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.655, + 0.908, + 0.684 + ], + "angle": 0, + "content": "Figure 11. Examples of changes to questions and options between stage 1 and stage 2 (RefineBot) of our MCQ generation method. In red, elements that need to be improved; in green, improvements." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.711, + 0.285, + 0.729 + ], + "angle": 0, + "content": "F. Experiments details" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.738, + 0.283, + 0.755 + ], + "angle": 0, + "content": "F.1. Evaluation prompts" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.763, + 0.908, + 0.794 + ], + "angle": 0, + "content": "All prompts except for LLaVA and LLaVA-Med used the chain-of-thought prompt template from the MMMU-Pro code [88]:" + }, + { + "type": "code", + "bbox": [ + 0.108, + 0.817, + 0.768, + 0.892 + ], + "angle": 0, + "content": "The following is a multiple choice question (with answers). Think step by step and then output the answer in the format of \"The answer is (X)\" at the end. {{QUESTION}} Options: {{CHOICES}}" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.925, + 0.509, + 0.937 + ], + "angle": 0, + "content": "15" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.111, + 0.093, + 0.337, + 0.107 + ], + "angle": 0, + "content": "An example complete question is:" + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.13, + 0.516, + 0.142 + ], + "angle": 0, + "content": "The following is a multiple choice question (with answers)." + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.142, + 0.755, + 0.152 + ], + "angle": 0, + "content": "Think step by step and then output the answer in the format of \"The answer is (X)\" at the end." + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.161, + 0.885, + 0.203 + ], + "angle": 0, + "content": "A fluorescence microscopy image displays cultured mouse primary neurons stained with a specific marker localized to mitochondria. The fluorescence signal exhibits an uneven distribution across the neuronal cell, and there is an absence of signal within the nucleus. What is the most probable cause for the uneven distribution of the mitochondrial signal?"
+ }, + { + "type": "text", + "bbox": [ + 0.11, + 0.212, + 0.167, + 0.222 + ], + "angle": 0, + "content": "Options:" + }, + { + "type": "text", + "bbox": [ + 0.126, + 0.223, + 0.886, + 0.242 + ], + "angle": 0, + "content": "(1): Mitochondrial transport along axonal and dendritic processes ensures uneven distribution based on cellular energy demands." + }, + { + "type": "text", + "bbox": [ + 0.126, + 0.242, + 0.885, + 0.253 + ], + "angle": 0, + "content": "(2): Differential expression of mitochondrial proteins leads to variable fluorescence signal across the neuron." + }, + { + "type": "text", + "bbox": [ + 0.126, + 0.253, + 0.885, + 0.263 + ], + "angle": 0, + "content": "(3): Mitochondrial biogenesis occurs at specific neuronal regions, causing localized fluorescence accumulation." + }, + { + "type": "text", + "bbox": [ + 0.126, + 0.263, + 0.885, + 0.272 + ], + "angle": 0, + "content": "(4): Mitochondrial anchoring to the cytoskeleton restricts their movement, resulting in uneven distribution." + }, + { + "type": "text", + "bbox": [ + 0.126, + 0.272, + 0.885, + 0.292 + ], + "angle": 0, + "content": "(5): Mitochondrial dynamics, including fission and fusion, allow mitochondria to move and distribute throughout the neuron." + }, + { + "type": "list", + "bbox": [ + 0.126, + 0.223, + 0.886, + 0.292 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.11, + 0.32, + 0.686, + 0.334 + ], + "angle": 0, + "content": "The answer is then extracted with the regex pattern: answer is \\((?([0-9])\\)?" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.336, + 0.906, + 0.396 + ], + "angle": 0, + "content": "For LLaVA-Med, we added an extra parsing strategy. This was because we observed that while LLaVA-Med often gave reasonable responses to questions, it failed to follow instructions well. First, it did not follow the CoT prompt consistently to think step by step. It also didn't use the right answer formatting: \"The answer is (X) \\\". Since we were more interested in biomedical reasoning adhering to VQA formatting, we decided to handle this model differently." + }, + { + "type": "text", + "bbox": [ + 0.11, + 0.398, + 0.638, + 0.412 + ], + "angle": 0, + "content": "Specifically, to encourage CoT reasoning, we fixed the prefix of the response to:" + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.413, + 0.859, + 0.427 + ], + "angle": 0, + "content": "Response: Let's think through this step by step and evaluate each option.." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.428, + 0.32, + 0.441 + ], + "angle": 0, + "content": "Then we added a suffix to the end:" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.443, + 0.852, + 0.473 + ], + "angle": 0, + "content": "After carefully considering all the options, the most likely answer is (1). Then we continued generation, and did regular regex parsing to get the final answer." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.474, + 0.906, + 0.549 + ], + "angle": 0, + "content": "We also find that a few models make minor but consistent errors with formatting the answer and elect to fix them. For example x-ai/grok-2-vision-1212 commonly adds two asterisks before the answer like \"The answer is \\(^{**}(4)^{**}\\)\" instead of \"The answer is (4)\" (GPT-4o also did this sometimes). Some other common errors that we saw but did not handle (because it would be difficult to do rigorously) was shown by meta-llama/llama-3.2-11b-vision-instruct: the response would write one of the MCQ options but not state its number." 
+ }, + { + "type": "title", + "bbox": [ + 0.091, + 0.562, + 0.231, + 0.576 + ], + "angle": 0, + "content": "F.2. Model details" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.587, + 0.906, + 0.616 + ], + "angle": 0, + "content": "Our main benchmarking experiments in Tab. 3 refer to models that have multiple versions. The exact model versions used for inference are shown in Tab. 6" + }, + { + "type": "table", + "bbox": [ + 0.143, + 0.628, + 0.854, + 0.874 + ], + "angle": 0, + "content": "
Model Name | API Endpoint | Source & Details
o1 | o1-2024-12-17 | OpenAI API
Claude-3.5-Sonnet* | anthropic/claude-3.5-sonnet-20240620 | Openrouter API
Gemini-Pro-1.5 | google/gemini-pro-1.5 | Openrouter API
Pixtral-Large | mistralai/pixtral-large-2411 | Openrouter API
Grok-2-Vision | x-ai/grok-2-vision-1212 | Openrouter API
Qwen-2-vl-72b-Instruct | qwen/qwen-2-vl-72b-instruct | Openrouter API
VILA1.5-40b | VILA1.5-40b | HuggingFace, local inference
GPT-4o | gpt-4o-2024-08-06 | OpenAI API
Llama-3.1-Nemotron-70b-Instruct | nvidia/llama-3.1-nemotron-70b-instruct | Openrouter API
Llama-3.2-90b-Vision-Instruct | meta-llama/llama-3.2-90b-vision-instruct | Openrouter API
Qwen-2-VL-7b | qwen/qwen-2-vl-7b-instruct | Openrouter API
Claude-3.5-Haiku | anthropic/claude-3.5-haiku | Openrouter API
Gemini-Flash-1.5-8b | google/gemini-flash-1.5-8b | Openrouter API
GPT-4o-mini | gpt-4o-mini-2024-07-18 | OpenAI API
Pixtral-12b | mistralai/pixtral-12b | Openrouter API
VILA1.5-13b | VILA1.5-13b | HuggingFace, local inference
Llama-3.2-11b-vision-instruct | meta-llama/llama-3.2-11b-vision-instruct | Openrouter API
LLaVA-Med-Mistral-7B | LLaVA-Med-Mistral-7B | HuggingFace, local inference
LLaVA-Mistral-7B | LLaVA-Mistral-7B | HuggingFace, local inference
" + }, + { + "type": "table_caption", + "bbox": [ + 0.347, + 0.883, + 0.651, + 0.897 + ], + "angle": 0, + "content": "Table 6. AI model API endpoints and their sources" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.925, + 0.51, + 0.937 + ], + "angle": 0, + "content": "16" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.089, + 0.092, + 0.908, + 0.152 + ], + "angle": 0, + "content": "For LLaVA and LLaVA-med, although the version number 1.6 is different from LLaVA-Med with 1.5, we do believe that LLaVA-Med was either trained from this LLaVA version, or from one that is very similar. They have identical architecture, and the version is very close; it is possible the LLaVA-Med copied the number incorrectly. The LLaVA-Med codebase or github issues does not clarify this at the time of writing." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.153, + 0.908, + 0.185 + ], + "angle": 0, + "content": "Note that for OpenRouter function calls to Anthropic, the date is not included in the string. We believe they use timestamps for older model versions, but not for the latest version. So reproducing it may require checking if the model has been updated." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.195, + 0.367, + 0.212 + ], + "angle": 0, + "content": "F.3. Human baseline on MicroVQA" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.219, + 0.744, + 0.234 + ], + "angle": 0, + "content": "To perform a human baseline, we recruited co-authors who created questions. The conditions were:" + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.239, + 0.388, + 0.253 + ], + "angle": 0, + "content": "- Questions were provided in a google form." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.254, + 0.474, + 0.269 + ], + "angle": 0, + "content": "- Participants did not view any question that they created." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.269, + 0.904, + 0.299 + ], + "angle": 0, + "content": "- Internet access was allowed, but LLM access was not. This is because realistic biological practice is knowledge-intensive, and may require looking up facts or references." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.3, + 0.722, + 0.314 + ], + "angle": 0, + "content": "- Time limit was 3 minutes per question. They were free to take as many breaks as they wanted." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.315, + 0.633, + 0.33 + ], + "angle": 0, + "content": "- They answered the multiple choice question without further written justification." + }, + { + "type": "list", + "bbox": [ + 0.09, + 0.239, + 0.904, + 0.33 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.334, + 0.908, + 0.38 + ], + "angle": 0, + "content": "These conditions – 3 minutes and internet access – give sufficient time for an expert to make a reasonable attempt. Since the score was far below \\(100\\%\\), we claim that these questions could be of value to real experts as an LLM chat assistant. Each participant only took a subset of the questions (about 100 each). The final score is the aggregate." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.391, + 0.5, + 0.407 + ], + "angle": 0, + "content": "F.4. More experiments on attribute-level information" + }, + { + "type": "image", + "bbox": [ + 0.368, + 0.431, + 0.645, + 0.655 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.089, + 0.668, + 0.908, + 0.697 + ], + "angle": 0, + "content": "Figure 12. 
Performance by image modality type for the best models: Gemini-1.5-Pro (closed source), VILA1.5-40B (open-source), and LlaVA-Med-Mistral-7B (medical)." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.714, + 0.907, + 0.837 + ], + "angle": 0, + "content": "We described our metadata features in Sec. 3, and we can now leverage them to benchmark models on different attributes. Fig. 5 (left) shows all sub-tasks, and shows that 'compare image sets', which is common in biology to compare 'treatment' and 'control' groups. All models perform relatively well, despite multi-image reasoning being understudied in MLLMs [78]. Fig. 5 (right) shows that for Gemini and LLaVa-Med, higher Bloom's levels - corresponding to more advanced reasoning - leads to worse performance. This supports the intuitive result that higher-level reasoning is more challenging - although VILA does not follow this trend. A final result in Fig. 12 shows that all models performed worse on cryo-electron microscopy images than brightfield or fluorescence microscopy. This may be due to cryo-EM's challenging features (low contrast, low signal-to-noise ratio) and their under-representation in pretraining datasets [93]." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.847, + 0.347, + 0.864 + ], + "angle": 0, + "content": "F.5. Language shortcut ablations" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.871, + 0.908, + 0.903 + ], + "angle": 0, + "content": "In Tab. 4 we do language shortcut ablations for some top-performing models and some lower-performing models - both no-image and choices-only ablations." + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.925, + 0.509, + 0.937 + ], + "angle": 0, + "content": "17" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.089, + 0.908, + 0.131 + ], + "angle": 0, + "content": "Table 7. Ablation study on MicroVQA MCQs. Column 2 is overall accuracy. Cols 3-4 are the no-image ablation accuracy and drop compared to overall accuracy. Cols 5-6 are the choices-only ablation accuracy and drop compared to overall accuracy. See the text for discussion." + }, + { + "type": "table", + "bbox": [ + 0.236, + 0.143, + 0.763, + 0.306 + ], + "angle": 0, + "content": "
 | Accuracy | No-image ablation |  | Choices-only ablation | 
 |  | Accuracy | Drop | Accuracy | Drop
o1 | 52.8 | 49.2 | -3.6 | 37.7 | -15.1
Claude-3.5-Sonnet | 51.7 | 46.0 | -5.8 | 44.0 | -7.7
Gemini-Pro-1.5 | 51.1 | 47.2 | -3.8 | 36.8 | -14.3
Pixtral-Large | 49.8 | 46.3 | -3.6 | 36.7 | -13.1
Grok-2-Vision | 48.4 | 46.3 | -2.1 | 40.5 | -7.9
GPT-4o-mini | 46.2 | 44.2 | -1.9 | 34.3 | -11.9
Pixtral-12b | 45.6 | 43.7 | -1.9 | 31.8 | -13.8
Random | 22.0 |  |  |  | 
" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.333, + 0.907, + 0.363 + ], + "angle": 0, + "content": "The no-image ablation is a common test in VQA evaluations[29, 74]. The MLLM only receives the text prompt, and we add the following sentence to the prompt template:" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.365, + 0.885, + 0.392 + ], + "angle": 0, + "content": "If an image is mentioned ignore this information and try your best to answer the question." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.394, + 0.904, + 0.424 + ], + "angle": 0, + "content": "The choices-only ablation has been explored in (non-visual) question answering [7]. The model does not receive the image or the question and must guess an answer from only the choices. We add this sentence to the prompt template:" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.425, + 0.895, + 0.455 + ], + "angle": 0, + "content": "However I will not give you the question text or the images, I will only give you the choices, so please try your best to answer the question." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.456, + 0.906, + 0.514 + ], + "angle": 0, + "content": "Both ablations lower the scores, but they are still above random. While this is initially surprising, it is actually consistent with other benchmarks. A key challenge in VQA benchmarks is for questions to be \"vision-centric\": that the question requires using the image features to answer. We still have very strong evidence that MicroVQA is vision-centric, which we discuss below." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.517, + 0.905, + 0.547 + ], + "angle": 0, + "content": "First, let's recall the language shortcuts by considering a VQA sample with an image of a green field, with the question \"what color is the grass\"." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.549, + 0.905, + 0.593 + ], + "angle": 0, + "content": "- 'Language bias' is well-known for enabling models to make educated guesses about MCQs without access to the image [29]. For the question \"what color is the grass?\", then without the image, \"green\" is a likely guess, although there are other possible answers, like \"brown\"." + }, + { + "type": "text", + "bbox": [ + 0.091, + 0.594, + 0.905, + 0.623 + ], + "angle": 0, + "content": "- 'Visual giveaway' means there are too many image details in the question, for example if the question were \"what color is the green grass?\" This removes any need to look at the image." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.624, + 0.905, + 0.654 + ], + "angle": 0, + "content": "- 'Weak distractors' gives other question choices that are easily eliminated. For example, for question \"what color is the grass?\", a bad distractors would be \"pink\". This also removes any need to look at the image." + }, + { + "type": "list", + "bbox": [ + 0.09, + 0.549, + 0.905, + 0.654 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.655, + 0.904, + 0.685 + ], + "angle": 0, + "content": "(In Appendix E.6, we also discussed language shortcuts with biology examples as it related to MCQ construction.) Now we argue that MicroVQA is still vision-centric despite the ablation results." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.687, + 0.906, + 0.808 + ], + "angle": 0, + "content": "1. Language bias allows cheating on vision-centric questions If answerable due to a 'language bias', the question may still require perception. 
A model may guess that the grass is green, but it is only a guess - the grass really could be brown [29]. Therefore correct answers under the 'no-image' ablation may still be vision-centric. On the other hand problems with 'visual giveaway' and 'weak distractor' shortcuts really do render the question not-vision-centric. The RefineBot method we employed in MCQ generation was designed to target language shortcut removal. Anecdotally, our qualitative analysis (from Appendix E.6) found that RefineBot tends to focus on fixing the visual giveaway and weak distractor shortcuts. Ideally we would perform a quantitative analysis to identify which language shortcuts are due to language bias versus the others, however analyzing these questions is very time-consuming." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.81, + 0.905, + 0.855 + ], + "angle": 0, + "content": "2. The unsolved questions are hard because of perception. The 'no-image' ablation only tells us about the 'easy' questions, but half the question is too hard for current models. Our qualitative analysis of 30 failure cases demonstrates that half of them made critical perception errors, which strongly suggests they require image understanding" + }, + { + "type": "list", + "bbox": [ + 0.09, + 0.687, + 0.906, + 0.855 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.856, + 0.906, + 0.902 + ], + "angle": 0, + "content": "In contrast to the 'vision-centric' claims from the Cambrian paper [74]: They show that many VQA benchmarks have no-image ablation scores only slightly underperforming the with-image scores, and they conclude that such VQA benchmarks \"do not properly measure vision-centric capabilities\". But that conclusion cannot be drawn from only the no-image ablation" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.925, + 0.509, + 0.937 + ], + "angle": 0, + "content": "18" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.907, + 0.152 + ], + "angle": 0, + "content": "(following our point 2 above). For benchmarks where MLLM's score far from \\(100\\%\\), the failures could mostly be due to poor image understanding. This is very plausible since visual capabilities in MLLMs are viewed as lagging behind language capabilities. Likewise, our experience with our microscopy data was that visual understanding was poor, but that language knowledge was impressive." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.153, + 0.907, + 0.197 + ], + "angle": 0, + "content": "In point 1 above, we argued that MLLMs can use language biases to pass the no-image ablation, even if they are vision-centric questions. This is actually in agreement with the claim from Cambrian that those questions don't test vision-centric capabilities." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.198, + 0.907, + 0.243 + ], + "angle": 0, + "content": "Finally, some non-vision centric questions are acceptable. The goal of MicroVQA is to source realistic problems from microscopy research. While this area does lean heavily on image analysis, not every question will realistically require perception, so this is acceptable." + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.925, + 0.509, + 0.937 + ], + "angle": 0, + "content": "19" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.091, + 0.091, + 0.238, + 0.106 + ], + "angle": 0, + "content": "F.6. 
Error analysis" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.114, + 0.908, + 0.145 + ], + "angle": 0, + "content": "Here we show samples from the manual qualitative error analysis (details in the main paper). To see all 30 examples, refer the github repo." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.163, + 0.267, + 0.178 + ], + "angle": 0, + "content": "F.6.1 Perception errors" + }, + { + "type": "image", + "bbox": [ + 0.107, + 0.204, + 0.391, + 0.508 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.106, + 0.516, + 0.368, + 0.532 + ], + "angle": 0, + "content": "Subject: Cell and molecular biology" + }, + { + "type": "image_caption", + "bbox": [ + 0.107, + 0.533, + 0.299, + 0.546 + ], + "angle": 0, + "content": "Bloom's level: 4 (Analysis)" + }, + { + "type": "image_caption", + "bbox": [ + 0.107, + 0.548, + 0.313, + 0.561 + ], + "angle": 0, + "content": "Specimen: mammalian cells" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.21, + 0.882, + 0.287 + ], + "angle": 0, + "content": "Q: A cryo-electron tomography (cryoET) slice of mammalian cells was acquired at a resolution of 1.3 Å and binned by 4 (resulting in a 5.3 Å per pixel). Within the tomogram, dark circular like structures are observed in an organelle. What is the modest likely identity of these structures?" + }, + { + "type": "text", + "bbox": [ + 0.419, + 0.301, + 0.547, + 0.316 + ], + "angle": 0, + "content": "A) Ribosomes X" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.317, + 0.843, + 0.332 + ], + "angle": 0, + "content": "B) Phase-separated condensates such as stress granules" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.333, + 0.525, + 0.347 + ], + "angle": 0, + "content": "C) Lysosomes" + }, + { + "type": "text", + "bbox": [ + 0.421, + 0.348, + 0.535, + 0.362 + ], + "angle": 0, + "content": "D) Peroxisomes" + }, + { + "type": "list", + "bbox": [ + 0.421, + 0.317, + 0.843, + 0.362 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.417, + 0.38, + 0.78, + 0.395 + ], + "angle": 0, + "content": "VLM Answer: Let me think about this step by step:" + }, + { + "type": "text", + "bbox": [ + 0.419, + 0.396, + 0.747, + 0.41 + ], + "angle": 0, + "content": "1) First, let's look at what we see in the image:" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.411, + 0.856, + 0.425 + ], + "angle": 0, + "content": "- A cryo-electron tomography slice showing cellular structures" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.426, + 0.674, + 0.439 + ], + "angle": 0, + "content": "- Circular-like dark structures visible" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.441, + 0.761, + 0.456 + ], + "angle": 0, + "content": "- The resolution is quite high (5.3 Å after binning)" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.457, + 0.856, + 0.47 + ], + "angle": 0, + "content": "2) Let's consider the size and appearance of these structures:" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.472, + 0.806, + 0.486 + ], + "angle": 0, + "content": "- They appear as dense, small, roughly circular objects" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.487, + 0.878, + 0.501 + ], + "angle": 0, + "content": "- The size is consistent with ribosomes (approximately \\(25 - 30\\mathrm{nm}\\))" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.502, + 0.824, + 0.53 + ], + "angle": 0, + "content": "- They show the characteristic electron density pattern of ribosomes" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.532, + 0.624, + 0.546 + ], + 
"angle": 0, + "content": "3) Let's analyze each option:" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.547, + 0.82, + 0.562 + ], + "angle": 0, + "content": "- Ribosomes: Match the size, shape, and density pattern" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.563, + 0.841, + 0.591 + ], + "angle": 0, + "content": "- Phase-separated condensates: Usually less defined, more amorphous" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.593, + 0.819, + 0.607 + ], + "angle": 0, + "content": "- Lysosomes: Much larger, membrane-bound organelles" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.608, + 0.819, + 0.622 + ], + "angle": 0, + "content": "- Peroxisomes: Also larger, membrane-bound organelles" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.623, + 0.621, + 0.636 + ], + "angle": 0, + "content": "4) Additional considerations:" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.638, + 0.834, + 0.652 + ], + "angle": 0, + "content": "- Ribosomes are often found associated with mitochondria" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.654, + 0.834, + 0.667 + ], + "angle": 0, + "content": "- The resolution is sufficient to resolve ribosomal structures" + }, + { + "type": "text", + "bbox": [ + 0.419, + 0.669, + 0.847, + 0.697 + ], + "angle": 0, + "content": "- The contrast and density are typical of ribosome imaging in cryoET" + }, + { + "type": "list", + "bbox": [ + 0.419, + 0.396, + 0.878, + 0.697 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.699, + 0.541, + 0.713 + ], + "angle": 0, + "content": "The answer is (1)" + }, + { + "type": "title", + "bbox": [ + 0.42, + 0.734, + 0.585, + 0.748 + ], + "angle": 0, + "content": "Error type: Perception" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.749, + 0.776, + 0.764 + ], + "angle": 0, + "content": "Error reason: These are granules characterized by" + }, + { + "type": "text", + "bbox": [ + 0.42, + 0.765, + 0.887, + 0.794 + ], + "angle": 0, + "content": "heterogeneous, electron-dense aggregates. Ribosomes would be smaller, more regular in shape, and not have as high contrast." + }, + { + "type": "image_caption", + "bbox": [ + 0.468, + 0.817, + 0.529, + 0.83 + ], + "angle": 0, + "content": "Figure 13" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.925, + 0.509, + 0.937 + ], + "angle": 0, + "content": "20" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.099, + 0.094, + 0.381, + 0.277 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.102, + 0.287, + 0.2, + 0.299 + ], + "angle": 0, + "content": "Subject: Anatomy" + }, + { + "type": "image_caption", + "bbox": [ + 0.104, + 0.299, + 0.242, + 0.31 + ], + "angle": 0, + "content": "Bloom's level: 4 (Analysis)" + }, + { + "type": "image_caption", + "bbox": [ + 0.104, + 0.31, + 0.318, + 0.321 + ], + "angle": 0, + "content": "Specimen: gastrointestinal lesion biopsy" + }, + { + "type": "text", + "bbox": [ + 0.404, + 0.094, + 0.885, + 0.128 + ], + "angle": 0, + "content": "Q: A hematoxylin and eosin (H&E)-stained micrograph shows cellular formations within a gastrointestinal tissue sample with unique organization and cellular patterns. What pattern and morphology do these cells most closely align with?" 
+ }, + { + "type": "title", + "bbox": [ + 0.404, + 0.138, + 0.84, + 0.16 + ], + "angle": 0, + "content": "A) Bundles of elongated cells with pale pink cytoplasm and uniform oval nuclei in a crisscross pattern" + }, + { + "type": "text", + "bbox": [ + 0.405, + 0.16, + 0.816, + 0.171 + ], + "angle": 0, + "content": "B) Circular groups of cells with mucin in central spaces and small, dark nuclei" + }, + { + "type": "text", + "bbox": [ + 0.405, + 0.172, + 0.81, + 0.182 + ], + "angle": 0, + "content": "C) Dense arrays of polygonal cells with small nucleoli and vacuolated cytoplasm" + }, + { + "type": "text", + "bbox": [ + 0.405, + 0.182, + 0.739, + 0.192 + ], + "angle": 0, + "content": "D) Rows of cells with conspicuous cytoplasm and rounded nuclei" + }, + { + "type": "text", + "bbox": [ + 0.405, + 0.193, + 0.888, + 0.204 + ], + "angle": 0, + "content": "E) Diffused arrangement of cells with transparent cytoplasm and irregularly clustered chromatin" + }, + { + "type": "list", + "bbox": [ + 0.405, + 0.16, + 0.888, + 0.204 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.406, + 0.221, + 0.649, + 0.232 + ], + "angle": 0, + "content": "VLM Answer: Let me analyze this step by step:" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.232, + 0.626, + 0.242 + ], + "angle": 0, + "content": "1) Looking at the H&E-stained micrograph:" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.243, + 0.705, + 0.253 + ], + "angle": 0, + "content": "- It shows a circular cross-section of gastrointestinal tissue" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.254, + 0.705, + 0.264 + ], + "angle": 0, + "content": "- There's a clear organizational pattern along the periphery" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.265, + 0.745, + 0.276 + ], + "angle": 0, + "content": "- The center shows pink-stained connective tissue (lamina propria)" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.277, + 0.612, + 0.286 + ], + "angle": 0, + "content": "2) Focusing on the cellular arrangement:" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.287, + 0.62, + 0.297 + ], + "angle": 0, + "content": "- The cells are arranged in circular groups" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.298, + 0.617, + 0.308 + ], + "angle": 0, + "content": "- These groups form glandular structures" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.309, + 0.712, + 0.319 + ], + "angle": 0, + "content": "- There are clear central spaces (lumens) in these structures" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.32, + 0.734, + 0.33 + ], + "angle": 0, + "content": "- The nuclei appear dark and are located at the base of the cells" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.331, + 0.566, + 0.341 + ], + "angle": 0, + "content": "3) Comparing with the options:" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.342, + 0.782, + 0.351 + ], + "angle": 0, + "content": "- Option 1 describes elongated cells in crisscross pattern \n- doesn't match" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.353, + 0.86, + 0.373 + ], + "angle": 0, + "content": "- Option 2 describes circular groups with mucin in central spaces and small, dark nuclei - matches perfectly" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.375, + 0.747, + 0.385 + ], + "angle": 0, + "content": "- Option 3 describes polygonal cells with vacuoles - doesn't match" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.386, + 0.664, + 0.396 + ], + "angle": 0, + "content": "- Option 4 describes rows of cells - doesn't match" 
+ }, + { + "type": "text", + "bbox": [ + 0.406, + 0.397, + 0.709, + 0.407 + ], + "angle": 0, + "content": "- Option 5 describes diffused arrangement - doesn't match" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.408, + 0.698, + 0.418 + ], + "angle": 0, + "content": "4) The image shows typical intestinal glands (crypts) with:" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.419, + 0.524, + 0.429 + ], + "angle": 0, + "content": "- Circular arrangement" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.43, + 0.582, + 0.44 + ], + "angle": 0, + "content": "- Central lumens containing mucin" + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.441, + 0.624, + 0.45 + ], + "angle": 0, + "content": "- Small, dark nuclei at the base of the cells" + }, + { + "type": "list", + "bbox": [ + 0.406, + 0.232, + 0.86, + 0.45 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.406, + 0.451, + 0.495, + 0.461 + ], + "angle": 0, + "content": "The answer is (2)" + }, + { + "type": "title", + "bbox": [ + 0.402, + 0.48, + 0.52, + 0.491 + ], + "angle": 0, + "content": "Error type: Perception" + }, + { + "type": "text", + "bbox": [ + 0.401, + 0.492, + 0.894, + 0.514 + ], + "angle": 0, + "content": "Error reason: No. The intestinal glands are not the lesion. There is a pink spindle cell lesion with a fascicular arrangement of cells" + }, + { + "type": "image_caption", + "bbox": [ + 0.468, + 0.532, + 0.529, + 0.545 + ], + "angle": 0, + "content": "Figure 14" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.925, + 0.507, + 0.937 + ], + "angle": 0, + "content": "21" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.095, + 0.09, + 0.374, + 0.249 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.095, + 0.255, + 0.374, + 0.418 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.427, + 0.213, + 0.438 + ], + "angle": 0, + "content": "Subject: Pathology" + }, + { + "type": "image_caption", + "bbox": [ + 0.104, + 0.439, + 0.246, + 0.45 + ], + "angle": 0, + "content": "Bloom's level: 4 (Analysis)" + }, + { + "type": "image_caption", + "bbox": [ + 0.104, + 0.451, + 0.275, + 0.461 + ], + "angle": 0, + "content": "Specimen: schwannoma tissue" + }, + { + "type": "text", + "bbox": [ + 0.386, + 0.096, + 0.892, + 0.131 + ], + "angle": 0, + "content": "Q: An H&E-stained image of a nerve sheath tumor displays unique cell arrangements and marked changes within its structure. Given these observations, what is the most likely reason for the observed morphological alternations?" 
+ }, + { + "type": "title", + "bbox": [ + 0.387, + 0.141, + 0.623, + 0.153 + ], + "angle": 0, + "content": "A) Indication of rapid cellular proliferation X" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.153, + 0.609, + 0.164 + ], + "angle": 0, + "content": "B) Sign of tumor shrinkage post-treatment" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.165, + 0.654, + 0.175 + ], + "angle": 0, + "content": "C) Result of genetic changes within the tumor cells" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.176, + 0.676, + 0.187 + ], + "angle": 0, + "content": "D) Outcome of a chronic process affecting the tumor" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.187, + 0.601, + 0.198 + ], + "angle": 0, + "content": "E) Sign of metastasis to another location" + }, + { + "type": "list", + "bbox": [ + 0.388, + 0.153, + 0.676, + 0.198 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.386, + 0.208, + 0.638, + 0.22 + ], + "angle": 0, + "content": "VLM Answer: Let me analyze this step by step:" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.221, + 0.668, + 0.232 + ], + "angle": 0, + "content": "1) Looking at the H&E-stained images, I can observe:" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.232, + 0.536, + 0.242 + ], + "angle": 0, + "content": "- Dense cellular proliferation" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.243, + 0.623, + 0.254 + ], + "angle": 0, + "content": "- Distinctive \"herringbone\" or swirling pattern" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.255, + 0.588, + 0.265 + ], + "angle": 0, + "content": "- High cellularity with numerous nuclei" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.266, + 0.574, + 0.276 + ], + "angle": 0, + "content": "- Evidence of active growth pattern" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.277, + 0.543, + 0.288 + ], + "angle": 0, + "content": "2) Key characteristics visible:" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.289, + 0.575, + 0.299 + ], + "angle": 0, + "content": "- The cells are arranged in fascicles" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.3, + 0.619, + 0.31 + ], + "angle": 0, + "content": "- There's a high nuclear to cytoplasmic ratio" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.311, + 0.617, + 0.322 + ], + "angle": 0, + "content": "- The pattern suggests active tumor growth" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.323, + 0.604, + 0.332 + ], + "angle": 0, + "content": "- No signs of necrosis or treatment effect" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.333, + 0.599, + 0.344 + ], + "angle": 0, + "content": "- No obvious signs of metastatic spread" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.345, + 0.526, + 0.355 + ], + "angle": 0, + "content": "3) Analysis of the options:" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.356, + 0.715, + 0.367 + ], + "angle": 0, + "content": "- Option 1 matches with the high cellularity and growth pattern" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.368, + 0.687, + 0.378 + ], + "angle": 0, + "content": "- Option 2 doesn't fit as there's no evidence of shrinkage." 
+ }, + { + "type": "text", + "bbox": [ + 0.388, + 0.379, + 0.706, + 0.389 + ], + "angle": 0, + "content": "- Option 3 while possible, isn't directly evident in morphology" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.39, + 0.685, + 0.401 + ], + "angle": 0, + "content": "- Option 4 doesn't match the active growth pattern seen" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.402, + 0.706, + 0.412 + ], + "angle": 0, + "content": "- Option 5 isn't supported by the histological features shown" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.413, + 0.77, + 0.423 + ], + "angle": 0, + "content": "4) The morphological features strongly suggest active tumor growth with:" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.424, + 0.563, + 0.433 + ], + "angle": 0, + "content": "- Characteristic fascicular pattern" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.435, + 0.503, + 0.445 + ], + "angle": 0, + "content": "- High cellular density" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.446, + 0.576, + 0.456 + ], + "angle": 0, + "content": "- Well-preserved tumor architecture" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.457, + 0.538, + 0.468 + ], + "angle": 0, + "content": "- Signs of active proliferation" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.469, + 0.478, + 0.479 + ], + "angle": 0, + "content": "The answer is (1)" + }, + { + "type": "list", + "bbox": [ + 0.388, + 0.221, + 0.77, + 0.479 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.388, + 0.489, + 0.51, + 0.501 + ], + "angle": 0, + "content": "Error type: Perception" + }, + { + "type": "text", + "bbox": [ + 0.386, + 0.501, + 0.886, + 0.535 + ], + "angle": 0, + "content": "Error reason: No. One image shows cystic change, which can be seen in chronic, longstanding schwannomas. The other shows Verocay bodies and fascicular growth, which do not indicate rapid cellular proliferation." + }, + { + "type": "image_caption", + "bbox": [ + 0.468, + 0.553, + 0.529, + 0.566 + ], + "angle": 0, + "content": "Figure 15" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.925, + 0.508, + 0.936 + ], + "angle": 0, + "content": "22" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.104, + 0.09, + 0.379, + 0.252 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.104, + 0.258, + 0.379, + 0.43 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.437, + 0.211, + 0.448 + ], + "angle": 0, + "content": "Subject: Physiology" + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.448, + 0.284, + 0.459 + ], + "angle": 0, + "content": "Bloom's level: 2 (Comprehension)" + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.459, + 0.34, + 0.481 + ], + "angle": 0, + "content": "Specimen: mouse liver tissue (wounded and unwounded)" + }, + { + "type": "text", + "bbox": [ + 0.4, + 0.095, + 0.872, + 0.14 + ], + "angle": 0, + "content": "Q: In a fluorescence microscopy image comparing wounded and unwounded liver tissues stained to visualize markers of immune response – neutrophil derived migrasomes (Ly6G, yellow) and platelets (CD41, magenta), what is most likely observed about the distribution of these markers in unwounded tissue?" 
+ }, + { + "type": "title", + "bbox": [ + 0.401, + 0.15, + 0.731, + 0.161 + ], + "angle": 0, + "content": "A) Slight overlap without any notable biological consequences" + }, + { + "type": "text", + "bbox": [ + 0.401, + 0.161, + 0.709, + 0.172 + ], + "angle": 0, + "content": "B) Substantial overlap implying persistent immune activation" + }, + { + "type": "text", + "bbox": [ + 0.401, + 0.172, + 0.758, + 0.183 + ], + "angle": 0, + "content": "C) No overlap, indicating complete segregation of cellular functions X" + }, + { + "type": "text", + "bbox": [ + 0.401, + 0.183, + 0.697, + 0.195 + ], + "angle": 0, + "content": "D) Randomized distribution suggesting passive circulation" + }, + { + "type": "text", + "bbox": [ + 0.401, + 0.195, + 0.705, + 0.206 + ], + "angle": 0, + "content": "E) Significant separation highlighting cellular non-interaction" + }, + { + "type": "list", + "bbox": [ + 0.401, + 0.161, + 0.758, + 0.206 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.401, + 0.215, + 0.665, + 0.226 + ], + "angle": 0, + "content": "VLM Answer: Let me think about this step by step:" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.227, + 0.62, + 0.238 + ], + "angle": 0, + "content": "1) The image shows two types of staining:" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.238, + 0.661, + 0.248 + ], + "angle": 0, + "content": "- Yellow (Ly6G) for neutrophil-derived migrasomes" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.248, + 0.559, + 0.259 + ], + "angle": 0, + "content": "- Magenta (CD41) for platelets" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.26, + 0.73, + 0.271 + ], + "angle": 0, + "content": "2) Looking at the unwounded tissue (presumably the left image):" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.271, + 0.633, + 0.281 + ], + "angle": 0, + "content": "- There are distinct yellow dots (migrasomes)" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.282, + 0.616, + 0.292 + ], + "angle": 0, + "content": "- There are magenta structures (platelets)" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.293, + 0.617, + 0.303 + ], + "angle": 0, + "content": "- The distribution appears fairly separated" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.304, + 0.547, + 0.314 + ], + "angle": 0, + "content": "- The markers rarely overlap" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.314, + 0.61, + 0.325 + ], + "angle": 0, + "content": "3) This makes biological sense because:" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.326, + 0.882, + 0.336 + ], + "angle": 0, + "content": "- In healthy/unwounded tissue, immune cells and platelets typically maintain their distinct roles" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.337, + 0.693, + 0.347 + ], + "angle": 0, + "content": "- Without injury, there's no strong stimulus for interaction" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.348, + 0.738, + 0.358 + ], + "angle": 0, + "content": "- The cells are likely performing their normal surveillance functions" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.359, + 0.53, + 0.369 + ], + "angle": 0, + "content": "4) Analyzing the options:" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.37, + 0.711, + 0.38 + ], + "angle": 0, + "content": "- Option 1 suggests slight overlap - more than what's visible" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.381, + 0.627, + 0.391 + ], + "angle": 0, + "content": "- Option 2 suggests high overlap - not seen" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.392, + 0.728, 
+ 0.402 + ], + "angle": 0, + "content": "- Option 3 indicates complete segregation - matches the image" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.403, + 0.789, + 0.413 + ], + "angle": 0, + "content": "- Option 4 suggests random distribution - the pattern looks more organized" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.414, + 0.69, + 0.424 + ], + "angle": 0, + "content": "- Option 5 also suggests separation - similar to option 3" + }, + { + "type": "list", + "bbox": [ + 0.402, + 0.227, + 0.882, + 0.424 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.401, + 0.425, + 0.874, + 0.445 + ], + "angle": 0, + "content": "5) Option 3 best matches what we observe: \"No overlap, indicating complete segregation of cellular functions\"" + }, + { + "type": "text", + "bbox": [ + 0.402, + 0.446, + 0.491, + 0.457 + ], + "angle": 0, + "content": "The answer is (3)" + }, + { + "type": "title", + "bbox": [ + 0.399, + 0.467, + 0.519, + 0.478 + ], + "angle": 0, + "content": "Error type: Perception" + }, + { + "type": "text", + "bbox": [ + 0.398, + 0.479, + 0.879, + 0.49 + ], + "angle": 0, + "content": "Error reason: The markers rarely overlap, but there is very focal colocalization in some puncta." + }, + { + "type": "text", + "bbox": [ + 0.398, + 0.49, + 0.868, + 0.511 + ], + "angle": 0, + "content": "The selected answer argues for \"complete segregation of cellular functions\", which is a too extreme or absolute conclusion based on the image." + }, + { + "type": "image_caption", + "bbox": [ + 0.468, + 0.53, + 0.53, + 0.543 + ], + "angle": 0, + "content": "Figure 16" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.925, + 0.508, + 0.937 + ], + "angle": 0, + "content": "23" + } + ], + [ + { + "type": "image_caption", + "bbox": [ + 0.092, + 0.092, + 0.271, + 0.107 + ], + "angle": 0, + "content": "F.6.2 Knowledge errors" + }, + { + "type": "image", + "bbox": [ + 0.1, + 0.133, + 0.315, + 0.272 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.329, + 0.138, + 0.353, + 0.152 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.365, + 0.133, + 0.886, + 0.182 + ], + "angle": 0, + "content": "Q: A double immunofluorescence experiment was conducted on nerve cells, highlighting Sema4B (red marker) and Sox9, an astrocyte marker (green marker). Sema4B was found primarily surrounding the nuclear area dispersed in the cytoplasm. Based on this pattern, what potential role might Sema4B play in these cells?" + }, + { + "type": "text", + "bbox": [ + 0.365, + 0.192, + 0.834, + 0.204 + ], + "angle": 0, + "content": "A) Sema4B contributes to the regulation of metabolic pathways within the cytoplasm." + }, + { + "type": "text", + "bbox": [ + 0.367, + 0.205, + 0.716, + 0.216 + ], + "angle": 0, + "content": "B) Sema4B is associated with intracellular trafficking processes." + }, + { + "type": "text", + "bbox": [ + 0.367, + 0.217, + 0.815, + 0.228 + ], + "angle": 0, + "content": "C) Sema4B acts as a signaling molecule interacting with cell surface receptors." + }, + { + "type": "text", + "bbox": [ + 0.367, + 0.229, + 0.754, + 0.239 + ], + "angle": 0, + "content": "D) Sema4B aids in transporting proteins across cell compartments. X" + }, + { + "type": "text", + "bbox": [ + 0.367, + 0.24, + 0.81, + 0.251 + ], + "angle": 0, + "content": "E) Sema4B is directly involved in gene transcription regulation inside the nucleus." 
+ }, + { + "type": "list", + "bbox": [ + 0.365, + 0.192, + 0.834, + 0.251 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.31, + 0.316, + 0.332, + 0.373 + ], + "angle": 270, + "content": "grewes" + }, + { + "type": "text", + "bbox": [ + 0.338, + 0.263, + 0.613, + 0.274 + ], + "angle": 0, + "content": "VLM Answer: Let me solve this step by step:" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.275, + 0.622, + 0.287 + ], + "angle": 0, + "content": "1) Looking at the immunofluorescence images:" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.288, + 0.542, + 0.298 + ], + "angle": 0, + "content": "- Sox9 (green) marks astrocytes" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.299, + 0.659, + 0.31 + ], + "angle": 0, + "content": "- Sema4B (red) shows a particular distribution pattern" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.311, + 0.741, + 0.322 + ], + "angle": 0, + "content": "- The merged image shows both markers plus nuclear staining (blue)" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.323, + 0.56, + 0.333 + ], + "angle": 0, + "content": "2) Analyzing Sema4B's distribution:" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.334, + 0.537, + 0.345 + ], + "angle": 0, + "content": "- It appears as red punctate sta" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.346, + 0.589, + 0.356 + ], + "angle": 0, + "content": "- Located primarily around nuclear areas" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.358, + 0.585, + 0.368 + ], + "angle": 0, + "content": "- Shows dispersed pattern in cytoplasm" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.37, + 0.558, + 0.38 + ], + "angle": 0, + "content": "- Not concentrated in nucleus itself" + }, + { + "type": "list", + "bbox": [ + 0.366, + 0.275, + 0.741, + 0.38 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.365, + 0.382, + 0.565, + 0.393 + ], + "angle": 0, + "content": "3) This distribution pattern suggests:" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.394, + 0.638, + 0.404 + ], + "angle": 0, + "content": "- The protein is involved in cytoplasmic processes" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.405, + 0.528, + 0.416 + ], + "angle": 0, + "content": "- Shows vesicular-like pattern" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.417, + 0.653, + 0.428 + ], + "angle": 0, + "content": "- Located in areas suggesting trafficking or transport" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.429, + 0.505, + 0.439 + ], + "angle": 0, + "content": "4) Looking at the options:" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.44, + 0.571, + 0.451 + ], + "angle": 0, + "content": "- Option 4 mentions protein transport" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.452, + 0.585, + 0.462 + ], + "angle": 0, + "content": "- This matches the vesicular-like pattern" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.463, + 0.811, + 0.475 + ], + "angle": 0, + "content": "- The perinuclear and cytoplasmic distribution is consistent with protein trafficking" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.476, + 0.758, + 0.487 + ], + "angle": 0, + "content": "- Other options don't align as well with the observed distribution pattern" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.488, + 0.442, + 0.499 + ], + "angle": 0, + "content": "5) Reasoning:" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.5, + 0.886, + 0.511 + ], + "angle": 0, + "content": "- The punctate pattern and distribution is characteristic of proteins 
involved in cellular trafficking" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.512, + 0.857, + 0.534 + ], + "angle": 0, + "content": "- The location around nucleus (where protein synthesis occurs) and throughout cytoplasm supports a transport role" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.535, + 0.801, + 0.546 + ], + "angle": 0, + "content": "- This pattern is less consistent with metabolic, signaling, or transcriptional roles" + }, + { + "type": "text", + "bbox": [ + 0.366, + 0.547, + 0.459, + 0.558 + ], + "angle": 0, + "content": "The answer is (4)" + }, + { + "type": "list", + "bbox": [ + 0.365, + 0.275, + 0.886, + 0.558 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.523, + 0.244, + 0.557 + ], + "angle": 0, + "content": "Subject: Neurobiology Bloom's level: 3 (Application) Specimen: Nerve cells" + }, + { + "type": "title", + "bbox": [ + 0.363, + 0.57, + 0.585, + 0.582 + ], + "angle": 0, + "content": "Error type: Misconception (knowledge)" + }, + { + "type": "text", + "bbox": [ + 0.362, + 0.583, + 0.861, + 0.617 + ], + "angle": 0, + "content": "Error reason: Sema4B is a transmembrane protein that plays a role in cellular signaling. TM proteins can be internalized via endocytosis, but this does not mean that their main function is intracellular trafficking." + }, + { + "type": "image_caption", + "bbox": [ + 0.468, + 0.635, + 0.529, + 0.648 + ], + "angle": 0, + "content": "Figure 17" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.925, + 0.508, + 0.936 + ], + "angle": 0, + "content": "24" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.095, + 0.091, + 0.34, + 0.356 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.364, + 0.233, + 0.375 + ], + "angle": 0, + "content": "Subject: Neurobiology" + }, + { + "type": "image_caption", + "bbox": [ + 0.104, + 0.376, + 0.251, + 0.387 + ], + "angle": 0, + "content": "Bloom's level: 4 (Analysis)" + }, + { + "type": "image_caption", + "bbox": [ + 0.104, + 0.389, + 0.278, + 0.409 + ], + "angle": 0, + "content": "Specimen: Fibroblasts from an Alzheimer's patient" + }, + { + "type": "text", + "bbox": [ + 0.37, + 0.095, + 0.854, + 0.131 + ], + "angle": 0, + "content": "Q: A cryo-Electron Tomography image of fibroblasts from a 70-year-old patient with Alzheimer's disease shows three mitochondria gathered together. What is the most likely reason for this mitochondrial clustering?"
+ }, + { + "type": "text", + "bbox": [ + 0.372, + 0.131, + 0.581, + 0.142 + ], + "angle": 0, + "content": "A) Increased mitochondrial biogenesis" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.143, + 0.75, + 0.152 + ], + "angle": 0, + "content": "B) Enhanced mitochondrial fusion leading to interconnected networks" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.154, + 0.624, + 0.165 + ], + "angle": 0, + "content": "C) Mitochondrial fission due to cellular stress" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.166, + 0.78, + 0.177 + ], + "angle": 0, + "content": "D) Impaired mitophagy causing accumulation of damaged mitochondria \\(\\times\\)" + }, + { + "type": "list", + "bbox": [ + 0.372, + 0.131, + 0.78, + 0.177 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.37, + 0.189, + 0.606, + 0.201 + ], + "angle": 0, + "content": "VLM Answer: Let's solve this step by step:" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.202, + 0.558, + 0.213 + ], + "angle": 0, + "content": "1) First, let's consider the context:" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.214, + 0.697, + 0.224 + ], + "angle": 0, + "content": "- This is from a 70-year-old patient with Alzheimer's disease" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.226, + 0.797, + 0.236 + ], + "angle": 0, + "content": "- Alzheimer's is known to affect cellular function, including mitochondrial health" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.237, + 0.666, + 0.248 + ], + "angle": 0, + "content": "- We're observing clustered mitochondria in the image" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.249, + 0.51, + 0.26 + ], + "angle": 0, + "content": "2) Analyzing each option:" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.261, + 0.852, + 0.272 + ], + "angle": 0, + "content": "- Option 1 (Increased biogenesis): Would show new, healthy mitochondria, not clustering" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.273, + 0.799, + 0.283 + ], + "angle": 0, + "content": "- Option 2 (Enhanced fusion): Would show elongated, connected mitochondria" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.284, + 0.807, + 0.295 + ], + "angle": 0, + "content": "- Option 3 (Mitochondrial fission): Would show smaller, fragmented mitochondria" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.295, + 0.885, + 0.306 + ], + "angle": 0, + "content": "- Option 4 (Impaired mitophagy): Would show accumulated, potentially damaged mitochondria" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.307, + 0.504, + 0.318 + ], + "angle": 0, + "content": "3) Looking at the image:" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.319, + 0.644, + 0.329 + ], + "angle": 0, + "content": "- We see multiple mitochondria clustered together" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.33, + 0.642, + 0.342 + ], + "angle": 0, + "content": "- They appear to be intact but gathered in groups" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.343, + 0.712, + 0.353 + ], + "angle": 0, + "content": "- This suggests they're not being properly cleared from the cell" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.354, + 0.566, + 0.365 + ], + "angle": 0, + "content": "4) Considering Alzheimer's disease:" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.366, + 0.647, + 0.376 + ], + "angle": 0, + "content": "- It's known to impair cellular cleanup mechanisms" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.377, + 0.701, + 0.388 + ], + "angle": 0, + "content": "- Mitophagy is a key quality 
control process for mitochondria" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.389, + 0.804, + 0.4 + ], + "angle": 0, + "content": "- When impaired, damaged mitochondria accumulate instead of being removed" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.401, + 0.451, + 0.411 + ], + "angle": 0, + "content": "5) Conclusion:" + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.412, + 0.87, + 0.445 + ], + "angle": 0, + "content": "The clustering is most likely due to impaired mitophagy, as this would explain why multiple mitochondria are gathering instead of being properly cleared from the cell, which is a known issue in Alzheimer's disease." + }, + { + "type": "text", + "bbox": [ + 0.372, + 0.447, + 0.465, + 0.458 + ], + "angle": 0, + "content": "The answer is (4)" + }, + { + "type": "title", + "bbox": [ + 0.37, + 0.477, + 0.589, + 0.489 + ], + "angle": 0, + "content": "Error type: Misconception (knowledge)" + }, + { + "type": "text", + "bbox": [ + 0.369, + 0.489, + 0.874, + 0.512 + ], + "angle": 0, + "content": "Error reason: Impaired mitophagy may result in larger overall mitochondria, not an increase in small mitochondrial aggregates. Fission is a better answer in this case." + }, + { + "type": "image_caption", + "bbox": [ + 0.468, + 0.534, + 0.53, + 0.548 + ], + "angle": 0, + "content": "Figure 18" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.925, + 0.508, + 0.936 + ], + "angle": 0, + "content": "25" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.095, + 0.092, + 0.398, + 0.239 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.095, + 0.244, + 0.398, + 0.383 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.102, + 0.393, + 0.22, + 0.404 + ], + "angle": 0, + "content": "Subject: Neurobiology" + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.405, + 0.253, + 0.415 + ], + "angle": 0, + "content": "Bloom's level: 3 (Application)" + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.416, + 0.329, + 0.426 + ], + "angle": 0, + "content": "Specimen: Resection specimen for epilepsy" + }, + { + "type": "text", + "bbox": [ + 0.416, + 0.098, + 0.888, + 0.143 + ], + "angle": 0, + "content": "Q: A microscopic analysis of tissue from an individual suffering from recurrent seizures involves using hematoxylin & eosin stain alongside a vimentin stain. Cells with stronger vimentin expression are significantly larger than surrounding entities. Which identity is most probable for these prominent vimentin-expressing cells?" 
+ }, + { + "type": "text", + "bbox": [ + 0.418, + 0.152, + 0.694, + 0.163 + ], + "angle": 0, + "content": "A) Dystrophic neurons typically found around glial scars" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.163, + 0.679, + 0.174 + ], + "angle": 0, + "content": "B) Enlarged reactive astrocytes near injury sites X" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.174, + 0.729, + 0.184 + ], + "angle": 0, + "content": "C) Aberrant large cells characteristic of cortical malformation" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.185, + 0.666, + 0.195 + ], + "angle": 0, + "content": "D) Macrophages adapted to engulf cellular debris" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.196, + 0.674, + 0.206 + ], + "angle": 0, + "content": "E) Fibroblast proliferation in response to brain injury" + }, + { + "type": "list", + "bbox": [ + 0.418, + 0.152, + 0.729, + 0.206 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.219, + 0.642, + 0.23 + ], + "angle": 0, + "content": "VLM Answer: Let me solve this step by step:" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.23, + 0.684, + 0.241 + ], + "angle": 0, + "content": "1) First, let's analyze what we're seeing in the images:" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.241, + 0.845, + 0.251 + ], + "angle": 0, + "content": "- The H&E stain (top image) shows tissue with scattered cells and some enlarged cells" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.251, + 0.844, + 0.262 + ], + "angle": 0, + "content": "- The vimentin stain (bottom image) shows strong brown staining of certain larger cells" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.262, + 0.602, + 0.272 + ], + "angle": 0, + "content": "2) Key information from the question:" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.273, + 0.575, + 0.282 + ], + "angle": 0, + "content": "- Patient has recurrent seizures" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.283, + 0.646, + 0.293 + ], + "angle": 0, + "content": "- Vimentin-positive cells are significantly larger" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.294, + 0.664, + 0.304 + ], + "angle": 0, + "content": "- Looking for most probable identity of these cells" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.305, + 0.584, + 0.314 + ], + "angle": 0, + "content": "3) Important facts about vimentin:" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.315, + 0.639, + 0.325 + ], + "angle": 0, + "content": "- Vimentin is an intermediate filament protein" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.326, + 0.631, + 0.336 + ], + "angle": 0, + "content": "- Strongly expressed in reactive astrocytes" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.336, + 0.612, + 0.346 + ], + "angle": 0, + "content": "- Also expressed in mesenchymal cells" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.347, + 0.558, + 0.357 + ], + "angle": 0, + "content": "4) In the context of seizures:" + }, + { + "type": "list", + "bbox": [ + 0.418, + 0.23, + 0.845, + 0.357 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.358, + 0.581, + 0.368 + ], + "angle": 0, + "content": "- Seizures can cause brain injury" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.369, + 0.581, + 0.379 + ], + "angle": 0, + "content": "- Brain injury triggers astrogliosis" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.38, + 0.735, + 0.39 + ], + "angle": 0, + "content": "- Astrocytes become reactive and enlarged in response to injury" + }, + { + 
"type": "text", + "bbox": [ + 0.418, + 0.391, + 0.546, + 0.4 + ], + "angle": 0, + "content": "5) Looking at the options:" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.401, + 0.676, + 0.411 + ], + "angle": 0, + "content": "- Option 1: Neurons don't typically express vimentin" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.412, + 0.8, + 0.421 + ], + "angle": 0, + "content": "- Option 2: Reactive astrocytes do express vimentin and enlarge during injury" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.422, + 0.795, + 0.431 + ], + "angle": 0, + "content": "- Option 3: Cortical malformation cells wouldn't necessarily express vimentin" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.432, + 0.644, + 0.442 + ], + "angle": 0, + "content": "- Option 4: Macrophages are typically smaller" + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.443, + 0.68, + 0.452 + ], + "angle": 0, + "content": "- Option 5: Fibroblasts would be rare in this context" + }, + { + "type": "list", + "bbox": [ + 0.418, + 0.358, + 0.8, + 0.452 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.416, + 0.463, + 0.884, + 0.485 + ], + "angle": 0, + "content": "6) The morphology and staining pattern of the large cells, combined with the clinical context of seizures, most strongly suggests reactive astrocytes." + }, + { + "type": "text", + "bbox": [ + 0.418, + 0.486, + 0.52, + 0.496 + ], + "angle": 0, + "content": "The answer is (2) X" + }, + { + "type": "title", + "bbox": [ + 0.416, + 0.51, + 0.616, + 0.522 + ], + "angle": 0, + "content": "Error type: Misconception (knowledge)" + }, + { + "type": "text", + "bbox": [ + 0.416, + 0.522, + 0.868, + 0.543 + ], + "angle": 0, + "content": "Error reason: The AI is able to interpret the question and image, but lacks the knowledge to recognize the link between recurrent seizures, cortical dysplasia, and large ballooned cells." + }, + { + "type": "image_caption", + "bbox": [ + 0.468, + 0.564, + 0.53, + 0.578 + ], + "angle": 0, + "content": "Figure 19" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.925, + 0.509, + 0.937 + ], + "angle": 0, + "content": "26" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.101, + 0.089, + 0.398, + 0.268 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.102, + 0.275, + 0.221, + 0.286 + ], + "angle": 0, + "content": "Subject: Neurobiology" + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.286, + 0.238, + 0.297 + ], + "angle": 0, + "content": "Bloom's level: 4 (Analysis)" + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.297, + 0.258, + 0.308 + ], + "angle": 0, + "content": "Specimen: Brain parenchyma" + }, + { + "type": "text", + "bbox": [ + 0.411, + 0.094, + 0.892, + 0.128 + ], + "angle": 0, + "content": "Q: A certain specialized stain reveals unusual cell movement crossing boundaries typical of neuronal migration in a tissue resection from a case of structural brain anomaly. Which condition is primarily associated with this type of cellular pattern?" 
+ }, + { + "type": "title", + "bbox": [ + 0.412, + 0.137, + 0.559, + 0.148 + ], + "angle": 0, + "content": "A) Focal cortical dysplasia" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.149, + 0.559, + 0.159 + ], + "angle": 0, + "content": "B) Congenital Zika syndrome" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.159, + 0.508, + 0.17 + ], + "angle": 0, + "content": "C) Schizencephaly" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.17, + 0.537, + 0.181 + ], + "angle": 0, + "content": "D) Type II lissencephaly" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.181, + 0.563, + 0.191 + ], + "angle": 0, + "content": "E) Walker-Warburg syndrome" + }, + { + "type": "list", + "bbox": [ + 0.413, + 0.149, + 0.563, + 0.191 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.412, + 0.201, + 0.64, + 0.212 + ], + "angle": 0, + "content": "VLM Answer: Let me solve this step by step:" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.213, + 0.892, + 0.233 + ], + "angle": 0, + "content": "1) The image shows a histological section with specialized staining that reveals abnormal cellular organization and migration patterns." + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.234, + 0.843, + 0.254 + ], + "angle": 0, + "content": "2) The question asks about unusual cell movement crossing typical neuronal migration boundaries." + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.255, + 0.543, + 0.266 + ], + "angle": 0, + "content": "3) Looking at the options:" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.266, + 0.78, + 0.277 + ], + "angle": 0, + "content": "- Focal cortical dysplasia involves disorganized neurons in localized areas" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.277, + 0.747, + 0.287 + ], + "angle": 0, + "content": "- Congenital Zika syndrome primarily affects neural progenitor cells" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.287, + 0.714, + 0.298 + ], + "angle": 0, + "content": "- Schizencephaly involves clefts in the cerebral hemispheres" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.298, + 0.786, + 0.309 + ], + "angle": 0, + "content": "- Type II lissencephaly shows abnormal neuronal migration and positioning" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.309, + 0.859, + 0.329 + ], + "angle": 0, + "content": "- Walker-Warburg syndrome is a severe form of congenital muscular dystrophy with brain malformations" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.33, + 0.692, + 0.34 + ], + "angle": 0, + "content": "4) Key characteristics of focal cortical dysplasia include:" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.341, + 0.568, + 0.351 + ], + "angle": 0, + "content": "- Abnormal neuronal migration" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.352, + 0.569, + 0.362 + ], + "angle": 0, + "content": "- Disorganized cortical layering" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.363, + 0.663, + 0.373 + ], + "angle": 0, + "content": "- Abnormal cell types crossing normal boundaries" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.373, + 0.613, + 0.383 + ], + "angle": 0, + "content": "- Disrupted cerebral cortex architecture" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.384, + 0.863, + 0.395 + ], + "angle": 0, + "content": "5) The histological image shows features consistent with focal cortical dysplasia, including:" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.395, + 0.574, + 0.405 + ], + "angle": 0, + "content": "- Disrupted cellular organization" + }, + { + "type": "text", 
+ "bbox": [ + 0.413, + 0.405, + 0.566, + 0.416 + ], + "angle": 0, + "content": "- Abnormal migration patterns" + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.416, + 0.587, + 0.427 + ], + "angle": 0, + "content": "- Cells crossing typical boundaries" + }, + { + "type": "list", + "bbox": [ + 0.413, + 0.213, + 0.892, + 0.427 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.413, + 0.427, + 0.501, + 0.438 + ], + "angle": 0, + "content": "The answer is (1)" + }, + { + "type": "title", + "bbox": [ + 0.412, + 0.448, + 0.615, + 0.459 + ], + "angle": 0, + "content": "Error type: Misconception (knowledge)" + }, + { + "type": "text", + "bbox": [ + 0.411, + 0.46, + 0.892, + 0.492 + ], + "angle": 0, + "content": "Error reason: Misconception about the scope and extent of neuronal migration patterns in focal cortical dysplasia (FCD) versus Type II lissencephaly. FCD will have abnormal cortical lamination, but the neurons are still within the expected gray/white matter of normal neuronal migration." + }, + { + "type": "text", + "bbox": [ + 0.412, + 0.492, + 0.854, + 0.512 + ], + "angle": 0, + "content": "They do not extend beyond and cross established neuronal boundaries such as the glial limitans." + }, + { + "type": "image_caption", + "bbox": [ + 0.468, + 0.53, + 0.53, + 0.544 + ], + "angle": 0, + "content": "Figure 20" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.925, + 0.508, + 0.936 + ], + "angle": 0, + "content": "27" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.092, + 0.092, + 0.475, + 0.107 + ], + "angle": 0, + "content": "F.6.3 Overgeneralization or oversimplification errors" + }, + { + "type": "image", + "bbox": [ + 0.094, + 0.135, + 0.375, + 0.296 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.094, + 0.3, + 0.375, + 0.463 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.102, + 0.472, + 0.21, + 0.484 + ], + "angle": 0, + "content": "Subject: Pathology" + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.484, + 0.288, + 0.495 + ], + "angle": 0, + "content": "Bloom's level: 2 (Comprehension)" + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.496, + 0.275, + 0.506 + ], + "angle": 0, + "content": "Specimen: schwannoma tissue" + }, + { + "type": "text", + "bbox": [ + 0.388, + 0.133, + 0.892, + 0.168 + ], + "angle": 0, + "content": "Q: In a histological examination, an H&E-stained image shows Verocay bodies and cystic degeneration with pseudoepithelial metaplasia, indicative of schwannoma. How does the risk of malignant transformation in schwannoma compare to other peripheral nerve sheath tumors?" + }, + { + "type": "title", + "bbox": [ + 0.388, + 0.178, + 0.848, + 0.2 + ], + "angle": 0, + "content": "A) Schwannomas exhibit a lower risk of malignant transformation compared to other peripheral nerve sheath tumors." + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.202, + 0.828, + 0.212 + ], + "angle": 0, + "content": "B) Schwannomas and neurofibromas have similar risks of malignant transformation." + }, + { + "type": "text", + "bbox": [ + 0.39, + 0.213, + 0.859, + 0.234 + ], + "angle": 0, + "content": "C) The risk of malignant transformation in schwannomas is dependent on specific genetic factors." + }, + { + "type": "text", + "bbox": [ + 0.39, + 0.235, + 0.865, + 0.246 + ], + "angle": 0, + "content": "D) Schwannomas have an increased risk of malignancy in the presence of multiple tumors." 
+ }, + { + "type": "text", + "bbox": [ + 0.39, + 0.247, + 0.889, + 0.258 + ], + "angle": 0, + "content": "E) Schwannomas rarely undergo malignant transformation, regardless of tumor location or size." + }, + { + "type": "list", + "bbox": [ + 0.389, + 0.202, + 0.889, + 0.258 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.387, + 0.269, + 0.658, + 0.28 + ], + "angle": 0, + "content": "VLM Answer: Let me think about this step by step:" + }, + { + "type": "text", + "bbox": [ + 0.389, + 0.281, + 0.754, + 0.292 + ], + "angle": 0, + "content": "1) The images show typical histological features of schwannoma with:" + }, + { + "type": "text", + "bbox": [ + 0.39, + 0.292, + 0.576, + 0.303 + ], + "angle": 0, + "content": "- Verocay bodies (palisading nuclei)" + }, + { + "type": "text", + "bbox": [ + 0.39, + 0.304, + 0.503, + 0.314 + ], + "angle": 0, + "content": "- Cystic degeneration" + }, + { + "type": "text", + "bbox": [ + 0.39, + 0.315, + 0.545, + 0.326 + ], + "angle": 0, + "content": "- Pseudoepithelial metaplasia" + }, + { + "type": "text", + "bbox": [ + 0.39, + 0.327, + 0.686, + 0.337 + ], + "angle": 0, + "content": "2) Regarding malignant transformation in schwannomas:" + }, + { + "type": "text", + "bbox": [ + 0.39, + 0.338, + 0.621, + 0.348 + ], + "angle": 0, + "content": "- Schwannomas are typically benign tumors" + }, + { + "type": "text", + "bbox": [ + 0.39, + 0.349, + 0.636, + 0.359 + ], + "angle": 0, + "content": "- They rarely undergo malignant transformation" + }, + { + "type": "text", + "bbox": [ + 0.39, + 0.36, + 0.635, + 0.37 + ], + "angle": 0, + "content": "- This is true regardless of their size or location" + }, + { + "type": "text", + "bbox": [ + 0.39, + 0.371, + 0.811, + 0.382 + ], + "angle": 0, + "content": "- Even multiple schwannomas (as in schwannomatosis) rarely become malignant" + }, + { + "type": "text", + "bbox": [ + 0.39, + 0.383, + 0.676, + 0.393 + ], + "angle": 0, + "content": "3) Comparing to other peripheral nerve sheath tumors:" + }, + { + "type": "text", + "bbox": [ + 0.39, + 0.394, + 0.861, + 0.404 + ], + "angle": 0, + "content": "- Neurofibromas, especially plexiform types, have a higher risk of malignant transformation" + }, + { + "type": "text", + "bbox": [ + 0.39, + 0.405, + 0.763, + 0.416 + ], + "angle": 0, + "content": "- This is particularly true in the setting of NF1 (neurofibromatosis type 1)" + }, + { + "type": "text", + "bbox": [ + 0.39, + 0.417, + 0.892, + 0.438 + ], + "angle": 0, + "content": "- Malignant peripheral nerve sheath tumors (MPNSTs) more commonly arise from neurofibromas than schwannomas" + }, + { + "type": "text", + "bbox": [ + 0.39, + 0.439, + 0.883, + 0.46 + ], + "angle": 0, + "content": "4) The statement about genetic factors or increased risk with multiple tumors isn't accurate for schwannomas, though it may be true for other nerve sheath tumors" + }, + { + "type": "text", + "bbox": [ + 0.39, + 0.461, + 0.869, + 0.483 + ], + "angle": 0, + "content": "5) Looking at the options, the key is that schwannomas maintain their benign nature in most cases, regardless of circumstances." 
+ }, + { + "type": "text", + "bbox": [ + 0.39, + 0.484, + 0.479, + 0.495 + ], + "angle": 0, + "content": "The answer is (5)" + }, + { + "type": "list", + "bbox": [ + 0.389, + 0.281, + 0.892, + 0.495 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.388, + 0.511, + 0.553, + 0.523 + ], + "angle": 0, + "content": "Error type: Overgeneralization" + }, + { + "type": "text", + "bbox": [ + 0.387, + 0.523, + 0.886, + 0.568 + ], + "angle": 0, + "content": "Error reason: The selected choice does not address the question asked \"compared to other peripheral nerve sheath tumors?\" . The \"best answer\" for the question should compare to other tumors as specified in the question. The AI has \"oversimplified\" the question to ask a simpler question than the one asked." + }, + { + "type": "image_caption", + "bbox": [ + 0.468, + 0.585, + 0.528, + 0.599 + ], + "angle": 0, + "content": "Figure 21" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.925, + 0.508, + 0.936 + ], + "angle": 0, + "content": "28" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.104, + 0.089, + 0.31, + 0.484 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.487, + 0.272, + 0.511 + ], + "angle": 0, + "content": "Subject: Cell and molecular biology" + }, + { + "type": "text", + "bbox": [ + 0.104, + 0.513, + 0.262, + 0.525 + ], + "angle": 0, + "content": "Bloom's level: 4 (Analysis)" + }, + { + "type": "text", + "bbox": [ + 0.104, + 0.526, + 0.256, + 0.537 + ], + "angle": 0, + "content": "Specimen: HEK293 cells" + }, + { + "type": "text", + "bbox": [ + 0.104, + 0.537, + 0.231, + 0.55 + ], + "angle": 0, + "content": "(wildtype and PEX19" + }, + { + "type": "text", + "bbox": [ + 0.104, + 0.551, + 0.182, + 0.562 + ], + "angle": 0, + "content": "knockdown)" + }, + { + "type": "text", + "bbox": [ + 0.331, + 0.097, + 0.89, + 0.173 + ], + "angle": 0, + "content": "Q: Confocal microscopy images depict live HEK293 cells subjected to CRISPR/Cas9-mediated knockdown of a specific gene, alongside wildtype cells. The cells are stained with BODIPY-C12 (cyan) to highlight lipid droplets and Hoechst to stain the nuclei. In wildtype cells, distinct organellar structures are visible, whereas these structures are absent in the gene knockdown cells. What is the most likely cellular outcome resulting from the knockdown of this gene in HEK293 cells?" 
+ }, + { + "type": "text", + "bbox": [ + 0.332, + 0.185, + 0.549, + 0.197 + ], + "angle": 0, + "content": "A) Disruption of mitochondrial fission" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.198, + 0.548, + 0.21 + ], + "angle": 0, + "content": "B) Impaired peroxisome biogenesis" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.211, + 0.571, + 0.223 + ], + "angle": 0, + "content": "C) Altered lipid metabolism pathways X" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.223, + 0.59, + 0.235 + ], + "angle": 0, + "content": "D) Reduced endoplasmic reticulum function" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.236, + 0.582, + 0.248 + ], + "angle": 0, + "content": "E) Decreased nuclear transport processes" + }, + { + "type": "list", + "bbox": [ + 0.332, + 0.185, + 0.59, + 0.248 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.331, + 0.266, + 0.596, + 0.278 + ], + "angle": 0, + "content": "VLM Answer: Let me solve this step by step:" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.279, + 0.646, + 0.291 + ], + "angle": 0, + "content": "1) First, let's analyze what we're seeing in the images:" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.292, + 0.513, + 0.303 + ], + "angle": 0, + "content": "- Live HEK293 cells are shown" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.304, + 0.626, + 0.316 + ], + "angle": 0, + "content": "- BODIPY-C12 staining (cyan) shows lipid droplets" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.317, + 0.59, + 0.328 + ], + "angle": 0, + "content": "- Hoechst staining shows nuclei (grey/white)" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.329, + 0.594, + 0.341 + ], + "angle": 0, + "content": "- The images show a time series (left to right)" + }, + { + "type": "list", + "bbox": [ + 0.332, + 0.279, + 0.646, + 0.341 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.342, + 0.453, + 0.353 + ], + "angle": 0, + "content": "2) Key observations:" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.354, + 0.761, + 0.366 + ], + "angle": 0, + "content": "- The cyan staining (BODIPY-C12) indicates the presence of lipid droplets" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.367, + 0.878, + 0.378 + ], + "angle": 0, + "content": "- There's a difference between wildtype and knockdown cells in terms of organellar structures" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.379, + 0.558, + 0.391 + ], + "angle": 0, + "content": "- The staining specifically targets lipids" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.392, + 0.475, + 0.403 + ], + "angle": 0, + "content": "3) Consider the options:" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.404, + 0.702, + 0.416 + ], + "angle": 0, + "content": "- Mitochondrial fission would primarily affect energy metabolism" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.417, + 0.617, + 0.428 + ], + "angle": 0, + "content": "- Peroxisome biogenesis affects lipid breakdown" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.429, + 0.675, + 0.441 + ], + "angle": 0, + "content": "- Lipid metabolism directly relates to lipid droplet formation" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.442, + 0.597, + 0.453 + ], + "angle": 0, + "content": "- ER function is broader than just lipid effects" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.454, + 0.654, + 0.466 + ], + "angle": 0, + "content": "- Nuclear transport isn't directly related to lipid droplets" + }, + { + "type": "text", + "bbox": [ + 0.332, + 
0.467, + 0.415, + 0.479 + ], + "angle": 0, + "content": "4) Reasoning:" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.48, + 0.594, + 0.491 + ], + "angle": 0, + "content": "- Since BODIPY-C12 specifically stains lipids" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.492, + 0.661, + 0.503 + ], + "angle": 0, + "content": "- The visible differences are in lipid-containing structures" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.504, + 0.673, + 0.516 + ], + "angle": 0, + "content": "- The most direct connection would be to lipid metabolism" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.517, + 0.849, + 0.528 + ], + "angle": 0, + "content": "- Changes in lipid metabolism would directly affect lipid droplet formation and distribution" + }, + { + "type": "text", + "bbox": [ + 0.332, + 0.529, + 0.433, + 0.541 + ], + "angle": 0, + "content": "The answer is (3)" + }, + { + "type": "list", + "bbox": [ + 0.332, + 0.342, + 0.878, + 0.541 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.331, + 0.559, + 0.513, + 0.571 + ], + "angle": 0, + "content": "Error type: Overgeneralization" + }, + { + "type": "text", + "bbox": [ + 0.33, + 0.572, + 0.888, + 0.61 + ], + "angle": 0, + "content": "Error reason: Assumes that the changes in lipid droplets are directly linked to altered lipid metabolism pathways, without fully integrating the implications of peroxisome biogenesis in lipid metabolism and organelle dynamics." + }, + { + "type": "image_caption", + "bbox": [ + 0.468, + 0.628, + 0.53, + 0.642 + ], + "angle": 0, + "content": "Figure 22" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.925, + 0.509, + 0.937 + ], + "angle": 0, + "content": "29" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.092, + 0.092, + 0.452, + 0.106 + ], + "angle": 0, + "content": "F.6.4 Hallucination or errors due to added details" + }, + { + "type": "image", + "bbox": [ + 0.101, + 0.126, + 0.296, + 0.314 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.1, + 0.324, + 0.291, + 0.513 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.524, + 0.276, + 0.551 + ], + "angle": 0, + "content": "Subject: Cell and molecular biology" + }, + { + "type": "image_caption", + "bbox": [ + 0.104, + 0.551, + 0.27, + 0.564 + ], + "angle": 0, + "content": "Bloom's level: 4 (Analysis)" + }, + { + "type": "image_caption", + "bbox": [ + 0.104, + 0.564, + 0.281, + 0.587 + ], + "angle": 0, + "content": "Specimen: cellulose fibers in vitro" + }, + { + "type": "text", + "bbox": [ + 0.318, + 0.132, + 0.89, + 0.171 + ], + "angle": 0, + "content": "Q: A CryoET tomographic slice image and its corresponding 3D annotation of cellulose fibers in vitro show differing amounts of cellulose fibers. Beyond the 2D nature of the slice versus the 3D nature of the annotation, which mechanism most likely accounts for the observed differences?" + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.183, + 0.758, + 0.196 + ], + "angle": 0, + "content": "A) Errors in the alignment during image reconstruction leading to artifacts." + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.196, + 0.821, + 0.21 + ], + "angle": 0, + "content": "B) Overlapping fibers being misinterpreted as single fibers due to resolution limits. \\(\\times\\)" + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.21, + 0.692, + 0.222 + ], + "angle": 0, + "content": "C) Variability in manual annotation leading to subjective bias." 
+ }, + { + "type": "text", + "bbox": [ + 0.319, + 0.222, + 0.7, + 0.235 + ], + "angle": 0, + "content": "D) Sample dehydration causing shrinkage of fibers in the image." + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.235, + 0.725, + 0.248 + ], + "angle": 0, + "content": "E) Background noise in the image causing misidentification of fibers." + }, + { + "type": "list", + "bbox": [ + 0.319, + 0.196, + 0.821, + 0.248 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.319, + 0.263, + 0.59, + 0.276 + ], + "angle": 0, + "content": "VLM Answer: Let me solve this step by step:" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.277, + 0.558, + 0.289 + ], + "angle": 0, + "content": "1) First, let's analyze what we're seeing:" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.289, + 0.773, + 0.302 + ], + "angle": 0, + "content": "- Left image: CryoET tomographic slice showing cellulose fibers in grayscale" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.302, + 0.611, + 0.314 + ], + "angle": 0, + "content": "- Right image: 3D annotation of the fibers in blue" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.315, + 0.768, + 0.327 + ], + "angle": 0, + "content": "- There appears to be a difference in how many fibers are visible/annotated" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.328, + 0.594, + 0.34 + ], + "angle": 0, + "content": "2) Consider the limitations of CryoET imaging:" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.341, + 0.451, + 0.352 + ], + "angle": 0, + "content": "- Resolution is limited" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.353, + 0.619, + 0.365 + ], + "angle": 0, + "content": "- Multiple fibers can appear to overlap in 2D views" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.366, + 0.622, + 0.378 + ], + "angle": 0, + "content": "- Depth information can be challenging to interpret" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.379, + 0.466, + 0.391 + ], + "angle": 0, + "content": "3) Evaluate each option:" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.391, + 0.805, + 0.404 + ], + "angle": 0, + "content": "- (1) Alignment errors typically cause obvious artifacts, which aren't apparent here" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.404, + 0.822, + 0.416 + ], + "angle": 0, + "content": "- (2) Resolution limits could cause overlapping fibers to appear as one in the 2D slice" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.417, + 0.753, + 0.429 + ], + "angle": 0, + "content": "- (3) Manual annotation variability wouldn't explain systematic differences" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.429, + 0.659, + 0.442 + ], + "angle": 0, + "content": "- (4) Sample dehydration would affect both views equally" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.442, + 0.795, + 0.455 + ], + "angle": 0, + "content": "- (5) Background noise might affect fiber identification but isn't the primary issue" + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.455, + 0.872, + 0.479 + ], + "angle": 0, + "content": "4) The key observation is that the 2D slice shows fewer distinct fibers than the 3D annotation, likely because:" + }, + { + "type": "text", + "bbox": [ + 0.32, + 0.481, + 0.77, + 0.493 + ], + "angle": 0, + "content": "- Multiple fibers that overlap in the 2D view can be properly separated in 3D" + }, + { + "type": "text", + "bbox": [ + 0.319, + 0.494, + 0.879, + 0.517 + ], + "angle": 0, + "content": "- The resolution limits of the microscope make it difficult to distinguish overlapping fibers in 
the 2D view" + }, + { + "type": "list", + "bbox": [ + 0.319, + 0.277, + 0.879, + 0.517 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.321, + 0.52, + 0.424, + 0.532 + ], + "angle": 0, + "content": "The answer is (2)" + }, + { + "type": "title", + "bbox": [ + 0.319, + 0.545, + 0.471, + 0.558 + ], + "angle": 0, + "content": "Error type: Hallucination" + }, + { + "type": "text", + "bbox": [ + 0.318, + 0.558, + 0.885, + 0.61 + ], + "angle": 0, + "content": "Error reason: The question does not state there are systematic differences. This detail is added during the AI CoT reasoning. A better answer would rule out easy or obvious causes of differences (random or systematic). Although fibers can overlap in Cryo-EM, it is generally considered to have a reasonably high resolution." + }, + { + "type": "image_caption", + "bbox": [ + 0.468, + 0.629, + 0.529, + 0.642 + ], + "angle": 0, + "content": "Figure 23" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.925, + 0.509, + 0.937 + ], + "angle": 0, + "content": "30" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.091, + 0.091, + 0.317, + 0.108 + ], + "angle": 0, + "content": "F.7. Reasoning trace analysis" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.114, + 0.358, + 0.13 + ], + "angle": 0, + "content": "F.7.1 Human step-by-step reasoning" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.138, + 0.905, + 0.168 + ], + "angle": 0, + "content": "To better understand how subject matter experts approach questions in MicroVQA, we have a cell biologist and microscopist manually annotate their reasoning for each of the three main use cases." + }, + { + "type": "text", + "bbox": [ + 0.11, + 0.169, + 0.507, + 0.184 + ], + "angle": 0, + "content": "Reasoning example for task 1 'expert visual understanding'." + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.205, + 0.18, + 0.213 + ], + "angle": 0, + "content": "\\*Question\\*" + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.214, + 0.57, + 0.225 + ], + "angle": 0, + "content": "What subcellular structure could the bright green spots represent?*" + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.235, + 0.166, + 0.243 + ], + "angle": 0, + "content": "\\*Answer" + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.244, + 0.542, + 0.255 + ], + "angle": 0, + "content": "This staining pattern most likely refers to cytoplasmic bodies." + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.265, + 0.194, + 0.274 + ], + "angle": 0, + "content": "\\*Reasoning:\\*" + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.275, + 0.397, + 0.285 + ], + "angle": 0, + "content": "Find green spots (no reasoning required)" + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.285, + 0.311, + 0.294 + ], + "angle": 0, + "content": "- Rule out staining artefacts" + }, + { + "type": "text", + "bbox": [ + 0.138, + 0.295, + 0.57, + 0.305 + ], + "angle": 0, + "content": "- Look at marker channels to assess whether cells appear intact" + }, + { + "type": "text", + "bbox": [ + 0.138, + 0.306, + 0.652, + 0.314 + ], + "angle": 0, + "content": "- Compare the green spots to the marker channels for nuclei or microtubules" + }, + { + "type": "text", + "bbox": [ + 0.138, + 0.315, + 0.495, + 0.324 + ], + "angle": 0, + "content": "- Check that green spots are always inside the cells" + }, + { + "type": "text", + "bbox": [ + 0.165, + 0.325, + 0.542, + 0.335 + ], + "angle": 0, + "content": "- Yes, inside cells, so these are no staining artefact." 
+ }, + { + "type": "text", + "bbox": [ + 0.109, + 0.335, + 0.489, + 0.345 + ], + "angle": 0, + "content": "- Look up which spotty staining patterns in cells exist" + }, + { + "type": "text", + "bbox": [ + 0.148, + 0.346, + 0.25, + 0.355 + ], + "angle": 0, + "content": "Nuclear bodies" + }, + { + "type": "text", + "bbox": [ + 0.138, + 0.356, + 0.262, + 0.365 + ], + "angle": 0, + "content": "- Nuclear speckles" + }, + { + "type": "text", + "bbox": [ + 0.138, + 0.366, + 0.208, + 0.374 + ], + "angle": 0, + "content": "- Nucleoli" + }, + { + "type": "text", + "bbox": [ + 0.138, + 0.376, + 0.276, + 0.385 + ], + "angle": 0, + "content": "- Cytoplasmic bodies" + }, + { + "type": "text", + "bbox": [ + 0.138, + 0.386, + 0.222, + 0.394 + ], + "angle": 0, + "content": "- Centrosome" + }, + { + "type": "text", + "bbox": [ + 0.138, + 0.395, + 0.208, + 0.404 + ], + "angle": 0, + "content": "- Vesicles" + }, + { + "type": "text", + "bbox": [ + 0.138, + 0.405, + 0.304, + 0.414 + ], + "angle": 0, + "content": "- Intermediate filaments" + }, + { + "type": "text", + "bbox": [ + 0.138, + 0.415, + 0.338, + 0.426 + ], + "angle": 0, + "content": "- Spotty microtubule staining" + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.426, + 0.387, + 0.435 + ], + "angle": 0, + "content": "- Explore which patterns can be excluded" + }, + { + "type": "text", + "bbox": [ + 0.138, + 0.436, + 0.769, + 0.446 + ], + "angle": 0, + "content": "- Check whether green spots are inside or outside of nuclei to know what patterns to explore" + }, + { + "type": "text", + "bbox": [ + 0.165, + 0.446, + 0.845, + 0.455 + ], + "angle": 0, + "content": "- Outside of nuclei, thus nuclear subcellular localizations can be excluded, and cytoplasmic spotty" + }, + { + "type": "text", + "bbox": [ + 0.196, + 0.456, + 0.392, + 0.465 + ], + "angle": 0, + "content": "patterns need to be explored" + }, + { + "type": "text", + "bbox": [ + 0.138, + 0.466, + 0.509, + 0.475 + ], + "angle": 0, + "content": "- Check whether green spots are placed on microtubules" + }, + { + "type": "text", + "bbox": [ + 0.165, + 0.476, + 0.475, + 0.485 + ], + "angle": 0, + "content": "- No, thus cannot be spotty microtubule stain" + }, + { + "type": "text", + "bbox": [ + 0.138, + 0.486, + 0.851, + 0.496 + ], + "angle": 0, + "content": "- Assess localization > if accumulating around microtubule organizing center could be likely vesicles or" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.497, + 0.316, + 0.505 + ], + "angle": 0, + "content": "centriolar satellites" + }, + { + "type": "text", + "bbox": [ + 0.165, + 0.507, + 0.414, + 0.516 + ], + "angle": 0, + "content": "- No, not accumulating at centrosome" + }, + { + "type": "text", + "bbox": [ + 0.138, + 0.517, + 0.386, + 0.526 + ], + "angle": 0, + "content": "- Assess size and shape of the spots" + }, + { + "type": "text", + "bbox": [ + 0.164, + 0.527, + 0.564, + 0.535 + ], + "angle": 0, + "content": "- Small size and round, can exclude intermediate filaments" + }, + { + "type": "text", + "bbox": [ + 0.138, + 0.536, + 0.55, + 0.545 + ], + "angle": 0, + "content": "- Count green spots per cell and check that not more than 10" + }, + { + "type": "text", + "bbox": [ + 0.164, + 0.546, + 0.502, + 0.556 + ], + "angle": 0, + "content": "- Less likely vesicles, likely cytoplasmic bodies" + }, + { + "type": "title", + "bbox": [ + 0.111, + 0.577, + 0.469, + 0.592 + ], + "angle": 0, + "content": "Reasoning example for task 2 'hypothesis generation'." 
+ }, + { + "type": "text", + "bbox": [ + 0.109, + 0.613, + 0.18, + 0.622 + ], + "angle": 0, + "content": "\\*Question\\*" + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.623, + 0.701, + 0.633 + ], + "angle": 0, + "content": "The sperm cells swim in circles. Wouldn't they need to swim straight to reach the egg?" + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.644, + 0.166, + 0.652 + ], + "angle": 0, + "content": "\\*Answer\\*" + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.653, + 0.885, + 0.664 + ], + "angle": 0, + "content": "Sperm motility and its relation to reaching the egg and fertility is very complex and still not fully understood." + }, + { + "type": "text", + "bbox": [ + 0.145, + 0.664, + 0.857, + 0.674 + ], + "angle": 0, + "content": "Yet , it is known that sperm swimming in circles in a shallow observation chamber like you might have" + }, + { + "type": "text", + "bbox": [ + 0.144, + 0.674, + 0.857, + 0.683 + ], + "angle": 0, + "content": "applied, may swim on helical or twisted-ribbon structures in an in vivo or 3D space and thereby might in" + }, + { + "type": "text", + "bbox": [ + 0.144, + 0.683, + 0.349, + 0.693 + ], + "angle": 0, + "content": "fact efficiently swim forward." + }, + { + "type": "text", + "bbox": [ + 0.11, + 0.704, + 0.187, + 0.713 + ], + "angle": 0, + "content": "\\*Reasoning\\*" + }, + { + "type": "text", + "bbox": [ + 0.11, + 0.714, + 0.747, + 0.723 + ], + "angle": 0, + "content": "- Explore the swimming trajectories to validate what the user means with swimming in circles." + }, + { + "type": "text", + "bbox": [ + 0.148, + 0.724, + 0.776, + 0.734 + ], + "angle": 0, + "content": "Detect trajectory, assess how it would be described: curved or circular swimming trajectory" + }, + { + "type": "text", + "bbox": [ + 0.11, + 0.734, + 0.57, + 0.744 + ], + "angle": 0, + "content": "- Explore what is reported on swimming trajectories of sperm cells." + }, + { + "type": "text", + "bbox": [ + 0.138, + 0.745, + 0.522, + 0.754 + ], + "angle": 0, + "content": "- Sperm cell swimming trajectories are species dependent" + }, + { + "type": "text", + "bbox": [ + 0.165, + 0.754, + 0.296, + 0.763 + ], + "angle": 0, + "content": "- Validate species:" + }, + { + "type": "text", + "bbox": [ + 0.206, + 0.764, + 0.617, + 0.774 + ], + "angle": 0, + "content": "Assess whether the user mentioned the species in the prompt." + }, + { + "type": "text", + "bbox": [ + 0.194, + 0.775, + 0.885, + 0.784 + ], + "angle": 0, + "content": "- Assess look of cells in image (morphology, geometry) and compare to other images with known species" + }, + { + "type": "text", + "bbox": [ + 0.225, + 0.785, + 0.303, + 0.794 + ], + "angle": 0, + "content": "eventually" + }, + { + "type": "text", + "bbox": [ + 0.23, + 0.794, + 0.364, + 0.804 + ], + "angle": 0, + "content": "Likely human sperm." + }, + { + "type": "text", + "bbox": [ + 0.138, + 0.805, + 0.879, + 0.814 + ], + "angle": 0, + "content": "- Sperm cell swimming trajectories are experimental setup dependent: shallow versus deep observation chamber" + }, + { + "type": "text", + "bbox": [ + 0.165, + 0.815, + 0.282, + 0.824 + ], + "angle": 0, + "content": "- Validate setup:" + }, + { + "type": "text", + "bbox": [ + 0.194, + 0.825, + 0.872, + 0.835 + ], + "angle": 0, + "content": "- Brightfield microscopy, sperm cells are always swimming parallel, little to no off-focus blurring" + }, + { + "type": "text", + "bbox": [ + 0.165, + 0.836, + 0.306, + 0.844 + ], + "angle": 0, + "content": "of the tail." 
+ }, + { + "type": "text", + "bbox": [ + 0.22, + 0.845, + 0.769, + 0.854 + ], + "angle": 0, + "content": "- Assess how images from different experimental setups look like for human sperm" + }, + { + "type": "text", + "bbox": [ + 0.248, + 0.855, + 0.612, + 0.864 + ], + "angle": 0, + "content": "- Likely shallow cover-slip-based observation chamber" + }, + { + "type": "text", + "bbox": [ + 0.274, + 0.865, + 0.537, + 0.875 + ], + "angle": 0, + "content": "- Consequences of setup for trajectory" + }, + { + "type": "text", + "bbox": [ + 0.302, + 0.875, + 0.865, + 0.884 + ], + "angle": 0, + "content": "In shallow observation chambers sperm cannot freely beat and thus beat parallel to" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.925, + 0.507, + 0.937 + ], + "angle": 0, + "content": "31" + } + ], + [ + { + "type": "code", + "bbox": [ + 0.107, + 0.093, + 0.885, + 0.35 + ], + "angle": 0, + "content": "the glass surface, with no 3D component. - Sperm cells hitting a glass surface align their beat with the glass and this way can get trapped if asymmetrically beating. - Curved trajectory in 2D may relate to a progressive/straight swimming path in 3D - Sperm cell swimming trajectories are stimulus dependent and maturation-state dependent. - Validate setup and explore metadata provided: - Check literature and compare image to available images of sperm from healthy donors (if available). - Compare detected swimming trajectory to swimming trajectories of sperm explored in a similar species and experimental setup where sperm cells are perturbed (if available). - Are there indications for any treatment under which the detected swimming trajectory would be abnormal and how would these look like (if available). - No indications for any treatment. Neither in prompt nor in image. - Explore what is known on sperm cell trajectories to reach the egg in humans. - Not much is known since in vivo studies are not possible. - Sperm cells need to be able to develop hyperactive motility to swim to the egg - How do hyperactive sperm cells look? - Asymmetric beating - What trajectory would they swim in in a 2D observation chamber? - Curved trajectory - Does that fit the swimming trajectory presented? - Yes - Assess whether when summarizing all the obtained information the conclusion from the observed swimming trajectory to sperm not reaching the egg is valid. - No. - Compose summarizing reply to support the conclusion." + }, + { + "type": "code_caption", + "bbox": [ + 0.11, + 0.366, + 0.461, + 0.382 + ], + "angle": 0, + "content": "Reasoning example for task 3 'experiment proposal'." + }, + { + "type": "code", + "bbox": [ + 0.107, + 0.399, + 0.805, + 0.634 + ], + "angle": 0, + "content": "\\*Question\\* Can you suggest a simple experiment to find out whether centrosomal multiplication is occurring here? \\*Answer\\* One option could be to co-stain the EDC3 antibody with a centrosomal marker like PCNT or CEP250. 
\n\\*Reasoning\\* - Extract hypothesis to be tested from the question EDC3 is labeling centrosomes and thus, centrosomal multiplication is occurring in the pictured cell setup - Immunofluorescent antibody staining on a protein of interest (EDC3) in a human cultured cell type - Compare to similar experimental setups that support such a hypothesis - Immunofluorescent antibody staining of centrosomes, co-labeling cell markers, performing micros c counting centrosomes Acquiring or creating a transgenic cell line with a GFP-labeled centrosomal proteins, followed by microscopy \n- Determine which methods are compatible with EDC3 staining Both compatible with co-staining EDC3 \n- Determine which experiment is simplest for someone performing the shown experimental setup and if e one can be realized in less time Immunofluorescent staining of EDC3 and centrosomes \nFind centrosome markers for human cells and assess suitability for this cell type \nFinal decision on experimental setup" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.663, + 0.33, + 0.679 + ], + "angle": 0, + "content": "F.7.2 LLM-based error analysis" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.687, + 0.907, + 0.777 + ], + "angle": 0, + "content": "To better understand how the model approaches questions in MicroVQA, we extend our manual reasoning trace analysis from a subset of the dataset to all questions by leveraging zero-shot prompting of an LLM. The automatic error categorization achieves \\(63\\%\\) accuracy compared to manually assigned categories. Although imperfect, automated error categorization is scalable and preserves the overall distribution of human-identified errors, enabling broader analysis. We also classify correctly answered questions to quantify overall changes across experiments. Below, we present the prompts and category definitions used for classification." + }, + { + "type": "text", + "bbox": [ + 0.107, + 0.792, + 0.871, + 0.825 + ], + "angle": 0, + "content": "Below is a multiple choice question with options and the reasoning that lead a model to an incorrect response. Originally the model was also shown an image with the question. Your task is to use the reasoning trace to tag the error type:" + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.825, + 0.482, + 0.834 + ], + "angle": 0, + "content": "- Perception: the image was not interpreted correctly." + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.835, + 0.769, + 0.844 + ], + "angle": 0, + "content": "- Overgeneralization: the details of the question were ignored and the general case was applied." + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.845, + 0.865, + 0.854 + ], + "angle": 0, + "content": "- Hallucination: details were added during reasoning that weren't in the question or extracted from the image." + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.855, + 0.473, + 0.864 + ], + "angle": 0, + "content": "- Other: the error does not fit the above categories." 
+ }, + { + "type": "list", + "bbox": [ + 0.108, + 0.825, + 0.865, + 0.864 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.874, + 0.174, + 0.884 + ], + "angle": 0, + "content": "Question:" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.925, + 0.509, + 0.937 + ], + "angle": 0, + "content": "32" + } + ], + [ + { + "type": "code", + "bbox": [ + 0.108, + 0.095, + 0.344, + 0.158 + ], + "angle": 0, + "content": "{{question}} \nCorrect answer: {{correct_answer}} \nResoning trace: \n{{reasoning}}" + }, + { + "type": "text", + "bbox": [ + 0.107, + 0.204, + 0.879, + 0.236 + ], + "angle": 0, + "content": "Below is a multiple choice question with options and the reasoning that lead a model to a correct response. Originally the model was also shown an image with the question. Your task is to use the reasoning trace and determine if the question was answered because of these reasons:" + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.236, + 0.83, + 0.246 + ], + "angle": 0, + "content": "- No image: the image is unnecessary because the correct answer doesn't rely on interpreting visual cues." + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.246, + 0.845, + 0.266 + ], + "angle": 0, + "content": "- Visual giveaway: the image is unnecessary because critical visual information is already described in the question itself." + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.266, + 0.686, + 0.276 + ], + "angle": 0, + "content": "- Language bias: the question has information that makes the correct option obvious." + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.276, + 0.726, + 0.285 + ], + "angle": 0, + "content": "- Weak distractors: the distractors are easy to rule out according to the reasoning trace." + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.285, + 0.836, + 0.295 + ], + "angle": 0, + "content": "- Good question: the question is well-constructed and needs the image and specialized knowledge to answer." + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.295, + 0.611, + 0.306 + ], + "angle": 0, + "content": "- Other: the question is hard to answer or doesn't fit the other classes." + }, + { + "type": "list", + "bbox": [ + 0.108, + 0.236, + 0.845, + 0.306 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.316, + 0.173, + 0.325 + ], + "angle": 0, + "content": "Question:" + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.326, + 0.193, + 0.337 + ], + "angle": 0, + "content": "{{question}}" + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.346, + 0.344, + 0.356 + ], + "angle": 0, + "content": "Correct answer: {{correct_answer}}" + }, + { + "type": "text", + "bbox": [ + 0.109, + 0.367, + 0.214, + 0.377 + ], + "angle": 0, + "content": "Resoning trace:" + }, + { + "type": "text", + "bbox": [ + 0.112, + 0.377, + 0.2, + 0.387 + ], + "angle": 0, + "content": "{{reasoning}}" + }, + { + "type": "image_caption", + "bbox": [ + 0.21, + 0.431, + 0.354, + 0.445 + ], + "angle": 0, + "content": "Without image (ablation)" + }, + { + "type": "image", + "bbox": [ + 0.092, + 0.452, + 0.476, + 0.749 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.654, + 0.431, + 0.722, + 0.445 + ], + "angle": 0, + "content": "With image" + }, + { + "type": "image", + "bbox": [ + 0.476, + 0.453, + 0.897, + 0.737 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.768, + 0.908, + 0.797 + ], + "angle": 0, + "content": "Figure 24. 
Reasoning trace categorization with an LLM (GPT-4o): Correct (blue) vs. incorrect (red) answers. Comparison of Claude-Sonnet's responses without (left) and with (right) image access." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.81, + 0.907, + 0.87 + ], + "angle": 0, + "content": "We apply this system to examine the impact of providing or withholding the image for each question (Fig. 24). While overall accuracy remains comparable across both conditions, the distribution of errors and successful reasoning patterns differs meaningfully. As expected, hallucination errors decrease when the image is available, perception errors are rare when the model lacks visual input, and the proportion of questions that do not require an image remains stable." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.871, + 0.907, + 0.902 + ], + "angle": 0, + "content": "Most errors arise from questions requiring visual and contextual understanding, highlighting the importance of multimodal reasoning. Overgeneralization errors decrease when the image is provided, as the model is better grounded in the" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.925, + 0.508, + 0.937 + ], + "angle": 0, + "content": "33" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.907, + 0.137 + ], + "angle": 0, + "content": "question context. However, many questions remain challenging even with visual input, as perception errors increase. This suggests that effectively solving our dataset requires models to both maintain the specifics of the question's context and accurately interpret visual information." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.137, + 0.907, + 0.184 + ], + "angle": 0, + "content": "Additionally, note that the proportion of questions answered correctly through language shortcuts (e.g., language bias and visual giveaways) remains low in both settings, suggesting the effectiveness of the RefineBot strategy. However, addressing weak distractors remains a persistent challenge for specialized datasets." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.925, + 0.509, + 0.937 + ], + "angle": 0, + "content": "34" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.092, + 0.088, + 0.905, + 0.945 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.357, + 0.961, + 0.64, + 0.977 + ], + "angle": 0, + "content": "Figure 25. Collage of images from MicroVQA." + } + ] +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13399/3216c4a9-6622-4f8f-9268-b62c8b700c4e_origin.pdf b/data/2025/2503_13xxx/2503.13399/3216c4a9-6622-4f8f-9268-b62c8b700c4e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8cdc431c7663ce65461aa50ae06e930fbd4635c4 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13399/3216c4a9-6622-4f8f-9268-b62c8b700c4e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6914af7a48f591a220b0a452973c5cfbf0866e5a1e32891b26e86a71fc8971b1 +size 28043625 diff --git a/data/2025/2503_13xxx/2503.13399/full.md b/data/2025/2503_13xxx/2503.13399/full.md new file mode 100644 index 0000000000000000000000000000000000000000..5697c9c9bd927fb93880e61856bec360301ccf54 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13399/full.md @@ -0,0 +1,1610 @@ +# MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research + +James Burgess*¹ Jeffrey J Nirschl*¹ Laura Bravo-Sánchez¹,* + +Alejandro Lozano $^{1}$ Sanket Rajan Gupte $^{1}$ Jesus G. 
Galaz-Montoya $^{1}$ Yuhui Zhang $^{1}$ + +Yuchang Su² Disha Bhowmik³ Zachary Coman³ Sarina M. Hasan⁴ + +Alexandra Johannesson $^{5}$ William D. Leineweber $^{1}$ Malvika G Nair $^{3}$ Ridhi Yarlagadda $^{3}$ + +Connor Zuraski $^{1}$ Wah Chiu $^{1}$ Sarah Cohen $^{3}$ Jan N. Hansen $^{1}$ Manuel D Leonetti $^{6}$ + +Chad Liu6 Emma Lundberg1,5,6 Serena Yeung-Levy1,6 + +1Stanford University, 2Tsinghua University, 3University of North Carolina at Chapel Hill, + +$^{4}$ Princeton University, $^{5}$ KTH Royal Institute of Technology, $^{6}$ Chan Zuckerberg Biohub Network + +# 1 ) Expert Visual Understanding + +What is unusual about the result? + +Perception + +"How is the Seipin localized within the endoplasmic reticulum (ER)?" + +![](images/48fb0b015fe336e7f1f2ee20f79fa8128697bbf493c68df1a3c102ba11f79ade.jpg) + +"Do cells treated with BafA1 (left) express more p26 compared to control (right)?" + +![](images/fbf6225fd6cc5a03ce08245eb457e6ac241f62677aa68ba0b8663ce128f2d34a.jpg) +Figure 1. A scientific experimentation workflow drives discovery: researchers analyze experiments, develop hypotheses, and design further experiments to test their ideas. We release MicroVQA, a visual question answering (VQA) benchmark to test these three tasks in the context of biological microscopy. Each of the 1,042 samples is created by a biology expert, and transformed into a multiple choice question (MCQ). + +# 2 ) Hypothesis generation + +Why does this happen in my experiment? + +Assessment + +"Which mechanism might explain why ASLV particles show signs of merging in a CryoEM image?" + +"Which glial cell is likely responsible for this abnormal reticular fiber pattern?" + +![](images/743ab2730b296e0179253c4096d078e836c67aeb0542c24381b8e5557e9ff0ed.jpg) + +![](images/d49b3c5b889aacafd67f16612e1e0234e4bbf363b0baafdfa4c53e1eb51a055a.jpg) + +# 3 Experimental proposal + +How do I test my hypothesis? + +Action + +"What experiment could you perform to test if CCNB1 protein levels relate to cell cycle stages in human A-431 cells?" + +![](images/49ee27fd732ed49af49867b6852d6dc9224ef77778edb08746e4310eb67393ad.jpg) + +"What experimental change can increase the likelihood of achieving a high-resolution structure?" + +![](images/69d02fafbdedd7b8e5a3996c1814917433fee933752157a05313b1d02141cd0c.jpg) + +![](images/41c5c5c6b79455a8e5bed80c85cbc8336ec8c76d066a5f1bc1a77fb088e9a319.jpg) + +# Abstract + +Scientific research demands sophisticated reasoning over multimodal data, a challenge especially prevalent in biology. Despite recent advances in multimodal large language models (MLLMs) for AI-assisted research, existing multimodal reasoning benchmarks only target up to college-level difficulty, while research-level benchmarks emphasize lower-level perception, falling short of the complex multimodal reasoning needed for scientific discovery. To bridge this gap, we introduce MicroVQA, a visual-question answering (VQA) benchmark designed to assess three reasoning capabilities vital in research workflows: expert image understanding, hypothesis generation, and experiment pro + +posal. MicroVQA consists of 1,042 multiple-choice questions (MCQs) curated by biology experts across diverse microscopy modalities, ensuring VQA samples represent real scientific practice. In constructing the benchmark, we find that standard MCQ generation methods induce language shortcuts, motivating a new two-stage pipeline: an optimized LLM prompt structures question-answer pairs into MCQs; then, an agent-based 'RefineBot' updates them to remove shortcuts. 
Benchmarking on state-of-the-art MLLMs reveal a peak performance of $53\%$ ; models with smaller LLMs only slightly underperform top models, suggesting that language-based reasoning is less challenging than multimodal reasoning; and tuning with scientific articles enhances performance. Expert analysis of chain-of-thought responses shows that perception errors are the most frequent, followed by knowledge errors and then overgen + +eralization errors. These insights highlight the challenges in multimodal scientific reasoning, showing MicroVQA is a valuable resource advancing AI-driven biomedical research. MicroVQA is available here, project here. + +# 1. Introduction + +The transformative potential of AI in scientific discovery lies in its ability to interpret and reason over complex, multimodal data while integrating specialist domain knowledge. Recently, multimodal large language models (MLLMs) have demonstrated notable capabilities in data comprehension, recall, and multi-step reasoning across various modalities on exam-style questions [1, 3, 6, 49, 69, 81]. This progress has generated enthusiasm for scientific AI applications, ranging from multimodal chat AI assistants [33, 70, 85, 96] to AI agents capable of orchestrating complex experiments [12, 16, 26, 55]. However, while MLLMs perform well on general exam tasks, they often lack the specialized, contextual reasoning needed for real scientific challenges, where tasks require advanced image-based reasoning, analysis, and hypothesis-driven experimentation. + +In recent years, efforts to apply scientific MLLMs across a range of domains have laid an essential groundwork. Advances have been achieved in image understanding [41, 43, 47, 50, 57], literature analysis [41, 47, 66, 77], and problem-solving at school and early college levels [53, 87]. In applied settings, AI has facilitated scientific tasks in biology [28, 60, 63, 68], chemistry [12, 38, 55], software [63, 73], and diagnostic pathology and radiology [52]. However, there has been less progress on multimodal tasks requiring reasoning in research-level settings. The gap is due in part to a lack of multimodal benchmarks for scientific reasoning – unlike college-level tasks where existing exams can form benchmarks, research tasks require expert curation. + +Microscopy provides a compelling use case for developing such a benchmark. It requires more than pattern recognition or factual recall – it demands the ability to synthesize visual information with experimental context, formulate hypotheses, and propose experimental follow-ups. To evaluate these advanced capabilities, we introduce MicroVQA, a visual-question answering (VQA) benchmark tailored for multimodal scientific reasoning. Grounded in the real-world scientific practice of biological microscopy, MicroVQA incorporates 1,042 multiple-choice questions (MCQs) manually created by biological researchers across diverse disciplines, with each question taking more than 30 minutes to generate. Careful expert curation ensures the benchmark is relevant to real scientific applications. 
+ +MicroVQA is designed to evaluate three key reasoning tasks crucial to scientific inquiry: (1) expert image understanding - the ability to recognize scientifically relevant features and integrate contextual information about sample preparation; (2) hypothesis generation - the formulation + +of scientifically plausible explanations from experimental data; and (3) experiment proposal – the ability to suggest further experiments that could validate or refute hypotheses. These capabilities form a common workflow in scientific experimentation. They demand both abductive reasoning – inferring the best explanation from multiple possibilities – and deductive reasoning – moving from general principles to specific conclusions [81]. + +We tested standard methods to map open scientific VQA samples to multiple-choice questions (MCQs), but found that they induce language shortcuts – the MCQs failed to truly test multimodal abilities. To ensure question quality and difficulty, we developed a two-stage pipeline for MCQ generation: first, an optimized LLM prompt structures QA pairs into well-formatted MCQs; then, a novel agent-based 'RefineBot' system increases question difficulty by rewriting questions without language shortcuts. This ensures that MicroVQA's MCQs better test true scientific capabilities, rather than simple pattern recognition. + +We benchmarked frontier MLLMs on MicroVQA, finding that the best performers achieved a modest peak of $53\%$ , which shows a significant gap between current models and expert-level scientific reasoning. The variation between different MLLMs was small; notably, smaller LLMs only slightly underperform their larger counterparts, suggesting that the key challenge for our MCQs is not language-only reasoning, but multimodal reasoning or knowledge. We also find that finetuning MLLMs on scientific publications improves MicroVQA performance. Finally, a group of biomedical researchers performed a detailed qualitative analysis into MLLM failure modes, uncovering three major failure modes: Expert perception is the most common, followed by knowledge errors, and then overgeneralization reasoning errors. This further supports that visual reasoning is a key challenge in MicroVQA. + +In summary, our contributions are as follows: + +- We introduce MicroVQA, a benchmark for multimodal reasoning in biological microscopy. We defined three key tasks for scientific research - expert image understanding, hypothesis generation, and experiment proposal. Then, expert researchers curated 1,042 diverse VQA samples. +- We develop a novel two-stage MCQ generation pipeline to address the challenge of creating MCQs from long-format scientific questions without language shortcuts. +- We provide quantitative and qualitative benchmarking of frontier MLLMs, highlighting areas for improvement. + +# 2. Related work + +MLLM reasoning benchmarks While perception focuses on identifying an image's visual features, visual reasoning often integrates prior knowledge to derive new conclusions using logical inference strategies like deduction, induction, abduction, and analogical reasoning [81]. + +Reasoning in MLLMs has focused on image-text VQA [25, 44, 53, 81, 87, 88]. The most prominent examples in science are MMMU [87, 88], ScienceQA, [53] and MathVista [54]. While they do test reasoning, they are derived from exams up to the college level, while our benchmark emphasizes research-level difficulty. 
Other popular multimodal benchmarks test commonsense inference [71, 90], spatial or relational awareness [35, 91] and integrating prior knowledge [56]. Another line of work is visual abductive reasoning [8, 32], or finding the most likely explanation for an observation from incomplete information; this relates closely to hypothesis generation in this work. + +MLLMs in science Many works consider multimodal VQA in scientific or medical domains like microscopy images [50], biological images [57], medical images [30, 34], and chemical structure drawings [43]. Still, they mostly test more straightforward perception and recognition. Figure comprehension for scientific articles does require more reasoning with multimodal data [41, 45-47, 61]; however, the images here are curated for publication, and the emphasis is on understanding content, rather than performing practical tasks. On the other hand, our benchmark includes realistic experiment images and tests more advanced reasoning like hypothesis generation and experiment planning. Scientific tasks have received much more attention in language-only LLMs [9, 15, 22, 37, 41, 58, 62, 63, 66, 67, 73, 80, 95], with GPQA being a notable work targeting PhD-level multi-step reasoning [62], but these do not test multimodal reasoning, which is important to many scientific areas [26]. In terms of models, most MLLMs in the biomedical domain are targeted at clinical tasks [34, 42, 59, 65, 75, 92]. + +Multiple-choice question construction A challenge in creating exams for education research and benchmarks in machine learning is generating multiple-choice questions with quality distractors [4, 27]. Recent works have explored LLM-based generation methods, including zero-shot distractor prompting, few-shot prompting, LLM fine-tuning with high-quality examples, and prompting for known error types [10, 23, 24, 94]. However, a major challenge is that generated distractors are not sufficiently vision-centric - MLLMs can often select the correct answer from the distractors without access to the image [74]. While this is partly explained by language biases on truly vision-centric MCQs [7, 29], others likely do fail to test visual understanding due to poor MCQ construction [41] - this informs our RefineBot MCQ generation method in Sec. 4.2. + +# 3. The MicroVQA benchmark + +Here, we present the MicroVQA benchmark for visual-question-answering (VQA) which advances multimodal + +Table 1. MicroVQA benchmark attributes. + +
| Dataset feature | Value |
| --- | --- |
| Total questions | 1,042 |
| Multi-image questions | 423 |
| Avg. MCQ question length | 66 |
| Avg. MCQ answer length | 15 |
| Avg. raw question length | 158 |
| Avg. raw answer length | 52 |
| Unique image sets | 255 |
| Image Modalities | Light, Fluoro, Electron |
| Image Scales | Tissue, Cell, Subcell, Atomic |
| Organisms | 31 |
| Research areas | 33 |
| Expert question creators | 12 |
| Time to create 1 question | 30-40 mins |
| Time to quality check 1 MCQ | 5 mins |
+ +![](images/03d2e6afce9ebf9ce549a1902ba73c25c37b17096a0491c3439a9eed68299682.jpg) +Figure 2. MicroVQA taxonomy of sub-tasks. + +scientific reasoning by tackling two major evaluation challenges. First, defining tasks is challenging because scientific research encompasses many complex activities [33] – we therefore define three important tasks important to biomedical discovery. Second, VQA samples cannot be readily sourced from existing exams or classification datasets – we therefore assembled a team of biologists to curate challenging reasoning problems. In Sec. 3.1, we provide an overview of the MicroVQA benchmark; Sec. 3.2 details the key tasks; and finally, Sec. 3 demonstrates that MicroVQA is at the forefront of measuring research-level reasoning in MLLMs. In the later Sec. 4, we describe our approach for generating multiple-choice questions. + +# 3.1. Overview of MicroVQA + +MicroVQA is a comprehensive dataset of 1,042 VQA triplets, manually curated by a team of expert biological researchers from diverse fields, and key attributes are summarized in Tab. 1. These expert-crafted questions test reasoning across three key tasks: expert understanding, hypothesis generation, and experiment proposal, and we additionally provide a taxonomy of sub-tasks in Sec. 3. The questions cover a broad spectrum, ranging from biological issues, such as "What is unusual about the cell shape?" - to technical imaging challenges like "Is this feature due to low signal-to-noise ratio (SNR) or is it real?". We provide the 'raw' questions and answers written by experts, which + +are often long and nuanced, along with converted multiple-choice questions (MCQs) suitable for MLLM evaluation. + +The images encompass the most common microscopy modalities used in human biology: brightfield, fluorescence, and electron microscopy. The sample types span the full range of microscopic scales – tissue, cellular, subcellular, and atomic – and emphasize organisms relevant to human biology and medically motivated tasks, namely human and mouse. Over $60\%$ of the samples have multiple images because comparison is essential in microscopy research, and because multichannel fluorescence images cannot be represented in standard RGB. All these attributes are provided as metadata tags to facilitate deeper error analysis, thereby enhancing MicroVQA's value as a resource for advancing AI capabilities in scientific research. + +# 3.2. MicroVQA scientific reasoning tasks + +To define specific reasoning tasks aimed at advancing biomedical research, we established the following criteria: (1) the tasks should use image-text inputs and text outputs suitable for MLLMs, (2) they should require higher-order reasoning beyond simple image processing, and (3) they should emphasize core experimental activities such as experimentation, analysis, and action [33], rather than auxiliary tasks like literature review or writing. The tasks were developed through interviews with nine co-author PIs and postdocs specializing in microscopy research (Appendix D). We identified three essential capabilities that we now introduce, with examples shown in Sec. 3. + +Expert visual understanding After performing an experiment, scientists must interpret data within its experimental context. In biological microscopy, this involves identifying patterns like protein distributions or cell morphologies, while also assessing technical aspects such as artifacts and noise levels. 
The task requires MLLMs to perform anomaly detection and image comparison, going beyond simple perception in two key ways: analysis must consider sample preparation context, and expert knowledge is needed to evaluate biological features and technical artifacts. + +Hypothesis generation The next step is proposing mechanistic hypotheses to explain experimental data. For example, when a drug treatment causes mitochondrial fragmentation in cells, a hypothesis might be that the drug disrupted calcium homeostasis, activating the DRP1 protein through a multi-step pathway, leading to fragmentation. This requires abductive reasoning [8, 20, 32, 81] as one must select from many possible hypotheses given incomplete information. The process demands the MLLM to integrate relevant domain knowledge with experimental context and visual features to reason about which hypothesis best explains the observations. + +Table 2. Comparing scientific multimodal benchmarks close to MicroVQA for in terms of reasoning level or difficulty beyond college level. We show difficulty level, scientific domain, dataset source, and size. Compared to others, MicroVQA either has higher difficulty level, or it has higher reasoning level (which is established in Fig. 3). Compared to others at the same reasoning level, namely MMMU-Pro, it has similar size. + +
| Benchmark | Difficulty level | Domain | Source | Size |
| --- | --- | --- | --- | --- |
| MicroVQA | research | microscopy | expert-curated | 1,042 |
| MMSci [47] | research | science | paper figures | 7132 |
| LabBench [41] | research | biology | webQA | 181 |
| PathVQA [30] | graduate | pathology | textbooks | 16.3k |
| OmnimedVQA* [34] | graduate | medical | classification dataset | 127.9k |
| Microbench [50] | graduate | microscopy | classification dataset | 17.2k |
| MMMU [87] | undergraduate | general | textbooks, webQA | 11k |
| MMMU Pro [88] | undergraduate | general | MCQ dataset | 1,730 |
| Science QA [53] | grade-school | science | exams | 16.8k |
+ +![](images/5295c1b7d26a09eff3d6c5cf0c423b626244b32e9626088d654ec8fc1a9df192.jpg) +Figure 3. Composition of scientific MLLM benchmarks regarding estimated Bloom's taxonomy [11]. Higher levels are more cognitively challenging. MicroVQA has more questions at higher levels compared to other benchmarks, for example, MMMU [87] and ScienceQA [53], while perception-driven medical benchmarks like OmniMedVQA are at lower levels. + +Experiment proposal The final step is designing experiments to validate hypotheses. In microscopy, this often involves selecting appropriate assays and controls, requiring knowledge of suitable protocols and deductive reasoning about whether proposed experiments will provide evidence for or against the hypothesis. This task also may involve adjusting a particular experiment to address technical issues, like how to ensure a good signal-to-noise ratio, + +and this requires both image understanding and knowledge of protocols. + +In Appendix F.7.1, we show an example question in each task with an expert-annotated 'reasoning trace' - it shows the type of reasoning required for typical questions. + +# 3.3. Analysis of MicroVQA benchmark + +Scientific discovery requires complex reasoning beyond basic perception and knowledge, and it requires expertise beyond school-level education. Here we show that MicroVQA addresses a gap that prior multimodal scientific benchmarks do not test high level reasoning at research-level difficulty. + +To assess reasoning levels in MLLM benchmarks, we use Bloom's taxonomy [11], which hierarchically classifies cognitive difficulty from recall to evaluation. We fine-tune GPT-4o to classify Bloom's taxonomy on related MLLM benchmarks with results in Fig. 3; in Tab. 2 we additionally show key attributes of the same benchmarks. Fig. 3 shows that benchmarks having research-level and graduate-level difficulty are dominated by lower reasoning questions (levels 1-2), while our MicroVQA has higher level reasoning (levels 3-4). This is not surprising since OmnimedVQA and MicroBench derive questions from prior classification datasets, while our task definition and data collection approach does reflect high-level reasoning. + +On the other hand, some benchmarks have higher Bloom's reasoning level closer to MicroVQA - MMMU, MMMU-Pro, and Science-QA - but their difficulty level is lower - undergraduate or grade-school. The most comparable benchmark for reasoning is the undergraduate-level MMMU-Pro, and its dataset size is 1,730 compared to our 1,042: this shows that for reasoning-intensive benchmarks, sizes beyond 1,000 are very high. + +# 4. MCQ generation process + +# 4.1. Creation of raw VQA samples by experts + +Expert generation of raw VQA samples Creating VQA triples suitable for research-level reasoning tasks is time-consuming and requires expertise. We recruited 12 human experts to each create approximately 90 VQA samples, taking about 30-40 minutes per sample. Each submission included an image set, a question, a detailed response, and contextual information like experimental context and motivation. They are raw VQA triples, $(v_{0},q_{0},a_{0}) -$ 'raw' means the question and answer can be much longer and more detailed than typical VQA benchmarks. This allowed expert annotators to include enough detail to match what is a valuable input and response for an MLLM. Appendix E shows materials that prompt question creators. 
The images were sourced from the contributor's own experiments, image databases, or research papers published since 2024, and + +all image licenses permit redistribution. + +Quality control We had three quality control criteria: questions should be challenging and not solvable with undergraduate-level knowledge; they should be aligned with the defined tasks; and there should be diversity in image content. Questions were reviewed against these criteria for each contributor, and feedback was provided for further VQA samples. These review rounds occurred after submitting the first 10 questions, and the first 40 questions. + +# 4.2. MCQ generation from raw VQA samples + +To benchmark MLLMs, we transform raw expert-created VQA samples to multiple-choice questions (MCQs) and propose new methods for MCQ generation. + +Motivation: naive MCQ generation is inadequate The standard approach for VQA benchmarks is to zero-shot prompt an LLM with raw VQA samples [4], but this has two problems. The first is that generated MCQs do not follow established educational MCQ design principles (Appendix E.2), likely because the raw VQA questions and answers are often long and variable. Our stage 1 'exam alignment', which we describe in the next section, addresses this challenge. The second challenge is that generated MCQs do not properly test MLLM capabilities. Evaluating GPT-40 on these MCQs scores $93\%$ even without the image, despite most of the dataset questions clearly requiring image analysis, and we investigate this by inspecting the chain-of-thought outputs in Fig. 4 and Appendix E.2. Many distractors can be easily eliminated based on general biology knowledge, or because they are too vague compared to the correct option. We hypothesize that MCQ and distractor generation is challenging when the target answer string is long, and where the subject matter is specialized, suggesting that other scientific benchmarks may encounter similar issues. Overall, this motivates a more involved approach to distractor generation. + +Stage 1: Exam-style MCQ generation Here we ensured that multiple-choice questions (MCQs) conformed to established design principles for biomedical exams (Fig. 4). This effort was led by a physician-scientist co-author trained in medical board exam item writing, who reviewed educational literature on MCQ design [64] and Bloom's Taxonomy for assessing cognitive skills in biomedical exams [5, 19, 89]. We began by manually transforming 50 user-submitted raw question-answer pairs, $(q_0, a_0)$ , into one-best-answer MCQs with $k$ distractors, $(q_1, a_1, \mathbf{d}_1)$ . These 'Gold Standard' MCQs retained the original questions' meanings while strictly adhering to NBME standards [64] and minimizing cues that enable test-wise guessing. We used these gold-standard MCQs in a supervised learning + +![](images/6963e397abdb5827ff6b3efd8fc096e97daf7d2bd82b69915beb1726b9190dba.jpg) +Figure 4. Constructing the MicroVQA multiple choice questions. (0) We defined tasks with domain biological scientists and created 1,061 raw VQA samples. (1) The raw samples were aligned to an exam-style MCQ by manually transforming a small set and optimizing an LLM prompt to match that alignment. (2) MCQs are further improved using RefineBot, a new iterative method to make MCQs more challenging. The lower panel shows an example MCQ from raw VQA to final: the annotations highlight key changes that we further explore in Appendix E.2, where red indicates issues, and green indicates good attributes. 
+ +setup to develop a general LLM prompt, $p$ , that maps raw question-answer pairs to exam-style MCQs. We leverage the DSPy framework [39, 40], which optimizes $p$ to match the gold-standard MCQ outputs. Additionally, the optimization process incorporated supervision from LLM-based quality metrics: content similarity, NBME-aligned formatting, and the absence of extraneous clues. Further details are in Appendix E.2. + +Stage 2: RefineBot question refinement Although Stage 1 produces exam-aligned MCQs, many remain easily solvable due to language shortcuts (see Appendix D). To enhance difficulty, we introduce RefineBot, a method that increases MCQ complexity (Fig. 4). The key idea is that weaknesses in MCQ construction are revealed by the chain-of-thought (CoT) answers. The first LLM agent, the evaluator/reflector, answers the MCQ with CoT and then reflects on the strategies used. For example, a question about processes inside a virus might have a distractor referring to surface processes – this can be eliminated without referencing the image. The LLM summarizes its solution strategy into a reflection text: in this case, that the distractor was implausible based on knowledge that the stated process + +does not not occur inside the virus, and so does not match the question. It passes the reflection to the rewriter LLM agent. The rewriter revises the question stem and generates new distractors to invalidate the identified strategies: in this example it may create distractors referring to processes that do occur inside a virus. To prevent significant changes to the question-answer pair over iterations, we employ an LLM checker to ensure semantic consistency with the original pair. The revised MCQ is returned to the evaluator; if it still answers correctly, the loop continues. The process stops if the rewriter fails the checker or after $n$ iterations. If it fails, RefineBot can be rerun with a different seed, often succeeding—a form of inference-time scaling [13]. All agents are GPT-4o-0806, but to mitigate potential bias against 4o in final evaluation, we also use Claude-3.5-Sonnet-0620 as the evaluator agent; we assess biases in the experiments section. + +Final MCQ quality check Since MCQ generation involves LLM processing, we need to verify the correctness of the final answer. Each MCQ is manually reviewed by the same expert who created that particular question (between 80 and 120 per person). For any question with issues, that expert makes minimal changes to the question correct it. + +# 5. Experiments + +# 5.1. Benchmarking MLLMs with MicroVQA + +We evaluate state-of-the art multimodal large language models (MLLMs) on the MicroVQA benchmark. We include open and closed models spanning the categories: reasoning, large, small, and medical - medical is the closest domain to microscopy with specialist MLLMs. We utilize standard chain-of-thought prompting [88] (deetails in Appendix F). We report mean accuracy in multiple-choice VQA for each tasks Tab. 3. We also perform and analyze no-image Appendix F.5. + +MicroVQA is challenging for all MLLMs Our evaluation reveals a substantial gap between current MLLMs and the upper bound, with the leading model, o1, at 52.8. There is surprisingly little variation in performance between models, with most closed and open models all scoring above 40, with the lowest score (Llama-3.2-11b) likely due to instruction-following issues. 
A human baseline (Appendix F.3) scores only 50.3 – this shows that biology experts are specialized to subdomains, and are challenged by questions from other subdomains. In fact, this demonstrates that experts could benefit from MLLMs that can solve MicroVQA's tasks. The performance variation across the 3 tasks is small – the highest score is 56.4 on 'expert visual understanding (V)', compared to the highest overall of 52.8. The 'hypothesis generation' task is the hardest for all models, and the gap is largest for smaller models. For example, the gap between hypothesis generation and expert visual understanding is 3.7 for the strongest large model, but 10.7 for the strongest small model.

**Smaller models are surprisingly competitive** For all models with a lower-parameter equivalent – Gemini-1.5, QwenVL, VILA, and Llama – the drop in performance due to size is less than 3 points for all except Llama. Moreover, these drops are smaller than on other multimodal reasoning benchmarks like MMMU, where, for example, Gemini drops by 9.6 between Pro and Flash-8B [87]. Typically, smaller models share the same vision encoder but use a smaller LLM [79]. Since the size of the language model has a small impact, we hypothesize that for the 'solved' part of MicroVQA, the language-only reasoning is relatively simple compared to domains like math that require multi-step logic. This suggests that other aspects of MLLM reasoning are more challenging, such as multimodal reasoning. If true, future work could focus on stronger image representations, an idea supported by the qualitative error analysis in Sec. 5.2.

**Specialist biomedical training does improve performance** While no specialist MLLMs have been developed

Table 3. VQA accuracy on MicroVQA by task: expert visual understanding (V), hypothesis generation (H), experiment proposal (E). Models marked * were used in MCQ generation, which may affect comparative performance (see Sec. 5.1). The model marked † is the base LLaVA model for LLaVA-Med. Best values are bolded.
| Category | Model | Overall | V | H | E |
| --- | --- | --- | --- | --- | --- |
| reasoning | o1 [36] | **52.8** | **55.4** | **50.2** | **53.0** |
| large models | *Claude-3.5-Sonnet [6] | 51.7 | 54.1 | **50.2** | 50.4 |
| large models | Gemini-Pro-1.5 [69] | 51.1 | 52.0 | **50.2** | 50.9 |
| large models | Pixtral-Large [2] | 49.8 | 50.8 | 49.5 | 48.7 |
| large models | Grok-2-Vision [84] | 48.4 | 50.3 | 46.4 | 48.7 |
| large models | Qwen-2-vl-72b-Instruct [79] | 47.5 | 49.2 | 45.7 | 47.8 |
| large models | VILA1.5-40b [48] | 47.5 | 47.2 | 47.9 | 47.4 |
| large models | *GPT-4o [1] | 45.6 | 48.7 | 43.1 | 44.8 |
| large models | Llama-3.1-Nemotron-70b-Instruct [83] | 44.2 | 44.9 | 43.3 | 44.8 |
| large models | Llama-3.2-90b-Vision-Instruct [21] | 42.4 | 44.9 | 42.1 | 38.7 |
| small models | Qwen-2-VL-7b [79] | 48.8 | 54.1 | 43.3 | 49.6 |
| small models | Claude-3.5-Haiku [6] | 47.1 | 48.0 | 43.8 | 51.7 |
| small models | Gemini-Flash-1.5-8b [69] | 46.7 | 48.7 | 43.6 | 49.1 |
| small models | GPT-4o-mini [1] | 46.2 | 48.5 | 43.6 | 47.0 |
| small models | Pixtral-12b [2] | 45.6 | 46.9 | 44.8 | 44.8 |
| small models | VILA1.5-13b [48] | 41.8 | 41.8 | 47.5 | 40.9 |
| small models | Llama-3.2-11b-vision-instruct [21] | 30.3 | 32.4 | 29.3 | 28.7 |
| medical | LLaVA-Med-Mistral-7B [42] | 43.0 | 37.3 | 47.1 | 41.6 |
| medical | †LLaVA-Mistral-7B [49] | 39.8 | 31.6 | 43.1 | 37.1 |
| | Random | 22.0 | 21.9 | 21.8 | 21.9 |
| | Human | 50.3 | 52.7 | 47.5 | 51.4 |
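To make the evaluation protocol concrete (standard chain-of-thought prompting over each MCQ, then mean accuracy per task as reported in Tab. 3), a minimal harness could look like the sketch below. The sample fields (`question`, `choices`, `answer_idx`, `task`, `images`) and the `ask_mllm` callable are illustrative assumptions, not the released MicroVQA evaluation code.

```python
# Minimal evaluation sketch; field names and the ask_mllm hook are assumptions,
# not the official MicroVQA harness.
import re
from collections import defaultdict

LETTERS = "ABCDEFGH"

def format_mcq(sample):
    """Render one multiple-choice question with a chain-of-thought instruction."""
    options = "\n".join(f"{LETTERS[i]}. {c}" for i, c in enumerate(sample["choices"]))
    return (
        f"{sample['question']}\n\n{options}\n\n"
        "Think step by step about the image(s) and the experimental context, "
        "then end with a line of the form 'Answer: <letter>'."
    )

def parse_choice(response_text):
    """Pull the final answer letter out of a chain-of-thought response."""
    letters = re.findall(r"Answer:\s*([A-H])", response_text)
    return LETTERS.index(letters[-1]) if letters else -1

def evaluate(samples, ask_mllm):
    """ask_mllm(prompt, images) -> str is any user-supplied MLLM call."""
    correct, total = defaultdict(int), defaultdict(int)
    for s in samples:
        pred = parse_choice(ask_mllm(format_mcq(s), s["images"]))
        total[s["task"]] += 1
        correct[s["task"]] += int(pred == s["answer_idx"])
    return {task: 100.0 * correct[task] / total[task] for task in total}
```

Any closed-API or locally hosted open model can be plugged in through `ask_mllm`, so the same loop produces the per-task columns V, H, and E.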
+ +Table 4. Ablation study on MicroVQA MCQ generation stages (shown in Fig. 4). Accuracy is high because MCQs have shortcuts (Sec. 4.1) after 'Stage 1' exam alignment, but is lower after 'Stage 2'. Final column is the relative decrease in accuracy. Models with * were used in Stage 2 generation and have the biggest accuracy drops (bolded). They are grouped with different models from the same provider in **. + +
| Model | Stage 1 | Stage 2 | Relative drop |
| --- | --- | --- | --- |
| *GPT-4o | 79.7 | 46.8 | -41.2 |
| **GPT-4o-mini | 75.6 | 46.2 | -39.0 |
| *Claude-3.5-Sonnet | 82.2 | 51.7 | -37.1 |
| **Claude-3.5-Haiku | 77.3 | 47.1 | -39.0 |
| o1 | 81.6 | 52.8 | -35.3 |
| Pixtral-Large | 80.1 | 49.8 | -37.8 |
| Gemini-Pro-1.5 | 79.1 | 51.1 | -35.4 |
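Table 4 isolates the effect of the Stage 2 refinement. As a rough illustration of the RefineBot loop from Sec. 4.2 (an evaluator/reflector, a rewriter, and a semantic checker), one possible control flow is sketched below; the four LLM-call helpers are hypothetical placeholders for the GPT-4o and Claude-3.5-Sonnet prompts, not the authors' implementation.

```python
# Schematic of the RefineBot loop (Sec. 4.2); all four callables are placeholder
# LLM prompts and not the released implementation.
def refinebot(mcq, answer_with_cot, reflect, rewrite, check_semantics, max_iters=5):
    """Iteratively rewrite an MCQ until the evaluator can no longer solve it via shortcuts."""
    for _ in range(max_iters):
        prediction, cot = answer_with_cot(mcq)       # evaluator: answer with chain-of-thought
        if prediction != mcq["answer_idx"]:
            return mcq                               # shortcuts appear removed; accept this MCQ
        strategy = reflect(mcq, cot)                 # reflector: summarize the shortcut strategy used
        candidate = rewrite(mcq, strategy)           # rewriter: new stem/distractors that block it
        if not check_semantics(mcq, candidate):      # checker: preserve the original question-answer meaning
            return None                              # failed; caller may rerun with a different seed
        mcq = candidate
    return None                                      # gave up after max_iters rewrites
```

Under this sketch, questions that survive the loop are the ones evaluated in the 'Stage 2' column above.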
+ +for microscopy research in particular, LLaVA-Med is fine-tuned on scientific articles from PubMed that include data types overlapping with MicroVQA - especially (tissue) pathology. In Tab. 3, we compare LLaVA-Med against it's base model, LLaVA-Mistral-7B and find overall stronger performance by 4.5 points. This suggests two clear opportunities for improving MicroVQA performance. The first is to instruct-tune with a dataset that better aligns with the MicroVQA data domain: not only tissue pathology, but also fluorescence and electron microscopy, which is available in scientific articles [93]. The second is to simply begin training from a stronger open model like Pixtral. Beyond that, there is more to explore in MLLM specialization, such as tuning the base image encoder for microscopy [51]. + +![](images/fb7ae61d2c764b73e9ea659d9fa49d8dfbcb46b282473ec901f7169f1e274c0a.jpg) +Figure 5. Performance by sub-task and Bloom's level for best models: Gemini-1.5-Pro (closed source), VILA1.5-40B (open-source), and LlaVA-Med-Mistral-7B (medical). + +RefineBot MCQ generation is very effective but introduces small model biases We introduced a new method for refining MCQs to make them more challenging - RefineBot. Tab. 4 shows that between stage 1 and stage 2 (before and after RefineBot), the relative accuracy for all models drops by between $35\%$ and $42\%$ . Firstly, this validates that RefineBot is a valuable tool for making benchmarks harder. However, the most significant drops are for models used by RefineBot (GPT-4o & Claude-3.5-Sonnet, and their smaller versions, GPT-4o-mini & Claude-3-Haiku). The adversarial process introduced a small bias against these models (and we hypothesise the bias would be higher if using only one model instead of two). Despite making fair evaluation slightly more challenging, our use of frontier LLMs in the refinement process is well-motivated, as these models are best positioned to generate challenging, high-quality MCQs that probe the boundaries of current capabilities. Moreover, the bias seems modest - less than 10 points of relative drop - while the benefit is a huge increase in difficulty across all test MLLMs, as seen in Tab. 4. + +Metadata-based analysis supports deeper error analysis Fig. 5 shows error rates based on attribute tag. In Appendix F we explore results further - namely that multi-image reasoning has surprisingly good performance, higher level Bloom's questions are harder, and (not shown) rare image modalities perform worse. + +# 5.2. Error analysis on MicroVQA + +To understand model errors, three biomedical experts reviewed the Chain-of-Thought (CoT) responses of 30 samples on Claude 3.5 Sonnet, with each review requiring at least 45 mins of careful analysis. Errors were classified into a major category and minor contributing errors were also + +noted. Expert perception accounted for $50\%$ of errors, $30\%$ were knowledge errors, and $13\%$ were due to overgeneralization; the final $7\%$ were text hallucination or general reasoning error. Appendix F.6 shows examples for the most common types. We expand on this analysis with an automated version across the complete dataset in Appendix F.7. + +Expert perception errors occurred when the model misinterpreted visual features, leading to incorrect inferences. For example, in Fig. 13, the model judges the small electron-dense objects as ribosomes, rather than stress granules – ribosomes tend to be smaller, have a more regular shape, and lower contrast. 
Language bias may have contributed to the preference for ribosomes since they are very commonly studied in EM literature available to LLM training. Overall, perception errors were the dominant error type (50%), which suggests that future methods should improve vision representations in MLLMs for microscopy. + +Knowledge errors highlight gaps in nuanced biomedical knowledge. One example in Fig. 17 deals with understanding how protein localization changes during cell signaling. An experiment gives information about a transmembrane signaling protein, and the question asks to interpret the image and explain the function in these cells. The model correctly perceives a punctate distribution, but assumes the role of coordinating intracellular vesicle trafficking. This represents a shallow interpretation that could have been improved with specialized biomedical knowledge. These errors could be mitigated by improving knowledge in MLLMs, either in training or as a database at inference. + +Overgeneralization errors (and simplification errors) reflect the model's tendency to apply broad scientific principles without regard for specific context – arguably they are reasoning errors. In one example in Fig. 21, the question is to compare the risk of malignancy of a tumor, schwannoma, to other nerve sheath tumors. During reasoning, the model proposes a simplified question – what is true about schwannoma – which it answers while ignoring the comparison. + +# 6. Conclusion + +MLLMs hold great potential for advancing scientific discovery. We aim to inspire the development of broadly applicable scientific AI systems. Future directions include training models for more robust image representations, integrating knowledge bases, exploring open evaluation methods, and studying LLMs' reasoning over hypotheses. Beyond microscopy, we hope our benchmark serves as a blueprint for AI evaluation in expert-driven domains. More generally we aspire for different application domains – biomedicine, chemistry, materials science – to share methods towards the common goal of building scientific AI systems. + +# References + +[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 2, 7 +[2] Pravesh Agrawal, Szymon Antoniak, Emma Bou Hanna, Devendra Chaplot, Jessica Chudnovsky, Saurabh Garg, Theophile Gervet, Soham Ghosh, Amélie Héliiou, Paul Jacob, et al. Pixtral 12b. arXiv preprint arXiv:2410.07073, 2024. 7 +[3] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in neural information processing systems, 35:23716-23736, 2022. 2 +[4] Elaf Alhazmi, Quan Z Sheng, Wei Emma Zhang, Munazza Zaib, and Ahoud Alhazmi. Distractor generation for multiple-choice questions: A survey of methods, datasets, and evaluation. arXiv preprint arXiv:2402.01512, 2024. 3, 5 +[5] Lorin W. Anderson and David R. Krathwohl, editors. A Taxonomy for Learning, Teaching, and Assessing. A Revision of Bloom's Taxonomy of Educational Objectives. Allyn & Bacon, New York, 2 edition, 2001. 5, 8 +[6] Anthropic. The claude 3 model family: Opus, sonnet, haiku. Model card, Anthropic, 2024. 2, 7 +[7] Nishant Balepur, Abhilasha Ravichander, and Rachel Rudinger. 
Artifacts or abduction: How do llms answer multiple-choice questions without the question? arXiv preprint arXiv:2402.12483, 2024. 3, 18 +[8] Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen-tau Yih, and Yejin Choi. Abductive commonsense reasoning. arXiv preprint arXiv:1908.05739, 2019. 3, 4 +[9] Owen Bianchi, Maya Willey, Chelsea X Alvarado, Benjamin Danek, Marzieh Khani, Nicole Kuznetsov, Anant Dadu, Syed Shah, Mathew J Koretsky, Mary B Makarious, et al. Cardbiomedbench: A benchmark for evaluating large language model performance in biomedical research. bioRxiv, pages 2025-01, 2025. 3 +[10] Semere Kiros Bitew, Johannes Deleu, Chris Develder, and Thomas Demeester. Distractor generation for multiple-choice questions with predictive prompting and large language models. arXiv preprint arXiv:2307.16338, 2023. 3 +[11] Benjamin S. Bloom. Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: The Cognitive Domain. David McKay Co Inc, New York, 1956. 4, 5 +[12] Daniil A Boiko, Robert MacKnight, Ben Kline, and Gabe Gomes. Autonomous chemical research with large language models. Nature, 624(7992):570-578, 2023. 2 +[13] Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with + +repeated sampling. arXiv preprint arXiv:2407.21787, 2024.6 +[14] James Burgess, Jeffrey J Nirschl, Maria-Clara Zanellati, Alejandro Lozano, Sarah Cohen, and Serena Yeung-Levy. Orientation-invariant autoencoders learn robust representations for shape profiling of cells and organelles. Nature Communications, 15(1):1022, 2024. 7 +[15] Hengxing Cai, Xiaochen Cai, Junhan Chang, Sihang Li, Lin Yao, Changxin Wang, Zhifeng Gao, Yongge Li, Mujie Lin, Shuwen Yang, et al. Sciassess: Benchmarking llm proficiency in scientific literature analysis. arXiv preprint arXiv:2403.01976, 2024. 3 +[16] Anne E Carpenter, Beth A Cimini, and Kevin W Eliceiri. Smart microscopes of the future. nature methods, 20(7):962-964, 2023. 2 +[17] Nathan H Cho, Keith C Cheveralls, Andreas-David Brunner, Kibeom Kim, André C Michaelis, Preethi Raghavan, Hirofumi Kobayashi, Laura Savy, Jason Y Li, Hera Canaj, et al. Opencell: Endogenous tagging for the cartography of human cellular organization. Science, 375(6585):eabi6983, 2022. 7 +[18] Kenneth Clark, Bruce Vendt, Kirk Smith, John Freymann, Justin Kirby, Paul Koppel, Stephen Moore, Stanley Phillips, David Maffitt, Michael Pringle, et al. The cancer imaging archive (tcia): maintaining and operating a public information repository. Journal of digital imaging, 26:1045-1057, 2013. 7 +[19] Alison Crowe, Clarissa Dirks, and Mary Pat Wenderoth. Biology in bloom: implementing bloom's taxonomy to enhance student learning in biology. CBE Life Sci. Educ., 7 (4):368-381, 2008. 5, 8 +[20] Igor Douven. Abduction, 2011. 4 +[21] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. 7 +[22] Kehua Feng, Keyan Ding, Weijie Wang, Xiang Zhuang, Zeyuan Wang, Ming Qin, Yu Zhao, Jianhua Yao, Qiang Zhang, and Huajun Chen. Sciknoweval: Evaluating multilevel scientific knowledge of large language models. arXiv preprint arXiv:2406.09098, 2024. 
3 +[23] Wanyong Feng, Jaewook Lee, Hunter McNichols, Alexander Scarlatos, Digory Smith, Simon Woodhead, Nancy Otero Ornelas, and Andrew Lan. Exploring automated distractor generation for math multiple-choice questions via large language models. arXiv preprint arXiv:2404.02124, 2024. 3 +[24] Nigel Fernandez, Alexander Scarlatos, Simon Woodhead, and Andrew Lan. Divert: Distractor generation with variational errors represented as text for math multiple-choice questions. arXiv preprint arXiv:2406.19356, 2024. 3 +[25] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, et al. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394, 2023. 3 +[26] Shanghua Gao, Ada Fang, Yepeng Huang, Valentina Giunchiglia, Ayush Noori, Jonathan Richard Schwarz, Yasha + +Ektefaie, Jovana Kondic, and Marinka Zitnik. Empowering biomedical discovery with ai agents. arXiv preprint arXiv:2404.02831, 2024. 2, 3 +[27] Mark J Gierl, Okan Bulut, Qi Guo, and Xinxin Zhang. Developing, analyzing, and using distractors for multiple-choice tests in education: A comprehensive review. Review of educational research, 87(6):1082-1116, 2017. 3, 8 +[28] Juraj Gottweis, Wei-Hung Weng, Alexander Daryin, Tao Tu, Anil Palepu, Petar Sirkovic, Artiom Myaskovsky, Felix Weissenberger, Keran Rong, Ryutaro Tanno, et al. Towards an ai co-scientist. arXiv preprint arXiv:2502.18864, 2025. 2 +[29] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904-6913, 2017. 3, 13, 18 +[30] Xuehai He, Yichen Zhang, Luntian Mou, Eric Xing, and Pengtao Xie. Pathvqa: 30000+ questions for medical visual question answering. arXiv preprint arXiv:2003.10286, 2020. 3, 4 +[31] Marco Y Hein, Duo Peng, Verina Todorova, Frank McCarthy, Kibeom Kim, Chad Liu, Laura Savy, Camille Januel, Rodrigo Baltazar-Nunez, Madhurya Sekhar, et al. Global organelle profiling reveals subcellular localization and remodeling at proteome scale. Cell, 188(4):1137-1155, 2025. 7 +[32] Jack Hessel, Jena D Hwang, Jae Sung Park, Rowan Zellers, Chandra Bhagavatula, Anna Rohrbach, Kate Saenko, and Yejin Choi. The abduction of sherlock holmes: A dataset for visual abductive reasoning. In European Conference on Computer Vision, pages 558-575. Springer, 2022. 3, 4 +[33] Tom Hope, Doug Downey, Daniel S Weld, Oren Etzioni, and Eric Horvitz. A computational inflection for scientific discovery. Communications of the ACM, 66(8):62-73, 2023. 2, 3, 4, 6 +[34] Yutao Hu, Tianbin Li, Quanfeng Lu, Wenqi Shao, Junjun He, Yu Qiao, and Ping Luo. Omnimedvqa: A new large-scale comprehensive evaluation benchmark for medical lvm. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22170-22183, 2024. 3, 4, 2, 7 +[35] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6700-6709, 2019. 3 +[36] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024. 7 +[37] Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W Cohen, and Xinghua Lu. 
Pubmedqa: A dataset for biomedical research question answering. arXiv preprint arXiv:1909.06146, 2019. 3 +[38] Yeonghun Kang and Jihan Kim. ChatMOF: an artificial intelligence system for predicting and generating metal-organic frameworks using large language models. Nat. Commun., 15 (1):4705, 2024. 2 + +[39] Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive NLP. arXiv preprint arXiv:2212.14024, 2022. 6 +[40] Omar Khattab, Arnav Singhvi, Paridhi Maheshwari, Zhiyuan Zhang, Keshav Santhanam, Sri Vardhamanan, Saiful Haq, Ashutosh Sharma, Thomas T. Joshi, Hanna Moazam, Heather Miller, Matei Zaharia, and Christopher Potts. Dspy: Compiling declarative language model calls into self-improving pipelines. arXiv preprint arXiv:2310.03714, 2023.6 +[41] Jon M Laurent, Joseph D Janizek, Michael Ruzo, Michaela M Hinks, Michael J Hammerling, Siddharth Narayanan, Manvitha Ponnapati, Andrew D White, and Samuel G Rodriques. Lab-bench: Measuring capabilities of language models for biology research. arXiv preprint arXiv:2407.10362, 2024. 2, 3, 4 +[42] Chunyuan Li, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, and Jianfeng Gao. Llava-med: Training a large language-and-vision assistant for biomedicine in one day. Advances in Neural Information Processing Systems, 36, 2024. 3, 7 +[43] Junxian Li, Di Zhang, Xunzhi Wang, Zeying Hao, Jingdi Lei, Qian Tan, Cai Zhou, Wei Liu, Yaotian Yang, Xinrui Xiong, et al. Chemvlm: Exploring the power of multimodal large language models in chemistry area. CoRR, 2024. 2, 3 +[44] Lin Li, Guikun Chen, Hanrong Shi, Jun Xiao, and Long Chen. A survey on multimodal benchmarks: In the era of large ai models. arXiv preprint arXiv:2409.18142, 2024. 3 +[45] Lei Li, Yuqi Wang, Runxin Xu, Peiyi Wang, Xiachong Feng, Lingpeng Kong, and Qi Liu. Multimodal arxiv: A dataset for improving scientific comprehension of large vision-language models. arXiv preprint arXiv:2403.00231, 2024. 3 +[46] Shengzhi Li and Nima Tajbakhsh. Scigraphqa: A large-scale synthetic multi-turn question-answering dataset for scientific graphs. arXiv preprint arXiv:2308.03349, 2023. +[47] Zekun Li, Xianjun Yang, Kyuri Choi, Wanrong Zhu, Ryan Hsieh, HyeonJung Kim, Jin Hyuk Lim, Sungyoung Ji, Byungju Lee, Xifeng Yan, et al. Mmsci: A multimodal multi-discipline dataset for phd-level scientific comprehension. arXiv preprint arXiv:2407.04903, 2024. 2, 3, 4 +[48] Ji Lin, Hongxu Yin, Wei Ping, Pavlo Molchanov, Mohammad Shoeybi, and Song Han. Vila: On pre-training for visual language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26689-26699, 2024. 7 +[49] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36, 2024. 2, 7 +[50] Alejandro Lozano, Jeffrey Nirschl, James Burgess, Sanket Rajan Gupte, Yuhui Zhang, Alyssa Unell, and Serena Yeung. Micro-bench: A microscopy benchmark for vision-language understanding. Advances in Neural Information Processing Systems, 37:30670-30685, 2024. 2, 3, 4 +[51] Alejandro Lozano, Min Woo Sun, James Burgess, Liangyu Chen, Jeffrey J Nirschl, Jeffrey Gu, Ivan Lopez, Josiah + +Aklilu, Austin Wolfgang Katzer, Collin Chiu, et al. Biomedica: An open biomedical image-caption archive, dataset, and vision-language models derived from scientific literature. 
arXiv preprint arXiv:2501.07171, 2025. 7 +[52] Ming Y "Lu, Bowen Chen, Drew F K Williamson, Richard J Chen, Melissa Zhao, Aaron K Chow, Kenji Ikemura, Ahrong Kim, Dimitra Pouli, Ankush Patel, Amr Soliman, Chengkuan Chen, Tong Ding, Judy J Wang, Georg Gerber, Ivy Liang, Long Phi Le, Anil V Parwani, Luca L Weishaupt, and Faisal" Mahmood. "a multimodal generative AI copilot for human pathology". "Nature", 634(8033):466-473", 2024. 2 +[53] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507-2521, 2022. 2, 3, 4 +[54] Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255, 2023. 3 +[55] Andres M. Bran, Sam Cox, Oliver Schilter, Carlo Baldassari, Andrew D White, and Philippe Schwaller. Augmenting large language models with chemistry tools. Nature Machine Intelligence, pages 1-11, 2024. 2 +[56] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pages 3195-3204, 2019. 3 +[57] M Maruf, Arka Daw, Kazi Sajeed Mehrab, Harish Babu Manogaran, Abhilash Neog, Medha Sawhney, Mridul Khurana, James P Balhoff, Yasin Bakis, Bahadir Altintas, et al. Vlm4bio: A benchmark dataset to evaluate pretrained vision-language models for trait discovery from biological images. arXiv preprint arXiv:2408.16176, 2024. 2, 3 +[58] Adrian Mirza, Nawaf Alampara, Sreekanth Kunchapu, Benedict Emoekabu, Aswanth Krishnan, Mara Wilhelmi, Macjonathan Okereke, Juliane Eberhardt, Amir Mohammad Elahi, Maximilian Greiner, et al. Are large language models superhuman chemists? arXiv preprint arXiv:2404.01475, 2024.3 +[59] Michael Moor, Oishi Banerjee, Zahra Shakeri Hossein Abad, Harlan M Krumholz, Jure Leskovec, Eric J Topol, and Pranav Rajpurkar. Foundation models for generalist medical artificial intelligence. Nature, 616(7956):259-265, 2023. 3 +[60] Siddharth Narayanan, James D Braza, Ryan-Rhys Griffiths, Manu Ponnapati, Albert Bou, Jon Laurent, Ori Kabeli, Geemi Wellawatte, Sam Cox, Samuel G Rodriques, et al. Aviary: training language agents on challenging scientific tasks. arXiv preprint arXiv:2412.21154, 2024. 2 +[61] Shraman Pramanick, Rama Chellappa, and Subhashini Venugopalan. Spiqa: A dataset for multimodal question answering on scientific papers. arXiv preprint arXiv:2407.09413, 2024. 3 + +[62] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. arXiv preprint arXiv:2311.12022, 2023. 3 +[63] Loic A Royer. Omega—harnessing the power of large language models for bioimage analysis. nature methods, 21(8): 1371-1373, 2024. 2, 3 +[64] Billings Melissa S, DeRuchie Kristine, Hussie Kieran, Kulesher Allison, Merrell Jacquelyn, Morales Amy, Panaguaa Miguel A, Sherlock Jennifer, Swygert Kimberly, and Tyson Julie. Nbme item-writing guide: Constructing written test questions for the health sciences, 2021. 
5, 8 +[65] Khaled Saab, Tao Tu, Wei-Hung Weng, Ryutaro Tanno, David Stutz, Ellery Wulczyn, Fan Zhang, Tim Strother, Chunjong Park, Elahe Vedadi, et al. Capabilities of gemini models in medicine. arXiv preprint arXiv:2404.18416, 2024.3 +[66] Michael D Skarlinski, Sam Cox, Jon M Laurent, James D Braza, Michaela Hinks, Michael J Hammerling, Manvitha Ponnapati, Samuel G Rodriques, and Andrew D White. Language agents achieve superhuman synthesis of scientific knowledge. arXiv preprint arXiv:2409.13740, 2024. 2, 3 +[67] Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, and Kai Yu. Scieval: A multi-level large language model evaluation benchmark for scientific research. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 19053-19061, 2024. 3 +[68] Kyle Swanson, Wesley Wu, Nash L Bulaong, John E Pak, and James Zou. The virtual lab: Ai agents design new sarscov-2 nanobodies with experimental validation. *bioRxiv*, pages 2024–11, 2024. 2 +[69] Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023. 2, 7 +[70] Surendrabikram Thapa and Surabhi Adhikari. Chatgpt, bard, and large language models for biomedical research: opportunities and pitfalls. Annals of biomedical engineering, 51 (12):2647-2651, 2023. 2 +[71] Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, and Candace Ross. Winoground: Probing vision and language models for visio-linguistic compositionality. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5238–5248, 2022. 3 +[72] Peter J Thul, Lovisa Åkesson, Mikaela Wiking, Diana Mahdessian, Aikaterini Geladaki, Hammou Ait Blal, Tove Alm, Anna Asplund, Lars Björk, Lisa M Breckels, et al. A subcellular map of the human proteome. Science, 356(6340): eaal3321, 2017. 7 +[73] Minyang Tian, Luyu Gao, Shizhuo Dylan Zhang, Xinan Chen, Cunwei Fan, Xuefei Guo, Roland Haas, Pan Ji, Kittithat Krongchon, Yao Li, et al. Scicode: A research coding benchmark curated by scientists. arXiv preprint arXiv:2407.13168, 2024. 2, 3 + +[74] Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, et al. Cambrian1: A fully open, vision-centric exploration of multimodal llms. arXiv preprint arXiv:2406.16860, 2024. 3, 18 +[75] Tao Tu, Shekoofeh Azizi, Danny Driess, Mike Schaekermann, Mohamed Amin, Pi-Chuan Chang, Andrew Carroll, Charles Lau, Ryutaro Tanno, Ira Ktena, et al. Towards generalist biomedical ai. NEJM AI, 1(3):Aioa2300138, 2024. 3 +[76] Matheus P Viana, Jianxu Chen, Theo A Knijnenburg, Ritvik Vasan, Calysta Yan, Joy E Arakaki, Matte Bailey, Ben Berry, Antoine Borensztejn, Eva M Brown, et al. Integrated intracellular organization and its variations in human ips cells. Nature, 613(7943):345-354, 2023. 7 +[77] David Wadden, Kejian Shi, Jacob Morrison, Aakanksha Naik, Shruti Singh, Nitzan Barzilay, Kyle Lo, Tom Hope, Luca Soldaini, Shannon Zejiang Shen, et al. Scirff: A resource to enhance language model instruction-following over scientific literature. arXiv preprint arXiv:2406.07835, 2024. 2 +[78] Fei Wang, Xingyu Fu, James Y Huang, Zekun Li, Qin Liu, Xiaogeng Liu, Mingyu Derek Ma, Nan Xu, Wenxuan Zhou, Kai Zhang, et al. Muirbench: A comprehensive benchmark for robust multi-image understanding. 
arXiv preprint arXiv:2406.09411, 2024. 17 +[79] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. 7 +[80] Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R Loomba, Shichang Zhang, Yizhou Sun, and Wei Wang. Scibench: Evaluating college-level scientific problem-solving abilities of large language models. arXiv preprint arXiv:2307.10635, 2023. 3 +[81] Yiqi Wang, Wentao Chen, Xiaotian Han, Xudong Lin, Haiteng Zhao, Yongfei Liu, Bohan Zhai, Jianbo Yuan, Quanzeng You, and Hongxia Yang. Exploring the reasoning abilities of multimodal large language models (mllms): A comprehensive survey on emerging trends in multimodal reasoning. arXiv preprint arXiv:2401.06805, 2024. 2, 3, 4 +[82] Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, et al. Mmlu-pro: A more robust and challenging multi-task language understanding benchmark. arXiv preprint arXiv:2406.01574, 2024. 7 +[83] Zhilin Wang, Alexander Bukharin, Olivier Delalleau, Daniel Egert, Gerald Shen, Jiaqi Zeng, Oleksii Kuchaiev, and Yi Dong. Helpsteer2-preference: Complementing ratings with preferences. arXiv preprint arXiv:2410.01257, 2024. 7 +[84] xAI. Bringing grok to everyone, 2024. Accessed: 2025-03-13. 7 +[85] Tong Xie, Yuwei Wan, Wei Huang, Yufei Zhou, Yixuan Liu, Qingyuan Linghu, Shaozhou Wang, Chunyu Kit, Clara Grazian, Wenjie Zhang, et al. Large language models as master key: unlocking the secrets of materials science with gpt. arXiv preprint arXiv:2304.02213, 2023. 2 + +[86] Kaining Ying, Fanqing Meng, Jin Wang, Zhiqian Li, Han Lin, Yue Yang, Hao Zhang, Wenbo Zhang, Yuqi Lin, Shuo Liu, et al. Mmt-bench: A comprehensive multimodal benchmark for evaluating large vision-language models towards multitask agi. arXiv preprint arXiv:2404.16006, 2024. 2 +[87] Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9556-9567, 2024. 2, 3, 4, 7 +[88] Xiang Yue, Tianyu Zheng, Yuansheng Ni, Yubo Wang, Kai Zhang, Shengbang Tong, Yuxuan Sun, Ming Yin, Botao Yu, Ge Zhang, et al. Mmmu-pro: A more robust multidiscipline multimodal understanding benchmark. arXiv preprint arXiv:2409.02813, 2024. 3, 4, 7, 15 +[89] Nikki B Zaidi, Charles Hwang, Sara Scott, Stefanie Stallard, Joel Purkiss, and Michael Hortsch. Climbing bloom's taxonomy pyramid: Lessons from a graduate histology course. Anat. Sci. Educ., 10(5):456-464, 2017. 5, 8 +[90] Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6720-6731, 2019. 3 +[91] Chi Zhang, Feng Gao, Baoxiong Jia, Yixin Zhu, and Song-Chun Zhu. Raven: A dataset for relational and analogical visual reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5317-5327, 2019. 3 +[92] Kai Zhang, Rong Zhou, Eashan Adhikarla, Zhiling Yan, Yixin Liu, Jun Yu, Zhengliang Liu, Xun Chen, Brian D Davison, Hui Ren, et al. 
A generalist vision-language foundation model for diverse biomedical tasks. Nature Medicine, pages 1-13, 2024. 3 +[93] Sheng Zhang, Yanbo Xu, Naoto Usuyama, Hanwen Xu, Jaspreet Bagga, Robert Tinn, Sam Preston, Rajesh Rao, Mu Wei, Naveen Valluri, et al. Biomedclip: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs. arXiv preprint arXiv:2303.00915, 2023. 7, 3, 17 +[94] Yuhui Zhang, Yuchang Su, Yiming Liu, Xiaohan Wang, James Burgess, Elaine Sui, Chenyu Wang, Josiah Akiliu, Alejandro Lozano, Anjiang Wei, et al. Automated generation of challenging multiple-choice questions for vision language model evaluation. arXiv preprint arXiv:2501.03225, 2025. 3 +[95] Haiteng Zhao, Chang Ma, FangZhi Xu, Lingpeng Kong, and Zhi-Hong Deng. Biomaze: Benchmarking and enhancing large language models for biological pathway reasoning. arXiv preprint arXiv:2502.16660, 2025. 3 +[96] Zhiling Zheng, Oufan Zhang, Ha L Nguyen, Nakul Rampal, Ali H Alawadhi, Zichao Rong, Teresa Head-Gordon, Christian Borgs, Jennifer T Chayes, and Omar M Yaghi. Chatgpt research group for optimizing the crystallinity of mofs and cofs. ACS Central Science, 9(11):2161-2170, 2023. 2 + +# MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research + +Supplementary Material + +Appendices A-C include acknowledgments, author contributions, ethics, and discussion of limitations and future work. The remaining sections correspond to sections in the paper. Appendix D is benchmark details from the main Sec. 3. Appendix E is MCQ generation details from the main Sec. 4. Appendix F is the experiments details from the main Sec. 5. + +# Table of Contents + +A Acknowledgments and author contributions 1 +B Ethics 2 +C Limitations and future work 2 +D Benchmark details 3 + +D.1. Accessing MicroVQA benchmark 3 +D.2 Dataset Schema and Structure 3 +D.3 Training contamination mitigation 3 +D.4 Benchmark attribute details 4 +D.5 Related benchmarks 4 +D.6 Bloom's taxonomy 5 +D.7.Methodology for proposing tasks 6 +D.8 Guidance for question creators 7 +D.9 Microscopy image representations 7 + +E.MCQ generation details 7 + +E.1. Question collection process details 7 +E.2. Experiment on naive MCQ generation 7 +E.3. Stage 1 MCQ generation (exam alignment) 8 +E.4. Stage 2 MCQ generation (RefineBot) details 8 +E.5.MCQ generation results: stage 1 exam alignment 13 +E.6.MCQ generation results: stage 2 RefineBot 13 + +F. Experiments details 15 + +F.1. Evaluation prompts 15 +F.2. Model details 16 +F.3. Human baseline on MicroVQA 17 +F.4. More experiments on attribute-level information 17 +F.5. Language shortcut ablations 17 +F.6. Error analysis 20 +F.7. Reasoning trace analysis 31 + +# A. Acknowledgments and author contributions + +Acknowledgements We thank Christian Kindermann for feedback on motivation for benchmarking and reasoning taxonomies; Siddharth Doshi for feedback on motivation for applications in scientific chat systems; and Krisjanis Mazurs for feedback on structuring user interviews. We also thank Assaf Zaritsky, Andrew S Moore, Pedro Guedes Dias, Angus Toland, Helen Healy, and Andrew Kassianos, and Loic Royer for discussions on defining the VQA tasks. + +Funding acknowledgements We gratefully acknowledge funding from HAI Hoffman-Yee (E.L., S.Y-L., J.M.H.B.), Schmidt Futures (E.L.), the Bridge2AI Program (NIH Common Fund; OT2 OD032742; E.L.), Goran Gustafsson Foundation and Knut and Alice Wallenberg Foundation (KAW 2021.0346; E.L.) as well as to MU (HPA). S.C. 
received funding from NIH/NIGMS R35GM133460 and CZI A23-0264-001. J.N.H. was supported by a Postdoctoral Fellowship from EMBO Postdoctoral Fellowship (ALTF 556-2022). + +# Author contributions + +- Project conception: JB, JN, LBS, AL, SYL +- Defining the VQA tasks: JB, JN, JGG, WC, SC, JNH, CL, SYL +- Managing benchmark collection processes: JN, JB +- Creating benchmark questions: DB, ZC, SMH, AJ, WDL, MGN, RY, CZ, JNH, CL +- MCQ conversion pipeline: JB, JN, LBS +- Evaluations: JB, SRG, YS +- Qualitative error analysis: JN, JGG, CZ +- Quantitative error analysis: LBS +- Paper writing and figures: JB, JN, LBS +Supervision: SYL, EL, CL, MDL, CNH, SC, WC + +# B. Ethics + +Ethical use of biomedical data: MicroVQA was developed with a commitment to ethical practices in handling biomedical research data. All microscopy images included in the dataset were either original, unpublished data or sourced from open-access articles (e.g., CC-BY licensed) published after January 2024. The dataset does not include patient-identifiable information, and MicroVQA adheres to applicable privacy and research ethics guidelines. The dataset does not constitute human subjects research. + +Potential societal impacts: Biomedical AI models, including those trained on MicroVQA, have the potential to influence research and healthcare significantly, both positively and negatively. While these models can accelerate scientific discovery, they may also perpetuate biases in training data or result in uneven performance across demographic or biological groups. To mitigate these risks, MicroVQA includes metadata annotations to support the analysis of potential biases and emphasizes diverse data sourcing. We will engage with the research community to address emerging ethical concerns. + +Data licensing and usage: MicroVQA is distributed under the Creative Commons Attribution-ShareAlike 4.0 International (https://creativecommons.org/licenses/by-sa/4.0/) license to promote transparency and collaboration in the research community. This license reflects our efforts to balance open access with the ethical use of data, enabling both academic and commercial applications of the dataset while adhering to original license requirements. Full licensing details are available at https://creativecommons.org/licenses/by-sa/4.0/. + +# C. Limitations and future work + +While MicroVQA advances research-level reasoning benchmark significantly, we can identify a number of limitations and avenues for future work. + +Open vs closed evaluation Like most VQA benchmarks, we used multiple-choice (MCQ) evaluation. However downstream applications will obviously operate in an open setting, without options and requiring more detail. MLLM research does have some open benchmarks [86], though impartial evaluation is a significant challenge. Our early experiments with open prompting suggest that models like GPT-4o tend to give very vague answers, and tend to depend strongly on the text part of the input prompt. + +Dataset scale Our final dataset had 1042 samples, which is smaller than some other VQA benchmarks like MicroBench [50] and OmniMedVQA [34]. MicroVQA's scale sufficient to compare overall scores between models, however statistical power starts to become limiting when trying to compare different models on subsets of the dataset. How does this compare to other benchmarks? As we can see from Tab. 2 and Fig. 3, the larger datasets tend to be those with lower-level reasoning according to Bloom's taxonomy. 
Their samples tend to be derived automatically from metadata tags or classification labels. Benchmarks with comparably high-level reasoning, like MMMU-Pro, have a similar scale to ours (1,700). In the language-only domain, the research-level reasoning benchmark GPQA has 448 samples. This trend reflects a clear, practical limitation: it is very time-consuming for humans to create difficult reasoning questions, and the pool of capable experts is small. + +Breadth of dataset attributes in microscopy We aimed for MicroVQA to cover a broad range of the microscopy discipline; for example, we have samples from all four relevant scales - tissues, cells, subcellular, atomic. However, each human annotator is an expert in a subset of microscopy and biology, so there are practical limits on dataset coverage. We, therefore, chose to focus on the most common imaging modalities - brightfield, fluorescence, electron - while skipping less common modalities like Raman spectroscopy. In terms of sample types, we emphasized those relevant to humans - human and mouse - while having only a few samples from rare organisms like bottlenose dolphin. + +Coverage of scientific reasoning Looking even more broadly, the motivation of this work is to advance reasoning in scientific research generally. However, we use microscopy in particular. This is because of a clear trade-off in breadth vs depth. Our dataset strikes a good balance, given that microscopy represents a large portion of visual data in biomedicine and biology. Future work can reuse our framework to generate reasoning VQA benchmarks in other fields of visual biomedicine and biology, such as in medicine [34] or ecology [93]. + +Resolving MCQ shortcuts A key contribution in this paper is the RefineBot for generating multiple choice questions (MCQs) that are too challenging to solve using text-only shortcuts. Our approach used the outputs of existing MLLMs as signal to make distractors harder. However, this cannot guarantee the removal of shortcuts; more generally, harder distractors may exist. Therefore, the next generation of LLMs may be able to identify and exploit new shortcuts. If that occurs, we can run RefineBot again using the new LLM, creating MicroVQA-2. This strategy could be used for other VQA evaluations. + +# D. Benchmark details + +# D.1. Accessing MicroVQA benchmark + +MicroVQA is an expert-curated benchmark for multimodal scientific reasoning, specifically in microscopy-based biological research. The dataset comprises 1,042 multiple-choice questions (MCQs) created by biological researchers to represent authentic scientific practice. Each MCQ is paired with an associated microscopy image and metadata describing the experimental context. A demonstration subset of 28 instances is available for initial exploration. + +The MicroVQA dataset is publicly accessible on HuggingFace at https://huggingface.co/datasets/jmhb/microvqa. It is distributed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license, allowing reuse and share-alike redistribution with attribution. The dataset is available in Apache Arrow and Parquet formats for efficient data handling. These formats support fast serialization, memory mapping, and streaming from the web, facilitating seamless integration into machine learning pipelines. + +# D.2. Dataset Schema and Structure + +The dataset schema defines the fields included for each data instance. Figure 6 provides a schema summary, while a sample data instance is detailed below 7. 
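As a quick orientation before the schema itself (Figure 6, immediately below), here is a minimal sketch of loading the benchmark with the HuggingFace `datasets` library. The repository id comes from the URL in Appendix D.1; the split name is an assumption rather than something stated here, so check the dataset card for the actual splits.

```python
from datasets import load_dataset

# Repository id from https://huggingface.co/datasets/jmhb/microvqa (Appendix D.1).
# The split name "train" is an assumption; consult the dataset card for the real splits.
ds = load_dataset("jmhb/microvqa", split="train")

# Each record follows the schema summarized in Figure 6; for example, the "image"
# field decodes to a PIL image and "questions" holds the associated question text.
sample = ds[0]
print(sample["image_id"], sample["modality"], sample["label_name"])
```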
+ +```python +{ "image_id": datasets.Value("string"), "image": datasets.Image(decode=True, id=None), "label": datasets.ClassLabel(), "label_name": datasets.Value("string"), "domain": datasets.Value("string"), "subdomain": datasets.Value("string"), "modality": datasets.Value("string"), "submodality": datasets.Value("string"), "stain": datasets.Value("string"), "microns_perpixel": datasets.Value("string"), "questions": datasets.Value("string"), } +``` + +Figure 6. Example data schema. + +# D.3. Training contamination mitigation + +To mitigate contamination, all data in MicroVQA were sourced from original, unpublished microscopy images or open-access articles (e.g., CC-BY-licensed) published after January 2024. Additionally, canary strings were embedded in the dataset to detect potential contamination during fine-tuning or model evaluation. + +![](images/db454ed8bb095d61fcba6351b941e5ce35de9a752d37024f4308c9dae146a736.jpg) +Figure 7. Example data instance. + +# D.4. Benchmark attribute details + +The MicroVQA benchmark is designed to evaluate multimodal reasoning capabilities in biomedical research contexts. The dataset incorporates diverse attributes such as organism, research subject, imaging modalities, and biological length scales (image scales) to ensure a comprehensive assessment1. These metadata not only enhance the scientific relevance but also enable analysis of reasoning across different biological and microscopy contexts. This section describes the methodology used to define, compute, and annotate these key attributes. + +The benchmark comprises 1,042 total questions, including 423 questions containing multiple images combined into panels. A collage of sample images is shown in 25. The average MCQ token lengths were computed using the o200k_base tokenizer, with separate calculations for the raw input and formatted multiple-choice questions. Imaging modalities are broadly categorized into light, fluorescence, and electron microscopy. Biological length scales span four levels: tissue (millimeter scale), cellular (micrometer scale), subcellular (nanometer scale), and atomic (angstrom scale), enabling reasoning across diverse structural hierarchies. + +To annotate attributes such as organism, specimen, and research subject, we manually reviewed random samples from the dataset and created a taxonomy to cover the most likely categories. A biomedical domain expert manually labeled these categories for 50 raw input question-answer pairs and performed initial prompt engineering. We used DSPy to optimize the prompt and few-shot examples on the labeled dataset. The input context included the taxonomy of organisms8 and research subjects 9. + +These annotations were the gold standard to guide DSPy prompt engineering for automated attribute labeling on the full dataset. Prompt instructions included fallback rules (e.g., tagging "None" for indeterminate attributes) to ensure accuracy and minimize hallucination. The optimized pipeline annotated all questions while ensuring the validity of the dataset attributes. + +# D.5. Related benchmarks + +In section 3.3 and Tab. 2 we compare MicroVQA to related scientific multimodal benchmarks, focusing on its intersection of high-difficulty and advanced reasoning questions. Beyond these key attributes, we examine the sources of the questions. Most benchmarks rely on existing question repositories, such as exams, web-based QA datasets, textbooks, or other MCQ datasets. 
This approach bypasses the need for MCQ generation methods, as the questions and options are pre-tested and curated, making these benchmarks more representative of established tasks. Another common strategy involves reformating classification datasets into MCQs using question templates, which increases task diversity but often limits scenario unique- + +![](images/62b824bbb820fb6c64cfa94694ffb4e53ede1cd5e46aaac7ebc2b76b55093535.jpg) +Figure 8. Examples of taxonomy classes used as context to LLM to assign an organism to a question. A YAML file with the full taxonomy will be released with the code. + +ness and language variability. In contrast, MicroVQA is built by directly querying experts for challenging, domain-specific research questions. By starting with long form questions and answers, our approach requires an innovative MCQ generation method, ensuring uniquely tailored questions that push the boundaries of domain-specific reasoning and knowledge assessment. + +# D.6. Bloom's taxonomy + +Bloom's Taxonomy is a model for evaluating cognitive skill levels in educational assessments. It has six hierarchical levels of learning: recall, understand/comprehend, apply, analyze, evaluate, and create. Multiple-choice questions can test five of Bloom's levels but not the sixth level, "create". These levels provide a structured framework for assessing lower-order and higher-order thinking skills, which are essential for designing robust benchmarks like MicroVQA. + +Initial attempts to use LLMs to classify Bloom's levels for biomedical visual question answering (VQA) benchmarks revealed a systematic overestimation of cognitive skill levels. This discrepancy limits the naive use LLMs for automated Bloom's classification, perhaps due to the nuances of applying Bloom's taxonomy in domain-specific contexts. To ensure accurate classification of Bloom's level, a physician-scientist with NBME training in item writing and familiarity with Bloom's classification manually labeled Bloom's level for 200 examples from public MCQ questions, MicroVQA, MicroBench, and ScienceQA. These examples were used to fine-tune GPT-4o-mini (gpt-4o-mini-2024-07-18) to classify Bloom's level and align with expert judgments. + +The fine-tuned Bloom's classifier was applied to MicroVQA and numerous public science/biomedical benchmarks to assess the cognitive skills evaluated by each question. Each question was mapped to one of Bloom's levels, providing insights into the distribution of cognitive tasks across the benchmark. This analysis revealed that while existing benchmarks predominantly tested lower-order skills (e.g., recall and comprehension), MicroVQA has a greater emphasis on questions that require higher-order reasoning (e.g., analyzing, evaluating), filling a critical gap in multimodal scientific reasoning + +![](images/c8a86566ed0b521b3e2535d57c8324cfca644cb74345c31456c9beb0e3e4a4d9.jpg) +Figure 9. Examples of taxonomy classes used as context to LLM to assign a research subject to a question. A YAML file with the full taxonomy will be released with the code. + +assessments. + +# D.7. Methodology for proposing tasks + +Here, we expand on the task proposal methodology introduced in Sec. 3.2. To define specific reasoning tasks aimed at advancing biomedical research, we conducted a comprehensive expert consultation process. 
Through structured interviews with nine Principal Investigators and postdocs specializing in microscopy research, including participants from both academia and industry, we developed a framework for tasks that would meaningfully integrate multimodal large language models (MLLMs) into microscopy research workflows. + +The task selection process followed specific criteria: tasks must utilize image-text inputs and text outputs suitable for MLLMs; they should require higher-order reasoning beyond simple image processing; and they should emphasize core experimental activities (experimentation, analysis, and action) rather than auxiliary tasks like literature review or writing [33]. + +Our methodology involved a two-phase interview process. The first phase consisted of initial one-hour individual sessions with eight research groups featuring open-ended, unstructured discussions about key challenges in microscopy experiments, potential applications of MLLMs, current workflows and unmet needs, and possible AI system integration points. One week later, we then conducted follow-up interviews to consolidate findings and build consensus. + +To avoid anchoring biases, we intentionally withheld predefined categories during initial interviews. This approach enabled unbiased task identification and allowed us to capture a broad spectrum of potential use cases. Through this iterative process, we identified three essential capabilities that align with the scientific life cycle for microscopy experiments: advanced image interpretation, hypothesis generation, and experimental design/hypothesis testing. + +# D.8. Guidance for question creators + +Question creators were given detailed guidance as pdfs, that we share at https://github.com/jmhb0/microvqa under folder benchmark/human.collection. During the submission process, we completed two rounds of quality control to ensure difficulty and alignment to the prompts, as described in Sec. 4.1. + +# D.9. Microscopy image representations + +MLLMs are trained primarily on three-channel RGB images, however microscopy images are more diverse. To ensure alignment with how common MLLMs process images, we required that each image artifact for a single question was represented as no more than six RGB images. Where multiple images were needed, they could be multiple image files, or a single image that is concatenated. For videos, users could submit up to six frames; for 3d, they could submit slices or z-projections. + +An important difference is multichannel images. Many fluorescence microscopy images are multichannel, where the image collection ensures that each channel represents a certain structure. For example channel 0 may be nucleus, and channel 1 may be mitochondria. There are a few approaches. Some questions represent this situation as separate grayscale images for each channel. Others show a 'composite image', which is a single image where one each channel is mapped to one of the RGB channels; e.g. a common visualization will put nucleus in blue and mitochondria in green. Some questions do both: grayscale images for each channel, along with a composite image merging them. + +Finally, we allowed screenshots from research papers or image software, since this is a realistic way that future LLM-users would use AI tools. + +# E. MCQ generation details + +# E.1. Question collection process details + +Data collection used a structured submission form shared with collaborators. 
The form required users to provide the following:

- Image or Image Set: Users uploaded microscopy images, optionally including multiple images for comparative questions.
- Experiment Details: A brief description of the experimental setup, imaging modalities, and any relevant biological context.
- Question Text: Each question was aligned with one of three predefined use cases: identifying interesting features, explaining mechanisms, or proposing next steps.
- Answer and Distractors: Contributors provided a correct answer and biologically plausible incorrect answers.

Quality control was performed in multiple rounds. To familiarize users with the task and data requirements, each user submitted an initial form with 4-6 questions for manual quality control and feedback. Feedback was provided to users on improving question specificity, ensuring the questions required image interpretation rather than relying solely on general knowledge. Contributors were encouraged to refine their submissions based on the feedback to better align with the tasks and goals of the MicroVQA benchmark.

Users were encouraged to submit data from multiple sources, imaging modalities (e.g., fluorescence, electron microscopy), and biological scales (e.g., tissue, cellular, subcellular) to promote diversity. Contributors could upload microscopy images from their experiments or use CC-BY publicly available images. Common image sources included preprints or publications released after January 2024 (to mitigate contamination), or established image repositories [14, 17, 18, 31, 72, 76]. This ensured that all images were either original or sourced from open-license publications, enabling their use in an open-source benchmark.

# E.2. Experiment on naive MCQ generation

In Sec. 4.2, we discuss how the naive approach to MCQ generation – simply zero-shot prompting GPT – leads to MCQs that are very easy for frontier models to solve, which ultimately motivated our two-stage MCQ generation approach. Here, we discuss those experiments in more detail. We created a naive version of MicroVQA using a standard prompt [34, 82]:

```txt
You are an expert in molecular and cell biology, and in microscopy.

I will give you an original biology-related question and its answer, your task is to rephrase an equivalent question with identical answer. The question related to an image, and we don't show the image.

Meanwhile, I want to transfer this QA-pair into a multi-choice question. Please generate 5 incorrect options to construct the candidate options.

{QUESTION}

{ANSWER}
```

Then, we evaluated this naive MicroVQA on the top closed-source models with the image, and to study the over-reliance on language shortcuts, we evaluated a text-only version. In the text-only setting, we add the following to the prompt: "If an image is mentioned, ignore this information and try your best to answer the question". Tab. 5 shows the results of this experiment.

Table 5. Performance on MicroVQA MCQs with naive MCQ generation. We report overall VQA and VQA without the image.
| Model | VQA | VQA-no-image |
| --- | --- | --- |
| GPT-4o | 85.1 | 82.7 |
| Claude-3.5-Sonnet | 91.4 | 88.4 |
| Gemini-1.5-Pro | 88.5 | 82.4 |
All models score over $80\%$ on questions that come from a specialized domain, far above the roughly $17\%$ expected from random guessing over six options. We hypothesize that this reflects a problem with the non-trivial task of MCQ generation from raw VQAs, for several reasons. First, the high 'VQA-no-image' scores across all models suggest that selecting the correct option does not require image understanding, despite the raw VQA samples being designed to incorporate image-based reasoning. An alternative explanation is that the questions themselves may be too easy for advanced models. Second, we tested this alternative explanation by performing open evaluation on a subset of the raw VQA queries; that is, we queried GPT-4o with the questions but without the options. Here, the models rarely gave good responses, even after careful prompting, so the questions are not simply too easy. This instead suggests that models are leveraging test-taking strategies to eliminate distractors [27]. Third, we examined the chain-of-thought responses in the VQA-no-image case, which suggested the models often eliminate distractors based on information from the text question. Therefore, we conclude that the naive generation strategy produces flawed MCQs.

What exactly are the problems with MCQ generation? Qualitatively, the types of errors are the same as those we discuss in Appendix E.6, so we refer readers to that section for a deeper understanding.

# E.3. Stage 1 MCQ generation (exam alignment)

The first stage transformed raw, open-ended questions and long-form answers into well-structured MCQs aligned with established principles of biomedical exam design. A physician-scientist trained in medical board exam item writing initiated the process by reviewing biomedical educational literature, NBME guidelines [64], and Bloom's taxonomy for assessing cognitive skills [5, 19, 89]. The expert manually converted 50 raw question-answer pairs into one-best-answer MCQs with biologically relevant distractors, using the literature review and their prior training. This manual effort ensured that the MCQs preserved the original question's intent and scientific validity while adhering to educational standards. Distractors were crafted to require reasoning and prevent reliance on superficial cues like word association or general knowledge, thus maintaining the depth and complexity of the original content.

To scale this process, the expert designed an initial LLM prompt for automated Stage 1 MCQ generation. This prompt and the manually curated training data served as input for DSPy-based prompt optimization. Using the MIPROv2 optimizer, the prompt was refined within a Chain of Thought (CoT) framework combined with retrieval-augmented generation (RAG) pipelines powered by o1-mini. Optimization was guided by a custom metric evaluated by a separate LLM judge (o1-mini). The judge assessed each revised MCQ across three criteria: (1) semantic similarity to the original question and answer, (2) adherence to NBME formatting standards, and (3) minimization of extraneous information that could provide unintended clues. The optimization metric was weighted to prioritize preserving the original meaning and scientific validity, with lesser emphasis on formatting and extraneous text. Additional implementation details are provided in the source code.

# E.4. Stage 2 MCQ generation (RefineBot) details

The goal of RefineBot is to take MCQs that are too easy for MLLMs - especially where they are leveraging language-only shortcuts - and to rewrite them to make them more challenging. It is the third panel in Fig. 4.
For more details on what types of improvements are achieved, see Appendix E.6. The complete code for RefineBot is released.

A single RefineBot iteration Below, we show the pseudo-code for a single-question revision. The input is a question string, a list of options, and the index of the correct answer in the options. It returns a tuple (code, MCQ), where code is a string indicating whether the run was successful, and MCQ is either the revised multiple-choice question if the rewrite was successful, or otherwise None.

```python
def revise_mcq(question: str, choices: list, correct_idx: int, max_iters: int = 5):
    # Store the original version so rewrites can be checked against it
    original = {'question': question, 'choices': choices, 'correct_idx': correct_idx}
    reflections = []
    for iteration in range(max_iters):
        # Try answering without the image (LLM call)
        result = evaluate_without_image(question, choices, correct_idx)
        if not result.can_answer:
            # The question is now hard enough; report whether a rewrite was needed
            code = 'SUCCEED_NO_CHANGES' if iteration == 0 else 'SUCCEED_REWRITE'
            return code, (question, choices, correct_idx)
        if iteration == max_iters - 1:
            return 'FAIL_ITERATIONS', None
        # Reflect on the language-only strategies used to solve the question (LLM call)
        reflection = analyze_why_answerable(result)
        reflections.append(reflection)
        # Rewrite based on the accumulated reflections from all iterations (LLM call)
        new_version = rewrite_question(reflections, original)
        # Verify the rewrite preserved the meaning of the question and answer (LLM call)
        if not check_meaning_preserved(original, new_version):
            return 'FAIL_REWRITE', None
        # Update the current version and try again
        question = new_version['question']
        choices = new_version['choices']
        correct_idx = new_version['correct_idx']
    return 'FAIL_UNKNOWN', None
```

This process is also described in the main text. One detail to note: the rewrite_question function receives the reflections from all prior iterations, not just the current one. In this way, the rewriting function can consider all possible strategies that have been identified. All LLM calls are to gpt-4o-2024-08-06.
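For orientation, a hedged usage sketch of the function above is shown here; it is not part of the released RefineBot code, the helper functions are LLM-backed and not shown, and the question and options are those of the worked example that follows.

```python
# Hedged usage sketch, not taken from the released code. The MCQ below is the one
# stepped through in the worked example that follows; indices are 0-based, so the
# correct option, (b) ice contamination, sits at index 1.
question = (
    'In an electron microscopy image of a mouse primary embryonic cortical neuron expressing '
    'full-length human mHTT (BACHD), "eye-ball" shaped features are observed in the top left '
    'corner that appear anomalous. What is the most likely cause of these features?'
)
choices = [
    'Aggregation of human mHTT protein within the neuron.',
    'Ice contamination from liquid nitrogen during sample preparation.',
    'Formation of intracellular vesicles during neuron development.',
    'Artifacts arising from excessive heavy metal staining.',
]

code, mcq = revise_mcq(question, choices, correct_idx=1)
if code.startswith('SUCCEED'):
    question, choices, correct_idx = mcq
# On a 'FAIL_*' code, the paper reruns RefineBot with a different random seed,
# or falls back to the stage 1 MCQ.
```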
To better illustrate the method, we step through a single example. This way, we can show the prompts and understand responses.

1. The evaluation of a question with the evaluate_without_image prompt and response:

```txt
Prompt
The following question is supposed to be paired with an image. We will not provide the image, so answer to the best of your ability.
In an electron microscopy image of a mouse primary embryonic cortical neuron expressing full-length human mHTT (BACHD), "eye-ball" shaped features are observed in the top left corner that appear anomalous. What is the most likely cause of these features? Think step by step and then output the answer in the format of "The answer is (X)" at the end..
(a) Aggregation of human mHTT protein within the neuron.
(b) Ice contamination from liquid nitrogen during sample preparation.
(c) Formation of intracellular vesicles during neuron development.
(d) Artifacts arising from excessive heavy metal staining.

Response (target answer is b)
To determine the cause of the "eye-ball" shaped features observed in the electron microscopy image, we can analyze each option step by step:
(a) Aggregation of human mHTT protein within the neuron: Aggregations of proteins can indeed appear as anomalous structures under electron microscopy. However, these typically have irregular shapes and may not consistently resemble "eye-ball" shapes.
(b) Ice contamination from liquid nitrogen during sample preparation: Ice contamination can occur during sample preparation, particularly if samples are not handled properly when flash-frozen. This can lead to anomalous features appearing in microscopy images. These features might appear as circular or irregular shapes, potentially looking like "eye-balls."
(c) Formation of intracellular vesicles during neuron development: Intracellular vesicles are normal structures within neurons and other cells. Although they can appear during development, they are typically well characterized and documented, and are less likely to be described as anomalous in the context given.
(d) Artifacts arising from excessive heavy metal staining: Over staining can indeed produce artifacts in electron microscopy images. While staining artifacts are plausible, they might not specifically result in "eye-ball" shapes unless the staining process consistently targets specific round structures.
Considering the nature of the descriptors ("eye-ball" shaped, anomalous), option (b) ice contamination is often a source of unexpected features in microscopy images that could fit the appearance described, as ice can form circular shapes.
Therefore, the most likely cause of these features is (b) Ice contamination from liquid nitrogen during sample preparation.
The answer is (b).
```

2. If the answer is incorrect, then the question is already hard enough, and we can exit. Otherwise, we call the analyze_why_answerable function, which reflects on how this answer was achieved. It continues in the same conversation:

User

That is correct.

Explain how you were able to answer the question without access to the image - only the question_stem and choices.

What strategies did you use?

Then, briefly summarize these "language-only strategies" into a short list.

Finally, say whether your reasoning was closer to:

- 'deduction': you answered with high confidence because you had enough information.

- 'abduction': you answered with lower confidence you made an informed guess.

Be concise in your final response.

Assistant

To answer the question without the image, I relied entirely on analyzing the given information and the plausible characteristics of each option. Here's a breakdown of my strategy:

1. **Keyword and Context Analysis**: I focused on key descriptors such as "eye-ball" shaped features and their being "anomalous" in an electron microscopy image of neurons expressing human mHTT. This guided me to consider characteristics commonly seen under electron microscopy.
2. **Knowledge of the Subject Matter**: I applied general knowledge about electron microscopy artifacts and protein behavior. This helped me weigh the likelihood of each option based on what typically causes specific image features.
3. **Process of Elimination**: By understanding the nature of each option-including potential artifacts or structures (such as vesicles or ice) in microscopy-I systematically assessed the validity of each scenario in relation to the described features.
4. **Logical Inference**: I inferred which scenario most commonly results in anomalies resembling "eye-ball" features, favoring the simplest explanation.

Summary of "language only strategies":

- Analyze keywords and context
- Apply relevant subject matter knowledge
- Use process of elimination
- Make logical inferences

The reasoning was closer to 'abduction' as I made an informed guess based on patterns and typical causes, given that I lacked direct visual confirmation.

3. Next, we call rewrite_question.
Here, the templated variable {{conversations}} holds the full conversations produced by the analyze_why_answerable function in the previous call and in all other iterations, and {{n chatting}} indicates how many conversations are included. The prompt is:

```txt
Below, I will display {{n chatting}} chat conversations between a 'user' and an LLM 'assistant'.
In each conversation
- a user asks the assistant to answer a multichoice VQA question, however they do not provide the image. They only get the question_stem and choices.
- the question_stem and correct answer are similar to other conversations, however the distractors are different
- the assistant then answers correctly.
- the user then asks the assistant to explain how it answered the question with only the text.
- the assistant then summarizes what strategy they used to answer the question.
Altogether, these conversations give examples of language shortcuts that need to be avoided when constructing the question_stem and choices.
Here are the conversations:
{{conversations}}
Your task is to rewrite the question_stem and choices so that a different LLM 'assistant' cannot use the language-only strategies that were identified in these past conversations.
Include an 'explanation' about why your new set of distractors are better.
Your revised choices should include the correct answer at the 'correct_index'.
Your revised question_stem and choices should not significantly change the meaning of the question and correct answer.
You are free to change the distractors a lot to achieve this task.
Include {{nchoices}} choices.
The choices should not be made too different in length and detail to the original question stem and choices.
Note that the question stem may contain important cues that cannot be removed. E.g. if a question asks about the "green stain" in an image, you cannot change it to "stain" because it introduces ambiguity.
The choices should not include the letters in front like (a), (b), ...
Return a json
```

The output of this function is a JSON:

```json
{
  "question_stem": "In an electron microscopy image of a neuron expressing human mHTT (BACHD) protein, distinct, spherical features are prominent and seem to be atypical. What is the most probable explanation for these features?",
  "choices": [
    "Concentration of cellular waste products leading to distinct morphological anomalies",
    "Formation of air bubbles introduced during sample immobilization processes",
    "Ice contamination from liquid nitrogen during sample preparation",
    "Inclusion bodies formed by aggregated proteins expressing during stress response",
    "Cellular morphological changes during mitotic division"
  ],
  "correct_index": 2,
  "explanation": "The revised options introduce plausible scientific phenomena that could also result in spherical or distinct anomalies. This set of distractors is more varied and grounded in legitimate alternative explanations without leading the responder towards the correct answer through language patterns. For example, air bubbles and protein inclusion bodies might mimic sphere-like structures. Additionally, by adding an unrelated phenomenon like cellular changes during division, the set confuses the language pattern strategy previously used. Thus, the correct choice of \"Ice contamination from liquid nitrogen\" stands out less and necessitates either visual confirmation or real understanding of sample preparation outcomes to identify correctly."
}
```

Notice that the response includes an explanation of why it made the changes, and the changes relate to what was in the reflection, adding interpretability.

4. Finally, we check that the rewrite has not changed the semantic meaning of the question and answer.

```txt
Prompt
************************************************************
Below are two question-answer pairs.
The question-answer pairs are part of VQA triplets, and both pairs use the same image.
Are these question-answer pairs semantically equivalent?
Or are they significantly different?
Give a true/false and also an explanation.
QUESTION 1:
In an electron microscopy image of a mouse primary embryonic cortical neuron expressing full-length human mHTT (BACHD), "eye-ball" shaped features are observed in the top left corner that appear anomalous. What is the most likely cause of these features?
ANSWER 1:
Ice contamination from liquid nitrogen during sample preparation
QUESTION 2:
In an electron microscopy image of a neuron expressing human mHTT (BACHD) protein, distinct, spherical features are prominent and seem to be atypical. What is the most probable explanation for these features?
ANSWER 2:
Ice contamination from liquid nitrogen during sample preparation
************************************************************
```

5. If the meaning check step fails, then we exit with a fail, as we notice that once the meaning deviates it becomes more distorted with further iterations. Otherwise, we update the MCQ and start the loop again, where the first function call in the new loop will check whether this revised MCQ is now too hard or needs to be rewritten.

Calibrating the preservation of meaning Since there is a risk that the rewriter will change the semantic meaning of the question and answer, it is crucial that check_meaning_preserved is effective. For topics requiring domain expertise, this requires prompt engineering with a domain expert providing feedback. This is one of the key human bottlenecks in tuning the prompts in this system.

Other sources of MCQ invalidation While the check_meaning_preserved function checks that the rewritten question and answer remain faithful to the original, an additional risk is that the rewrite introduces distractors that are more correct than the target answer. In our manual review evaluations (we reviewed all 1,042 MCQs), this was very rare. However, if this were a significant problem in future systems, it may be addressable by tuning the rewriter bot, with an expert human evaluator in the prompt engineering loop.

Extending RefineBot beyond language shortcuts We constructed RefineBot specifically to address the issue of language shortcuts – where the question is answerable without using the image. This is appropriate because we work with vision-centric VQA, meaning the questions should require image interpretation. However, the framework should work in other settings – namely language-only QA. This would require adjusting the prompts slightly. For example, the current reflection prompt asks how the question was successfully answered without using the image; also, the current rewriter prompt directs the LLM to address language shortcuts in its rewrite.
Therefore, adapting Refinebot to language-only MCQ should make these prompts more general. + +On the other hand, the method could also be updated for the more general VQA setting – to improve questions that do not have language shortcuts, but are easy for some other reason. Here, the evaluator and reflector agents, which are currently LLMs, could be switched for an MLLM, which also ingests the image (though this would be expensive for current state-of-the-art models). We hypothesize that current MLLMs are so weak at image perception, that this is not necessary for MicroVQA – that simply removing language shortcuts will suffice for creating a challenging benchmark. As MLLMs improve at perceiving microscopy images, this general VQA RefineBot could become more useful for ensuring that the benchmark remains challenging. + +Repeated RefineBot iterations A single run of RefineBot may exit as a FAIL, either because the rewrite changed the meaning of the question, or because the maximum number of iterations was reached. However, RefineBot can give very different outputs when changing the random seed, and we find that simply running again with a different random seed can lead to a successful run. Therefore, if the bot fails, we rerun it. + +If an MCQ exited RefineBot successfully, then the evaluator LLM was fooled by the question (got the answer incorrect). Yet, the question could still be suboptimal in detectable ways: the LLM could be capable of narrowing the MCQ down to two options, but have guessed the wrong option; moreover, this question could be easier for different models. To ensure it is difficult enough overall, we require that the MCQ is answered incorrectly for two random seeds from two different models: gpt-4o-2024-08-06 and claude-3-5-sonnet-20241022. If not, then it fails, and we run the RefineBot again with a different random seed. + +We discuss in the experiments Sec. 5.1 how this process leads to a small evaluation bias against the models used as evaluators. That section shows that despite the bias, this process makes the MCQs generally harder for all models. Future versions of RefineBot could mitigate this by including a larger family of evaluator models. + +Finally, some MCQs fail to pass the RefineBot check. In these cases, we simply keep the stage 1 MCQ. + +# E.5. MCQ generation results: stage 1 exam alignment + +The Stage 1 pipeline produced one-best-answer MCQs that preserved the content, testing focus, and validity of the expert-generated question-answer pairs. It generated an initial set of distractors following best practices outlined in educational and NBME guidelines. A key design goal of Stage 1 was to preserve the original question content without altering or introducing new information. Secondary objectives included ensuring proper formatting according to NBME guidelines and minimizing information that might provide clues to the correct answer without reference to the image. + +While the pipeline ensured alignment with these goals, preserving the original content occasionally resulted in MCQs containing extra information, such as image descriptions or specific biological terms (i.e., gene or protein name), that could inadvertently help narrow the options. For example, stage 1 question stems sometimes included protein names, disease descriptions, or descriptions of image features, which could reduce the challenge. Figure 11 illustrates Stage 1 outputs that are appropriately formatted but may preserve details from the original input, making questions less challenging. 
While the distractors were biologically plausible, some options were overly easy to eliminate, reducing the challenge. These limitations are addressed and refined in Stage 2. + +The o1-mini-2024-09-12 LLM judge, employed during DSPy optimization, compared the Stage 1 MCQs to the original inputs to evaluate (1) semantic similarity and (2) adherence to MCQ formatting. The results showed that $96\%$ of questions preserved the original meaning, and $100\%$ adhered to NBME formatting guidelines. Manual analysis of the $4\%$ labeled as at least partially dissimilar revealed that these cases typically involved open-ended original questions being reformulated into closed-ended MCQs. This transformation, a necessary step for creating one-best-answer MCQs, was not a failure but rather an expected outcome of Stage 1. In rare instances where the original submission included multiple correct answers, the output of Stage 1 retained one correct answer and excluded others to comply with the one-best-answer format. The LLM judge flagged these cases as partially dissimilar but these changes were essential for adhering to the MCQ structure. + +A common issue in MCQ design is that correct answers tend to be longer and more detailed than distractors. To assess this, we computed the ratio of tokens in incorrect options to those in the correct answer, where values $\leq 1$ indicate longer correct answers, and values $\geq 1$ suggest shorter correct answers. Stage 1 MCQs showed a ratio of 0.87, indicating that correct answers were only slightly longer than distractors. By comparison, naive MCQ generation yielded a ratio of 0.46, meaning correct answers were nearly twice as long as distractors, making them easier to identify. Additionally, naive MCQ generation preserved the original meaning in only $89\%$ of cases and met NBME formatting guidelines in just $70\%$ , highlighting the efficacy of the Stage 1 process at preserving information and ensuring proper MCQ formatting. + +# E.6. MCQ generation results: stage 2 RefineBot + +The second stage of our MCQ generation method focuses on increasing the difficulty of the questions and options. The key idea is that we want to remove language shortcuts – information in the MCQ that allows answering the question without access to the image. To aid in understanding, we construct a toy example in Fig. 10 with the three types of language shortcuts that we identify. The questions should be paired with a stain that is mitochondria, and we want the VQA to require actually inferring that from the image. In shortcut 1, the text ‘gives away’ the image content so it’s trivial to answer the question. In shortcut 2: The LLM generates implausible or weak distractors. In shortcut 3: ‘Language bias’, is a known problem in VQA [29]. + +Moving to real MCQs, in Fig. 4 we illustrate the transformation of a question at each stage, highlighting issues and corresponding improvements. While stage 1 produces exam-style questions that preserve the QA objective, the wording of the questions and distractors often enable MLLMs to exploit shortcuts rather than demonstrate knowledge. To address this, stage 2 (RefineBot) employs a chain-of-thought process. We analyze additional examples of improvements achieved in stage 2, showing how RefineBot improves both the style and content. The key strategies correspond to the three shortcut types that we identified: + +1. (Visual giveaway) Over-description of the image eliminates the need for image interpretation. For example, in Fig. 
11 (middle row), stage 1 includes details that reveal the specifics of the image, allowing the model to propose common issues without analyzing the image. Stage 2 replaces these specifics with general terms in the question stem.

# Language shortcut 1: Visual giveaway

Q: The green TOMM20 outer mitochondrial membrane protein localizes where?

A) Mitochondria
B) Nucleus
C) ER
D) Golgi

Why? The 'M' in "TOMM20" stands for 'mitochondria', so that's enough to answer the question.

# Language shortcut 2: Weak distractors

Q: What is the localization of the green stain?

A) Mitochondria
B) Butane
C) Quokka
D) Coatf

Why? The LLM generates distractors that are not plausible and can easily be eliminated.

# Language shortcut 3: Language bias

Q: For a study on Parkinson's disease, we stain a target green. Where does it localize?

A) Mitochondria
B) Nucleus
C) ER
D) Golgi

Why? 'Mitochondria' is the most plausible answer because the question mentions Parkinson's disease, in which mitochondrial dysfunction is frequently studied.

Figure 10. Three types of language shortcut relevant to MicroVQA. The target VQA has a fluorescence microscopy image stained for TOMM20, which would show a pattern consistent with visualizing mitochondria.

2. (Weak distractors) Distractors unrelated to the specifics of the scenario described by the question. In Fig. 11 (bottom row), stage 2 increases distractor complexity by incorporating elements tied to the question's context, making them less predictable.
3. (Language bias) Overly precise wording in the question stem: Precise phrasing can make it easy to identify the correct answer through pattern matching. For example, in Fig. 11 (top row), changing 'eye-ball shaped' to 'spherical' retains the question's intent while creating more challenging distractors referencing plausible alternative processes.

![](images/be6415061000edb79f7e653656032f75e763b7c4fe5f97d82e4c52b46e24c788.jpg)
Figure 11. Examples of changes to questions and options between stage 1 and stage 2 (RefineBot) of our MCQ generation method. Elements that need to be improved are shown in red, and improvements in green.

# F. Experiments details

# F.1. Evaluation prompts

All prompts except for LLaVA and LLaVA-Med used the chain-of-thought prompt template from the MMMU-Pro code [88]:

```txt
The following is a multiple choice question (with answers). Think step by step and then output the answer in the format of "The answer is (X)" at the end.
{{QUESTION}}
Options:
{{CHOICES}}
```

An example complete question is:

The following is a multiple choice question (with answers).

Think step by step and then output the answer in the format of "The answer is (X)" at the end.

A fluorescence microscopy image displays cultured mouse primary neurons stained with a specific marker localized to mitochondria. The fluorescence signal exhibits an uneven distribution across the neuronal cell, and there is an absence of signal within the nucleus. What is the most probable cause for the uneven distribution of the mitochondrial signal?

Options:

(1): Mitochondrial transport along axonal and dendritic processes ensures uneven distribution based on cellular energy demands.
(2): Differential expression of mitochondrial proteins leads to variable fluorescence signal across the neuron.
(3): Mitochondrial biogenesis occurs at specific neuronal regions, causing localized fluorescence accumulation.
(4): Mitochondrial anchoring to the cytoskeleton restricts their movement, resulting in uneven distribution.
(5): Mitochondrial dynamics, including fission and fusion, allow mitochondria to move and distribute throughout the neuron.

The answer is then extracted with the regex pattern `answer is \(?([0-9])\)?`.

For LLaVA-Med, we added an extra parsing strategy. This was because we observed that while LLaVA-Med often gave reasonable responses to questions, it failed to follow instructions well. First, it did not follow the CoT prompt consistently to think step by step. It also didn't use the right answer formatting, "The answer is (X)". Since we were more interested in biomedical reasoning than adherence to VQA formatting, we decided to handle this model differently.

Specifically, to encourage CoT reasoning, we fixed the prefix of the response to:

Response: Let's think through this step by step and evaluate each option.

Then we added a suffix to the end:

After carefully considering all the options, the most likely answer is (1).

We then continued generation and did the regular regex parsing to get the final answer.

We also found that a few models make minor but consistent errors when formatting the answer, and we elected to fix them. For example, x-ai/grok-2-vision-1212 commonly adds two asterisks around the answer, writing "The answer is **(4)**" instead of "The answer is (4)" (GPT-4o also did this sometimes). Another common error that we saw but did not handle (because it would be difficult to do rigorously) was shown by meta-llama/llama-3.2-11b-vision-instruct: the response would write one of the MCQ options but not state its number.

# F.2. Model details

Our main benchmarking experiments in Tab. 3 refer to models that have multiple versions. The exact model versions used for inference are shown in Tab. 6.
| Model Name | API Endpoint | Source & Details |
| --- | --- | --- |
| o1 | o1-2024-12-17 | OpenAI API |
| Claude-3.5-Sonnet* | anthropic/claude-3.5-sonnet-20240620 | Openrouter API |
| Gemini-Pro-1.5 | google/gemini-pro-1.5 | Openrouter API |
| Pixtral-Large | mistralai/pixtral-large-2411 | Openrouter API |
| Grok-2-Vision | x-ai/grok-2-vision-1212 | Openrouter API |
| Qwen-2-vl-72b-Instruct | qwen/qwen-2-vl-72b-instruct | Openrouter API |
| VILA1.5-40b | VILA1.5-40b | HuggingFace, local inference |
| GPT-4o | gpt-4o-2024-08-06 | OpenAI API |
| Llama-3.1-Nemotron-70b-Instruct | nvidia/llama-3.1-nemotron-70b-instruct | Openrouter API |
| Llama-3.2-90b-Vision-Instruct | meta-llama/llama-3.2-90b-vision-instruct | Openrouter API |
| Qwen-2-VL-7b | qwen/qwen-2-vl-7b-instruct | Openrouter API |
| Claude-3.5-Haiku | anthropic/claude-3.5-haiku | Openrouter API |
| Gemini-Flash-1.5-8b | google/gemini-flash-1.5-8b | Openrouter API |
| GPT-4o-mini | gpt-4o-mini-2024-07-18 | OpenAI API |
| Pixtral-12b | mistralai/pixtral-12b | Openrouter API |
| VILA1.5-13b | VILA1.5-13b | HuggingFace, local inference |
| Llama-3.2-11b-vision-instruct | meta-llama/llama-3.2-11b-vision-instruct | Openrouter API |
| LLaVA-Med-Mistral-7B | LLaVA-Med-Mistral-7B | HuggingFace, local inference |
| LLaVA-Mistral-7B | LLaVA-Mistral-7B | HuggingFace, local inference |
+ +Table 6. AI model API endpoints and their sources + +For LLaVA and LLaVA-med, although the version number 1.6 is different from LLaVA-Med with 1.5, we do believe that LLaVA-Med was either trained from this LLaVA version, or from one that is very similar. They have identical architecture, and the version is very close; it is possible the LLaVA-Med copied the number incorrectly. The LLaVA-Med codebase or github issues does not clarify this at the time of writing. + +Note that for OpenRouter function calls to Anthropic, the date is not included in the string. We believe they use timestamps for older model versions, but not for the latest version. So reproducing it may require checking if the model has been updated. + +# F.3. Human baseline on MicroVQA + +To perform a human baseline, we recruited co-authors who created questions. The conditions were: + +- Questions were provided in a google form. +- Participants did not view any question that they created. +- Internet access was allowed, but LLM access was not. This is because realistic biological practice is knowledge-intensive, and may require looking up facts or references. +- Time limit was 3 minutes per question. They were free to take as many breaks as they wanted. +- They answered the multiple choice question without further written justification. + +These conditions – 3 minutes and internet access – give sufficient time for an expert to make a reasonable attempt. Since the score was far below $100\%$ , we claim that these questions could be of value to real experts as an LLM chat assistant. Each participant only took a subset of the questions (about 100 each). The final score is the aggregate. + +# F.4. More experiments on attribute-level information + +![](images/ebc6e3c6cd413d919dbcc1c3973a6968c02e37ed4f14c33c578aea3801c93a6c.jpg) +Figure 12. Performance by image modality type for the best models: Gemini-1.5-Pro (closed source), VILA1.5-40B (open-source), and LlaVA-Med-Mistral-7B (medical). + +We described our metadata features in Sec. 3, and we can now leverage them to benchmark models on different attributes. Fig. 5 (left) shows all sub-tasks, and shows that 'compare image sets', which is common in biology to compare 'treatment' and 'control' groups. All models perform relatively well, despite multi-image reasoning being understudied in MLLMs [78]. Fig. 5 (right) shows that for Gemini and LLaVa-Med, higher Bloom's levels - corresponding to more advanced reasoning - leads to worse performance. This supports the intuitive result that higher-level reasoning is more challenging - although VILA does not follow this trend. A final result in Fig. 12 shows that all models performed worse on cryo-electron microscopy images than brightfield or fluorescence microscopy. This may be due to cryo-EM's challenging features (low contrast, low signal-to-noise ratio) and their under-representation in pretraining datasets [93]. + +# F.5. Language shortcut ablations + +In Tab. 4 we do language shortcut ablations for some top-performing models and some lower-performing models - both no-image and choices-only ablations. + +Table 7. Ablation study on MicroVQA MCQs. Column 2 is overall accuracy. Cols 3-4 are the no-image ablation accuracy and drop compared to overall accuracy. Cols 5-6 are the choices-only ablation accuracy and drop compared to overall accuracy. See the text for discussion. + +
| Model | Accuracy | No-image accuracy | No-image drop | Choices-only accuracy | Choices-only drop |
| --- | --- | --- | --- | --- | --- |
| o1 | 52.8 | 49.2 | -3.6 | 37.7 | -15.1 |
| Claude-3.5-Sonnet | 51.7 | 46.0 | -5.8 | 44.0 | -7.7 |
| Gemini-Pro-1.5 | 51.1 | 47.2 | -3.8 | 36.8 | -14.3 |
| Pixtral-Large | 49.8 | 46.3 | -3.6 | 36.7 | -13.1 |
| Grok-2-Vision | 48.4 | 46.3 | -2.1 | 40.5 | -7.9 |
| GPT-4o-mini | 46.2 | 44.2 | -1.9 | 34.3 | -11.9 |
| Pixtral-12b | 45.6 | 43.7 | -1.9 | 31.8 | -13.8 |
| Random | 22.0 | — | — | — | — |
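For concreteness, the snippet below sketches how the prompts for the standard, no-image, and choices-only settings in Table 7 can be assembled from the chain-of-thought template in Appendix F.1. The appended ablation sentences are exactly the ones quoted in the text that follows; the helper name, argument names, and choice numbering are illustrative assumptions rather than the released evaluation code.

```python
# Illustrative sketch (not the released evaluation code) of how the prompts for
# the standard, no-image, and choices-only settings in Table 7 can be built.

BASE_TEMPLATE = (
    "The following is a multiple choice question (with answers). Think step by step "
    'and then output the answer in the format of "The answer is (X)" at the end.\n'
    "{question}\n"
    "Options:\n"
    "{choices}"
)

# Sentences appended for the two ablation settings (quoted in the text below).
NO_IMAGE_SENTENCE = (
    "If an image is mentioned ignore this information and try your best to answer the question."
)
CHOICES_ONLY_SENTENCE = (
    "However I will not give you the question text or the images, I will only give you "
    "the choices, so please try your best to answer the question."
)


def build_prompt(question: str, choices: list[str], ablation: str | None = None) -> str:
    """Assemble the text prompt for one MCQ under the chosen ablation setting."""
    numbered = "\n".join(f"({i}): {choice}" for i, choice in enumerate(choices, start=1))
    if ablation is None:
        # Standard setting: full text prompt; the image is passed to the MLLM separately.
        return BASE_TEMPLATE.format(question=question, choices=numbered)
    if ablation == "no_image":
        # Same text prompt, image withheld, plus the extra instruction sentence.
        return BASE_TEMPLATE.format(question=question, choices=numbered) + "\n" + NO_IMAGE_SENTENCE
    if ablation == "choices_only":
        # Neither the image nor the question stem is shown; only the choices remain.
        return BASE_TEMPLATE.format(question="", choices=numbered) + "\n" + CHOICES_ONLY_SENTENCE
    raise ValueError(f"Unknown ablation setting: {ablation!r}")
```

Calling `build_prompt(question, choices, ablation="choices_only")`, for example, would produce the prompt corresponding to the last two columns of Table 7.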
+ +The no-image ablation is a common test in VQA evaluations[29, 74]. The MLLM only receives the text prompt, and we add the following sentence to the prompt template: + +If an image is mentioned ignore this information and try your best to answer the question. + +The choices-only ablation has been explored in (non-visual) question answering [7]. The model does not receive the image or the question and must guess an answer from only the choices. We add this sentence to the prompt template: + +However I will not give you the question text or the images, I will only give you the choices, so please try your best to answer the question. + +Both ablations lower the scores, but they are still above random. While this is initially surprising, it is actually consistent with other benchmarks. A key challenge in VQA benchmarks is for questions to be "vision-centric": that the question requires using the image features to answer. We still have very strong evidence that MicroVQA is vision-centric, which we discuss below. + +First, let's recall the language shortcuts by considering a VQA sample with an image of a green field, with the question "what color is the grass". + +- 'Language bias' is well-known for enabling models to make educated guesses about MCQs without access to the image [29]. For the question "what color is the grass?", then without the image, "green" is a likely guess, although there are other possible answers, like "brown". +- 'Visual giveaway' means there are too many image details in the question, for example if the question were "what color is the green grass?" This removes any need to look at the image. +- 'Weak distractors' gives other question choices that are easily eliminated. For example, for question "what color is the grass?", a bad distractors would be "pink". This also removes any need to look at the image. + +(In Appendix E.6, we also discussed language shortcuts with biology examples as it related to MCQ construction.) Now we argue that MicroVQA is still vision-centric despite the ablation results. + +1. Language bias allows cheating on vision-centric questions If answerable due to a 'language bias', the question may still require perception. A model may guess that the grass is green, but it is only a guess - the grass really could be brown [29]. Therefore correct answers under the 'no-image' ablation may still be vision-centric. On the other hand problems with 'visual giveaway' and 'weak distractor' shortcuts really do render the question not-vision-centric. The RefineBot method we employed in MCQ generation was designed to target language shortcut removal. Anecdotally, our qualitative analysis (from Appendix E.6) found that RefineBot tends to focus on fixing the visual giveaway and weak distractor shortcuts. Ideally we would perform a quantitative analysis to identify which language shortcuts are due to language bias versus the others, however analyzing these questions is very time-consuming. +2. The unsolved questions are hard because of perception. The 'no-image' ablation only tells us about the 'easy' questions, but half the question is too hard for current models. 
Our qualitative analysis of 30 failure cases demonstrates that half of them made critical perception errors, which strongly suggests they require image understanding + +In contrast to the 'vision-centric' claims from the Cambrian paper [74]: They show that many VQA benchmarks have no-image ablation scores only slightly underperforming the with-image scores, and they conclude that such VQA benchmarks "do not properly measure vision-centric capabilities". But that conclusion cannot be drawn from only the no-image ablation + +(following our point 2 above). For benchmarks where MLLM's score far from $100\%$ , the failures could mostly be due to poor image understanding. This is very plausible since visual capabilities in MLLMs are viewed as lagging behind language capabilities. Likewise, our experience with our microscopy data was that visual understanding was poor, but that language knowledge was impressive. + +In point 1 above, we argued that MLLMs can use language biases to pass the no-image ablation, even if they are vision-centric questions. This is actually in agreement with the claim from Cambrian that those questions don't test vision-centric capabilities. + +Finally, some non-vision centric questions are acceptable. The goal of MicroVQA is to source realistic problems from microscopy research. While this area does lean heavily on image analysis, not every question will realistically require perception, so this is acceptable. + +# F.6. Error analysis + +Here we show samples from the manual qualitative error analysis (details in the main paper). To see all 30 examples, refer the github repo. + +# F.6.1 Perception errors + +![](images/3746b9ae2847c78b38a88ee79e517e75b632e6cdb134845904911e91617e5f4b.jpg) +Subject: Cell and molecular biology +Bloom's level: 4 (Analysis) +Specimen: mammalian cells +Figure 13 + +Q: A cryo-electron tomography (cryoET) slice of mammalian cells was acquired at a resolution of 1.3 Å and binned by 4 (resulting in a 5.3 Å per pixel). Within the tomogram, dark circular like structures are observed in an organelle. What is the modest likely identity of these structures? + +A) Ribosomes X + +B) Phase-separated condensates such as stress granules +C) Lysosomes +D) Peroxisomes + +VLM Answer: Let me think about this step by step: + +1) First, let's look at what we see in the image: +- A cryo-electron tomography slice showing cellular structures +- Circular-like dark structures visible +- The resolution is quite high (5.3 Å after binning) +2) Let's consider the size and appearance of these structures: +- They appear as dense, small, roughly circular objects +- The size is consistent with ribosomes (approximately $25 - 30\mathrm{nm}$ ) +- They show the characteristic electron density pattern of ribosomes +3) Let's analyze each option: +- Ribosomes: Match the size, shape, and density pattern +- Phase-separated condensates: Usually less defined, more amorphous +- Lysosomes: Much larger, membrane-bound organelles +- Peroxisomes: Also larger, membrane-bound organelles +4) Additional considerations: +- Ribosomes are often found associated with mitochondria +- The resolution is sufficient to resolve ribosomal structures +- The contrast and density are typical of ribosome imaging in cryoET + +The answer is (1) + +# Error type: Perception + +Error reason: These are granules characterized by + +heterogeneous, electron-dense aggregates. Ribosomes would be smaller, more regular in shape, and not have as high contrast. 
+ +![](images/a55baf6d76f11541649358ddec8e664a4e40a1e350239dd7102746bd0dbe4c24.jpg) +Subject: Anatomy +Bloom's level: 4 (Analysis) +Specimen: gastrointestinal lesion biopsy +Figure 14 + +Q: A hematoxylin and eosin (H&E)-stained micrograph shows cellular formations within a gastrointestinal tissue sample with unique organization and cellular patterns. What pattern and morphology do these cells most closely align with? + +# A) Bundles of elongated cells with pale pink cytoplasm and uniform oval nuclei in a crisscross pattern + +B) Circular groups of cells with mucin in central spaces and small, dark nuclei +C) Dense arrays of polygonal cells with small nucleoli and vacuolated cytoplasm +D) Rows of cells with conspicuous cytoplasm and rounded nuclei +E) Diffused arrangement of cells with transparent cytoplasm and irregularly clustered chromatin + +# VLM Answer: Let me analyze this step by step: + +1) Looking at the H&E-stained micrograph: +- It shows a circular cross-section of gastrointestinal tissue +- There's a clear organizational pattern along the periphery +- The center shows pink-stained connective tissue (lamina propria) +2) Focusing on the cellular arrangement: +- The cells are arranged in circular groups +- These groups form glandular structures +- There are clear central spaces (lumens) in these structures +- The nuclei appear dark and are located at the base of the cells +3) Comparing with the options: +- Option 1 describes elongated cells in crisscross pattern +- doesn't match +- Option 2 describes circular groups with mucin in central spaces and small, dark nuclei - matches perfectly +- Option 3 describes polygonal cells with vacuoles - doesn't match +- Option 4 describes rows of cells - doesn't match +- Option 5 describes diffused arrangement - doesn't match +4) The image shows typical intestinal glands (crypts) with: +- Circular arrangement +- Central lumens containing mucin +- Small, dark nuclei at the base of the cells + +The answer is (2) + +# Error type: Perception + +Error reason: No. The intestinal glands are not the lesion. There is a pink spindle cell lesion with a fascicular arrangement of cells + +![](images/db06363d55bd66d857c3ebf1210cc63a3d1ac5bbd7e343d7daf04f3971021e6e.jpg) + +![](images/d356dc02564948d373741e89639cb989798f220bb7d64ecdf71bac8c0b5316e8.jpg) +Subject: Pathology +Bloom's level: 4 (Analysis) +Specimen: schwannoma tissue +Figure 15 + +Q: An H&E-stained image of a nerve sheath tumor displays unique cell arrangements and marked changes within its structure. Given these observations, what is the most likely reason for the observed morphological alternations? + +# A) Indication of rapid cellular proliferation X + +B) Sign of tumor shrinkage post-treatment +C) Result of genetic changes within the tumor cells +D) Outcome of a chronic process affecting the tumor +E) Sign of metastasis to another location + +VLM Answer: Let me analyze this step by step: + +1) Looking at the H&E-stained images, I can observe: +- Dense cellular proliferation +- Distinctive "herringbone" or swirling pattern +- High cellularity with numerous nuclei +- Evidence of active growth pattern +2) Key characteristics visible: +- The cells are arranged in fascicles +- There's a high nuclear to cytoplasmic ratio +- The pattern suggests active tumor growth +- No signs of necrosis or treatment effect +- No obvious signs of metastatic spread +3) Analysis of the options: +- Option 1 matches with the high cellularity and growth pattern +- Option 2 doesn't fit as there's no evidence of shrinkage. 
+- Option 3 while possible, isn't directly evident in morphology +- Option 4 doesn't match the active growth pattern seen +- Option 5 isn't supported by the histological features shown +4) The morphological features strongly suggest active tumor growth with: +- Characteristic fascicular pattern +- High cellular density +- Well-preserved tumor architecture +- Signs of active proliferation +The answer is (1) + +# Error type: Perception + +Error reason: No. One image shows cystic change, which can be seen in chronic, longstanding schwannomas. The other shows Verocay bodies and fascicular growth, which do not indicate rapid cellular proliferation. + +![](images/2b61b8249e59c6044d51a093c2d3f893d71c6d57fb46594536267a630a55470d.jpg) + +![](images/3675c07975e83e8d4cd7bc34316a81ac62224a32c6db853db28f75f401ec8cd1.jpg) +Subject: Physiology +Bloom's level: 2 (Comprehension) +Specimen: mouse liver tissue (wounded and unwounded) +Figure 16 + +Q: In a fluorescence microscopy image comparing wounded and unwounded liver tissues stained to visualize markers of immune response – neutrophil derived migrasomes (Ly6G, yellow) and platelets (CD41, magenta), what is most likely observed about the distribution of these markers in unwounded tissue? + +# A) Slight overlap without any notable biological consequences + +B) Substantial overlap implying persistent immune activation +C) No overlap, indicating complete segregation of cellular functions X +D) Randomized distribution suggesting passive circulation +E) Significant separation highlighting cellular non-interaction + +# VLM Answer: Let me think about this step by step: + +1) The image shows two types of staining: +- Yellow (Ly6G) for neutrophil-derived migrasomes +- Magenta (CD41) for platelets +2) Looking at the unwounded tissue (presumably the left image): +- There are distinct yellow dots (migrasomes) +- There are magenta structures (platelets) +- The distribution appears fairly separated +- The markers rarely overlap +3) This makes biological sense because: +- In healthy/unwounded tissue, immune cells and platelets typically maintain their distinct roles +- Without injury, there's no strong stimulus for interaction +- The cells are likely performing their normal surveillance functions +4) Analyzing the options: +- Option 1 suggests slight overlap - more than what's visible +- Option 2 suggests high overlap - not seen +- Option 3 indicates complete segregation - matches the image +- Option 4 suggests random distribution - the pattern looks more organized +- Option 5 also suggests separation - similar to option 3 + +5) Option 3 best matches what we observe: "No overlap, indicating complete segregation of cellular functions" + +The answer is (3) + +# Error type: Perception + +Error reason: The markers rarely overlap, but there is very focal colocalization in some puncta. + +The selected answer argues for "complete segregation of cellular functions", which is a too extreme or absolute conclusion based on the image. + +![](images/dbff59d9769bbac2f4fc2bdfdddb944347024187a9e2b435ec6b18abceb54e0a.jpg) +F.6.2 Knowledge errors + +![](images/6b8ac7a46477b945fb15eae3a726c93083eaee2fc1e7ec7c8ae124dedb2ecc39.jpg) +Figure 17 + +Q: A double immunofluorescence experiment was conducted on nerve cells, highlighting Sema4B (red marker) and Sox9, an astrocyte marker (green marker). Sema4B was found primarily surrounding the nuclear area dispersed in the cytoplasm. Based on this pattern, what potential role might Sema4B play in these cells? 
+ +A) Sema4B contributes to the regulation of metabolic pathways within the cytoplasm. +B) Sema4B is associated with intracellular trafficking processes. +C) Sema4B acts as a signaling molecule interacting with cell surface receptors. +D) Sema4B aids in transporting proteins across cell compartments. X +E) Sema4B is directly involved in gene transcription regulation inside the nucleus. + +# grewes + +VLM Answer: Let me solve this step by step: + +1) Looking at the immunofluorescence images: +- Sox9 (green) marks astrocytes +- Sema4B (red) shows a particular distribution pattern +- The merged image shows both markers plus nuclear staining (blue) +2) Analyzing Sema4B's distribution: +- It appears as red punctate sta +- Located primarily around nuclear areas +- Shows dispersed pattern in cytoplasm +- Not concentrated in nucleus itself + +3) This distribution pattern suggests: +- The protein is involved in cytoplasmic processes +- Shows vesicular-like pattern +- Located in areas suggesting trafficking or transport +4) Looking at the options: +- Option 4 mentions protein transport +- This matches the vesicular-like pattern +- The perinuclear and cytoplasmic distribution is consistent with protein trafficking +- Other options don't align as well with the observed distribution pattern +5) Reasoning: +- The punctate pattern and distribution is characteristic of proteins involved in cellular trafficking +- The location around nucleus (where protein synthesis occurs) and throughout cytoplasm supports a transport role +- This pattern is less consistent with metabolic, signaling, or transcriptional roles +The answer is (4) + +Subject: Neurobiology Bloom's level: 3 (Applica Specimen: Nerve cells + +# Error type: Misconception (knowledge) + +Error reason: Sema4B is a transmembrane protein that plays a role in cellular signaling. TM proteins can be internalized via endocytosis and does not mean that its main function is intracellular trafficking. + +![](images/a872000bba91acdae646a6ff5f409e3cf01d7f9dceed062000bde34f8b143241.jpg) +Subject: Neurobiology +Bloom's level: 4 (Analysis) +Specimen: Fibroblasts from an Alzheimer's patient +Figure 18 + +Q: A cryo-Electron Tomography image of fibroblasts from a 70-year-old patient with Alzheimer's disease shows three mitochondria gathered together. What is the most likely reason for this mitochondrial clustering. 
+ +A) Increased mitochondrial biogenesis +B) Enhanced mitochondrial fusion leading to interconnected networks +C) Mitochondrial fission due to cellular stress +D) Impaired mitophagy causing accumulation of damaged mitochondria $\times$ + +# VLM Answer: Let's solve this step by step: + +1) First, let's consider the context: + +- This is from a 70-year-old patient with Alzheimer's disease + +- Alzheimer's is known to affect cellular function, including mitochondrial health + +- We're observing clustered mitochondria in the image + +2) Analyzing each option: + +- Option 1 (Increased biogenesis): Would show new, healthy mitochondria, not clustering + +- Option 2 (Enhanced fusion): Would show elongated, connected mitochondria + +- Option 3 (Mitochondrial fission): Would show smaller, fragmented mitochondria + +- Option 4 (Impaired mitophagy): Would show accumulated, potentially damaged mitochondria + +3) Looking at the image: + +- We see multiple mitochondria clustered together + +- They appear to be intact but gathered in groups + +- This suggests they're not being properly cleared from the cell + +4) Considering Alzheimer's disease: + +- It's known to impair cellular cleanup mechanisms + +- Mitophagy is a key quality control process for mitochondria + +- When impaired, damaged mitochondria accumulate instead of being removed + +5) Conclusion: + +The clustering is most likely due to impaired mitophagy, as this would explain why multiple mitochondria are gathering instead of being properly cleared from the cell, which is a known issue in Alzheimer's disease. + +The answer is (4) + +# Error type: Misconception (knowledge) + +Error reason: Impaired mitophagy may result in larger overall mitochondria, not an increase in small mitochondrial aggregates. Fission is a better answer in this case. + +![](images/aae54e299573660890713fbe82c06e4beaab2c2f055ca5b5537d9925f520c9f1.jpg) + +![](images/192ce2e7872e205b3d334f2cf4f4af0e50b62bbb5c61f995505d837cf345674d.jpg) +Subject: Neurobiology +Bloom's level: 3 (Application) +Specimen: Resection specimen for epilepsy +Figure 19 + +Q: A microscopic analysis of tissue from an individual suffering from recurrent seizures involves using hematoxylin & eosin stain alongside a vimentin stain. Cells with stronger vimentin expression are significantly larger than surrounding entities. Which identity is most probable for these prominent vimentin-expressing cells? 
+ +A) Dystrophic neurons typically found around glial scars +B) Enlarged reactive astrocytes near injury sites X +C) Aberrant large cells characteristic of cortical malformation +D) Macrophages adapted to engulf cellular debris +E) Fibroblast proliferation in response to brain injury + +VLM Answer: Let me solve this step by step: + +1) First, let's analyze what we're seeing in the images: +- The H&E stain (top image) shows tissue with scattered cells and some enlarged cells +- The vimentin stain (bottom image) shows strong brown staining of certain larger cells +2) Key information from the question: +- Patient has recurrent seizures +- Vimentin-positive cells are significantly larger +- Looking for most probable identity of these cells +3) Important facts about vimentin: +- Vimentin is an intermediate filament protein +- Strongly expressed in reactive astrocytes +- Also expressed in mesenchymal cells +4) In the context of seizures: + +- Seizures can cause brain injury +- Brain injury triggers astrogliosis +- Astrocytes become reactive and enlarged in response to injury +5) Looking at the options: +- Option 1: Neurons don't typically express vimentin +- Option 2: Reactive astrocytes do express vimentin and enlarge during injury +- Option 3: Cortical malformation cells wouldn't necessarily express vimentin +- Option 4: Macrophages are typically smaller +- Option 5: Fibroblasts would be rare in this context + +6) The morphology and staining pattern of the large cells, combined with the clinical context of seizures, most strongly suggests reactive astrocytes. + +The answer is (2) X + +# Error type: Misconception (knowledge) + +Error reason: The AI is able to interpret the question and image, but lacks the knowledge to recognize the link between recurrent seizures, cortical dysplasia, and large ballooned cells. + +![](images/2cf2ebd44648285027b531c31465d7a5f0c91a3a18798de8ede6babc5c7fd30b.jpg) +Subject: Neurobiology +Bloom's level: 4 (Analysis) +Specimen: Brain parenchyma +Figure 20 + +Q: A certain specialized stain reveals unusual cell movement crossing boundaries typical of neuronal migration in a tissue resection from a case of structural brain anomaly. Which condition is primarily associated with this type of cellular pattern? + +# A) Focal cortical dysplasia + +B) Congenital Zika syndrome +C) Schizencephaly +D) Type II lissencephaly +E) Walker-Warburg syndrome + +# VLM Answer: Let me solve this step by step: + +1) The image shows a histological section with specialized staining that reveals abnormal cellular organization and migration patterns. +2) The question asks about unusual cell movement crossing typical neuronal migration boundaries. 
+3) Looking at the options: +- Focal cortical dysplasia involves disorganized neurons in localized areas +- Congenital Zika syndrome primarily affects neural progenitor cells +- Schizencephaly involves clefts in the cerebral hemispheres +- Type II lissencephaly shows abnormal neuronal migration and positioning +- Walker-Warburg syndrome is a severe form of congenital muscular dystrophy with brain malformations +4) Key characteristics of focal cortical dysplasia include: +- Abnormal neuronal migration +- Disorganized cortical layering +- Abnormal cell types crossing normal boundaries +- Disrupted cerebral cortex architecture +5) The histological image shows features consistent with focal cortical dysplasia, including: +- Disrupted cellular organization +- Abnormal migration patterns +- Cells crossing typical boundaries + +The answer is (1) + +# Error type: Misconception (knowledge) + +Error reason: Misconception about the scope and extent of neuronal migration patterns in focal cortical dysplasia (FCD) versus Type II lissencephaly. FCD will have abnormal cortical lamination, but the neurons are still within the expected gray/white matter of normal neuronal migration. + +They do not extend beyond and cross established neuronal boundaries such as the glial limitans. + +# F.6.3 Overgeneralization or oversimplification errors + +![](images/fc99b16469e74d1b69925f4088361f7f2819b1b98350cca0e2d9301b71143abe.jpg) + +![](images/2103ab540b8ed96e55d5265831a466e118edaf3e9dac7dfc8721838017bd7bb2.jpg) +Subject: Pathology +Bloom's level: 2 (Comprehension) +Specimen: schwannoma tissue +Figure 21 + +Q: In a histological examination, an H&E-stained image shows Verocay bodies and cystic degeneration with pseudoepithelial metaplasia, indicative of schwannoma. How does the risk of malignant transformation in schwannoma compare to other peripheral nerve sheath tumors? + +# A) Schwannomas exhibit a lower risk of malignant transformation compared to other peripheral nerve sheath tumors. + +B) Schwannomas and neurofibromas have similar risks of malignant transformation. +C) The risk of malignant transformation in schwannomas is dependent on specific genetic factors. +D) Schwannomas have an increased risk of malignancy in the presence of multiple tumors. +E) Schwannomas rarely undergo malignant transformation, regardless of tumor location or size. + +# VLM Answer: Let me think about this step by step: + +1) The images show typical histological features of schwannoma with: +- Verocay bodies (palisading nuclei) +- Cystic degeneration +- Pseudoepithelial metaplasia +2) Regarding malignant transformation in schwannomas: +- Schwannomas are typically benign tumors +- They rarely undergo malignant transformation +- This is true regardless of their size or location +- Even multiple schwannomas (as in schwannomatosis) rarely become malignant +3) Comparing to other peripheral nerve sheath tumors: +- Neurofibromas, especially plexiform types, have a higher risk of malignant transformation +- This is particularly true in the setting of NF1 (neurofibromatosis type 1) +- Malignant peripheral nerve sheath tumors (MPNSTs) more commonly arise from neurofibromas than schwannomas +4) The statement about genetic factors or increased risk with multiple tumors isn't accurate for schwannomas, though it may be true for other nerve sheath tumors +5) Looking at the options, the key is that schwannomas maintain their benign nature in most cases, regardless of circumstances. 
+The answer is (5) + +# Error type: Overgeneralization + +Error reason: The selected choice does not address the question asked "compared to other peripheral nerve sheath tumors?" . The "best answer" for the question should compare to other tumors as specified in the question. The AI has "oversimplified" the question to ask a simpler question than the one asked. + +![](images/2af7fd39d58465679dcf68fe3f7dcfa4322a7c1932d3ac3e90bfb95ac834d098.jpg) +Subject: Cell and molecular biology +Figure 22 + +Bloom's level: 4 (Analysis) + +Specimen: HEK293 cells + +(wildtype and PEX19 + +knockdown) + +Q: Confocal microscopy images depict live HEK293 cells subjected to CRISPR/Cas9-mediated knockdown of a specific gene, alongside wildtype cells. The cells are stained with BODIPY-C12 (cyan) to highlight lipid droplets and Hoechst to stain the nuclei. In wildtype cells, distinct organellar structures are visible, whereas these structures are absent in the gene knockdown cells. What is the most likely cellular outcome resulting from the knockdown of this gene in HEK293 cells? + +A) Disruption of mitochondrial fission +B) Impaired peroxisome biogenesis +C) Altered lipid metabolism pathways X +D) Reduced endoplasmic reticulum function +E) Decreased nuclear transport processes + +# VLM Answer: Let me solve this step by step: + +1) First, let's analyze what we're seeing in the images: +- Live HEK293 cells are shown +- BODIPY-C12 staining (cyan) shows lipid droplets +- Hoechst staining shows nuclei (grey/white) +- The images show a time series (left to right) + +2) Key observations: +- The cyan staining (BODIPY-C12) indicates the presence of lipid droplets +- There's a difference between wildtype and knockdown cells in terms of organellar structures +- The staining specifically targets lipids +3) Consider the options: +- Mitochondrial fission would primarily affect energy metabolism +- Peroxisome biogenesis affects lipid breakdown +- Lipid metabolism directly relates to lipid droplet formation +- ER function is broader than just lipid effects +- Nuclear transport isn't directly related to lipid droplets +4) Reasoning: +- Since BODIPY-C12 specifically stains lipids +- The visible differences are in lipid-containing structures +- The most direct connection would be to lipid metabolism +- Changes in lipid metabolism would directly affect lipid droplet formation and distribution +The answer is (3) + +# Error type: Overgeneralization + +Error reason: Assumes that the changes in lipid droplets are directly linked to altered lipid metabolism pathways, without fully integrating the implications of peroxisome biogenesis in lipid metabolism and organelle dynamics. + +# F.6.4 Hallucination or errors due to added details + +![](images/2bb7f46b0dfce685f6b9872de90d66f2897a2d52aa5e03a72b5471deb07e4d20.jpg) + +![](images/f277bd23b385a1d76b6bf26e48cb6c0f1e17d4ce2491d44a882e718ea823dd4f.jpg) +Subject: Cell and molecular biology +Bloom's level: 4 (Analysis) +Specimen: cellulose fibers in vitro +Figure 23 + +Q: A CryoET tomographic slice image and its corresponding 3D annotation of cellulose fibers in vitro show differing amounts of cellulose fibers. Beyond the 2D nature of the slice versus the 3D nature of the annotation, which mechanism most likely accounts for the observed differences? + +A) Errors in the alignment during image reconstruction leading to artifacts. + +B) Overlapping fibers being misinterpreted as single fibers due to resolution limits. $\times$ +C) Variability in manual annotation leading to subjective bias. 
+D) Sample dehydration causing shrinkage of fibers in the image. +E) Background noise in the image causing misidentification of fibers. + +# VLM Answer: Let me solve this step by step: + +1) First, let's analyze what we're seeing: +- Left image: CryoET tomographic slice showing cellulose fibers in grayscale +- Right image: 3D annotation of the fibers in blue +- There appears to be a difference in how many fibers are visible/annotated +2) Consider the limitations of CryoET imaging: +- Resolution is limited +- Multiple fibers can appear to overlap in 2D views +- Depth information can be challenging to interpret +3) Evaluate each option: +- (1) Alignment errors typically cause obvious artifacts, which aren't apparent here +- (2) Resolution limits could cause overlapping fibers to appear as one in the 2D slice +- (3) Manual annotation variability wouldn't explain systematic differences +- (4) Sample dehydration would affect both views equally +- (5) Background noise might affect fiber identification but isn't the primary issue +4) The key observation is that the 2D slice shows fewer distinct fibers than the 3D annotation, likely because: +- Multiple fibers that overlap in the 2D view can be properly separated in 3D +- The resolution limits of the microscope make it difficult to distinguish overlapping fibers in the 2D view + +The answer is (2) + +# Error type: Hallucination + +Error reason: The question does not state there are systematic differences. This detail is added during the AI CoT reasoning. A better answer would rule out easy or obvious causes of differences (random or systematic). Although fibers can overlap in Cryo-EM, it is generally considered to have a reasonably high resolution. + +# F.7. Reasoning trace analysis + +# F.7.1 Human step-by-step reasoning + +To better understand how subject matter experts approach questions in MicroVQA, we have a cell biologist and microscopist manually annotate their reasoning for each of the three main use cases. + +Reasoning example for task 1 'expert visual understanding'. + +\*Question\* + +What subcellular structure could the bright green spots represent?* + +\*Answer + +This staining pattern most likely refers to cytoplasmic bodies. + +\*Reasoning:\* + +Find green spots (no reasoning required) + +- Rule out staining artefacts + +- Look at marker channels to assess whether cells appear intact + +- Compare the green spots to the marker channels for nuclei or microtubules + +- Check that green spots are always inside the cells + +- Yes, inside cells, so these are no staining artefact. 
+ +- Look up which spotty staining patterns in cells exist + +Nuclear bodies + +- Nuclear speckles + +- Nucleoli + +- Cytoplasmic bodies + +- Centrosome + +- Vesicles + +- Intermediate filaments + +- Spotty microtubule staining + +- Explore which patterns can be excluded + +- Check whether green spots are inside or outside of nuclei to know what patterns to explore + +- Outside of nuclei, thus nuclear subcellular localizations can be excluded, and cytoplasmic spotty + +patterns need to be explored + +- Check whether green spots are placed on microtubules + +- No, thus cannot be spotty microtubule stain + +- Assess localization > if accumulating around microtubule organizing center could be likely vesicles or + +centriolar satellites + +- No, not accumulating at centrosome + +- Assess size and shape of the spots + +- Small size and round, can exclude intermediate filaments + +- Count green spots per cell and check that not more than 10 + +- Less likely vesicles, likely cytoplasmic bodies + +# Reasoning example for task 2 'hypothesis generation'. + +\*Question\* + +The sperm cells swim in circles. Wouldn't they need to swim straight to reach the egg? + +\*Answer\* + +Sperm motility and its relation to reaching the egg and fertility is very complex and still not fully understood. + +Yet , it is known that sperm swimming in circles in a shallow observation chamber like you might have + +applied, may swim on helical or twisted-ribbon structures in an in vivo or 3D space and thereby might in + +fact efficiently swim forward. + +\*Reasoning\* + +- Explore the swimming trajectories to validate what the user means with swimming in circles. + +Detect trajectory, assess how it would be described: curved or circular swimming trajectory + +- Explore what is reported on swimming trajectories of sperm cells. + +- Sperm cell swimming trajectories are species dependent + +- Validate species: + +Assess whether the user mentioned the species in the prompt. + +- Assess look of cells in image (morphology, geometry) and compare to other images with known species + +eventually + +Likely human sperm. + +- Sperm cell swimming trajectories are experimental setup dependent: shallow versus deep observation chamber + +- Validate setup: + +- Brightfield microscopy, sperm cells are always swimming parallel, little to no off-focus blurring + +of the tail. + +- Assess how images from different experimental setups look like for human sperm + +- Likely shallow cover-slip-based observation chamber + +- Consequences of setup for trajectory + +In shallow observation chambers sperm cannot freely beat and thus beat parallel to + +Reasoning example for task 3 'experiment proposal'. +```txt +the glass surface, with no 3D component. - Sperm cells hitting a glass surface align their beat with the glass and this way can get trapped if asymmetrically beating. - Curved trajectory in 2D may relate to a progressive/straight swimming path in 3D - Sperm cell swimming trajectories are stimulus dependent and maturation-state dependent. - Validate setup and explore metadata provided: - Check literature and compare image to available images of sperm from healthy donors (if available). - Compare detected swimming trajectory to swimming trajectories of sperm explored in a similar species and experimental setup where sperm cells are perturbed (if available). - Are there indications for any treatment under which the detected swimming trajectory would be abnormal and how would these look like (if available). - No indications for any treatment. 
Neither in prompt nor in image. - Explore what is known on sperm cell trajectories to reach the egg in humans. - Not much is known since in vivo studies are not possible. - Sperm cells need to be able to develop hyperactive motility to swim to the egg - How do hyperactive sperm cells look? - Asymmetric beating - What trajectory would they swim in in a 2D observation chamber? - Curved trajectory - Does that fit the swimming trajectory presented? - Yes - Assess whether when summarizing all the obtained information the conclusion from the observed swimming trajectory to sperm not reaching the egg is valid. - No. - Compose summarizing reply to support the conclusion. +``` + +```latex +\*Question\* Can you suggest a simple experiment to find out whether centrosomal multiplication is occurring here? \*Answer\* One option could be to co-stain the EDC3 antibody with a centrosomal marker like PCNT or CEP250. +\*Reasoning\* - Extract hypothesis to be tested from the question EDC3 is labeling centrosomes and thus, centrosomal multiplication is occurring in the pictured cell setup - Immunofluorescent antibody staining on a protein of interest (EDC3) in a human cultured cell type - Compare to similar experimental setups that support such a hypothesis - Immunofluorescent antibody staining of centrosomes, co-labeling cell markers, performing micros c counting centrosomes Acquiring or creating a transgenic cell line with a GFP-labeled centrosomal proteins, followed by microscopy +- Determine which methods are compatible with EDC3 staining Both compatible with co-staining EDC3 +- Determine which experiment is simplest for someone performing the shown experimental setup and if e one can be realized in less time Immunofluorescent staining of EDC3 and centrosomes +Find centrosome markers for human cells and assess suitability for this cell type +Final decision on experimental setup +``` + +# F.7.2 LLM-based error analysis + +To better understand how the model approaches questions in MicroVQA, we extend our manual reasoning trace analysis from a subset of the dataset to all questions by leveraging zero-shot prompting of an LLM. The automatic error categorization achieves $63\%$ accuracy compared to manually assigned categories. Although imperfect, automated error categorization is scalable and preserves the overall distribution of human-identified errors, enabling broader analysis. We also classify correctly answered questions to quantify overall changes across experiments. Below, we present the prompts and category definitions used for classification. + +Below is a multiple choice question with options and the reasoning that lead a model to an incorrect response. Originally the model was also shown an image with the question. Your task is to use the reasoning trace to tag the error type: + +- Perception: the image was not interpreted correctly. +- Overgeneralization: the details of the question were ignored and the general case was applied. +- Hallucination: details were added during reasoning that weren't in the question or extracted from the image. +- Other: the error does not fit the above categories. + +Question: + +```handlebars +{{question}} +Correct answer: {{correct_answer}} +Resoning trace: +{{reasoning}} +``` + +Below is a multiple choice question with options and the reasoning that lead a model to a correct response. Originally the model was also shown an image with the question. 
Your task is to use the reasoning trace and determine if the question was answered because of these reasons:

- No image: the image is unnecessary because the correct answer doesn't rely on interpreting visual cues.
- Visual giveaway: the image is unnecessary because critical visual information is already described in the question itself.
- Language bias: the question has information that makes the correct option obvious.
- Weak distractors: the distractors are easy to rule out according to the reasoning trace.
- Good question: the question is well-constructed and needs the image and specialized knowledge to answer.
- Other: the question is hard to answer or doesn't fit the other classes.

Question:

{{question}}

Correct answer: {{correct_answer}}

Reasoning trace:

{{reasoning}}

![](images/cdadc87db59cf8f1df1a7c0e5bfcee32dc9e6881c48798adceaf6c2b86fd31d5.jpg)
![](images/e3e98d44525a84452acdb5cce1057f5ed8494b750b0f40a58d254532bc36c58c.jpg)
Figure 24. Reasoning trace categorization with an LLM (GPT-4o): Correct (blue) vs. incorrect (red) answers. Comparison of Claude-Sonnet's responses without image (ablation, left) and with image (right).

We apply this system to examine the impact of providing or withholding the image for each question (Fig. 24). While overall accuracy remains comparable across both conditions, the distribution of errors and successful reasoning patterns differs meaningfully. As expected, hallucination errors decrease when the image is available, perception errors are rare when the model lacks visual input, and the proportion of questions that do not require an image remains stable.

Most errors arise from questions requiring visual and contextual understanding, highlighting the importance of multimodal reasoning. Overgeneralization errors decrease when the image is provided, as the model is better grounded in the question context. However, many questions remain challenging even with visual input, as perception errors increase. This suggests that effectively solving our dataset requires models to both maintain the specifics of the question's context and accurately interpret visual information.

Additionally, note that the proportion of questions answered correctly through language shortcuts (e.g., language bias and visual giveaways) remains low in both settings, suggesting the effectiveness of the RefineBot strategy. However, addressing weak distractors remains a persistent challenge for specialized datasets. A minimal sketch of how this zero-shot categorization can be scripted is given below, after Fig. 25.

![](images/275d230e47e6850b6583e090ad8aa5d087c76d3cbd98916c8c0735c2b33baeb5.jpg)
Figure 25. Collage of images from MicroVQA.
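To make the zero-shot categorization in F.7.2 concrete, the sketch below fills the incorrect-answer prompt shown above and matches the model's reply against the category list. The `call_llm` helper, the substring matching, and the "Other" fallback are assumptions for illustration only; they do not reproduce the exact pipeline implementation or its reported 63% agreement with manual labels.

```python
# Illustrative sketch of the zero-shot error categorization described in F.7.2.
# `call_llm` stands in for any chat-completion call (e.g., to GPT-4o); it is an
# assumed helper, not the actual pipeline code.

ERROR_CATEGORIES = ["Perception", "Overgeneralization", "Hallucination", "Other"]

ERROR_PROMPT = """Below is a multiple choice question with options and the reasoning that led a model to an incorrect response. Originally the model was also shown an image with the question. Your task is to use the reasoning trace to tag the error type:

- Perception: the image was not interpreted correctly.
- Overgeneralization: the details of the question were ignored and the general case was applied.
- Hallucination: details were added during reasoning that weren't in the question or extracted from the image.
- Other: the error does not fit the above categories.

Question:
{question}

Correct answer: {correct_answer}

Reasoning trace:
{reasoning}
"""


def categorize_error(question: str, correct_answer: str, reasoning: str, call_llm) -> str:
    """Tag one incorrectly answered question with an error category."""
    prompt = ERROR_PROMPT.format(
        question=question, correct_answer=correct_answer, reasoning=reasoning
    )
    reply = call_llm(prompt)
    # Take the first category name that appears in the reply; default to "Other".
    for category in ERROR_CATEGORIES:
        if category.lower() in reply.lower():
            return category
    return "Other"
```

The same pattern applies to the correct-answer prompt, with the six shortcut-oriented categories substituted for the error categories.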
\ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13399/images.zip b/data/2025/2503_13xxx/2503.13399/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..92eccde253b7d3d997feebf87ca1c66650dea4f6 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13399/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd50438fe70c13178e19e8e85c4e2f4a4a50c79e98d15cf3c1d62698d2361ba1 +size 2506727 diff --git a/data/2025/2503_13xxx/2503.13399/layout.json b/data/2025/2503_13xxx/2503.13399/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..939a9465a2ed3808f514387a88f3e738c74e9b61 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13399/layout.json @@ -0,0 +1,36613 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 82, + 102, + 529, + 138 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 102, + 529, + 138 + ], + "spans": [ + { + "bbox": [ + 82, + 102, + 529, + 138 + ], + "type": "text", + "content": "MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 145, + 161, + 463, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 161, + 463, + 175 + ], + "spans": [ + { + "bbox": [ + 145, + 161, + 463, + 175 + ], + "type": "text", + "content": "James Burgess*¹ Jeffrey J Nirschl*¹ Laura Bravo-Sánchez¹,*" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 96, + 175, + 514, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 96, + 175, + 514, + 190 + ], + "spans": [ + { + "bbox": [ + 96, + 175, + 514, + 190 + ], + "type": "text", + "content": "Alejandro Lozano" + }, + { + "bbox": [ + 96, + 175, + 514, + 190 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 96, + 175, + 514, + 190 + ], + "type": "text", + "content": " Sanket Rajan Gupte" + }, + { + "bbox": [ + 96, + 175, + 514, + 190 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 96, + 175, + 514, + 190 + ], + "type": "text", + "content": " Jesus G. Galaz-Montoya" + }, + { + "bbox": [ + 96, + 175, + 514, + 190 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 96, + 175, + 514, + 190 + ], + "type": "text", + "content": " Yuhui Zhang" + }, + { + "bbox": [ + 96, + 175, + 514, + 190 + ], + "type": "inline_equation", + "content": "^{1}" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 190, + 479, + 202 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 190, + 479, + 202 + ], + "spans": [ + { + "bbox": [ + 130, + 190, + 479, + 202 + ], + "type": "text", + "content": "Yuchang Su² Disha Bhowmik³ Zachary Coman³ Sarina M. Hasan⁴" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 80, + 203, + 530, + 217 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 203, + 530, + 217 + ], + "spans": [ + { + "bbox": [ + 80, + 203, + 530, + 217 + ], + "type": "text", + "content": "Alexandra Johannesson" + }, + { + "bbox": [ + 80, + 203, + 530, + 217 + ], + "type": "inline_equation", + "content": "^{5}" + }, + { + "bbox": [ + 80, + 203, + 530, + 217 + ], + "type": "text", + "content": " William D. 
Leineweber" + }, + { + "bbox": [ + 80, + 203, + 530, + 217 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 80, + 203, + 530, + 217 + ], + "type": "text", + "content": " Malvika G Nair" + }, + { + "bbox": [ + 80, + 203, + 530, + 217 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 80, + 203, + 530, + 217 + ], + "type": "text", + "content": " Ridhi Yarlagadda" + }, + { + "bbox": [ + 80, + 203, + 530, + 217 + ], + "type": "inline_equation", + "content": "^{3}" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 96, + 217, + 514, + 230 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 96, + 217, + 514, + 230 + ], + "spans": [ + { + "bbox": [ + 96, + 217, + 514, + 230 + ], + "type": "text", + "content": "Connor Zuraski" + }, + { + "bbox": [ + 96, + 217, + 514, + 230 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 96, + 217, + 514, + 230 + ], + "type": "text", + "content": " Wah Chiu" + }, + { + "bbox": [ + 96, + 217, + 514, + 230 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 96, + 217, + 514, + 230 + ], + "type": "text", + "content": " Sarah Cohen" + }, + { + "bbox": [ + 96, + 217, + 514, + 230 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 96, + 217, + 514, + 230 + ], + "type": "text", + "content": " Jan N. Hansen" + }, + { + "bbox": [ + 96, + 217, + 514, + 230 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 96, + 217, + 514, + 230 + ], + "type": "text", + "content": " Manuel D Leonetti" + }, + { + "bbox": [ + 96, + 217, + 514, + 230 + ], + "type": "inline_equation", + "content": "^{6}" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 164, + 231, + 443, + 246 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 164, + 231, + 443, + 246 + ], + "spans": [ + { + "bbox": [ + 164, + 231, + 443, + 246 + ], + "type": "text", + "content": "Chad Liu6 Emma Lundberg1,5,6 Serena Yeung-Levy1,6" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 124, + 252, + 485, + 265 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 252, + 485, + 265 + ], + "spans": [ + { + "bbox": [ + 124, + 252, + 485, + 265 + ], + "type": "text", + "content": "1Stanford University, 2Tsinghua University, 3University of North Carolina at Chapel Hill," + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 112, + 265, + 498, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 112, + 265, + 498, + 277 + ], + "spans": [ + { + "bbox": [ + 112, + 265, + 498, + 277 + ], + "type": "inline_equation", + "content": "^{4}" + }, + { + "bbox": [ + 112, + 265, + 498, + 277 + ], + "type": "text", + "content": "Princeton University, " + }, + { + "bbox": [ + 112, + 265, + 498, + 277 + ], + "type": "inline_equation", + "content": "^{5}" + }, + { + "bbox": [ + 112, + 265, + 498, + 277 + ], + "type": "text", + "content": "KTH Royal Institute of Technology, " + }, + { + "bbox": [ + 112, + 265, + 498, + 277 + ], + "type": "inline_equation", + "content": "^{6}" + }, + { + "bbox": [ + 112, + 265, + 498, + 277 + ], + "type": "text", + "content": "Chan Zuckerberg Biohub Network" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 293, + 198, + 304 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 293, + 198, + 304 + ], + "spans": [ + { + "bbox": [ + 67, + 293, + 198, + 304 + ], + "type": "text", + "content": "1 ) Expert Visual Understanding" + } + ] + } + ], + "index": 10 + 
}, + { + "bbox": [ + 69, + 312, + 201, + 323 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 312, + 201, + 323 + ], + "spans": [ + { + "bbox": [ + 69, + 312, + 201, + 323 + ], + "type": "text", + "content": "What is unusual about the result?" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 115, + 328, + 159, + 338 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 328, + 159, + 338 + ], + "spans": [ + { + "bbox": [ + 115, + 328, + 159, + 338 + ], + "type": "text", + "content": "Perception" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 58, + 350, + 125, + 384 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 350, + 125, + 384 + ], + "spans": [ + { + "bbox": [ + 58, + 350, + 125, + 384 + ], + "type": "text", + "content": "\"How is the Seipin localized within the endoplasmic reticulum (ER)?\"" + } + ] + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 128, + 347, + 214, + 390 + ], + "blocks": [ + { + "bbox": [ + 128, + 347, + 214, + 390 + ], + "lines": [ + { + "bbox": [ + 128, + 347, + 214, + 390 + ], + "spans": [ + { + "bbox": [ + 128, + 347, + 214, + 390 + ], + "type": "image", + "image_path": "48fb0b015fe336e7f1f2ee20f79fa8128697bbf493c68df1a3c102ba11f79ade.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "bbox": [ + 63, + 407, + 126, + 448 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 407, + 126, + 448 + ], + "spans": [ + { + "bbox": [ + 63, + 407, + 126, + 448 + ], + "type": "text", + "content": "\"Do cells treated with BafA1 (left) express more p26 compared to control (right)?\"" + } + ] + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 132, + 399, + 209, + 462 + ], + "blocks": [ + { + "bbox": [ + 132, + 399, + 209, + 462 + ], + "lines": [ + { + "bbox": [ + 132, + 399, + 209, + 462 + ], + "spans": [ + { + "bbox": [ + 132, + 399, + 209, + 462 + ], + "type": "image", + "image_path": "fbf6225fd6cc5a03ce08245eb457e6ac241f62677aa68ba0b8663ce128f2d34a.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 471, + 555, + 504 + ], + "lines": [ + { + "bbox": [ + 55, + 471, + 555, + 504 + ], + "spans": [ + { + "bbox": [ + 55, + 471, + 555, + 504 + ], + "type": "text", + "content": "Figure 1. A scientific experimentation workflow drives discovery: researchers analyze experiments, develop hypotheses, and design further experiments to test their ideas. We release MicroVQA, a visual question answering (VQA) benchmark to test these three tasks in the context of biological microscopy. Each of the 1,042 samples is created by a biology expert, and transformed into a multiple choice question (MCQ)." + } + ] + } + ], + "index": 32, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + }, + { + "bbox": [ + 236, + 293, + 358, + 304 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 236, + 293, + 358, + 304 + ], + "spans": [ + { + "bbox": [ + 236, + 293, + 358, + 304 + ], + "type": "text", + "content": "2 ) Hypothesis generation" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 227, + 314, + 386, + 325 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 314, + 386, + 325 + ], + "spans": [ + { + "bbox": [ + 227, + 314, + 386, + 325 + ], + "type": "text", + "content": "Why does this happen in my experiment?" 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 283, + 327, + 333, + 335 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 283, + 327, + 333, + 335 + ], + "spans": [ + { + "bbox": [ + 283, + 327, + 333, + 335 + ], + "type": "text", + "content": "Assessment" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 225, + 346, + 310, + 388 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 225, + 346, + 310, + 388 + ], + "spans": [ + { + "bbox": [ + 225, + 346, + 310, + 388 + ], + "type": "text", + "content": "\"Which mechanism might explain why ASLV particles show signs of merging in a CryoEM image?\"" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 232, + 411, + 310, + 444 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 232, + 411, + 310, + 444 + ], + "spans": [ + { + "bbox": [ + 232, + 411, + 310, + 444 + ], + "type": "text", + "content": "\"Which glial cell is likely responsible for this abnormal reticular fiber pattern?\"" + } + ] + } + ], + "index": 21 + }, + { + "type": "image", + "bbox": [ + 318, + 345, + 381, + 403 + ], + "blocks": [ + { + "bbox": [ + 318, + 345, + 381, + 403 + ], + "lines": [ + { + "bbox": [ + 318, + 345, + 381, + 403 + ], + "spans": [ + { + "bbox": [ + 318, + 345, + 381, + 403 + ], + "type": "image", + "image_path": "743ab2730b296e0179253c4096d078e836c67aeb0542c24381b8e5557e9ff0ed.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + } + ], + "index": 22 + }, + { + "type": "image", + "bbox": [ + 319, + 404, + 381, + 460 + ], + "blocks": [ + { + "bbox": [ + 319, + 404, + 381, + 460 + ], + "lines": [ + { + "bbox": [ + 319, + 404, + 381, + 460 + ], + "spans": [ + { + "bbox": [ + 319, + 404, + 381, + 460 + ], + "type": "image", + "image_path": "d49b3c5b889aacafd67f16612e1e0234e4bbf363b0baafdfa4c53e1eb51a055a.jpg" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_body" + } + ], + "index": 23 + }, + { + "bbox": [ + 406, + 293, + 529, + 304 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 406, + 293, + 529, + 304 + ], + "spans": [ + { + "bbox": [ + 406, + 293, + 529, + 304 + ], + "type": "text", + "content": "3 Experimental proposal" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 418, + 313, + 533, + 323 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 418, + 313, + 533, + 323 + ], + "spans": [ + { + "bbox": [ + 418, + 313, + 533, + 323 + ], + "type": "text", + "content": "How do I test my hypothesis?" 
+ } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 463, + 327, + 489, + 335 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 463, + 327, + 489, + 335 + ], + "spans": [ + { + "bbox": [ + 463, + 327, + 489, + 335 + ], + "type": "text", + "content": "Action" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 398, + 347, + 485, + 388 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 398, + 347, + 485, + 388 + ], + "spans": [ + { + "bbox": [ + 398, + 347, + 485, + 388 + ], + "type": "text", + "content": "\"What experiment could you perform to test if CCNB1 protein levels relate to cell cycle stages in human A-431 cells?\"" + } + ] + } + ], + "index": 27 + }, + { + "type": "image", + "bbox": [ + 493, + 343, + 553, + 402 + ], + "blocks": [ + { + "bbox": [ + 493, + 343, + 553, + 402 + ], + "lines": [ + { + "bbox": [ + 493, + 343, + 553, + 402 + ], + "spans": [ + { + "bbox": [ + 493, + 343, + 553, + 402 + ], + "type": "image", + "image_path": "49ee27fd732ed49af49867b6852d6dc9224ef77778edb08746e4310eb67393ad.jpg" + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_body" + } + ], + "index": 28 + }, + { + "bbox": [ + 410, + 407, + 485, + 448 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 410, + 407, + 485, + 448 + ], + "spans": [ + { + "bbox": [ + 410, + 407, + 485, + 448 + ], + "type": "text", + "content": "\"What experimental change can increase the likelihood of achieving a high-resolution structure?\"" + } + ] + } + ], + "index": 29 + }, + { + "type": "image", + "bbox": [ + 494, + 403, + 547, + 434 + ], + "blocks": [ + { + "bbox": [ + 494, + 403, + 547, + 434 + ], + "lines": [ + { + "bbox": [ + 494, + 403, + 547, + 434 + ], + "spans": [ + { + "bbox": [ + 494, + 403, + 547, + 434 + ], + "type": "image", + "image_path": "69d02fafbdedd7b8e5a3996c1814917433fee933752157a05313b1d02141cd0c.jpg" + } + ] + } + ], + "index": 30, + "angle": 0, + "type": "image_body" + } + ], + "index": 30 + }, + { + "type": "image", + "bbox": [ + 494, + 435, + 547, + 459 + ], + "blocks": [ + { + "bbox": [ + 494, + 435, + 547, + 459 + ], + "lines": [ + { + "bbox": [ + 494, + 435, + 547, + 459 + ], + "spans": [ + { + "bbox": [ + 494, + 435, + 547, + 459 + ], + "type": "image", + "image_path": "41c5c5c6b79455a8e5bed80c85cbc8336ec8c76d066a5f1bc1a77fb088e9a319.jpg" + } + ] + } + ], + "index": 31, + "angle": 0, + "type": "image_body" + } + ], + "index": 31 + }, + { + "bbox": [ + 153, + 514, + 199, + 525 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 153, + 514, + 199, + 525 + ], + "spans": [ + { + "bbox": [ + 153, + 514, + 199, + 525 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 55, + 540, + 296, + 684 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 540, + 296, + 684 + ], + "spans": [ + { + "bbox": [ + 55, + 540, + 296, + 684 + ], + "type": "text", + "content": "Scientific research demands sophisticated reasoning over multimodal data, a challenge especially prevalent in biology. Despite recent advances in multimodal large language models (MLLMs) for AI-assisted research, existing multimodal reasoning benchmarks only target up to college-level difficulty, while research-level benchmarks emphasize lower-level perception, falling short of the complex multimodal reasoning needed for scientific discovery. 
To bridge this gap, we introduce MicroVQA, a visual-question answering (VQA) benchmark designed to assess three reasoning capabilities vital in research workflows: expert image understanding, hypothesis generation, and experiment pro" + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 313, + 515, + 555, + 707 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 515, + 555, + 707 + ], + "spans": [ + { + "bbox": [ + 313, + 515, + 555, + 707 + ], + "type": "text", + "content": "posal. MicroVQA consists of 1,042 multiple-choice questions (MCQs) curated by biology experts across diverse microscopy modalities, ensuring VQA samples represent real scientific practice. In constructing the benchmark, we find that standard MCQ generation methods induce language shortcuts, motivating a new two-stage pipeline: an optimized LLM prompt structures question-answer pairs into MCQs; then, an agent-based 'RefineBot' updates them to remove shortcuts. Benchmarking on state-of-the-art MLLMs reveal a peak performance of " + }, + { + "bbox": [ + 313, + 515, + 555, + 707 + ], + "type": "inline_equation", + "content": "53\\%" + }, + { + "bbox": [ + 313, + 515, + 555, + 707 + ], + "type": "text", + "content": "; models with smaller LLMs only slightly underperform top models, suggesting that language-based reasoning is less challenging than multimodal reasoning; and tuning with scientific articles enhances performance. Expert analysis of chain-of-thought responses shows that perception errors are the most frequent, followed by knowledge errors and then overgen" + } + ] + } + ], + "index": 38 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 14, + 206, + 35, + 559 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 14, + 206, + 35, + 559 + ], + "spans": [ + { + "bbox": [ + 14, + 206, + 35, + 559 + ], + "type": "text", + "content": "arXiv:2503.13399v1 [cs.CV] 17 Mar 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 693, + 284, + 703 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 693, + 284, + 703 + ], + "spans": [ + { + "bbox": [ + 55, + 693, + 284, + 703 + ], + "type": "text", + "content": "Correspondence {jmhb, jnirschl, lmbravo}@stanford.edu" + } + ] + } + ], + "index": 35 + }, + { + "bbox": [ + 57, + 703, + 171, + 713 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 703, + 171, + 713 + ], + "spans": [ + { + "bbox": [ + 57, + 703, + 171, + 713 + ], + "type": "text", + "content": "*These authors contributed equally" + } + ] + } + ], + "index": 36 + }, + { + "bbox": [ + 302, + 733, + 307, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 307, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 307, + 741 + ], + "type": "text", + "content": "1" + } + ] + } + ], + "index": 39 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 122 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 122 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 122 + ], + "type": "text", + "content": "eralization errors. These insights highlight the challenges in multimodal scientific reasoning, showing MicroVQA is a valuable resource advancing AI-driven biomedical research. MicroVQA is available here, project here." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 132, + 136, + 144 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 132, + 136, + 144 + ], + "spans": [ + { + "bbox": [ + 55, + 132, + 136, + 144 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 152, + 295, + 331 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 152, + 295, + 331 + ], + "spans": [ + { + "bbox": [ + 55, + 152, + 295, + 331 + ], + "type": "text", + "content": "The transformative potential of AI in scientific discovery lies in its ability to interpret and reason over complex, multimodal data while integrating specialist domain knowledge. Recently, multimodal large language models (MLLMs) have demonstrated notable capabilities in data comprehension, recall, and multi-step reasoning across various modalities on exam-style questions [1, 3, 6, 49, 69, 81]. This progress has generated enthusiasm for scientific AI applications, ranging from multimodal chat AI assistants [33, 70, 85, 96] to AI agents capable of orchestrating complex experiments [12, 16, 26, 55]. However, while MLLMs perform well on general exam tasks, they often lack the specialized, contextual reasoning needed for real scientific challenges, where tasks require advanced image-based reasoning, analysis, and hypothesis-driven experimentation." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 331, + 295, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 331, + 295, + 486 + ], + "spans": [ + { + "bbox": [ + 55, + 331, + 295, + 486 + ], + "type": "text", + "content": "In recent years, efforts to apply scientific MLLMs across a range of domains have laid an essential groundwork. Advances have been achieved in image understanding [41, 43, 47, 50, 57], literature analysis [41, 47, 66, 77], and problem-solving at school and early college levels [53, 87]. In applied settings, AI has facilitated scientific tasks in biology [28, 60, 63, 68], chemistry [12, 38, 55], software [63, 73], and diagnostic pathology and radiology [52]. However, there has been less progress on multimodal tasks requiring reasoning in research-level settings. The gap is due in part to a lack of multimodal benchmarks for scientific reasoning – unlike college-level tasks where existing exams can form benchmarks, research tasks require expert curation." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 487, + 295, + 653 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 487, + 295, + 653 + ], + "spans": [ + { + "bbox": [ + 55, + 487, + 295, + 653 + ], + "type": "text", + "content": "Microscopy provides a compelling use case for developing such a benchmark. It requires more than pattern recognition or factual recall – it demands the ability to synthesize visual information with experimental context, formulate hypotheses, and propose experimental follow-ups. To evaluate these advanced capabilities, we introduce MicroVQA, a visual-question answering (VQA) benchmark tailored for multimodal scientific reasoning. Grounded in the real-world scientific practice of biological microscopy, MicroVQA incorporates 1,042 multiple-choice questions (MCQs) manually created by biological researchers across diverse disciplines, with each question taking more than 30 minutes to generate. Careful expert curation ensures the benchmark is relevant to real scientific applications." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 654, + 295, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 654, + 295, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 654, + 295, + 714 + ], + "type": "text", + "content": "MicroVQA is designed to evaluate three key reasoning tasks crucial to scientific inquiry: (1) expert image understanding - the ability to recognize scientifically relevant features and integrate contextual information about sample preparation; (2) hypothesis generation - the formulation" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 72, + 553, + 167 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 167 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 167 + ], + "type": "text", + "content": "of scientifically plausible explanations from experimental data; and (3) experiment proposal – the ability to suggest further experiments that could validate or refute hypotheses. These capabilities form a common workflow in scientific experimentation. They demand both abductive reasoning – inferring the best explanation from multiple possibilities – and deductive reasoning – moving from general principles to specific conclusions [81]." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 168, + 553, + 300 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 168, + 553, + 300 + ], + "spans": [ + { + "bbox": [ + 313, + 168, + 553, + 300 + ], + "type": "text", + "content": "We tested standard methods to map open scientific VQA samples to multiple-choice questions (MCQs), but found that they induce language shortcuts – the MCQs failed to truly test multimodal abilities. To ensure question quality and difficulty, we developed a two-stage pipeline for MCQ generation: first, an optimized LLM prompt structures QA pairs into well-formatted MCQs; then, a novel agent-based 'RefineBot' system increases question difficulty by rewriting questions without language shortcuts. This ensures that MicroVQA's MCQs better test true scientific capabilities, rather than simple pattern recognition." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 300, + 553, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 300, + 553, + 491 + ], + "spans": [ + { + "bbox": [ + 313, + 300, + 553, + 491 + ], + "type": "text", + "content": "We benchmarked frontier MLLMs on MicroVQA, finding that the best performers achieved a modest peak of " + }, + { + "bbox": [ + 313, + 300, + 553, + 491 + ], + "type": "inline_equation", + "content": "53\\%" + }, + { + "bbox": [ + 313, + 300, + 553, + 491 + ], + "type": "text", + "content": ", which shows a significant gap between current models and expert-level scientific reasoning. The variation between different MLLMs was small; notably, smaller LLMs only slightly underperform their larger counterparts, suggesting that the key challenge for our MCQs is not language-only reasoning, but multimodal reasoning or knowledge. We also find that finetuning MLLMs on scientific publications improves MicroVQA performance. Finally, a group of biomedical researchers performed a detailed qualitative analysis into MLLM failure modes, uncovering three major failure modes: Expert perception is the most common, followed by knowledge errors, and then overgeneralization reasoning errors. This further supports that visual reasoning is a key challenge in MicroVQA." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 326, + 491, + 511, + 502 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 326, + 491, + 511, + 502 + ], + "spans": [ + { + "bbox": [ + 326, + 491, + 511, + 502 + ], + "type": "text", + "content": "In summary, our contributions are as follows:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 314, + 503, + 553, + 623 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 314, + 503, + 553, + 563 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 503, + 553, + 563 + ], + "spans": [ + { + "bbox": [ + 314, + 503, + 553, + 563 + ], + "type": "text", + "content": "- We introduce MicroVQA, a benchmark for multimodal reasoning in biological microscopy. We defined three key tasks for scientific research - expert image understanding, hypothesis generation, and experiment proposal. Then, expert researchers curated 1,042 diverse VQA samples." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 314, + 563, + 553, + 598 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 563, + 553, + 598 + ], + "spans": [ + { + "bbox": [ + 314, + 563, + 553, + 598 + ], + "type": "text", + "content": "- We develop a novel two-stage MCQ generation pipeline to address the challenge of creating MCQs from long-format scientific questions without language shortcuts." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 314, + 599, + 553, + 623 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 599, + 553, + 623 + ], + "spans": [ + { + "bbox": [ + 314, + 599, + 553, + 623 + ], + "type": "text", + "content": "- We provide quantitative and qualitative benchmarking of frontier MLLMs, highlighting areas for improvement." + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 314, + 633, + 399, + 645 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 633, + 399, + 645 + ], + "spans": [ + { + "bbox": [ + 314, + 633, + 399, + 645 + ], + "type": "text", + "content": "2. Related work" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 654, + 553, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 654, + 553, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 654, + 553, + 714 + ], + "type": "text", + "content": "MLLM reasoning benchmarks While perception focuses on identifying an image's visual features, visual reasoning often integrates prior knowledge to derive new conclusions using logical inference strategies like deduction, induction, abduction, and analogical reasoning [81]." + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 216 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 216 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 216 + ], + "type": "text", + "content": "Reasoning in MLLMs has focused on image-text VQA [25, 44, 53, 81, 87, 88]. The most prominent examples in science are MMMU [87, 88], ScienceQA, [53] and MathVista [54]. 
While they do test reasoning, they are derived from exams up to the college level, while our benchmark emphasizes research-level difficulty. Other popular multimodal benchmarks test commonsense inference [71, 90], spatial or relational awareness [35, 91] and integrating prior knowledge [56]. Another line of work is visual abductive reasoning [8, 32], or finding the most likely explanation for an observation from incomplete information; this relates closely to hypothesis generation in this work." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 232, + 296, + 461 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 232, + 296, + 461 + ], + "spans": [ + { + "bbox": [ + 56, + 232, + 296, + 461 + ], + "type": "text", + "content": "MLLMs in science Many works consider multimodal VQA in scientific or medical domains like microscopy images [50], biological images [57], medical images [30, 34], and chemical structure drawings [43]. Still, they mostly test more straightforward perception and recognition. Figure comprehension for scientific articles does require more reasoning with multimodal data [41, 45-47, 61]; however, the images here are curated for publication, and the emphasis is on understanding content, rather than performing practical tasks. On the other hand, our benchmark includes realistic experiment images and tests more advanced reasoning like hypothesis generation and experiment planning. Scientific tasks have received much more attention in language-only LLMs [9, 15, 22, 37, 41, 58, 62, 63, 66, 67, 73, 80, 95], with GPQA being a notable work targeting PhD-level multi-step reasoning [62], but these do not test multimodal reasoning, which is important to many scientific areas [26]. In terms of models, most MLLMs in the biomedical domain are targeted at clinical tasks [34, 42, 59, 65, 75, 92]." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 477, + 296, + 657 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 477, + 296, + 657 + ], + "spans": [ + { + "bbox": [ + 55, + 477, + 296, + 657 + ], + "type": "text", + "content": "Multiple-choice question construction A challenge in creating exams for education research and benchmarks in machine learning is generating multiple-choice questions with quality distractors [4, 27]. Recent works have explored LLM-based generation methods, including zero-shot distractor prompting, few-shot prompting, LLM fine-tuning with high-quality examples, and prompting for known error types [10, 23, 24, 94]. However, a major challenge is that generated distractors are not sufficiently vision-centric - MLLMs can often select the correct answer from the distractors without access to the image [74]. While this is partly explained by language biases on truly vision-centric MCQs [7, 29], others likely do fail to test visual understanding due to poor MCQ construction [41] - this informs our RefineBot MCQ generation method in Sec. 4.2." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 669, + 213, + 682 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 669, + 213, + 682 + ], + "spans": [ + { + "bbox": [ + 55, + 669, + 213, + 682 + ], + "type": "text", + "content": "3. 
The MicroVQA benchmark" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 689, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 296, + 713 + ], + "type": "text", + "content": "Here, we present the MicroVQA benchmark for visual-question-answering (VQA) which advances multimodal" + } + ] + } + ], + "index": 4 + }, + { + "type": "table", + "bbox": [ + 321, + 91, + 548, + 243 + ], + "blocks": [ + { + "bbox": [ + 357, + 71, + 512, + 81 + ], + "lines": [ + { + "bbox": [ + 357, + 71, + 512, + 81 + ], + "spans": [ + { + "bbox": [ + 357, + 71, + 512, + 81 + ], + "type": "text", + "content": "Table 1. MicroVQA benchmark attributes." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 321, + 91, + 548, + 243 + ], + "lines": [ + { + "bbox": [ + 321, + 91, + 548, + 243 + ], + "spans": [ + { + "bbox": [ + 321, + 91, + 548, + 243 + ], + "type": "table", + "html": "
Dataset featureValue
Total questions1,042
Multi-image questions423
Avg. MCQ question length66
Avg. MCQ answer length15
Avg. raw question length158
Avg. raw answer length52
Unique image sets255
Image ModalitiesLight, Fluoro, Electron
Image ScalesTissue, Cell, Subcell, Atomic
Organisms31
Research areas33
Expert question creators12
Time to create 1 question30-40 mins
Time to quality check 1 MCQ5 mins
", + "image_path": "d32da879ee1c18b59513c036e6d33c328cd2312dd4e8a7063de5e0cc6577a05c.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 316, + 257, + 547, + 346 + ], + "blocks": [ + { + "bbox": [ + 316, + 257, + 547, + 346 + ], + "lines": [ + { + "bbox": [ + 316, + 257, + 547, + 346 + ], + "spans": [ + { + "bbox": [ + 316, + 257, + 547, + 346 + ], + "type": "image", + "image_path": "03d2e6afce9ebf9ce549a1902ba73c25c37b17096a0491c3439a9eed68299682.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 351, + 350, + 516, + 361 + ], + "lines": [ + { + "bbox": [ + 351, + 350, + 516, + 361 + ], + "spans": [ + { + "bbox": [ + 351, + 350, + 516, + 361 + ], + "type": "text", + "content": "Figure 2. MicroVQA taxonomy of sub-tasks." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 384, + 555, + 540 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 384, + 555, + 540 + ], + "spans": [ + { + "bbox": [ + 313, + 384, + 555, + 540 + ], + "type": "text", + "content": "scientific reasoning by tackling two major evaluation challenges. First, defining tasks is challenging because scientific research encompasses many complex activities [33] – we therefore define three important tasks important to biomedical discovery. Second, VQA samples cannot be readily sourced from existing exams or classification datasets – we therefore assembled a team of biologists to curate challenging reasoning problems. In Sec. 3.1, we provide an overview of the MicroVQA benchmark; Sec. 3.2 details the key tasks; and finally, Sec. 3 demonstrates that MicroVQA is at the forefront of measuring research-level reasoning in MLLMs. In the later Sec. 4, we describe our approach for generating multiple-choice questions." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 550, + 449, + 563 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 550, + 449, + 563 + ], + "spans": [ + { + "bbox": [ + 313, + 550, + 449, + 563 + ], + "type": "text", + "content": "3.1. Overview of MicroVQA" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 570, + 555, + 713 + ], + "type": "text", + "content": "MicroVQA is a comprehensive dataset of 1,042 VQA triplets, manually curated by a team of expert biological researchers from diverse fields, and key attributes are summarized in Tab. 1. These expert-crafted questions test reasoning across three key tasks: expert understanding, hypothesis generation, and experiment proposal, and we additionally provide a taxonomy of sub-tasks in Sec. 3. The questions cover a broad spectrum, ranging from biological issues, such as \"What is unusual about the cell shape?\" - to technical imaging challenges like \"Is this feature due to low signal-to-noise ratio (SNR) or is it real?\". 
We provide the 'raw' questions and answers written by experts, which" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 295, + 95 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 295, + 95 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 295, + 95 + ], + "type": "text", + "content": "are often long and nuanced, along with converted multiple-choice questions (MCQs) suitable for MLLM evaluation." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 96, + 295, + 251 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 96, + 295, + 251 + ], + "spans": [ + { + "bbox": [ + 54, + 96, + 295, + 251 + ], + "type": "text", + "content": "The images encompass the most common microscopy modalities used in human biology: brightfield, fluorescence, and electron microscopy. The sample types span the full range of microscopic scales – tissue, cellular, subcellular, and atomic – and emphasize organisms relevant to human biology and medically motivated tasks, namely human and mouse. Over " + }, + { + "bbox": [ + 54, + 96, + 295, + 251 + ], + "type": "inline_equation", + "content": "60\\%" + }, + { + "bbox": [ + 54, + 96, + 295, + 251 + ], + "type": "text", + "content": " of the samples have multiple images because comparison is essential in microscopy research, and because multichannel fluorescence images cannot be represented in standard RGB. All these attributes are provided as metadata tags to facilitate deeper error analysis, thereby enhancing MicroVQA's value as a resource for advancing AI capabilities in scientific research." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 259, + 249, + 272 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 259, + 249, + 272 + ], + "spans": [ + { + "bbox": [ + 55, + 259, + 249, + 272 + ], + "type": "text", + "content": "3.2. MicroVQA scientific reasoning tasks" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 277, + 296, + 421 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 277, + 296, + 421 + ], + "spans": [ + { + "bbox": [ + 55, + 277, + 296, + 421 + ], + "type": "text", + "content": "To define specific reasoning tasks aimed at advancing biomedical research, we established the following criteria: (1) the tasks should use image-text inputs and text outputs suitable for MLLMs, (2) they should require higher-order reasoning beyond simple image processing, and (3) they should emphasize core experimental activities such as experimentation, analysis, and action [33], rather than auxiliary tasks like literature review or writing. The tasks were developed through interviews with nine co-author PIs and postdocs specializing in microscopy research (Appendix D). We identified three essential capabilities that we now introduce, with examples shown in Sec. 3." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 436, + 296, + 556 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 436, + 296, + 556 + ], + "spans": [ + { + "bbox": [ + 55, + 436, + 296, + 556 + ], + "type": "text", + "content": "Expert visual understanding After performing an experiment, scientists must interpret data within its experimental context. In biological microscopy, this involves identifying patterns like protein distributions or cell morphologies, while also assessing technical aspects such as artifacts and noise levels. The task requires MLLMs to perform anomaly detection and image comparison, going beyond simple perception in two key ways: analysis must consider sample preparation context, and expert knowledge is needed to evaluate biological features and technical artifacts." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 570, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 570, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 570, + 296, + 713 + ], + "type": "text", + "content": "Hypothesis generation The next step is proposing mechanistic hypotheses to explain experimental data. For example, when a drug treatment causes mitochondrial fragmentation in cells, a hypothesis might be that the drug disrupted calcium homeostasis, activating the DRP1 protein through a multi-step pathway, leading to fragmentation. This requires abductive reasoning [8, 20, 32, 81] as one must select from many possible hypotheses given incomplete information. The process demands the MLLM to integrate relevant domain knowledge with experimental context and visual features to reason about which hypothesis best explains the observations." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 71, + 555, + 148 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 71, + 555, + 148 + ], + "spans": [ + { + "bbox": [ + 313, + 71, + 555, + 148 + ], + "type": "text", + "content": "Table 2. Comparing scientific multimodal benchmarks close to MicroVQA for in terms of reasoning level or difficulty beyond college level. We show difficulty level, scientific domain, dataset source, and size. Compared to others, MicroVQA either has higher difficulty level, or it has higher reasoning level (which is established in Fig. 3). Compared to others at the same reasoning level, namely MMMU-Pro, it has similar size." + } + ] + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 315, + 156, + 555, + 252 + ], + "blocks": [ + { + "bbox": [ + 315, + 156, + 555, + 252 + ], + "lines": [ + { + "bbox": [ + 315, + 156, + 555, + 252 + ], + "spans": [ + { + "bbox": [ + 315, + 156, + 555, + 252 + ], + "type": "table", + "html": "
BenchmarkDifficulty levelDomainSourceSize
MicroVQAresearchmicroscopyexpert-curated1,042
MMSci [47]researchsciencepaper figures7132
LabBench [41]researchbiologywebQA181
PathVQA [30]graduatepathologytexbooks16.3k
OmnimedVQA* [34]graduatemedicalclassification dataset127.9k
Microbench [50]graduatemicroscopyclassification dataset17.2k
MMMU [87]undergraduategeneraltextbooks, webQA11k
MMMU Pro [88]undergraduategeneralMCQ dataset1,730
Science QA [53]grade-schoolscienceexams16.8k
", + "image_path": "74382198ca3b9ec195519c7bd834f9055a7627c18103536daa7765928d1e68d7.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 316, + 266, + 555, + 520 + ], + "blocks": [ + { + "bbox": [ + 316, + 266, + 555, + 520 + ], + "lines": [ + { + "bbox": [ + 316, + 266, + 555, + 520 + ], + "spans": [ + { + "bbox": [ + 316, + 266, + 555, + 520 + ], + "type": "image", + "image_path": "5295c1b7d26a09eff3d6c5cf0c423b626244b32e9626088d654ec8fc1a9df192.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 313, + 527, + 555, + 593 + ], + "lines": [ + { + "bbox": [ + 313, + 527, + 555, + 593 + ], + "spans": [ + { + "bbox": [ + 313, + 527, + 555, + 593 + ], + "type": "text", + "content": "Figure 3. Composition of scientific MLLM benchmarks regarding estimated Bloom's taxonomy [11]. Higher levels are more cognitively challenging. MicroVQA has more questions at higher levels compared to other benchmarks, for example, MMMU [87] and ScienceQA [53], while perception-driven medical benchmarks like OmniMedVQA are at lower levels." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 618, + 556, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 618, + 556, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 618, + 556, + 715 + ], + "type": "text", + "content": "Experiment proposal The final step is designing experiments to validate hypotheses. In microscopy, this often involves selecting appropriate assays and controls, requiring knowledge of suitable protocols and deductive reasoning about whether proposed experiments will provide evidence for or against the hypothesis. This task also may involve adjusting a particular experiment to address technical issues, like how to ensure a good signal-to-noise ratio," + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 733, + 309, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 309, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 309, + 741 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 294, + 95 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 294, + 95 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 294, + 95 + ], + "type": "text", + "content": "and this requires both image understanding and knowledge of protocols." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 96, + 294, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 96, + 294, + 133 + ], + "spans": [ + { + "bbox": [ + 55, + 96, + 294, + 133 + ], + "type": "text", + "content": "In Appendix F.7.1, we show an example question in each task with an expert-annotated 'reasoning trace' - it shows the type of reasoning required for typical questions." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 141, + 242, + 153 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 141, + 242, + 153 + ], + "spans": [ + { + "bbox": [ + 55, + 141, + 242, + 153 + ], + "type": "text", + "content": "3.3. 
Analysis of MicroVQA benchmark" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 159, + 294, + 231 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 159, + 294, + 231 + ], + "spans": [ + { + "bbox": [ + 55, + 159, + 294, + 231 + ], + "type": "text", + "content": "Scientific discovery requires complex reasoning beyond basic perception and knowledge, and it requires expertise beyond school-level education. Here we show that MicroVQA addresses a gap that prior multimodal scientific benchmarks do not test high level reasoning at research-level difficulty." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 232, + 295, + 387 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 232, + 295, + 387 + ], + "spans": [ + { + "bbox": [ + 55, + 232, + 295, + 387 + ], + "type": "text", + "content": "To assess reasoning levels in MLLM benchmarks, we use Bloom's taxonomy [11], which hierarchically classifies cognitive difficulty from recall to evaluation. We fine-tune GPT-4o to classify Bloom's taxonomy on related MLLM benchmarks with results in Fig. 3; in Tab. 2 we additionally show key attributes of the same benchmarks. Fig. 3 shows that benchmarks having research-level and graduate-level difficulty are dominated by lower reasoning questions (levels 1-2), while our MicroVQA has higher level reasoning (levels 3-4). This is not surprising since OmnimedVQA and MicroBench derive questions from prior classification datasets, while our task definition and data collection approach does reflect high-level reasoning." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 388, + 295, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 388, + 295, + 483 + ], + "spans": [ + { + "bbox": [ + 55, + 388, + 295, + 483 + ], + "type": "text", + "content": "On the other hand, some benchmarks have higher Bloom's reasoning level closer to MicroVQA - MMMU, MMMU-Pro, and Science-QA - but their difficulty level is lower - undergraduate or grade-school. The most comparable benchmark for reasoning is the undergraduate-level MMMU-Pro, and its dataset size is 1,730 compared to our 1,042: this shows that for reasoning-intensive benchmarks, sizes beyond 1,000 are very high." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 495, + 198, + 509 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 495, + 198, + 509 + ], + "spans": [ + { + "bbox": [ + 55, + 495, + 198, + 509 + ], + "type": "text", + "content": "4. MCQ generation process" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 515, + 270, + 528 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 515, + 270, + 528 + ], + "spans": [ + { + "bbox": [ + 55, + 515, + 270, + 528 + ], + "type": "text", + "content": "4.1. Creation of raw VQA samples by experts" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 534, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 534, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 534, + 295, + 713 + ], + "type": "text", + "content": "Expert generation of raw VQA samples Creating VQA triples suitable for research-level reasoning tasks is time-consuming and requires expertise. We recruited 12 human experts to each create approximately 90 VQA samples, taking about 30-40 minutes per sample. Each submission included an image set, a question, a detailed response, and contextual information like experimental context and motivation. 
They are raw VQA triples, " + }, + { + "bbox": [ + 55, + 534, + 295, + 713 + ], + "type": "inline_equation", + "content": "(v_{0},q_{0},a_{0}) -" + }, + { + "bbox": [ + 55, + 534, + 295, + 713 + ], + "type": "text", + "content": " 'raw' means the question and answer can be much longer and more detailed than typical VQA benchmarks. This allowed expert annotators to include enough detail to match what is a valuable input and response for an MLLM. Appendix E shows materials that prompt question creators. The images were sourced from the contributor's own experiments, image databases, or research papers published since 2024, and" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 314, + 72, + 474, + 84 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 72, + 474, + 84 + ], + "spans": [ + { + "bbox": [ + 314, + 72, + 474, + 84 + ], + "type": "text", + "content": "all image licenses permit redistribution." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 101, + 553, + 196 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 101, + 553, + 196 + ], + "spans": [ + { + "bbox": [ + 313, + 101, + 553, + 196 + ], + "type": "text", + "content": "Quality control We had three quality control criteria: questions should be challenging and not solvable with undergraduate-level knowledge; they should be aligned with the defined tasks; and there should be diversity in image content. Questions were reviewed against these criteria for each contributor, and feedback was provided for further VQA samples. These review rounds occurred after submitting the first 10 questions, and the first 40 questions." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 205, + 529, + 217 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 205, + 529, + 217 + ], + "spans": [ + { + "bbox": [ + 313, + 205, + 529, + 217 + ], + "type": "text", + "content": "4.2. MCQ generation from raw VQA samples" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 224, + 553, + 260 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 224, + 553, + 260 + ], + "spans": [ + { + "bbox": [ + 313, + 224, + 553, + 260 + ], + "type": "text", + "content": "To benchmark MLLMs, we transform raw expert-created VQA samples to multiple-choice questions (MCQs) and propose new methods for MCQ generation." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 276, + 555, + 539 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 276, + 555, + 539 + ], + "spans": [ + { + "bbox": [ + 313, + 276, + 555, + 539 + ], + "type": "text", + "content": "Motivation: naive MCQ generation is inadequate The standard approach for VQA benchmarks is to zero-shot prompt an LLM with raw VQA samples [4], but this has two problems. The first is that generated MCQs do not follow established educational MCQ design principles (Appendix E.2), likely because the raw VQA questions and answers are often long and variable. Our stage 1 'exam alignment', which we describe in the next section, addresses this challenge. The second challenge is that generated MCQs do not properly test MLLM capabilities. 
Evaluating GPT-40 on these MCQs scores " + }, + { + "bbox": [ + 313, + 276, + 555, + 539 + ], + "type": "inline_equation", + "content": "93\\%" + }, + { + "bbox": [ + 313, + 276, + 555, + 539 + ], + "type": "text", + "content": " even without the image, despite most of the dataset questions clearly requiring image analysis, and we investigate this by inspecting the chain-of-thought outputs in Fig. 4 and Appendix E.2. Many distractors can be easily eliminated based on general biology knowledge, or because they are too vague compared to the correct option. We hypothesize that MCQ and distractor generation is challenging when the target answer string is long, and where the subject matter is specialized, suggesting that other scientific benchmarks may encounter similar issues. Overall, this motivates a more involved approach to distractor generation." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 546, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 546, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 546, + 555, + 713 + ], + "type": "text", + "content": "Stage 1: Exam-style MCQ generation Here we ensured that multiple-choice questions (MCQs) conformed to established design principles for biomedical exams (Fig. 4). This effort was led by a physician-scientist co-author trained in medical board exam item writing, who reviewed educational literature on MCQ design [64] and Bloom's Taxonomy for assessing cognitive skills in biomedical exams [5, 19, 89]. We began by manually transforming 50 user-submitted raw question-answer pairs, " + }, + { + "bbox": [ + 313, + 546, + 555, + 713 + ], + "type": "inline_equation", + "content": "(q_0, a_0)" + }, + { + "bbox": [ + 313, + 546, + 555, + 713 + ], + "type": "text", + "content": ", into one-best-answer MCQs with " + }, + { + "bbox": [ + 313, + 546, + 555, + 713 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 313, + 546, + 555, + 713 + ], + "type": "text", + "content": " distractors, " + }, + { + "bbox": [ + 313, + 546, + 555, + 713 + ], + "type": "inline_equation", + "content": "(q_1, a_1, \\mathbf{d}_1)" + }, + { + "bbox": [ + 313, + 546, + 555, + 713 + ], + "type": "text", + "content": ". These 'Gold Standard' MCQs retained the original questions' meanings while strictly adhering to NBME standards [64] and minimizing cues that enable test-wise guessing. We used these gold-standard MCQs in a supervised learning" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 80, + 59, + 523, + 355 + ], + "blocks": [ + { + "bbox": [ + 80, + 59, + 523, + 355 + ], + "lines": [ + { + "bbox": [ + 80, + 59, + 523, + 355 + ], + "spans": [ + { + "bbox": [ + 80, + 59, + 523, + 355 + ], + "type": "image", + "image_path": "6963e397abdb5827ff6b3efd8fc096e97daf7d2bd82b69915beb1726b9190dba.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 358, + 555, + 415 + ], + "lines": [ + { + "bbox": [ + 54, + 358, + 555, + 415 + ], + "spans": [ + { + "bbox": [ + 54, + 358, + 555, + 415 + ], + "type": "text", + "content": "Figure 4. 
Constructing the MicroVQA multiple choice questions. (0) We defined tasks with domain biological scientists and created 1,061 raw VQA samples. (1) The raw samples were aligned to an exam-style MCQ by manually transforming a small set and optimizing an LLM prompt to match that alignment. (2) MCQs are further improved using RefineBot, a new iterative method to make MCQs more challenging. The lower panel shows an example MCQ from raw VQA to final: the annotations highlight key changes that we further explore in Appendix E.2, where red indicates issues, and green indicates good attributes." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 426, + 297, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 426, + 297, + 521 + ], + "spans": [ + { + "bbox": [ + 54, + 426, + 297, + 521 + ], + "type": "text", + "content": "setup to develop a general LLM prompt, " + }, + { + "bbox": [ + 54, + 426, + 297, + 521 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 54, + 426, + 297, + 521 + ], + "type": "text", + "content": ", that maps raw question-answer pairs to exam-style MCQs. We leverage the DSPy framework [39, 40], which optimizes " + }, + { + "bbox": [ + 54, + 426, + 297, + 521 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 54, + 426, + 297, + 521 + ], + "type": "text", + "content": " to match the gold-standard MCQ outputs. Additionally, the optimization process incorporated supervision from LLM-based quality metrics: content similarity, NBME-aligned formatting, and the absence of extraneous clues. Further details are in Appendix E.2." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 545, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 545, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 545, + 296, + 714 + ], + "type": "text", + "content": "Stage 2: RefineBot question refinement Although Stage 1 produces exam-aligned MCQs, many remain easily solvable due to language shortcuts (see Appendix D). To enhance difficulty, we introduce RefineBot, a method that increases MCQ complexity (Fig. 4). The key idea is that weaknesses in MCQ construction are revealed by the chain-of-thought (CoT) answers. The first LLM agent, the evaluator/reflector, answers the MCQ with CoT and then reflects on the strategies used. For example, a question about processes inside a virus might have a distractor referring to surface processes – this can be eliminated without referencing the image. The LLM summarizes its solution strategy into a reflection text: in this case, that the distractor was implausible based on knowledge that the stated process" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 313, + 426, + 556, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 426, + 556, + 628 + ], + "spans": [ + { + "bbox": [ + 313, + 426, + 556, + 628 + ], + "type": "text", + "content": "does not not occur inside the virus, and so does not match the question. It passes the reflection to the rewriter LLM agent. The rewriter revises the question stem and generates new distractors to invalidate the identified strategies: in this example it may create distractors referring to processes that do occur inside a virus. To prevent significant changes to the question-answer pair over iterations, we employ an LLM checker to ensure semantic consistency with the original pair. 
The revised MCQ is returned to the evaluator; if it still answers correctly, the loop continues. The process stops if the rewriter fails the checker or after " + }, + { + "bbox": [ + 313, + 426, + 556, + 628 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 313, + 426, + 556, + 628 + ], + "type": "text", + "content": " iterations. If it fails, RefineBot can be rerun with a different seed, often succeeding—a form of inference-time scaling [13]. All agents are GPT-4o-0806, but to mitigate potential bias against 4o in final evaluation, we also use Claude-3.5-Sonnet-0620 as the evaluator agent; we assess biases in the experiments section." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 313, + 641, + 557, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 641, + 557, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 641, + 557, + 715 + ], + "type": "text", + "content": "Final MCQ quality check Since MCQ generation involves LLM processing, we need to verify the correctness of the final answer. Each MCQ is manually reviewed by the same expert who created that particular question (between 80 and 120 per person). For any question with issues, that expert makes minimal changes to the question correct it." + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 733, + 309, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 309, + 742 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 309, + 742 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 71, + 137, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 71, + 137, + 85 + ], + "spans": [ + { + "bbox": [ + 55, + 71, + 137, + 85 + ], + "type": "text", + "content": "5. Experiments" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 90, + 269, + 104 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 90, + 269, + 104 + ], + "spans": [ + { + "bbox": [ + 55, + 90, + 269, + 104 + ], + "type": "text", + "content": "5.1. Benchmarking MLLMs with MicroVQA" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 109, + 296, + 217 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 109, + 296, + 217 + ], + "spans": [ + { + "bbox": [ + 55, + 109, + 296, + 217 + ], + "type": "text", + "content": "We evaluate state-of-the art multimodal large language models (MLLMs) on the MicroVQA benchmark. We include open and closed models spanning the categories: reasoning, large, small, and medical - medical is the closest domain to microscopy with specialist MLLMs. We utilize standard chain-of-thought prompting [88] (deetails in Appendix F). We report mean accuracy in multiple-choice VQA for each tasks Tab. 3. We also perform and analyze no-image Appendix F.5." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 231, + 296, + 459 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 231, + 296, + 459 + ], + "spans": [ + { + "bbox": [ + 56, + 231, + 296, + 459 + ], + "type": "text", + "content": "MicroVQA is challenging for all MLLMs Our evaluation reveals a substantial gap between current MLLMs and the upper bound, with the leading model, o1, at 52.8. 
There is surprisingly little variation in performance between models, with most closed and open models all scoring above 40, with the lowest score (Llama-3.2-11b) likely due to instruction-following issues. A human baseline (Appendix F.3) scores only 50.3 – this shows that biology experts are specialized to subdomains, and are challenged by different subdomains. In fact this demonstrates that experts could benefit from MLLMs that can solve MicroVQA's tasks. The performance variation across the 3 tasks is small – the highest score is 56.4 on 'expert visual understanding (V)', compared to the highest overall of 52.8. The 'hypothesis generation' task is the hardest for all models and the gap is strongest for smaller models. For example the gap between hypothesis generation and expert visual understanding is 3.7 for the strongest large model, but 10.7 on the strongest small model." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 472, + 296, + 676 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 472, + 296, + 676 + ], + "spans": [ + { + "bbox": [ + 55, + 472, + 296, + 676 + ], + "type": "text", + "content": "Smaller models are surprisingly competitive For all models with a lower-parameter equivalent - Gemini-1.5, QwenVL, VILA, and Llama - the drop in performance due to size is less than 3 for all except Llama. Moreover, these drops are smaller than on other multimodal reasoning benchmarks like MMMU, where for example, Gemini drops by 9.6 between Pro and Flash-8B [87]. Typically, smaller models have the same vision encoder, but a smaller LLM [79]. Since the size of the language model has a small impact, we hypothesize that for the 'solved' part of MicroVQA, the language-only reasoning is relatively simple compared to domains like math that require multi-step logic. This suggests that other aspects of MLLM reasoning are more challenging, like multimodal reasoning. If true, this suggests that future work could focus on stronger image representations, and this idea is supported by the qualitative error analysis in Sec. 5.2." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 689, + 296, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 296, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 296, + 715 + ], + "type": "text", + "content": "Specialist biomedical training does improve performance While no specialist MLLMs have been developed" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 71, + 555, + 126 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 71, + 555, + 126 + ], + "spans": [ + { + "bbox": [ + 313, + 71, + 555, + 126 + ], + "type": "text", + "content": "Table 3. VQA accuracy on MicroVQA by task: expert visual understanding (V), hypothesis generation (H), experiment proposal (E). Models marked * were used in MCQ generation, which may affect comparative performance (see Sec. 5.1). The model † is the base LlaVA for LLaVA-Med. Best values are bolded." + } + ] + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 315, + 135, + 553, + 340 + ], + "blocks": [ + { + "bbox": [ + 315, + 135, + 553, + 340 + ], + "lines": [ + { + "bbox": [ + 315, + 135, + 553, + 340 + ], + "spans": [ + { + "bbox": [ + 315, + 135, + 553, + 340 + ], + "type": "table", + "html": "
<table>
<tr><td></td><td>Model</td><td>Overall</td><td>V</td><td>H</td><td>E</td></tr>
<tr><td>reasoning</td><td>o1 [36]</td><td>52.8</td><td>55.4</td><td>50.2</td><td>53.0</td></tr>
<tr><td>large models</td><td>*Claude-3.5-Sonnet [6]</td><td>51.7</td><td>54.1</td><td>50.2</td><td>50.4</td></tr>
<tr><td></td><td>Gemini-Pro-1.5 [69]</td><td>51.1</td><td>52.0</td><td>50.2</td><td>50.9</td></tr>
<tr><td></td><td>Pixtral-Large [2]</td><td>49.8</td><td>50.8</td><td>49.5</td><td>48.7</td></tr>
<tr><td></td><td>Grok-2-Vision [84]</td><td>48.4</td><td>50.3</td><td>46.4</td><td>48.7</td></tr>
<tr><td></td><td>Qwen-2-vl-72b-Instruct [79]</td><td>47.5</td><td>49.2</td><td>45.7</td><td>47.8</td></tr>
<tr><td></td><td>VILA1.5-40b [48]</td><td>47.5</td><td>47.2</td><td>47.9</td><td>47.4</td></tr>
<tr><td></td><td>*GPT-4o [1]</td><td>45.6</td><td>48.7</td><td>43.1</td><td>44.8</td></tr>
<tr><td></td><td>Llama-3.1-Nemotron-70b-Instruct [83]</td><td>44.2</td><td>44.9</td><td>43.3</td><td>44.8</td></tr>
<tr><td></td><td>Llama-3.2-90b-Vision-Instruct [21]</td><td>42.4</td><td>44.9</td><td>42.1</td><td>38.7</td></tr>
<tr><td>small models</td><td>Qwen-2-VL-7b [79]</td><td>48.8</td><td>54.1</td><td>43.3</td><td>49.6</td></tr>
<tr><td></td><td>Claude-3.5-Haiku [6]</td><td>47.1</td><td>48.0</td><td>43.8</td><td>51.7</td></tr>
<tr><td></td><td>Gemini-Flash-1.5-8b [69]</td><td>46.7</td><td>48.7</td><td>43.6</td><td>49.1</td></tr>
<tr><td></td><td>GPT-4o-mini [1]</td><td>46.2</td><td>48.5</td><td>43.6</td><td>47.0</td></tr>
<tr><td></td><td>Pixtral-12b [2]</td><td>45.6</td><td>46.9</td><td>44.8</td><td>44.8</td></tr>
<tr><td></td><td>VILA1.5-13b [48]</td><td>41.8</td><td>41.8</td><td>47.5</td><td>40.9</td></tr>
<tr><td></td><td>Llama-3.2-11b-vision-instruct [21]</td><td>30.3</td><td>32.4</td><td>29.3</td><td>28.7</td></tr>
<tr><td>medical</td><td>LLaVA-Med-Mistral-7B [42]</td><td>43.0</td><td>37.3</td><td>47.1</td><td>41.6</td></tr>
<tr><td></td><td>†LLaVA-Mistral-7B [49]</td><td>39.8</td><td>31.6</td><td>43.1</td><td>37.1</td></tr>
<tr><td></td><td>Random</td><td>22.0</td><td>21.9</td><td>21.8</td><td>21.9</td></tr>
<tr><td></td><td>Human</td><td>50.3</td><td>52.7</td><td>47.5</td><td>51.4</td></tr>
</table>
", + "image_path": "25ea5f9019a11013a86d27e025e57525ea0afa293313e7e67d2c997a88f57ceb.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_body" + } + ], + "index": 7 + }, + { + "type": "table", + "bbox": [ + 329, + 437, + 541, + 522 + ], + "blocks": [ + { + "bbox": [ + 313, + 354, + 555, + 432 + ], + "lines": [ + { + "bbox": [ + 313, + 354, + 555, + 432 + ], + "spans": [ + { + "bbox": [ + 313, + 354, + 555, + 432 + ], + "type": "text", + "content": "Table 4. Ablation study on MicroVQA MCQ generation stages (shown in Fig. 4). Accuracy is high because MCQs have shortcuts (Sec. 4.1) after 'Stage 1' exam alignment, but is lower after 'Stage 2'. Final column is the relative decrease in accuracy. Models with * were used in Stage 2 generation and have the biggest accuracy drops (bolded). They are grouped with different models from the same provider in **." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 329, + 437, + 541, + 522 + ], + "lines": [ + { + "bbox": [ + 329, + 437, + 541, + 522 + ], + "spans": [ + { + "bbox": [ + 329, + 437, + 541, + 522 + ], + "type": "table", + "html": "
<table>
<tr><td>Model</td><td>Stage 1</td><td>Stage 2</td><td>Relative drop</td></tr>
<tr><td>*GPT-4o</td><td>79.7</td><td>46.8</td><td>-41.2</td></tr>
<tr><td>**GPT-4o-mini</td><td>75.6</td><td>46.2</td><td>-39.0</td></tr>
<tr><td>*Claude-3.5-Sonnet</td><td>82.2</td><td>51.7</td><td>-37.1</td></tr>
<tr><td>**Claude-3.5-Haiku</td><td>77.3</td><td>47.1</td><td>-39.0</td></tr>
<tr><td>o1</td><td>81.6</td><td>52.8</td><td>-35.3</td></tr>
<tr><td>Pixtral-Large</td><td>80.1</td><td>49.8</td><td>-37.8</td></tr>
<tr><td>Gemini-Pro-1.5</td><td>79.1</td><td>51.1</td><td>-35.4</td></tr>
</table>
", + "image_path": "0d276a6c3025cbc820c010979d7949d8cc8c3b10e99164ba5034c627462ec952.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 546, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 546, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 546, + 555, + 715 + ], + "type": "text", + "content": "for microscopy research in particular, LLaVA-Med is fine-tuned on scientific articles from PubMed that include data types overlapping with MicroVQA - especially (tissue) pathology. In Tab. 3, we compare LLaVA-Med against it's base model, LLaVA-Mistral-7B and find overall stronger performance by 4.5 points. This suggests two clear opportunities for improving MicroVQA performance. The first is to instruct-tune with a dataset that better aligns with the MicroVQA data domain: not only tissue pathology, but also fluorescence and electron microscopy, which is available in scientific articles [93]. The second is to simply begin training from a stronger open model like Pixtral. Beyond that, there is more to explore in MLLM specialization, such as tuning the base image encoder for microscopy [51]." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 82, + 74, + 272, + 248 + ], + "blocks": [ + { + "bbox": [ + 82, + 74, + 272, + 248 + ], + "lines": [ + { + "bbox": [ + 82, + 74, + 272, + 248 + ], + "spans": [ + { + "bbox": [ + 82, + 74, + 272, + 248 + ], + "type": "image", + "image_path": "fb7ae61d2c764b73e9ea659d9fa49d8dfbcb46b282473ec901f7169f1e274c0a.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 260, + 295, + 293 + ], + "lines": [ + { + "bbox": [ + 55, + 260, + 295, + 293 + ], + "spans": [ + { + "bbox": [ + 55, + 260, + 295, + 293 + ], + "type": "text", + "content": "Figure 5. Performance by sub-task and Bloom's level for best models: Gemini-1.5-Pro (closed source), VILA1.5-40B (open-source), and LlaVA-Med-Mistral-7B (medical)." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 303, + 295, + 542 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 303, + 295, + 542 + ], + "spans": [ + { + "bbox": [ + 56, + 303, + 295, + 542 + ], + "type": "text", + "content": "RefineBot MCQ generation is very effective but introduces small model biases We introduced a new method for refining MCQs to make them more challenging - RefineBot. Tab. 4 shows that between stage 1 and stage 2 (before and after RefineBot), the relative accuracy for all models drops by between " + }, + { + "bbox": [ + 56, + 303, + 295, + 542 + ], + "type": "inline_equation", + "content": "35\\%" + }, + { + "bbox": [ + 56, + 303, + 295, + 542 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 56, + 303, + 295, + 542 + ], + "type": "inline_equation", + "content": "42\\%" + }, + { + "bbox": [ + 56, + 303, + 295, + 542 + ], + "type": "text", + "content": ". Firstly, this validates that RefineBot is a valuable tool for making benchmarks harder. 
However, the most significant drops are for models used by RefineBot (GPT-4o & Claude-3.5-Sonnet, and their smaller versions, GPT-4o-mini & Claude-3-Haiku). The adversarial process introduced a small bias against these models (and we hypothesise the bias would be higher if using only one model instead of two). Despite making fair evaluation slightly more challenging, our use of frontier LLMs in the refinement process is well-motivated, as these models are best positioned to generate challenging, high-quality MCQs that probe the boundaries of current capabilities. Moreover, the bias seems modest - less than 10 points of relative drop - while the benefit is a huge increase in difficulty across all test MLLMs, as seen in Tab. 4." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 556, + 295, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 556, + 295, + 628 + ], + "spans": [ + { + "bbox": [ + 55, + 556, + 295, + 628 + ], + "type": "text", + "content": "Metadata-based analysis supports deeper error analysis Fig. 5 shows error rates based on attribute tag. In Appendix F we explore results further - namely that multi-image reasoning has surprisingly good performance, higher level Bloom's questions are harder, and (not shown) rare image modalities perform worse." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 635, + 216, + 647 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 635, + 216, + 647 + ], + "spans": [ + { + "bbox": [ + 55, + 635, + 216, + 647 + ], + "type": "text", + "content": "5.2. Error analysis on MicroVQA" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 653, + 295, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 653, + 295, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 653, + 295, + 714 + ], + "type": "text", + "content": "To understand model errors, three biomedical experts reviewed the Chain-of-Thought (CoT) responses of 30 samples on Claude 3.5 Sonnet, with each review requiring at least 45 mins of careful analysis. Errors were classified into a major category and minor contributing errors were also" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 72, + 553, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 144 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 144 + ], + "type": "text", + "content": "noted. Expert perception accounted for " + }, + { + "bbox": [ + 313, + 72, + 553, + 144 + ], + "type": "inline_equation", + "content": "50\\%" + }, + { + "bbox": [ + 313, + 72, + 553, + 144 + ], + "type": "text", + "content": " of errors, " + }, + { + "bbox": [ + 313, + 72, + 553, + 144 + ], + "type": "inline_equation", + "content": "30\\%" + }, + { + "bbox": [ + 313, + 72, + 553, + 144 + ], + "type": "text", + "content": " were knowledge errors, and " + }, + { + "bbox": [ + 313, + 72, + 553, + 144 + ], + "type": "inline_equation", + "content": "13\\%" + }, + { + "bbox": [ + 313, + 72, + 553, + 144 + ], + "type": "text", + "content": " were due to overgeneralization; the final " + }, + { + "bbox": [ + 313, + 72, + 553, + 144 + ], + "type": "inline_equation", + "content": "7\\%" + }, + { + "bbox": [ + 313, + 72, + 553, + 144 + ], + "type": "text", + "content": " were text hallucination or general reasoning error. Appendix F.6 shows examples for the most common types. We expand on this analysis with an automated version across the complete dataset in Appendix F.7." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 148, + 555, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 148, + 555, + 281 + ], + "spans": [ + { + "bbox": [ + 313, + 148, + 555, + 281 + ], + "type": "text", + "content": "Expert perception errors occurred when the model misinterpreted visual features, leading to incorrect inferences. For example, in Fig. 13, the model judges the small electron-dense objects as ribosomes, rather than stress granules – ribosomes tend to be smaller, have a more regular shape, and lower contrast. Language bias may have contributed to the preference for ribosomes since they are very commonly studied in EM literature available to LLM training. Overall, perception errors were the dominant error type (50%), which suggests that future methods should improve vision representations in MLLMs for microscopy." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 285, + 556, + 430 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 285, + 556, + 430 + ], + "spans": [ + { + "bbox": [ + 313, + 285, + 556, + 430 + ], + "type": "text", + "content": "Knowledge errors highlight gaps in nuanced biomedical knowledge. One example in Fig. 17 deals with understanding how protein localization changes during cell signaling. An experiment gives information about a transmembrane signaling protein, and the question asks to interpret the image and explain the function in these cells. The model correctly perceives a punctate distribution, but assumes the role of coordinating intracellular vesicle trafficking. This represents a shallow interpretation that could have been improved with specialized biomedical knowledge. These errors could be mitigated by improving knowledge in MLLMs, either in training or as a database at inference." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 434, + 556, + 532 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 434, + 556, + 532 + ], + "spans": [ + { + "bbox": [ + 313, + 434, + 556, + 532 + ], + "type": "text", + "content": "Overgeneralization errors (and simplification errors) reflect the model's tendency to apply broad scientific principles without regard for specific context – arguably they are reasoning errors. In one example in Fig. 21, the question is to compare the risk of malignancy of a tumor, schwannoma, to other nerve sheath tumors. During reasoning, the model proposes a simplified question – what is true about schwannoma – which it answers while ignoring the comparison." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 314, + 556, + 388, + 569 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 556, + 388, + 569 + ], + "spans": [ + { + "bbox": [ + 314, + 556, + 388, + 569 + ], + "type": "text", + "content": "6. Conclusion" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 582, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 582, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 582, + 555, + 715 + ], + "type": "text", + "content": "MLLMs hold great potential for advancing scientific discovery. We aim to inspire the development of broadly applicable scientific AI systems. Future directions include training models for more robust image representations, integrating knowledge bases, exploring open evaluation methods, and studying LLMs' reasoning over hypotheses. 
Beyond microscopy, we hope our benchmark serves as a blueprint for AI evaluation in expert-driven domains. More generally we aspire for different application domains – biomedicine, chemistry, materials science – to share methods towards the common goal of building scientific AI systems." + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "spans": [ + { + "bbox": [ + 56, + 71, + 115, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 91, + 296, + 713 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 61, + 91, + 296, + 145 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 91, + 296, + 145 + ], + "spans": [ + { + "bbox": [ + 61, + 91, + 296, + 145 + ], + "type": "text", + "content": "[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 2, 7" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 148, + 296, + 201 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 148, + 296, + 201 + ], + "spans": [ + { + "bbox": [ + 61, + 148, + 296, + 201 + ], + "type": "text", + "content": "[2] Pravesh Agrawal, Szymon Antoniak, Emma Bou Hanna, Devendra Chaplot, Jessica Chudnovsky, Saurabh Garg, Theophile Gervet, Soham Ghosh, Amélie Héliiou, Paul Jacob, et al. Pixtral 12b. arXiv preprint arXiv:2410.07073, 2024. 7" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 204, + 295, + 269 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 204, + 295, + 269 + ], + "spans": [ + { + "bbox": [ + 62, + 204, + 295, + 269 + ], + "type": "text", + "content": "[3] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in neural information processing systems, 35:23716-23736, 2022. 2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 271, + 295, + 315 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 271, + 295, + 315 + ], + "spans": [ + { + "bbox": [ + 62, + 271, + 295, + 315 + ], + "type": "text", + "content": "[4] Elaf Alhazmi, Quan Z Sheng, Wei Emma Zhang, Munazza Zaib, and Ahoud Alhazmi. Distractor generation for multiple-choice questions: A survey of methods, datasets, and evaluation. arXiv preprint arXiv:2402.01512, 2024. 3, 5" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 317, + 296, + 360 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 317, + 296, + 360 + ], + "spans": [ + { + "bbox": [ + 62, + 317, + 296, + 360 + ], + "type": "text", + "content": "[5] Lorin W. Anderson and David R. Krathwohl, editors. A Taxonomy for Learning, Teaching, and Assessing. 
A Revision of Bloom's Taxonomy of Educational Objectives. Allyn & Bacon, New York, 2 edition, 2001. 5, 8" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 362, + 294, + 384 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 362, + 294, + 384 + ], + "spans": [ + { + "bbox": [ + 62, + 362, + 294, + 384 + ], + "type": "text", + "content": "[6] Anthropic. The claude 3 model family: Opus, sonnet, haiku. Model card, Anthropic, 2024. 2, 7" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 386, + 295, + 430 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 386, + 295, + 430 + ], + "spans": [ + { + "bbox": [ + 62, + 386, + 295, + 430 + ], + "type": "text", + "content": "[7] Nishant Balepur, Abhilasha Ravichander, and Rachel Rudinger. Artifacts or abduction: How do llms answer multiple-choice questions without the question? arXiv preprint arXiv:2402.12483, 2024. 3, 18" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 431, + 295, + 485 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 431, + 295, + 485 + ], + "spans": [ + { + "bbox": [ + 62, + 431, + 295, + 485 + ], + "type": "text", + "content": "[8] Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen-tau Yih, and Yejin Choi. Abductive commonsense reasoning. arXiv preprint arXiv:1908.05739, 2019. 3, 4" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 487, + 295, + 553 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 487, + 295, + 553 + ], + "spans": [ + { + "bbox": [ + 62, + 487, + 295, + 553 + ], + "type": "text", + "content": "[9] Owen Bianchi, Maya Willey, Chelsea X Alvarado, Benjamin Danek, Marzieh Khani, Nicole Kuznetsov, Anant Dadu, Syed Shah, Mathew J Koretsky, Mary B Makarious, et al. Cardbiomedbench: A benchmark for evaluating large language model performance in biomedical research. bioRxiv, pages 2025-01, 2025. 3" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 57, + 555, + 295, + 599 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 555, + 295, + 599 + ], + "spans": [ + { + "bbox": [ + 57, + 555, + 295, + 599 + ], + "type": "text", + "content": "[10] Semere Kiros Bitew, Johannes Deleu, Chris Develder, and Thomas Demeester. Distractor generation for multiple-choice questions with predictive prompting and large language models. arXiv preprint arXiv:2307.16338, 2023. 3" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 57, + 601, + 294, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 601, + 294, + 643 + ], + "spans": [ + { + "bbox": [ + 57, + 601, + 294, + 643 + ], + "type": "text", + "content": "[11] Benjamin S. Bloom. Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: The Cognitive Domain. David McKay Co Inc, New York, 1956. 4, 5" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 646, + 295, + 678 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 646, + 295, + 678 + ], + "spans": [ + { + "bbox": [ + 57, + 646, + 295, + 678 + ], + "type": "text", + "content": "[12] Daniil A Boiko, Robert MacKnight, Ben Kline, and Gabe Gomes. Autonomous chemical research with large language models. Nature, 624(7992):570-578, 2023. 
2" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 57, + 680, + 295, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 680, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 57, + 680, + 295, + 713 + ], + "type": "text", + "content": "[13] Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 713 + ], + "type": "list", + "angle": 0, + "index": 29, + "blocks": [ + { + "bbox": [ + 335, + 73, + 553, + 94 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 335, + 73, + 553, + 94 + ], + "spans": [ + { + "bbox": [ + 335, + 73, + 553, + 94 + ], + "type": "text", + "content": "repeated sampling. arXiv preprint arXiv:2407.21787, 2024.6" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 317, + 96, + 553, + 150 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 96, + 553, + 150 + ], + "spans": [ + { + "bbox": [ + 317, + 96, + 553, + 150 + ], + "type": "text", + "content": "[14] James Burgess, Jeffrey J Nirschl, Maria-Clara Zanellati, Alejandro Lozano, Sarah Cohen, and Serena Yeung-Levy. Orientation-invariant autoencoders learn robust representations for shape profiling of cells and organelles. Nature Communications, 15(1):1022, 2024. 7" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 317, + 152, + 553, + 206 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 152, + 553, + 206 + ], + "spans": [ + { + "bbox": [ + 317, + 152, + 553, + 206 + ], + "type": "text", + "content": "[15] Hengxing Cai, Xiaochen Cai, Junhan Chang, Sihang Li, Lin Yao, Changxin Wang, Zhifeng Gao, Yongge Li, Mujie Lin, Shuwen Yang, et al. Sciassess: Benchmarking llm proficiency in scientific literature analysis. arXiv preprint arXiv:2403.01976, 2024. 3" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 209, + 553, + 240 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 209, + 553, + 240 + ], + "spans": [ + { + "bbox": [ + 316, + 209, + 553, + 240 + ], + "type": "text", + "content": "[16] Anne E Carpenter, Beth A Cimini, and Kevin W Eliceiri. Smart microscopes of the future. nature methods, 20(7):962-964, 2023. 2" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 317, + 243, + 553, + 297 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 243, + 553, + 297 + ], + "spans": [ + { + "bbox": [ + 317, + 243, + 553, + 297 + ], + "type": "text", + "content": "[17] Nathan H Cho, Keith C Cheveralls, Andreas-David Brunner, Kibeom Kim, André C Michaelis, Preethi Raghavan, Hirofumi Kobayashi, Laura Savy, Jason Y Li, Hera Canaj, et al. Opencell: Endogenous tagging for the cartography of human cellular organization. Science, 375(6585):eabi6983, 2022. 7" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 317, + 299, + 553, + 363 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 299, + 553, + 363 + ], + "spans": [ + { + "bbox": [ + 317, + 299, + 553, + 363 + ], + "type": "text", + "content": "[18] Kenneth Clark, Bruce Vendt, Kirk Smith, John Freymann, Justin Kirby, Paul Koppel, Stephen Moore, Stanley Phillips, David Maffitt, Michael Pringle, et al. The cancer imaging archive (tcia): maintaining and operating a public information repository. Journal of digital imaging, 26:1045-1057, 2013. 
7" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 317, + 365, + 553, + 408 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 365, + 553, + 408 + ], + "spans": [ + { + "bbox": [ + 317, + 365, + 553, + 408 + ], + "type": "text", + "content": "[19] Alison Crowe, Clarissa Dirks, and Mary Pat Wenderoth. Biology in bloom: implementing bloom's taxonomy to enhance student learning in biology. CBE Life Sci. Educ., 7 (4):368-381, 2008. 5, 8" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 317, + 411, + 459, + 421 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 411, + 459, + 421 + ], + "spans": [ + { + "bbox": [ + 317, + 411, + 459, + 421 + ], + "type": "text", + "content": "[20] Igor Douven. Abduction, 2011. 4" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 317, + 423, + 553, + 475 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 423, + 553, + 475 + ], + "spans": [ + { + "bbox": [ + 317, + 423, + 553, + 475 + ], + "type": "text", + "content": "[21] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. 7" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 317, + 479, + 553, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 479, + 553, + 533 + ], + "spans": [ + { + "bbox": [ + 317, + 479, + 553, + 533 + ], + "type": "text", + "content": "[22] Kehua Feng, Keyan Ding, Weijie Wang, Xiang Zhuang, Zeyuan Wang, Ming Qin, Yu Zhao, Jianhua Yao, Qiang Zhang, and Huajun Chen. Sciknoweval: Evaluating multilevel scientific knowledge of large language models. arXiv preprint arXiv:2406.09098, 2024. 3" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 317, + 535, + 553, + 589 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 535, + 553, + 589 + ], + "spans": [ + { + "bbox": [ + 317, + 535, + 553, + 589 + ], + "type": "text", + "content": "[23] Wanyong Feng, Jaewook Lee, Hunter McNichols, Alexander Scarlatos, Digory Smith, Simon Woodhead, Nancy Otero Ornelas, and Andrew Lan. Exploring automated distractor generation for math multiple-choice questions via large language models. arXiv preprint arXiv:2404.02124, 2024. 3" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 317, + 591, + 553, + 634 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 591, + 553, + 634 + ], + "spans": [ + { + "bbox": [ + 317, + 591, + 553, + 634 + ], + "type": "text", + "content": "[24] Nigel Fernandez, Alexander Scarlatos, Simon Woodhead, and Andrew Lan. Divert: Distractor generation with variational errors represented as text for math multiple-choice questions. arXiv preprint arXiv:2406.19356, 2024. 3" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 317, + 635, + 553, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 635, + 553, + 689 + ], + "spans": [ + { + "bbox": [ + 317, + 635, + 553, + 689 + ], + "type": "text", + "content": "[25] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, et al. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394, 2023. 
3" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 316, + 692, + 553, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 692, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 692, + 553, + 713 + ], + "type": "text", + "content": "[26] Shanghua Gao, Ada Fang, Yepeng Huang, Valentina Giunchiglia, Ayush Noori, Jonathan Richard Schwarz, Yasha" + } + ] + } + ], + "index": 28 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 309, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 309, + 742 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 309, + 742 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 30 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 295, + 713 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 76, + 72, + 295, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 72, + 295, + 106 + ], + "spans": [ + { + "bbox": [ + 76, + 72, + 295, + 106 + ], + "type": "text", + "content": "Ektefaie, Jovana Kondic, and Marinka Zitnik. Empowering biomedical discovery with ai agents. arXiv preprint arXiv:2404.02831, 2024. 2, 3" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 106, + 295, + 150 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 106, + 295, + 150 + ], + "spans": [ + { + "bbox": [ + 56, + 106, + 295, + 150 + ], + "type": "text", + "content": "[27] Mark J Gierl, Okan Bulut, Qi Guo, and Xinxin Zhang. Developing, analyzing, and using distractors for multiple-choice tests in education: A comprehensive review. Review of educational research, 87(6):1082-1116, 2017. 3, 8" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 150, + 294, + 195 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 150, + 294, + 195 + ], + "spans": [ + { + "bbox": [ + 56, + 150, + 294, + 195 + ], + "type": "text", + "content": "[28] Juraj Gottweis, Wei-Hung Weng, Alexander Daryin, Tao Tu, Anil Palepu, Petar Sirkovic, Artiom Myaskovsky, Felix Weissenberger, Keran Rong, Ryutaro Tanno, et al. Towards an ai co-scientist. arXiv preprint arXiv:2502.18864, 2025. 2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 195, + 294, + 259 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 195, + 294, + 259 + ], + "spans": [ + { + "bbox": [ + 56, + 195, + 294, + 259 + ], + "type": "text", + "content": "[29] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904-6913, 2017. 3, 13, 18" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 261, + 294, + 304 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 261, + 294, + 304 + ], + "spans": [ + { + "bbox": [ + 56, + 261, + 294, + 304 + ], + "type": "text", + "content": "[30] Xuehai He, Yichen Zhang, Luntian Mou, Eric Xing, and Pengtao Xie. Pathvqa: 30000+ questions for medical visual question answering. arXiv preprint arXiv:2003.10286, 2020. 
3, 4" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 304, + 294, + 360 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 304, + 294, + 360 + ], + "spans": [ + { + "bbox": [ + 56, + 304, + 294, + 360 + ], + "type": "text", + "content": "[31] Marco Y Hein, Duo Peng, Verina Todorova, Frank McCarthy, Kibeom Kim, Chad Liu, Laura Savy, Camille Januel, Rodrigo Baltazar-Nunez, Madhurya Sekhar, et al. Global organelle profiling reveals subcellular localization and remodeling at proteome scale. Cell, 188(4):1137-1155, 2025. 7" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 360, + 294, + 415 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 360, + 294, + 415 + ], + "spans": [ + { + "bbox": [ + 56, + 360, + 294, + 415 + ], + "type": "text", + "content": "[32] Jack Hessel, Jena D Hwang, Jae Sung Park, Rowan Zellers, Chandra Bhagavatula, Anna Rohrbach, Kate Saenko, and Yejin Choi. The abduction of sherlock holmes: A dataset for visual abductive reasoning. In European Conference on Computer Vision, pages 558-575. Springer, 2022. 3, 4" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 415, + 294, + 459 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 415, + 294, + 459 + ], + "spans": [ + { + "bbox": [ + 56, + 415, + 294, + 459 + ], + "type": "text", + "content": "[33] Tom Hope, Doug Downey, Daniel S Weld, Oren Etzioni, and Eric Horvitz. A computational inflection for scientific discovery. Communications of the ACM, 66(8):62-73, 2023. 2, 3, 4, 6" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 459, + 294, + 525 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 459, + 294, + 525 + ], + "spans": [ + { + "bbox": [ + 56, + 459, + 294, + 525 + ], + "type": "text", + "content": "[34] Yutao Hu, Tianbin Li, Quanfeng Lu, Wenqi Shao, Junjun He, Yu Qiao, and Ping Luo. Omnimedvqa: A new large-scale comprehensive evaluation benchmark for medical lvm. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22170-22183, 2024. 3, 4, 2, 7" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 525, + 294, + 581 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 525, + 294, + 581 + ], + "spans": [ + { + "bbox": [ + 56, + 525, + 294, + 581 + ], + "type": "text", + "content": "[35] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6700-6709, 2019. 3" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 581, + 294, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 581, + 294, + 624 + ], + "spans": [ + { + "bbox": [ + 56, + 581, + 294, + 624 + ], + "type": "text", + "content": "[36] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024. 7" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 624, + 294, + 669 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 624, + 294, + 669 + ], + "spans": [ + { + "bbox": [ + 56, + 624, + 294, + 669 + ], + "type": "text", + "content": "[37] Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W Cohen, and Xinghua Lu. Pubmedqa: A dataset for biomedical research question answering. 
arXiv preprint arXiv:1909.06146, 2019. 3" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 669, + 294, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 669, + 294, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 669, + 294, + 713 + ], + "type": "text", + "content": "[38] Yeonghun Kang and Jihan Kim. ChatMOF: an artificial intelligence system for predicting and generating metal-organic frameworks using large language models. Nat. Commun., 15 (1):4705, 2024. 2" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 72, + 554, + 715 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 316, + 72, + 554, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 72, + 554, + 127 + ], + "spans": [ + { + "bbox": [ + 316, + 72, + 554, + 127 + ], + "type": "text", + "content": "[39] Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive NLP. arXiv preprint arXiv:2212.14024, 2022. 6" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 128, + 554, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 128, + 554, + 205 + ], + "spans": [ + { + "bbox": [ + 316, + 128, + 554, + 205 + ], + "type": "text", + "content": "[40] Omar Khattab, Arnav Singhvi, Paridhi Maheshwari, Zhiyuan Zhang, Keshav Santhanam, Sri Vardhamanan, Saiful Haq, Ashutosh Sharma, Thomas T. Joshi, Hanna Moazam, Heather Miller, Matei Zaharia, and Christopher Potts. Dspy: Compiling declarative language model calls into self-improving pipelines. arXiv preprint arXiv:2310.03714, 2023.6" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 206, + 554, + 273 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 206, + 554, + 273 + ], + "spans": [ + { + "bbox": [ + 316, + 206, + 554, + 273 + ], + "type": "text", + "content": "[41] Jon M Laurent, Joseph D Janizek, Michael Ruzo, Michaela M Hinks, Michael J Hammerling, Siddharth Narayanan, Manvitha Ponnapati, Andrew D White, and Samuel G Rodriques. Lab-bench: Measuring capabilities of language models for biology research. arXiv preprint arXiv:2407.10362, 2024. 2, 3, 4" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 274, + 554, + 329 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 274, + 554, + 329 + ], + "spans": [ + { + "bbox": [ + 316, + 274, + 554, + 329 + ], + "type": "text", + "content": "[42] Chunyuan Li, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, and Jianfeng Gao. Llava-med: Training a large language-and-vision assistant for biomedicine in one day. Advances in Neural Information Processing Systems, 36, 2024. 3, 7" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 329, + 554, + 374 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 329, + 554, + 374 + ], + "spans": [ + { + "bbox": [ + 316, + 329, + 554, + 374 + ], + "type": "text", + "content": "[43] Junxian Li, Di Zhang, Xunzhi Wang, Zeying Hao, Jingdi Lei, Qian Tan, Cai Zhou, Wei Liu, Yaotian Yang, Xinrui Xiong, et al. Chemvlm: Exploring the power of multimodal large language models in chemistry area. CoRR, 2024. 
2, 3" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 374, + 554, + 409 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 374, + 554, + 409 + ], + "spans": [ + { + "bbox": [ + 316, + 374, + 554, + 409 + ], + "type": "text", + "content": "[44] Lin Li, Guikun Chen, Hanrong Shi, Jun Xiao, and Long Chen. A survey on multimodal benchmarks: In the era of large ai models. arXiv preprint arXiv:2409.18142, 2024. 3" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 409, + 554, + 454 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 409, + 554, + 454 + ], + "spans": [ + { + "bbox": [ + 316, + 409, + 554, + 454 + ], + "type": "text", + "content": "[45] Lei Li, Yuqi Wang, Runxin Xu, Peiyi Wang, Xiachong Feng, Lingpeng Kong, and Qi Liu. Multimodal arxiv: A dataset for improving scientific comprehension of large vision-language models. arXiv preprint arXiv:2403.00231, 2024. 3" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 454, + 554, + 488 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 454, + 554, + 488 + ], + "spans": [ + { + "bbox": [ + 316, + 454, + 554, + 488 + ], + "type": "text", + "content": "[46] Shengzhi Li and Nima Tajbakhsh. Scigraphqa: A large-scale synthetic multi-turn question-answering dataset for scientific graphs. arXiv preprint arXiv:2308.03349, 2023." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 488, + 554, + 544 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 488, + 554, + 544 + ], + "spans": [ + { + "bbox": [ + 316, + 488, + 554, + 544 + ], + "type": "text", + "content": "[47] Zekun Li, Xianjun Yang, Kyuri Choi, Wanrong Zhu, Ryan Hsieh, HyeonJung Kim, Jin Hyuk Lim, Sungyoung Ji, Byungju Lee, Xifeng Yan, et al. Mmsci: A multimodal multi-discipline dataset for phd-level scientific comprehension. arXiv preprint arXiv:2407.04903, 2024. 2, 3, 4" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 544, + 554, + 599 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 544, + 554, + 599 + ], + "spans": [ + { + "bbox": [ + 316, + 544, + 554, + 599 + ], + "type": "text", + "content": "[48] Ji Lin, Hongxu Yin, Wei Ping, Pavlo Molchanov, Mohammad Shoeybi, and Song Han. Vila: On pre-training for visual language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26689-26699, 2024. 7" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 599, + 554, + 634 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 599, + 554, + 634 + ], + "spans": [ + { + "bbox": [ + 316, + 599, + 554, + 634 + ], + "type": "text", + "content": "[49] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36, 2024. 2, 7" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 634, + 554, + 690 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 634, + 554, + 690 + ], + "spans": [ + { + "bbox": [ + 316, + 634, + 554, + 690 + ], + "type": "text", + "content": "[50] Alejandro Lozano, Jeffrey Nirschl, James Burgess, Sanket Rajan Gupte, Yuhui Zhang, Alyssa Unell, and Serena Yeung. Micro-bench: A microscopy benchmark for vision-language understanding. Advances in Neural Information Processing Systems, 37:30670-30685, 2024. 
2, 3, 4" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 690, + 554, + 715 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 690, + 554, + 715 + ], + "spans": [ + { + "bbox": [ + 316, + 690, + 554, + 715 + ], + "type": "text", + "content": "[51] Alejandro Lozano, Min Woo Sun, James Burgess, Liangyu Chen, Jeffrey J Nirschl, Jeffrey Gu, Ivan Lopez, Josiah" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 295, + 712 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 76, + 72, + 294, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 72, + 294, + 116 + ], + "spans": [ + { + "bbox": [ + 76, + 72, + 294, + 116 + ], + "type": "text", + "content": "Aklilu, Austin Wolfgang Katzer, Collin Chiu, et al. Biomedica: An open biomedical image-caption archive, dataset, and vision-language models derived from scientific literature. arXiv preprint arXiv:2501.07171, 2025. 7" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 118, + 295, + 204 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 118, + 295, + 204 + ], + "spans": [ + { + "bbox": [ + 56, + 118, + 295, + 204 + ], + "type": "text", + "content": "[52] Ming Y \"Lu, Bowen Chen, Drew F K Williamson, Richard J Chen, Melissa Zhao, Aaron K Chow, Kenji Ikemura, Ahrong Kim, Dimitra Pouli, Ankush Patel, Amr Soliman, Chengkuan Chen, Tong Ding, Judy J Wang, Georg Gerber, Ivy Liang, Long Phi Le, Anil V Parwani, Luca L Weishaupt, and Faisal\" Mahmood. \"a multimodal generative AI copilot for human pathology\". \"Nature\", 634(8033):466-473\", 2024. 2" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 205, + 294, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 205, + 294, + 270 + ], + "spans": [ + { + "bbox": [ + 56, + 205, + 294, + 270 + ], + "type": "text", + "content": "[53] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507-2521, 2022. 2, 3, 4" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 272, + 295, + 326 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 272, + 295, + 326 + ], + "spans": [ + { + "bbox": [ + 56, + 272, + 295, + 326 + ], + "type": "text", + "content": "[54] Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255, 2023. 3" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 327, + 295, + 370 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 327, + 295, + 370 + ], + "spans": [ + { + "bbox": [ + 56, + 327, + 295, + 370 + ], + "type": "text", + "content": "[55] Andres M. 
Bran, Sam Cox, Oliver Schilter, Carlo Baldassari, Andrew D White, and Philippe Schwaller. Augmenting large language models with chemistry tools. Nature Machine Intelligence, pages 1-11, 2024. 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 372, + 295, + 426 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 372, + 295, + 426 + ], + "spans": [ + { + "bbox": [ + 56, + 372, + 295, + 426 + ], + "type": "text", + "content": "[56] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pages 3195-3204, 2019. 3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 426, + 295, + 491 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 426, + 295, + 491 + ], + "spans": [ + { + "bbox": [ + 56, + 426, + 295, + 491 + ], + "type": "text", + "content": "[57] M Maruf, Arka Daw, Kazi Sajeed Mehrab, Harish Babu Manogaran, Abhilash Neog, Medha Sawhney, Mridul Khurana, James P Balhoff, Yasin Bakis, Bahadir Altintas, et al. Vlm4bio: A benchmark dataset to evaluate pretrained vision-language models for trait discovery from biological images. arXiv preprint arXiv:2408.16176, 2024. 2, 3" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 493, + 295, + 557 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 493, + 295, + 557 + ], + "spans": [ + { + "bbox": [ + 56, + 493, + 295, + 557 + ], + "type": "text", + "content": "[58] Adrian Mirza, Nawaf Alampara, Sreekanth Kunchapu, Benedict Emoekabu, Aswanth Krishnan, Mara Wilhelmi, Macjonathan Okereke, Juliane Eberhardt, Amir Mohammad Elahi, Maximilian Greiner, et al. Are large language models superhuman chemists? arXiv preprint arXiv:2404.01475, 2024.3" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 559, + 295, + 612 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 559, + 295, + 612 + ], + "spans": [ + { + "bbox": [ + 56, + 559, + 295, + 612 + ], + "type": "text", + "content": "[59] Michael Moor, Oishi Banerjee, Zahra Shakeri Hossein Abad, Harlan M Krumholz, Jure Leskovec, Eric J Topol, and Pranav Rajpurkar. Foundation models for generalist medical artificial intelligence. Nature, 616(7956):259-265, 2023. 3" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 614, + 295, + 669 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 614, + 295, + 669 + ], + "spans": [ + { + "bbox": [ + 56, + 614, + 295, + 669 + ], + "type": "text", + "content": "[60] Siddharth Narayanan, James D Braza, Ryan-Rhys Griffiths, Manu Ponnapati, Albert Bou, Jon Laurent, Ori Kabeli, Geemi Wellawatte, Sam Cox, Samuel G Rodriques, et al. Aviary: training language agents on challenging scientific tasks. arXiv preprint arXiv:2412.21154, 2024. 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 670, + 295, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 670, + 295, + 712 + ], + "spans": [ + { + "bbox": [ + 56, + 670, + 295, + 712 + ], + "type": "text", + "content": "[61] Shraman Pramanick, Rama Chellappa, and Subhashini Venugopalan. Spiqa: A dataset for multimodal question answering on scientific papers. arXiv preprint arXiv:2407.09413, 2024. 
3" + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 712 + ], + "type": "list", + "angle": 0, + "index": 24, + "blocks": [ + { + "bbox": [ + 316, + 73, + 553, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 553, + 127 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 553, + 127 + ], + "type": "text", + "content": "[62] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. arXiv preprint arXiv:2311.12022, 2023. 3" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 316, + 129, + 553, + 162 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 129, + 553, + 162 + ], + "spans": [ + { + "bbox": [ + 316, + 129, + 553, + 162 + ], + "type": "text", + "content": "[63] Loic A Royer. Omega—harnessing the power of large language models for bioimage analysis. nature methods, 21(8): 1371-1373, 2024. 2, 3" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 316, + 163, + 553, + 218 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 163, + 553, + 218 + ], + "spans": [ + { + "bbox": [ + 316, + 163, + 553, + 218 + ], + "type": "text", + "content": "[64] Billings Melissa S, DeRuchie Kristine, Hussie Kieran, Kulesher Allison, Merrell Jacquelyn, Morales Amy, Panaguaa Miguel A, Sherlock Jennifer, Swygert Kimberly, and Tyson Julie. Nbme item-writing guide: Constructing written test questions for the health sciences, 2021. 5, 8" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 220, + 553, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 220, + 553, + 274 + ], + "spans": [ + { + "bbox": [ + 316, + 220, + 553, + 274 + ], + "type": "text", + "content": "[65] Khaled Saab, Tao Tu, Wei-Hung Weng, Ryutaro Tanno, David Stutz, Ellery Wulczyn, Fan Zhang, Tim Strother, Chunjong Park, Elahe Vedadi, et al. Capabilities of gemini models in medicine. arXiv preprint arXiv:2404.18416, 2024.3" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 276, + 553, + 331 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 276, + 553, + 331 + ], + "spans": [ + { + "bbox": [ + 316, + 276, + 553, + 331 + ], + "type": "text", + "content": "[66] Michael D Skarlinski, Sam Cox, Jon M Laurent, James D Braza, Michaela Hinks, Michael J Hammerling, Manvitha Ponnapati, Samuel G Rodriques, and Andrew D White. Language agents achieve superhuman synthesis of scientific knowledge. arXiv preprint arXiv:2409.13740, 2024. 2, 3" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 332, + 553, + 387 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 332, + 553, + 387 + ], + "spans": [ + { + "bbox": [ + 316, + 332, + 553, + 387 + ], + "type": "text", + "content": "[67] Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, and Kai Yu. Scieval: A multi-level large language model evaluation benchmark for scientific research. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 19053-19061, 2024. 
3" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 388, + 553, + 432 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 388, + 553, + 432 + ], + "spans": [ + { + "bbox": [ + 316, + 388, + 553, + 432 + ], + "type": "text", + "content": "[68] Kyle Swanson, Wesley Wu, Nash L Bulaong, John E Pak, and James Zou. The virtual lab: Ai agents design new sarscov-2 nanobodies with experimental validation. *bioRxiv*, pages 2024–11, 2024. 2" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 434, + 553, + 488 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 434, + 553, + 488 + ], + "spans": [ + { + "bbox": [ + 316, + 434, + 553, + 488 + ], + "type": "text", + "content": "[69] Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023. 2, 7" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 490, + 553, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 490, + 553, + 533 + ], + "spans": [ + { + "bbox": [ + 316, + 490, + 553, + 533 + ], + "type": "text", + "content": "[70] Surendrabikram Thapa and Surabhi Adhikari. Chatgpt, bard, and large language models for biomedical research: opportunities and pitfalls. Annals of biomedical engineering, 51 (12):2647-2651, 2023. 2" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 535, + 553, + 601 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 535, + 553, + 601 + ], + "spans": [ + { + "bbox": [ + 316, + 535, + 553, + 601 + ], + "type": "text", + "content": "[71] Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, and Candace Ross. Winoground: Probing vision and language models for visio-linguistic compositionality. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5238–5248, 2022. 3" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 602, + 553, + 656 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 602, + 553, + 656 + ], + "spans": [ + { + "bbox": [ + 316, + 602, + 553, + 656 + ], + "type": "text", + "content": "[72] Peter J Thul, Lovisa Åkesson, Mikaela Wiking, Diana Mahdessian, Aikaterini Geladaki, Hammou Ait Blal, Tove Alm, Anna Asplund, Lars Björk, Lisa M Breckels, et al. A subcellular map of the human proteome. Science, 356(6340): eaal3321, 2017. 7" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 658, + 553, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 658, + 553, + 712 + ], + "spans": [ + { + "bbox": [ + 316, + 658, + 553, + 712 + ], + "type": "text", + "content": "[73] Minyang Tian, Luyu Gao, Shizhuo Dylan Zhang, Xinan Chen, Cunwei Fan, Xuefei Guo, Roland Haas, Pan Ji, Kittithat Krongchon, Yao Li, et al. Scicode: A research coding benchmark curated by scientists. arXiv preprint arXiv:2407.13168, 2024. 
2, 3" + } + ] + } + ], + "index": 23 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 732, + 310, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 732, + 310, + 742 + ], + "spans": [ + { + "bbox": [ + 300, + 732, + 310, + 742 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 25 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 713 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 56, + 73, + 294, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 73, + 294, + 127 + ], + "spans": [ + { + "bbox": [ + 56, + 73, + 294, + 127 + ], + "type": "text", + "content": "[74] Shengbang Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Manoj Middepogu, Sai Charitha Akula, Jihan Yang, Shusheng Yang, Adithya Iyer, Xichen Pan, et al. Cambrian1: A fully open, vision-centric exploration of multimodal llms. arXiv preprint arXiv:2406.16860, 2024. 3, 18" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 128, + 294, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 128, + 294, + 183 + ], + "spans": [ + { + "bbox": [ + 56, + 128, + 294, + 183 + ], + "type": "text", + "content": "[75] Tao Tu, Shekoofeh Azizi, Danny Driess, Mike Schaekermann, Mohamed Amin, Pi-Chuan Chang, Andrew Carroll, Charles Lau, Ryutaro Tanno, Ira Ktena, et al. Towards generalist biomedical ai. NEJM AI, 1(3):Aioa2300138, 2024. 3" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 184, + 294, + 237 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 184, + 294, + 237 + ], + "spans": [ + { + "bbox": [ + 56, + 184, + 294, + 237 + ], + "type": "text", + "content": "[76] Matheus P Viana, Jianxu Chen, Theo A Knijnenburg, Ritvik Vasan, Calysta Yan, Joy E Arakaki, Matte Bailey, Ben Berry, Antoine Borensztejn, Eva M Brown, et al. Integrated intracellular organization and its variations in human ips cells. Nature, 613(7943):345-354, 2023. 7" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 239, + 294, + 303 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 239, + 294, + 303 + ], + "spans": [ + { + "bbox": [ + 56, + 239, + 294, + 303 + ], + "type": "text", + "content": "[77] David Wadden, Kejian Shi, Jacob Morrison, Aakanksha Naik, Shruti Singh, Nitzan Barzilay, Kyle Lo, Tom Hope, Luca Soldaini, Shannon Zejiang Shen, et al. Scirff: A resource to enhance language model instruction-following over scientific literature. arXiv preprint arXiv:2406.07835, 2024. 2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 304, + 294, + 358 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 304, + 294, + 358 + ], + "spans": [ + { + "bbox": [ + 56, + 304, + 294, + 358 + ], + "type": "text", + "content": "[78] Fei Wang, Xingyu Fu, James Y Huang, Zekun Li, Qin Liu, Xiaogeng Liu, Mingyu Derek Ma, Nan Xu, Wenxuan Zhou, Kai Zhang, et al. Muirbench: A comprehensive benchmark for robust multi-image understanding. arXiv preprint arXiv:2406.09411, 2024. 
17" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 360, + 294, + 414 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 360, + 294, + 414 + ], + "spans": [ + { + "bbox": [ + 56, + 360, + 294, + 414 + ], + "type": "text", + "content": "[79] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. 7" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 415, + 294, + 470 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 415, + 294, + 470 + ], + "spans": [ + { + "bbox": [ + 56, + 415, + 294, + 470 + ], + "type": "text", + "content": "[80] Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R Loomba, Shichang Zhang, Yizhou Sun, and Wei Wang. Scibench: Evaluating college-level scientific problem-solving abilities of large language models. arXiv preprint arXiv:2307.10635, 2023. 3" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 471, + 294, + 536 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 471, + 294, + 536 + ], + "spans": [ + { + "bbox": [ + 56, + 471, + 294, + 536 + ], + "type": "text", + "content": "[81] Yiqi Wang, Wentao Chen, Xiaotian Han, Xudong Lin, Haiteng Zhao, Yongfei Liu, Bohan Zhai, Jianbo Yuan, Quanzeng You, and Hongxia Yang. Exploring the reasoning abilities of multimodal large language models (mllms): A comprehensive survey on emerging trends in multimodal reasoning. arXiv preprint arXiv:2401.06805, 2024. 2, 3, 4" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 537, + 294, + 591 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 537, + 294, + 591 + ], + "spans": [ + { + "bbox": [ + 56, + 537, + 294, + 591 + ], + "type": "text", + "content": "[82] Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, et al. Mmlu-pro: A more robust and challenging multi-task language understanding benchmark. arXiv preprint arXiv:2406.01574, 2024. 7" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 592, + 294, + 635 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 592, + 294, + 635 + ], + "spans": [ + { + "bbox": [ + 56, + 592, + 294, + 635 + ], + "type": "text", + "content": "[83] Zhilin Wang, Alexander Bukharin, Olivier Delalleau, Daniel Egert, Gerald Shen, Jiaqi Zeng, Oleksii Kuchaiev, and Yi Dong. Helpsteer2-preference: Complementing ratings with preferences. arXiv preprint arXiv:2410.01257, 2024. 7" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 636, + 294, + 657 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 636, + 294, + 657 + ], + "spans": [ + { + "bbox": [ + 56, + 636, + 294, + 657 + ], + "type": "text", + "content": "[84] xAI. Bringing grok to everyone, 2024. Accessed: 2025-03-13. 7" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 658, + 294, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 658, + 294, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 658, + 294, + 713 + ], + "type": "text", + "content": "[85] Tong Xie, Yuwei Wan, Wei Huang, Yufei Zhou, Yixuan Liu, Qingyuan Linghu, Shaozhou Wang, Chunyu Kit, Clara Grazian, Wenjie Zhang, et al. 
Large language models as master key: unlocking the secrets of materials science with gpt. arXiv preprint arXiv:2304.02213, 2023. 2" + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 696 + ], + "type": "list", + "angle": 0, + "index": 24, + "blocks": [ + { + "bbox": [ + 316, + 73, + 553, + 128 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 553, + 128 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 553, + 128 + ], + "type": "text", + "content": "[86] Kaining Ying, Fanqing Meng, Jin Wang, Zhiqian Li, Han Lin, Yue Yang, Hao Zhang, Wenbo Zhang, Yuqi Lin, Shuo Liu, et al. Mmt-bench: A comprehensive multimodal benchmark for evaluating large vision-language models towards multitask agi. arXiv preprint arXiv:2404.16006, 2024. 2" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 316, + 129, + 553, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 129, + 553, + 205 + ], + "spans": [ + { + "bbox": [ + 316, + 129, + 553, + 205 + ], + "type": "text", + "content": "[87] Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9556-9567, 2024. 2, 3, 4, 7" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 206, + 553, + 261 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 206, + 553, + 261 + ], + "spans": [ + { + "bbox": [ + 316, + 206, + 553, + 261 + ], + "type": "text", + "content": "[88] Xiang Yue, Tianyu Zheng, Yuansheng Ni, Yubo Wang, Kai Zhang, Shengbang Tong, Yuxuan Sun, Ming Yin, Botao Yu, Ge Zhang, et al. Mmmu-pro: A more robust multidiscipline multimodal understanding benchmark. arXiv preprint arXiv:2409.02813, 2024. 3, 4, 7, 15" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 262, + 553, + 305 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 262, + 553, + 305 + ], + "spans": [ + { + "bbox": [ + 316, + 262, + 553, + 305 + ], + "type": "text", + "content": "[89] Nikki B Zaidi, Charles Hwang, Sara Scott, Stefanie Stallard, Joel Purkiss, and Michael Hortsch. Climbing bloom's taxonomy pyramid: Lessons from a graduate histology course. Anat. Sci. Educ., 10(5):456-464, 2017. 5, 8" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 307, + 553, + 361 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 307, + 553, + 361 + ], + "spans": [ + { + "bbox": [ + 316, + 307, + 553, + 361 + ], + "type": "text", + "content": "[90] Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6720-6731, 2019. 3" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 363, + 553, + 417 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 363, + 553, + 417 + ], + "spans": [ + { + "bbox": [ + 316, + 363, + 553, + 417 + ], + "type": "text", + "content": "[91] Chi Zhang, Feng Gao, Baoxiong Jia, Yixin Zhu, and Song-Chun Zhu. Raven: A dataset for relational and analogical visual reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5317-5327, 2019. 
3" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 418, + 553, + 472 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 418, + 553, + 472 + ], + "spans": [ + { + "bbox": [ + 316, + 418, + 553, + 472 + ], + "type": "text", + "content": "[92] Kai Zhang, Rong Zhou, Eashan Adhikarla, Zhiling Yan, Yixin Liu, Jun Yu, Zhengliang Liu, Xun Chen, Brian D Davison, Hui Ren, et al. A generalist vision-language foundation model for diverse biomedical tasks. Nature Medicine, pages 1-13, 2024. 3" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 475, + 553, + 539 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 475, + 553, + 539 + ], + "spans": [ + { + "bbox": [ + 316, + 475, + 553, + 539 + ], + "type": "text", + "content": "[93] Sheng Zhang, Yanbo Xu, Naoto Usuyama, Hanwen Xu, Jaspreet Bagga, Robert Tinn, Sam Preston, Rajesh Rao, Mu Wei, Naveen Valluri, et al. Biomedclip: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs. arXiv preprint arXiv:2303.00915, 2023. 7, 3, 17" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 541, + 553, + 596 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 541, + 553, + 596 + ], + "spans": [ + { + "bbox": [ + 316, + 541, + 553, + 596 + ], + "type": "text", + "content": "[94] Yuhui Zhang, Yuchang Su, Yiming Liu, Xiaohan Wang, James Burgess, Elaine Sui, Chenyu Wang, Josiah Akiliu, Alejandro Lozano, Anjiang Wei, et al. Automated generation of challenging multiple-choice questions for vision language model evaluation. arXiv preprint arXiv:2501.03225, 2025. 3" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 597, + 553, + 641 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 597, + 553, + 641 + ], + "spans": [ + { + "bbox": [ + 316, + 597, + 553, + 641 + ], + "type": "text", + "content": "[95] Haiteng Zhao, Chang Ma, FangZhi Xu, Lingpeng Kong, and Zhi-Hong Deng. Biomaze: Benchmarking and enhancing large language models for biological pathway reasoning. arXiv preprint arXiv:2502.16660, 2025. 3" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 642, + 553, + 696 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 642, + 553, + 696 + ], + "spans": [ + { + "bbox": [ + 316, + 642, + 553, + 696 + ], + "type": "text", + "content": "[96] Zhiling Zheng, Oufan Zhang, Ha L Nguyen, Nakul Rampal, Ali H Alawadhi, Zichao Rong, Teresa Head-Gordon, Christian Borgs, Jennifer T Chayes, and Omar M Yaghi. Chatgpt research group for optimizing the crystallinity of mofs and cofs. ACS Central Science, 9(11):2161-2170, 2023. 
2" + } + ] + } + ], + "index": 23 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 25 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 82, + 68, + 531, + 102 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 68, + 531, + 102 + ], + "spans": [ + { + "bbox": [ + 82, + 68, + 531, + 102 + ], + "type": "text", + "content": "MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 233, + 112, + 376, + 129 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 233, + 112, + 376, + 129 + ], + "spans": [ + { + "bbox": [ + 233, + 112, + 376, + 129 + ], + "type": "text", + "content": "Supplementary Material" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 150, + 555, + 187 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 150, + 555, + 187 + ], + "spans": [ + { + "bbox": [ + 55, + 150, + 555, + 187 + ], + "type": "text", + "content": "Appendices A-C include acknowledgments, author contributions, ethics, and discussion of limitations and future work. The remaining sections correspond to sections in the paper. Appendix D is benchmark details from the main Sec. 3. Appendix E is MCQ generation details from the main Sec. 4. Appendix F is the experiments details from the main Sec. 5." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 200, + 167, + 214 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 200, + 167, + 214 + ], + "spans": [ + { + "bbox": [ + 56, + 200, + 167, + 214 + ], + "type": "text", + "content": "Table of Contents" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 80, + 221, + 531, + 297 + ], + "type": "list", + "angle": 0, + "index": 8, + "blocks": [ + { + "bbox": [ + 81, + 221, + 531, + 233 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 221, + 531, + 233 + ], + "spans": [ + { + "bbox": [ + 81, + 221, + 531, + 233 + ], + "type": "text", + "content": "A Acknowledgments and author contributions 1" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 80, + 243, + 531, + 254 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 243, + 531, + 254 + ], + "spans": [ + { + "bbox": [ + 80, + 243, + 531, + 254 + ], + "type": "text", + "content": "B Ethics 2" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 80, + 265, + 531, + 275 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 265, + 531, + 275 + ], + "spans": [ + { + "bbox": [ + 80, + 265, + 531, + 275 + ], + "type": "text", + "content": "C Limitations and future work 2" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 80, + 286, + 531, + 297 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 286, + 531, + 297 + ], + "spans": [ + { + "bbox": [ + 80, + 286, + 531, + 297 + ], + "type": "text", + "content": "D Benchmark details 3" + } + ] + } + ], + "index": 7 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 94, + 300, + 529, + 412 + ], + "type": "list", + "angle": 0, + "index": 18, + "blocks": [ + { + "bbox": [ + 94, + 300, + 529, + 310 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 300, 
+ 529, + 310 + ], + "spans": [ + { + "bbox": [ + 94, + 300, + 529, + 310 + ], + "type": "text", + "content": "D.1. Accessing MicroVQA benchmark 3" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 94, + 312, + 529, + 322 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 312, + 529, + 322 + ], + "spans": [ + { + "bbox": [ + 94, + 312, + 529, + 322 + ], + "type": "text", + "content": "D.2 Dataset Schema and Structure 3" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 94, + 326, + 529, + 335 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 326, + 529, + 335 + ], + "spans": [ + { + "bbox": [ + 94, + 326, + 529, + 335 + ], + "type": "text", + "content": "D.3 Training contamination mitigation 3" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 94, + 338, + 529, + 348 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 338, + 529, + 348 + ], + "spans": [ + { + "bbox": [ + 94, + 338, + 529, + 348 + ], + "type": "text", + "content": "D.4 Benchmark attribute details 4" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 94, + 351, + 529, + 360 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 351, + 529, + 360 + ], + "spans": [ + { + "bbox": [ + 94, + 351, + 529, + 360 + ], + "type": "text", + "content": "D.5 Related benchmarks 4" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 94, + 364, + 529, + 373 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 364, + 529, + 373 + ], + "spans": [ + { + "bbox": [ + 94, + 364, + 529, + 373 + ], + "type": "text", + "content": "D.6 Bloom's taxonomy 5" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 94, + 376, + 529, + 387 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 376, + 529, + 387 + ], + "spans": [ + { + "bbox": [ + 94, + 376, + 529, + 387 + ], + "type": "text", + "content": "D.7.Methodology for proposing tasks 6" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 94, + 389, + 529, + 399 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 389, + 529, + 399 + ], + "spans": [ + { + "bbox": [ + 94, + 389, + 529, + 399 + ], + "type": "text", + "content": "D.8 Guidance for question creators 7" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 94, + 402, + 529, + 412 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 402, + 529, + 412 + ], + "spans": [ + { + "bbox": [ + 94, + 402, + 529, + 412 + ], + "type": "text", + "content": "D.9 Microscopy image representations 7" + } + ] + } + ], + "index": 17 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 81, + 423, + 531, + 434 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 423, + 531, + 434 + ], + "spans": [ + { + "bbox": [ + 81, + 423, + 531, + 434 + ], + "type": "text", + "content": "E.MCQ generation details 7" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 94, + 436, + 529, + 510 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 94, + 436, + 529, + 447 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 436, + 529, + 447 + ], + "spans": [ + { + "bbox": [ + 94, + 436, + 529, + 447 + ], + "type": "text", + "content": "E.1. 
Question collection process details 7" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 94, + 449, + 529, + 459 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 449, + 529, + 459 + ], + "spans": [ + { + "bbox": [ + 94, + 449, + 529, + 459 + ], + "type": "text", + "content": "E.2. Experiment on naive MCQ generation 7" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 94, + 461, + 529, + 472 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 461, + 529, + 472 + ], + "spans": [ + { + "bbox": [ + 94, + 461, + 529, + 472 + ], + "type": "text", + "content": "E.3. Stage 1 MCQ generation (exam alignment) 8" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 94, + 475, + 529, + 485 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 475, + 529, + 485 + ], + "spans": [ + { + "bbox": [ + 94, + 475, + 529, + 485 + ], + "type": "text", + "content": "E.4. Stage 2 MCQ generation (RefineBot) details 8" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 94, + 487, + 529, + 498 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 487, + 529, + 498 + ], + "spans": [ + { + "bbox": [ + 94, + 487, + 529, + 498 + ], + "type": "text", + "content": "E.5.MCQ generation results: stage 1 exam alignment 13" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 94, + 500, + 529, + 510 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 500, + 529, + 510 + ], + "spans": [ + { + "bbox": [ + 94, + 500, + 529, + 510 + ], + "type": "text", + "content": "E.6.MCQ generation results: stage 2 RefineBot 13" + } + ] + } + ], + "index": 25 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 81, + 521, + 531, + 532 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 521, + 531, + 532 + ], + "spans": [ + { + "bbox": [ + 81, + 521, + 531, + 532 + ], + "type": "text", + "content": "F. Experiments details 15" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 94, + 534, + 529, + 621 + ], + "type": "list", + "angle": 0, + "index": 35, + "blocks": [ + { + "bbox": [ + 94, + 534, + 529, + 544 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 534, + 529, + 544 + ], + "spans": [ + { + "bbox": [ + 94, + 534, + 529, + 544 + ], + "type": "text", + "content": "F.1. Evaluation prompts 15" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 94, + 548, + 529, + 557 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 548, + 529, + 557 + ], + "spans": [ + { + "bbox": [ + 94, + 548, + 529, + 557 + ], + "type": "text", + "content": "F.2. Model details 16" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 94, + 560, + 529, + 570 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 560, + 529, + 570 + ], + "spans": [ + { + "bbox": [ + 94, + 560, + 529, + 570 + ], + "type": "text", + "content": "F.3. Human baseline on MicroVQA 17" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 94, + 573, + 529, + 582 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 573, + 529, + 582 + ], + "spans": [ + { + "bbox": [ + 94, + 573, + 529, + 582 + ], + "type": "text", + "content": "F.4. More experiments on attribute-level information 17" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 94, + 586, + 529, + 596 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 586, + 529, + 596 + ], + "spans": [ + { + "bbox": [ + 94, + 586, + 529, + 596 + ], + "type": "text", + "content": "F.5. 
Language shortcut ablations 17" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 94, + 598, + 529, + 608 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 598, + 529, + 608 + ], + "spans": [ + { + "bbox": [ + 94, + 598, + 529, + 608 + ], + "type": "text", + "content": "F.6. Error analysis 20" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 94, + 611, + 529, + 621 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 94, + 611, + 529, + 621 + ], + "spans": [ + { + "bbox": [ + 94, + 611, + 529, + 621 + ], + "type": "text", + "content": "F.7. Reasoning trace analysis 31" + } + ] + } + ], + "index": 34 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 55, + 645, + 299, + 659 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 645, + 299, + 659 + ], + "spans": [ + { + "bbox": [ + 55, + 645, + 299, + 659 + ], + "type": "text", + "content": "A. Acknowledgments and author contributions" + } + ] + } + ], + "index": 36 + }, + { + "bbox": [ + 55, + 665, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 665, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 665, + 555, + 715 + ], + "type": "text", + "content": "Acknowledgements We thank Christian Kindermann for feedback on motivation for benchmarking and reasoning taxonomies; Siddharth Doshi for feedback on motivation for applications in scientific chat systems; and Krisjanis Mazurs for feedback on structuring user interviews. We also thank Assaf Zaritsky, Andrew S Moore, Pedro Guedes Dias, Angus Toland, Helen Healy, and Andrew Kassianos, and Loic Royer for discussions on defining the VQA tasks." + } + ] + } + ], + "index": 37 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "text", + "content": "1" + } + ] + } + ], + "index": 38 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 555, + 131 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 555, + 131 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 555, + 131 + ], + "type": "text", + "content": "Funding acknowledgements We gratefully acknowledge funding from HAI Hoffman-Yee (E.L., S.Y-L., J.M.H.B.), Schmidt Futures (E.L.), the Bridge2AI Program (NIH Common Fund; OT2 OD032742; E.L.), Goran Gustafsson Foundation and Knut and Alice Wallenberg Foundation (KAW 2021.0346; E.L.) as well as to MU (HPA). S.C. received funding from NIH/NIGMS R35GM133460 and CZI A23-0264-001. J.N.H. was supported by a Postdoctoral Fellowship from EMBO Postdoctoral Fellowship (ALTF 556-2022)." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 133, + 161, + 142 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 133, + 161, + 142 + ], + "spans": [ + { + "bbox": [ + 67, + 133, + 161, + 142 + ], + "type": "text", + "content": "Author contributions" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 144, + 399, + 263 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 55, + 144, + 239, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 144, + 239, + 156 + ], + "spans": [ + { + "bbox": [ + 55, + 144, + 239, + 156 + ], + "type": "text", + "content": "- Project conception: JB, JN, LBS, AL, SYL" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 156, + 321, + 167 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 156, + 321, + 167 + ], + "spans": [ + { + "bbox": [ + 55, + 156, + 321, + 167 + ], + "type": "text", + "content": "- Defining the VQA tasks: JB, JN, JGG, WC, SC, JNH, CL, SYL" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 168, + 269, + 179 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 168, + 269, + 179 + ], + "spans": [ + { + "bbox": [ + 55, + 168, + 269, + 179 + ], + "type": "text", + "content": "- Managing benchmark collection processes: JN, JB" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 180, + 399, + 191 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 180, + 399, + 191 + ], + "spans": [ + { + "bbox": [ + 55, + 180, + 399, + 191 + ], + "type": "text", + "content": "- Creating benchmark questions: DB, ZC, SMH, AJ, WDL, MGN, RY, CZ, JNH, CL" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 192, + 226, + 203 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 192, + 226, + 203 + ], + "spans": [ + { + "bbox": [ + 56, + 192, + 226, + 203 + ], + "type": "text", + "content": "- MCQ conversion pipeline: JB, JN, LBS" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 204, + 173, + 214 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 204, + 173, + 214 + ], + "spans": [ + { + "bbox": [ + 56, + 204, + 173, + 214 + ], + "type": "text", + "content": "- Evaluations: JB, SRG, YS" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 216, + 226, + 227 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 216, + 226, + 227 + ], + "spans": [ + { + "bbox": [ + 56, + 216, + 226, + 227 + ], + "type": "text", + "content": "- Qualitative error analysis: JN, JGG, CZ" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 228, + 197, + 239 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 228, + 197, + 239 + ], + "spans": [ + { + "bbox": [ + 56, + 228, + 197, + 239 + ], + "type": "text", + "content": "- Quantitative error analysis: LBS" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 240, + 223, + 251 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 240, + 223, + 251 + ], + "spans": [ + { + "bbox": [ + 56, + 240, + 223, + 251 + ], + "type": "text", + "content": "- Paper writing and figures: JB, JN, LBS" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 252, + 265, + 263 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 252, + 265, + 263 + ], + "spans": [ + { + "bbox": [ + 56, + 252, + 265, + 263 + ], + "type": "text", + "content": "Supervision: SYL, EL, CL, MDL, CNH, SC, WC" + } + ] + } + ], + "index": 11 
+ } + ], + "sub_type": "text" + }, + { + "bbox": [ + 55, + 274, + 105, + 285 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 274, + 105, + 285 + ], + "spans": [ + { + "bbox": [ + 55, + 274, + 105, + 285 + ], + "type": "text", + "content": "B. Ethics" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 54, + 294, + 555, + 354 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 294, + 555, + 354 + ], + "spans": [ + { + "bbox": [ + 54, + 294, + 555, + 354 + ], + "type": "text", + "content": "Ethical use of biomedical data: MicroVQA was developed with a commitment to ethical practices in handling biomedical research data. All microscopy images included in the dataset were either original, unpublished data or sourced from open-access articles (e.g., CC-BY licensed) published after January 2024. The dataset does not include patient-identifiable information, and MicroVQA adheres to applicable privacy and research ethics guidelines. The dataset does not constitute human subjects research." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 54, + 354, + 554, + 414 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 354, + 554, + 414 + ], + "spans": [ + { + "bbox": [ + 54, + 354, + 554, + 414 + ], + "type": "text", + "content": "Potential societal impacts: Biomedical AI models, including those trained on MicroVQA, have the potential to influence research and healthcare significantly, both positively and negatively. While these models can accelerate scientific discovery, they may also perpetuate biases in training data or result in uneven performance across demographic or biological groups. To mitigate these risks, MicroVQA includes metadata annotations to support the analysis of potential biases and emphasizes diverse data sourcing. We will engage with the research community to address emerging ethical concerns." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 54, + 414, + 554, + 474 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 414, + 554, + 474 + ], + "spans": [ + { + "bbox": [ + 54, + 414, + 554, + 474 + ], + "type": "text", + "content": "Data licensing and usage: MicroVQA is distributed under the Creative Commons Attribution-ShareAlike 4.0 International (https://creativecommons.org/licenses/by-sa/4.0/) license to promote transparency and collaboration in the research community. This license reflects our efforts to balance open access with the ethical use of data, enabling both academic and commercial applications of the dataset while adhering to original license requirements. Full licensing details are available at https://creativecommons.org/licenses/by-sa/4.0/." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 55, + 484, + 219, + 497 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 484, + 219, + 497 + ], + "spans": [ + { + "bbox": [ + 55, + 484, + 219, + 497 + ], + "type": "text", + "content": "C. Limitations and future work" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 54, + 505, + 554, + 529 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 505, + 554, + 529 + ], + "spans": [ + { + "bbox": [ + 54, + 505, + 554, + 529 + ], + "type": "text", + "content": "While MicroVQA advances research-level reasoning benchmark significantly, we can identify a number of limitations and avenues for future work." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 54, + 543, + 554, + 604 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 543, + 554, + 604 + ], + "spans": [ + { + "bbox": [ + 54, + 543, + 554, + 604 + ], + "type": "text", + "content": "Open vs closed evaluation Like most VQA benchmarks, we used multiple-choice (MCQ) evaluation. However downstream applications will obviously operate in an open setting, without options and requiring more detail. MLLM research does have some open benchmarks [86], though impartial evaluation is a significant challenge. Our early experiments with open prompting suggest that models like GPT-4o tend to give very vague answers, and tend to depend strongly on the text part of the input prompt." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 54, + 617, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 617, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 54, + 617, + 555, + 715 + ], + "type": "text", + "content": "Dataset scale Our final dataset had 1042 samples, which is smaller than some other VQA benchmarks like MicroBench [50] and OmniMedVQA [34]. MicroVQA's scale sufficient to compare overall scores between models, however statistical power starts to become limiting when trying to compare different models on subsets of the dataset. How does this compare to other benchmarks? As we can see from Tab. 2 and Fig. 3, the larger datasets tend to be those with lower-level reasoning according to Bloom's taxonomy. Their samples tend to be derived automatically from metadata tags or classification labels. Benchmarks with comparably high-level reasoning, like MMMU-Pro, have a similar scale to ours (1,700). In the language-only domain, the research-level reasoning benchmark GPQA has 448 samples. This trend reflects a clear, practical limitation: it is very time-consuming for humans to create difficult reasoning questions, and the pool of capable experts is small." + } + ] + } + ], + "index": 20 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 54, + 72, + 555, + 145 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 72, + 555, + 145 + ], + "spans": [ + { + "bbox": [ + 54, + 72, + 555, + 145 + ], + "type": "text", + "content": "Breadth of dataset attributes in microscopy We aimed for MicroVQA to cover a broad range of the microscopy discipline; for example, we have samples from all four relevant scales - tissues, cells, subcellular, atomic. However, each human annotator is an expert in a subset of microscopy and biology, so there are practical limits on dataset coverage. We, therefore, chose to focus on the most common imaging modalities - brightfield, fluorescence, electron - while skipping less common modalities like Raman spectroscopy. In terms of sample types, we emphasized those relevant to humans - human and mouse - while having only a few samples from rare organisms like bottlenose dolphin." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 158, + 555, + 219 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 158, + 555, + 219 + ], + "spans": [ + { + "bbox": [ + 54, + 158, + 555, + 219 + ], + "type": "text", + "content": "Coverage of scientific reasoning Looking even more broadly, the motivation of this work is to advance reasoning in scientific research generally. However, we use microscopy in particular. This is because of a clear trade-off in breadth vs depth. Our dataset strikes a good balance, given that microscopy represents a large portion of visual data in biomedicine and biology. Future work can reuse our framework to generate reasoning VQA benchmarks in other fields of visual biomedicine and biology, such as in medicine [34] or ecology [93]." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 54, + 231, + 555, + 293 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 231, + 555, + 293 + ], + "spans": [ + { + "bbox": [ + 54, + 231, + 555, + 293 + ], + "type": "text", + "content": "Resolving MCQ shortcuts A key contribution in this paper is the RefineBot for generating multiple choice questions (MCQs) that are too challenging to solve using text-only shortcuts. Our approach used the outputs of existing MLLMs as signal to make distractors harder. However, this cannot guarantee the removal of shortcuts; more generally, harder distractors may exist. Therefore, the next generation of LLMs may be able to identify and exploit new shortcuts. If that occurs, we can run RefineBot again using the new LLM, creating MicroVQA-2. This strategy could be used for other VQA evaluations." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 300, + 170, + 312 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 300, + 170, + 312 + ], + "spans": [ + { + "bbox": [ + 55, + 300, + 170, + 312 + ], + "type": "text", + "content": "D. Benchmark details" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 319, + 238, + 332 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 319, + 238, + 332 + ], + "spans": [ + { + "bbox": [ + 55, + 319, + 238, + 332 + ], + "type": "text", + "content": "D.1. Accessing MicroVQA benchmark" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 54, + 338, + 554, + 386 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 338, + 554, + 386 + ], + "spans": [ + { + "bbox": [ + 54, + 338, + 554, + 386 + ], + "type": "text", + "content": "MicroVQA is an expert-curated benchmark for multimodal scientific reasoning, specifically in microscopy-based biological research. The dataset comprises 1,042 multiple-choice questions (MCQs) created by biological researchers to represent authentic scientific practice. Each MCQ is paired with an associated microscopy image and metadata describing the experimental context. A demonstration subset of 28 instances is available for initial exploration." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 54, + 386, + 554, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 386, + 554, + 445 + ], + "spans": [ + { + "bbox": [ + 54, + 386, + 554, + 445 + ], + "type": "text", + "content": "The MicroVQA dataset is publicly accessible on HuggingFace at https://huggingface.co/datasets/jmhb/microvqa. It is distributed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license, allowing reuse and share-alike redistribution with attribution. 
The dataset is available in Apache Arrow and Parquet formats for efficient data handling. These formats support fast serialization, memory mapping, and streaming from the web, facilitating seamless integration into machine learning pipelines." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 451, + 223, + 463 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 451, + 223, + 463 + ], + "spans": [ + { + "bbox": [ + 55, + 451, + 223, + 463 + ], + "type": "text", + "content": "D.2. Dataset Schema and Structure" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 54, + 469, + 554, + 493 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 469, + 554, + 493 + ], + "spans": [ + { + "bbox": [ + 54, + 469, + 554, + 493 + ], + "type": "text", + "content": "The dataset schema defines the fields included for each data instance. Figure 6 provides a schema summary, while a sample data instance is detailed below 7." + } + ] + } + ], + "index": 8 + }, + { + "type": "code", + "bbox": [ + 142, + 500, + 470, + 613 + ], + "blocks": [ + { + "bbox": [ + 142, + 500, + 470, + 613 + ], + "lines": [ + { + "bbox": [ + 142, + 500, + 470, + 613 + ], + "spans": [ + { + "bbox": [ + 142, + 500, + 470, + 613 + ], + "type": "text", + "content": "{ \"image_id\": datasets.Value(\"string\"), \"image\": datasets.Image(decode=True, id=None), \"label\": datasets.ClassLabel(), \"label_name\": datasets.Value(\"string\"), \"domain\": datasets.Value(\"string\"), \"subdomain\": datasets.Value(\"string\"), \"modality\": datasets.Value(\"string\"), \"submodality\": datasets.Value(\"string\"), \"stain\": datasets.Value(\"string\"), \"microns_perpixel\": datasets.Value(\"string\"), \"questions\": datasets.Value(\"string\"), }" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "code_body" + } + ], + "index": 9, + "sub_type": "code", + "guess_lang": "python" + }, + { + "bbox": [ + 244, + 629, + 365, + 641 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 629, + 365, + 641 + ], + "spans": [ + { + "bbox": [ + 244, + 629, + 365, + 641 + ], + "type": "text", + "content": "Figure 6. Example data schema." + } + ] + } + ], + "index": 10, + "type": "text" + }, + { + "bbox": [ + 55, + 658, + 243, + 672 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 658, + 243, + 672 + ], + "spans": [ + { + "bbox": [ + 55, + 658, + 243, + 672 + ], + "type": "text", + "content": "D.3. Training contamination mitigation" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 54, + 677, + 554, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 677, + 554, + 714 + ], + "spans": [ + { + "bbox": [ + 54, + 677, + 554, + 714 + ], + "type": "text", + "content": "To mitigate contamination, all data in MicroVQA were sourced from original, unpublished microscopy images or open-access articles (e.g., CC-BY-licensed) published after January 2024. Additionally, canary strings were embedded in the dataset to detect potential contamination during fine-tuning or model evaluation." 
+ } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 111, + 69, + 497, + 319 + ], + "blocks": [ + { + "bbox": [ + 111, + 69, + 497, + 319 + ], + "lines": [ + { + "bbox": [ + 111, + 69, + 497, + 319 + ], + "spans": [ + { + "bbox": [ + 111, + 69, + 497, + 319 + ], + "type": "image", + "image_path": "db454ed8bb095d61fcba6351b941e5ce35de9a752d37024f4308c9dae146a736.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 244, + 335, + 365, + 346 + ], + "lines": [ + { + "bbox": [ + 244, + 335, + 365, + 346 + ], + "spans": [ + { + "bbox": [ + 244, + 335, + 365, + 346 + ], + "type": "text", + "content": "Figure 7. Example data instance." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 368, + 212, + 380 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 368, + 212, + 380 + ], + "spans": [ + { + "bbox": [ + 55, + 368, + 212, + 380 + ], + "type": "text", + "content": "D.4. Benchmark attribute details" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 54, + 386, + 553, + 446 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 386, + 553, + 446 + ], + "spans": [ + { + "bbox": [ + 54, + 386, + 553, + 446 + ], + "type": "text", + "content": "The MicroVQA benchmark is designed to evaluate multimodal reasoning capabilities in biomedical research contexts. The dataset incorporates diverse attributes such as organism, research subject, imaging modalities, and biological length scales (image scales) to ensure a comprehensive assessment1. These metadata not only enhance the scientific relevance but also enable analysis of reasoning across different biological and microscopy contexts. This section describes the methodology used to define, compute, and annotate these key attributes." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 54, + 447, + 553, + 517 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 447, + 553, + 517 + ], + "spans": [ + { + "bbox": [ + 54, + 447, + 553, + 517 + ], + "type": "text", + "content": "The benchmark comprises 1,042 total questions, including 423 questions containing multiple images combined into panels. A collage of sample images is shown in 25. The average MCQ token lengths were computed using the o200k_base tokenizer, with separate calculations for the raw input and formatted multiple-choice questions. Imaging modalities are broadly categorized into light, fluorescence, and electron microscopy. Biological length scales span four levels: tissue (millimeter scale), cellular (micrometer scale), subcellular (nanometer scale), and atomic (angstrom scale), enabling reasoning across diverse structural hierarchies." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 54, + 519, + 553, + 577 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 519, + 553, + 577 + ], + "spans": [ + { + "bbox": [ + 54, + 519, + 553, + 577 + ], + "type": "text", + "content": "To annotate attributes such as organism, specimen, and research subject, we manually reviewed random samples from the dataset and created a taxonomy to cover the most likely categories. A biomedical domain expert manually labeled these categories for 50 raw input question-answer pairs and performed initial prompt engineering. We used DSPy to optimize the prompt and few-shot examples on the labeled dataset. The input context included the taxonomy of organisms8 and research subjects 9." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 54, + 578, + 553, + 615 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 578, + 553, + 615 + ], + "spans": [ + { + "bbox": [ + 54, + 578, + 553, + 615 + ], + "type": "text", + "content": "These annotations were the gold standard to guide DSPy prompt engineering for automated attribute labeling on the full dataset. Prompt instructions included fallback rules (e.g., tagging \"None\" for indeterminate attributes) to ensure accuracy and minimize hallucination. The optimized pipeline annotated all questions while ensuring the validity of the dataset attributes." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 623, + 177, + 634 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 623, + 177, + 634 + ], + "spans": [ + { + "bbox": [ + 55, + 623, + 177, + 634 + ], + "type": "text", + "content": "D.5. Related benchmarks" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 54, + 641, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 641, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 54, + 641, + 555, + 715 + ], + "type": "text", + "content": "In section 3.3 and Tab. 2 we compare MicroVQA to related scientific multimodal benchmarks, focusing on its intersection of high-difficulty and advanced reasoning questions. Beyond these key attributes, we examine the sources of the questions. Most benchmarks rely on existing question repositories, such as exams, web-based QA datasets, textbooks, or other MCQ datasets. This approach bypasses the need for MCQ generation methods, as the questions and options are pre-tested and curated, making these benchmarks more representative of established tasks. 
Another common strategy involves reformating classification datasets into MCQs using question templates, which increases task diversity but often limits scenario unique-" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 111, + 70, + 497, + 383 + ], + "blocks": [ + { + "bbox": [ + 111, + 70, + 497, + 383 + ], + "lines": [ + { + "bbox": [ + 111, + 70, + 497, + 383 + ], + "spans": [ + { + "bbox": [ + 111, + 70, + 497, + 383 + ], + "type": "image", + "image_path": "62b824bbb820fb6c64cfa94694ffb4e53ede1cd5e46aaac7ebc2b76b55093535.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 399, + 555, + 420 + ], + "lines": [ + { + "bbox": [ + 55, + 399, + 555, + 420 + ], + "spans": [ + { + "bbox": [ + 55, + 399, + 555, + 420 + ], + "type": "text", + "content": "Figure 8. Examples of taxonomy classes used as context to LLM to assign an organism to a question. A YAML file with the full taxonomy will be released with the code." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 444, + 555, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 444, + 555, + 491 + ], + "spans": [ + { + "bbox": [ + 54, + 444, + 555, + 491 + ], + "type": "text", + "content": "ness and language variability. In contrast, MicroVQA is built by directly querying experts for challenging, domain-specific research questions. By starting with long form questions and answers, our approach requires an innovative MCQ generation method, ensuring uniquely tailored questions that push the boundaries of domain-specific reasoning and knowledge assessment." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 502, + 168, + 514 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 502, + 168, + 514 + ], + "spans": [ + { + "bbox": [ + 55, + 502, + 168, + 514 + ], + "type": "text", + "content": "D.6. Bloom's taxonomy" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 54, + 521, + 555, + 568 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 521, + 555, + 568 + ], + "spans": [ + { + "bbox": [ + 54, + 521, + 555, + 568 + ], + "type": "text", + "content": "Bloom's Taxonomy is a model for evaluating cognitive skill levels in educational assessments. It has six hierarchical levels of learning: recall, understand/comprehend, apply, analyze, evaluate, and create. Multiple-choice questions can test five of Bloom's levels but not the sixth level, \"create\". These levels provide a structured framework for assessing lower-order and higher-order thinking skills, which are essential for designing robust benchmarks like MicroVQA." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 54, + 569, + 554, + 653 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 569, + 554, + 653 + ], + "spans": [ + { + "bbox": [ + 54, + 569, + 554, + 653 + ], + "type": "text", + "content": "Initial attempts to use LLMs to classify Bloom's levels for biomedical visual question answering (VQA) benchmarks revealed a systematic overestimation of cognitive skill levels. This discrepancy limits the naive use LLMs for automated Bloom's classification, perhaps due to the nuances of applying Bloom's taxonomy in domain-specific contexts. To ensure accurate classification of Bloom's level, a physician-scientist with NBME training in item writing and familiarity with Bloom's classification manually labeled Bloom's level for 200 examples from public MCQ questions, MicroVQA, MicroBench, and ScienceQA. These examples were used to fine-tune GPT-4o-mini (gpt-4o-mini-2024-07-18) to classify Bloom's level and align with expert judgments." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 54, + 654, + 554, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 654, + 554, + 715 + ], + "spans": [ + { + "bbox": [ + 54, + 654, + 554, + 715 + ], + "type": "text", + "content": "The fine-tuned Bloom's classifier was applied to MicroVQA and numerous public science/biomedical benchmarks to assess the cognitive skills evaluated by each question. Each question was mapped to one of Bloom's levels, providing insights into the distribution of cognitive tasks across the benchmark. This analysis revealed that while existing benchmarks predominantly tested lower-order skills (e.g., recall and comprehension), MicroVQA has a greater emphasis on questions that require higher-order reasoning (e.g., analyzing, evaluating), filling a critical gap in multimodal scientific reasoning" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 126, + 70, + 484, + 392 + ], + "blocks": [ + { + "bbox": [ + 126, + 70, + 484, + 392 + ], + "lines": [ + { + "bbox": [ + 126, + 70, + 484, + 392 + ], + "spans": [ + { + "bbox": [ + 126, + 70, + 484, + 392 + ], + "type": "image", + "image_path": "c8a86566ed0b521b3e2535d57c8324cfca644cb74345c31456c9beb0e3e4a4d9.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 407, + 555, + 430 + ], + "lines": [ + { + "bbox": [ + 55, + 407, + 555, + 430 + ], + "spans": [ + { + "bbox": [ + 55, + 407, + 555, + 430 + ], + "type": "text", + "content": "Figure 9. Examples of taxonomy classes used as context to LLM to assign a research subject to a question. A YAML file with the full taxonomy will be released with the code." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 456, + 109, + 467 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 456, + 109, + 467 + ], + "spans": [ + { + "bbox": [ + 55, + 456, + 109, + 467 + ], + "type": "text", + "content": "assessments." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 482, + 234, + 495 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 482, + 234, + 495 + ], + "spans": [ + { + "bbox": [ + 55, + 482, + 234, + 495 + ], + "type": "text", + "content": "D.7. Methodology for proposing tasks" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 503, + 555, + 563 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 503, + 555, + 563 + ], + "spans": [ + { + "bbox": [ + 55, + 503, + 555, + 563 + ], + "type": "text", + "content": "Here, we expand on the task proposal methodology introduced in Sec. 3.2. To define specific reasoning tasks aimed at advancing biomedical research, we conducted a comprehensive expert consultation process. Through structured interviews with nine Principal Investigators and postdocs specializing in microscopy research, including participants from both academia and industry, we developed a framework for tasks that would meaningfully integrate multimodal large language models (MLLMs) into microscopy research workflows." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 565, + 554, + 613 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 565, + 554, + 613 + ], + "spans": [ + { + "bbox": [ + 55, + 565, + 554, + 613 + ], + "type": "text", + "content": "The task selection process followed specific criteria: tasks must utilize image-text inputs and text outputs suitable for MLLMs; they should require higher-order reasoning beyond simple image processing; and they should emphasize core experimental activities (experimentation, analysis, and action) rather than auxiliary tasks like literature review or writing [33]." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 615, + 554, + 663 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 615, + 554, + 663 + ], + "spans": [ + { + "bbox": [ + 55, + 615, + 554, + 663 + ], + "type": "text", + "content": "Our methodology involved a two-phase interview process. The first phase consisted of initial one-hour individual sessions with eight research groups featuring open-ended, unstructured discussions about key challenges in microscopy experiments, potential applications of MLLMs, current workflows and unmet needs, and possible AI system integration points. One week later, we then conducted follow-up interviews to consolidate findings and build consensus." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 665, + 554, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 665, + 554, + 715 + ], + "spans": [ + { + "bbox": [ + 55, + 665, + 554, + 715 + ], + "type": "text", + "content": "To avoid anchoring biases, we intentionally withheld predefined categories during initial interviews. This approach enabled unbiased task identification and allowed us to capture a broad spectrum of potential use cases. Through this iterative process, we identified three essential capabilities that align with the scientific life cycle for microscopy experiments: advanced image interpretation, hypothesis generation, and experimental design/hypothesis testing." 
+ } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 733, + 309, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 309, + 742 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 309, + 742 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 226, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 226, + 85 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 226, + 85 + ], + "type": "text", + "content": "D.8. Guidance for question creators" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 90, + 555, + 126 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 90, + 555, + 126 + ], + "spans": [ + { + "bbox": [ + 54, + 90, + 555, + 126 + ], + "type": "text", + "content": "Question creators were given detailed guidance as pdfs, that we share at https://github.com/jmhb0/microvqa under folder benchmark/human.collection. During the submission process, we completed two rounds of quality control to ensure difficulty and alignment to the prompts, as described in Sec. 4.1." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 133, + 241, + 146 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 133, + 241, + 146 + ], + "spans": [ + { + "bbox": [ + 55, + 133, + 241, + 146 + ], + "type": "text", + "content": "D.9. Microscopy image representations" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 54, + 152, + 555, + 199 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 152, + 555, + 199 + ], + "spans": [ + { + "bbox": [ + 54, + 152, + 555, + 199 + ], + "type": "text", + "content": "MLLMs are trained primarily on three-channel RGB images, however microscopy images are more diverse. To ensure alignment with how common MLLMs process images, we required that each image artifact for a single question was represented as no more than six RGB images. Where multiple images were needed, they could be multiple image files, or a single image that is concatenated. For videos, users could submit up to six frames; for 3d, they could submit slices or z-projections." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 54, + 200, + 555, + 270 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 200, + 555, + 270 + ], + "spans": [ + { + "bbox": [ + 54, + 200, + 555, + 270 + ], + "type": "text", + "content": "An important difference is multichannel images. Many fluorescence microscopy images are multichannel, where the image collection ensures that each channel represents a certain structure. For example channel 0 may be nucleus, and channel 1 may be mitochondria. There are a few approaches. Some questions represent this situation as separate grayscale images for each channel. Others show a 'composite image', which is a single image where one each channel is mapped to one of the RGB channels; e.g. a common visualization will put nucleus in blue and mitochondria in green. Some questions do both: grayscale images for each channel, along with a composite image merging them." 
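To make the channel-to-RGB composite convention above concrete, the following is a minimal sketch (not part of the benchmark code) of building a composite image from two grayscale channels, with the nucleus channel mapped to blue and the mitochondria channel mapped to green. The function name and the per-channel normalization are illustrative assumptions.

```python
import numpy as np

def make_composite(nucleus: np.ndarray, mitochondria: np.ndarray) -> np.ndarray:
    """Map two single-channel microscopy images (same shape) onto an RGB composite.

    Follows the common visualization convention described above:
    nucleus -> blue channel, mitochondria -> green channel.
    """
    def normalize(channel: np.ndarray) -> np.ndarray:
        channel = channel.astype(np.float32)
        lo, hi = channel.min(), channel.max()
        if hi > lo:
            channel = (channel - lo) / (hi - lo)
        else:
            channel = np.zeros_like(channel)
        return (channel * 255).astype(np.uint8)

    rgb = np.zeros((*nucleus.shape, 3), dtype=np.uint8)
    rgb[..., 2] = normalize(nucleus)        # blue
    rgb[..., 1] = normalize(mitochondria)   # green
    return rgb
```

A question could then supply the per-channel grayscale images, such a composite, or both, as described above.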
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 271, + 555, + 294 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 271, + 555, + 294 + ], + "spans": [ + { + "bbox": [ + 55, + 271, + 555, + 294 + ], + "type": "text", + "content": "Finally, we allowed screenshots from research papers or image software, since this is a realistic way that future LLM-users would use AI tools." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 304, + 196, + 319 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 304, + 196, + 319 + ], + "spans": [ + { + "bbox": [ + 55, + 304, + 196, + 319 + ], + "type": "text", + "content": "E. MCQ generation details" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 324, + 240, + 337 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 324, + 240, + 337 + ], + "spans": [ + { + "bbox": [ + 55, + 324, + 240, + 337 + ], + "type": "text", + "content": "E.1. Question collection process details" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 54, + 343, + 554, + 367 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 343, + 554, + 367 + ], + "spans": [ + { + "bbox": [ + 54, + 343, + 554, + 367 + ], + "type": "text", + "content": "Data collection used a structured submission form shared with collaborators. The form required users to provide the following:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 367, + 553, + 426 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 55, + 367, + 552, + 379 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 367, + 552, + 379 + ], + "spans": [ + { + "bbox": [ + 55, + 367, + 552, + 379 + ], + "type": "text", + "content": "- Image or Image Set: Users uploaded microscopy images, optionally including multiple images for comparative questions." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 380, + 553, + 391 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 380, + 553, + 391 + ], + "spans": [ + { + "bbox": [ + 55, + 380, + 553, + 391 + ], + "type": "text", + "content": "- Experiment Details: A brief description of the experimental setup, imaging modalities, and any relevant biological context." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 392, + 553, + 415 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 392, + 553, + 415 + ], + "spans": [ + { + "bbox": [ + 55, + 392, + 553, + 415 + ], + "type": "text", + "content": "- Question Text: Each question was aligned with one of three predefined use cases: identifying interesting features, explaining mechanisms, or proposing next steps." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 55, + 415, + 502, + 426 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 415, + 502, + 426 + ], + "spans": [ + { + "bbox": [ + 55, + 415, + 502, + 426 + ], + "type": "text", + "content": "- Answer and Distractors: Contributors provided a correct answer and biologically plausible incorrect answers." + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 54, + 427, + 555, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 427, + 555, + 486 + ], + "spans": [ + { + "bbox": [ + 54, + 427, + 555, + 486 + ], + "type": "text", + "content": "Quality control was performed in multiple rounds. 
To familiarize users with the task and data requirements, each user submitted an initial form with 4-6 questions for manual quality control and feedback. Feedback was provided to users on improving question specificity, ensuring the questions required image interpretation rather than relying solely on general knowledge. Contributors were encouraged to refine their submissions based on the feedback to better align with the tasks and goals of the MicroVQA benchmark." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 54, + 487, + 554, + 558 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 487, + 554, + 558 + ], + "spans": [ + { + "bbox": [ + 54, + 487, + 554, + 558 + ], + "type": "text", + "content": "Users were encouraged to submit data from multiple sources, imaging modalities (e.g., fluorescence, electron microscopy), and biological scales (e.g., tissue, cellular, subcellular) to promote diversity. Contributors could upload microscopy images from their experiments or use CC-BY publicly available images. Common image sources included preprints or publications released after January 2024 (to mitigate contamination), or from established image repositories [14, 17, 18, 31, 72, 76]. This ensured that all images were either original or sourced from open-license publications, enabling their use in an open-source benchmark." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 55, + 565, + 259, + 578 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 565, + 259, + 578 + ], + "spans": [ + { + "bbox": [ + 55, + 565, + 259, + 578 + ], + "type": "text", + "content": "E.2. Experiment on naive MCQ generation" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 54, + 583, + 555, + 620 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 583, + 555, + 620 + ], + "spans": [ + { + "bbox": [ + 54, + 583, + 555, + 620 + ], + "type": "text", + "content": "In Sec. 4.2, we discuss how the naive approach to MCQ generation – simply zero-shot prompting GPT – leads to MCQs that are very easy for frontier models to solve. Which ultimately motivated our two-stage MCQ generation approach. Here, we discuss those experiments in more detail. We create a naive version of MicroVQA using a standard prompt [34, 82]:" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 65, + 635, + 349, + 646 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 635, + 349, + 646 + ], + "spans": [ + { + "bbox": [ + 65, + 635, + 349, + 646 + ], + "type": "text", + "content": "You are an expert in molecular and cell biology, and in microscopy." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 65, + 651, + 517, + 668 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 651, + 517, + 668 + ], + "spans": [ + { + "bbox": [ + 65, + 651, + 517, + 668 + ], + "type": "text", + "content": "I will give you an original biology-related question and its answer, your task is to rephrase an equivalent question with identical answer. The question related to an image, and we don't show the image." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 65, + 668, + 533, + 685 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 668, + 533, + 685 + ], + "spans": [ + { + "bbox": [ + 65, + 668, + 533, + 685 + ], + "type": "text", + "content": "Meanwhile, I want to transfer this QA-pair into a multi-choice question. Please generate 5 incorrect options to construct the candidate options." 
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 67, + 691, + 119, + 701 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 691, + 119, + 701 + ], + "spans": [ + { + "bbox": [ + 67, + 691, + 119, + 701 + ], + "type": "text", + "content": "{QUESTION}" + } + ] + } + ], + "index": 21 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 82, + 110, + 92 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 82, + 110, + 92 + ], + "spans": [ + { + "bbox": [ + 67, + 82, + 110, + 92 + ], + "type": "text", + "content": "{ANSWER}" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 107, + 555, + 155 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 107, + 555, + 155 + ], + "spans": [ + { + "bbox": [ + 55, + 107, + 555, + 155 + ], + "type": "text", + "content": "Then, we evaluated this naive MicroVQA on the top closed source models with the image, and to study the over-reliance on language shortcuts, we evaluated a text-only version. In the text-only setting, we add the following to the prompt: \"If an image is mentioned, ignore this information and try your best to answer the question\". Tab. 5 shows the results of this experiment." + } + ] + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 221, + 185, + 389, + 232 + ], + "blocks": [ + { + "bbox": [ + 75, + 164, + 534, + 176 + ], + "lines": [ + { + "bbox": [ + 75, + 164, + 534, + 176 + ], + "spans": [ + { + "bbox": [ + 75, + 164, + 534, + 176 + ], + "type": "text", + "content": "Table 5. Performance on MicroVQA MCQs with naive MCQ generation. We report overall VQA and VQA without the image." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 221, + 185, + 389, + 232 + ], + "lines": [ + { + "bbox": [ + 221, + 185, + 389, + 232 + ], + "spans": [ + { + "bbox": [ + 221, + 185, + 389, + 232 + ], + "type": "table", + "html": "
<table><tr><td></td><td>VQA</td><td>VQA-no-image</td></tr>
<tr><td>GPT-4o</td><td>85.1</td><td>82.7</td></tr>
<tr><td>Claude-3.5-Sonnet</td><td>91.4</td><td>88.4</td></tr>
<tr><td>Gemini-1.5-Pro</td><td>88.5</td><td>82.4</td></tr></table>
", + "image_path": "980068988f10ec7a5b60d0e0617e1b40223c65b30e0affe08933c1187678d70c.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 54, + 243, + 555, + 361 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 243, + 555, + 361 + ], + "spans": [ + { + "bbox": [ + 54, + 243, + 555, + 361 + ], + "type": "text", + "content": "All models score over " + }, + { + "bbox": [ + 54, + 243, + 555, + 361 + ], + "type": "inline_equation", + "content": "80\\%" + }, + { + "bbox": [ + 54, + 243, + 555, + 361 + ], + "type": "text", + "content": " on questions that are from a specialized domain. We hypothesize that this reflects a problem with the non-trivial task of MCQ generation from raw VQAs for several reasons. First, the high 'VQA-no-image' scores across all models suggest that selecting the correct option does not require image understanding, despite the raw VQA samples being designed to incorporate image-based reasoning. An alternative explanation is that the questions themselves may be too easy for advanced models. Second, we test this alternate explanation by performing open evaluation on a subset of the raw VQA queries-- that is, we queried GPT-4o with the questions but without the options. Here, the models rarely gave good responses, even after careful prompting. This suggests that models are leveraging test-taking strategies to eliminate distractors [27]. Third, we examined the chain-of-thought responses in the VQA-no-image case, which suggested the models often eliminate distractors based on information from the text question. Therefore, we conclude that there are issues with the improper generation of the MCQs using the naive strategy." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 54, + 361, + 554, + 387 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 361, + 554, + 387 + ], + "spans": [ + { + "bbox": [ + 54, + 361, + 554, + 387 + ], + "type": "text", + "content": "What exactly are the problems with MCQ generation? Qualitatively, the types of errors are the same as those we discuss in Appendix E.6, so we refer readers to that section to get a deeper understanding." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 392, + 280, + 406 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 392, + 280, + 406 + ], + "spans": [ + { + "bbox": [ + 55, + 392, + 280, + 406 + ], + "type": "text", + "content": "E.3. Stage 1 MCQ generation (exam alignment)" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 54, + 411, + 554, + 506 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 411, + 554, + 506 + ], + "spans": [ + { + "bbox": [ + 54, + 411, + 554, + 506 + ], + "type": "text", + "content": "The first stage transformed raw, open-ended questions and long-form answers into well-structured MCQs aligned with established principles of biomedical exam design. A physician-scientist trained in medical board exam item writing initiated the process by reviewing biomedical educational literature, NBME guidelines [64], and Bloom's taxonomy for assessing cognitive skills [5, 19, 89]. The expert manually converted 50 raw question-answer pairs into one-best-answer MCQs with biologically relevant distractors using the literature review and their prior training. This manual effort ensured that the MCQs preserved the original question's intent and scientific validity while adhering to educational standards. 
Distractors were crafted to require reasoning and prevent reliance on superficial cues like word association or general knowledge, thus maintaining the depth and complexity of the original content." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 54, + 507, + 554, + 614 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 507, + 554, + 614 + ], + "spans": [ + { + "bbox": [ + 54, + 507, + 554, + 614 + ], + "type": "text", + "content": "To scale this process, the expert designed an initial LLM prompt for automated Stage 1 MCQ generation. This prompt and the manually curated training data were input for DSPy-based prompt optimization. Using the MIPROv2 optimizer, the prompt was refined within a Chain of Thought (CoT) framework combined with retrieval-augmented generation (RAG) pipelines powered by o1-mini. Optimization was guided by a custom metric evaluated by a separate LLM judge (o1-mini). The judge assessed each revised MCQ across three criteria: (1) semantic similarity to the original question and answer, (2) adherence to NBME formatting standards, and (3) minimization of extraneous information that could provide unintended clues. The optimization metric was weighted to prioritize preserving the original meaning and scientific validity, with lesser emphasis on formatting and extraneous text. Additional implementation details are provided in the source code." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 620, + 286, + 634 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 620, + 286, + 634 + ], + "spans": [ + { + "bbox": [ + 55, + 620, + 286, + 634 + ], + "type": "text", + "content": "E.4. Stage 2 MCQ generation (RefineBot) details" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 54, + 639, + 554, + 676 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 639, + 554, + 676 + ], + "spans": [ + { + "bbox": [ + 54, + 639, + 554, + 676 + ], + "type": "text", + "content": "The goal of RefineBot is to take MCQs that are too easy for MLLMs - especially where they are leveraging language-only shortcuts - and to rewrite them to make them more challenging. It is the third panel in Fig. 4. For more details on what types of improvements are achieved, see Appendix E.6. The complete code for RefineBot is released." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 54, + 689, + 554, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 689, + 554, + 715 + ], + "spans": [ + { + "bbox": [ + 54, + 689, + 554, + 715 + ], + "type": "text", + "content": "A single RefineBot iteration Below, we show the pseudo-code for a single-question revision. The input is a question string, a list of options, and the index of the correct answer in the options. 
It returns a tuple (code, MCQ) where code is a" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 733, + 308, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 308, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 308, + 741 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 555, + 95 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 555, + 95 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 555, + 95 + ], + "type": "text", + "content": "string on whether the run was successful, and MCQ is either the revised multiple choice question if the rewrite was successful or otherwise None." + } + ] + } + ], + "index": 0 + }, + { + "type": "code", + "bbox": [ + 58, + 99, + 557, + 421 + ], + "blocks": [ + { + "bbox": [ + 58, + 99, + 557, + 421 + ], + "lines": [ + { + "bbox": [ + 58, + 99, + 557, + 421 + ], + "spans": [ + { + "bbox": [ + 58, + 99, + 557, + 421 + ], + "type": "text", + "content": "def revise_mcq(question: str, choices: list, correctidx: int, max_iters: int = 5):\n # Store original version\n original = {'question': question, 'choices': choices, 'correctidx': correctidx}\n reflections = []\nfor iter in range(max_iters):\n # Try answering without image\n result = evaluate_without_image(question, choices, correctidx)\nif not result.can_answer:\n code = 'SUCCEEDNESS_CHANGES' if iter == 0 else 'SUCCEED_REWRITE'\nreturn code, (question, choices, correctidx)\nif iter == max_iters - 1:\n return 'FAIL ITERATIONS', None\n# Reflect on strategies used to solve\nreflection = analyze WHY_answerable(result)\nreflections += reflection\n# Rewrite based on all reflections from all iterations\nnew_version = rewrite_question(reflections, original)\n# Verify the rewrite didn't change the meaning meaning preserved\nif not check_meaning_preserved(original, new_version):\n return 'FAIL_REWRITE', None\n# Update current version\nquestion = new_version['question']\nchoices = new_version['choices']\ncorrectidx = new_version['correctidx']\nreturn 'FAIL_UNKNOWN', None" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "code_body" + } + ], + "index": 1, + "sub_type": "code", + "guess_lang": "python" + }, + { + "bbox": [ + 55, + 426, + 555, + 461 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 426, + 555, + 461 + ], + "spans": [ + { + "bbox": [ + 55, + 426, + 555, + 461 + ], + "type": "text", + "content": "This process is also described in the main text. One detail to note: the rewrite_question function receives the reflections from all prior iterations, not just the current one. In this way, the rewriting function can consider all possible strategies that have been identified. All LLM calls are to gpt-4o-2024-08-06." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 462, + 555, + 485 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 462, + 555, + 485 + ], + "spans": [ + { + "bbox": [ + 55, + 462, + 555, + 485 + ], + "type": "text", + "content": "To better illustrate the method, we step through a single example. This way, we can show the prompts and understand responses." 
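Before stepping through that example, note that the extracted pseudo-code block above is partly garbled (e.g., "SUCCEEDNESS_CHANGES", "analyze WHY_answerable"). The following is a hedged reconstruction of the same loop as runnable Python: the four helpers are trivial stubs standing in for the LLM calls described in the text, and the exact return codes and dictionary keys are assumptions rather than the released implementation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class EvalResult:
    can_answer: bool            # could the LLM answer correctly without the image?
    conversation: str = ""      # transcript, later used by the reflection step

# --- Stubs for the LLM-backed helpers described in the text -----------------
def evaluate_without_image(question, choices, correct_idx) -> EvalResult:
    return EvalResult(can_answer=False)          # stub: "answer without the image" prompt

def analyze_why_answerable(result: EvalResult) -> str:
    return "summary of language-only strategies" # stub: reflection prompt

def rewrite_question(reflections, original) -> dict:
    return dict(original)                        # stub: rewrite prompt (uses all reflections)

def check_meaning_preserved(original, new_version) -> bool:
    return True                                  # stub: LLM judge of semantic equivalence
# -----------------------------------------------------------------------------

def revise_mcq(question: str, choices: list, correct_idx: int,
               max_iters: int = 5) -> Tuple[str, Optional[tuple]]:
    """Rewrite an MCQ until it can no longer be answered from text alone."""
    original = {"question": question, "choices": choices, "correct_idx": correct_idx}
    reflections = []                             # accumulated across all iterations

    for iteration in range(max_iters):
        # 1. Try answering the current MCQ without the image.
        result = evaluate_without_image(question, choices, correct_idx)
        if not result.can_answer:
            # Already hard enough: unchanged on the first pass, rewritten otherwise.
            code = "SUCCEED_NO_CHANGES" if iteration == 0 else "SUCCEED_REWRITE"
            return code, (question, choices, correct_idx)
        if iteration == max_iters - 1:
            return "FAIL_ITERATIONS", None

        # 2. Reflect on which language-only strategies made it answerable.
        reflections.append(analyze_why_answerable(result))
        # 3. Rewrite using the reflections from *all* iterations so far.
        new_version = rewrite_question(reflections, original)
        # 4. Verify the rewrite did not change the meaning of question/answer.
        if not check_meaning_preserved(original, new_version):
            return "FAIL_REWRITE", None

        question = new_version["question"]
        choices = new_version["choices"]
        correct_idx = new_version["correct_idx"]

    return "FAIL_UNKNOWN", None
```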
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 486, + 441, + 498 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 486, + 441, + 498 + ], + "spans": [ + { + "bbox": [ + 56, + 486, + 441, + 498 + ], + "type": "text", + "content": "1. The evaluation of a question with the evaluate_without_image prompt and response:" + } + ] + } + ], + "index": 4 + }, + { + "type": "code", + "bbox": [ + 67, + 501, + 557, + 714 + ], + "blocks": [ + { + "bbox": [ + 67, + 501, + 557, + 714 + ], + "lines": [ + { + "bbox": [ + 67, + 501, + 557, + 714 + ], + "spans": [ + { + "bbox": [ + 67, + 501, + 557, + 714 + ], + "type": "text", + "content": "Prompt \nThe following question is supposed to be paired with an image. We will not provide the image, so answer to the best of your ability. \nIn an electron microscopy image of a mouse primary embryonic cortical neuron expressing full-length human mHTT (BACHD), \"eye-ball\" shaped features are observed in the top left corner that appear anomalous. What is the most likely cause of these features? Think step by step and then output the answer in the format of \"The answer is (X)\" at the end.. \n(a) Aggregation of human mHTT protein within the neuron. \n(b) Ice contamination from liquid nitrogen during sample preparation. \n(c) Formation of intracellular vesicles during neuron development. \n(d) Artifacts arising from excessive heavy metal staining. \nResponse (target answer is b) \nTo determine the cause of the \"eye-ball\" shaped features observed in the electron microscopy image, we can analyze each option step by step: \n(a) Aggregation of human mHTT protein within the neuron: Aggregations of proteins can indeed appear as anomalous structures under electron microscopy. However, these typically have irregular shapes and may not consistently resemble \"eye-ball\" shapes. \n(b) Ice contamination from liquid nitrogen during sample preparation: Ice contamination can occur during sample preparation, particularly if samples are not handled properly when flash-frozen. This can lead to" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "code_body" + } + ], + "index": 5, + "sub_type": "code", + "guess_lang": "txt" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 309, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 309, + 742 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 309, + 742 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 20 + }, + { + "para_blocks": [ + { + "bbox": [ + 100, + 75, + 536, + 92 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 100, + 75, + 536, + 92 + ], + "spans": [ + { + "bbox": [ + 100, + 75, + 536, + 92 + ], + "type": "text", + "content": "anomalous features appearing in microscopy images. These features might appear as circular or irregular shapes, potentially looking like \"eye-balls.\"" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 83, + 99, + 553, + 156 + ], + "type": "list", + "angle": 0, + "index": 3, + "blocks": [ + { + "bbox": [ + 83, + 99, + 550, + 124 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 99, + 550, + 124 + ], + "spans": [ + { + "bbox": [ + 83, + 99, + 550, + 124 + ], + "type": "text", + "content": "(c) Formation of intracellular vesicles during neuron development: Intracellular vesicles are normal structures within neurons and other cells. 
Although they can appear during development, they are typically well characterized and documented, and are less likely to be described as anomalous in the context given." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 83, + 130, + 553, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 130, + 553, + 156 + ], + "spans": [ + { + "bbox": [ + 83, + 130, + 553, + 156 + ], + "type": "text", + "content": "(d) Artifacts arising from excessive heavy metal staining: Over staining can indeed produce artifacts in electron microscopy images. While staining artifacts are plausible, they might not specifically result in \"eye-ball\" shapes unless the staining process consistently targets specific round structures." + } + ] + } + ], + "index": 2 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 82, + 162, + 550, + 187 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 162, + 550, + 187 + ], + "spans": [ + { + "bbox": [ + 82, + 162, + 550, + 187 + ], + "type": "text", + "content": "Considering the nature of the descriptors (\"eye-ball\" shaped, anomalous), option (b) ice contamination is often a source of unexpected features in microscopy images that could fit the appearance described, as ice can form circular shapes." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 82, + 194, + 545, + 212 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 194, + 545, + 212 + ], + "spans": [ + { + "bbox": [ + 82, + 194, + 545, + 212 + ], + "type": "text", + "content": "Therefore, the most likely cause of these features is (b) Ice contamination from liquid nitrogen during sample preparation." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 82, + 218, + 159, + 226 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 218, + 159, + 226 + ], + "spans": [ + { + "bbox": [ + 82, + 218, + 159, + 226 + ], + "type": "text", + "content": "The answer is (b)." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 236, + 555, + 270 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 236, + 555, + 270 + ], + "spans": [ + { + "bbox": [ + 55, + 236, + 555, + 270 + ], + "type": "text", + "content": "2. If the answer is incorrect, then the question is already hard enough, and we can exit. Otherwise, we call the analyze WHY Answerable function, which reflects on how this answer was achieved. It continues in the same conversation:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 82, + 285, + 101, + 293 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 285, + 101, + 293 + ], + "spans": [ + { + "bbox": [ + 82, + 285, + 101, + 293 + ], + "type": "text", + "content": "User" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 83, + 302, + 151, + 310 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 302, + 151, + 310 + ], + "spans": [ + { + "bbox": [ + 83, + 302, + 151, + 310 + ], + "type": "text", + "content": "That is correct." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 83, + 317, + 525, + 334 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 317, + 525, + 334 + ], + "spans": [ + { + "bbox": [ + 83, + 317, + 525, + 334 + ], + "type": "text", + "content": "Explain how you were able to answer the question without access to the image - only the question_stem and choices." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 83, + 334, + 202, + 342 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 334, + 202, + 342 + ], + "spans": [ + { + "bbox": [ + 83, + 334, + 202, + 342 + ], + "type": "text", + "content": "What strategies did you use?" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 83, + 350, + 397, + 358 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 350, + 397, + 358 + ], + "spans": [ + { + "bbox": [ + 83, + 350, + 397, + 358 + ], + "type": "text", + "content": "Then, briefly summarize these \"language-only strategies\" into a short list." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 83, + 365, + 293, + 374 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 365, + 293, + 374 + ], + "spans": [ + { + "bbox": [ + 83, + 365, + 293, + 374 + ], + "type": "text", + "content": "Finally, say whether your reasoning was closer to:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 83, + 374, + 435, + 382 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 374, + 435, + 382 + ], + "spans": [ + { + "bbox": [ + 83, + 374, + 435, + 382 + ], + "type": "text", + "content": "- 'deduction': you answered with high confidence because you had enough information." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 83, + 382, + 405, + 390 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 382, + 405, + 390 + ], + "spans": [ + { + "bbox": [ + 83, + 382, + 405, + 390 + ], + "type": "text", + "content": "- 'abduction': you answered with lower confidence you made an informed guess." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 83, + 397, + 226, + 406 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 397, + 226, + 406 + ], + "spans": [ + { + "bbox": [ + 83, + 397, + 226, + 406 + ], + "type": "text", + "content": "Be concise in your final response." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 83, + 421, + 122, + 429 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 421, + 122, + 429 + ], + "spans": [ + { + "bbox": [ + 83, + 421, + 122, + 429 + ], + "type": "text", + "content": "Assistant" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 83, + 437, + 512, + 454 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 437, + 512, + 454 + ], + "spans": [ + { + "bbox": [ + 83, + 437, + 512, + 454 + ], + "type": "text", + "content": "To answer the question without the image, I relied entirely on analyzing the given information and the plausible characteristics of each option. Here's a breakdown of my strategy:" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 83, + 461, + 548, + 574 + ], + "type": "list", + "angle": 0, + "index": 23, + "blocks": [ + { + "bbox": [ + 83, + 461, + 545, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 461, + 545, + 486 + ], + "spans": [ + { + "bbox": [ + 83, + 461, + 545, + 486 + ], + "type": "text", + "content": "1. **Keyword and Context Analysis**: I focused on key descriptors such as \"eye-ball\" shaped features and their being \"anomalous\" in an electron microscopy image of neurons expressing human mHTT. This guided me to consider characteristics commonly seen under electron microscopy." 
+ } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 83, + 493, + 533, + 517 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 493, + 533, + 517 + ], + "spans": [ + { + "bbox": [ + 83, + 493, + 533, + 517 + ], + "type": "text", + "content": "2. **Knowledge of the Subject Matter**: I applied general knowledge about electron microscopy artifacts and protein behavior. This helped me weigh the likelihood of each option based on what typically causes specific image features." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 83, + 525, + 548, + 549 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 525, + 548, + 549 + ], + "spans": [ + { + "bbox": [ + 83, + 525, + 548, + 549 + ], + "type": "text", + "content": "3. **Process of Elimination**: By understanding the nature of each option-including potential artifacts or structures (such as vesicles or ice) in microscopy-I systematically assessed the validity of each scenario in relation to the described features." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 83, + 556, + 536, + 574 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 556, + 536, + 574 + ], + "spans": [ + { + "bbox": [ + 83, + 556, + 536, + 574 + ], + "type": "text", + "content": "4. **Logical Inference**: I inferred which scenario most commonly results in anomalies resembling \"eye-ball\" features, favoring the simplest explanation." + } + ] + } + ], + "index": 22 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 83, + 581, + 242, + 590 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 581, + 242, + 590 + ], + "spans": [ + { + "bbox": [ + 83, + 581, + 242, + 590 + ], + "type": "text", + "content": "Summary of \"language only strategies\":" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 83, + 590, + 256, + 621 + ], + "type": "list", + "angle": 0, + "index": 29, + "blocks": [ + { + "bbox": [ + 83, + 590, + 210, + 597 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 590, + 210, + 597 + ], + "spans": [ + { + "bbox": [ + 83, + 590, + 210, + 597 + ], + "type": "text", + "content": "- Analyze keywords and context" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 83, + 597, + 256, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 597, + 256, + 605 + ], + "spans": [ + { + "bbox": [ + 83, + 597, + 256, + 605 + ], + "type": "text", + "content": "- Apply relevant subject matter knowledge" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 83, + 605, + 201, + 613 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 605, + 201, + 613 + ], + "spans": [ + { + "bbox": [ + 83, + 605, + 201, + 613 + ], + "type": "text", + "content": "- Use process of elimination" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 83, + 613, + 189, + 621 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 613, + 189, + 621 + ], + "spans": [ + { + "bbox": [ + 83, + 613, + 189, + 621 + ], + "type": "text", + "content": "- Make logical inferences" + } + ] + } + ], + "index": 28 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 83, + 628, + 550, + 645 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 83, + 628, + 550, + 645 + ], + "spans": [ + { + "bbox": [ + 83, + 628, + 550, + 645 + ], + "type": "text", + "content": "The reasoning was closer to 'abduction' as I made an informed guess based on patterns and typical causes, given that I lacked direct visual confirmation." 
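Steps 1 and 2 above run within a single chat conversation. A minimal sketch of how that could be issued follows, assuming an OpenAI-style chat-completions client and the gpt-4o-2024-08-06 model named later in this section; the prompts are abbreviated from the ones shown above, and the function is illustrative rather than the released code.

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-2024-08-06"   # the model the text says is used for all LLM calls

def evaluate_then_reflect(question_stem: str, choices: list) -> tuple:
    """Ask the MCQ without the image, then ask for a reflection on how it was solved."""
    options = "\n".join(f"({chr(ord('a') + i)}) {c}" for i, c in enumerate(choices))
    messages = [{
        "role": "user",
        "content": ("The following question is supposed to be paired with an image. "
                    "We will not provide the image, so answer to the best of your ability.\n\n"
                    f"{question_stem}\n{options}\n"
                    'Think step by step and then output the answer in the format of '
                    '"The answer is (X)" at the end.'),
    }]
    answer = client.chat.completions.create(model=MODEL, messages=messages)
    answer_text = answer.choices[0].message.content

    # In the real loop this second turn only happens when the answer was correct;
    # it continues the same conversation, as in the transcript above.
    messages += [
        {"role": "assistant", "content": answer_text},
        {"role": "user",
         "content": ("That is correct.\n"
                     "Explain how you were able to answer the question without access to the "
                     "image - only the question_stem and choices. What strategies did you use? "
                     'Then, briefly summarize these "language-only strategies" into a short list.')},
    ]
    reflection = client.chat.completions.create(model=MODEL, messages=messages)
    return answer_text, reflection.choices[0].message.content
```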
+ } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 55, + 654, + 555, + 690 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 654, + 555, + 690 + ], + "spans": [ + { + "bbox": [ + 55, + 654, + 555, + 690 + ], + "type": "text", + "content": "3. Next, we call rewrite_question. Here, the templated variable {{conversations}} has the full conversations produced by the function analyze WHY_answerable in the previous call and from all other iterations. The {{n chatting}} just indicates how many conversations there are included:" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 82, + 696, + 469, + 704 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 696, + 469, + 704 + ], + "spans": [ + { + "bbox": [ + 82, + 696, + 469, + 704 + ], + "type": "text", + "content": "Below, I will display {{n chatting}} chat conversations between a 'user' and an LLM 'assistant'." + } + ] + } + ], + "index": 32 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 33 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 21 + }, + { + "para_blocks": [ + { + "type": "code", + "bbox": [ + 71, + 75, + 556, + 318 + ], + "blocks": [ + { + "bbox": [ + 71, + 75, + 556, + 318 + ], + "lines": [ + { + "bbox": [ + 71, + 75, + 556, + 318 + ], + "spans": [ + { + "bbox": [ + 71, + 75, + 556, + 318 + ], + "type": "text", + "content": "In each conversation\n- a user asks the assistant to answer a multichoice VQA question, however they do not provide the image. They only get the question_stem and choices.\n- the question_stem and correct answer are similar to other conversations, however the distractors are different\n- the assistant then answers correctly.\n- the user then asks the assistant to explain how it answered the question with only the text.\n- the assistant then summarizes what strategy they used to answer the question.\nAltogether, these conversations give examples of language shortcuts that need to be avoided when constructing the question_stem and choices.\nHere are the conversations:\n{{conversations}}\nYour task is to rewrite the question_stem and choices so that a different LLM 'assistant' cannot use the language-only strategies that were identified in these past conversations.\nInclude an 'explanation' about why your new set of distractors are better.\nYour revised choices should include the correct answer at the 'correct_index'.\nYour revised question_stem and choices should not significantly change the meaning of the question and correct answer.\nYou are free to change the distractors a lot to achieve this task.\nInclude {{nchoices}} choices.\nThe choices should not be made too different in length and detail to the original question stem and choices.\nNote that the question stem may contain important cues that cannot be removed. E.g. 
if a question asks about the \"green stain\" in an image, you cannot change it to \"stain\" because it introduces ambiguity.\nThe choices should not include the letters in front like (a), (b), ...\nReturn a json" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "code_body" + }, + { + "bbox": [ + 69, + 325, + 249, + 337 + ], + "lines": [ + { + "bbox": [ + 69, + 325, + 249, + 337 + ], + "spans": [ + { + "bbox": [ + 69, + 325, + 249, + 337 + ], + "type": "text", + "content": "The output of this function is a JSON and is:" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "code_caption" + } + ], + "index": 0, + "sub_type": "code", + "guess_lang": "txt" + }, + { + "type": "code", + "bbox": [ + 71, + 343, + 556, + 517 + ], + "blocks": [ + { + "bbox": [ + 71, + 343, + 556, + 517 + ], + "lines": [ + { + "bbox": [ + 71, + 343, + 556, + 517 + ], + "spans": [ + { + "bbox": [ + 71, + 343, + 556, + 517 + ], + "type": "text", + "content": "{ \"question_stem\": \"In an electron microscopy image of a neuron expressing human mHTT (BACHD) protein, distinct, spherical features are prominent and seem to be atypical. What is the most probable explanation for these features?\", \"choices\": [ \"Concentration of cellular waste products leading to distinct morphological anomalies\", \"Formation of air bubbles introduced during sample immobilization processes\", \"Ice contamination from liquid nitrogen during sample preparation\", \"Inclusion bodies formed by aggregated proteins expressing during stress response\", \"Cellular morphological changes during mitotic division\" ], \"correct_index\": 2, \"explanation\": \"The revised options introduce plausible scientific phenomena that could also result in spherical or distinct anomalies. This set of distractors is more varied and grounded in legitimate alternative explanations without leading the responder towards the correct answer through language patterns. For example, air bubbles and protein inclusion bodies might mimic sphere-like structures. Additionally, by adding an unrelated phenomenon like cellular changes during division, the set confuses the language pattern strategy previously used. Thus, the correct choice of \\(\" Ice\\) contamination from liquid nitrogen\"\\( \" stands out less and necessitates either visual confirmation or real understanding of sample preparation outcomes to identify correctly.\"" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "code_body" + } + ], + "index": 2, + "sub_type": "code", + "guess_lang": "json" + }, + { + "bbox": [ + 67, + 525, + 555, + 548 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 525, + 555, + 548 + ], + "spans": [ + { + "bbox": [ + 67, + 525, + 555, + 548 + ], + "type": "text", + "content": "Notice that the response includes an explanation of why it made the changes, and the changes relate to what was in the reflection, adding interpretability." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 548, + 471, + 561 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 548, + 471, + 561 + ], + "spans": [ + { + "bbox": [ + 55, + 548, + 471, + 561 + ], + "type": "text", + "content": "4. Finally, we check that the rewrite has not changed the semantic meaning of the question and answer." 
+ } + ] + } + ], + "index": 4 + }, + { + "type": "code", + "bbox": [ + 82, + 568, + 550, + 712 + ], + "blocks": [ + { + "bbox": [ + 82, + 568, + 550, + 712 + ], + "lines": [ + { + "bbox": [ + 82, + 568, + 550, + 712 + ], + "spans": [ + { + "bbox": [ + 82, + 568, + 550, + 712 + ], + "type": "text", + "content": "Prompt \n\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\* \nBelow are two question-answer pairs. \nThe question-answer pairs are part of VQA triplets, and both pairs use the same image. \nAre these question-answer pairs semantically equivalent? \nOr are they significantly different? \nGive a true/false and also an explanation. \nQUESTION 1: \nIn an electron microscopy image of a mouse primary embryonic cortical neuron expressing full-length human mHTT (BACHD), \"eye-ball\" shaped features are observed in the top left corner that appear anomalous. What is the most likely cause of these features? \nANSWER 1: \nIce contamination from liquid nitrogen during sample preparation \nQUESTION 2:" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "code_body" + } + ], + "index": 5, + "sub_type": "code", + "guess_lang": "txt" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 732, + 310, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 732, + 310, + 742 + ], + "spans": [ + { + "bbox": [ + 300, + 732, + 310, + 742 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 22 + }, + { + "para_blocks": [ + { + "type": "code", + "bbox": [ + 67, + 74, + 557, + 215 + ], + "blocks": [ + { + "bbox": [ + 67, + 74, + 557, + 215 + ], + "lines": [ + { + "bbox": [ + 67, + 74, + 557, + 215 + ], + "spans": [ + { + "bbox": [ + 67, + 74, + 557, + 215 + ], + "type": "text", + "content": "In an electron microscopy image of a neuron expressing human mHTT (BACHD) protein, distinct, spherical features are prominent and seem to be atypical. What is the most probable explanation for these features? \nANSWER 2: \nIce contamination from liquid nitrogen during sample preparation \n\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*\\*" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "code_body" + } + ], + "index": 0, + "sub_type": "code", + "guess_lang": "txt" + }, + { + "bbox": [ + 55, + 220, + 555, + 257 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 220, + 555, + 257 + ], + "spans": [ + { + "bbox": [ + 55, + 220, + 555, + 257 + ], + "type": "text", + "content": "5. If the meaning check step fails, then we exit with a fail as we notice that once the meaning deviates it will become more distorted with further iterations. Otherwise, we update the MCQ, and start the loop again. Where the first function call in the new loop will check whether this revised MCQ is now too hard or needs to be rewritten." 
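A minimal sketch of how the meaning check in steps 4-5 could be implemented is shown below, reusing the prompt displayed above. It assumes an OpenAI-style client, and the true/false parsing is a simple heuristic, so this is illustrative rather than the released check_meaning_preserved.

```python
from openai import OpenAI

client = OpenAI()

def check_meaning_preserved(original: dict, revised: dict,
                            model: str = "gpt-4o-2024-08-06") -> bool:
    """LLM-judged semantic equivalence of the original and rewritten QA pair."""
    prompt = (
        "Below are two question-answer pairs.\n"
        "The question-answer pairs are part of VQA triplets, and both pairs use the same image.\n"
        "Are these question-answer pairs semantically equivalent? "
        "Or are they significantly different?\n"
        "Give a true/false and also an explanation.\n\n"
        f"QUESTION 1:\n{original['question']}\n"
        f"ANSWER 1:\n{original['choices'][original['correct_idx']]}\n\n"
        f"QUESTION 2:\n{revised['question']}\n"
        f"ANSWER 2:\n{revised['choices'][revised['correct_idx']]}\n"
    )
    reply = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    verdict = reply.choices[0].message.content.strip().lower()
    # Heuristic parse (an assumption): count the pair as preserved only if the
    # judge's reply leads with "true".
    return verdict.startswith("true")
```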
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 54, + 272, + 555, + 321 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 272, + 555, + 321 + ], + "spans": [ + { + "bbox": [ + 54, + 272, + 555, + 321 + ], + "type": "text", + "content": "Calibrating the preservation of meaning Since there is a risk that the rewriter will change the semantic meaning of the question and answer, it is crucial that the check_meaning_preserved is effective. For topics requiring domain expertise, this requires prompt engineering with a domain expert providing feedback. This is one of the key human bottlenecks in tuning the prompts in this system." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 54, + 335, + 555, + 396 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 335, + 555, + 396 + ], + "spans": [ + { + "bbox": [ + 54, + 335, + 555, + 396 + ], + "type": "text", + "content": "Other sources of MCQ invalidation While the check_meaning_preserved function checks that the question and answer is correct, an additional risk is that the rewrite introduces distractors that are more correct than the target answer. In our manual review evaluations (we reviewed all 1,042 MCQs), this was very rare. However, if this were a significant problem in future systems, it may be addressable by tuning the rewriter bot, with an expert human evaluator in the prompt engineering loop." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 54, + 411, + 554, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 411, + 554, + 495 + ], + "spans": [ + { + "bbox": [ + 54, + 411, + 554, + 495 + ], + "type": "text", + "content": "Extending RefineBot beyond language shortcuts We constructed RefineBot specifically to address the issue of language shortcuts – where the question is answerable without using the image. This is appropriate because we work with vision-centric VQA, meaning the questions should require image interpretation. However, the framework should work in other settings – namely language-only QA. This would require adjusting the prompts slightly. For example, the current reflection prompt asks how the question was successfully evaluated without using language; also, the current rewriter prompt directs the LLM to address language shortcuts in its rewrite. Therefore, adapting Refinebot to language-only MCQ should make these prompts more general." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 54, + 495, + 555, + 579 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 495, + 555, + 579 + ], + "spans": [ + { + "bbox": [ + 54, + 495, + 555, + 579 + ], + "type": "text", + "content": "On the other hand, the method could also be updated for the more general VQA setting – to improve questions that do not have language shortcuts, but are easy for some other reason. Here, the evaluator and reflector agents, which are currently LLMs, could be switched for an MLLM, which also ingests the image (though this would be expensive for current state-of-the-art models). We hypothesize that current MLLMs are so weak at image perception, that this is not necessary for MicroVQA – that simply removing language shortcuts will suffice for creating a challenging benchmark. As MLLMs improve at perceiving microscopy images, this general VQA RefineBot could become more useful for ensuring that the benchmark remains challenging." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 54, + 594, + 553, + 641 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 594, + 553, + 641 + ], + "spans": [ + { + "bbox": [ + 54, + 594, + 553, + 641 + ], + "type": "text", + "content": "Repeated RefineBot iterations A single run of RefineBot may exit as a FAIL, either because the rewrite changed the meaning of the question, or because the maximum number of iterations was reached. However, RefineBot can give very different outputs when changing the random seed, and we find that simply running again with a different random seed can lead to a successful run. Therefore, if the bot fails, we rerun it." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 54, + 642, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 642, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 54, + 642, + 555, + 715 + ], + "type": "text", + "content": "If an MCQ exited RefineBot successfully, then the evaluator LLM was fooled by the question (got the answer incorrect). Yet, the question could still be suboptimal in detectable ways: the LLM could be capable of narrowing the MCQ down to two options, but have guessed the wrong option; moreover, this question could be easier for different models. To ensure it is difficult enough overall, we require that the MCQ is answered incorrectly for two random seeds from two different models: gpt-4o-2024-08-06 and claude-3-5-sonnet-20241022. If not, then it fails, and we run the RefineBot again with a different random seed." + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 23 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 553, + 107 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 553, + 107 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 553, + 107 + ], + "type": "text", + "content": "We discuss in the experiments Sec. 5.1 how this process leads to a small evaluation bias against the models used as evaluators. That section shows that despite the bias, this process makes the MCQs generally harder for all models. Future versions of RefineBot could mitigate this by including a larger family of evaluator models." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 66, + 109, + 483, + 121 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 109, + 483, + 121 + ], + "spans": [ + { + "bbox": [ + 66, + 109, + 483, + 121 + ], + "type": "text", + "content": "Finally, some MCQs fail to pass the RefineBot check. In these cases, we simply keep the stage 1 MCQ." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 132, + 309, + 144 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 132, + 309, + 144 + ], + "spans": [ + { + "bbox": [ + 55, + 132, + 309, + 144 + ], + "type": "text", + "content": "E.5. 
MCQ generation results: stage 1 exam alignment" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 54, + 152, + 554, + 212 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 152, + 554, + 212 + ], + "spans": [ + { + "bbox": [ + 54, + 152, + 554, + 212 + ], + "type": "text", + "content": "The Stage 1 pipeline produced one-best-answer MCQs that preserved the content, testing focus, and validity of the expert-generated question-answer pairs. It generated an initial set of distractors following best practices outlined in educational and NBME guidelines. A key design goal of Stage 1 was to preserve the original question content without altering or introducing new information. Secondary objectives included ensuring proper formatting according to NBME guidelines and minimizing information that might provide clues to the correct answer without reference to the image." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 54, + 213, + 554, + 295 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 213, + 554, + 295 + ], + "spans": [ + { + "bbox": [ + 54, + 213, + 554, + 295 + ], + "type": "text", + "content": "While the pipeline ensured alignment with these goals, preserving the original content occasionally resulted in MCQs containing extra information, such as image descriptions or specific biological terms (i.e., gene or protein name), that could inadvertently help narrow the options. For example, stage 1 question stems sometimes included protein names, disease descriptions, or descriptions of image features, which could reduce the challenge. Figure 11 illustrates Stage 1 outputs that are appropriately formatted but may preserve details from the original input, making questions less challenging. While the distractors were biologically plausible, some options were overly easy to eliminate, reducing the challenge. These limitations are addressed and refined in Stage 2." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 54, + 297, + 554, + 393 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 297, + 554, + 393 + ], + "spans": [ + { + "bbox": [ + 54, + 297, + 554, + 393 + ], + "type": "text", + "content": "The o1-mini-2024-09-12 LLM judge, employed during DSPy optimization, compared the Stage 1 MCQs to the original inputs to evaluate (1) semantic similarity and (2) adherence to MCQ formatting. The results showed that " + }, + { + "bbox": [ + 54, + 297, + 554, + 393 + ], + "type": "inline_equation", + "content": "96\\%" + }, + { + "bbox": [ + 54, + 297, + 554, + 393 + ], + "type": "text", + "content": " of questions preserved the original meaning, and " + }, + { + "bbox": [ + 54, + 297, + 554, + 393 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 54, + 297, + 554, + 393 + ], + "type": "text", + "content": " adhered to NBME formatting guidelines. Manual analysis of the " + }, + { + "bbox": [ + 54, + 297, + 554, + 393 + ], + "type": "inline_equation", + "content": "4\\%" + }, + { + "bbox": [ + 54, + 297, + 554, + 393 + ], + "type": "text", + "content": " labeled as at least partially dissimilar revealed that these cases typically involved open-ended original questions being reformulated into closed-ended MCQs. This transformation, a necessary step for creating one-best-answer MCQs, was not a failure but rather an expected outcome of Stage 1. 
In rare instances where the original submission included multiple correct answers, the output of Stage 1 retained one correct answer and excluded others to comply with the one-best-answer format. The LLM judge flagged these cases as partially dissimilar but these changes were essential for adhering to the MCQ structure." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 54, + 394, + 553, + 478 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 394, + 553, + 478 + ], + "spans": [ + { + "bbox": [ + 54, + 394, + 553, + 478 + ], + "type": "text", + "content": "A common issue in MCQ design is that correct answers tend to be longer and more detailed than distractors. To assess this, we computed the ratio of tokens in incorrect options to those in the correct answer, where values " + }, + { + "bbox": [ + 54, + 394, + 553, + 478 + ], + "type": "inline_equation", + "content": "\\leq 1" + }, + { + "bbox": [ + 54, + 394, + 553, + 478 + ], + "type": "text", + "content": " indicate longer correct answers, and values " + }, + { + "bbox": [ + 54, + 394, + 553, + 478 + ], + "type": "inline_equation", + "content": "\\geq 1" + }, + { + "bbox": [ + 54, + 394, + 553, + 478 + ], + "type": "text", + "content": " suggest shorter correct answers. Stage 1 MCQs showed a ratio of 0.87, indicating that correct answers were only slightly longer than distractors. By comparison, naive MCQ generation yielded a ratio of 0.46, meaning correct answers were nearly twice as long as distractors, making them easier to identify. Additionally, naive MCQ generation preserved the original meaning in only " + }, + { + "bbox": [ + 54, + 394, + 553, + 478 + ], + "type": "inline_equation", + "content": "89\\%" + }, + { + "bbox": [ + 54, + 394, + 553, + 478 + ], + "type": "text", + "content": " of cases and met NBME formatting guidelines in just " + }, + { + "bbox": [ + 54, + 394, + 553, + 478 + ], + "type": "inline_equation", + "content": "70\\%" + }, + { + "bbox": [ + 54, + 394, + 553, + 478 + ], + "type": "text", + "content": ", highlighting the efficacy of the Stage 1 process at preserving information and ensuring proper MCQ formatting." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 488, + 282, + 502 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 488, + 282, + 502 + ], + "spans": [ + { + "bbox": [ + 55, + 488, + 282, + 502 + ], + "type": "text", + "content": "E.6. MCQ generation results: stage 2 RefineBot" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 54, + 508, + 554, + 591 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 508, + 554, + 591 + ], + "spans": [ + { + "bbox": [ + 54, + 508, + 554, + 591 + ], + "type": "text", + "content": "The second stage of our MCQ generation method focuses on increasing the difficulty of the questions and options. The key idea is that we want to remove language shortcuts – information in the MCQ that allows answering the question without access to the image. To aid in understanding, we construct a toy example in Fig. 10 with the three types of language shortcuts that we identify. The questions should be paired with a stain that is mitochondria, and we want the VQA to require actually inferring that from the image. In shortcut 1, the text ‘gives away’ the image content so it’s trivial to answer the question. In shortcut 2: The LLM generates implausible or weak distractors. In shortcut 3: ‘Language bias’, is a known problem in VQA [29]." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 54, + 593, + 554, + 663 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 593, + 554, + 663 + ], + "spans": [ + { + "bbox": [ + 54, + 593, + 554, + 663 + ], + "type": "text", + "content": "Moving to real MCQs, in Fig. 4 we illustrate the transformation of a question at each stage, highlighting issues and corresponding improvements. While stage 1 produces exam-style questions that preserve the QA objective, the wording of the questions and distractors often enable MLLMs to exploit shortcuts rather than demonstrate knowledge. To address this, stage 2 (RefineBot) employs a chain-of-thought process. We analyze additional examples of improvements achieved in stage 2, showing how RefineBot improves both the style and content. The key strategies correspond to the three shortcut types that we identified:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 85, + 665, + 553, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 665, + 553, + 712 + ], + "spans": [ + { + "bbox": [ + 85, + 665, + 553, + 712 + ], + "type": "text", + "content": "1. (Visual giveaway) Over-description of the image eliminates the need for image interpretation. For example, in Fig. 11 (middle row), stage 1 includes details that reveal the specifics of the image, allowing the model to propose common issues without analyzing the image. Stage 2 replaces these specifics with general terms in the question stem." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 732, + 310, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 732, + 310, + 741 + ], + "spans": [ + { + "bbox": [ + 300, + 732, + 310, + 741 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 24 + }, + { + "para_blocks": [ + { + "bbox": [ + 133, + 74, + 208, + 93 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 133, + 74, + 208, + 93 + ], + "spans": [ + { + "bbox": [ + 133, + 74, + 208, + 93 + ], + "type": "text", + "content": "Language shortcut 1: Visual giveaway" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 113, + 99, + 223, + 127 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 113, + 99, + 223, + 127 + ], + "spans": [ + { + "bbox": [ + 113, + 99, + 223, + 127 + ], + "type": "text", + "content": "Q: The green TOMM20 outer mitochondrial membrane protein localizes where?" 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 113, + 135, + 170, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 113, + 135, + 170, + 144 + ], + "spans": [ + { + "bbox": [ + 113, + 135, + 170, + 144 + ], + "type": "text", + "content": "A) Mitochondria" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 114, + 144, + 154, + 172 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 114, + 144, + 154, + 153 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 144, + 154, + 153 + ], + "spans": [ + { + "bbox": [ + 114, + 144, + 154, + 153 + ], + "type": "text", + "content": "B) Nucleus" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 114, + 153, + 138, + 162 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 153, + 138, + 162 + ], + "spans": [ + { + "bbox": [ + 114, + 153, + 138, + 162 + ], + "type": "text", + "content": "C) ER" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 114, + 162, + 145, + 172 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 162, + 145, + 172 + ], + "spans": [ + { + "bbox": [ + 114, + 162, + 145, + 172 + ], + "type": "text", + "content": "D) Golgi" + } + ] + } + ], + "index": 5 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 111, + 190, + 228, + 218 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 190, + 228, + 218 + ], + "spans": [ + { + "bbox": [ + 111, + 190, + 228, + 218 + ], + "type": "text", + "content": "Why? The 'M' in \"TOMM20\" stands for 'mitochondria', so that's enough to answer the question." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 268, + 74, + 341, + 92 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 268, + 74, + 341, + 92 + ], + "spans": [ + { + "bbox": [ + 268, + 74, + 341, + 92 + ], + "type": "text", + "content": "Language shortcut 2: Weak distractors" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 248, + 104, + 353, + 123 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 104, + 353, + 123 + ], + "spans": [ + { + "bbox": [ + 248, + 104, + 353, + 123 + ], + "type": "text", + "content": "Q: What is the localization of the green stain?" 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 248, + 130, + 304, + 167 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 248, + 130, + 304, + 140 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 130, + 304, + 140 + ], + "spans": [ + { + "bbox": [ + 248, + 130, + 304, + 140 + ], + "type": "text", + "content": "A) Mitochondria" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 248, + 140, + 286, + 150 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 140, + 286, + 150 + ], + "spans": [ + { + "bbox": [ + 248, + 140, + 286, + 150 + ], + "type": "text", + "content": "B) Butane" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 248, + 150, + 288, + 158 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 150, + 288, + 158 + ], + "spans": [ + { + "bbox": [ + 248, + 150, + 288, + 158 + ], + "type": "text", + "content": "C) Quokka" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 248, + 159, + 280, + 167 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 159, + 280, + 167 + ], + "spans": [ + { + "bbox": [ + 248, + 159, + 280, + 167 + ], + "type": "text", + "content": "D) Coatf" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 246, + 190, + 353, + 218 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 190, + 353, + 218 + ], + "spans": [ + { + "bbox": [ + 246, + 190, + 353, + 218 + ], + "type": "text", + "content": "Why? The LLM generates distractors that are not plausible, and can easily be eliminated" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 402, + 74, + 476, + 93 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 402, + 74, + 476, + 93 + ], + "spans": [ + { + "bbox": [ + 402, + 74, + 476, + 93 + ], + "type": "text", + "content": "Language shortcut 3: Language bias" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 381, + 99, + 490, + 127 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 381, + 99, + 490, + 127 + ], + "spans": [ + { + "bbox": [ + 381, + 99, + 490, + 127 + ], + "type": "text", + "content": "Q: For a study on Parkinson's disease, we stain a target green. Where does it localize?" 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 381, + 135, + 439, + 172 + ], + "type": "list", + "angle": 0, + "index": 22, + "blocks": [ + { + "bbox": [ + 381, + 135, + 439, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 381, + 135, + 439, + 144 + ], + "spans": [ + { + "bbox": [ + 381, + 135, + 439, + 144 + ], + "type": "text", + "content": "A) Mitochondria" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 381, + 144, + 423, + 153 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 381, + 144, + 423, + 153 + ], + "spans": [ + { + "bbox": [ + 381, + 144, + 423, + 153 + ], + "type": "text", + "content": "B) Nucleus" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 381, + 154, + 407, + 162 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 381, + 154, + 407, + 162 + ], + "spans": [ + { + "bbox": [ + 381, + 154, + 407, + 162 + ], + "type": "text", + "content": "C) ER" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 381, + 162, + 413, + 172 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 381, + 162, + 413, + 172 + ], + "spans": [ + { + "bbox": [ + 381, + 162, + 413, + 172 + ], + "type": "text", + "content": "D) Golgi" + } + ] + } + ], + "index": 21 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 380, + 180, + 489, + 227 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 380, + 180, + 489, + 227 + ], + "spans": [ + { + "bbox": [ + 380, + 180, + 489, + 227 + ], + "type": "text", + "content": "Why? 'Mitochondria' is the most plausible answer because the question mentions Parkinson's disease, which frequently studies mitochondrial dysfunction" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 55, + 239, + 555, + 262 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 239, + 555, + 262 + ], + "spans": [ + { + "bbox": [ + 55, + 239, + 555, + 262 + ], + "type": "text", + "content": "Figure 10. Three types of language shortcut relevant to MicroVQA. The target VQA has an image that is fluorescence microscopy stained with TOMM20 which would show a pattern consistent with visualizing mitochondria." + } + ] + } + ], + "index": 24, + "type": "text" + }, + { + "bbox": [ + 85, + 283, + 555, + 357 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 85, + 283, + 555, + 319 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 283, + 555, + 319 + ], + "spans": [ + { + "bbox": [ + 85, + 283, + 555, + 319 + ], + "type": "text", + "content": "2. (Weak distractors) Distractors unrelated to the specifics of the scenario described by the question. In Fig. 11 (bottom row), stage 2 increases distractor complexity by incorporating elements tied to the question's context, making them less predictable." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 85, + 319, + 555, + 357 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 319, + 555, + 357 + ], + "spans": [ + { + "bbox": [ + 85, + 319, + 555, + 357 + ], + "type": "text", + "content": "3. (Language bias) Overly precise wording in the question stem: Precise phrasing can make it easy to identify the correct answer through pattern matching. For example, Fig. 11 (top row), changing 'eye-ball shaped' to 'spherical' retains the question's intent while creating more challenging distractors referencing plausible alternative processes." 
+ } + ] + } + ], + "index": 26 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 25 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 55, + 69, + 553, + 509 + ], + "blocks": [ + { + "bbox": [ + 55, + 69, + 553, + 509 + ], + "lines": [ + { + "bbox": [ + 55, + 69, + 553, + 509 + ], + "spans": [ + { + "bbox": [ + 55, + 69, + 553, + 509 + ], + "type": "image", + "image_path": "be6415061000edb79f7e653656032f75e763b7c4fe5f97d82e4c52b46e24c788.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 518, + 555, + 541 + ], + "lines": [ + { + "bbox": [ + 55, + 518, + 555, + 541 + ], + "spans": [ + { + "bbox": [ + 55, + 518, + 555, + 541 + ], + "type": "text", + "content": "Figure 11. Examples of changes to questions and options between stage 1 and stage 2 (RefineBot) of our MCQ generation method. In red elements that need to be improved and in green improvements." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 563, + 174, + 577 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 563, + 174, + 577 + ], + "spans": [ + { + "bbox": [ + 55, + 563, + 174, + 577 + ], + "type": "text", + "content": "F. Experiments details" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 584, + 173, + 597 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 584, + 173, + 597 + ], + "spans": [ + { + "bbox": [ + 55, + 584, + 173, + 597 + ], + "type": "text", + "content": "F.1. Evaluation prompts" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 604, + 555, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 604, + 555, + 628 + ], + "spans": [ + { + "bbox": [ + 55, + 604, + 555, + 628 + ], + "type": "text", + "content": "All prompts except for LLaVA and LLaVA-Med used the chain-of-thought prompt template from the MMMU-Pro code [88]:" + } + ] + } + ], + "index": 4 + }, + { + "type": "code", + "bbox": [ + 66, + 647, + 470, + 706 + ], + "blocks": [ + { + "bbox": [ + 66, + 647, + 470, + 706 + ], + "lines": [ + { + "bbox": [ + 66, + 647, + 470, + 706 + ], + "spans": [ + { + "bbox": [ + 66, + 647, + 470, + 706 + ], + "type": "text", + "content": "The following is a multiple choice question (with answers). Think step by step and then output the answer in the format of " + }, + { + "bbox": [ + 66, + 647, + 470, + 706 + ], + "type": "inline_equation", + "content": "\"The" + }, + { + "bbox": [ + 66, + 647, + 470, + 706 + ], + "type": "text", + "content": " answer is " + }, + { + "bbox": [ + 66, + 647, + 470, + 706 + ], + "type": "inline_equation", + "content": "(\\mathrm{X})\\backslash \"" + }, + { + "bbox": [ + 66, + 647, + 470, + 706 + ], + "type": "text", + "content": " at the end. 
{{QUESTION}}} Options: {{CHOICES}}" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "code_body" + } + ], + "index": 5, + "sub_type": "code", + "guess_lang": "txt" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 26 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 73, + 206, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 73, + 206, + 84 + ], + "spans": [ + { + "bbox": [ + 67, + 73, + 206, + 84 + ], + "type": "text", + "content": "An example complete question is:" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 66, + 102, + 315, + 112 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 102, + 315, + 112 + ], + "spans": [ + { + "bbox": [ + 66, + 102, + 315, + 112 + ], + "type": "text", + "content": "The following is a multiple choice question (with answers)." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 66, + 112, + 462, + 120 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 112, + 462, + 120 + ], + "spans": [ + { + "bbox": [ + 66, + 112, + 462, + 120 + ], + "type": "text", + "content": "Think step by step and then output the answer in the format of \"The answer is (X)\" at the end." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 66, + 127, + 541, + 160 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 127, + 541, + 160 + ], + "spans": [ + { + "bbox": [ + 66, + 127, + 541, + 160 + ], + "type": "text", + "content": "A fluorescence microscopy image displays cultured mouse primary neurons stained with a specific marker localized to mitochondria. The fluorescence signal exhibits an uneven distribution across the neuronal cell, and there is an absence of signal within the nucleus. What is the most probable cause for the uneven distribution of the mitochondrial signal?" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 167, + 102, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 167, + 102, + 175 + ], + "spans": [ + { + "bbox": [ + 67, + 167, + 102, + 175 + ], + "type": "text", + "content": "Options:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 77, + 176, + 542, + 231 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 77, + 176, + 542, + 191 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 176, + 542, + 191 + ], + "spans": [ + { + "bbox": [ + 77, + 176, + 542, + 191 + ], + "type": "text", + "content": "(1): Mitochondrial transport along axonal and dendritic processes ensures uneven distribution based on cellular energy demands." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 77, + 191, + 541, + 200 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 191, + 541, + 200 + ], + "spans": [ + { + "bbox": [ + 77, + 191, + 541, + 200 + ], + "type": "text", + "content": "(2): Differential expression of mitochondrial proteins leads to variable fluorescence signal across the neuron." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 77, + 200, + 541, + 208 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 200, + 541, + 208 + ], + "spans": [ + { + "bbox": [ + 77, + 200, + 541, + 208 + ], + "type": "text", + "content": "(3): Mitochondrial biogenesis occurs at specific neuronal regions, causing localized fluorescence accumulation." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 77, + 208, + 541, + 215 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 208, + 541, + 215 + ], + "spans": [ + { + "bbox": [ + 77, + 208, + 541, + 215 + ], + "type": "text", + "content": "(4): Mitochondrial anchoring to the cytoskeleton restricts their movement, resulting in uneven distribution." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 77, + 215, + 541, + 231 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 215, + 541, + 231 + ], + "spans": [ + { + "bbox": [ + 77, + 215, + 541, + 231 + ], + "type": "text", + "content": "(5): Mitochondrial dynamics, including fission and fusion, allow mitochondria to move and distribute throughout the neuron." + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 67, + 253, + 419, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 253, + 419, + 264 + ], + "spans": [ + { + "bbox": [ + 67, + 253, + 419, + 264 + ], + "type": "text", + "content": "The answer is then extracted with the regex pattern: answer is " + }, + { + "bbox": [ + 67, + 253, + 419, + 264 + ], + "type": "text", + "content": "\\(?([0-9])\\)" + }, + { + "bbox": [ + 67, + 253, + 419, + 264 + ], + "type": "text", + "content": "?" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 54, + 266, + 554, + 313 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 266, + 554, + 313 + ], + "spans": [ + { + "bbox": [ + 54, + 266, + 554, + 313 + ], + "type": "text", + "content": "For LLaVA-Med, we added an extra parsing strategy. This was because we observed that while LLaVA-Med often gave reasonable responses to questions, it failed to follow instructions well. First, it did not follow the CoT prompt consistently to think step by step. It also didn't use the right answer formatting: \"The answer is (X)\". Since we were more interested in biomedical reasoning than adhering to VQA formatting, we decided to handle this model differently." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 67, + 315, + 390, + 326 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 315, + 390, + 326 + ], + "spans": [ + { + "bbox": [ + 67, + 315, + 390, + 326 + ], + "type": "text", + "content": "Specifically, to encourage CoT reasoning, we fixed the prefix of the response to:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 66, + 327, + 525, + 338 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 327, + 525, + 338 + ], + "spans": [ + { + "bbox": [ + 66, + 327, + 525, + 338 + ], + "type": "text", + "content": "Response: Let's think through this step by step and evaluate each option.."
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 55, + 338, + 195, + 349 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 338, + 195, + 349 + ], + "spans": [ + { + "bbox": [ + 55, + 338, + 195, + 349 + ], + "type": "text", + "content": "Then we added a suffix to the end:" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 55, + 350, + 521, + 374 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 350, + 521, + 374 + ], + "spans": [ + { + "bbox": [ + 55, + 350, + 521, + 374 + ], + "type": "text", + "content": "After carefully considering all the options, the most likely answer is (1). Then we continued generation, and did regular regex parsing to get the final answer." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 54, + 375, + 554, + 434 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 375, + 554, + 434 + ], + "spans": [ + { + "bbox": [ + 54, + 375, + 554, + 434 + ], + "type": "text", + "content": "We also find that a few models make minor but consistent errors with formatting the answer and elect to fix them. For example x-ai/grok-2-vision-1212 commonly adds two asterisks before the answer like \"The answer is " + }, + { + "bbox": [ + 54, + 375, + 554, + 434 + ], + "type": "inline_equation", + "content": "^{**}(4)^{**}" + }, + { + "bbox": [ + 54, + 375, + 554, + 434 + ], + "type": "text", + "content": "\" instead of \"The answer is (4)\" (GPT-4o also did this sometimes). Some other common errors that we saw but did not handle (because it would be difficult to do rigorously) was shown by meta-llama/llama-3.2-11b-vision-instruct: the response would write one of the MCQ options but not state its number." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 55, + 445, + 141, + 456 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 445, + 141, + 456 + ], + "spans": [ + { + "bbox": [ + 55, + 445, + 141, + 456 + ], + "type": "text", + "content": "F.2. Model details" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 54, + 464, + 554, + 487 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 464, + 554, + 487 + ], + "spans": [ + { + "bbox": [ + 54, + 464, + 554, + 487 + ], + "type": "text", + "content": "Our main benchmarking experiments in Tab. 3 refer to models that have multiple versions. The exact model versions used for inference are shown in Tab. 6" + } + ] + } + ], + "index": 19 + }, + { + "type": "table", + "bbox": [ + 87, + 497, + 522, + 692 + ], + "blocks": [ + { + "bbox": [ + 87, + 497, + 522, + 692 + ], + "lines": [ + { + "bbox": [ + 87, + 497, + 522, + 692 + ], + "spans": [ + { + "bbox": [ + 87, + 497, + 522, + 692 + ], + "type": "table", + "html": "
Model NameAPI EndpointSource & Details
o1o1-2024-12-17OpenAI API
Claude-3.5-Sonnet*anthropic/claude-3.5-sonnet-20240620Openrouter API
Gemini-Pro-1.5google/gemini-pro-1.5Openrouter API
Pixtral-Largemistralai/pixtral-large-2411Openrouter API
Grok-2-Visionx-ai/grok-2-vision-1212Openrouter API
Qwen-2-vl-72b-Instructqwen/qwen-2-vl-72b-instructOpenrouter API
VILA1.5-40bVILA1.5-40bHuggingFace, local inference
GPT-4ogpt-4o-2024-08-06OpenAI API
Llama-3.1-Nemotron-70b-Instructnvidia/llama-3.1-nemotron-70b-instructOpenrouter API
Llama-3.2-90b-Vision-Instructmeta-llama/llama-3.2-90b-vision-instructOpenrouter API
Qwen-2-VL-7bqwen/qwen-2-vl-7b-instructOpenrouter API
Claude-3.5-Haikuanthropic/claude-3.5-haikuOpenrouter API
Gemini-Flash-1.5-8bgoogle/gemini-flash-1.5-8bOpenrouter API
GPT-4o-minigpt-4o-mini-2024-07-18OpenAI API
Pixtral-12bmistralai/pixtral-12bOpenrouter API
VILA1.5-13bVILA1.5-13bHuggingFace, local inference
Llama-3.2-11b-vision-instructmeta-llama/llama-3.2-11b-vision-instructOpenrouter API
LLaVA-Med-Mistral-7BLLaVA-Med-Mistral-7BHuggingFace, local inference
LLaVA-Mistral-7BLLaVA-Mistral-7BHuggingFace, local inference
", + "image_path": "ed6c38f64fb2524c7e826b1f14a467709fb078eff5a02a3d354453b7495b1d70.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "table_body" + } + ], + "index": 20 + }, + { + "bbox": [ + 212, + 699, + 398, + 710 + ], + "lines": [ + { + "bbox": [ + 212, + 699, + 398, + 710 + ], + "spans": [ + { + "bbox": [ + 212, + 699, + 398, + 710 + ], + "type": "text", + "content": "Table 6. AI model API endpoints and their sources" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 732, + 312, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 732, + 312, + 742 + ], + "spans": [ + { + "bbox": [ + 299, + 732, + 312, + 742 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 27 + }, + { + "para_blocks": [ + { + "bbox": [ + 54, + 72, + 555, + 120 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 72, + 555, + 120 + ], + "spans": [ + { + "bbox": [ + 54, + 72, + 555, + 120 + ], + "type": "text", + "content": "For LLaVA and LLaVA-med, although the version number 1.6 is different from LLaVA-Med with 1.5, we do believe that LLaVA-Med was either trained from this LLaVA version, or from one that is very similar. They have identical architecture, and the version is very close; it is possible the LLaVA-Med copied the number incorrectly. The LLaVA-Med codebase or github issues does not clarify this at the time of writing." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 54, + 121, + 555, + 146 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 121, + 555, + 146 + ], + "spans": [ + { + "bbox": [ + 54, + 121, + 555, + 146 + ], + "type": "text", + "content": "Note that for OpenRouter function calls to Anthropic, the date is not included in the string. We believe they use timestamps for older model versions, but not for the latest version. So reproducing it may require checking if the model has been updated." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 154, + 224, + 167 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 154, + 224, + 167 + ], + "spans": [ + { + "bbox": [ + 55, + 154, + 224, + 167 + ], + "type": "text", + "content": "F.3. Human baseline on MicroVQA" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 173, + 455, + 185 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 173, + 455, + 185 + ], + "spans": [ + { + "bbox": [ + 55, + 173, + 455, + 185 + ], + "type": "text", + "content": "To perform a human baseline, we recruited co-authors who created questions. The conditions were:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 189, + 553, + 261 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 55, + 189, + 237, + 200 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 189, + 237, + 200 + ], + "spans": [ + { + "bbox": [ + 55, + 189, + 237, + 200 + ], + "type": "text", + "content": "- Questions were provided in a google form." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 201, + 290, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 201, + 290, + 213 + ], + "spans": [ + { + "bbox": [ + 55, + 201, + 290, + 213 + ], + "type": "text", + "content": "- Participants did not view any question that they created." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 213, + 553, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 213, + 553, + 236 + ], + "spans": [ + { + "bbox": [ + 55, + 213, + 553, + 236 + ], + "type": "text", + "content": "- Internet access was allowed, but LLM access was not. This is because realistic biological practice is knowledge-intensive, and may require looking up facts or references." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 237, + 441, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 237, + 441, + 248 + ], + "spans": [ + { + "bbox": [ + 55, + 237, + 441, + 248 + ], + "type": "text", + "content": "- Time limit was 3 minutes per question. They were free to take as many breaks as they wanted." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 249, + 387, + 261 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 249, + 387, + 261 + ], + "spans": [ + { + "bbox": [ + 55, + 249, + 387, + 261 + ], + "type": "text", + "content": "- They answered the multiple choice question without further written justification." + } + ] + } + ], + "index": 8 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 54, + 264, + 555, + 300 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 264, + 555, + 300 + ], + "spans": [ + { + "bbox": [ + 54, + 264, + 555, + 300 + ], + "type": "text", + "content": "These conditions – 3 minutes and internet access – give sufficient time for an expert to make a reasonable attempt. Since the score was far below " + }, + { + "bbox": [ + 54, + 264, + 555, + 300 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 54, + 264, + 555, + 300 + ], + "type": "text", + "content": ", we claim that these questions could be of value to real experts as an LLM chat assistant. Each participant only took a subset of the questions (about 100 each). The final score is the aggregate." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 309, + 306, + 322 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 309, + 306, + 322 + ], + "spans": [ + { + "bbox": [ + 55, + 309, + 306, + 322 + ], + "type": "text", + "content": "F.4. More experiments on attribute-level information" + } + ] + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 225, + 341, + 394, + 518 + ], + "blocks": [ + { + "bbox": [ + 225, + 341, + 394, + 518 + ], + "lines": [ + { + "bbox": [ + 225, + 341, + 394, + 518 + ], + "spans": [ + { + "bbox": [ + 225, + 341, + 394, + 518 + ], + "type": "image", + "image_path": "ebc6e3c6cd413d919dbcc1c3973a6968c02e37ed4f14c33c578aea3801c93a6c.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 54, + 529, + 555, + 552 + ], + "lines": [ + { + "bbox": [ + 54, + 529, + 555, + 552 + ], + "spans": [ + { + "bbox": [ + 54, + 529, + 555, + 552 + ], + "type": "text", + "content": "Figure 12. Performance by image modality type for the best models: Gemini-1.5-Pro (closed source), VILA1.5-40B (open-source), and LlaVA-Med-Mistral-7B (medical)." + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "bbox": [ + 54, + 565, + 555, + 662 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 565, + 555, + 662 + ], + "spans": [ + { + "bbox": [ + 54, + 565, + 555, + 662 + ], + "type": "text", + "content": "We described our metadata features in Sec. 
3, and we can now leverage them to benchmark models on different attributes. Fig. 5 (left) shows all sub-tasks, and shows that 'compare image sets', which is common in biology to compare 'treatment' and 'control' groups. All models perform relatively well, despite multi-image reasoning being understudied in MLLMs [78]. Fig. 5 (right) shows that for Gemini and LLaVa-Med, higher Bloom's levels - corresponding to more advanced reasoning - leads to worse performance. This supports the intuitive result that higher-level reasoning is more challenging - although VILA does not follow this trend. A final result in Fig. 12 shows that all models performed worse on cryo-electron microscopy images than brightfield or fluorescence microscopy. This may be due to cryo-EM's challenging features (low contrast, low signal-to-noise ratio) and their under-representation in pretraining datasets [93]." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 55, + 670, + 212, + 684 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 670, + 212, + 684 + ], + "spans": [ + { + "bbox": [ + 55, + 670, + 212, + 684 + ], + "type": "text", + "content": "F.5. Language shortcut ablations" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 54, + 689, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 689, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 54, + 689, + 555, + 715 + ], + "type": "text", + "content": "In Tab. 4 we do language shortcut ablations for some top-performing models and some lower-performing models - both no-image and choices-only ablations." + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 28 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 144, + 113, + 466, + 242 + ], + "blocks": [ + { + "bbox": [ + 55, + 70, + 555, + 103 + ], + "lines": [ + { + "bbox": [ + 55, + 70, + 555, + 103 + ], + "spans": [ + { + "bbox": [ + 55, + 70, + 555, + 103 + ], + "type": "text", + "content": "Table 7. Ablation study on MicroVQA MCQs. Column 2 is overall accuracy. Cols 3-4 are the no-image ablation accuracy and drop compared to overall accuracy. Cols 5-6 are the choices-only ablation accuracy and drop compared to overall accuracy. See the text for discussion." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 144, + 113, + 466, + 242 + ], + "lines": [ + { + "bbox": [ + 144, + 113, + 466, + 242 + ], + "spans": [ + { + "bbox": [ + 144, + 113, + 466, + 242 + ], + "type": "table", + "html": "
AccuracyNo-image ablationChoices-only ablation
AccuracyDropAccuracyDrop
o152.849.2-3.637.7-15.1
Claude-3.5-Sonnet51.746.0-5.844.0-7.7
Gemini-Pro-1.551.147.2-3.836.8-14.3
Pixtral-Large49.846.3-3.636.7-13.1
Grok-2-Vision48.446.3-2.140.5-7.9
GPT-4o-mini46.244.2-1.934.3-11.9
Pixtral-12b45.643.7-1.931.8-13.8
Random22.0
", + "image_path": "2b63a3f4b0c5d1729ff388f508d13cf7d089a92cb9958a9b0d21a860aa55dd42.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 263, + 555, + 287 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 263, + 555, + 287 + ], + "spans": [ + { + "bbox": [ + 55, + 263, + 555, + 287 + ], + "type": "text", + "content": "The no-image ablation is a common test in VQA evaluations[29, 74]. The MLLM only receives the text prompt, and we add the following sentence to the prompt template:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 289, + 541, + 310 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 289, + 541, + 310 + ], + "spans": [ + { + "bbox": [ + 55, + 289, + 541, + 310 + ], + "type": "text", + "content": "If an image is mentioned ignore this information and try your best to answer the question." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 312, + 553, + 335 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 312, + 553, + 335 + ], + "spans": [ + { + "bbox": [ + 55, + 312, + 553, + 335 + ], + "type": "text", + "content": "The choices-only ablation has been explored in (non-visual) question answering [7]. The model does not receive the image or the question and must guess an answer from only the choices. We add this sentence to the prompt template:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 336, + 547, + 360 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 336, + 547, + 360 + ], + "spans": [ + { + "bbox": [ + 55, + 336, + 547, + 360 + ], + "type": "text", + "content": "However I will not give you the question text or the images, I will only give you the choices, so please try your best to answer the question." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 361, + 554, + 407 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 361, + 554, + 407 + ], + "spans": [ + { + "bbox": [ + 55, + 361, + 554, + 407 + ], + "type": "text", + "content": "Both ablations lower the scores, but they are still above random. While this is initially surprising, it is actually consistent with other benchmarks. A key challenge in VQA benchmarks is for questions to be \"vision-centric\": that the question requires using the image features to answer. We still have very strong evidence that MicroVQA is vision-centric, which we discuss below." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 409, + 553, + 433 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 409, + 553, + 433 + ], + "spans": [ + { + "bbox": [ + 55, + 409, + 553, + 433 + ], + "type": "text", + "content": "First, let's recall the language shortcuts by considering a VQA sample with an image of a green field, with the question \"what color is the grass\"." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 434, + 553, + 517 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 55, + 434, + 553, + 469 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 434, + 553, + 469 + ], + "spans": [ + { + "bbox": [ + 55, + 434, + 553, + 469 + ], + "type": "text", + "content": "- 'Language bias' is well-known for enabling models to make educated guesses about MCQs without access to the image [29]. 
For the question \"what color is the grass?\", then without the image, \"green\" is a likely guess, although there are other possible answers, like \"brown\"." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 55, + 470, + 553, + 493 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 470, + 553, + 493 + ], + "spans": [ + { + "bbox": [ + 55, + 470, + 553, + 493 + ], + "type": "text", + "content": "- 'Visual giveaway' means there are too many image details in the question, for example if the question were \"what color is the green grass?\" This removes any need to look at the image." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 494, + 553, + 517 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 494, + 553, + 517 + ], + "spans": [ + { + "bbox": [ + 55, + 494, + 553, + 517 + ], + "type": "text", + "content": "- 'Weak distractors' gives other question choices that are easily eliminated. For example, for question \"what color is the grass?\", a bad distractors would be \"pink\". This also removes any need to look at the image." + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 55, + 518, + 553, + 542 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 518, + 553, + 542 + ], + "spans": [ + { + "bbox": [ + 55, + 518, + 553, + 542 + ], + "type": "text", + "content": "(In Appendix E.6, we also discussed language shortcuts with biology examples as it related to MCQ construction.) Now we argue that MicroVQA is still vision-centric despite the ablation results." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 55, + 544, + 554, + 677 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 55, + 544, + 554, + 639 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 544, + 554, + 639 + ], + "spans": [ + { + "bbox": [ + 55, + 544, + 554, + 639 + ], + "type": "text", + "content": "1. Language bias allows cheating on vision-centric questions If answerable due to a 'language bias', the question may still require perception. A model may guess that the grass is green, but it is only a guess - the grass really could be brown [29]. Therefore correct answers under the 'no-image' ablation may still be vision-centric. On the other hand problems with 'visual giveaway' and 'weak distractor' shortcuts really do render the question not-vision-centric. The RefineBot method we employed in MCQ generation was designed to target language shortcut removal. Anecdotally, our qualitative analysis (from Appendix E.6) found that RefineBot tends to focus on fixing the visual giveaway and weak distractor shortcuts. Ideally we would perform a quantitative analysis to identify which language shortcuts are due to language bias versus the others, however analyzing these questions is very time-consuming." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 55, + 641, + 553, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 641, + 553, + 677 + ], + "spans": [ + { + "bbox": [ + 55, + 641, + 553, + 677 + ], + "type": "text", + "content": "2. The unsolved questions are hard because of perception. The 'no-image' ablation only tells us about the 'easy' questions, but half the question is too hard for current models. 
Our qualitative analysis of 30 failure cases demonstrates that half of them made critical perception errors, which strongly suggests they require image understanding." + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 55, + 677, + 554, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 677, + 554, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 677, + 554, + 714 + ], + "type": "text", + "content": "This stands in contrast to the 'vision-centric' claims from the Cambrian paper [74]: they show that many VQA benchmarks have no-image ablation scores only slightly underperforming the with-image scores, and they conclude that such VQA benchmarks \"do not properly measure vision-centric capabilities\". But that conclusion cannot be drawn from only the no-image ablation" + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "text", + "content": "18" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 29 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 555, + 120 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 555, + 120 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 555, + 120 + ], + "type": "text", + "content": "(following our point 2 above). For benchmarks where MLLMs score far from " + }, + { + "bbox": [ + 55, + 72, + 555, + 120 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 55, + 72, + 555, + 120 + ], + "type": "text", + "content": ", the failures could mostly be due to poor image understanding. This is very plausible since visual capabilities in MLLMs are viewed as lagging behind language capabilities. Likewise, our experience with our microscopy data was that visual understanding was poor, but that language knowledge was impressive." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 121, + 555, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 121, + 555, + 156 + ], + "spans": [ + { + "bbox": [ + 55, + 121, + 555, + 156 + ], + "type": "text", + "content": "In point 1 above, we argued that MLLMs can use language biases to pass the no-image ablation, even if they are vision-centric questions. This is actually in agreement with the claim from Cambrian that those questions don't test vision-centric capabilities." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 156, + 555, + 192 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 156, + 555, + 192 + ], + "spans": [ + { + "bbox": [ + 55, + 156, + 555, + 192 + ], + "type": "text", + "content": "Finally, some non-vision-centric questions are acceptable. The goal of MicroVQA is to source realistic problems from microscopy research. While this area does lean heavily on image analysis, not every question will realistically require perception, so this is acceptable."
+ } + ] + } + ], + "index": 2 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "text", + "content": "19" + } + ] + } + ], + "index": 3 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 30 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 145, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 145, + 83 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 145, + 83 + ], + "type": "text", + "content": "F.6. Error analysis" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 90, + 555, + 114 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 90, + 555, + 114 + ], + "spans": [ + { + "bbox": [ + 55, + 90, + 555, + 114 + ], + "type": "text", + "content": "Here we show samples from the manual qualitative error analysis (details in the main paper). To see all 30 examples, refer the github repo." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 129, + 163, + 140 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 129, + 163, + 140 + ], + "spans": [ + { + "bbox": [ + 55, + 129, + 163, + 140 + ], + "type": "text", + "content": "F.6.1 Perception errors" + } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 65, + 161, + 239, + 402 + ], + "blocks": [ + { + "bbox": [ + 65, + 161, + 239, + 402 + ], + "lines": [ + { + "bbox": [ + 65, + 161, + 239, + 402 + ], + "spans": [ + { + "bbox": [ + 65, + 161, + 239, + 402 + ], + "type": "image", + "image_path": "3746b9ae2847c78b38a88ee79e517e75b632e6cdb134845904911e91617e5f4b.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 64, + 408, + 225, + 421 + ], + "lines": [ + { + "bbox": [ + 64, + 408, + 225, + 421 + ], + "spans": [ + { + "bbox": [ + 64, + 408, + 225, + 421 + ], + "type": "text", + "content": "Subject: Cell and molecular biology" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 65, + 422, + 182, + 432 + ], + "lines": [ + { + "bbox": [ + 65, + 422, + 182, + 432 + ], + "spans": [ + { + "bbox": [ + 65, + 422, + 182, + 432 + ], + "type": "text", + "content": "Bloom's level: 4 (Analysis)" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 65, + 434, + 191, + 444 + ], + "lines": [ + { + "bbox": [ + 65, + 434, + 191, + 444 + ], + "spans": [ + { + "bbox": [ + 65, + 434, + 191, + 444 + ], + "type": "text", + "content": "Specimen: mammalian cells" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 286, + 647, + 323, + 657 + ], + "lines": [ + { + "bbox": [ + 286, + 647, + 323, + 657 + ], + "spans": [ + { + "bbox": [ + 286, + 647, + 323, + 657 + ], + "type": "text", + "content": "Figure 13" + } + ] + } + ], + "index": 36, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 255, + 166, + 539, + 227 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 166, + 539, + 227 + ], + "spans": [ + { + "bbox": [ + 255, + 166, + 539, + 227 + ], + "type": "text", + "content": "Q: A cryo-electron tomography (cryoET) slice of mammalian cells was acquired at a resolution of 1.3 Å and binned by 4 (resulting in a 5.3 Å per pixel). Within the tomogram, dark circular like structures are observed in an organelle. 
What is the modest likely identity of these structures?" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 256, + 238, + 334, + 250 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 256, + 238, + 334, + 250 + ], + "spans": [ + { + "bbox": [ + 256, + 238, + 334, + 250 + ], + "type": "text", + "content": "A) Ribosomes X" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 257, + 251, + 515, + 286 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 257, + 251, + 515, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 251, + 515, + 262 + ], + "spans": [ + { + "bbox": [ + 257, + 251, + 515, + 262 + ], + "type": "text", + "content": "B) Phase-separated condensates such as stress granules" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 257, + 263, + 321, + 274 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 263, + 321, + 274 + ], + "spans": [ + { + "bbox": [ + 257, + 263, + 321, + 274 + ], + "type": "text", + "content": "C) Lysosomes" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 257, + 275, + 327, + 286 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 275, + 327, + 286 + ], + "spans": [ + { + "bbox": [ + 257, + 275, + 327, + 286 + ], + "type": "text", + "content": "D) Peroxisomes" + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 255, + 300, + 477, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 300, + 477, + 312 + ], + "spans": [ + { + "bbox": [ + 255, + 300, + 477, + 312 + ], + "type": "text", + "content": "VLM Answer: Let me think about this step by step:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 256, + 313, + 537, + 552 + ], + "type": "list", + "angle": 0, + "index": 31, + "blocks": [ + { + "bbox": [ + 256, + 313, + 457, + 324 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 256, + 313, + 457, + 324 + ], + "spans": [ + { + "bbox": [ + 256, + 313, + 457, + 324 + ], + "type": "text", + "content": "1) First, let's look at what we see in the image:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 257, + 325, + 523, + 336 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 325, + 523, + 336 + ], + "spans": [ + { + "bbox": [ + 257, + 325, + 523, + 336 + ], + "type": "text", + "content": "- A cryo-electron tomography slice showing cellular structures" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 257, + 337, + 412, + 347 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 337, + 412, + 347 + ], + "spans": [ + { + "bbox": [ + 257, + 337, + 412, + 347 + ], + "type": "text", + "content": "- Circular-like dark structures visible" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 257, + 349, + 465, + 361 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 349, + 465, + 361 + ], + "spans": [ + { + "bbox": [ + 257, + 349, + 465, + 361 + ], + "type": "text", + "content": "- The resolution is quite high (5.3 Å after binning)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 257, + 361, + 523, + 372 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 361, + 523, + 372 + ], + "spans": [ + { + "bbox": [ + 257, + 361, + 523, + 372 + ], + "type": "text", + "content": "2) Let's consider the size and appearance of these structures:" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 257, + 373, + 493, + 384 + ], + "type": "text", + "angle": 0, + 
"lines": [ + { + "bbox": [ + 257, + 373, + 493, + 384 + ], + "spans": [ + { + "bbox": [ + 257, + 373, + 493, + 384 + ], + "type": "text", + "content": "- They appear as dense, small, roughly circular objects" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 257, + 385, + 537, + 396 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 385, + 537, + 396 + ], + "spans": [ + { + "bbox": [ + 257, + 385, + 537, + 396 + ], + "type": "text", + "content": "- The size is consistent with ribosomes (approximately " + }, + { + "bbox": [ + 257, + 385, + 537, + 396 + ], + "type": "inline_equation", + "content": "25 - 30\\mathrm{nm}" + }, + { + "bbox": [ + 257, + 385, + 537, + 396 + ], + "type": "text", + "content": ")" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 257, + 397, + 504, + 419 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 397, + 504, + 419 + ], + "spans": [ + { + "bbox": [ + 257, + 397, + 504, + 419 + ], + "type": "text", + "content": "- They show the characteristic electron density pattern of ribosomes" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 257, + 421, + 381, + 432 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 421, + 381, + 432 + ], + "spans": [ + { + "bbox": [ + 257, + 421, + 381, + 432 + ], + "type": "text", + "content": "3) Let's analyze each option:" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 257, + 433, + 501, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 433, + 501, + 445 + ], + "spans": [ + { + "bbox": [ + 257, + 433, + 501, + 445 + ], + "type": "text", + "content": "- Ribosomes: Match the size, shape, and density pattern" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 257, + 445, + 514, + 468 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 445, + 514, + 468 + ], + "spans": [ + { + "bbox": [ + 257, + 445, + 514, + 468 + ], + "type": "text", + "content": "- Phase-separated condensates: Usually less defined, more amorphous" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 257, + 469, + 501, + 480 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 469, + 501, + 480 + ], + "spans": [ + { + "bbox": [ + 257, + 469, + 501, + 480 + ], + "type": "text", + "content": "- Lysosomes: Much larger, membrane-bound organelles" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 257, + 481, + 501, + 492 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 481, + 501, + 492 + ], + "spans": [ + { + "bbox": [ + 257, + 481, + 501, + 492 + ], + "type": "text", + "content": "- Peroxisomes: Also larger, membrane-bound organelles" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 257, + 493, + 380, + 503 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 493, + 380, + 503 + ], + "spans": [ + { + "bbox": [ + 257, + 493, + 380, + 503 + ], + "type": "text", + "content": "4) Additional considerations:" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 257, + 505, + 510, + 516 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 505, + 510, + 516 + ], + "spans": [ + { + "bbox": [ + 257, + 505, + 510, + 516 + ], + "type": "text", + "content": "- Ribosomes are often found associated with mitochondria" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 257, + 517, + 510, + 528 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 517, + 510, + 528 + ], + "spans": [ + { + "bbox": [ + 257, + 517, + 510, + 
528 + ], + "type": "text", + "content": "- The resolution is sufficient to resolve ribosomal structures" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 256, + 529, + 518, + 552 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 256, + 529, + 518, + 552 + ], + "spans": [ + { + "bbox": [ + 256, + 529, + 518, + 552 + ], + "type": "text", + "content": "- The contrast and density are typical of ribosome imaging in cryoET" + } + ] + } + ], + "index": 30 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 257, + 553, + 331, + 564 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 553, + 331, + 564 + ], + "spans": [ + { + "bbox": [ + 257, + 553, + 331, + 564 + ], + "type": "text", + "content": "The answer is (1)" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 257, + 581, + 358, + 592 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 581, + 358, + 592 + ], + "spans": [ + { + "bbox": [ + 257, + 581, + 358, + 592 + ], + "type": "text", + "content": "Error type: Perception" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 257, + 593, + 474, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 593, + 474, + 605 + ], + "spans": [ + { + "bbox": [ + 257, + 593, + 474, + 605 + ], + "type": "text", + "content": "Error reason: These are granules characterized by" + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 257, + 605, + 542, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 257, + 605, + 542, + 628 + ], + "spans": [ + { + "bbox": [ + 257, + 605, + 542, + 628 + ], + "type": "text", + "content": "heterogeneous, electron-dense aggregates. Ribosomes would be smaller, more regular in shape, and not have as high contrast." + } + ] + } + ], + "index": 35 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "type": "text", + "content": "20" + } + ] + } + ], + "index": 37 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 31 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 60, + 74, + 233, + 219 + ], + "blocks": [ + { + "bbox": [ + 60, + 74, + 233, + 219 + ], + "lines": [ + { + "bbox": [ + 60, + 74, + 233, + 219 + ], + "spans": [ + { + "bbox": [ + 60, + 74, + 233, + 219 + ], + "type": "image", + "image_path": "a55baf6d76f11541649358ddec8e664a4e40a1e350239dd7102746bd0dbe4c24.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 62, + 227, + 122, + 236 + ], + "lines": [ + { + "bbox": [ + 62, + 227, + 122, + 236 + ], + "spans": [ + { + "bbox": [ + 62, + 227, + 122, + 236 + ], + "type": "text", + "content": "Subject: Anatomy" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 236, + 148, + 245 + ], + "lines": [ + { + "bbox": [ + 63, + 236, + 148, + 245 + ], + "spans": [ + { + "bbox": [ + 63, + 236, + 148, + 245 + ], + "type": "text", + "content": "Bloom's level: 4 (Analysis)" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 245, + 194, + 254 + ], + "lines": [ + { + "bbox": [ + 63, + 245, + 194, + 254 + ], + "spans": [ + { + "bbox": [ + 63, + 245, + 194, + 254 + ], + "type": "text", + "content": "Specimen: gastrointestinal lesion biopsy" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": 
[ + 286, + 421, + 323, + 431 + ], + "lines": [ + { + "bbox": [ + 286, + 421, + 323, + 431 + ], + "spans": [ + { + "bbox": [ + 286, + 421, + 323, + 431 + ], + "type": "text", + "content": "Figure 14" + } + ] + } + ], + "index": 35, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 247, + 74, + 541, + 101 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 247, + 74, + 541, + 101 + ], + "spans": [ + { + "bbox": [ + 247, + 74, + 541, + 101 + ], + "type": "text", + "content": "Q: A hematoxylin and eosin (H&E)-stained micrograph shows cellular formations within a gastrointestinal tissue sample with unique organization and cellular patterns. What pattern and morphology do these cells most closely align with?" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 247, + 109, + 514, + 126 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 247, + 109, + 514, + 126 + ], + "spans": [ + { + "bbox": [ + 247, + 109, + 514, + 126 + ], + "type": "text", + "content": "A) Bundles of elongated cells with pale pink cytoplasm and uniform oval nuclei in a crisscross pattern" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 247, + 126, + 543, + 161 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 247, + 126, + 499, + 135 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 247, + 126, + 499, + 135 + ], + "spans": [ + { + "bbox": [ + 247, + 126, + 499, + 135 + ], + "type": "text", + "content": "B) Circular groups of cells with mucin in central spaces and small, dark nuclei" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 247, + 136, + 495, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 247, + 136, + 495, + 144 + ], + "spans": [ + { + "bbox": [ + 247, + 136, + 495, + 144 + ], + "type": "text", + "content": "C) Dense arrays of polygonal cells with small nucleoli and vacuolated cytoplasm" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 247, + 144, + 452, + 152 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 247, + 144, + 452, + 152 + ], + "spans": [ + { + "bbox": [ + 247, + 144, + 452, + 152 + ], + "type": "text", + "content": "D) Rows of cells with conspicuous cytoplasm and rounded nuclei" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 247, + 152, + 543, + 161 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 247, + 152, + 543, + 161 + ], + "spans": [ + { + "bbox": [ + 247, + 152, + 543, + 161 + ], + "type": "text", + "content": "E) Diffused arrangement of cells with transparent cytoplasm and irregularly clustered chromatin" + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 248, + 175, + 397, + 183 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 175, + 397, + 183 + ], + "spans": [ + { + "bbox": [ + 248, + 175, + 397, + 183 + ], + "type": "text", + "content": "VLM Answer: Let me analyze this step by step:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 248, + 183, + 526, + 356 + ], + "type": "list", + "angle": 0, + "index": 31, + "blocks": [ + { + "bbox": [ + 248, + 183, + 383, + 191 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 183, + 383, + 191 + ], + "spans": [ + { + "bbox": [ + 248, + 183, + 383, + 191 + ], + "type": "text", + "content": "1) Looking at the H&E-stained micrograph:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 248, + 192, + 431, + 200 + ], + "type": "text", + "angle": 0, + "lines": [ + { + 
"bbox": [ + 248, + 192, + 431, + 200 + ], + "spans": [ + { + "bbox": [ + 248, + 192, + 431, + 200 + ], + "type": "text", + "content": "- It shows a circular cross-section of gastrointestinal tissue" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 248, + 201, + 431, + 209 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 201, + 431, + 209 + ], + "spans": [ + { + "bbox": [ + 248, + 201, + 431, + 209 + ], + "type": "text", + "content": "- There's a clear organizational pattern along the periphery" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 248, + 209, + 455, + 218 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 209, + 455, + 218 + ], + "spans": [ + { + "bbox": [ + 248, + 209, + 455, + 218 + ], + "type": "text", + "content": "- The center shows pink-stained connective tissue (lamina propria)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 248, + 219, + 374, + 226 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 219, + 374, + 226 + ], + "spans": [ + { + "bbox": [ + 248, + 219, + 374, + 226 + ], + "type": "text", + "content": "2) Focusing on the cellular arrangement:" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 248, + 227, + 379, + 235 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 227, + 379, + 235 + ], + "spans": [ + { + "bbox": [ + 248, + 227, + 379, + 235 + ], + "type": "text", + "content": "- The cells are arranged in circular groups" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 248, + 236, + 377, + 243 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 236, + 377, + 243 + ], + "spans": [ + { + "bbox": [ + 248, + 236, + 377, + 243 + ], + "type": "text", + "content": "- These groups form glandular structures" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 248, + 244, + 435, + 252 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 244, + 435, + 252 + ], + "spans": [ + { + "bbox": [ + 248, + 244, + 435, + 252 + ], + "type": "text", + "content": "- There are clear central spaces (lumens) in these structures" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 248, + 253, + 449, + 261 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 253, + 449, + 261 + ], + "spans": [ + { + "bbox": [ + 248, + 253, + 449, + 261 + ], + "type": "text", + "content": "- The nuclei appear dark and are located at the base of the cells" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 248, + 262, + 346, + 270 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 262, + 346, + 270 + ], + "spans": [ + { + "bbox": [ + 248, + 262, + 346, + 270 + ], + "type": "text", + "content": "3) Comparing with the options:" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 248, + 270, + 478, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 270, + 478, + 277 + ], + "spans": [ + { + "bbox": [ + 248, + 270, + 478, + 277 + ], + "type": "text", + "content": "- Option 1 describes elongated cells in crisscross pattern \n- doesn't match" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 248, + 279, + 526, + 295 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 279, + 526, + 295 + ], + "spans": [ + { + "bbox": [ + 248, + 279, + 526, + 295 + ], + "type": "text", + "content": "- Option 2 describes circular groups with mucin in central spaces and small, dark nuclei - matches perfectly" + } + ] + } + ], + "index": 23 + }, + { + 
"bbox": [ + 248, + 297, + 457, + 304 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 297, + 457, + 304 + ], + "spans": [ + { + "bbox": [ + 248, + 297, + 457, + 304 + ], + "type": "text", + "content": "- Option 3 describes polygonal cells with vacuoles - doesn't match" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 248, + 305, + 406, + 313 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 305, + 406, + 313 + ], + "spans": [ + { + "bbox": [ + 248, + 305, + 406, + 313 + ], + "type": "text", + "content": "- Option 4 describes rows of cells - doesn't match" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 248, + 314, + 433, + 322 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 314, + 433, + 322 + ], + "spans": [ + { + "bbox": [ + 248, + 314, + 433, + 322 + ], + "type": "text", + "content": "- Option 5 describes diffused arrangement - doesn't match" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 248, + 323, + 427, + 331 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 323, + 427, + 331 + ], + "spans": [ + { + "bbox": [ + 248, + 323, + 427, + 331 + ], + "type": "text", + "content": "4) The image shows typical intestinal glands (crypts) with:" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 248, + 331, + 320, + 339 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 331, + 320, + 339 + ], + "spans": [ + { + "bbox": [ + 248, + 331, + 320, + 339 + ], + "type": "text", + "content": "- Circular arrangement" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 248, + 340, + 356, + 348 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 340, + 356, + 348 + ], + "spans": [ + { + "bbox": [ + 248, + 340, + 356, + 348 + ], + "type": "text", + "content": "- Central lumens containing mucin" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 248, + 349, + 381, + 356 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 349, + 381, + 356 + ], + "spans": [ + { + "bbox": [ + 248, + 349, + 381, + 356 + ], + "type": "text", + "content": "- Small, dark nuclei at the base of the cells" + } + ] + } + ], + "index": 30 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 248, + 357, + 302, + 365 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 248, + 357, + 302, + 365 + ], + "spans": [ + { + "bbox": [ + 248, + 357, + 302, + 365 + ], + "type": "text", + "content": "The answer is (2)" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 246, + 380, + 318, + 388 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 380, + 318, + 388 + ], + "spans": [ + { + "bbox": [ + 246, + 380, + 318, + 388 + ], + "type": "text", + "content": "Error type: Perception" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 245, + 389, + 547, + 407 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 245, + 389, + 547, + 407 + ], + "spans": [ + { + "bbox": [ + 245, + 389, + 547, + 407 + ], + "type": "text", + "content": "Error reason: No. The intestinal glands are not the lesion. 
There is a pink spindle cell lesion with a fascicular arrangement of cells." + } + ] + } + ], + "index": 34 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 732, + 310, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 732, + 310, + 742 + ], + "spans": [ + { + "bbox": [ + 299, + 732, + 310, + 742 + ], + "type": "text", + "content": "21" + } + ] + } + ], + "index": 36 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 32 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 58, + 71, + 228, + 197 + ], + "blocks": [ + { + "bbox": [ + 58, + 71, + 228, + 197 + ], + "lines": [ + { + "bbox": [ + 58, + 71, + 228, + 197 + ], + "spans": [ + { + "bbox": [ + 58, + 71, + 228, + 197 + ], + "type": "image", + "image_path": "db06363d55bd66d857c3ebf1210cc63a3d1ac5bbd7e343d7daf04f3971021e6e.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 58, + 201, + 228, + 331 + ], + "blocks": [ + { + "bbox": [ + 58, + 201, + 228, + 331 + ], + "lines": [ + { + "bbox": [ + 58, + 201, + 228, + 331 + ], + "spans": [ + { + "bbox": [ + 58, + 201, + 228, + 331 + ], + "type": "image", + "image_path": "d356dc02564948d373741e89639cb989798f220bb7d64ecdf71bac8c0b5316e8.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 63, + 338, + 130, + 346 + ], + "lines": [ + { + "bbox": [ + 63, + 338, + 130, + 346 + ], + "spans": [ + { + "bbox": [ + 63, + 338, + 130, + 346 + ], + "type": "text", + "content": "Subject: Pathology" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 347, + 150, + 356 + ], + "lines": [ + { + "bbox": [ + 63, + 347, + 150, + 356 + ], + "spans": [ + { + "bbox": [ + 63, + 347, + 150, + 356 + ], + "type": "text", + "content": "Bloom's level: 4 (Analysis)" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 357, + 168, + 365 + ], + "lines": [ + { + "bbox": [ + 63, + 357, + 168, + 365 + ], + "spans": [ + { + "bbox": [ + 63, + 357, + 168, + 365 + ], + "type": "text", + "content": "Specimen: schwannoma tissue" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 286, + 437, + 323, + 448 + ], + "lines": [ + { + "bbox": [ + 286, + 437, + 323, + 448 + ], + "spans": [ + { + "bbox": [ + 286, + 437, + 323, + 448 + ], + "type": "text", + "content": "Figure 15" + } + ] + } + ], + "index": 39, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 236, + 76, + 545, + 103 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 236, + 76, + 545, + 103 + ], + "spans": [ + { + "bbox": [ + 236, + 76, + 545, + 103 + ], + "type": "text", + "content": "Q: An H&E-stained image of a nerve sheath tumor displays unique cell arrangements and marked changes within its structure. Given these observations, what is the most likely reason for the observed morphological alterations?" 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 236, + 111, + 381, + 121 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 236, + 111, + 381, + 121 + ], + "spans": [ + { + "bbox": [ + 236, + 111, + 381, + 121 + ], + "type": "text", + "content": "A) Indication of rapid cellular proliferation X" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 237, + 121, + 413, + 156 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 237, + 121, + 372, + 129 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 121, + 372, + 129 + ], + "spans": [ + { + "bbox": [ + 237, + 121, + 372, + 129 + ], + "type": "text", + "content": "B) Sign of tumor shrinkage post-treatment" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 237, + 130, + 400, + 138 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 130, + 400, + 138 + ], + "spans": [ + { + "bbox": [ + 237, + 130, + 400, + 138 + ], + "type": "text", + "content": "C) Result of genetic changes within the tumor cells" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 237, + 139, + 413, + 148 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 139, + 413, + 148 + ], + "spans": [ + { + "bbox": [ + 237, + 139, + 413, + 148 + ], + "type": "text", + "content": "D) Outcome of a chronic process affecting the tumor" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 237, + 148, + 367, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 148, + 367, + 156 + ], + "spans": [ + { + "bbox": [ + 237, + 148, + 367, + 156 + ], + "type": "text", + "content": "E) Sign of metastasis to another location" + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 236, + 164, + 390, + 174 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 236, + 164, + 390, + 174 + ], + "spans": [ + { + "bbox": [ + 236, + 164, + 390, + 174 + ], + "type": "text", + "content": "VLM Answer: Let me analyze this step by step:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 237, + 175, + 471, + 379 + ], + "type": "list", + "angle": 0, + "index": 36, + "blocks": [ + { + "bbox": [ + 237, + 175, + 408, + 183 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 175, + 408, + 183 + ], + "spans": [ + { + "bbox": [ + 237, + 175, + 408, + 183 + ], + "type": "text", + "content": "1) Looking at the H&E-stained images, I can observe:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 237, + 183, + 328, + 191 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 183, + 328, + 191 + ], + "spans": [ + { + "bbox": [ + 237, + 183, + 328, + 191 + ], + "type": "text", + "content": "- Dense cellular proliferation" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 237, + 192, + 381, + 201 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 192, + 381, + 201 + ], + "spans": [ + { + "bbox": [ + 237, + 192, + 381, + 201 + ], + "type": "text", + "content": "- Distinctive \"herringbone\" or swirling pattern" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 237, + 201, + 359, + 209 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 201, + 359, + 209 + ], + "spans": [ + { + "bbox": [ + 237, + 201, + 359, + 209 + ], + "type": "text", + "content": "- High cellularity with numerous nuclei" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 237, + 210, + 351, + 218 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ 
+ 237, + 210, + 351, + 218 + ], + "spans": [ + { + "bbox": [ + 237, + 210, + 351, + 218 + ], + "type": "text", + "content": "- Evidence of active growth pattern" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 237, + 219, + 332, + 228 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 219, + 332, + 228 + ], + "spans": [ + { + "bbox": [ + 237, + 219, + 332, + 228 + ], + "type": "text", + "content": "2) Key characteristics visible:" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 237, + 228, + 351, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 228, + 351, + 236 + ], + "spans": [ + { + "bbox": [ + 237, + 228, + 351, + 236 + ], + "type": "text", + "content": "- The cells are arranged in fascicles" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 237, + 237, + 378, + 245 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 237, + 378, + 245 + ], + "spans": [ + { + "bbox": [ + 237, + 237, + 378, + 245 + ], + "type": "text", + "content": "- There's a high nuclear to cytoplasmic ratio" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 237, + 246, + 377, + 255 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 246, + 377, + 255 + ], + "spans": [ + { + "bbox": [ + 237, + 246, + 377, + 255 + ], + "type": "text", + "content": "- The pattern suggests active tumor growth" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 237, + 255, + 369, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 255, + 369, + 262 + ], + "spans": [ + { + "bbox": [ + 237, + 255, + 369, + 262 + ], + "type": "text", + "content": "- No signs of necrosis or treatment effect" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 237, + 263, + 366, + 272 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 263, + 366, + 272 + ], + "spans": [ + { + "bbox": [ + 237, + 263, + 366, + 272 + ], + "type": "text", + "content": "- No obvious signs of metastatic spread" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 237, + 273, + 321, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 273, + 321, + 281 + ], + "spans": [ + { + "bbox": [ + 237, + 273, + 321, + 281 + ], + "type": "text", + "content": "3) Analysis of the options:" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 237, + 281, + 437, + 290 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 281, + 437, + 290 + ], + "spans": [ + { + "bbox": [ + 237, + 281, + 437, + 290 + ], + "type": "text", + "content": "- Option 1 matches with the high cellularity and growth pattern" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 237, + 291, + 420, + 299 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 291, + 420, + 299 + ], + "spans": [ + { + "bbox": [ + 237, + 291, + 420, + 299 + ], + "type": "text", + "content": "- Option 2 doesn't fit as there's no evidence of shrinkage." 
+ } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 237, + 300, + 432, + 308 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 300, + 432, + 308 + ], + "spans": [ + { + "bbox": [ + 237, + 300, + 432, + 308 + ], + "type": "text", + "content": "- Option 3 while possible, isn't directly evident in morphology" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 237, + 308, + 419, + 317 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 308, + 419, + 317 + ], + "spans": [ + { + "bbox": [ + 237, + 308, + 419, + 317 + ], + "type": "text", + "content": "- Option 4 doesn't match the active growth pattern seen" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 237, + 318, + 432, + 326 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 318, + 432, + 326 + ], + "spans": [ + { + "bbox": [ + 237, + 318, + 432, + 326 + ], + "type": "text", + "content": "- Option 5 isn't supported by the histological features shown" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 237, + 327, + 471, + 335 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 327, + 471, + 335 + ], + "spans": [ + { + "bbox": [ + 237, + 327, + 471, + 335 + ], + "type": "text", + "content": "4) The morphological features strongly suggest active tumor growth with:" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 237, + 335, + 344, + 342 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 335, + 344, + 342 + ], + "spans": [ + { + "bbox": [ + 237, + 335, + 344, + 342 + ], + "type": "text", + "content": "- Characteristic fascicular pattern" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 237, + 344, + 307, + 352 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 344, + 307, + 352 + ], + "spans": [ + { + "bbox": [ + 237, + 344, + 307, + 352 + ], + "type": "text", + "content": "- High cellular density" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 237, + 353, + 352, + 361 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 353, + 352, + 361 + ], + "spans": [ + { + "bbox": [ + 237, + 353, + 352, + 361 + ], + "type": "text", + "content": "- Well-preserved tumor architecture" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 237, + 361, + 329, + 370 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 361, + 329, + 370 + ], + "spans": [ + { + "bbox": [ + 237, + 361, + 329, + 370 + ], + "type": "text", + "content": "- Signs of active proliferation" + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 237, + 371, + 292, + 379 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 371, + 292, + 379 + ], + "spans": [ + { + "bbox": [ + 237, + 371, + 292, + 379 + ], + "type": "text", + "content": "The answer is (1)" + } + ] + } + ], + "index": 35 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 237, + 387, + 312, + 396 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 387, + 312, + 396 + ], + "spans": [ + { + "bbox": [ + 237, + 387, + 312, + 396 + ], + "type": "text", + "content": "Error type: Perception" + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 236, + 396, + 542, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 236, + 396, + 542, + 423 + ], + "spans": [ + { + "bbox": [ + 236, + 396, + 542, + 423 + ], + "type": "text", + "content": "Error reason: No. One image shows cystic change, which can be seen in chronic, longstanding schwannomas. 
The other shows Verocay bodies and fascicular growth, which do not indicate rapid cellular proliferation." + } + ] + } + ], + "index": 38 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 732, + 310, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 732, + 310, + 741 + ], + "spans": [ + { + "bbox": [ + 299, + 732, + 310, + 741 + ], + "type": "text", + "content": "22" + } + ] + } + ], + "index": 40 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 33 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 63, + 71, + 231, + 199 + ], + "blocks": [ + { + "bbox": [ + 63, + 71, + 231, + 199 + ], + "lines": [ + { + "bbox": [ + 63, + 71, + 231, + 199 + ], + "spans": [ + { + "bbox": [ + 63, + 71, + 231, + 199 + ], + "type": "image", + "image_path": "2b61b8249e59c6044d51a093c2d3f893d71c6d57fb46594536267a630a55470d.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 63, + 204, + 231, + 340 + ], + "blocks": [ + { + "bbox": [ + 63, + 204, + 231, + 340 + ], + "lines": [ + { + "bbox": [ + 63, + 204, + 231, + 340 + ], + "spans": [ + { + "bbox": [ + 63, + 204, + 231, + 340 + ], + "type": "image", + "image_path": "3675c07975e83e8d4cd7bc34316a81ac62224a32c6db853db28f75f401ec8cd1.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 63, + 346, + 129, + 354 + ], + "lines": [ + { + "bbox": [ + 63, + 346, + 129, + 354 + ], + "spans": [ + { + "bbox": [ + 63, + 346, + 129, + 354 + ], + "type": "text", + "content": "Subject: Physiology" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 354, + 173, + 363 + ], + "lines": [ + { + "bbox": [ + 63, + 354, + 173, + 363 + ], + "spans": [ + { + "bbox": [ + 63, + 354, + 173, + 363 + ], + "type": "text", + "content": "Bloom's level: 2 (Comprehension)" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 363, + 208, + 380 + ], + "lines": [ + { + "bbox": [ + 63, + 363, + 208, + 380 + ], + "spans": [ + { + "bbox": [ + 63, + 363, + 208, + 380 + ], + "type": "text", + "content": "Specimen: mouse liver tissue (wounded and unwounded)" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 286, + 419, + 324, + 430 + ], + "lines": [ + { + "bbox": [ + 286, + 419, + 324, + 430 + ], + "spans": [ + { + "bbox": [ + 286, + 419, + 324, + 430 + ], + "type": "text", + "content": "Figure 16" + } + ] + } + ], + "index": 37, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 244, + 75, + 533, + 110 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 75, + 533, + 110 + ], + "spans": [ + { + "bbox": [ + 244, + 75, + 533, + 110 + ], + "type": "text", + "content": "Q: In a fluorescence microscopy image comparing wounded and unwounded liver tissues stained to visualize markers of immune response – neutrophil derived migrasomes (Ly6G, yellow) and platelets (CD41, magenta), what is most likely observed about the distribution of these markers in unwounded tissue?" 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 245, + 118, + 447, + 127 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 245, + 118, + 447, + 127 + ], + "spans": [ + { + "bbox": [ + 245, + 118, + 447, + 127 + ], + "type": "text", + "content": "A) Slight overlap without any notable biological consequences" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 245, + 127, + 463, + 163 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 245, + 127, + 433, + 136 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 245, + 127, + 433, + 136 + ], + "spans": [ + { + "bbox": [ + 245, + 127, + 433, + 136 + ], + "type": "text", + "content": "B) Substantial overlap implying persistent immune activation" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 245, + 136, + 463, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 245, + 136, + 463, + 144 + ], + "spans": [ + { + "bbox": [ + 245, + 136, + 463, + 144 + ], + "type": "text", + "content": "C) No overlap, indicating complete segregation of cellular functions X" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 245, + 144, + 426, + 154 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 245, + 144, + 426, + 154 + ], + "spans": [ + { + "bbox": [ + 245, + 144, + 426, + 154 + ], + "type": "text", + "content": "D) Randomized distribution suggesting passive circulation" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 245, + 154, + 431, + 163 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 245, + 154, + 431, + 163 + ], + "spans": [ + { + "bbox": [ + 245, + 154, + 431, + 163 + ], + "type": "text", + "content": "E) Significant separation highlighting cellular non-interaction" + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 245, + 170, + 406, + 178 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 245, + 170, + 406, + 178 + ], + "spans": [ + { + "bbox": [ + 245, + 170, + 406, + 178 + ], + "type": "text", + "content": "VLM Answer: Let me think about this step by step:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 246, + 179, + 539, + 335 + ], + "type": "list", + "angle": 0, + "index": 31, + "blocks": [ + { + "bbox": [ + 246, + 179, + 379, + 188 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 179, + 379, + 188 + ], + "spans": [ + { + "bbox": [ + 246, + 179, + 379, + 188 + ], + "type": "text", + "content": "1) The image shows two types of staining:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 246, + 188, + 404, + 196 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 188, + 404, + 196 + ], + "spans": [ + { + "bbox": [ + 246, + 188, + 404, + 196 + ], + "type": "text", + "content": "- Yellow (Ly6G) for neutrophil-derived migrasomes" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 246, + 196, + 342, + 205 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 196, + 342, + 205 + ], + "spans": [ + { + "bbox": [ + 246, + 196, + 342, + 205 + ], + "type": "text", + "content": "- Magenta (CD41) for platelets" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 246, + 205, + 446, + 214 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 205, + 446, + 214 + ], + "spans": [ + { + "bbox": [ + 246, + 205, + 446, + 214 + ], + "type": "text", + "content": "2) Looking at the unwounded tissue (presumably the left image):" + } + ] + } + ], + "index": 16 + }, + { + 
"bbox": [ + 246, + 214, + 387, + 222 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 214, + 387, + 222 + ], + "spans": [ + { + "bbox": [ + 246, + 214, + 387, + 222 + ], + "type": "text", + "content": "- There are distinct yellow dots (migrasomes)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 246, + 223, + 376, + 231 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 223, + 376, + 231 + ], + "spans": [ + { + "bbox": [ + 246, + 223, + 376, + 231 + ], + "type": "text", + "content": "- There are magenta structures (platelets)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 246, + 232, + 377, + 239 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 232, + 377, + 239 + ], + "spans": [ + { + "bbox": [ + 246, + 232, + 377, + 239 + ], + "type": "text", + "content": "- The distribution appears fairly separated" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 246, + 240, + 334, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 240, + 334, + 248 + ], + "spans": [ + { + "bbox": [ + 246, + 240, + 334, + 248 + ], + "type": "text", + "content": "- The markers rarely overlap" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 246, + 248, + 373, + 257 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 248, + 373, + 257 + ], + "spans": [ + { + "bbox": [ + 246, + 248, + 373, + 257 + ], + "type": "text", + "content": "3) This makes biological sense because:" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 246, + 258, + 539, + 266 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 258, + 539, + 266 + ], + "spans": [ + { + "bbox": [ + 246, + 258, + 539, + 266 + ], + "type": "text", + "content": "- In healthy/unwounded tissue, immune cells and platelets typically maintain their distinct roles" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 246, + 266, + 424, + 274 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 266, + 424, + 274 + ], + "spans": [ + { + "bbox": [ + 246, + 266, + 424, + 274 + ], + "type": "text", + "content": "- Without injury, there's no strong stimulus for interaction" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 246, + 275, + 451, + 283 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 275, + 451, + 283 + ], + "spans": [ + { + "bbox": [ + 246, + 275, + 451, + 283 + ], + "type": "text", + "content": "- The cells are likely performing their normal surveillance functions" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 246, + 284, + 324, + 292 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 284, + 324, + 292 + ], + "spans": [ + { + "bbox": [ + 246, + 284, + 324, + 292 + ], + "type": "text", + "content": "4) Analyzing the options:" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 246, + 293, + 435, + 300 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 293, + 435, + 300 + ], + "spans": [ + { + "bbox": [ + 246, + 293, + 435, + 300 + ], + "type": "text", + "content": "- Option 1 suggests slight overlap - more than what's visible" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 246, + 301, + 383, + 309 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 301, + 383, + 309 + ], + "spans": [ + { + "bbox": [ + 246, + 301, + 383, + 309 + ], + "type": "text", + "content": "- Option 2 suggests high overlap - not seen" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ 
+ 246, + 310, + 445, + 318 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 310, + 445, + 318 + ], + "spans": [ + { + "bbox": [ + 246, + 310, + 445, + 318 + ], + "type": "text", + "content": "- Option 3 indicates complete segregation - matches the image" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 246, + 319, + 482, + 327 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 319, + 482, + 327 + ], + "spans": [ + { + "bbox": [ + 246, + 319, + 482, + 327 + ], + "type": "text", + "content": "- Option 4 suggests random distribution - the pattern looks more organized" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 246, + 327, + 422, + 335 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 327, + 422, + 335 + ], + "spans": [ + { + "bbox": [ + 246, + 327, + 422, + 335 + ], + "type": "text", + "content": "- Option 5 also suggests separation - similar to option 3" + } + ] + } + ], + "index": 30 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 245, + 336, + 534, + 352 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 245, + 336, + 534, + 352 + ], + "spans": [ + { + "bbox": [ + 245, + 336, + 534, + 352 + ], + "type": "text", + "content": "5) Option 3 best matches what we observe: \"No overlap, indicating complete segregation of cellular functions\"" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 246, + 353, + 300, + 361 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 246, + 353, + 300, + 361 + ], + "spans": [ + { + "bbox": [ + 246, + 353, + 300, + 361 + ], + "type": "text", + "content": "The answer is (3)" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 244, + 369, + 317, + 378 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 244, + 369, + 317, + 378 + ], + "spans": [ + { + "bbox": [ + 244, + 369, + 317, + 378 + ], + "type": "text", + "content": "Error type: Perception" + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 243, + 379, + 537, + 388 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 243, + 379, + 537, + 388 + ], + "spans": [ + { + "bbox": [ + 243, + 379, + 537, + 388 + ], + "type": "text", + "content": "Error reason: The markers rarely overlap, but there is very focal colocalization in some puncta." + } + ] + } + ], + "index": 35 + }, + { + "bbox": [ + 243, + 388, + 531, + 404 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 243, + 388, + 531, + 404 + ], + "spans": [ + { + "bbox": [ + 243, + 388, + 531, + 404 + ], + "type": "text", + "content": "The selected answer argues for \"complete segregation of cellular functions\", which is a too extreme or absolute conclusion based on the image." 
+ } + ] + } + ], + "index": 36 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 732, + 310, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 732, + 310, + 742 + ], + "spans": [ + { + "bbox": [ + 299, + 732, + 310, + 742 + ], + "type": "text", + "content": "23" + } + ] + } + ], + "index": 38 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 34 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 61, + 105, + 192, + 215 + ], + "blocks": [ + { + "bbox": [ + 56, + 72, + 165, + 84 + ], + "lines": [ + { + "bbox": [ + 56, + 72, + 165, + 84 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 165, + 84 + ], + "type": "text", + "content": "F.6.2 Knowledge errors" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 61, + 105, + 192, + 215 + ], + "lines": [ + { + "bbox": [ + 61, + 105, + 192, + 215 + ], + "spans": [ + { + "bbox": [ + 61, + 105, + 192, + 215 + ], + "type": "image", + "image_path": "dbff59d9769bbac2f4fc2bdfdddb944347024187a9e2b435ec6b18abceb54e0a.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 201, + 109, + 216, + 120 + ], + "blocks": [ + { + "bbox": [ + 201, + 109, + 216, + 120 + ], + "lines": [ + { + "bbox": [ + 201, + 109, + 216, + 120 + ], + "spans": [ + { + "bbox": [ + 201, + 109, + 216, + 120 + ], + "type": "image", + "image_path": "6b8ac7a46477b945fb15eae3a726c93083eaee2fc1e7ec7c8ae124dedb2ecc39.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 286, + 502, + 323, + 513 + ], + "lines": [ + { + "bbox": [ + 286, + 502, + 323, + 513 + ], + "spans": [ + { + "bbox": [ + 286, + 502, + 323, + 513 + ], + "type": "text", + "content": "Figure 17" + } + ] + } + ], + "index": 40, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 223, + 105, + 542, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 105, + 542, + 144 + ], + "spans": [ + { + "bbox": [ + 223, + 105, + 542, + 144 + ], + "type": "text", + "content": "Q: A double immunofluorescence experiment was conducted on nerve cells, highlighting Sema4B (red marker) and Sox9, an astrocyte marker (green marker). Sema4B was found primarily surrounding the nuclear area dispersed in the cytoplasm. Based on this pattern, what potential role might Sema4B play in these cells?" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 223, + 152, + 510, + 198 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 223, + 152, + 510, + 161 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 152, + 510, + 161 + ], + "spans": [ + { + "bbox": [ + 223, + 152, + 510, + 161 + ], + "type": "text", + "content": "A) Sema4B contributes to the regulation of metabolic pathways within the cytoplasm." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 224, + 162, + 438, + 171 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 224, + 162, + 438, + 171 + ], + "spans": [ + { + "bbox": [ + 224, + 162, + 438, + 171 + ], + "type": "text", + "content": "B) Sema4B is associated with intracellular trafficking processes." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 224, + 171, + 498, + 180 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 224, + 171, + 498, + 180 + ], + "spans": [ + { + "bbox": [ + 224, + 171, + 498, + 180 + ], + "type": "text", + "content": "C) Sema4B acts as a signaling molecule interacting with cell surface receptors." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 224, + 181, + 461, + 189 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 224, + 181, + 461, + 189 + ], + "spans": [ + { + "bbox": [ + 224, + 181, + 461, + 189 + ], + "type": "text", + "content": "D) Sema4B aids in transporting proteins across cell compartments. X" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 224, + 190, + 495, + 198 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 224, + 190, + 495, + 198 + ], + "spans": [ + { + "bbox": [ + 224, + 190, + 495, + 198 + ], + "type": "text", + "content": "E) Sema4B is directly involved in gene transcription regulation inside the nucleus." + } + ] + } + ], + "index": 8 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 189, + 250, + 203, + 295 + ], + "type": "title", + "angle": 270, + "lines": [ + { + "bbox": [ + 189, + 250, + 203, + 295 + ], + "spans": [ + { + "bbox": [ + 189, + 250, + 203, + 295 + ], + "type": "text", + "content": "grewes" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 206, + 208, + 375, + 217 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 206, + 208, + 375, + 217 + ], + "spans": [ + { + "bbox": [ + 206, + 208, + 375, + 217 + ], + "type": "text", + "content": "VLM Answer: Let me solve this step by step:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 223, + 217, + 453, + 300 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 223, + 217, + 380, + 227 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 217, + 380, + 227 + ], + "spans": [ + { + "bbox": [ + 223, + 217, + 380, + 227 + ], + "type": "text", + "content": "1) Looking at the immunofluorescence images:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 223, + 228, + 331, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 228, + 331, + 236 + ], + "spans": [ + { + "bbox": [ + 223, + 228, + 331, + 236 + ], + "type": "text", + "content": "- Sox9 (green) marks astrocytes" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 223, + 236, + 403, + 245 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 236, + 403, + 245 + ], + "spans": [ + { + "bbox": [ + 223, + 236, + 403, + 245 + ], + "type": "text", + "content": "- Sema4B (red) shows a particular distribution pattern" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 223, + 246, + 453, + 255 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 246, + 453, + 255 + ], + "spans": [ + { + "bbox": [ + 223, + 246, + 453, + 255 + ], + "type": "text", + "content": "- The merged image shows both markers plus nuclear staining (blue)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 223, + 255, + 342, + 263 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 255, + 342, + 263 + ], + "spans": [ + { + "bbox": [ + 223, + 255, + 342, + 263 + ], + "type": "text", + "content": "2) Analyzing Sema4B's distribution:" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 223, + 264, + 328, + 273 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 264, + 328, + 273 + ], + "spans": 
[ + { + "bbox": [ + 223, + 264, + 328, + 273 + ], + "type": "text", + "content": "- It appears as red punctate staining" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 223, + 274, + 360, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 274, + 360, + 281 + ], + "spans": [ + { + "bbox": [ + 223, + 274, + 360, + 281 + ], + "type": "text", + "content": "- Located primarily around nuclear areas" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 223, + 283, + 358, + 291 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 283, + 358, + 291 + ], + "spans": [ + { + "bbox": [ + 223, + 283, + 358, + 291 + ], + "type": "text", + "content": "- Shows dispersed pattern in cytoplasm" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 223, + 293, + 341, + 300 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 293, + 341, + 300 + ], + "spans": [ + { + "bbox": [ + 223, + 293, + 341, + 300 + ], + "type": "text", + "content": "- Not concentrated in nucleus itself" + } + ] + } + ], + "index": 20 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 223, + 217, + 542, + 441 + ], + "type": "list", + "angle": 0, + "index": 36, + "blocks": [ + { + "bbox": [ + 223, + 302, + 345, + 311 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 302, + 345, + 311 + ], + "spans": [ + { + "bbox": [ + 223, + 302, + 345, + 311 + ], + "type": "text", + "content": "3) This distribution pattern suggests:" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 223, + 312, + 390, + 319 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 312, + 390, + 319 + ], + "spans": [ + { + "bbox": [ + 223, + 312, + 390, + 319 + ], + "type": "text", + "content": "- The protein is involved in cytoplasmic processes" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 223, + 320, + 323, + 329 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 320, + 323, + 329 + ], + "spans": [ + { + "bbox": [ + 223, + 320, + 323, + 329 + ], + "type": "text", + "content": "- Shows vesicular-like pattern" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 223, + 330, + 399, + 338 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 330, + 399, + 338 + ], + "spans": [ + { + "bbox": [ + 223, + 330, + 399, + 338 + ], + "type": "text", + "content": "- Located in areas suggesting trafficking or transport" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 223, + 339, + 309, + 347 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 339, + 309, + 347 + ], + "spans": [ + { + "bbox": [ + 223, + 339, + 309, + 347 + ], + "type": "text", + "content": "4) Looking at the options:" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 223, + 348, + 349, + 357 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 348, + 349, + 357 + ], + "spans": [ + { + "bbox": [ + 223, + 348, + 349, + 357 + ], + "type": "text", + "content": "- Option 4 mentions protein transport" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 223, + 357, + 358, + 365 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 357, + 358, + 365 + ], + "spans": [ + { + "bbox": [ + 223, + 357, + 358, + 365 + ], + "type": "text", + "content": "- This matches the vesicular-like pattern" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 223, + 366, + 496, + 376 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 366, + 496, + 376 + ], + "spans": [ 
+ { + "bbox": [ + 223, + 366, + 496, + 376 + ], + "type": "text", + "content": "- The perinuclear and cytoplasmic distribution is consistent with protein trafficking" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 223, + 376, + 463, + 385 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 376, + 463, + 385 + ], + "spans": [ + { + "bbox": [ + 223, + 376, + 463, + 385 + ], + "type": "text", + "content": "- Other options don't align as well with the observed distribution pattern" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 223, + 386, + 270, + 395 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 386, + 270, + 395 + ], + "spans": [ + { + "bbox": [ + 223, + 386, + 270, + 395 + ], + "type": "text", + "content": "5) Reasoning:" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 223, + 396, + 542, + 404 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 396, + 542, + 404 + ], + "spans": [ + { + "bbox": [ + 223, + 396, + 542, + 404 + ], + "type": "text", + "content": "- The punctate pattern and distribution is characteristic of proteins involved in cellular trafficking" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 223, + 405, + 524, + 422 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 405, + 524, + 422 + ], + "spans": [ + { + "bbox": [ + 223, + 405, + 524, + 422 + ], + "type": "text", + "content": "- The location around nucleus (where protein synthesis occurs) and throughout cytoplasm supports a transport role" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 223, + 423, + 490, + 432 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 423, + 490, + 432 + ], + "spans": [ + { + "bbox": [ + 223, + 423, + 490, + 432 + ], + "type": "text", + "content": "- This pattern is less consistent with metabolic, signaling, or transcriptional roles" + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 223, + 433, + 280, + 441 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 223, + 433, + 280, + 441 + ], + "spans": [ + { + "bbox": [ + 223, + 433, + 280, + 441 + ], + "type": "text", + "content": "The answer is (4)" + } + ] + } + ], + "index": 35 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 62, + 414, + 149, + 441 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 414, + 149, + 441 + ], + "spans": [ + { + "bbox": [ + 62, + 414, + 149, + 441 + ], + "type": "text", + "content": "Subject: Neurobiology Bloom's level: 3 (Application) Specimen: Nerve cells" + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 222, + 451, + 358, + 460 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 222, + 451, + 358, + 460 + ], + "spans": [ + { + "bbox": [ + 222, + 451, + 358, + 460 + ], + "type": "text", + "content": "Error type: Misconception (knowledge)" + } + ] + } + ], + "index": 38 + }, + { + "bbox": [ + 221, + 461, + 526, + 488 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 221, + 461, + 526, + 488 + ], + "spans": [ + { + "bbox": [ + 221, + 461, + 526, + 488 + ], + "type": "text", + "content": "Error reason: Sema4B is a transmembrane protein that plays a role in cellular signaling. TM proteins can be internalized via endocytosis, but this does not mean that its main function is intracellular trafficking." 
+ } + ] + } + ], + "index": 39 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 732, + 310, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 732, + 310, + 741 + ], + "spans": [ + { + "bbox": [ + 299, + 732, + 310, + 741 + ], + "type": "text", + "content": "24" + } + ] + } + ], + "index": 41 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 35 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 58, + 72, + 208, + 281 + ], + "blocks": [ + { + "bbox": [ + 58, + 72, + 208, + 281 + ], + "lines": [ + { + "bbox": [ + 58, + 72, + 208, + 281 + ], + "spans": [ + { + "bbox": [ + 58, + 72, + 208, + 281 + ], + "type": "image", + "image_path": "a872000bba91acdae646a6ff5f409e3cf01d7f9dceed062000bde34f8b143241.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 63, + 288, + 142, + 297 + ], + "lines": [ + { + "bbox": [ + 63, + 288, + 142, + 297 + ], + "spans": [ + { + "bbox": [ + 63, + 288, + 142, + 297 + ], + "type": "text", + "content": "Subject: Neurobiology" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 297, + 153, + 306 + ], + "lines": [ + { + "bbox": [ + 63, + 297, + 153, + 306 + ], + "spans": [ + { + "bbox": [ + 63, + 297, + 153, + 306 + ], + "type": "text", + "content": "Bloom's level: 4 (Analysis)" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 308, + 170, + 323 + ], + "lines": [ + { + "bbox": [ + 63, + 308, + 170, + 323 + ], + "spans": [ + { + "bbox": [ + 63, + 308, + 170, + 323 + ], + "type": "text", + "content": "Specimen: Fibroblasts from an Alzheimer's patient" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 286, + 422, + 324, + 434 + ], + "lines": [ + { + "bbox": [ + 286, + 422, + 324, + 434 + ], + "spans": [ + { + "bbox": [ + 286, + 422, + 324, + 434 + ], + "type": "text", + "content": "Figure 18" + } + ] + } + ], + "index": 33, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 226, + 75, + 522, + 103 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 226, + 75, + 522, + 103 + ], + "spans": [ + { + "bbox": [ + 226, + 75, + 522, + 103 + ], + "type": "text", + "content": "Q: A cryo-Electron Tomography image of fibroblasts from a 70-year-old patient with Alzheimer's disease shows three mitochondria gathered together. What is the most likely reason for this mitochondrial clustering." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 227, + 103, + 477, + 140 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 227, + 103, + 355, + 112 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 103, + 355, + 112 + ], + "spans": [ + { + "bbox": [ + 227, + 103, + 355, + 112 + ], + "type": "text", + "content": "A) Increased mitochondrial biogenesis" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 227, + 113, + 459, + 120 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 113, + 459, + 120 + ], + "spans": [ + { + "bbox": [ + 227, + 113, + 459, + 120 + ], + "type": "text", + "content": "B) Enhanced mitochondrial fusion leading to interconnected networks" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 227, + 121, + 381, + 130 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 121, + 381, + 130 + ], + "spans": [ + { + "bbox": [ + 227, + 121, + 381, + 130 + ], + "type": "text", + "content": "C) Mitochondrial fission due to cellular stress" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 227, + 131, + 477, + 140 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 131, + 477, + 140 + ], + "spans": [ + { + "bbox": [ + 227, + 131, + 477, + 140 + ], + "type": "text", + "content": "D) Impaired mitophagy causing accumulation of damaged mitochondria " + }, + { + "bbox": [ + 227, + 131, + 477, + 140 + ], + "type": "inline_equation", + "content": "\\times" + } + ] + } + ], + "index": 8 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 226, + 149, + 370, + 159 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 226, + 149, + 370, + 159 + ], + "spans": [ + { + "bbox": [ + 226, + 149, + 370, + 159 + ], + "type": "text", + "content": "VLM Answer: Let's solve this step by step:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 227, + 159, + 341, + 168 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 159, + 341, + 168 + ], + "spans": [ + { + "bbox": [ + 227, + 159, + 341, + 168 + ], + "type": "text", + "content": "1) First, let's consider the context:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 227, + 169, + 426, + 177 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 169, + 426, + 177 + ], + "spans": [ + { + "bbox": [ + 227, + 169, + 426, + 177 + ], + "type": "text", + "content": "- This is from a 70-year-old patient with Alzheimer's disease" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 227, + 178, + 487, + 186 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 178, + 487, + 186 + ], + "spans": [ + { + "bbox": [ + 227, + 178, + 487, + 186 + ], + "type": "text", + "content": "- Alzheimer's is known to affect cellular function, including mitochondrial health" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 227, + 187, + 407, + 196 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 187, + 407, + 196 + ], + "spans": [ + { + "bbox": [ + 227, + 187, + 407, + 196 + ], + "type": "text", + "content": "- We're observing clustered mitochondria in the image" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 227, + 197, + 312, + 205 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 197, + 312, + 205 + ], + "spans": [ + { + "bbox": [ + 227, + 197, + 312, + 205 + ], + "type": "text", + "content": "2) Analyzing each option:" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 227, + 206, + 521, + 
215 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 206, + 521, + 215 + ], + "spans": [ + { + "bbox": [ + 227, + 206, + 521, + 215 + ], + "type": "text", + "content": "- Option 1 (Increased biogenesis): Would show new, healthy mitochondria, not clustering" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 227, + 216, + 488, + 224 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 216, + 488, + 224 + ], + "spans": [ + { + "bbox": [ + 227, + 216, + 488, + 224 + ], + "type": "text", + "content": "- Option 2 (Enhanced fusion): Would show elongated, connected mitochondria" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 227, + 224, + 493, + 233 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 224, + 493, + 233 + ], + "spans": [ + { + "bbox": [ + 227, + 224, + 493, + 233 + ], + "type": "text", + "content": "- Option 3 (Mitochondrial fission): Would show smaller, fragmented mitochondria" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 227, + 233, + 541, + 242 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 233, + 541, + 242 + ], + "spans": [ + { + "bbox": [ + 227, + 233, + 541, + 242 + ], + "type": "text", + "content": "- Option 4 (Impaired mitophagy): Would show accumulated, potentially damaged mitochondria" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 227, + 243, + 308, + 251 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 243, + 308, + 251 + ], + "spans": [ + { + "bbox": [ + 227, + 243, + 308, + 251 + ], + "type": "text", + "content": "3) Looking at the image:" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 227, + 252, + 394, + 260 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 252, + 394, + 260 + ], + "spans": [ + { + "bbox": [ + 227, + 252, + 394, + 260 + ], + "type": "text", + "content": "- We see multiple mitochondria clustered together" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 227, + 261, + 392, + 270 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 261, + 392, + 270 + ], + "spans": [ + { + "bbox": [ + 227, + 261, + 392, + 270 + ], + "type": "text", + "content": "- They appear to be intact but gathered in groups" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 227, + 271, + 435, + 279 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 271, + 435, + 279 + ], + "spans": [ + { + "bbox": [ + 227, + 271, + 435, + 279 + ], + "type": "text", + "content": "- This suggests they're not being properly cleared from the cell" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 227, + 280, + 346, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 280, + 346, + 289 + ], + "spans": [ + { + "bbox": [ + 227, + 280, + 346, + 289 + ], + "type": "text", + "content": "4) Considering Alzheimer's disease:" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 227, + 289, + 395, + 297 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 289, + 395, + 297 + ], + "spans": [ + { + "bbox": [ + 227, + 289, + 395, + 297 + ], + "type": "text", + "content": "- It's known to impair cellular cleanup mechanisms" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 227, + 298, + 429, + 307 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 298, + 429, + 307 + ], + "spans": [ + { + "bbox": [ + 227, + 298, + 429, + 307 + ], + "type": "text", + "content": "- Mitophagy is a key quality control 
process for mitochondria" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 227, + 308, + 492, + 316 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 308, + 492, + 316 + ], + "spans": [ + { + "bbox": [ + 227, + 308, + 492, + 316 + ], + "type": "text", + "content": "- When impaired, damaged mitochondria accumulate instead of being removed" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 227, + 317, + 276, + 325 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 317, + 276, + 325 + ], + "spans": [ + { + "bbox": [ + 227, + 317, + 276, + 325 + ], + "type": "text", + "content": "5) Conclusion:" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 227, + 326, + 532, + 352 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 326, + 532, + 352 + ], + "spans": [ + { + "bbox": [ + 227, + 326, + 532, + 352 + ], + "type": "text", + "content": "The clustering is most likely due to impaired mitophagy, as this would explain why multiple mitochondria are gathering instead of being properly cleared from the cell, which is a known issue in Alzheimer's disease." + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 227, + 354, + 284, + 362 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 227, + 354, + 284, + 362 + ], + "spans": [ + { + "bbox": [ + 227, + 354, + 284, + 362 + ], + "type": "text", + "content": "The answer is (4)" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 226, + 377, + 360, + 387 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 226, + 377, + 360, + 387 + ], + "spans": [ + { + "bbox": [ + 226, + 377, + 360, + 387 + ], + "type": "text", + "content": "Error type: Misconception (knowledge)" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 225, + 387, + 534, + 405 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 225, + 387, + 534, + 405 + ], + "spans": [ + { + "bbox": [ + 225, + 387, + 534, + 405 + ], + "type": "text", + "content": "Error reason: Impaired mitophagy may result in larger overall mitochondria, not an increase in small mitochondrial aggregates. Fission is a better answer in this case." 
+ } + ] + } + ], + "index": 32 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 732, + 310, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 732, + 310, + 741 + ], + "spans": [ + { + "bbox": [ + 299, + 732, + 310, + 741 + ], + "type": "text", + "content": "25" + } + ] + } + ], + "index": 34 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 36 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 58, + 72, + 243, + 189 + ], + "blocks": [ + { + "bbox": [ + 58, + 72, + 243, + 189 + ], + "lines": [ + { + "bbox": [ + 58, + 72, + 243, + 189 + ], + "spans": [ + { + "bbox": [ + 58, + 72, + 243, + 189 + ], + "type": "image", + "image_path": "aae54e299573660890713fbe82c06e4beaab2c2f055ca5b5537d9925f520c9f1.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 58, + 193, + 243, + 303 + ], + "blocks": [ + { + "bbox": [ + 58, + 193, + 243, + 303 + ], + "lines": [ + { + "bbox": [ + 58, + 193, + 243, + 303 + ], + "spans": [ + { + "bbox": [ + 58, + 193, + 243, + 303 + ], + "type": "image", + "image_path": "192ce2e7872e205b3d334f2cf4f4af0e50b62bbb5c61f995505d837cf345674d.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 62, + 311, + 134, + 319 + ], + "lines": [ + { + "bbox": [ + 62, + 311, + 134, + 319 + ], + "spans": [ + { + "bbox": [ + 62, + 311, + 134, + 319 + ], + "type": "text", + "content": "Subject: Neurobiology" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 320, + 154, + 328 + ], + "lines": [ + { + "bbox": [ + 63, + 320, + 154, + 328 + ], + "spans": [ + { + "bbox": [ + 63, + 320, + 154, + 328 + ], + "type": "text", + "content": "Bloom's level: 3 (Application)" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 329, + 201, + 337 + ], + "lines": [ + { + "bbox": [ + 63, + 329, + 201, + 337 + ], + "spans": [ + { + "bbox": [ + 63, + 329, + 201, + 337 + ], + "type": "text", + "content": "Specimen: Resection specimen for epilepsy" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 286, + 446, + 324, + 457 + ], + "lines": [ + { + "bbox": [ + 286, + 446, + 324, + 457 + ], + "spans": [ + { + "bbox": [ + 286, + 446, + 324, + 457 + ], + "type": "text", + "content": "Figure 19" + } + ] + } + ], + "index": 40, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 254, + 77, + 543, + 113 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 254, + 77, + 543, + 113 + ], + "spans": [ + { + "bbox": [ + 254, + 77, + 543, + 113 + ], + "type": "text", + "content": "Q: A microscopic analysis of tissue from an individual suffering from recurrent seizures involves using hematoxylin & eosin stain alongside a vimentin stain. Cells with stronger vimentin expression are significantly larger than surrounding entities. Which identity is most probable for these prominent vimentin-expressing cells?" 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 255, + 120, + 446, + 163 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 255, + 120, + 424, + 129 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 120, + 424, + 129 + ], + "spans": [ + { + "bbox": [ + 255, + 120, + 424, + 129 + ], + "type": "text", + "content": "A) Dystrophic neurons typically found around glial scars" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 255, + 129, + 415, + 137 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 129, + 415, + 137 + ], + "spans": [ + { + "bbox": [ + 255, + 129, + 415, + 137 + ], + "type": "text", + "content": "B) Enlarged reactive astrocytes near injury sites X" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 255, + 137, + 446, + 145 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 137, + 446, + 145 + ], + "spans": [ + { + "bbox": [ + 255, + 137, + 446, + 145 + ], + "type": "text", + "content": "C) Aberrant large cells characteristic of cortical malformation" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 255, + 146, + 407, + 154 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 146, + 407, + 154 + ], + "spans": [ + { + "bbox": [ + 255, + 146, + 407, + 154 + ], + "type": "text", + "content": "D) Macrophages adapted to engulf cellular debris" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 255, + 155, + 412, + 163 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 155, + 412, + 163 + ], + "spans": [ + { + "bbox": [ + 255, + 155, + 412, + 163 + ], + "type": "text", + "content": "E) Fibroblast proliferation in response to brain injury" + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 255, + 173, + 392, + 182 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 173, + 392, + 182 + ], + "spans": [ + { + "bbox": [ + 255, + 173, + 392, + 182 + ], + "type": "text", + "content": "VLM Answer: Let me solve this step by step:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 255, + 182, + 517, + 282 + ], + "type": "list", + "angle": 0, + "index": 25, + "blocks": [ + { + "bbox": [ + 255, + 182, + 418, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 182, + 418, + 190 + ], + "spans": [ + { + "bbox": [ + 255, + 182, + 418, + 190 + ], + "type": "text", + "content": "1) First, let's analyze what we're seeing in the images:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 255, + 190, + 517, + 198 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 190, + 517, + 198 + ], + "spans": [ + { + "bbox": [ + 255, + 190, + 517, + 198 + ], + "type": "text", + "content": "- The H&E stain (top image) shows tissue with scattered cells and some enlarged cells" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 255, + 198, + 516, + 207 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 198, + 516, + 207 + ], + "spans": [ + { + "bbox": [ + 255, + 198, + 516, + 207 + ], + "type": "text", + "content": "- The vimentin stain (bottom image) shows strong brown staining of certain larger cells" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 255, + 207, + 368, + 215 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 207, + 368, + 215 + ], + "spans": [ + { + "bbox": [ + 255, + 207, + 368, + 215 + ], + "type": "text", + "content": "2) Key information from the question:" + } + ] + 
} + ], + "index": 16 + }, + { + "bbox": [ + 255, + 216, + 351, + 223 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 216, + 351, + 223 + ], + "spans": [ + { + "bbox": [ + 255, + 216, + 351, + 223 + ], + "type": "text", + "content": "- Patient has recurrent seizures" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 255, + 224, + 395, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 224, + 395, + 232 + ], + "spans": [ + { + "bbox": [ + 255, + 224, + 395, + 232 + ], + "type": "text", + "content": "- Vimentin-positive cells are significantly larger" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 255, + 232, + 406, + 240 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 232, + 406, + 240 + ], + "spans": [ + { + "bbox": [ + 255, + 232, + 406, + 240 + ], + "type": "text", + "content": "- Looking for most probable identity of these cells" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 255, + 241, + 357, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 241, + 357, + 248 + ], + "spans": [ + { + "bbox": [ + 255, + 241, + 357, + 248 + ], + "type": "text", + "content": "3) Important facts about vimentin:" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 255, + 249, + 391, + 257 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 249, + 391, + 257 + ], + "spans": [ + { + "bbox": [ + 255, + 249, + 391, + 257 + ], + "type": "text", + "content": "- Vimentin is an intermediate filament protein" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 255, + 258, + 386, + 266 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 258, + 386, + 266 + ], + "spans": [ + { + "bbox": [ + 255, + 258, + 386, + 266 + ], + "type": "text", + "content": "- Strongly expressed in reactive astrocytes" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 255, + 266, + 374, + 274 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 266, + 374, + 274 + ], + "spans": [ + { + "bbox": [ + 255, + 266, + 374, + 274 + ], + "type": "text", + "content": "- Also expressed in mesenchymal cells" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 255, + 274, + 341, + 282 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 274, + 341, + 282 + ], + "spans": [ + { + "bbox": [ + 255, + 274, + 341, + 282 + ], + "type": "text", + "content": "4) In the context of seizures:" + } + ] + } + ], + "index": 24 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 255, + 283, + 489, + 357 + ], + "type": "list", + "angle": 0, + "index": 35, + "blocks": [ + { + "bbox": [ + 255, + 283, + 355, + 291 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 283, + 355, + 291 + ], + "spans": [ + { + "bbox": [ + 255, + 283, + 355, + 291 + ], + "type": "text", + "content": "- Seizures can cause brain injury" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 255, + 292, + 355, + 300 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 292, + 355, + 300 + ], + "spans": [ + { + "bbox": [ + 255, + 292, + 355, + 300 + ], + "type": "text", + "content": "- Brain injury triggers astrogliosis" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 255, + 300, + 449, + 308 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 300, + 449, + 308 + ], + "spans": [ + { + "bbox": [ + 255, + 300, + 449, + 308 + ], + "type": "text", + "content": "- Astrocytes become reactive and enlarged 
in response to injury" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 255, + 309, + 334, + 316 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 309, + 334, + 316 + ], + "spans": [ + { + "bbox": [ + 255, + 309, + 334, + 316 + ], + "type": "text", + "content": "5) Looking at the options:" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 255, + 317, + 413, + 325 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 317, + 413, + 325 + ], + "spans": [ + { + "bbox": [ + 255, + 317, + 413, + 325 + ], + "type": "text", + "content": "- Option 1: Neurons don't typically express vimentin" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 255, + 326, + 489, + 333 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 326, + 489, + 333 + ], + "spans": [ + { + "bbox": [ + 255, + 326, + 489, + 333 + ], + "type": "text", + "content": "- Option 2: Reactive astrocytes do express vimentin and enlarge during injury" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 255, + 334, + 486, + 341 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 334, + 486, + 341 + ], + "spans": [ + { + "bbox": [ + 255, + 334, + 486, + 341 + ], + "type": "text", + "content": "- Option 3: Cortical malformation cells wouldn't necessarily express vimentin" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 255, + 342, + 394, + 350 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 342, + 394, + 350 + ], + "spans": [ + { + "bbox": [ + 255, + 342, + 394, + 350 + ], + "type": "text", + "content": "- Option 4: Macrophages are typically smaller" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 255, + 350, + 416, + 357 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 350, + 416, + 357 + ], + "spans": [ + { + "bbox": [ + 255, + 350, + 416, + 357 + ], + "type": "text", + "content": "- Option 5: Fibroblasts would be rare in this context" + } + ] + } + ], + "index": 34 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 254, + 366, + 541, + 384 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 254, + 366, + 541, + 384 + ], + "spans": [ + { + "bbox": [ + 254, + 366, + 541, + 384 + ], + "type": "text", + "content": "6) The morphology and staining pattern of the large cells, combined with the clinical context of seizures, most strongly suggests reactive astrocytes." + } + ] + } + ], + "index": 36 + }, + { + "bbox": [ + 255, + 384, + 318, + 392 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 255, + 384, + 318, + 392 + ], + "spans": [ + { + "bbox": [ + 255, + 384, + 318, + 392 + ], + "type": "text", + "content": "The answer is (2) X" + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 254, + 403, + 376, + 413 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 254, + 403, + 376, + 413 + ], + "spans": [ + { + "bbox": [ + 254, + 403, + 376, + 413 + ], + "type": "text", + "content": "Error type: Misconception (knowledge)" + } + ] + } + ], + "index": 38 + }, + { + "bbox": [ + 254, + 413, + 531, + 430 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 254, + 413, + 531, + 430 + ], + "spans": [ + { + "bbox": [ + 254, + 413, + 531, + 430 + ], + "type": "text", + "content": "Error reason: The AI is able to interpret the question and image, but lacks the knowledge to recognize the link between recurrent seizures, cortical dysplasia, and large ballooned cells." 
+ } + ] + } + ], + "index": 39 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "type": "text", + "content": "26" + } + ] + } + ], + "index": 41 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 37 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 61, + 70, + 243, + 212 + ], + "blocks": [ + { + "bbox": [ + 61, + 70, + 243, + 212 + ], + "lines": [ + { + "bbox": [ + 61, + 70, + 243, + 212 + ], + "spans": [ + { + "bbox": [ + 61, + 70, + 243, + 212 + ], + "type": "image", + "image_path": "2cf2ebd44648285027b531c31465d7a5f0c91a3a18798de8ede6babc5c7fd30b.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 62, + 217, + 135, + 226 + ], + "lines": [ + { + "bbox": [ + 62, + 217, + 135, + 226 + ], + "spans": [ + { + "bbox": [ + 62, + 217, + 135, + 226 + ], + "type": "text", + "content": "Subject: Neurobiology" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 226, + 145, + 235 + ], + "lines": [ + { + "bbox": [ + 63, + 226, + 145, + 235 + ], + "spans": [ + { + "bbox": [ + 63, + 226, + 145, + 235 + ], + "type": "text", + "content": "Bloom's level: 4 (Analysis)" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 235, + 157, + 243 + ], + "lines": [ + { + "bbox": [ + 63, + 235, + 157, + 243 + ], + "spans": [ + { + "bbox": [ + 63, + 235, + 157, + 243 + ], + "type": "text", + "content": "Specimen: Brain parenchyma" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 286, + 419, + 324, + 430 + ], + "lines": [ + { + "bbox": [ + 286, + 419, + 324, + 430 + ], + "spans": [ + { + "bbox": [ + 286, + 419, + 324, + 430 + ], + "type": "text", + "content": "Figure 20" + } + ] + } + ], + "index": 34, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 251, + 74, + 545, + 101 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 251, + 74, + 545, + 101 + ], + "spans": [ + { + "bbox": [ + 251, + 74, + 545, + 101 + ], + "type": "text", + "content": "Q: A certain specialized stain reveals unusual cell movement crossing boundaries typical of neuronal migration in a tissue resection from a case of structural brain anomaly. Which condition is primarily associated with this type of cellular pattern?" 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 252, + 108, + 342, + 117 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 108, + 342, + 117 + ], + "spans": [ + { + "bbox": [ + 252, + 108, + 342, + 117 + ], + "type": "text", + "content": "A) Focal cortical dysplasia" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 252, + 118, + 344, + 151 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 252, + 118, + 342, + 125 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 118, + 342, + 125 + ], + "spans": [ + { + "bbox": [ + 252, + 118, + 342, + 125 + ], + "type": "text", + "content": "B) Congenital Zika syndrome" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 252, + 125, + 310, + 134 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 125, + 310, + 134 + ], + "spans": [ + { + "bbox": [ + 252, + 125, + 310, + 134 + ], + "type": "text", + "content": "C) Schizencephaly" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 252, + 134, + 328, + 143 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 134, + 328, + 143 + ], + "spans": [ + { + "bbox": [ + 252, + 134, + 328, + 143 + ], + "type": "text", + "content": "D) Type II lissencephaly" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 252, + 143, + 344, + 151 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 143, + 344, + 151 + ], + "spans": [ + { + "bbox": [ + 252, + 143, + 344, + 151 + ], + "type": "text", + "content": "E) Walker-Warburg syndrome" + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 252, + 159, + 391, + 167 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 159, + 391, + 167 + ], + "spans": [ + { + "bbox": [ + 252, + 159, + 391, + 167 + ], + "type": "text", + "content": "VLM Answer: Let me solve this step by step:" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 252, + 168, + 545, + 338 + ], + "type": "list", + "angle": 0, + "index": 29, + "blocks": [ + { + "bbox": [ + 252, + 168, + 545, + 184 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 168, + 545, + 184 + ], + "spans": [ + { + "bbox": [ + 252, + 168, + 545, + 184 + ], + "type": "text", + "content": "1) The image shows a histological section with specialized staining that reveals abnormal cellular organization and migration patterns." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 252, + 185, + 515, + 201 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 185, + 515, + 201 + ], + "spans": [ + { + "bbox": [ + 252, + 185, + 515, + 201 + ], + "type": "text", + "content": "2) The question asks about unusual cell movement crossing typical neuronal migration boundaries." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 252, + 201, + 332, + 210 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 201, + 332, + 210 + ], + "spans": [ + { + "bbox": [ + 252, + 201, + 332, + 210 + ], + "type": "text", + "content": "3) Looking at the options:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 252, + 210, + 477, + 219 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 210, + 477, + 219 + ], + "spans": [ + { + "bbox": [ + 252, + 210, + 477, + 219 + ], + "type": "text", + "content": "- Focal cortical dysplasia involves disorganized neurons in localized areas" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 252, + 219, + 457, + 227 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 219, + 457, + 227 + ], + "spans": [ + { + "bbox": [ + 252, + 219, + 457, + 227 + ], + "type": "text", + "content": "- Congenital Zika syndrome primarily affects neural progenitor cells" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 252, + 227, + 436, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 227, + 436, + 236 + ], + "spans": [ + { + "bbox": [ + 252, + 227, + 436, + 236 + ], + "type": "text", + "content": "- Schizencephaly involves clefts in the cerebral hemispheres" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 252, + 236, + 481, + 244 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 236, + 481, + 244 + ], + "spans": [ + { + "bbox": [ + 252, + 236, + 481, + 244 + ], + "type": "text", + "content": "- Type II lissencephaly shows abnormal neuronal migration and positioning" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 252, + 244, + 525, + 260 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 244, + 525, + 260 + ], + "spans": [ + { + "bbox": [ + 252, + 244, + 525, + 260 + ], + "type": "text", + "content": "- Walker-Warburg syndrome is a severe form of congenital muscular dystrophy with brain malformations" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 252, + 261, + 423, + 269 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 261, + 423, + 269 + ], + "spans": [ + { + "bbox": [ + 252, + 261, + 423, + 269 + ], + "type": "text", + "content": "4) Key characteristics of focal cortical dysplasia include:" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 252, + 270, + 347, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 270, + 347, + 277 + ], + "spans": [ + { + "bbox": [ + 252, + 270, + 347, + 277 + ], + "type": "text", + "content": "- Abnormal neuronal migration" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 252, + 278, + 348, + 286 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 278, + 348, + 286 + ], + "spans": [ + { + "bbox": [ + 252, + 278, + 348, + 286 + ], + "type": "text", + "content": "- Disorganized cortical layering" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 252, + 287, + 405, + 295 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 287, + 405, + 295 + ], + "spans": [ + { + "bbox": [ + 252, + 287, + 405, + 295 + ], + "type": "text", + "content": "- Abnormal cell types crossing normal boundaries" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 252, + 295, + 375, + 303 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 295, + 375, + 303 + ], + "spans": [ + { + "bbox": [ + 252, + 295, + 375, + 303 + ], + "type": "text", + 
"content": "- Disrupted cerebral cortex architecture" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 252, + 304, + 528, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 304, + 528, + 312 + ], + "spans": [ + { + "bbox": [ + 252, + 304, + 528, + 312 + ], + "type": "text", + "content": "5) The histological image shows features consistent with focal cortical dysplasia, including:" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 252, + 312, + 351, + 320 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 312, + 351, + 320 + ], + "spans": [ + { + "bbox": [ + 252, + 312, + 351, + 320 + ], + "type": "text", + "content": "- Disrupted cellular organization" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 252, + 320, + 346, + 329 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 320, + 346, + 329 + ], + "spans": [ + { + "bbox": [ + 252, + 320, + 346, + 329 + ], + "type": "text", + "content": "- Abnormal migration patterns" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 252, + 329, + 359, + 338 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 329, + 359, + 338 + ], + "spans": [ + { + "bbox": [ + 252, + 329, + 359, + 338 + ], + "type": "text", + "content": "- Cells crossing typical boundaries" + } + ] + } + ], + "index": 28 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 252, + 338, + 306, + 346 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 338, + 306, + 346 + ], + "spans": [ + { + "bbox": [ + 252, + 338, + 306, + 346 + ], + "type": "text", + "content": "The answer is (1)" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 252, + 354, + 376, + 363 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 354, + 376, + 363 + ], + "spans": [ + { + "bbox": [ + 252, + 354, + 376, + 363 + ], + "type": "text", + "content": "Error type: Misconception (knowledge)" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 251, + 364, + 545, + 389 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 251, + 364, + 545, + 389 + ], + "spans": [ + { + "bbox": [ + 251, + 364, + 545, + 389 + ], + "type": "text", + "content": "Error reason: Misconception about the scope and extent of neuronal migration patterns in focal cortical dysplasia (FCD) versus Type II lissencephaly. FCD will have abnormal cortical lamination, but the neurons are still within the expected gray/white matter of normal neuronal migration." + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 252, + 389, + 522, + 405 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 252, + 389, + 522, + 405 + ], + "spans": [ + { + "bbox": [ + 252, + 389, + 522, + 405 + ], + "type": "text", + "content": "They do not extend beyond and cross established neuronal boundaries such as the glial limitans." 
+ } + ] + } + ], + "index": 33 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 732, + 310, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 732, + 310, + 741 + ], + "spans": [ + { + "bbox": [ + 299, + 732, + 310, + 741 + ], + "type": "text", + "content": "27" + } + ] + } + ], + "index": 35 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 38 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 290, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 290, + 84 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 290, + 84 + ], + "type": "text", + "content": "F.6.3 Overgeneralization or oversimplification errors" + } + ] + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 57, + 106, + 229, + 234 + ], + "blocks": [ + { + "bbox": [ + 57, + 106, + 229, + 234 + ], + "lines": [ + { + "bbox": [ + 57, + 106, + 229, + 234 + ], + "spans": [ + { + "bbox": [ + 57, + 106, + 229, + 234 + ], + "type": "image", + "image_path": "fc99b16469e74d1b69925f4088361f7f2819b1b98350cca0e2d9301b71143abe.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 57, + 237, + 229, + 366 + ], + "blocks": [ + { + "bbox": [ + 57, + 237, + 229, + 366 + ], + "lines": [ + { + "bbox": [ + 57, + 237, + 229, + 366 + ], + "spans": [ + { + "bbox": [ + 57, + 237, + 229, + 366 + ], + "type": "image", + "image_path": "2103ab540b8ed96e55d5265831a466e118edaf3e9dac7dfc8721838017bd7bb2.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 62, + 373, + 128, + 383 + ], + "lines": [ + { + "bbox": [ + 62, + 373, + 128, + 383 + ], + "spans": [ + { + "bbox": [ + 62, + 373, + 128, + 383 + ], + "type": "text", + "content": "Subject: Pathology" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 383, + 176, + 392 + ], + "lines": [ + { + "bbox": [ + 63, + 383, + 176, + 392 + ], + "spans": [ + { + "bbox": [ + 63, + 383, + 176, + 392 + ], + "type": "text", + "content": "Bloom's level: 2 (Comprehension)" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 392, + 168, + 400 + ], + "lines": [ + { + "bbox": [ + 63, + 392, + 168, + 400 + ], + "spans": [ + { + "bbox": [ + 63, + 392, + 168, + 400 + ], + "type": "text", + "content": "Specimen: schwannoma tissue" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 286, + 463, + 323, + 474 + ], + "lines": [ + { + "bbox": [ + 286, + 463, + 323, + 474 + ], + "spans": [ + { + "bbox": [ + 286, + 463, + 323, + 474 + ], + "type": "text", + "content": "Figure 21" + } + ] + } + ], + "index": 33, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 237, + 105, + 545, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 105, + 545, + 133 + ], + "spans": [ + { + "bbox": [ + 237, + 105, + 545, + 133 + ], + "type": "text", + "content": "Q: In a histological examination, an H&E-stained image shows Verocay bodies and cystic degeneration with pseudoepithelial metaplasia, indicative of schwannoma. How does the risk of malignant transformation in schwannoma compare to other peripheral nerve sheath tumors?" 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 237, + 140, + 518, + 158 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 140, + 518, + 158 + ], + "spans": [ + { + "bbox": [ + 237, + 140, + 518, + 158 + ], + "type": "text", + "content": "A) Schwannomas exhibit a lower risk of malignant transformation compared to other peripheral nerve sheath tumors." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 238, + 159, + 544, + 204 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 238, + 159, + 506, + 167 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 159, + 506, + 167 + ], + "spans": [ + { + "bbox": [ + 238, + 159, + 506, + 167 + ], + "type": "text", + "content": "B) Schwannomas and neurofibromas have similar risks of malignant transformation." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 238, + 168, + 525, + 185 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 168, + 525, + 185 + ], + "spans": [ + { + "bbox": [ + 238, + 168, + 525, + 185 + ], + "type": "text", + "content": "C) The risk of malignant transformation in schwannomas is dependent on specific genetic factors." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 238, + 186, + 529, + 194 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 186, + 529, + 194 + ], + "spans": [ + { + "bbox": [ + 238, + 186, + 529, + 194 + ], + "type": "text", + "content": "D) Schwannomas have an increased risk of malignancy in the presence of multiple tumors." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 238, + 195, + 544, + 204 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 195, + 544, + 204 + ], + "spans": [ + { + "bbox": [ + 238, + 195, + 544, + 204 + ], + "type": "text", + "content": "E) Schwannomas rarely undergo malignant transformation, regardless of tumor location or size." 
+ } + ] + } + ], + "index": 11 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 236, + 213, + 402, + 221 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 236, + 213, + 402, + 221 + ], + "spans": [ + { + "bbox": [ + 236, + 213, + 402, + 221 + ], + "type": "text", + "content": "VLM Answer: Let me think about this step by step:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 238, + 222, + 545, + 392 + ], + "type": "list", + "angle": 0, + "index": 30, + "blocks": [ + { + "bbox": [ + 238, + 222, + 461, + 231 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 222, + 461, + 231 + ], + "spans": [ + { + "bbox": [ + 238, + 222, + 461, + 231 + ], + "type": "text", + "content": "1) The images show typical histological features of schwannoma with:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 238, + 231, + 352, + 239 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 231, + 352, + 239 + ], + "spans": [ + { + "bbox": [ + 238, + 231, + 352, + 239 + ], + "type": "text", + "content": "- Verocay bodies (palisading nuclei)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 238, + 240, + 307, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 240, + 307, + 248 + ], + "spans": [ + { + "bbox": [ + 238, + 240, + 307, + 248 + ], + "type": "text", + "content": "- Cystic degeneration" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 238, + 249, + 333, + 258 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 249, + 333, + 258 + ], + "spans": [ + { + "bbox": [ + 238, + 249, + 333, + 258 + ], + "type": "text", + "content": "- Pseudoepithelial metaplasia" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 238, + 258, + 419, + 266 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 258, + 419, + 266 + ], + "spans": [ + { + "bbox": [ + 238, + 258, + 419, + 266 + ], + "type": "text", + "content": "2) Regarding malignant transformation in schwannomas:" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 238, + 267, + 380, + 275 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 267, + 380, + 275 + ], + "spans": [ + { + "bbox": [ + 238, + 267, + 380, + 275 + ], + "type": "text", + "content": "- Schwannomas are typically benign tumors" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 238, + 276, + 389, + 284 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 276, + 389, + 284 + ], + "spans": [ + { + "bbox": [ + 238, + 276, + 389, + 284 + ], + "type": "text", + "content": "- They rarely undergo malignant transformation" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 238, + 285, + 388, + 293 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 285, + 388, + 293 + ], + "spans": [ + { + "bbox": [ + 238, + 285, + 388, + 293 + ], + "type": "text", + "content": "- This is true regardless of their size or location" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 238, + 293, + 496, + 302 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 293, + 496, + 302 + ], + "spans": [ + { + "bbox": [ + 238, + 293, + 496, + 302 + ], + "type": "text", + "content": "- Even multiple schwannomas (as in schwannomatosis) rarely become malignant" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 238, + 303, + 413, + 311 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 303, + 413, + 311 + ], + "spans": [ + { + "bbox": [ + 238, + 303, + 
413, + 311 + ], + "type": "text", + "content": "3) Comparing to other peripheral nerve sheath tumors:" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 238, + 312, + 526, + 319 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 312, + 526, + 319 + ], + "spans": [ + { + "bbox": [ + 238, + 312, + 526, + 319 + ], + "type": "text", + "content": "- Neurofibromas, especially plexiform types, have a higher risk of malignant transformation" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 238, + 320, + 466, + 329 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 320, + 466, + 329 + ], + "spans": [ + { + "bbox": [ + 238, + 320, + 466, + 329 + ], + "type": "text", + "content": "- This is particularly true in the setting of NF1 (neurofibromatosis type 1)" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 238, + 330, + 545, + 346 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 330, + 545, + 346 + ], + "spans": [ + { + "bbox": [ + 238, + 330, + 545, + 346 + ], + "type": "text", + "content": "- Malignant peripheral nerve sheath tumors (MPNSTs) more commonly arise from neurofibromas than schwannomas" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 238, + 347, + 540, + 364 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 347, + 540, + 364 + ], + "spans": [ + { + "bbox": [ + 238, + 347, + 540, + 364 + ], + "type": "text", + "content": "4) The statement about genetic factors or increased risk with multiple tumors isn't accurate for schwannomas, though it may be true for other nerve sheath tumors" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 238, + 365, + 531, + 382 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 365, + 531, + 382 + ], + "spans": [ + { + "bbox": [ + 238, + 365, + 531, + 382 + ], + "type": "text", + "content": "5) Looking at the options, the key is that schwannomas maintain their benign nature in most cases, regardless of circumstances." + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 238, + 383, + 293, + 392 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 238, + 383, + 293, + 392 + ], + "spans": [ + { + "bbox": [ + 238, + 383, + 293, + 392 + ], + "type": "text", + "content": "The answer is (5)" + } + ] + } + ], + "index": 29 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 237, + 404, + 338, + 414 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 237, + 404, + 338, + 414 + ], + "spans": [ + { + "bbox": [ + 237, + 404, + 338, + 414 + ], + "type": "text", + "content": "Error type: Overgeneralization" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 236, + 414, + 542, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 236, + 414, + 542, + 449 + ], + "spans": [ + { + "bbox": [ + 236, + 414, + 542, + 449 + ], + "type": "text", + "content": "Error reason: The selected choice does not address the question asked \"compared to other peripheral nerve sheath tumors?\" . The \"best answer\" for the question should compare to other tumors as specified in the question. The AI has \"oversimplified\" the question to ask a simpler question than the one asked." 
+ } + ] + } + ], + "index": 32 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 732, + 310, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 732, + 310, + 741 + ], + "spans": [ + { + "bbox": [ + 299, + 732, + 310, + 741 + ], + "type": "text", + "content": "28" + } + ] + } + ], + "index": 34 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 39 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 63, + 70, + 189, + 383 + ], + "blocks": [ + { + "bbox": [ + 63, + 70, + 189, + 383 + ], + "lines": [ + { + "bbox": [ + 63, + 70, + 189, + 383 + ], + "spans": [ + { + "bbox": [ + 63, + 70, + 189, + 383 + ], + "type": "image", + "image_path": "2af7fd39d58465679dcf68fe3f7dcfa4322a7c1932d3ac3e90bfb95ac834d098.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 63, + 385, + 166, + 404 + ], + "lines": [ + { + "bbox": [ + 63, + 385, + 166, + 404 + ], + "spans": [ + { + "bbox": [ + 63, + 385, + 166, + 404 + ], + "type": "text", + "content": "Subject: Cell and molecular biology" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 286, + 497, + 324, + 508 + ], + "lines": [ + { + "bbox": [ + 286, + 497, + 324, + 508 + ], + "spans": [ + { + "bbox": [ + 286, + 497, + 324, + 508 + ], + "type": "text", + "content": "Figure 22" + } + ] + } + ], + "index": 39, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 63, + 406, + 160, + 415 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 406, + 160, + 415 + ], + "spans": [ + { + "bbox": [ + 63, + 406, + 160, + 415 + ], + "type": "text", + "content": "Bloom's level: 4 (Analysis)" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 63, + 416, + 156, + 425 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 416, + 156, + 425 + ], + "spans": [ + { + "bbox": [ + 63, + 416, + 156, + 425 + ], + "type": "text", + "content": "Specimen: HEK293 cells" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 63, + 425, + 141, + 435 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 425, + 141, + 435 + ], + "spans": [ + { + "bbox": [ + 63, + 425, + 141, + 435 + ], + "type": "text", + "content": "(wildtype and PEX19" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 63, + 436, + 111, + 445 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 436, + 111, + 445 + ], + "spans": [ + { + "bbox": [ + 63, + 436, + 111, + 445 + ], + "type": "text", + "content": "knockdown)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 202, + 76, + 544, + 137 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 202, + 76, + 544, + 137 + ], + "spans": [ + { + "bbox": [ + 202, + 76, + 544, + 137 + ], + "type": "text", + "content": "Q: Confocal microscopy images depict live HEK293 cells subjected to CRISPR/Cas9-mediated knockdown of a specific gene, alongside wildtype cells. The cells are stained with BODIPY-C12 (cyan) to highlight lipid droplets and Hoechst to stain the nuclei. In wildtype cells, distinct organellar structures are visible, whereas these structures are absent in the gene knockdown cells. What is the most likely cellular outcome resulting from the knockdown of this gene in HEK293 cells?" 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 203, + 146, + 361, + 196 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 203, + 146, + 335, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 146, + 335, + 156 + ], + "spans": [ + { + "bbox": [ + 203, + 146, + 335, + 156 + ], + "type": "text", + "content": "A) Disruption of mitochondrial fission" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 203, + 156, + 335, + 166 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 156, + 335, + 166 + ], + "spans": [ + { + "bbox": [ + 203, + 156, + 335, + 166 + ], + "type": "text", + "content": "B) Impaired peroxisome biogenesis" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 203, + 167, + 349, + 176 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 167, + 349, + 176 + ], + "spans": [ + { + "bbox": [ + 203, + 167, + 349, + 176 + ], + "type": "text", + "content": "C) Altered lipid metabolism pathways X" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 203, + 176, + 361, + 186 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 176, + 361, + 186 + ], + "spans": [ + { + "bbox": [ + 203, + 176, + 361, + 186 + ], + "type": "text", + "content": "D) Reduced endoplasmic reticulum function" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 203, + 186, + 356, + 196 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 186, + 356, + 196 + ], + "spans": [ + { + "bbox": [ + 203, + 186, + 356, + 196 + ], + "type": "text", + "content": "E) Decreased nuclear transport processes" + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 202, + 210, + 364, + 220 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 202, + 210, + 364, + 220 + ], + "spans": [ + { + "bbox": [ + 202, + 210, + 364, + 220 + ], + "type": "text", + "content": "VLM Answer: Let me solve this step by step:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 203, + 220, + 395, + 270 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 203, + 220, + 395, + 230 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 220, + 395, + 230 + ], + "spans": [ + { + "bbox": [ + 203, + 220, + 395, + 230 + ], + "type": "text", + "content": "1) First, let's analyze what we're seeing in the images:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 203, + 231, + 313, + 239 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 231, + 313, + 239 + ], + "spans": [ + { + "bbox": [ + 203, + 231, + 313, + 239 + ], + "type": "text", + "content": "- Live HEK293 cells are shown" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 203, + 240, + 383, + 250 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 240, + 383, + 250 + ], + "spans": [ + { + "bbox": [ + 203, + 240, + 383, + 250 + ], + "type": "text", + "content": "- BODIPY-C12 staining (cyan) shows lipid droplets" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 203, + 251, + 361, + 259 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 251, + 361, + 259 + ], + "spans": [ + { + "bbox": [ + 203, + 251, + 361, + 259 + ], + "type": "text", + "content": "- Hoechst staining shows nuclei (grey/white)" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 203, + 260, + 363, + 270 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 260, + 363, + 270 + ], + 
"spans": [ + { + "bbox": [ + 203, + 260, + 363, + 270 + ], + "type": "text", + "content": "- The images show a time series (left to right)" + } + ] + } + ], + "index": 18 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 203, + 270, + 537, + 428 + ], + "type": "list", + "angle": 0, + "index": 36, + "blocks": [ + { + "bbox": [ + 203, + 270, + 277, + 279 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 270, + 277, + 279 + ], + "spans": [ + { + "bbox": [ + 203, + 270, + 277, + 279 + ], + "type": "text", + "content": "2) Key observations:" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 203, + 280, + 465, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 280, + 465, + 289 + ], + "spans": [ + { + "bbox": [ + 203, + 280, + 465, + 289 + ], + "type": "text", + "content": "- The cyan staining (BODIPY-C12) indicates the presence of lipid droplets" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 203, + 290, + 537, + 299 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 290, + 537, + 299 + ], + "spans": [ + { + "bbox": [ + 203, + 290, + 537, + 299 + ], + "type": "text", + "content": "- There's a difference between wildtype and knockdown cells in terms of organellar structures" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 203, + 300, + 341, + 309 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 300, + 341, + 309 + ], + "spans": [ + { + "bbox": [ + 203, + 300, + 341, + 309 + ], + "type": "text", + "content": "- The staining specifically targets lipids" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 203, + 310, + 290, + 319 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 310, + 290, + 319 + ], + "spans": [ + { + "bbox": [ + 203, + 310, + 290, + 319 + ], + "type": "text", + "content": "3) Consider the options:" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 203, + 319, + 429, + 329 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 319, + 429, + 329 + ], + "spans": [ + { + "bbox": [ + 203, + 319, + 429, + 329 + ], + "type": "text", + "content": "- Mitochondrial fission would primarily affect energy metabolism" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 203, + 330, + 377, + 338 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 330, + 377, + 338 + ], + "spans": [ + { + "bbox": [ + 203, + 330, + 377, + 338 + ], + "type": "text", + "content": "- Peroxisome biogenesis affects lipid breakdown" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 203, + 339, + 413, + 349 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 339, + 413, + 349 + ], + "spans": [ + { + "bbox": [ + 203, + 339, + 413, + 349 + ], + "type": "text", + "content": "- Lipid metabolism directly relates to lipid droplet formation" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 203, + 350, + 365, + 358 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 350, + 365, + 358 + ], + "spans": [ + { + "bbox": [ + 203, + 350, + 365, + 358 + ], + "type": "text", + "content": "- ER function is broader than just lipid effects" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 203, + 359, + 400, + 369 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 359, + 400, + 369 + ], + "spans": [ + { + "bbox": [ + 203, + 359, + 400, + 369 + ], + "type": "text", + "content": "- Nuclear transport isn't directly related to lipid droplets" + } + ] + } + ], + 
"index": 29 + }, + { + "bbox": [ + 203, + 369, + 253, + 379 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 369, + 253, + 379 + ], + "spans": [ + { + "bbox": [ + 203, + 369, + 253, + 379 + ], + "type": "text", + "content": "4) Reasoning:" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 203, + 380, + 363, + 388 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 380, + 363, + 388 + ], + "spans": [ + { + "bbox": [ + 203, + 380, + 363, + 388 + ], + "type": "text", + "content": "- Since BODIPY-C12 specifically stains lipids" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 203, + 389, + 404, + 398 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 389, + 404, + 398 + ], + "spans": [ + { + "bbox": [ + 203, + 389, + 404, + 398 + ], + "type": "text", + "content": "- The visible differences are in lipid-containing structures" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 203, + 399, + 411, + 408 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 399, + 411, + 408 + ], + "spans": [ + { + "bbox": [ + 203, + 399, + 411, + 408 + ], + "type": "text", + "content": "- The most direct connection would be to lipid metabolism" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 203, + 409, + 519, + 418 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 409, + 519, + 418 + ], + "spans": [ + { + "bbox": [ + 203, + 409, + 519, + 418 + ], + "type": "text", + "content": "- Changes in lipid metabolism would directly affect lipid droplet formation and distribution" + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 203, + 418, + 264, + 428 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 418, + 264, + 428 + ], + "spans": [ + { + "bbox": [ + 203, + 418, + 264, + 428 + ], + "type": "text", + "content": "The answer is (3)" + } + ] + } + ], + "index": 35 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 202, + 442, + 313, + 452 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 202, + 442, + 313, + 452 + ], + "spans": [ + { + "bbox": [ + 202, + 442, + 313, + 452 + ], + "type": "text", + "content": "Error type: Overgeneralization" + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 201, + 453, + 543, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 201, + 453, + 543, + 483 + ], + "spans": [ + { + "bbox": [ + 201, + 453, + 543, + 483 + ], + "type": "text", + "content": "Error reason: Assumes that the changes in lipid droplets are directly linked to altered lipid metabolism pathways, without fully integrating the implications of peroxisome biogenesis in lipid metabolism and organelle dynamics." 
+ } + ] + } + ], + "index": 38 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "type": "text", + "content": "29" + } + ] + } + ], + "index": 40 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 40 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 276, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 276, + 83 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 276, + 83 + ], + "type": "text", + "content": "F.6.4 Hallucination or errors due to added details" + } + ] + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 61, + 99, + 181, + 248 + ], + "blocks": [ + { + "bbox": [ + 61, + 99, + 181, + 248 + ], + "lines": [ + { + "bbox": [ + 61, + 99, + 181, + 248 + ], + "spans": [ + { + "bbox": [ + 61, + 99, + 181, + 248 + ], + "type": "image", + "image_path": "2bb7f46b0dfce685f6b9872de90d66f2897a2d52aa5e03a72b5471deb07e4d20.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 61, + 256, + 178, + 406 + ], + "blocks": [ + { + "bbox": [ + 61, + 256, + 178, + 406 + ], + "lines": [ + { + "bbox": [ + 61, + 256, + 178, + 406 + ], + "spans": [ + { + "bbox": [ + 61, + 256, + 178, + 406 + ], + "type": "image", + "image_path": "f277bd23b385a1d76b6bf26e48cb6c0f1e17d4ce2491d44a882e718ea823dd4f.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 63, + 415, + 168, + 436 + ], + "lines": [ + { + "bbox": [ + 63, + 415, + 168, + 436 + ], + "spans": [ + { + "bbox": [ + 63, + 415, + 168, + 436 + ], + "type": "text", + "content": "Subject: Cell and molecular biology" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 436, + 165, + 446 + ], + "lines": [ + { + "bbox": [ + 63, + 436, + 165, + 446 + ], + "spans": [ + { + "bbox": [ + 63, + 436, + 165, + 446 + ], + "type": "text", + "content": "Bloom's level: 4 (Analysis)" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 63, + 446, + 171, + 464 + ], + "lines": [ + { + "bbox": [ + 63, + 446, + 171, + 464 + ], + "spans": [ + { + "bbox": [ + 63, + 446, + 171, + 464 + ], + "type": "text", + "content": "Specimen: cellulose fibers in vitro" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 286, + 498, + 323, + 508 + ], + "lines": [ + { + "bbox": [ + 286, + 498, + 323, + 508 + ], + "spans": [ + { + "bbox": [ + 286, + 498, + 323, + 508 + ], + "type": "text", + "content": "Figure 23" + } + ] + } + ], + "index": 35, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 194, + 104, + 544, + 135 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 104, + 544, + 135 + ], + "spans": [ + { + "bbox": [ + 194, + 104, + 544, + 135 + ], + "type": "text", + "content": "Q: A CryoET tomographic slice image and its corresponding 3D annotation of cellulose fibers in vitro show differing amounts of cellulose fibers. Beyond the 2D nature of the slice versus the 3D nature of the annotation, which mechanism most likely accounts for the observed differences?" 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 195, + 144, + 463, + 155 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 144, + 463, + 155 + ], + "spans": [ + { + "bbox": [ + 195, + 144, + 463, + 155 + ], + "type": "text", + "content": "A) Errors in the alignment during image reconstruction leading to artifacts." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 195, + 155, + 502, + 196 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 195, + 155, + 502, + 166 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 155, + 502, + 166 + ], + "spans": [ + { + "bbox": [ + 195, + 155, + 502, + 166 + ], + "type": "text", + "content": "B) Overlapping fibers being misinterpreted as single fibers due to resolution limits. " + }, + { + "bbox": [ + 195, + 155, + 502, + 166 + ], + "type": "inline_equation", + "content": "\\times" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 195, + 166, + 423, + 175 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 166, + 423, + 175 + ], + "spans": [ + { + "bbox": [ + 195, + 166, + 423, + 175 + ], + "type": "text", + "content": "C) Variability in manual annotation leading to subjective bias." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 195, + 175, + 428, + 186 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 175, + 428, + 186 + ], + "spans": [ + { + "bbox": [ + 195, + 175, + 428, + 186 + ], + "type": "text", + "content": "D) Sample dehydration causing shrinkage of fibers in the image." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 195, + 186, + 443, + 196 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 186, + 443, + 196 + ], + "spans": [ + { + "bbox": [ + 195, + 186, + 443, + 196 + ], + "type": "text", + "content": "E) Background noise in the image causing misidentification of fibers." 
+ } + ] + } + ], + "index": 11 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 195, + 208, + 361, + 218 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 208, + 361, + 218 + ], + "spans": [ + { + "bbox": [ + 195, + 208, + 361, + 218 + ], + "type": "text", + "content": "VLM Answer: Let me solve this step by step:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 195, + 219, + 537, + 409 + ], + "type": "list", + "angle": 0, + "index": 31, + "blocks": [ + { + "bbox": [ + 195, + 219, + 341, + 228 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 219, + 341, + 228 + ], + "spans": [ + { + "bbox": [ + 195, + 219, + 341, + 228 + ], + "type": "text", + "content": "1) First, let's analyze what we're seeing:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 195, + 228, + 473, + 239 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 228, + 473, + 239 + ], + "spans": [ + { + "bbox": [ + 195, + 228, + 473, + 239 + ], + "type": "text", + "content": "- Left image: CryoET tomographic slice showing cellulose fibers in grayscale" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 195, + 239, + 373, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 239, + 373, + 248 + ], + "spans": [ + { + "bbox": [ + 195, + 239, + 373, + 248 + ], + "type": "text", + "content": "- Right image: 3D annotation of the fibers in blue" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 195, + 249, + 470, + 258 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 249, + 470, + 258 + ], + "spans": [ + { + "bbox": [ + 195, + 249, + 470, + 258 + ], + "type": "text", + "content": "- There appears to be a difference in how many fibers are visible/annotated" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 195, + 259, + 363, + 269 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 259, + 363, + 269 + ], + "spans": [ + { + "bbox": [ + 195, + 259, + 363, + 269 + ], + "type": "text", + "content": "2) Consider the limitations of CryoET imaging:" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 195, + 270, + 276, + 278 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 270, + 276, + 278 + ], + "spans": [ + { + "bbox": [ + 195, + 270, + 276, + 278 + ], + "type": "text", + "content": "- Resolution is limited" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 195, + 279, + 378, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 279, + 378, + 289 + ], + "spans": [ + { + "bbox": [ + 195, + 279, + 378, + 289 + ], + "type": "text", + "content": "- Multiple fibers can appear to overlap in 2D views" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 195, + 289, + 380, + 299 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 289, + 380, + 299 + ], + "spans": [ + { + "bbox": [ + 195, + 289, + 380, + 299 + ], + "type": "text", + "content": "- Depth information can be challenging to interpret" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 195, + 300, + 285, + 309 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 300, + 285, + 309 + ], + "spans": [ + { + "bbox": [ + 195, + 300, + 285, + 309 + ], + "type": "text", + "content": "3) Evaluate each option:" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 195, + 309, + 492, + 319 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 309, + 492, + 319 + ], + "spans": [ + { + "bbox": [ + 195, 
+ 309, + 492, + 319 + ], + "type": "text", + "content": "- (1) Alignment errors typically cause obvious artifacts, which aren't apparent here" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 195, + 319, + 503, + 329 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 319, + 503, + 329 + ], + "spans": [ + { + "bbox": [ + 195, + 319, + 503, + 329 + ], + "type": "text", + "content": "- (2) Resolution limits could cause overlapping fibers to appear as one in the 2D slice" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 195, + 330, + 460, + 339 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 330, + 460, + 339 + ], + "spans": [ + { + "bbox": [ + 195, + 330, + 460, + 339 + ], + "type": "text", + "content": "- (3) Manual annotation variability wouldn't explain systematic differences" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 195, + 339, + 403, + 350 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 339, + 403, + 350 + ], + "spans": [ + { + "bbox": [ + 195, + 339, + 403, + 350 + ], + "type": "text", + "content": "- (4) Sample dehydration would affect both views equally" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 195, + 350, + 486, + 360 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 350, + 486, + 360 + ], + "spans": [ + { + "bbox": [ + 195, + 350, + 486, + 360 + ], + "type": "text", + "content": "- (5) Background noise might affect fiber identification but isn't the primary issue" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 195, + 360, + 533, + 379 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 360, + 533, + 379 + ], + "spans": [ + { + "bbox": [ + 195, + 360, + 533, + 379 + ], + "type": "text", + "content": "4) The key observation is that the 2D slice shows fewer distinct fibers than the 3D annotation, likely because:" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 195, + 380, + 471, + 390 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 380, + 471, + 390 + ], + "spans": [ + { + "bbox": [ + 195, + 380, + 471, + 390 + ], + "type": "text", + "content": "- Multiple fibers that overlap in the 2D view can be properly separated in 3D" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 195, + 391, + 537, + 409 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 391, + 537, + 409 + ], + "spans": [ + { + "bbox": [ + 195, + 391, + 537, + 409 + ], + "type": "text", + "content": "- The resolution limits of the microscope make it difficult to distinguish overlapping fibers in the 2D view" + } + ] + } + ], + "index": 30 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 196, + 411, + 259, + 421 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 196, + 411, + 259, + 421 + ], + "spans": [ + { + "bbox": [ + 196, + 411, + 259, + 421 + ], + "type": "text", + "content": "The answer is (2)" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 195, + 431, + 288, + 441 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 431, + 288, + 441 + ], + "spans": [ + { + "bbox": [ + 195, + 431, + 288, + 441 + ], + "type": "text", + "content": "Error type: Hallucination" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 194, + 441, + 541, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 194, + 441, + 541, + 483 + ], + "spans": [ + { + "bbox": [ + 194, + 441, + 541, + 483 + ], + "type": "text", + "content": "Error reason: The 
question does not state there are systematic differences. This detail is added during the AI CoT reasoning. A better answer would rule out easy or obvious causes of differences (random or systematic). Although fibers can overlap in Cryo-EM, it is generally considered to have a reasonably high resolution." + } + ] + } + ], + "index": 34 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "type": "text", + "content": "30" + } + ] + } + ], + "index": 36 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 41 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 194, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 194, + 85 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 194, + 85 + ], + "type": "text", + "content": "F.7. Reasoning trace analysis" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 90, + 219, + 102 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 90, + 219, + 102 + ], + "spans": [ + { + "bbox": [ + 55, + 90, + 219, + 102 + ], + "type": "text", + "content": "F.7.1 Human step-by-step reasoning" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 109, + 553, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 109, + 553, + 133 + ], + "spans": [ + { + "bbox": [ + 55, + 109, + 553, + 133 + ], + "type": "text", + "content": "To better understand how subject matter experts approach questions in MicroVQA, we have a cell biologist and microscopist manually annotate their reasoning for each of the three main use cases." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 133, + 310, + 145 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 133, + 310, + 145 + ], + "spans": [ + { + "bbox": [ + 67, + 133, + 310, + 145 + ], + "type": "text", + "content": "Reasoning example for task 1 'expert visual understanding'." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 66, + 162, + 110, + 168 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 162, + 110, + 168 + ], + "spans": [ + { + "bbox": [ + 66, + 162, + 110, + 168 + ], + "type": "text", + "content": "\\*Question\\*" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 66, + 169, + 348, + 178 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 169, + 348, + 178 + ], + "spans": [ + { + "bbox": [ + 66, + 169, + 348, + 178 + ], + "type": "text", + "content": "What subcellular structure could the bright green spots represent?*" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 66, + 186, + 101, + 192 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 186, + 101, + 192 + ], + "spans": [ + { + "bbox": [ + 66, + 186, + 101, + 192 + ], + "type": "text", + "content": "\\*Answer" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 66, + 193, + 331, + 201 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 193, + 331, + 201 + ], + "spans": [ + { + "bbox": [ + 66, + 193, + 331, + 201 + ], + "type": "text", + "content": "This staining pattern most likely refers to cytoplasmic bodies." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 66, + 209, + 118, + 217 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 209, + 118, + 217 + ], + "spans": [ + { + "bbox": [ + 66, + 209, + 118, + 217 + ], + "type": "text", + "content": "\\*Reasoning:\\*" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 66, + 217, + 242, + 225 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 217, + 242, + 225 + ], + "spans": [ + { + "bbox": [ + 66, + 217, + 242, + 225 + ], + "type": "text", + "content": "Find green spots (no reasoning required)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 66, + 225, + 190, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 225, + 190, + 232 + ], + "spans": [ + { + "bbox": [ + 66, + 225, + 190, + 232 + ], + "type": "text", + "content": "- Rule out staining artefacts" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 84, + 233, + 348, + 241 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 233, + 348, + 241 + ], + "spans": [ + { + "bbox": [ + 84, + 233, + 348, + 241 + ], + "type": "text", + "content": "- Look at marker channels to assess whether cells appear intact" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 84, + 242, + 399, + 248 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 242, + 399, + 248 + ], + "spans": [ + { + "bbox": [ + 84, + 242, + 399, + 248 + ], + "type": "text", + "content": "- Compare the green spots to the marker channels for nuclei or microtubules" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 84, + 249, + 302, + 256 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 249, + 302, + 256 + ], + "spans": [ + { + "bbox": [ + 84, + 249, + 302, + 256 + ], + "type": "text", + "content": "- Check that green spots are always inside the cells" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 100, + 257, + 331, + 265 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 100, + 257, + 331, + 265 + ], + "spans": [ + { + "bbox": [ + 100, + 257, + 331, + 265 + ], + "type": "text", + "content": "- Yes, inside cells, so these are no staining artefact." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 66, + 265, + 299, + 273 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 265, + 299, + 273 + ], + "spans": [ + { + "bbox": [ + 66, + 265, + 299, + 273 + ], + "type": "text", + "content": "- Look up which spotty staining patterns in cells exist" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 90, + 274, + 153, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 90, + 274, + 153, + 281 + ], + "spans": [ + { + "bbox": [ + 90, + 274, + 153, + 281 + ], + "type": "text", + "content": "Nuclear bodies" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 84, + 281, + 160, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 281, + 160, + 289 + ], + "spans": [ + { + "bbox": [ + 84, + 281, + 160, + 289 + ], + "type": "text", + "content": "- Nuclear speckles" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 84, + 289, + 127, + 296 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 289, + 127, + 296 + ], + "spans": [ + { + "bbox": [ + 84, + 289, + 127, + 296 + ], + "type": "text", + "content": "- Nucleoli" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 84, + 297, + 168, + 304 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 297, + 168, + 304 + ], + "spans": [ + { + "bbox": [ + 84, + 297, + 168, + 304 + ], + "type": "text", + "content": "- Cytoplasmic bodies" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 84, + 305, + 135, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 305, + 135, + 312 + ], + "spans": [ + { + "bbox": [ + 84, + 305, + 135, + 312 + ], + "type": "text", + "content": "- Centrosome" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 84, + 312, + 127, + 319 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 312, + 127, + 319 + ], + "spans": [ + { + "bbox": [ + 84, + 312, + 127, + 319 + ], + "type": "text", + "content": "- Vesicles" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 84, + 320, + 186, + 327 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 320, + 186, + 327 + ], + "spans": [ + { + "bbox": [ + 84, + 320, + 186, + 327 + ], + "type": "text", + "content": "- Intermediate filaments" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 84, + 328, + 206, + 337 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 328, + 206, + 337 + ], + "spans": [ + { + "bbox": [ + 84, + 328, + 206, + 337 + ], + "type": "text", + "content": "- Spotty microtubule staining" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 66, + 337, + 236, + 344 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 337, + 236, + 344 + ], + "spans": [ + { + "bbox": [ + 66, + 337, + 236, + 344 + ], + "type": "text", + "content": "- Explore which patterns can be excluded" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 84, + 345, + 470, + 353 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 345, + 470, + 353 + ], + "spans": [ + { + "bbox": [ + 84, + 345, + 470, + 353 + ], + "type": "text", + "content": "- Check whether green spots are inside or outside of nuclei to know what patterns to explore" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 100, + 353, + 517, + 360 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 100, + 353, + 517, + 360 + ], + "spans": [ + { + "bbox": [ + 100, + 353, + 517, + 360 + ], + "type": "text", + "content": "- 
Outside of nuclei, thus nuclear subcellular localizations can be excluded, and cytoplasmic spotty" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 119, + 361, + 239, + 368 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 119, + 361, + 239, + 368 + ], + "spans": [ + { + "bbox": [ + 119, + 361, + 239, + 368 + ], + "type": "text", + "content": "patterns need to be explored" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 84, + 369, + 311, + 376 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 369, + 311, + 376 + ], + "spans": [ + { + "bbox": [ + 84, + 369, + 311, + 376 + ], + "type": "text", + "content": "- Check whether green spots are placed on microtubules" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 100, + 376, + 290, + 384 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 100, + 376, + 290, + 384 + ], + "spans": [ + { + "bbox": [ + 100, + 376, + 290, + 384 + ], + "type": "text", + "content": "- No, thus cannot be spotty microtubule stain" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 84, + 384, + 520, + 392 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 384, + 520, + 392 + ], + "spans": [ + { + "bbox": [ + 84, + 384, + 520, + 392 + ], + "type": "text", + "content": "- Assess localization > if accumulating around microtubule organizing center could be likely vesicles or" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 104, + 393, + 193, + 399 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 393, + 193, + 399 + ], + "spans": [ + { + "bbox": [ + 104, + 393, + 193, + 399 + ], + "type": "text", + "content": "centriolar satellites" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 100, + 401, + 253, + 408 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 100, + 401, + 253, + 408 + ], + "spans": [ + { + "bbox": [ + 100, + 401, + 253, + 408 + ], + "type": "text", + "content": "- No, not accumulating at centrosome" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 84, + 409, + 236, + 416 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 409, + 236, + 416 + ], + "spans": [ + { + "bbox": [ + 84, + 409, + 236, + 416 + ], + "type": "text", + "content": "- Assess size and shape of the spots" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 100, + 417, + 345, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 100, + 417, + 345, + 423 + ], + "spans": [ + { + "bbox": [ + 100, + 417, + 345, + 423 + ], + "type": "text", + "content": "- Small size and round, can exclude intermediate filaments" + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 84, + 424, + 336, + 431 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 424, + 336, + 431 + ], + "spans": [ + { + "bbox": [ + 84, + 424, + 336, + 431 + ], + "type": "text", + "content": "- Count green spots per cell and check that not more than 10" + } + ] + } + ], + "index": 35 + }, + { + "bbox": [ + 100, + 432, + 307, + 440 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 100, + 432, + 307, + 440 + ], + "spans": [ + { + "bbox": [ + 100, + 432, + 307, + 440 + ], + "type": "text", + "content": "- Less likely vesicles, likely cytoplasmic bodies" + } + ] + } + ], + "index": 36 + }, + { + "bbox": [ + 67, + 456, + 287, + 468 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 456, + 287, + 468 + ], + "spans": [ + { + "bbox": [ + 67, + 456, + 287, + 468 + ], + "type": "text", + 
"content": "Reasoning example for task 2 'hypothesis generation'." + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 66, + 485, + 110, + 492 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 485, + 110, + 492 + ], + "spans": [ + { + "bbox": [ + 66, + 485, + 110, + 492 + ], + "type": "text", + "content": "\\*Question\\*" + } + ] + } + ], + "index": 38 + }, + { + "bbox": [ + 66, + 493, + 429, + 501 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 493, + 429, + 501 + ], + "spans": [ + { + "bbox": [ + 66, + 493, + 429, + 501 + ], + "type": "text", + "content": "The sperm cells swim in circles. Wouldn't they need to swim straight to reach the egg?" + } + ] + } + ], + "index": 39 + }, + { + "bbox": [ + 66, + 510, + 101, + 516 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 510, + 101, + 516 + ], + "spans": [ + { + "bbox": [ + 66, + 510, + 101, + 516 + ], + "type": "text", + "content": "\\*Answer\\*" + } + ] + } + ], + "index": 40 + }, + { + "bbox": [ + 66, + 517, + 541, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 517, + 541, + 525 + ], + "spans": [ + { + "bbox": [ + 66, + 517, + 541, + 525 + ], + "type": "text", + "content": "Sperm motility and its relation to reaching the egg and fertility is very complex and still not fully understood." + } + ] + } + ], + "index": 41 + }, + { + "bbox": [ + 88, + 525, + 524, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 525, + 524, + 533 + ], + "spans": [ + { + "bbox": [ + 88, + 525, + 524, + 533 + ], + "type": "text", + "content": "Yet , it is known that sperm swimming in circles in a shallow observation chamber like you might have" + } + ] + } + ], + "index": 42 + }, + { + "bbox": [ + 88, + 533, + 524, + 540 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 533, + 524, + 540 + ], + "spans": [ + { + "bbox": [ + 88, + 533, + 524, + 540 + ], + "type": "text", + "content": "applied, may swim on helical or twisted-ribbon structures in an in vivo or 3D space and thereby might in" + } + ] + } + ], + "index": 43 + }, + { + "bbox": [ + 88, + 540, + 213, + 548 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 88, + 540, + 213, + 548 + ], + "spans": [ + { + "bbox": [ + 88, + 540, + 213, + 548 + ], + "type": "text", + "content": "fact efficiently swim forward." + } + ] + } + ], + "index": 44 + }, + { + "bbox": [ + 67, + 557, + 114, + 564 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 557, + 114, + 564 + ], + "spans": [ + { + "bbox": [ + 67, + 557, + 114, + 564 + ], + "type": "text", + "content": "\\*Reasoning\\*" + } + ] + } + ], + "index": 45 + }, + { + "bbox": [ + 67, + 565, + 457, + 572 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 565, + 457, + 572 + ], + "spans": [ + { + "bbox": [ + 67, + 565, + 457, + 572 + ], + "type": "text", + "content": "- Explore the swimming trajectories to validate what the user means with swimming in circles." 
+ } + ] + } + ], + "index": 46 + }, + { + "bbox": [ + 90, + 573, + 474, + 581 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 90, + 573, + 474, + 581 + ], + "spans": [ + { + "bbox": [ + 90, + 573, + 474, + 581 + ], + "type": "text", + "content": "Detect trajectory, assess how it would be described: curved or circular swimming trajectory" + } + ] + } + ], + "index": 47 + }, + { + "bbox": [ + 67, + 581, + 348, + 589 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 581, + 348, + 589 + ], + "spans": [ + { + "bbox": [ + 67, + 581, + 348, + 589 + ], + "type": "text", + "content": "- Explore what is reported on swimming trajectories of sperm cells." + } + ] + } + ], + "index": 48 + }, + { + "bbox": [ + 84, + 590, + 319, + 597 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 590, + 319, + 597 + ], + "spans": [ + { + "bbox": [ + 84, + 590, + 319, + 597 + ], + "type": "text", + "content": "- Sperm cell swimming trajectories are species dependent" + } + ] + } + ], + "index": 49 + }, + { + "bbox": [ + 100, + 597, + 181, + 604 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 100, + 597, + 181, + 604 + ], + "spans": [ + { + "bbox": [ + 100, + 597, + 181, + 604 + ], + "type": "text", + "content": "- Validate species:" + } + ] + } + ], + "index": 50 + }, + { + "bbox": [ + 126, + 605, + 377, + 613 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 126, + 605, + 377, + 613 + ], + "spans": [ + { + "bbox": [ + 126, + 605, + 377, + 613 + ], + "type": "text", + "content": "Assess whether the user mentioned the species in the prompt." + } + ] + } + ], + "index": 51 + }, + { + "bbox": [ + 118, + 613, + 541, + 620 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 613, + 541, + 620 + ], + "spans": [ + { + "bbox": [ + 118, + 613, + 541, + 620 + ], + "type": "text", + "content": "- Assess look of cells in image (morphology, geometry) and compare to other images with known species" + } + ] + } + ], + "index": 52 + }, + { + "bbox": [ + 137, + 621, + 185, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 137, + 621, + 185, + 628 + ], + "spans": [ + { + "bbox": [ + 137, + 621, + 185, + 628 + ], + "type": "text", + "content": "eventually" + } + ] + } + ], + "index": 53 + }, + { + "bbox": [ + 140, + 628, + 222, + 636 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 628, + 222, + 636 + ], + "spans": [ + { + "bbox": [ + 140, + 628, + 222, + 636 + ], + "type": "text", + "content": "Likely human sperm." 
+ } + ] + } + ], + "index": 54 + }, + { + "bbox": [ + 84, + 637, + 537, + 644 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 637, + 537, + 644 + ], + "spans": [ + { + "bbox": [ + 84, + 637, + 537, + 644 + ], + "type": "text", + "content": "- Sperm cell swimming trajectories are experimental setup dependent: shallow versus deep observation chamber" + } + ] + } + ], + "index": 55 + }, + { + "bbox": [ + 100, + 645, + 172, + 652 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 100, + 645, + 172, + 652 + ], + "spans": [ + { + "bbox": [ + 100, + 645, + 172, + 652 + ], + "type": "text", + "content": "- Validate setup:" + } + ] + } + ], + "index": 56 + }, + { + "bbox": [ + 118, + 653, + 533, + 661 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 653, + 533, + 661 + ], + "spans": [ + { + "bbox": [ + 118, + 653, + 533, + 661 + ], + "type": "text", + "content": "- Brightfield microscopy, sperm cells are always swimming parallel, little to no off-focus blurring" + } + ] + } + ], + "index": 57 + }, + { + "bbox": [ + 100, + 662, + 187, + 668 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 100, + 662, + 187, + 668 + ], + "spans": [ + { + "bbox": [ + 100, + 662, + 187, + 668 + ], + "type": "text", + "content": "of the tail." + } + ] + } + ], + "index": 58 + }, + { + "bbox": [ + 134, + 669, + 470, + 676 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 134, + 669, + 470, + 676 + ], + "spans": [ + { + "bbox": [ + 134, + 669, + 470, + 676 + ], + "type": "text", + "content": "- Assess how images from different experimental setups look like for human sperm" + } + ] + } + ], + "index": 59 + }, + { + "bbox": [ + 151, + 677, + 374, + 684 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 151, + 677, + 374, + 684 + ], + "spans": [ + { + "bbox": [ + 151, + 677, + 374, + 684 + ], + "type": "text", + "content": "- Likely shallow cover-slip-based observation chamber" + } + ] + } + ], + "index": 60 + }, + { + "bbox": [ + 167, + 685, + 328, + 693 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 167, + 685, + 328, + 693 + ], + "spans": [ + { + "bbox": [ + 167, + 685, + 328, + 693 + ], + "type": "text", + "content": "- Consequences of setup for trajectory" + } + ] + } + ], + "index": 61 + }, + { + "bbox": [ + 184, + 693, + 529, + 700 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 184, + 693, + 529, + 700 + ], + "spans": [ + { + "bbox": [ + 184, + 693, + 529, + 700 + ], + "type": "text", + "content": "In shallow observation chambers sperm cannot freely beat and thus beat parallel to" + } + ] + } + ], + "index": 62 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 732, + 310, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 732, + 310, + 742 + ], + "spans": [ + { + "bbox": [ + 299, + 732, + 310, + 742 + ], + "type": "text", + "content": "31" + } + ] + } + ], + "index": 63 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 42 + }, + { + "para_blocks": [ + { + "type": "code", + "bbox": [ + 65, + 73, + 541, + 277 + ], + "blocks": [ + { + "bbox": [ + 65, + 73, + 541, + 277 + ], + "lines": [ + { + "bbox": [ + 65, + 73, + 541, + 277 + ], + "spans": [ + { + "bbox": [ + 65, + 73, + 541, + 277 + ], + "type": "text", + "content": "the glass surface, with no 3D component. - Sperm cells hitting a glass surface align their beat with the glass and this way can get trapped if asymmetrically beating. 
- Curved trajectory in 2D may relate to a progressive/straight swimming path in 3D - Sperm cell swimming trajectories are stimulus dependent and maturation-state dependent. - Validate setup and explore metadata provided: - Check literature and compare image to available images of sperm from healthy donors (if available). - Compare detected swimming trajectory to swimming trajectories of sperm explored in a similar species and experimental setup where sperm cells are perturbed (if available). - Are there indications for any treatment under which the detected swimming trajectory would be abnormal and how would these look like (if available). - No indications for any treatment. Neither in prompt nor in image. - Explore what is known on sperm cell trajectories to reach the egg in humans. - Not much is known since in vivo studies are not possible. - Sperm cells need to be able to develop hyperactive motility to swim to the egg - How do hyperactive sperm cells look? - Asymmetric beating - What trajectory would they swim in in a 2D observation chamber? - Curved trajectory - Does that fit the swimming trajectory presented? - Yes - Assess whether when summarizing all the obtained information the conclusion from the observed swimming trajectory to sperm not reaching the egg is valid. - No. - Compose summarizing reply to support the conclusion." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "code_body" + }, + { + "bbox": [ + 67, + 289, + 282, + 302 + ], + "lines": [ + { + "bbox": [ + 67, + 289, + 282, + 302 + ], + "spans": [ + { + "bbox": [ + 67, + 289, + 282, + 302 + ], + "type": "text", + "content": "Reasoning example for task 3 'experiment proposal'." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "code_caption" + } + ], + "index": 0, + "sub_type": "code", + "guess_lang": "txt" + }, + { + "type": "code", + "bbox": [ + 65, + 316, + 492, + 502 + ], + "blocks": [ + { + "bbox": [ + 65, + 316, + 492, + 502 + ], + "lines": [ + { + "bbox": [ + 65, + 316, + 492, + 502 + ], + "spans": [ + { + "bbox": [ + 65, + 316, + 492, + 502 + ], + "type": "text", + "content": "\\*Question\\* Can you suggest a simple experiment to find out whether centrosomal multiplication is occurring here? \\*Answer\\* One option could be to co-stain the EDC3 antibody with a centrosomal marker like PCNT or CEP250. 
\n\\*Reasoning\\* - Extract hypothesis to be tested from the question EDC3 is labeling centrosomes and thus, centrosomal multiplication is occurring in the pictured cell setup - Immunofluorescent antibody staining on a protein of interest (EDC3) in a human cultured cell type - Compare to similar experimental setups that support such a hypothesis - Immunofluorescent antibody staining of centrosomes, co-labeling cell markers, performing micros c counting centrosomes Acquiring or creating a transgenic cell line with a GFP-labeled centrosomal proteins, followed by microscopy \n- Determine which methods are compatible with EDC3 staining Both compatible with co-staining EDC3 \n- Determine which experiment is simplest for someone performing the shown experimental setup and if e one can be realized in less time Immunofluorescent staining of EDC3 and centrosomes \nFind centrosome markers for human cells and assess suitability for this cell type \nFinal decision on experimental setup" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "code_body" + } + ], + "index": 2, + "sub_type": "code", + "guess_lang": "latex" + }, + { + "bbox": [ + 55, + 525, + 201, + 537 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 525, + 201, + 537 + ], + "spans": [ + { + "bbox": [ + 55, + 525, + 201, + 537 + ], + "type": "text", + "content": "F.7.2 LLM-based error analysis" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 544, + 555, + 615 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 544, + 555, + 615 + ], + "spans": [ + { + "bbox": [ + 55, + 544, + 555, + 615 + ], + "type": "text", + "content": "To better understand how the model approaches questions in MicroVQA, we extend our manual reasoning trace analysis from a subset of the dataset to all questions by leveraging zero-shot prompting of an LLM. The automatic error categorization achieves " + }, + { + "bbox": [ + 55, + 544, + 555, + 615 + ], + "type": "inline_equation", + "content": "63\\%" + }, + { + "bbox": [ + 55, + 544, + 555, + 615 + ], + "type": "text", + "content": " accuracy compared to manually assigned categories. Although imperfect, automated error categorization is scalable and preserves the overall distribution of human-identified errors, enabling broader analysis. We also classify correctly answered questions to quantify overall changes across experiments. Below, we present the prompts and category definitions used for classification." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 65, + 627, + 533, + 653 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 627, + 533, + 653 + ], + "spans": [ + { + "bbox": [ + 65, + 627, + 533, + 653 + ], + "type": "text", + "content": "Below is a multiple choice question with options and the reasoning that lead a model to an incorrect response. Originally the model was also shown an image with the question. Your task is to use the reasoning trace to tag the error type:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 66, + 653, + 529, + 684 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 66, + 653, + 294, + 660 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 653, + 294, + 660 + ], + "spans": [ + { + "bbox": [ + 66, + 653, + 294, + 660 + ], + "type": "text", + "content": "- Perception: the image was not interpreted correctly." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 66, + 661, + 470, + 668 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 661, + 470, + 668 + ], + "spans": [ + { + "bbox": [ + 66, + 661, + 470, + 668 + ], + "type": "text", + "content": "- Overgeneralization: the details of the question were ignored and the general case was applied." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 66, + 669, + 529, + 676 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 669, + 529, + 676 + ], + "spans": [ + { + "bbox": [ + 66, + 669, + 529, + 676 + ], + "type": "text", + "content": "- Hallucination: details were added during reasoning that weren't in the question or extracted from the image." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 66, + 677, + 289, + 684 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 677, + 289, + 684 + ], + "spans": [ + { + "bbox": [ + 66, + 677, + 289, + 684 + ], + "type": "text", + "content": "- Other: the error does not fit the above categories." + } + ] + } + ], + "index": 9 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 66, + 692, + 106, + 700 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 692, + 106, + 700 + ], + "spans": [ + { + "bbox": [ + 66, + 692, + 106, + 700 + ], + "type": "text", + "content": "Question:" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "type": "text", + "content": "32" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 43 + }, + { + "para_blocks": [ + { + "type": "code", + "bbox": [ + 66, + 75, + 210, + 125 + ], + "blocks": [ + { + "bbox": [ + 66, + 75, + 210, + 125 + ], + "lines": [ + { + "bbox": [ + 66, + 75, + 210, + 125 + ], + "spans": [ + { + "bbox": [ + 66, + 75, + 210, + 125 + ], + "type": "text", + "content": "{{question}} \nCorrect answer: {{correct_answer}} \nResoning trace: \n{{reasoning}}" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "code_body" + } + ], + "index": 0, + "sub_type": "code", + "guess_lang": "handlebars" + }, + { + "bbox": [ + 65, + 161, + 537, + 186 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 161, + 537, + 186 + ], + "spans": [ + { + "bbox": [ + 65, + 161, + 537, + 186 + ], + "type": "text", + "content": "Below is a multiple choice question with options and the reasoning that lead a model to a correct response. Originally the model was also shown an image with the question. Your task is to use the reasoning trace and determine if the question was answered because of these reasons:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 66, + 186, + 517, + 242 + ], + "type": "list", + "angle": 0, + "index": 8, + "blocks": [ + { + "bbox": [ + 66, + 186, + 507, + 194 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 186, + 507, + 194 + ], + "spans": [ + { + "bbox": [ + 66, + 186, + 507, + 194 + ], + "type": "text", + "content": "- No image: the image is unnecessary because the correct answer doesn't rely on interpreting visual cues." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 66, + 194, + 517, + 210 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 194, + 517, + 210 + ], + "spans": [ + { + "bbox": [ + 66, + 194, + 517, + 210 + ], + "type": "text", + "content": "- Visual giveaway: the image is unnecessary because critical visual information is already described in the question itself." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 66, + 210, + 419, + 218 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 210, + 419, + 218 + ], + "spans": [ + { + "bbox": [ + 66, + 210, + 419, + 218 + ], + "type": "text", + "content": "- Language bias: the question has information that makes the correct option obvious." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 66, + 218, + 444, + 225 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 218, + 444, + 225 + ], + "spans": [ + { + "bbox": [ + 66, + 218, + 444, + 225 + ], + "type": "text", + "content": "- Weak distractors: the distractors are easy to rule out according to the reasoning trace." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 66, + 225, + 511, + 233 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 225, + 511, + 233 + ], + "spans": [ + { + "bbox": [ + 66, + 225, + 511, + 233 + ], + "type": "text", + "content": "- Good question: the question is well-constructed and needs the image and specialized knowledge to answer." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 66, + 233, + 373, + 242 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 233, + 373, + 242 + ], + "spans": [ + { + "bbox": [ + 66, + 233, + 373, + 242 + ], + "type": "text", + "content": "- Other: the question is hard to answer or doesn't fit the other classes." 
+ } + ] + } + ], + "index": 7 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 66, + 250, + 105, + 257 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 250, + 105, + 257 + ], + "spans": [ + { + "bbox": [ + 66, + 250, + 105, + 257 + ], + "type": "text", + "content": "Question:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 68, + 258, + 118, + 266 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 258, + 118, + 266 + ], + "spans": [ + { + "bbox": [ + 68, + 258, + 118, + 266 + ], + "type": "text", + "content": "{{question}}" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 66, + 274, + 210, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 274, + 210, + 281 + ], + "spans": [ + { + "bbox": [ + 66, + 274, + 210, + 281 + ], + "type": "text", + "content": "Correct answer: {{correct_answer}}" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 66, + 290, + 130, + 298 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 66, + 290, + 130, + 298 + ], + "spans": [ + { + "bbox": [ + 66, + 290, + 130, + 298 + ], + "type": "text", + "content": "Resoning trace:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 68, + 298, + 122, + 306 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 298, + 122, + 306 + ], + "spans": [ + { + "bbox": [ + 68, + 298, + 122, + 306 + ], + "type": "text", + "content": "{{reasoning}}" + } + ] + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 56, + 357, + 291, + 593 + ], + "blocks": [ + { + "bbox": [ + 128, + 341, + 216, + 352 + ], + "lines": [ + { + "bbox": [ + 128, + 341, + 216, + 352 + ], + "spans": [ + { + "bbox": [ + 128, + 341, + 216, + 352 + ], + "type": "text", + "content": "Without image (ablation)" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 56, + 357, + 291, + 593 + ], + "lines": [ + { + "bbox": [ + 56, + 357, + 291, + 593 + ], + "spans": [ + { + "bbox": [ + 56, + 357, + 291, + 593 + ], + "type": "image", + "image_path": "cdadc87db59cf8f1df1a7c0e5bfcee32dc9e6881c48798adceaf6c2b86fd31d5.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 608, + 555, + 631 + ], + "lines": [ + { + "bbox": [ + 55, + 608, + 555, + 631 + ], + "spans": [ + { + "bbox": [ + 55, + 608, + 555, + 631 + ], + "type": "text", + "content": "Figure 24. Reasoning trace categorization with an LLM (GPT-4o): Correct (blue) vs. incorrect (red) answers. Comparison of Claude-Sonnet's responses without (left) and with (right) image access." 
+ } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 291, + 358, + 548, + 583 + ], + "blocks": [ + { + "bbox": [ + 400, + 341, + 441, + 352 + ], + "lines": [ + { + "bbox": [ + 400, + 341, + 441, + 352 + ], + "spans": [ + { + "bbox": [ + 400, + 341, + 441, + 352 + ], + "type": "text", + "content": "With image" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 291, + 358, + 548, + 583 + ], + "lines": [ + { + "bbox": [ + 291, + 358, + 548, + 583 + ], + "spans": [ + { + "bbox": [ + 291, + 358, + 548, + 583 + ], + "type": "image", + "image_path": "e3e98d44525a84452acdb5cce1057f5ed8494b750b0f40a58d254532bc36c58c.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + } + ], + "index": 17 + }, + { + "bbox": [ + 54, + 641, + 555, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 641, + 555, + 689 + ], + "spans": [ + { + "bbox": [ + 54, + 641, + 555, + 689 + ], + "type": "text", + "content": "We apply this system to examine the impact of providing or withholding the image for each question (Fig. 24). While overall accuracy remains comparable across both conditions, the distribution of errors and successful reasoning patterns differs meaningfully. As expected, hallucination errors decrease when the image is available, perception errors are rare when the model lacks visual input, and the proportion of questions that do not require an image remains stable." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 55, + 689, + 555, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 555, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 555, + 714 + ], + "type": "text", + "content": "Most errors arise from questions requiring visual and contextual understanding, highlighting the importance of multimodal reasoning. Overgeneralization errors decrease when the image is provided, as the model is better grounded in the" + } + ] + } + ], + "index": 20 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 732, + 310, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 732, + 310, + 742 + ], + "spans": [ + { + "bbox": [ + 299, + 732, + 310, + 742 + ], + "type": "text", + "content": "33" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 44 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 555, + 108 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 555, + 108 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 555, + 108 + ], + "type": "text", + "content": "question context. However, many questions remain challenging even with visual input, as perception errors increase. This suggests that effectively solving our dataset requires models to both maintain the specifics of the question's context and accurately interpret visual information." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 108, + 555, + 145 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 108, + 555, + 145 + ], + "spans": [ + { + "bbox": [ + 55, + 108, + 555, + 145 + ], + "type": "text", + "content": "Additionally, note that the proportion of questions answered correctly through language shortcuts (e.g., language bias and visual giveaways) remains low in both settings, suggesting the effectiveness of the RefineBot strategy. 
However, addressing weak distractors remains a persistent challenge for specialized datasets." + } + ] + } + ], + "index": 1 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 299, + 732, + 311, + 742 + ], + "type": "text", + "content": "34" + } + ] + } + ], + "index": 2 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 45 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 56, + 69, + 553, + 748 + ], + "blocks": [ + { + "bbox": [ + 56, + 69, + 553, + 748 + ], + "lines": [ + { + "bbox": [ + 56, + 69, + 553, + 748 + ], + "spans": [ + { + "bbox": [ + 56, + 69, + 553, + 748 + ], + "type": "image", + "image_path": "275d230e47e6850b6583e090ad8aa5d087c76d3cbd98916c8c0735c2b33baeb5.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 218, + 761, + 391, + 773 + ], + "lines": [ + { + "bbox": [ + 218, + 761, + 391, + 773 + ], + "spans": [ + { + "bbox": [ + 218, + 761, + 391, + 773 + ], + "type": "text", + "content": "Figure 25. Collage of images from MicroVQA." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + } + ], + "discarded_blocks": [], + "page_size": [ + 612, + 792 + ], + "page_idx": 46 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13415/d6bcf737-8470-4ed5-949a-6d04e53f6bf7_content_list.json b/data/2025/2503_13xxx/2503.13415/d6bcf737-8470-4ed5-949a-6d04e53f6bf7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..51611d5b5df3b777a4ac6d47c47d383379607ac9 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13415/d6bcf737-8470-4ed5-949a-6d04e53f6bf7_content_list.json @@ -0,0 +1,6244 @@ +[ + { + "type": "text", + "text": "Highlights", + "text_level": 1, + "bbox": [ + 104, + 128, + 211, + 148 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "A Comprehensive Survey on Multi-Agent Cooperative Decision-Making: Scenarios, Approaches, Challenges and Perspectives", + "text_level": 1, + "bbox": [ + 102, + 177, + 892, + 206 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Weiqiang Jin, Hongyang Du, Biao Zhao, Xingwu Tian, Bohang Shi, Guang Yang", + "bbox": [ + 102, + 230, + 653, + 247 + ], + "page_idx": 0 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Provides a comprehensive survey of multi-agent decision-making methods.", + "- Analyzes key simulation environments for multi-agent reinforcement learning.", + "- Investigate decision-making approaches, including MARL and large language models.", + "- Identifies challenges and future research directions in multi-agent collaboration.", + "- Reviews real-world applications in transportation, aerial systems, and automation." 
+ ], + "bbox": [ + 119, + 366, + 482, + 824 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2503.13415v1 [cs.MA] 17 Mar 2025", + "bbox": [ + 21, + 298, + 60, + 724 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "A Comprehensive Survey on Multi-Agent Cooperative Decision-Making: Scenarios, Approaches, Challenges and Perspectives", + "text_level": 1, + "bbox": [ + 142, + 139, + 852, + 180 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Weiqiang Jina, Hongyang Du, Biao Zhaoa,\\*, Xingwu Tiana, Bohang Shi a, Guang Yang c,d,e,f,*", + "bbox": [ + 184, + 199, + 811, + 215 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "$^{a}$ School of Information and Communications Engineering, Xi'an Jiaotong University, Innovation Harbour, Xi'an, 710049, Shaanxi, China \n $^{b}$ Department of Electrical and Electronic Engineering, The University of Hong Kong (HKU), Hong Kong, Hong Kong, China \n $^{c}$ Bioengineering Department and Imperial-X, Imperial College London, London, W12 7SL, UK \n $^{d}$ National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK \n $^{e}$ Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK \n $^{f}$ School of Biomedical Engineering & Imaging Sciences, King's College London, London, WC2R 2LS, UK", + "bbox": [ + 124, + 224, + 870, + 293 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 104, + 347, + 171, + 359 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "With the rapid development of artificial intelligence, intelligent decision-making techniques have gradually surpassed human levels in various human-machine competitions, especially in complex multi-agent cooperative task scenarios. Multi-agent cooperative decision-making involves multiple agents working together to complete established tasks and achieve specific objectives. These techniques are widely applicable in real-world scenarios such as autonomous driving, drone navigation, disaster rescue, and simulated military confrontations. This paper begins with a comprehensive survey of the leading simulation environments and platforms used for multi-agent cooperative decision-making. Specifically, we provide an in-depth analysis for these simulation environments from various perspectives, including task formats, reward allocation, and the underlying technologies employed. Subsequently, we provide a comprehensive overview of the mainstream intelligent decision-making approaches, algorithms and models for multi-agent systems (MAS). These approaches can be broadly categorized into five types: rule-based (primarily fuzzy logic), game theory-based, evolutionary algorithms-based, deep multi-agent reinforcement learning (MARL)-based, and large language models (LLMs) reasoning-based. Given the significant advantages of MARL and LLMs-based decision-making methods over the traditional rule, game theory, and evolutionary algorithms, this paper focuses on these multi-agent methods utilizing MARL and LLMs-based techniques. We provide an in-depth discussion of these approaches, highlighting their methodology taxonomies, advantages, and drawbacks. 
Further, several prominent research directions in the future and potential challenges of multi-agent cooperative decision-making are also detailed.", + "bbox": [ + 100, + 367, + 892, + 595 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Keywords:", + "text_level": 1, + "bbox": [ + 102, + 604, + 178, + 617 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Intelligent decision-making, Multi-agent systems, Multi-agent cooperative environments, Multi-agent reinforcement learning, Large language models.", + "bbox": [ + 102, + 618, + 887, + 646 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 102, + 673, + 220, + 686 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "1.1. Research Backgrounds of Multi-Agent Decision-Making", + "text_level": 1, + "bbox": [ + 102, + 705, + 480, + 733 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "With the continuous advancement of science and technology, intelligent decision-making technology has", + "bbox": [ + 102, + 745, + 480, + 772 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "made rapid progress. These technologies have gradually surpassed human capabilities in various human-machine game competitions, even exceeding the top human levels. Over the past few decades, especially following the successful application of Deep Q-Networks (DQN) [1, 2] in the Arita game and the victories of AlphaGo and AlphaZero [3, 4] over top human opponents, these landmark achievements have significantly propelled the advancement of intelligent decision-making research.", + "bbox": [ + 512, + 673, + 890, + 814 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To meet the growing complexity of real-world applications and the increasing demand for more sophisticated, reliable, and efficient intelligent systems, multi", + "bbox": [ + 512, + 828, + 890, + 870 + ], + "page_idx": 1 + }, + { + "type": "page_footnote", + "text": "*Corresponding authors: Biao Zhao and Guang Yang. Email addresses: weiqiangjin@stu.xjtu.edu.cn (Weiqiang Jin), duhy@hku.hk (Hongyang Du), biaozhao@xjtu.edu.cn (Biao Zhao), txw_xjtu@163.com (Xingwu Tian), Bh_567@stu.xjtu.edu.cn (Bohang Shi), g.yang@imperial.ac.uk (Guang Yang)", + "bbox": [ + 102, + 801, + 478, + 870 + ], + "page_idx": 1 + }, + { + "type": "footer", + "text": "Preprint submitted to Information fusion", + "bbox": [ + 102, + 879, + 326, + 891 + ], + "page_idx": 1 + }, + { + "type": "footer", + "text": "March 18, 2025", + "bbox": [ + 801, + 879, + 890, + 890 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/06418d1620b5a8ff60e8838d831282725f05ecef98c24bf5bc1a890f42541a06.jpg", + "image_caption": [ + "Figure 1: An overview of the evolution of scenarios and methods in decision-making from single-agent to multi-agent systems." + ], + "image_footnote": [], + "bbox": [ + 105, + 127, + 895, + 395 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "agent cooperative decision-making has rapidly evolved from simple single-agent scenarios [5, 6, 7, 8]. Multiagent cooperative decision-making is a crucial subfield within machine learning (ML) [9] and artificial intelligence (AI) [10]. 
It involves multiple interacting agents working together to complete established tasks across diverse well-designed dynamic simulated environments and various complex real-world systems.", + "bbox": [ + 100, + 443, + 482, + 556 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "As depicted in Figure 1, the evolution research progress from single-agent to multi-agent decision-making systems, along with methodological comparisons, highlights that this rapidly advancing field is a crucial step toward achieving human-level AI and the Artificial General Intelligence (AGI) age. Multiagent cooperative decision-making has a wide range of practical applications and many fundamental theoretical works across various domains. The service scenarios are extensive, encompassing smart agriculture management [11, 12], intelligent collaborative robots [13, 14, 15, 16], self-driving collaborative obstacle avoidance [17, 18, 19], autonomous navigation [20, 21, 22] as well as joint rescue tasks [12, 23]. Correspondingly, considering the rapid pace of technological advancement and the multifaceted needs of the real world, in this work, we focus on the comprehensive study of multi-agent cooperative decision-making.", + "bbox": [ + 100, + 557, + 482, + 813 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "1.2. Overview of Previous Multi-Agent Surveys", + "text_level": 1, + "bbox": [ + 102, + 825, + 428, + 840 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Concurrent with the fast-paced advancements in multi-agent cooperative decision-making, there has", + "bbox": [ + 102, + 841, + 482, + 870 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "been a marked increase in systematic literature reviews in this domain [24, 6, 8, 25]. These reviews have covered a wide range of topics, from theoretical innovations to practical applications, providing a comprehensive overview of the state-of-the-art.", + "bbox": [ + 512, + 443, + 892, + 513 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Ning et al. [25] provided a comprehensive overview of the evolution, challenges, and applications of multi-agent reinforcement learning (MARL)-based intelligent agents, including its practical implementation aspects. Gronauer et al. [6] provided an overview of recent developments in multi-agent deep reinforcement learning, focusing on training schemes, emergent agent behaviors, and the unique challenges of the multi-agent domain, while also discussing future research directions. Yang et al. [26] explored the utility theory application in AI robotics, focusing on how utility AI models can guide decision-making and cooperation in multi-agent/robot systems. Orr et al. [8] reviewed recent advancements in MARL, particularly its applications in multi-robot systems, while discussing current challenges and potential future applications. Du et al. [24] provided a systematic overview of multiagent deep reinforcement learning for MAS, focusing on its challenges, methodologies, and applications. Pamul et al. [7] provided a comprehensive analysis of the application of MARL in connected and automated vehicles (CAVs), identifying current developments, existing research directions, and challenges. Hernandez-Leal et al. 
[27] provided a comprehensive overview of approaches to addressing opponent-induced non", + "bbox": [ + 512, + 514, + 894, + 871 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 492, + 878, + 505, + 890 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "stationarity in multi-agent learning, categorizing algorithms into a new framework and offering insights into their effectiveness across different environments. The survey by Zhu et al. [28] provided a systematic classification and analysis of MARL systems that incorporate communication, encompassing recent advanced CommMARL research and identifying key dimensions that influence the design and development of these multi-agent systems.", + "bbox": [ + 100, + 131, + 480, + 261 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "1.3. Motivations of the Current Survey", + "text_level": 1, + "bbox": [ + 104, + 274, + 371, + 288 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "However, despite the growing body of work in this field, existing surveys often have noticeable limitations [24, 6, 25, 28]. Specifically, our thorough investigation reveals that most current reviews and surveys share several common and significant significant drawbacks and limitations:", + "bbox": [ + 102, + 292, + 480, + 376 + ], + "page_idx": 3 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Limited Research Scope: Previous literature reviews [27, 28] predominantly remain within the primary framework of reinforcement learning and have not broken through theoretical limitations, resulting in a lack of comprehensive coverage.", + "- Neglect of Environments: Previous literature reviews [29, 6, 30] have largely concentrated on methodological and algorithmic advancements, frequently overlooking the essential role of simulation environments and platforms in multi-agent intelligent decision-making.", + "- Under-emphasis of Project Implementation: Prior surveys [25, 28, 30] often focus on theoretical models and overlook detailed implementation aspects, including code-bases and project architectures. This gap limits readers' ability to fully understand and apply the findings." + ], + "bbox": [ + 119, + 387, + 480, + 648 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To address the aforementioned limitations and challenges, we recognize the need for more systematic and comprehensive reviews in the multi-agent intelligent decision-making field.", + "bbox": [ + 102, + 657, + 480, + 714 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Firstly, current reviews overly emphasize deep reinforcement learning and fail to adequately consider other potentially effective intelligent decision-making methods [24, 5, 7, 28]. Secondly, with the rapid development of large language models (LLMs), their potential in natural language processing, knowledge representation, and complex decision-making has become increasingly apparent. However, current surveys have largely overlooked their integration. Additionally, existing reviews often neglect the critical role of simulation environments in the development of multi-agent systems.", + "bbox": [ + 102, + 714, + 480, + 870 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "However, simulation environments are not merely auxiliary tools but are an integral part of the MAS development and evaluation process. 
The agents' learning and decision-making processes are influenced and constrained by these environments, making it equally important to understand and develop these environments as it is to focus on the algorithms themselves. Finally, the lack of attention to practical implementation details in current reviews has resulted in a disconnect between theory and practice. This survey will delve into the specifics of project implementation, including code structures, system architecture, and the challenges encountered during development, to enhance research reproducibility and facilitate the effective translation of theoretical research into practical applications.", + "bbox": [ + 512, + 131, + 892, + 344 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Building on the motivations outlined earlier, this survey extends beyond the scope of previous reviews, which were often limited to specific areas of discussion. We treat multi-agent environments as equally important components, alongside the methods and techniques, and provide a thorough introduction to the most advanced algorithms and simulation environments. Moreover, we categorize various multi-agent cooperative decision-making methods from a more fundamental implementation perspective. In summary, this survey seeks to provide a more comprehensive and practical framework for research in multi-agent cooperative decision-making, thereby advancing the continuous development of this critical field.", + "bbox": [ + 512, + 346, + 892, + 542 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "1.4. The Survey Overview/Contents Organization", + "text_level": 1, + "bbox": [ + 514, + 557, + 860, + 570 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "As depicted in Figure 2, we have structured the survey to reflect our research approach, with each main and sub-branch corresponding to a specific part: First, in Section 1, we introduce the research background of multi-agent cooperative decision-making, discuss the drawbacks of previous surveys, and outline the organizational structure of this survey. Given that MARL and LLMs-based intelligent decision-making methods demonstrate significant advantages and future potential, our primary attentions are placed on Deep MARL-based and LLMs-based methods due to their superior ability to manage dynamic and uncertain environments. In Section 2, we then delve into mainstream intelligent decision-making approaches, algorithms, and models. We categorize these approaches, with a continued focus on MARL-based and LLMs-based methods, discussing their methodologies, advantages, and limitations. Following this, in Section 3, we provide an in-depth analysis of the leading simulation environments and platforms for multi-agent cooperative decision-making, again focusing on Deep MARL-based and LLMs-based", + "bbox": [ + 512, + 571, + 894, + 871 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 492, + 879, + 504, + 889 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/6fd6b397e7b96d468eedec8a8c9751022174d5ea5d13ed48ce08682d7cec5141.jpg", + "image_caption": [ + "Figure 2: Illustration of our systematic review of multi-agent intelligent decision-making research. Compared to previous reviews, we have incorporated comprehensive introduction and analysis, with each segment corresponding to a specific chapter in the survey." 
+ ], + "image_footnote": [], + "bbox": [ + 105, + 130, + 895, + 437 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "methods. Furthermore, in Section 4, we discuss the practical applications of multi-agent decision-making systems, such as autonomous driving, UAV navigation, and collaborative robotics. Finally, in Sections 5 and 6, we explore the potential challenges and future research directions of multi-agent cooperative decision-making.", + "bbox": [ + 102, + 502, + 480, + 590 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "1.5. How to read this survey?", + "text_level": 1, + "bbox": [ + 104, + 604, + 310, + 618 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "This survey caters to a diverse readership, each with varying levels of expertise and interest in different aspects of multi-agent decision-making systems. To help readers efficiently find the content that interests them, we offer the following guide, providing direction based on different topics:", + "bbox": [ + 102, + 621, + 480, + 706 + ], + "page_idx": 4 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- For those interested in rule (fuzzy logic)-based, game theory-based, and evolutionary algorithm-based decision-making research, please refer to Section 2.2.1, 2.2.2, and 2.2.3. This section provides a comprehensive analysis of the rule and game-based methods in multi-agent systems, detailing their corresponding technological taxonomies, features, and limitations.", + "- For those interested in MARL-based decision-making research, please refer to Section 2.3. This" + ], + "bbox": [ + 119, + 719, + 480, + 870 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "section provides a comprehensive analysis of the deep MARL-based methods in multi-agent systems, detailing their corresponding technological taxonomies, advantages, and limitations.", + "bbox": [ + 546, + 502, + 892, + 560 + ], + "page_idx": 4 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- If you are focused on decision-making based on LLMs, Section 2.4 will offer you an in-depth exploration, with the corresponding technological taxonomies, advantages, and limitations. This part discusses the unique capabilities of LLMs in multiagent environments and their potential applications, especially in reasoning and decision-making.", + "- For readers focused on the well-known simulation environments of MAS, we suggest reading Section 3, which primarily covers an introduction to MARL-based Simulation Environments (Section 3.1) and LLMs Reasoning-based Simulation Environments (Section 3.2).", + "- If your interest lies in the practical applications of multi-agent decision-making systems, Section 4 will be of particular relevance. This section offers a detailed discussion of how these systems are applied across various fields, such as autonomous driving, UAV navigation, and collaborative robotics." 
+ ], + "bbox": [ + 532, + 569, + 892, + 869 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 492, + 879, + 504, + 889 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "- If you are interested in the challenges and problems faced by existing multi-agent decision-making methods, Section 5 provides an in-depth discussion, exploring the limitations of current approaches and unresolved issues in the field, offering insights into these challenges.", + "bbox": [ + 119, + 131, + 480, + 217 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "- Finally, if you wish to learn about future research directions and the prospects for multi-agent decision-making technique, we recommend reading Section 6. This section looks ahead to future research trends and potential breakthroughs, exploring key directions that could drive the field forward.", + "bbox": [ + 121, + 227, + 480, + 325 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "2. Multi-Agent Decision-Making Taxonomies", + "text_level": 1, + "bbox": [ + 102, + 348, + 433, + 363 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "This section discusses the taxonomies of decision-making in multi-agent systems and their related techniques. The multi-agent cooperative decision-making methods can be broadly classified into five categories: rule-based (primarily fuzzy logic), game theory-based, evolutionary algorithms-based methods, MARL-based approaches, and LLMs-based methods [31]. Although these rule-based, game theory-based, and evolutionary algorithms-based solutions demonstrate a degree of effectiveness, they typically rely heavily on pre-designed strategies and assumptions. This dependence limits their adaptability to changing and complex environments and ill-suited for handling highly dynamic and uncertain scenarios. In contrast, DRL-based and LLMs reasoning-based solutions offer more dynamic and flexible approaches, capable of learning and adapting to new strategies on the fly. Therefore, these methods have significant advantages in dealing with dynamic and uncertain environments. Thus, special research attentions are placed on DRL-based and LLMs-based methods due to their significant advantages in handling dynamic and uncertain environments.", + "bbox": [ + 102, + 373, + 480, + 684 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The analysis is conducted from multiple perspectives, including agent interaction dynamics, mainstream paradigms of cooperative decision-making, MARL (multi-agent reinforcement learning), and LLM (large language model)-driven multi-agent systems, aiming to provide a systematic framework and technical foundation for the design and optimization of multi-agent decision-making.", + "bbox": [ + 102, + 686, + 480, + 799 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Specifically, Section 2.1 analyzes agent interaction dynamics in MAS, categorizing them into four typical types: fully cooperative, fully competitive, mixed cooperative-competitive, and self-interested, while discussing their impact on overall system behavior. Subse", + "bbox": [ + 102, + 799, + 480, + 870 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "sequently, Sections 2.2, 2.3, and 2.4 introduce mainstream paradigms of cooperative decision-making, MARL-based decision-making methods, and LLMs-based multi-agent systems, respectively.", + "bbox": [ + 512, + 131, + 892, + 189 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "2.1. 
Agent Interaction Dynamics for Multi-Agent Systems", + "text_level": 1, + "bbox": [ + 514, + 202, + 892, + 230 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In multi-agent systems, the nature of interaction among agents can be categorized into distinct relational dynamics, i.e., agent interaction dynamics, each influencing the overall system behavior and outcomes. These dynamics are critical to understanding and designing intelligent systems where agents operate within shared environments. Below, we detail the primary types of agent relationships:", + "bbox": [ + 512, + 234, + 892, + 348 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "1. Fully Cooperative: In this scenario setting, all agents have aligned objectives, meaning they share identical reward structures and work towards a common goal. The agents operate with complete cooperation, aiming to maximize collective benefits. This relationship is typical in systems where synergy is essential, and the success of one agent directly contributes to the success of others, ensuring mutual reinforcement of strategies and actions.", + "bbox": [ + 527, + 355, + 892, + 483 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "2. Fully Competitive: This relationship is characterized by a zero-sum game dynamic, where the gain of one agent directly translates into the loss of another. Agents are in direct opposition, with their objectives fundamentally misaligned. This is commonly observed in competitive environments such as robotic competitions, where agents are designed to outperform each other, and success is measured relative to the failure or underperformance of others.", + "bbox": [ + 527, + 485, + 892, + 625 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3. Mixed Cooperative and Competitive: In most real-world scenarios, agents may engage in both cooperation and competition simultaneously. This type of relationship is evident in team-based environments like robotic soccer, where agents within the same team cooperate to achieve a shared objective (e.g., scoring goals), but simultaneously compete against agents from opposing teams. The complexity of such systems lies in balancing internal cooperation with external competition, often requiring sophisticated strategies to optimize outcomes at both individual and collective levels.", + "bbox": [ + 527, + 627, + 892, + 797 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4. Self-Interested: In self-interested dynamics, each agent acts primarily to maximize its own utility, with little regard for the impact on others. An agent's actions might incidentally benefit or harm other agents, but these effects are not a concern for", + "bbox": [ + 527, + 799, + 892, + 870 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 492, + 879, + 504, + 889 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "the self-interested agent. 
This relationship is pertinent in scenarios where agents are designed to prioritize personal gain over collective welfare, often leading to outcomes where the overall system efficiency is not necessarily optimized, as individual agents do not account for the potential externalities of their actions on the environment or other agents.", + "bbox": [ + 136, + 131, + 480, + 231 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Overall, these agent interaction dynamics is crucial for the design and analysis of multi-agent systems, as they directly impact the strategies employed by agents and the overall system performance. The complexity of agent interactions in mixed or self-interested relationships often requires advanced coordination mechanisms and incentive structures to manage potential conflicts and ensure desired outcomes.", + "bbox": [ + 102, + 241, + 480, + 353 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "While the overarching concept of agent interaction dynamics holds some value, this survey focuses more on analyzing the characteristics of these methods from a technical and scientific perspective. Therefore, distinguishing relationships is not the primary emphasis of this study. Instead, we will proceed with a more comprehensive taxonomy of Multi-Agent Decision-making Systems.", + "bbox": [ + 102, + 355, + 480, + 467 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "2.2. Mainstream Paradigms of Multi-Agent Cooperative Decision-Making", + "text_level": 1, + "bbox": [ + 102, + 482, + 480, + 511 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "In multi-agent cooperative decision-making, several mainstream paradigms exist, each leveraging different techniques to tackle challenges associated with coordination, learning, adaptability, and optimization among autonomous agents. These paradigms utilize diverse approaches, including rule-based (primarily fuzzy logic) systems [32, 33, 34, 35, 36, 37], game theory-based [38, 39, 40, 41, 42, 43], evolutionary algorithms-based [44, 45, 46, 47, 48, 49], MARL-based [50, 51, 52, 53, 54, 55, 56], and LLMs-based [57, 30, 58, 59] multiagent decision-making systems. Each of these methods has distinct strengths and applications, depending on the problem context and the complexity of interactions between agents. For a comprehensive overview, please refer to Table 1, which provides a detailed classification of these paradigms.", + "bbox": [ + 102, + 514, + 482, + 741 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "2.2.1. Rule-Based (Primarily Fuzzy Logic)", + "text_level": 1, + "bbox": [ + 102, + 756, + 405, + 769 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Rule-based decision-making, particularly fuzzy logic, has been widely adopted in multi-agent systems (MAS) due to its ability to handle uncertainty, imprecise data, and dynamic environments [117, 118, 36, 119]. Fuzzy logic enables agents to make adaptive, human-like decisions by mapping inputs to linguistic rules rather than strict mathematical models.", + "bbox": [ + 102, + 771, + 480, + 869 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Miki et al. [32] presented a rule-based multi-agent control algorithm that utilizes local information instead of absolute coordinates, making it more practical for real-world applications. Charaf et al. [120] introduced a rule-based multi-agent system to address coordination challenges, such as controllability and observability, in distributed testing environments. Yarahmadi et al. 
[33] reviewed the applications of multi-agent systems in Cyber-Physical Systems (CPS) and the Internet of Things (IoT), proposing a combination of learning and rule-based reasoning to improve decision-making in MAS. Marti et al. [117] presented an expert rule-based system using multi-agent technology to support traffic management during weather-related issues. Daeichian et al. [121] used fuzzy logic in combination with Q-learning and game theory to control traffic lights autonomously. Wu et al. [34] introduced a fuzzy-theoretic game framework that integrates fuzzy logic with game theory to handle uncertainty in utility values during multi-agent decision making. Nekhai et al. [118] devised a cybersecurity management model for agricultural enterprises using a multi-agent system (MAS) based on fuzzy logical reasoning. Ramezani et al. [119] applied fuzzy logic to multi-agent decision-making in soccer robot teams, combining cooperative and non-cooperative game strategies. Zhang et al. [35] introduced a new online method for optimal coordination control in multi-agent differential games, combining fuzzy logic, and adaptive dynamic programming. Ren et al. [36] presented a fuzzy logic-based approach for partner selection in multi-agent systems, emphasizing flexibility and adaptability in dynamic environments. Gu et al. [37] introduced a cooperative reinforcement learning algorithm for multi-agent systems using a leader-follower framework, modeled as a Stackelberg game. Schwartz et al. [60] introduced a multi-agent fuzzy actor-critic learning algorithm for differential games. Harmati et al. [61] proposed a game-theoretic model for coordinating multiple robots in target tracking, using a semi-cooperative Stackelberg equilibrium and a fuzzy inference system for high-level cost tuning. Khuen et al. [62] introduced an Adaptive Fuzzy Logic (AFL) approach for multi-agent systems with negotiation capabilities, focusing on resource allocation. Yan et al. [63] proposed a graphical game-based adaptive fuzzy optimal bipartite containment control scheme for high-order nonlinear multi-agent systems (MASs). Vicerra et al. [64] proposed a multi-agent robot system using a pure fuzzy logic-based artificial intelligence model. Gu et al. [65] presented a fuzzy logic-based policy gradient multi-agent reinforcement learning algorithm for leader-follower systems, where fuzzy logic", + "bbox": [ + 514, + 131, + 892, + 871 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 492, + 879, + 504, + 889 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/ca05f64b084c96aff84be8b6b59c5b56fc41a776606092b74e4f630b51e1bd03.jpg", + "table_caption": [ + "Table 1: Representative Methods in Mainstream Paradigms of Multi-Agent Cooperative Decision-Making." + ], + "table_footnote": [], + "table_body": "
<table><tr><td>Paradigm</td><td>Representative Methods and Key References</td></tr>
<tr><td>Rule-Based (Primarily Fuzzy Logic)</td><td>Miki et al. [32], Yarahmadi et al. [33], Wu et al. [34], Zhang et al. [35], Ren et al. [36], Gu et al. [37], Schwartz et al. [60], Harmati et al. [61], Khuen et al. [62], Yan et al. [63], Vicerra et al. [64], Gu et al. [65], Maruyama et al. [66], Peng et al. [67], Yang et al. [68]</td></tr>
<tr><td>Game Theory-based</td><td>Wang et al. [38], Guo et al. [39], Schwung et al. [40], Wang et al. [41], Lin et al. [42], Wang et al. [43], Wang et al. [69], Lanctot et al. [70], Guo et al. [71], Zhang et al. [72], Kong et al. [73], Wang et al. [74], Dong et al. [75], Nguyen et al. [76], Schwung et al. [40], Khan et al. [77]</td></tr>
<tr><td>Evolutionary Algorithms-based</td><td>Liu et al. [45], Xu et al. [78], Daan et al. [46], Franciszek et al. [79], Larry et al. [44], Daan et al. [47], Liu et al. [80], Yuan et al. [48], Dong et al. [81], Chen et al. [82], Zhang et al. [49]</td></tr>
<tr><td>MARL-based</td><td>Wai et al. [83], Hu et al. [84], Son et al. [53], Yu et al. [52], Rashid et al. [50], Rashid et al. [54], Sunehag et al. [85], Huang et al. [15], Xu et al. [58], Yun et al. [86], Mao et al. [87], Kraemer et al. [51], Kouzeghar et al. [88], Gao et al. [89], Liu et al. [19], Qi et al. [90], Vinyals et al. [91], Lu et al. [18], Chu et al. [92], et al. [20], Kurach et al. [55], Lv et al. [93], Radac et al. [94], Wang et al. [56], Liu et al. [95]</td></tr>
<tr><td>LLMs-based</td><td>Mordatch et al. [59], Zhang et al. [96], Xu et al. [58], Li et al. [57], Wang et al. [30], Zhao et al. [10], Hou et al. [97], Puig et al. [98, 99], Gao et al. [100], Xiao et al. [101], Wang et al. [102], Wu et al. [103], Wen et al. [104], Chen et al. [105], Liu et al. [106], Chen et al. [107], Hong et al. [108, 109], XAgent Team [110], Wang et al. [111, 112], Zheng et al. [113], Zhang et al. [114, 115], Cao et al. [116]</td></tr></table>
", + "bbox": [ + 132, + 152, + 863, + 458 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "controllers act as policies. Maruyama et al. [66] extended the classical framework for reasoning about distributed knowledge, incorporating fuzzy logic to handle uncertainty and degrees of certainty within multi-agent systems. Peng et al. [67] proposed a two-layer coordination model for multi-agent systems using fuzzy reinforcement learning. Yang et al. [68] presented a multiagent reinforcement learning algorithm with fuzzy policy to address control challenges in cooperative multi-agent systems, particularly for autonomous robotic formations.", + "bbox": [ + 100, + 482, + 480, + 636 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Overall, fuzzy logic remains a foundational approach for rule-based decision-making in MAS, offering interpretability and robustness in uncertain environments. In the future, fuzzy logic will be further integrated with LLMs, hierarchical decision architectures, and multiagent planning, enabling more precise and adaptive decision-making in complex real-world scenarios.", + "bbox": [ + 102, + 639, + 480, + 737 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "2.2.2. Game theory-based", + "text_level": 1, + "bbox": [ + 102, + 755, + 289, + 769 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Game theory provides a structured framework for analyzing strategic interactions in multi-agent systems. It enables agents to make rational decisions in cooperative, competitive, or mixed scenarios through equilibrium-based optimization [5, 29]. Traditional methods such as Nash equilibrium and Stackelberg games form the foundation, while modern approaches", + "bbox": [ + 100, + 771, + 480, + 870 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "integrate reinforcement learning and Bayesian inference to enhance adaptability in dynamic environments.", + "bbox": [ + 512, + 482, + 890, + 510 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Wang et al. [38] provided a broad discussion on game-theoretic approaches in multi-agent systems, covering cooperative and non-cooperative scenarios. Guo et al. [39] applied game theory to multi-agent path planning, leveraging Nash equilibrium to optimize navigation and obstacle avoidance. Zhang et al. [72] developed a distributed control algorithm that ensures optimal coverage while maintaining network connectivity.", + "bbox": [ + 512, + 512, + 892, + 626 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Beyond fundamental decision-making, game theory has been applied in communication networks and energy systems. Wang et al. [41] utilized game-theoretic learning to enhance resource allocation in wireless networks while countering adversarial actions like jamming. Lin et al. [42] introduced potential game theory to optimize distributed energy management in microgrids, where agents autonomously coordinate power distribution. Dong et al. [75] further extended this approach using a hierarchical Stackelberg model for energy trading, balancing incentives between microgrids and individual agents.", + "bbox": [ + 512, + 627, + 892, + 797 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Incorporating machine learning with game theory has also led to advances in multi-agent optimization. Schwung et al. [40] combined potential game theory with reinforcement learning for adaptive production scheduling, while Wang et al. 
[74] designed a", + "bbox": [ + 512, + 799, + 892, + 870 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 492, + 879, + 504, + 889 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Nash equilibrium-based fault-tolerant control strategy for multi-agent systems. Additionally, game-theoretic methods have been explored for distributed computing, as shown by Khan et al. [77], who developed a replica placement strategy to minimize data access delays in distributed systems.", + "bbox": [ + 102, + 131, + 480, + 217 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Overall, game theory remains a cornerstone of multiagent decision-making, offering well-defined theoretical guarantees while enabling dynamic adaptation through hybrid approaches. Future research will likely focus on integrating game theory with deep learning and large language models to enhance strategic reasoning in high-dimensional, uncertain environments.", + "bbox": [ + 102, + 218, + 480, + 317 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "2.2.3. Evolutionary Algorithms-based", + "text_level": 1, + "bbox": [ + 102, + 338, + 371, + 354 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Evolutionary algorithms (EAs) provide a bio-inspired approach to optimization in multi-agent systems by leveraging principles such as natural selection, mutation, and recombination [79, 47, 81]. By allowing agents to evolve their strategies iteratively, EAs are particularly effective for problems requiring continuous learning, large-scale coordination, and self-organized behavior.", + "bbox": [ + 102, + 356, + 480, + 468 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Liu et al. [45] introduced the Multi-Agent Genetic Algorithm (MAGA), where agents interact through competition and cooperation to optimize global solutions. Xu et al. [78] extended this idea to hardware-based multi-agent systems, using nanoclusters as physical agents to achieve large-scale parallel computation. Daan et al. [46] explored the role of evolutionary strategies in dynamic environments such as financial markets, smart grids, and robotics, demonstrating how adaptive algorithms can handle real-world uncertainties.", + "bbox": [ + 102, + 470, + 480, + 611 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Franciszek et al. [79] proposed a self-optimization model integrating cellular automata and game theory, simulating competitive evolutionary interactions among agents. Larry et al. [44] analyzed the trade-offs between mutation and recombination, showing that mutation can sometimes outperform traditional recombination strategies in evolutionary computing. To further enhance adaptability, Daan et al. [47] introduced Deep Neuroevolution (DNE), applying coevolutionary techniques to complex multi-agent scenarios, including Atari games.", + "bbox": [ + 102, + 613, + 480, + 769 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Recent studies have focused on scaling evolutionary learning to larger agent populations. Liu et al. [80] developed Evolutionary Reinforcement Learning (ERL), a scalable approach that partitions learning into multiple stages, ensuring better adaptability in multiagent environments. Yuan et al. 
[48] introduced EvoAgent, a framework that extends LLMs-based au", + "bbox": [ + 102, + 771, + 480, + 870 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "tonomous agents into multi-agent systems using evolutionary techniques like mutation and selection.", + "bbox": [ + 512, + 131, + 890, + 159 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Evolutionary game theory has also been explored to improve cooperative behavior. Dong et al. [81] designed a three-strategy decision model, where agents adopt conservative or adaptive strategies based on their interactions with neighbors, fostering long-term cooperation. Chen et al. [82] proposed a kinetic decision-making model grounded in rarefied gas dynamics, offering a new perspective on agent evolution using the Boltzmann equation. Zhang et al. [49] applied evolutionary game theory to policy optimization, analyzing cooperation strategies among governments, enterprises, and farmers in agricultural water conservation projects.", + "bbox": [ + 512, + 161, + 892, + 330 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Overall, evolutionary algorithms provide a robust framework for decentralized decision-making, allowing agents to self-improve and adapt in uncertain environments. In the future, evolutionary algorithms will be further integrated with deep learning, hierarchical evolution, and large-scale multi-agent coordination, enabling more adaptive, autonomous MAS.", + "bbox": [ + 512, + 331, + 890, + 430 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "2.2.4. MARL-based Multi-Agent Systems", + "text_level": 1, + "bbox": [ + 514, + 443, + 806, + 457 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Before introducing the MARL-based multi-agent systems (MAS), we provide a detailed discussion in Appendix A on the key technological comparisons and methodological principles of both DRL-based single-agent systems and MARL-based MAS. This helps readers build the necessary background knowledge for better understanding the following discussions.", + "bbox": [ + 512, + 458, + 890, + 558 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Multi-Agent Reinforcement Learning offers a structured framework to tackle decision-making in MAS, where autonomous agents interact with each other and a shared environment. The MAS research in MARL is broadly divided into three paradigms: Centralized Training with Centralized Execution (CTCE) [122, 123], Decentralized Training with Decentralized Execution (DTDE) [122], and Centralized Training with Decentralized Execution (CTDE) [51, 124]. Each paradigm is designed to address specific challenges such as coordination, scalability, and policy optimization, providing tailored solutions for diverse scenarios.", + "bbox": [ + 512, + 558, + 892, + 741 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Centralized Training with Centralized Execution The CTCE paradigm [125, 123] relies on a central controller that governs all agents by aggregating their observations, actions, and rewards to make joint decisions. While this paradigm enables high levels of coordination, its scalability is limited in large-scale systems. Multi-Agent DQN (MADQN) [126, 1, 2] is a representative method, employing parameter-sharing mechanisms to handle cooperative tasks effectively. 
However,", + "bbox": [ + 512, + 743, + 892, + 870 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 492, + 879, + 504, + 889 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "its reliance on centralized control restricts its applicability in dynamic environments with numerous agents.", + "bbox": [ + 102, + 131, + 480, + 161 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Decentralized Training with Decentralized Execution The DTDE paradigm [122] emphasizes independent learning and execution, where each agent interacts with the environment individually and updates its policy based solely on local observations and rewards. This paradigm excels in scalability and robustness, especially in scenarios with limited communication. Notable methods include Independent Q-Learning (IQL) [127, 128] and Decentralized REINFORCE [129], which allow agents to learn autonomously. Despite its advantages, DTDE faces challenges such as learning nonstationarity, where the environment changes as other agents adapt, and difficulty in addressing the credit assignment problem in cooperative settings.", + "bbox": [ + 102, + 167, + 482, + 367 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Centralized Training with Decentralized Execution The CTDE paradigm [51, 130, 131] combines the strengths of centralized training and decentralized execution, making it the most prominent paradigm in MARL research. During training, a central controller aggregates information from all agents to optimize their policies, but during execution, each agent operates independently based on its own observations. CTDE addresses key challenges like non-stationarity and scalability, with methods such as Value Decomposition Networks (VDN) [85] and QMIX [50, 52] for value-based learning, Multi-Agent Deep Deterministic Policy Gradient (MADDPG) [132] for actor-critic frameworks, and Multi-Agent Proximal Policy Optimization (MAPPO) [133] for policy gradient optimization. These approaches are widely applied in complex environments like StarCraft II [134, 135] and the Multi-Agent Particle Environment (MPE) [136, 137].", + "bbox": [ + 102, + 373, + 482, + 629 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Communication-based MARL Algorithms Additionally, communication-based MARL algorithms have emerged to enhance coordination by enabling agents to share critical information during training and execution. Examples include Attentional Communication (ATOC) [138] and Targeted Multi-Agent Communication (TarMAC) [139], which use advanced mechanisms to improve the efficiency and effectiveness of inter-agent communication in cooperative tasks.", + "bbox": [ + 102, + 636, + 482, + 764 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "By structuring MARL methods within these paradigms, researchers provide a clear framework for addressing the diverse challenges of multi-agent decision-making. From autonomous driving fleets to resource allocation systems, MARL continues to push the boundaries of what distributed intelligent systems can achieve [124].", + "bbox": [ + 102, + 771, + 482, + 870 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "2.2.5. LLMs-based Multi-Agent Systems", + "text_level": 1, + "bbox": [ + 514, + 131, + 801, + 146 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Although LLMs like GPT [140, 141, 142], Llama [143, 144], and Gemini [145] support very long input contexts, their ability to understand complex inputs still varies. 
In this context, multi-agent collaboration optimizes task execution through role assignment, enabling better performance through collaboration among multiple agents compared to a single agent. Each agent has an independent workflow, memory, and can seek help from other agents when necessary. LLMs-based Multi-Agent Systems represent a relatively new multi-agent decision-making model that leverages the powerful capabilities of language models, to enhance communication and collaboration between autonomous agents. In an LLMs-based multi-agent system, agents communicate via natural language or symbolic representations, breaking down complex tasks into smaller, more manageable subtasks. One important feature of LLMs-based systems is the hierarchical organization of agents, typically consisting of two levels [30, 116]:", + "bbox": [ + 512, + 149, + 892, + 420 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "1) Global planning agents, responsible for high-level decisions such as task decomposition, resource allocation, and overall strategy management.", + "bbox": [ + 512, + 422, + 892, + 464 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "2) Local execution agents, which are responsible for executing specific subtasks and providing feedback to the global planning agent. These agents are generally more focused on local tasks but communicate progress and challenges with the global level for adjustments. This decomposition makes distributed problem solving possible, with agents sharing information, strategies, and goals through language, thus advancing task execution together.", + "bbox": [ + 512, + 467, + 892, + 595 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "For example, frameworks like AutoGen [103, 146, 107], Crew AI [114, 115], and LangGraph [111, 112] provide rich tools for building multi-agent solutions, supporting efficient cooperation and interaction between agents. Through these frameworks, developers can build virtual teams that leverage the strengths of different agents in distributed tasks. Additionally, LLMs-based multi-agent systems possess adaptive re-planning capabilities, enabling them to adjust in dynamic environments. When agents encounter changes or new information, they can quickly update strategies or re-plan tasks using language models, ensuring the system remains aligned with changing goals.", + "bbox": [ + 512, + 598, + 892, + 783 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Firstly, LLMs-based multi-agent environments have emerged as pivotal platforms for advancing research in multi-agent collaboration, reasoning, and task execution. For instance, environments such as Three-DWorld Multi-Agent Transport (TDW-MAT) [147, 148], Communicative Watch-And-Help (C-WAH) [99],", + "bbox": [ + 512, + 785, + 892, + 870 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 492, + 879, + 504, + 889 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Cuisineworld [149], and AgentScope [100] offer diverse settings for evaluating and enhancing multi-agent systems in various contexts, from household chores to gaming interactions and beyond. For instance, MindAgent [149] is a novel infrastructure for evaluating planning and coordination capabilities in gaming interactions, leveraging large foundation models (LFMs) to coordinate multi-agent systems, collaborate with human players. 
Communicative Watch-And-Help (C-WAH) [98, 99] is a realistic multi-agent simulation environment and an extension of the Watch-And-Help Challenge platform, VirtualHome-Social. AgentScope [100] is a robust and flexible multi-agent platform designed to empower developers in building advanced multi-agent systems by leveraging the potential of LLMs.", + "bbox": [ + 100, + 131, + 480, + 344 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Meanwhile, LLMs-based multi-agent systems have broad applications and great prospects [30, 116]. They can collaborate in robotic teams to perform complex tasks, such as product assembly or joint research, ensuring seamless interaction and cooperation [58, 59, 57]. In autonomous driving, LLMs help vehicles communicate, sharing real-time data and navigation strategies to achieve coordinated actions. Moreover, LLMs can support agents (such as drones) in disaster response, transmitting critical information to help systems efficiently respond to crises. Wu et al. [103] proposed AutoGen, an open-source framework for developing next-generation LLM applications through multi-agent conversations, allowing customizable agent interactions and integration of LLMs, human inputs, and tools. Xiao et al. [101] introduced Chain-of-Experts (CoE), a multi-agent framework that enhances reasoning in complex operations research (OR) problems using LLMs, with domain-specific roles and a conductor for guidance. Chen et al. [105] presented AgentVerse, a multi-agent framework inspired by human group dynamics, dynamically adjusting agent roles and composition to enhance complex task-solving across various domains. Chen et al. [107] developed AutoAgents, a framework that adaptively generates and coordinates multiple specialized agents for efficient task completion. Liu et al. [106] proposed Dynamic LLM-Agent Network (DyLAN), a framework that enhances LLM-agent collaboration through dynamic interactions based on task requirements. Zhang et al. [96] introduced CoELA, a Cooperative Embodied Language Agent framework that leverages LLMs to enhance multi-agent cooperation in complex, decentralized environments. Hong et al. [108, 109] proposed MetaGPT, a meta-programming framework that enhances LLMs-based multi-agent system collaboration using Standard Operating Procedures (SOPs). XAgent Team [110] developed XAgent, an", + "bbox": [ + 100, + 344, + 482, + 870 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "open-source, LLM-driven autonomous agent framework for solving complex tasks using a dual-loop architecture for task planning and execution. Zheng et al. [113] introduced PlanAgent, a closed-loop motion planning framework for autonomous driving using multi-modal LLMs to generate hierarchical driving commands. Wang et al. [111, 112] developed LangGraph, a library for building stateful, multi-actor applications with LLMs, offering fine-grained control over workflows and state management. Zhang et al. [114, 115] introduced CrewAI, an open-source framework for coordinating AI agents in role-playing and autonomous operations, with a modular design for efficient collaboration. Hou et al. [97] proposed CoAct, a hierarchical multi-agent system leveraging LLMs for collaborative task execution. 
It features a global planning agent for task decomposition and strategy management, and a local execution agent for subtask implementation, feedback collection, and adaptive replanning, ensuring alignment with overarching goals.", + "bbox": [ + 512, + 131, + 894, + 416 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "Despite the strong capabilities of LLMs in small to medium-sized multi-agent systems, scalability remains an open issue, particularly in maintaining coherent communication between large numbers of agents in large environments. As the number of agents increases, the complexity of coordinating their behaviors through language also intensifies. Finding a balance between agent autonomy and effective collaboration is a significant challenge. Additionally, LLMs are often seen as black-box models, meaning understanding the reasoning process behind an agent's decision-making can be difficult. The lack of transparency poses challenges for trust and debugging.", + "bbox": [ + 512, + 416, + 894, + 600 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "In summary, LLMs-based multi-agent systems hold great potential in a variety of applications, offering an advanced way to model and solve complex decision-making problems that require high levels of coordination, adaptability, and communication between agents. By optimizing task decomposition, collaboration, and feedback mechanisms, LLMs bring unprecedented efficiency and flexibility to multi-agent systems.", + "bbox": [ + 512, + 600, + 892, + 715 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "2.3. MARL-based Multi-Agent Decision-Making Tax-onomies", + "text_level": 1, + "bbox": [ + 514, + 726, + 890, + 753 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "In multi-agent systems, where multiple autonomous agents interact with a shared environment and often with each other, the complexity of decision-making increases significantly. To achieve optimal performance, agents need to learn not only how to act individually but also how to coordinate with others. One of the central challenges in MARL-based multi-agent systems is determining how much information should be", + "bbox": [ + 512, + 757, + 894, + 870 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 490, + 878, + 509, + 890 + ], + "page_idx": 10 + }, + { + "type": "text", + "text": "shared among agents during different phases of learning and deployment. The MARL research is typically structured into three primary paradigms: CTCE [150, 151, 152, 126, 123], DTDE [122, 153], and CTDE [124, 130, 51]. As illustrated in Figure 3, we will next explain the principles and differences of the three methods in conjunction with this conceptual framework diagram.", + "bbox": [ + 102, + 131, + 480, + 247 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "2.3.1. Centralized Training with Decentralized Execution (CTDE)", + "text_level": 1, + "bbox": [ + 102, + 268, + 480, + 296 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "As shown in the left of Figure 3, CTDE is a hybrid MARL approach that combines the strengths of both centralized and decentralized systems [124]. In CTDE, each agent possesses its own policy network, which is trained under the guidance of a central controller. 
This approach is characterized by a two-phase process: centralized training followed by decentralized execution.", + "bbox": [ + 102, + 298, + 480, + 399 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Centralized Training (Phase 1): During the training phase, the central controller collects data from all agents, including their observations, actions, and rewards. This centralized data aggregation allows the controller to oversee the learning process and facilitate the training of each agent's policy network.", + "2. Decentralized Execution (Phase 2): Once the training is complete, the central controller's involvement ceases, and each agent operates independently. At execution, agents make decisions based on their own observations using their trained policy networks." + ], + "bbox": [ + 115, + 413, + 480, + 601 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "In some communication-constrained scenarios, agents often cannot share or fully share their observations of the environment. Instead, they must make decisions independently based on their own local observations and policies, which limits the applicability of fully centralized methods. To overcome this challenge, Kraemer et al. [124, 51] proposed the CTDE learning paradigm. The CTDE agents have access to global environmental state information and the observations of other agents during the training phase, allowing them to learn a joint policy together. However, during the execution phase, each agent relies solely on its own observations and the trained policy to make independent decisions. It combines the advantages of fully decentralized and fully centralized methods, effectively mitigating issues such as learning non-stationarity and the curse of dimensionality, making it the dominant paradigm in current MARL solutions.", + "bbox": [ + 102, + 615, + 482, + 870 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Under CTDE, MARL algorithms can primarily be categorized into three types based on their technical implementations: value function decomposition-based algorithms, actor-critic-based algorithms, and algorithms based on policy gradient methods, such as proximal policy optimization (PPO).", + "bbox": [ + 512, + 131, + 892, + 217 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "1. Value Decomposition-based Algorithms Value decomposition-based algorithms mainly address the challenge of estimating the joint state-action value function (Q-value) in multi-agent systems, which is difficult due to the high dimensionality of the joint action space. Instead of directly estimating this joint value function, these algorithms decompose it into more manageable individual state-action value functions (Q-value) for each agent. During execution, each agent selects its action based on its own value function. In training, the joint value function is computed from individual value functions, and the temporal difference error of the joint value guides the learning of the individual functions. A key principle these algorithms must satisfy is the Individual-Global-Max (IGM) principle, ensuring that the actions maximizing the joint value are consistent with those maximizing individual values. 
Different algorithms use various methods to approximate or satisfy the IGM principle.", + "bbox": [ + 512, + 217, + 892, + 487 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "Value Decomposition Networks (VDN) [85] is one of the earliest value decomposition-based algorithms in CTDE-based MARL models. VDN simplifies the estimation of the joint state-action value function by assuming that it can be represented as the sum of the individual state-action value functions of all agents. It means that the joint value function is obtained by simply adding up the individual value functions, which does not take into account the varying contributions of each agent's Q-value. However, the assumption made by VDN is a sufficient but not necessary condition for satisfying the IGM principle, which can limit its applicability. Additionally, VDN does not utilize global state information during training, further restricting its effectiveness in more complex environments.", + "bbox": [ + 512, + 487, + 892, + 699 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "To address this issue, Rashid et al. [50] proposed the QMIX algorithm within the CTDE paradigm. QMIX assumes a monotonic nonlinear relationship between the joint state-action value function and the individual state-action value functions of agents. To implement this, QMIX introduces a mixing network that computes the joint state-action value function based on the individual Q-values of all agents. This mixing network is designed with non-negative parameters to ensure that the monotonicity assumption is met. QMIX has been successfully applied in various scenarios and is considered one of the most successful value decomposition al", + "bbox": [ + 512, + 699, + 892, + 870 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 490, + 879, + 507, + 889 + ], + "page_idx": 11 + }, + { + "type": "image", + "img_path": "images/a7c702553a9d5fa72fd71c9d66c7618d6f75fff815db9d11a1cdaba21c0183ca.jpg", + "image_caption": [ + "Figure 3: The paradigms visualization of CTDE (left), DTDE (centre), and CTCE (right), consisting of three crucial elements: agent (i.e., algorithm or model), environment, central controller (Optional)." + ], + "image_footnote": [], + "bbox": [ + 107, + 139, + 359, + 293 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/0bca6a2597b2b0575317349d7a3b6f6c565e49cb2666036c98971bd5711bbe0f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 369, + 139, + 615, + 292 + ], + "page_idx": 12 + }, + { + "type": "image", + "img_path": "images/4524d0bfb9ac0487743344e13bb71efc4a8b1faea35f3507db44ab1a20d4252d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 618, + 139, + 890, + 292 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "gorithms to date. By enforcing a monotonic relationship between the joint action Q-values and individual Q-values, QMIX simplifies the policy decomposition process, facilitating decentralized decision-making. However, the monotonicity assumption, while sufficient for ensuring the Individual-Global-Max (IGM) principle, is not a necessary condition. 
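The two factorizations just described can be contrasted with a minimal PyTorch sketch (class and variable names are illustrative, not the reference implementations): VDN simply sums the chosen-action values, while a QMIX-style mixer generates its weights from the global state with hypernetworks and forces them non-negative, so that $\partial Q_{tot}/\partial Q_i \geq 0$ holds by construction.

```python
import torch
import torch.nn as nn

def vdn_mix(agent_qs: torch.Tensor) -> torch.Tensor:
    """VDN: the joint value is the plain sum of per-agent chosen-action Q-values.
    agent_qs: (batch, n_agents)."""
    return agent_qs.sum(dim=1, keepdim=True)

class QMixStyleMixer(nn.Module):
    """QMIX-style monotonic mixer: mixing weights are produced from the global
    state by hypernetworks and passed through abs(), so the joint value is
    monotone in every individual Q-value."""
    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(nn.Linear(state_dim, embed_dim), nn.ReLU(),
                                      nn.Linear(embed_dim, 1))

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents); state: (batch, state_dim)
        b = agent_qs.size(0)
        w1 = torch.abs(self.hyper_w1(state)).view(b, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(b, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(b, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(b, 1, 1)
        q_tot = torch.bmm(hidden, w2) + b2          # (batch, 1, 1)
        return q_tot.view(b, 1)
```

The non-negativity constraint is what enforces monotonicity; it is sufficient for IGM but narrows the family of joint value functions the mixer can represent.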
This limitation restricts the algorithm's applicability in situations where an agent's optimal action depends on the actions of other agents.", + "bbox": [ + 100, + 356, + 480, + 483 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Weighted QMIX [54] builds upon QMIX and addresses this limitation by introducing a novel weighting mechanism during the projection of Q-values, which is widely used for cooperative MARL scenarios. In QMIX, the projection of Q-learning targets into the representable space is done with equal weighting across all joint actions, which can lead to suboptimal policy representations, even if the true optimal Q-values $(\\mathbf{Q}^{*})$ are known. To overcome this, Weighted QMIX introduces two weighting schemes-Centrally-Weighted (CW) QMIX and Optimistically-Weighted (OW) QMIX-that place greater emphasis on the better joint actions during this projection process. The weighting schemes ensure that the correct maximal action is recovered for any set of joint action Q-values, effectively improving the algorithm's ability to learn optimal policies. These schemes in Weighted QMIX enhances the representational capacity of QMIX, demonstrating improved results on both predator-prey scenarios of Multi-Agent Particle Environment (MPE) [137] and the challenging StarCraft II benchmarks [154, 91, 134].", + "bbox": [ + 100, + 485, + 482, + 784 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Since then, numerous methods building on value function decomposition have been developed. QPLEX [56] introduces a novel duplex dueling network architecture for multi-agent Q-learning, designed to nonlinearly decompose the joint state-action value function while embedding the IGM principle directly into the", + "bbox": [ + 102, + 785, + 482, + 870 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "network structure. Fast-QMIX [52] enhances the original Q-MIX by dynamically assigning virtual weighted Q-values with an additional network, improving convergence speed, stability, and overall performance in cooperative multi-agent scenarios. QTRAN [53] introduces a more flexible factorization method that overcomes the structural limitations of Q-MIX, where the joint Q-value is constrained to be a monotonic function of the individual Q-values, thereby imposing a specific structural form on the factorization. Specifically, QTRAN introduces a necessary and sufficient condition for the IGM principle and incorporates two additional loss terms into the loss function to constrain the training of individual Q-value functions, ensuring they satisfy this IGM principle.", + "bbox": [ + 512, + 356, + 892, + 569 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "2. Actor-Critic-based Algorithms: Actor-Critic-based algorithms [132, 60, 155] represent a foundational class of methods within the CTDE paradigm, offering a flexible and effective approach for tackling the challenges of multi-agent environments. These algorithms combine the strengths of policy optimization (actor) with value estimation (critic), allowing agents to learn robust and adaptive strategies in both cooperative and competitive settings. By leveraging a centralized critic during training, Actor-Critic-based methods [136, 137, 92] address key issues such as environmental non-stationarity and credit assignment, enabling effective policy optimization in dynamic and complex multiagent scenarios. 
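The division of labor behind these methods can be made concrete with a small PyTorch sketch (names and shapes are illustrative, not any particular paper's implementation): each actor conditions only on its local observation, while the critic used during training consumes the observations and actions of all agents.

```python
import torch
import torch.nn as nn

class LocalActor(nn.Module):
    """Decentralized actor: maps an agent's local observation to action logits.
    Used both during training and at execution time."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, act_dim))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

class CentralizedCritic(nn.Module):
    """Centralized critic used only during training: it sees the concatenated
    observations and actions of all agents, which removes the non-stationarity
    an individual agent would otherwise face."""
    def __init__(self, n_agents: int, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        joint_dim = n_agents * (obs_dim + act_dim)
        self.net = nn.Sequential(nn.Linear(joint_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, all_obs: torch.Tensor, all_actions: torch.Tensor) -> torch.Tensor:
        # all_obs: (batch, n_agents, obs_dim); all_actions: (batch, n_agents, act_dim)
        joint = torch.cat([all_obs.flatten(1), all_actions.flatten(1)], dim=-1)
        return self.net(joint)  # (batch, 1) joint state-action value
```

At execution time only the actors are kept, so decision-making remains fully decentralized.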
Below, we discuss several prominent Actor-Critic-based approaches and their contributions to advancing MARL.", + "bbox": [ + 512, + 571, + 894, + 797 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "MADDPG [132] is a typical Actor-Critic-based CTDE approach specifically designed to address the challenges of multi-agent environments, where agents engage in both cooperative and competitive interactions. Traditional reinforcement learning algorithms, such as", + "bbox": [ + 512, + 799, + 892, + 870 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 490, + 879, + 507, + 889 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Q-learning and policy gradient methods, struggle in multi-agent settings due to issues like non-stationarity where the environment constantly changes as other agents learn-and increased variance with the growing number of agents. MADDPG adapts the actor-critic framework by incorporating a centralized critic during training, which has access to the actions and observations of all agents. This centralized critic helps mitigate the non-stationarity problem by learning a more stable value function that considers the joint action space. During execution, however, each agent independently follows its policy (actor) based on local observations, enabling decentralized decision-making. It allows each agent to successfully learn and execute complex coordination strategies, outperforming existing methods in both cooperative and competitive multiagent environments. To address the computational challenges of continuous action spaces, Li et al. [156] extend the MADDPG algorithm to Multi-Agent Mutual Information Maximization Deep Deterministic Policy Gradient (M3DDPG) by incorporating a minimax approach to enhance robustness in multi-agent environments. M3DDPG introduce Multi-Agent Adversarial Learning (MAAL), which efficiently solves the minimax formulation, ensuring agents can generalize even when opponents' policies change and leading to significant improvements over existing baselines in mixed cooperative-competitive scenarios.", + "bbox": [ + 105, + 131, + 478, + 527 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Counterfactual Multi-Agent Policy Gradient (COMA) [157] is a cooperative algorithm based on the Actor-Critic framework that uses centralized learning to address the credit assignment problem in multi-agent settings. COMA employs a centralized critic to compute advantage functions for each agent, using counterfactual baselines to reduce policy dependencies among agents and improve learning efficiency. Each agent has its own policy network, but the shared centralized critic evaluates joint Q-values by considering the collective state and action information of all agents. This approach minimizes the negative impacts of policy dependencies and allows for a more comprehensive assessment of each agent's behavior, enhancing overall policy optimization.", + "bbox": [ + 105, + 530, + 478, + 741 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "3. Proximal Policy Optimization-based Algorithms: Proximal Policy Optimization (PPO) [158] is a widely used CTDE reinforcement learning algorithm that has been adapted and extended to address challenges in MARL. Within the CTDE paradigm, PPO and its multi-agent variants have shown remarkable effectiveness in balancing policy optimization efficiency and stability. PPO was introduced by Schulman et al. 
[158] as an efficient policy gradient algorithm designed to im", + "bbox": [ + 105, + 743, + 478, + 869 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "prove upon the trust region policy optimization (TRPO) framework [159]. PPO employs a clipped surrogate objective function that simplifies the trust region constraint in TRPO, allowing for stable updates without overly restrictive computational overhead. The key innovation of PPO lies in its ability to control the magnitude of policy updates through the clipping mechanism, which ensures that policies do not deviate excessively from their previous versions.", + "bbox": [ + 517, + 131, + 890, + 258 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "In MARL, Multi-Agent PPO (MAPPO) [133] extends PPO to the centralized critic paradigm. MAPPO uses a centralized value function (critic) that evaluates joint states and actions during training, while agents execute independently using their decentralized policies. MAPPO has demonstrated superior performance in various cooperative and competitive multi-agent environments, such as the StarCraft II [134, 135] and Multi-Agent Particle Environment (MPE) [59, 136, 137] benchmarks. The centralized critic allows for improved credit assignment and non-stationarity handling during training, while the decentralized execution ensures scalability. While MAPPO leverages parameter sharing among agents, this assumption may not hold in heterogeneous-agent systems where agents differ in capabilities, objectives, or action spaces.", + "bbox": [ + 517, + 260, + 890, + 486 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "To address this, Kuba et al. [160] proposed Heterogeneous-Agent Trust Region Policy Optimization (HATRPO) and Heterogeneous-Agent Proximal Policy Optimization (HAPPO). These algorithms remove the parameter-sharing assumption, allowing for individualized policy networks for each agent. HATRPO builds upon TRPO by introducing a sequential update scheme, where only one agent updates its policy at a time while the policies of other agents remain fixed. This approach ensures monotonic improvement in joint policies, as it approximates the Nash equilibrium under certain conditions, such as full observability and deterministic environments. HAPPO extends PPO in a similar vein, replacing parameter sharing with individualized policies. Like HATRPO, HAPPO employs a sequential update mechanism, but it retains the computational efficiency and practical simplicity of PPO's clipped objective function.", + "bbox": [ + 517, + 487, + 890, + 741 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "Both HATRPO and HAPPO utilize a sequential update process where one agent updates its policy while others remain fixed. This prevents conflicts during policy optimization and ensures theoretical convergence to a stable joint policy. Moreover, HATRPO and HAPPO provide monotonic improvement guarantees under specific conditions. By removing the parameter-sharing constraint, these algorithms enable agents to learn tailored policies that account for their unique roles and", + "bbox": [ + 517, + 741, + 890, + 869 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 490, + 879, + 507, + 889 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "capabilities. 
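For reference, the clipped surrogate objective that PPO, MAPPO, and HAPPO all build on can be sketched in a few lines (a simplified single-update view; in the multi-agent variants the advantages come from the centralized critic, and HAPPO applies the update agent by agent while the others are held fixed):

```python
import torch

def ppo_clip_loss(logp_new: torch.Tensor,
                  logp_old: torch.Tensor,
                  advantages: torch.Tensor,
                  clip_eps: float = 0.2) -> torch.Tensor:
    """Clipped surrogate objective: the probability ratio is clipped so a single
    update cannot move the policy too far from the one that collected the data."""
    ratio = torch.exp(logp_new - logp_old)                 # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()           # minimize the negative objective
```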
Both algorithms perform competitively in benchmark tasks, demonstrating their ability to scale to high-dimensional state-action spaces while maintaining robust coordination among agents.", + "bbox": [ + 107, + 131, + 482, + 187 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "PPO-based algorithms, including MAPPO [133], HATRPO [160], and HAPPO [160], have revolutionized multi-agent reinforcement learning by combining the stability of PPO with the coordination benefits of centralized critics. These algorithms have proven effective across a wide array of cooperative and competitive MARL tasks, offering strong performance and scalability.", + "bbox": [ + 107, + 189, + 482, + 302 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "3. Other Categories of Algorithms within the CTDE Paradigm: In addition to the well-established categories of Value Decomposition-based, Actor-Critic-based, and Proximal Policy Optimization (PPO)-based algorithms, the MARL research has seen significant advancements through innovative optimizations and enhancements within CTDE paradigm that are not confined to these traditional classifications. These approaches aim to address the inherent challenges of multi-agent environments, such as non-stationarity and limited communication, to improve overall cooperation and policy learning efficiency.", + "bbox": [ + 107, + 303, + 482, + 472 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "For example, Centralized Advising and Decentralized Pruning (CADP) is a novel framework introduced by Zhou et al. [131] to address limitations in the CTDE paradigm. CADP enhances the training process by allowing agents to explicitly communicate and exchange advice during centralized training, thus improving joint-policy exploration. To maintain decentralized execution, CADP incorporates a smooth model pruning mechanism that gradually restricts agent communication without compromising their cooperative capabilities, demonstrating its superior performance on multiagent StarCraft II SMAC and Google Research Football benchmarks. CommNet [161] introduces a neural model where multiple agents learn to communicate continuously and collaboratively through a shared communication channel, optimizing their performance on fully cooperative tasks. The method allows agents to develop their own communication protocols during training, leading to improved coordination and task-solving capabilities. Mao et al. [87] introduced a novel Meta-MARL framework by integrating game-theoretical meta-learning with MARL algorithms using the CTDE's framework, such as the Actor-Critic-based COMA [157]. This framework offers initialization-dependent convergence guarantees and significantly improves convergence rates by addressing related tasks collectively. Yun et al. [86] proposed a novel approach called Quantum Meta Multi-Agent Reinforce", + "bbox": [ + 107, + 474, + 482, + 869 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "ment Learning (QM2ARL), achieving high rewards, fast convergence, and quick adaptation in dynamic environments. QM2ARL leverages the unique dual-dimensional trainability of Quantum Neural Networks (QNNs) to enhance MARL. Liu et al. [95] proposed the Learning before Interaction (LBI) framework, which integrates a language-guided simulator into the multiagent reinforcement learning pipeline to address complex decision-making problems. 
By leveraging a generative world model with dynamics and reward components, LBI generates trial-and-error experiences to improve policy learning, demonstrating superior performance and generalization on the StarCraft Multi-Agent Challenge benchmark [134, 135].", + "bbox": [ + 517, + 131, + 892, + 330 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "2.3.2. Decentralized Training with Decentralized Execution (DTDE)", + "text_level": 1, + "bbox": [ + 517, + 344, + 892, + 370 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "As shown in the centre of Figure 3, DTDE represents a fully decentralized mechanism where each agent interacts independently with the environment and updates its own policy based on its own observations and rewards [122]. In this framework, each agent trains and operates completely independently, relying only on its own observations and rewards to update its strategy. DTDE is particularly suited for environments with limited communication or no global coordination, offering strong scalability and robustness [123].", + "bbox": [ + 517, + 373, + 892, + 514 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "The core idea behind DTDE is the independence of agents [122]. Each agent interacts with its environment and learns without requiring information from others. This makes DTDE scalable, but it also introduces challenges such as non-stationarity, where the environment appears to change as other agents adapt their strategies. This characteristic makes DTDE a valuable and challenging area of research. The theoretical foundation of DTDE is often based on Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs). As described by Amato et al. [122, 124], a Dec-POMDP models a decentralized decision-making environment where agents operate independently with limited observations while aiming to maximize a shared reward. The decentralized nature of DTDE requires each agent to learn optimal strategies based on local information only.", + "bbox": [ + 517, + 516, + 892, + 741 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "First and foremost, one of the earliest DTDE approaches is Independent Q-Learning (IQL) by et al. [127]. Here, each agent applies Q-learning independently, maintaining its own Q-function and updating it based on local observations and rewards. However, IQL faces several challenges, such as the non-stationary nature of the environment caused by other agents learning simultaneously. It also struggles with credit assignment, where it is hard to determine how an individual agent", + "bbox": [ + 517, + 743, + 892, + 869 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 490, + 879, + 509, + 889 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "contributes to the team's success. To address these issues, several extensions of IQL have been proposed:", + "bbox": [ + 102, + 131, + 480, + 161 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "- Distributed Q-Learning [128] optimistically assumes other agents always take optimal actions, focusing on learning from high-reward interactions. 
While effective in deterministic settings, it can be overly optimistic in environments with randomness.", + "bbox": [ + 119, + 170, + 480, + 254 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "- Hysteresic Q-Learning [162] By introducing two learning rates—one for positive updates and another, smaller rate for negative updates—hysteretic Q-learning balances optimism with robustness in stochastic environments.", + "bbox": [ + 121, + 263, + 480, + 335 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "- Lenient Q-Learning [163] dynamically adjusts how lenient the agent is in updating its values, depending on how frequently specific state-action pairs are encountered. It allows for more exploration in the early stages of learning while focusing on optimization later.", + "bbox": [ + 121, + 344, + 480, + 430 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "As MARL problems become more complex, DTDE methods have been extended to deep learning. Deep Q-Networks [150, 151, 152, 126] and Deep Recurrent Q-Networks [164] have been adapted for decentralized settings, enabling agents to handle high-dimensional state and action spaces. Independent DRQN (IDRQN) [165], for example, combines DRQN with independent learning, but its asynchronous experience replay can cause instability. To solve this, Concurrent Experience Replay Trajectories (CERTs) [166] synchronize experience replay among agents, reducing non-stationarity and improving learning efficiency. Other advancements include Deep Hysteretic DRQN (Dec-HDRQN) [166], which combines hysteretic updates with deep neural networks and uses concurrent buffers to handle decentralized data. These methods have shown robust performance in partially observable environments.", + "bbox": [ + 102, + 439, + 480, + 680 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "In the DTDE paradigm, policy gradient methods offer an alternative to value-based approaches, particularly for scenarios involving continuous action spaces [122]. Several policy gradient DTDE methods have been proposed:", + "bbox": [ + 102, + 681, + 480, + 753 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "- Decentralized REINFORCE [129] independently optimizes each agent's policy using gradient ascent based on rewards observed during episodes. While simple, it is less sample-efficient.", + "bbox": [ + 119, + 762, + 480, + 819 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "- Independent Actor-Critic (IAC) [157] Combining value estimation (critic) and policy optimization (actor), IAC enables agents to learn faster", + "bbox": [ + 121, + 828, + 480, + 870 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "and update more frequently, improving sample efficiency.", + "bbox": [ + 547, + 131, + 890, + 160 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "- Independent Proximal Policy Optimization (IPPO) [158, 153, 133] Extending Proximal Policy Optimization (PPO) to decentralized settings, IPPO improves policy stability by limiting how much policies can change between updates.", + "bbox": [ + 532, + 171, + 890, + 242 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Despite its advantages, DTDE still faces significant challenges [122, 123]: 1. Non-Stationarity: As other agents learn and adapt, the environment appears dynamic and unstable to each agent, making convergence difficult; 2. 
Credit Assignment: It is hard to determine how each agent's actions contribute to the team's overall reward in cooperative tasks; 3. Trade-Offs Between Scalability and Performance: While DTDE scales well, its performance may be limited in tasks requiring high levels of coordination. To overcome these challenges, future research could focus on improving communication strategies during training and more robust strategies for dynamic environments.", + "bbox": [ + 512, + 255, + 892, + 439 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "In conclusion, the DTDE paradigm provides a powerful framework for solving distributed decision-making problems, balancing scalability, independence, and efficiency. It has been successfully applied in areas such as autonomous driving, distributed energy management, and swarm robotics. As research continues, DTDE is expected to play a larger role in real-world multi-agent systems, especially in scenarios requiring robust, independent learning.", + "bbox": [ + 512, + 439, + 892, + 568 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "2.3.3. Centralized Training with Centralized Execution (CTCE)", + "text_level": 1, + "bbox": [ + 514, + 583, + 890, + 612 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "As shown in the right of Figure 3, Centralized Training with Centralized Execution (CTCE) stands out as a fully centralized mechanism to MARL decision-making, where all agents transmit their information to a central controller [150, 151, 152, 126]. This central controller has access to the observations, actions, and rewards of all agents. The agents themselves do not possess policy networks and are not responsible for making decisions. Instead, they simply execute the directives issued by the central controller [125, 123].", + "bbox": [ + 512, + 614, + 890, + 756 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Multi-Agent DQN [126] is a classic example of the CTCE paradigm, where DQN is combined with a parameter-sharing mechanism to address tasks in multi-agent environments. Gupta et al. [126] firstly extends three single-agent DRL algorithms (DQN [1, 2], TRPO, and A3C) to multi-agent systems, resulting in Multi-Agent-DQN, Multi-Agent-TRPO, and Multi-Agent-A3C. These approaches were designed to learn", + "bbox": [ + 512, + 757, + 892, + 870 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 490, + 879, + 507, + 889 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "cooperative policies in complex, partially observable environments without requiring explicit communication between agents. The DQN algorithm based on multi-agent settings, also known as PS-DQN (Parameter Sharing DQN), effectively utilizes curriculum learning to handle increasing task complexity. By starting with fewer agents and gradually increasing the number, the model scales well to more complex scenarios. Further, this foundational work has led to numerous enhancements and variants based on Multi-Agent DQN, each designed to address specific challenges in multi-agent systems, such as CoRe [150], MARL-DQN [151], and [152]. CoRe [150] introduces a counterfactual reward mechanism into MARL to address the credit assignment problem in cooperative settings. By computing the difference in global rewards when an agent hypothetically changes its action while others keep theirs fixed, CoRe enhances the standard DQN framework, significantly improving learning efficiency and performance in cooperative tasks. 
MARL-DQN [151] optimizes energy efficiency and resource allocation in NOMA wireless systems by using MARL framework combined with Deep Q-Networks. By combining MARL with DQN, it dynamically adjusts power and time allocation to minimize energy consumption while ensuring quality of service, outperforming traditional methods in terms of efficiency and performance. Hafiz et al. [152] proposed a simplified and efficient multi-agent DQN-based multi-agent system (MAS) that addresses the challenges of complexity, resource demands, and training difficulties inherent in more advanced MARL frameworks. The work introduced a shared state and reward system while maintaining agent-specific actions, which streamlines the experience replay process. The significant improvements in tasks such as Cartpole-v1 $^1$ , LunarLander-v2 $^2$ , and Maze Traversal $^3$ from OpenAI Gym [167] demonstrates the model's effectiveness and superiority.", + "bbox": [ + 105, + 128, + 484, + 656 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "2.3.4. Addition Taxonomies and Efforts of Communication-based MARL Algorithms", + "text_level": 1, + "bbox": [ + 102, + 675, + 482, + 703 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "As outlined above, three primary paradigms—CTCE, DTDE, and CTDE—have emerged in the MARL domain to tackle the challenges associated with training and execution in multi-agent systems. Each of these", + "bbox": [ + 102, + 705, + 480, + 762 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "paradigms has its strengths and limitations, yet all face inherent difficulties in handling communication among agents, which is critical for effective collaboration and decision-making.", + "bbox": [ + 512, + 131, + 892, + 187 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Specifically, the CTCE paradigm, while providing a fully integrated framework for learning and execution, struggles with scalability as the system size grows. The DTDE paradigm, on the other hand, allows for independent agent training and execution, but often lacks the necessary coordination required for global task optimization. The CTDE paradigm has emerged as a widely adopted approach due to its ability to leverage centralized information during training to learn effective policies, while enabling decentralized execution to operate efficiently in distributed environments. However, even in CTDE, the communication between agents during execution is a bottleneck, prompting researchers to focus on improving communication strategies to enhance system performance.", + "bbox": [ + 512, + 187, + 892, + 400 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Communication-based MARL algorithms have made significant progress in overcoming these challenges. From the perspective of communication protocols and languages, communication-based MARL methods can be categorized into three types: broadcasting communication, targeted communication, and networked communication, as shown in Figure 4. From the technical angle, we provide an overview of these communication-based MARL advancements, categorizing the algorithms into three main groups based on their focus: (1) value function-based Communication-based MARL, (2) policy search-based Communication-based MARL, and (3) Communication-based MARL algorithms designed to improve communication efficiency. 
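The counterfactual-reward idea just described can be sketched as follows (an illustrative snippet that assumes the global reward of a hypothetical joint action can be queried or re-simulated; function and argument names are placeholders, and the fixed baseline action is one simple choice of counterfactual):

```python
from typing import Callable, Sequence, List

def counterfactual_credits(
    joint_action: Sequence[int],
    baseline_action: int,
    global_reward: Callable[[Sequence[int]], float],
) -> List[float]:
    """Credit each agent with how much the team reward drops when its own action
    is hypothetically replaced by a baseline action while the other agents'
    actions are kept fixed."""
    r_actual = global_reward(joint_action)
    credits = []
    for i in range(len(joint_action)):
        altered = list(joint_action)
        altered[i] = baseline_action
        credits.append(r_actual - global_reward(altered))
    return credits
```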
These approaches represent the forefront of research in enabling agents to effectively share information, coordinate actions, and optimize performance in complex environments. Here, we provide a detailed introduction to these approaches.", + "bbox": [ + 512, + 401, + 892, + 671 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Value Function-Based Communication-based MARL: For Value Function-Based Communication-based MARL Algorithms, several notable works include Differentiable Inter-Agent Learning (DIAL) [168] and Deep Distributed Recurrent Q-Networks (DDRQN) [169, 170]. Among them, DIAL facilitates effective collaboration and optimization of joint action policies by enabling the exchange of gradients of Q-functions between agents. On the other hand, DDRQN leverages recurrent neural networks to address partially observable environments, allowing agents to share critical Q-values or hidden states and make adaptive decisions in dynamic settings.", + "bbox": [ + 512, + 671, + 892, + 854 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Policy Search-Based Communication-based", + "bbox": [ + 529, + 856, + 892, + 870 + ], + "page_idx": 16 + }, + { + "type": "page_footnote", + "text": "1Cartpole-v1 game: https://www.gymlibrary.dev/environments/classic_control/cart_pole/.", + "bbox": [ + 102, + 788, + 480, + 812 + ], + "page_idx": 16 + }, + { + "type": "page_footnote", + "text": "2LunarLander-v2 game: https://www.gymlibrary.dev/environments/box2d/lunar_lander/ and https://github.com/topics/lunarlander-v2.", + "bbox": [ + 102, + 812, + 478, + 846 + ], + "page_idx": 16 + }, + { + "type": "page_footnote", + "text": "3Maze Traversal game: https://github.com/vision-mini/MazeSolverLLM.", + "bbox": [ + 102, + 846, + 478, + 869 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 490, + 879, + 507, + 889 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/9250e939563f539e78b13d5c0bf791e94784531aabd5105d5e00674c0b29e491.jpg", + "image_caption": [ + "(1) Broadcasting communication" + ], + "image_footnote": [], + "bbox": [ + 115, + 133, + 339, + 282 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/f382e9eeb15ff5c88549bd6c0b57199c910ae4030f046b9be51bb2700944e725.jpg", + "image_caption": [ + "(2) Targeted communication" + ], + "image_footnote": [], + "bbox": [ + 381, + 133, + 596, + 298 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/494d99f68b7befe7e5aeb7bcab2ce12458baeb66a4548b45686ea81abbdb200e.jpg", + "image_caption": [ + "(3) Networked communication", + "Figure 4: A schematic representation of three distinct communication methods among agents, with arrows indicating the direction of message transmission. (a) Broadcasting communication: The activated agent transmits messages to all other agents within the communication network. (b) Targeted communication: Agents selectively communicate with specific target agents based on a supervisory mechanism that regulates the timing, content, and recipients of the messages. (c) Networked communication: Agents engage in localized interactions with their neighboring agents within the network." 
+ ], + "image_footnote": [], + "bbox": [ + 643, + 134, + 873, + 298 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "MARL: For Policy Search-Based Communication-based MARL Algorithms, significant progress has been made with approaches such as Communication Networks (CommNet) [171], Bidirectional Coordinated Network (BiCNet) [172, 173], Multi-Agent Distributed MADDPG (MD-MADDPG) [174, 132], Intrinsic A3C [175, 155], and Multi-Agent Communication and Coordination (MACC) [176, 177]. Among them, CommNet [171] proposes a centralized but differentiable communication framework where agents share encoded signals to form a global context, improving policy decisions. BiCNet [172, 173] enhances coordination among agents by employing bidirectional recurrent layers, making it suitable for complex tasks. MD-MADDPG [174, 132] combines centralized training and decentralized execution, enabling agents to exchange critical state-action information during training for robust policy learning. Intrinsic A3C [175, 155] introduces intrinsic motivation to encourage effective exploration in sparse-reward scenarios, with agents sharing intrinsic rewards through communication to boost performance. Finally, Multi-Agent Communication and Coordination (MACC) [176, 177] focuses on adaptive communication mechanisms, providing stable and secure coordination to enhance training and execution in dynamic multi-agent environments.", + "bbox": [ + 102, + 420, + 480, + 790 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Communication-based MARL Enhancing Communication Efficiency: For algorithms aimed at enhancing communication efficiency, several outstanding approaches include Attentional Communication (ATOC) [138], Targeted Multi-Agent Communication", + "bbox": [ + 102, + 799, + 480, + 870 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "(TarMAC) [139], Inter-Agent Centralized Communication (IC3Net) [178]. Attentional Communication (ATOC) [138] employs an attention mechanism to dynamically determine when communication is necessary, achieving a balance between efficiency and coordination. Targeted Multi-Agent Communication (TarMAC) [139] introduces targeted attention mechanisms to direct messages to relevant teammates, minimizing redundant communication, and improving overall performance. Inter-Agent Centralized Communication (IC3Net) [178] incorporates a gating mechanism that allows agents to learn when and how to communicate, optimizing both the frequency and quality of interactions.", + "bbox": [ + 512, + 420, + 892, + 605 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "These research advances in Communication-based MARL methods demonstrate significant strides in enabling agents to share information and achieve coordinated decision-making in MAS. These advancements will pave the way for deploying MARL in real-world scenarios where efficient and effective communication is essential.", + "bbox": [ + 512, + 607, + 890, + 705 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "2.4. LLMs-based Multi-Agent System Taxonomies", + "text_level": 1, + "bbox": [ + 514, + 730, + 855, + 743 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "The field of LLMs-based multi-agent systems has seen significant advancements, with researchers exploring various aspects of these systems to enhance their capabilities and applications [116, 30]. 
A comprehensive taxonomy can help categorize and understand the different dimensions of LLMs-based multi-agent systems, including architectural design, application domains, evaluation methods, and future research directions.", + "bbox": [ + 512, + 757, + 890, + 869 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 489, + 879, + 507, + 889 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "2.4.1. Architectural Design", + "text_level": 1, + "bbox": [ + 102, + 131, + 294, + 145 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "The design of architectures for LLMs-based multiagent systems is a critical component in harnessing the full potential of LLMs to enhance the capabilities of autonomous agents. Architectural design encompasses the framework and mechanisms that enable agents to interact, adapt, and make decisions in complex and dynamic environments. This section explores two primary levels of autonomy within these systems: Adaptive Autonomy and Self-Organizing Autonomy.", + "bbox": [ + 102, + 147, + 480, + 275 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "- Adaptive Autonomy: [103, 146, 113, 110] Adaptive autonomy refers to systems where agents can adjust their behavior within a predefined framework. These agents are designed to operate within the constraints set by the system architects but can adapt their actions based on the specific requirements of the task at hand. For example, in a task-specific adaptation scenario, an agent might adjust its search strategy in an information retrieval task based on the relevance of the results. In a context-aware adaptation scenario, an agent might change its communication style based on the social context of the interaction. This level of autonomy is crucial for agents that need to operate in dynamic environments where the task requirements can change over time.", + "bbox": [ + 119, + 285, + 480, + 512 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "- Self-Organizing Autonomy: [101, 103, 105, 102, 104, 154] Self-organizing autonomy represents a higher level of autonomy where agents can dynamically adapt their behavior without predefined structures. This allows for more flexible and context-aware interactions among agents. For instance, in dynamic task allocation, agents can assign tasks to each other based on the current state of the environment and their individual skills. Emergent behavior is another key feature at this level, where agents can form coalitions or develop new strategies to solve complex problems. This level of autonomy is essential for multi-agent systems that need to operate in highly dynamic and unpredictable environments.", + "bbox": [ + 121, + 521, + 480, + 734 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "2.4.2. 
Applications", + "text_level": 1, + "bbox": [ + 102, + 747, + 238, + 760 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "In the social sciences [148, 99, 112], LLMs-based agents have been used to simulate various social phenomena, providing insights into human behavior and social dynamics.", + "bbox": [ + 102, + 762, + 480, + 820 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "1) Economic Agents: [179, 101] LLMs can be used to model economic agents, similar to how economists use the concept of homo economicus.", + "bbox": [ + 119, + 828, + 480, + 870 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Experiments have shown that LLMs can produce results qualitatively similar to those of traditional economic models, making them a promising tool for exploring new social science insights. For example, in market simulation, LLMs can predict market trends and the impact of economic policies. In behavioral economics, LLMs can model individual and group decision-making processes, providing a more nuanced understanding of economic behavior.", + "bbox": [ + 546, + 131, + 890, + 273 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "- 2) Social Network Simulation: [147, 148, 99, 180] The Social-network Simulation System (S3) uses LLMs-based agents to simulate social networks, accurately replicating individual attitudes, emotions, and behaviors. This system can model the propagation of information, attitudes, and emotions at the population level, providing valuable insights into social dynamics. For example, it can simulate how information spreads through social networks and identify influential nodes, or model the evolution of social norms and behaviors over time.", + "bbox": [ + 532, + 282, + 890, + 451 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "- 3) User Behavior Analysis: [97, 111, 112] LLMs are employed for user simulation in recommender systems, demonstrating superiority over baseline simulation systems. They can generate reliable user behaviors, improving the accuracy of recommendations. For example, in personalized recommendations, LLMs can generate user profiles and behaviors to optimize recommendation algorithms. In user engagement, LLMs can simulate user interactions to optimize user retention and engagement.", + "bbox": [ + 532, + 462, + 890, + 604 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "In the natural sciences [100, 149, 181], LLMs-based agents have been used to simulate complex systems and processes, providing insights into natural phenomena and scientific theories.", + "bbox": [ + 514, + 612, + 890, + 668 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "1) Macroeconomic Simulation: LLMs-based agents are used for macroeconomic simulation, making realistic decisions and reproducing classic macroeconomic phenomena. These agents can simulate the impact of economic policies on the macroeconomy, providing a more accurate and dynamic model of economic behavior. 
For example, they can simulate the interactions between different economic sectors and their impact on the overall economy, helping policymakers make more informed decisions.", + "bbox": [ + 532, + 677, + 890, + 832 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "2) Generative Agent-Based Modeling: This approach couples mechanistic models with genera", + "bbox": [ + 532, + 841, + 890, + 870 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "18", + "bbox": [ + 490, + 879, + 507, + 889 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "tive artificial intelligence to unveil social system dynamics, such as norm diffusion and social dynamics. By combining the strengths of both approaches, researchers can model complex social systems and predict their behavior over time. For example, they can model the spread of diseases in a population, the impact of environmental changes on ecosystems, or the evolution of social norms in a community.", + "bbox": [ + 134, + 131, + 480, + 259 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "In engineering [103, 146, 114, 115], LLMs-based agents have been used to develop and optimize complex systems, improving efficiency and performance.", + "bbox": [ + 102, + 268, + 480, + 311 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "- 1) Software Development: LLMs-based agents are used for software development, facilitating sophisticated interactions and decision-making in a wide range of contexts. These agents can assist in code generation, bug detection, and system optimization, improving the productivity and quality of software development. For example, they can generate code snippets based on natural language descriptions, detect bugs in code, and suggest optimizations to improve performance.", + "bbox": [ + 119, + 319, + 480, + 462 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "- 2) Multi-Robot Systems: LLMs-based multi-agent systems are used to simulate complex real-world environments effectively, enabling interactions among diverse agents to solve various tasks. These systems can coordinate the actions of multiple robots, optimizing their behavior to achieve common goals. For example, they can be used in search and rescue operations, where multiple robots need to coordinate their actions to locate and rescue victims.", + "bbox": [ + 119, + 470, + 482, + 612 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "3. Simulation Environments of Multi-Agent Decision-Making", + "text_level": 1, + "bbox": [ + 102, + 633, + 480, + 662 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "First and foremost, the designs and implementations of multi-agent cooperative simulation environments are crucial in the historical research of multi-agent decision-making, which are widely utilized in practical applications and production. These simulation environments form the foundation for conducting efficient and effective studies in multi-agent cooperative decision-making. Specifically, a dynamic multi-agent cooperative decision-making environment refers to predetermined scenarios and platforms where multiple agents collaborate to solve problems, complete tasks, and achieve goals. Such environments provide not only a platform for testing and validating various intelligent decision-making algorithms but also help", + "bbox": [ + 102, + 671, + 482, + 870 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "us better understand the behaviors and interactions of agents in dynamic settings. 
By simulating these interactions, researchers can gain insights into how agents coordinate and adapt to changing conditions, thereby improving the robustness and efficiency of multi-agent systems in real-world applications. Consequently, the importance of these simulation environments cannot be overstated. They serve as a testing ground for theoretical models, allowing researchers to observe the practical implications of their intelligent algorithms. Additionally, these platforms help in identifying potential issues and refining strategies before deployment in actual scenarios, ensuring that the agents are well-prepared to handle the complexities of real-world environments. In Table 2, a wide range of simulated environments is listed. Next, we will delve into these environments one by one, emphasizing their significance and features for future development.", + "bbox": [ + 512, + 131, + 894, + 388 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "3.1. MARL-based Simulation Environments", + "text_level": 1, + "bbox": [ + 514, + 401, + 815, + 414 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "This section provides an overview of several widely-used simulation environments designed for MARL. These platforms, such as Multi-Agent Particle Environment [59, 136, 137], and PettingZoo [182], offer diverse scenarios and functionalities for exploring cooperative and competitive agent interactions in both simple and complex tasks.", + "bbox": [ + 512, + 420, + 892, + 519 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/dcd89d2a1cccb373c66e14cf63b7c26091146804c5ab4fa00049a238a73e0215.jpg", + "image_caption": [ + "Communication", + "Figure 5: Typical Scenarios in Multi-Agent Particle Environment (MPE)." + ], + "image_footnote": [], + "bbox": [ + 522, + 532, + 643, + 623 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/aaea582c151a81fb3696d62ff4d12a2da361daeb71969d0cef534808282b728a.jpg", + "image_caption": [ + "Spread" + ], + "image_footnote": [], + "bbox": [ + 643, + 532, + 763, + 621 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/c5a5536b752b9767e384cd17a453f522e02db73bbd898ee3b1d32025f3d71938.jpg", + "image_caption": [ + "Reference" + ], + "image_footnote": [], + "bbox": [ + 766, + 532, + 884, + 621 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "3.1.1. Several Widely-used Environments on MARL", + "text_level": 1, + "bbox": [ + 514, + 697, + 867, + 712 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Multi-Agent Particle Environment (MPE) [59, 136, 137] is a versatile and widely-used MARL platform designed for research in both cooperative and competitive settings. Developed by OpenAI, it is primarily known for being the testing environment of the MAD-DPG algorithm [136]. MPE is a time-discrete, space-continuous 2D platform designed for evaluating MARL algorithms.", + "bbox": [ + 512, + 714, + 892, + 827 + ], + "page_idx": 19 + }, + { + "type": "text", + "text": "Figure 5, initially derived from Malloy et al. 
[137], illustrates various scenarios within the Multi-Agent Particle Environment (MPE), including tasks", + "bbox": [ + 514, + 828, + 892, + 870 + ], + "page_idx": 19 + }, + { + "type": "page_number", + "text": "19", + "bbox": [ + 490, + 878, + 507, + 889 + ], + "page_idx": 19 + }, + { + "type": "table", + "img_path": "images/8e1f91109e3b966e14a78a4c15a7cd1df02119c369d997aa57d9ae3164b5be3d.jpg", + "table_caption": [ + "Table 2: Diverse MARL-based and LLMs-based Simulated Environments for Multi-Agent Systems." + ], + "table_footnote": [], + "table_body": "
Categories | Multi-Agent System Environments
MARL-based | Multi-Agent Particle Environment (MPE) [59, 136, 137], PettingZoo [182], SMAC [134], SMACv2 [135], GFootball [55], Gym-Microrts [183], MAgent [184], Dexterous Hands [185, 186], OpenAI Gym [167], Gym-MiniGrid [187], Melting Pot [188], Capture The Flag [189], VillagerAgent [190], Minecraft [191, 192, 193], Unity ML-Agents [194], SUMO [195], Hanabi Learning [196, 197], Predator-Prey [198, 199]
LLMs-based | TDW-MAT [147, 148], C-WAH [99], Cuisineworld [149], AgentScope [100], RoCoBench [200], Generative Agents [181, 201], SocialAI school [202, 203], Welfare Diplomacy [204]
", + "bbox": [ + 213, + 152, + 784, + 312 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "such as adversarial interactions, cooperative crypto, object pushing, and team-based world navigation. Compatible with the widely-used Gym interface, it supports a variety of tasks ranging from fully cooperative to mixed cooperative-competitive scenarios, such as simple_adversary, simple_crypto, simple_spread, simple-speaker Listener, and simple_world_comm8. Each scenario highlights distinct cooperative and competitive dynamics among agents. MPE allows agents to interact and strategize within a visually simplistic UI where particles represent different entities. MPE is a open-source platform that widely adopted in the multi-agent system research, enabling extensive customization and contributing to its role as a standard tool for studying complex multi-agent dynamics.", + "bbox": [ + 100, + 336, + 480, + 562 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Overall, MPE is a pivotal resource in the MARL community, offering a well-rounded platform for experimentation and algorithm comparison. Its design and functionality have made it an indispensable tool for researchers seeking to push the boundaries of what is possible in multi-agent systems.", + "bbox": [ + 102, + 563, + 480, + 646 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "StarCraft Multi-Agent Challenge (SMAC)[134] is a widely-used benchmark for MARL that focuses on decentralized micromanagement tasks in the popular real-time strategy game StarCraft II[10]. In SMAC, multiple agents control individual units and must learn to cooperate and coordinate actions based on local, partial observations. The agents face complex challenges, including coordinating combat techniques like focus fire, kiting, and positioning, while the opponent", + "bbox": [ + 102, + 646, + 482, + 776 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "is controlled by the built-in StarCraft II AI. SMAC emphasizes problems such as partial observability, decentralized decision-making, and multi-agent credit assignment. The environment is structured to simulate real-world scenarios where agents must learn to collaborate without full knowledge of the global state. Agents' observations are restricted to a limited field of view, forcing them to rely on local information for decision-making. As shown in Figure 6, these multi-agent cooperative decision-making environments are respectively 2s vs 3z, 5m vs 6m, 6h vs 8z, MM2, where the inside numbers represent the number of units and the letters represent the unit types in general. 
In recent years, SMAC has become a standard benchmark for evaluating MARL algorithms, offering a rigorous and challenging environment for advancing the field.", + "bbox": [ + 512, + 336, + 894, + 564 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/8ce921f3f1029feaf7e4f9920d087711055b3f3b730fa94019ee7dfcc46a6eb1.jpg", + "image_caption": [ + "2s_vs_3z" + ], + "image_footnote": [], + "bbox": [ + 521, + 573, + 702, + 653 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/9ffcd120820e280580ba8b4135920ff1bb425175bb4ae45fd0b6b96459a05cdf.jpg", + "image_caption": [ + "5m_vs_6m" + ], + "image_footnote": [], + "bbox": [ + 707, + 573, + 887, + 653 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/c441759cf5f1f64bf32be940a7213d4e62acd6483c83f9d24fc127762b5ee44f.jpg", + "image_caption": [ + "6h_vs_8z" + ], + "image_footnote": [], + "bbox": [ + 521, + 674, + 700, + 757 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/062a4fad536eb0ec8c96ad5812fb9f6ca97a26bdff81ad9a24d287f63775e317.jpg", + "image_caption": [ + "MMM2", + "Figure 6: Several Typical Scenarios in StarCraft Multi-Agent Challenge (SMAC)." + ], + "image_footnote": [], + "bbox": [ + 705, + 674, + 885, + 757 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "StarCraft Multi-Agent Challenge 2 (SMACv2)[1134, 154, 91] However, SMAC [134] has limitations,", + "bbox": [ + 514, + 816, + 890, + 846 + ], + "page_idx": 20 + }, + { + "type": "page_footnote", + "text": "11StarCraft Multi-Agent Challenge 2 (SMACv2): https://", + "bbox": [ + 527, + 857, + 890, + 870 + ], + "page_idx": 20 + }, + { + "type": "page_footnote", + "text": "$^{8}$ Multi-Agent Particle Environment: https://github.com/openai/multiagent-particle-envs/tree/master/multiagent/scenarios.", + "bbox": [ + 102, + 799, + 478, + 835 + ], + "page_idx": 20 + }, + { + "type": "page_footnote", + "text": "$^{9}$ StarCraft Multi-Agent Challenge (SMAC): https://github.com/oxwhirl/smac.", + "bbox": [ + 102, + 835, + 477, + 857 + ], + "page_idx": 20 + }, + { + "type": "page_footnote", + "text": "10StarCraft II: https://starcraft2.blizzard.com/.", + "bbox": [ + 122, + 857, + 431, + 870 + ], + "page_idx": 20 + }, + { + "type": "page_number", + "text": "20", + "bbox": [ + 489, + 878, + 507, + 889 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "including insufficient stochasticity and partial observability, which allows agents to perform well with simple open-loop policies. To address these shortcomings, SMACv2 introduces procedural content generation (PCG), randomizing team compositions and agent positions, ensuring agents face novel, diverse scenarios. Several multi-agent decision-making scenarios are depicted in Figure 7, which are from Benjamin et al. [135]. This requires more sophisticated, closed-loop policies that condition on both ally and enemy information. Additionally, SMACv2 includes the Extended Partial Observability Challenge (EPO), where enemy observations are masked stochastically, forcing agents to adapt to incomplete information and communicate more effectively. SMACv2 thus represents a major evolution of the original benchmark, addressing key gaps such as the lack of stochasticity and meaningful partial observability. These changes make SMACv2 a more challenging environment, requiring agents to generalize across varied settings and improve coordination, communication, and decentralized decision-making. 
Overall, SMACv2 provides a more rigorous testbed for advancing the field of cooperative MARL.", + "bbox": [ + 100, + 131, + 480, + 458 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/5977daaa3a65dbecf1ae4ebd5d4273f9270a5994fc32afaecaa06a046981ff38.jpg", + "image_caption": [ + "(a) Terran", + "Figure 7: Several scenarios from SMACv2 showing agents battling the built-in AI." + ], + "image_footnote": [], + "bbox": [ + 110, + 468, + 292, + 569 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/cc1555bf214608175b7ecc0036ea0248cb323f3aa8b093c0775f6f0247e1a6cb.jpg", + "image_caption": [ + "(b) Protoss" + ], + "image_footnote": [], + "bbox": [ + 297, + 468, + 473, + 569 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Google Research Football Environment (GFootball) [55] is a state-of-the-art multi-agent simulation environment developed by the Google Research Brain Team. It is specifically designed for reinforcement learning research and is built on top of the open-source football game, GamePlay Football. GFootball is compatible with the OpenAI Gym API, making it a versatile tool not only for training intelligent agents but also for allowing players to interact with the built-in AI or trained agents using a keyboard or game controller. GFootball features an advanced, physics-based 3D football simulator where agents can be trained to play football, offering a challenging yet highly customizable platform for testing novel reinforcement learning algorithms and ideas. GFootball is tailored for multi-agent", + "bbox": [ + 100, + 631, + 482, + 845 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "experiments and multiplayer scenarios, enabling the exploration of more complex interactions and strategies. GFootball supports various scenarios, including fullgame simulations with varying difficulty levels, as well as simpler tasks in the Football Academy that focus on specific skills like passing or scoring.", + "bbox": [ + 512, + 131, + 892, + 218 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/fe125f7ad7146c8d20a68dd683d2c487a0c2aa784c5c3f04bb080f05675d86ff.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 526, + 231, + 705, + 309 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/e5ddbb87534db84e6d03407faae41692168b868eafe9607c3e2c250685d3d32e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 707, + 231, + 880, + 309 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/92ae51be2b8df48d17be35b8d6b2ad59fdf7b1204444fa6c3a8abf9fe3652419.jpg", + "image_caption": [ + "Figure 8: Typical examples of Training Scenarios in Football Academy." + ], + "image_footnote": [], + "bbox": [ + 524, + 310, + 705, + 388 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/cd543be592269ea50a5b717ea12435ce30b9687643df4f91e24b491f70ec8d60.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 707, + 310, + 880, + 388 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Moreover, training agents for the \"Football Benchmark\" can be quite challenging. To help researchers efficiently test and iterate on new ideas, researchers provide a toolset called \"Football Academy\", as illustrated in Figure 8, which includes a series of scenarios with varying levels of difficulty. 
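Creating one of these Academy drills is typically a one-liner through GFootball's Gym-compatible helper (a sketch assuming the public gfootball package; the argument names and scenario identifier follow the repository's documentation but may differ slightly across versions):

```python
import gfootball.env as football_env

# A Football Academy drill: a single player in front of an open goal.
env = football_env.create_environment(
    env_name="academy_empty_goal_close",
    representation="simple115",   # compact float-vector observation
    render=False,
)

obs = env.reset()
done, total_reward = False, 0.0
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
    total_reward += reward
print("episode reward:", total_reward)
```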
These scenarios range from simple setups, such as a single player scoring against an open goal (e.g., approaching an open goal, scoring in an open goal, or scoring while running), to more complex team-based setups, where a controlled team must break through specific defensive formations (e.g., scoring while running against a goalkeeper, passing and shooting against a goalkeeper, and 3v1 against a goalkeeper). Additionally, the toolset covers common situations in football matches, such as corner kicks, simple counterattacks, and complex counterattacks. Lastly, as an famous open-source GitHub project12, it offers a unique opportunity for researchers and pushes the boundaries of AI research in a reproducible and scalable manner.", + "bbox": [ + 512, + 441, + 892, + 724 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Unity Machine Learning-Agents Toolkit[13] [194] is an open-source platform designed to enable games and simulations to serve as environments for training intelligent agents. Built on Unity's powerful game engine, it supports a wide range of AI and machine learn", + "bbox": [ + 512, + 725, + 892, + 797 + ], + "page_idx": 21 + }, + { + "type": "page_footnote", + "text": "12Google Research Football: google-research/football.", + "bbox": [ + 514, + 822, + 721, + 846 + ], + "page_idx": 21 + }, + { + "type": "page_footnote", + "text": "13Unity ML-Agents Toolkit: Unity-Technologies/ml-agents.", + "bbox": [ + 514, + 846, + 724, + 870 + ], + "page_idx": 21 + }, + { + "type": "page_footnote", + "text": "https://github.com/", + "bbox": [ + 752, + 824, + 890, + 835 + ], + "page_idx": 21 + }, + { + "type": "page_footnote", + "text": "https://github.com/", + "bbox": [ + 752, + 847, + 890, + 858 + ], + "page_idx": 21 + }, + { + "type": "footer", + "text": "github.com/oxwhirl/smacv2.", + "bbox": [ + 102, + 858, + 289, + 869 + ], + "page_idx": 21 + }, + { + "type": "page_number", + "text": "21", + "bbox": [ + 487, + 878, + 505, + 889 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "ing methods, including reinforcement learning, imitation learning, and neuroevolution, through an intuitive Python API. The platform includes state-of-the-art algorithm implementations (based on PyTorch), allowing researchers and developers to train agents for 2D, 3D, and VR/AR applications.", + "bbox": [ + 102, + 131, + 480, + 217 + ], + "page_idx": 22 + }, + { + "type": "image", + "img_path": "images/53875bfadf5937371eb9290dac80d5ffbbc6ef1faa61797072bc5b31078641a6.jpg", + "image_caption": [ + "Figure 9: Typical Training Scenarios in Unity Machine Learning-Agents Toolkit (released version: v0.11). From Left-to-right, up-to-down: (a) Basic, (b) 3DBall, (c) Crawler, (d) Push Block, (e) Tennis, (f) Worm, (g) Bouncer, (h) Grid World, (i) Walker, (j) Reacher, (k) Food Collector, (l) Pyramids, (m) Wall Jump, (n) Hallway, (o) Soccer Twos [194]." + ], + "image_footnote": [], + "bbox": [ + 110, + 231, + 475, + 475 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "ML-Agents is particularly useful for training NPC behaviors in diverse scenarios, automated testing of game builds, and evaluating game design decisions. It features a highly flexible simulation environment with realistic visuals, physics-driven interactions, and rich task complexity. 
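On the research side, such Unity scenes are driven through the mlagents_envs Python package, which exposes each agent group as a named behavior with a decision/terminal step interface. The sketch below attaches to a running editor scene and sends random actions; the method names follow recent mlagents_envs releases and should be checked against the installed version.

```python
from mlagents_envs.environment import UnityEnvironment

# file_name=None attaches to a scene started in the Unity editor;
# pass the path of a built binary instead for headless training runs.
env = UnityEnvironment(file_name=None)
env.reset()

behavior_name = list(env.behavior_specs.keys())[0]
spec = env.behavior_specs[behavior_name]

for _ in range(100):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    if len(decision_steps) > 0:
        # One random action per agent currently requesting a decision.
        actions = spec.action_spec.random_action(len(decision_steps))
        env.set_actions(behavior_name, actions)
    env.step()
env.close()
```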
By integrating tools for creating custom environments and supporting multi-agent and adversarial settings, the toolkit bridges the gap between AI research and practical applications in game development.", + "bbox": [ + 102, + 571, + 482, + 712 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "As seen from Figure 9, it depicts several typical multi-agent environments from the previous work of Juliani et al. [194]. The platform also provides key components such as a Python API, Unity SDK, and pre-built environments, enabling users to customize and evaluate their algorithms in Unity's interactive and visually rich settings. With its versatility and accessibility, Unity ML-Agents Toolkit has become an indispensable resource for both AI researchers and game developers, driving innovation in artificial intelligence and simulation-based learning.", + "bbox": [ + 102, + 714, + 482, + 870 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Gym-Microrts $^{14}$ [183] (pronounced \"Gym-micro-RTS\") is a fast and affordable reinforcement learning (RL) platform designed to facilitate research in fullgame Real-Time Strategy (RTS) games. Unlike traditional RTS research that demands extensive computational resources, Gym- $\\mu$ RTS allows training advanced agents using limited hardware, such as a single GPU and CPU setup, within reasonable timeframes. Figure 10 showcases a match between our best-trained agent (top-left) and CoacAI (bottom-right), the $2020\\mu$ RTS AI competition champion. The agent employs an efficient strategy, starting with resource harvesting and a worker rush to damage the enemy base, transitioning into midgame combat unit production to secure victory.", + "bbox": [ + 512, + 131, + 894, + 331 + ], + "page_idx": 22 + }, + { + "type": "image", + "img_path": "images/f72b8927d7ad0f20dd420e3c0537c50cb2a61fc6a171923ecd883bbc96ce2f69.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 537, + 340, + 694, + 445 + ], + "page_idx": 22 + }, + { + "type": "image", + "img_path": "images/7ed8fe37724e932a2a3ab825bea994f6e768276e6ff3027446672d60c35542b5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 697, + 342, + 870, + 442 + ], + "page_idx": 22 + }, + { + "type": "image", + "img_path": "images/b462ee5279fc07101d5d0eb3e7f398608f911c11288edf318519293f4d90f54c.jpg", + "image_caption": [ + "Figure 10: Screenshot of our best-trained agent (top-left) playing against CoacAI (bottom-right), the $2020\\mu \\mathrm{RTS}$ AI competition champion [183]." + ], + "image_footnote": [], + "bbox": [ + 537, + 448, + 702, + 555 + ], + "page_idx": 22 + }, + { + "type": "image", + "img_path": "images/bf1aad07e6b6dfae82304cdea60c98c659bb3ab718316201042409f300a4afad.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 707, + 443, + 870, + 555 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "The platform offers a simplified RTS environment that captures the core challenges of RTS games, including combinatorial action spaces, real-time decision-making, and partial observability. Gym- $\\mu$ RTS employs a low-level action space, enabling fine-grained control over individual units without AI assistance, which poses unique challenges and opportunities for RL algorithms. 
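A practical consequence of this low-level, combinatorial action space is that most unit/action combinations are illegal at any given step, which is why Gym-μRTS training pipelines lean on invalid action masking (discussed next). The core trick is library-agnostic: suppress the logits of unavailable actions before sampling, as in the minimal PyTorch sketch below, where the availability mask would come from the environment.

```python
import torch

def masked_sample(logits: torch.Tensor, valid_mask: torch.Tensor) -> torch.Tensor:
    """Sample discrete actions while giving invalid ones (mask == 0) ~zero probability."""
    masked_logits = logits.masked_fill(valid_mask == 0, -1e9)
    return torch.distributions.Categorical(logits=masked_logits).sample()

logits = torch.randn(2, 6)                      # 2 units, 6 candidate actions each
valid = torch.tensor([[1, 0, 1, 1, 0, 0],       # availability reported by the env
                      [0, 1, 1, 0, 0, 1]])
print(masked_sample(logits, valid))
```

The same masked logits are reused when computing log-probabilities for the policy loss, so invalid actions contribute essentially nothing to the update.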
It supports Proximal Policy Optimization (PPO) and incorporates techniques like invalid action masking, action composition, and diverse training opponents to enhance training efficiency and agent performance.", + "bbox": [ + 512, + 617, + 894, + 774 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Gym- $\\mu$ RTS has demonstrated its effectiveness by producing state-of-the-art DRL agents capable of defeating top competition bots, such as CoacAI. The platform is open-source and provides all necessary tools for", + "bbox": [ + 512, + 774, + 892, + 832 + ], + "page_idx": 22 + }, + { + "type": "page_footnote", + "text": "$^{14}$ Gym-Microrts: gym-microrts.", + "bbox": [ + 515, + 846, + 623, + 870 + ], + "page_idx": 22 + }, + { + "type": "page_footnote", + "text": "https://github.com/kered9/", + "bbox": [ + 702, + 846, + 890, + 858 + ], + "page_idx": 22 + }, + { + "type": "page_number", + "text": "22", + "bbox": [ + 487, + 878, + 507, + 889 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "researchers to explore and advance RL techniques in RTS games, making it a valuable resource for both AI researchers and gaming enthusiasts.", + "bbox": [ + 102, + 131, + 480, + 174 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "$\\mathbf{MAgent}^{15}$ [184] is an open-source platform specifically designed to support large-scale MARL research, with a focus on exploring Artificial Collective Intelligence (ACI). Unlike traditional MARL platforms, MAgent excels in handling scenarios involving hundreds to millions of agents, making it ideal for studying complex interactions and emergent behaviors in large populations.", + "bbox": [ + 102, + 174, + 480, + 287 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "For instance, as shown in Figure 11, the \"Pursuit\" scenario is a classic example designed to showcase the emergent cooperative behaviors of agents in a predator-prey environment. In this setup, predators work together to capture preys while the preys attempt to evade capture. Each predator receives rewards for successfully attacking a prey, while preys are penalized if caught. Over time, predators learn to form collaborative strategies, such as surrounding and trapping preys, highlighting the emergence of local cooperation.", + "bbox": [ + 102, + 288, + 480, + 431 + ], + "page_idx": 23 + }, + { + "type": "image", + "img_path": "images/a8b8ab058ef58b895e471b30f19076c894ad38342e7c3636432125fa0ff8fe44.jpg", + "image_caption": [ + "Adversarial Pursuit" + ], + "image_footnote": [], + "bbox": [ + 117, + 445, + 290, + 577 + ], + "page_idx": 23 + }, + { + "type": "image", + "img_path": "images/6a7a5cc929d96a2e693b7eb6dd6b52ad97c671cee999876ac5f06b04b6595221.jpg", + "image_caption": [ + "Combined Arms" + ], + "image_footnote": [], + "bbox": [ + 295, + 447, + 468, + 576 + ], + "page_idx": 23 + }, + { + "type": "image", + "img_path": "images/a02b8f1699d713c9af9acb477efb8ff6330f22c91869f18ec626c6f6d413cd20.jpg", + "image_caption": [ + "Gather", + "Figure 11: Illustrations of one of the typical running example in MAGent, called \"Pursuit\" [184]." 
+ ], + "image_footnote": [], + "bbox": [ + 117, + 596, + 290, + 720 + ], + "page_idx": 23 + }, + { + "type": "image", + "img_path": "images/fecce6d79d7fc54c61690b86b34ca2b5b7e04c3550cec69330b8de4f7f92f10e.jpg", + "image_caption": [ + "Tiger Deer" + ], + "image_footnote": [], + "bbox": [ + 295, + 592, + 468, + 720 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "The platform is based on a grid-world model where agents can perform actions such as moving, turning, or attacking, while perceiving both local and global", + "bbox": [ + 102, + 787, + 480, + 829 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "information. Through a flexible Python interface, researchers can easily customize the state space, action space, and reward mechanisms, enabling the rapid creation of complex cooperative or competitive environments. MAgent comes with several built-in scenarios, such as pursuit, resource gathering, and team-based battles, which highlight emergent social behaviors like cooperative strategies, competitive dynamics, and resource monopolization.", + "bbox": [ + 512, + 131, + 890, + 259 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "MAgent is highly scalable, leveraging GPU-based parallelism to simulate large-scale interactions efficiently. It also provides intuitive visualization tools for real-time observation of agent behaviors, facilitating analysis and debugging. Its flexibility and scalability make MAgent a powerful tool for MARL research, enabling the study of large-scale agent interactions, emergent behaviors, and the dynamics of artificial societies.", + "bbox": [ + 512, + 260, + 892, + 373 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "3.2. LLMs Reasoning-based Simulation Environments", + "text_level": 1, + "bbox": [ + 514, + 388, + 885, + 401 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "LLMs-based multi-agent systems have become an essential tool for enhancing the collaboration, reasoning, and decision-making capabilities of autonomous agents [116]. By integrating LLMs with simulation platforms, researchers can create complex test environments to explore the interactions of multi-agent systems in various tasks and scenarios. These simulation environments not only provide rich dynamic testing scenarios but also promote the widespread application of LLMs in task planning, coordination, and execution. The following will introduce several widely used simulation platforms for LLM multi-agent systems.", + "bbox": [ + 512, + 407, + 890, + 577 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "ThreeDWorld Multi-Agent Transport (TDW-MAT) $^{16}$ [147, 148] is a simulation environment for multi-agent embodied task, which is extended from the ThreeDWorld Transport Challenge [147] and is designed for visually-guided task-and-motion planning in physically realistic settings. It operates within the ThreeDWorld (TDW) platform, which offers high-fidelity sensory data, real-time physics-driven interactions, and near-photorealistic rendering. In TDW-MAT, embodied agents are tasked with transporting objects scattered throughout a simulated home environment using containers, emphasizing the need for coordination, physics awareness, and efficient planning. 
For instance, in the common scenario shown in Figure 12, the agent must transport objects scattered across multiple rooms and place them on the bed (marked with a green bounding box) in the bedroom.", + "bbox": [ + 512, + 577, + 892, + 819 + ], + "page_idx": 23 + }, + { + "type": "page_footnote", + "text": "15MAgent: https://github.com/geek-ai/MAgent.", + "bbox": [ + 115, + 856, + 423, + 870 + ], + "page_idx": 23 + }, + { + "type": "page_footnote", + "text": "16ThreeDWorld Multi-Agent Transport: https://github.com/ threadworld-mit/tdw.", + "bbox": [ + 514, + 846, + 890, + 869 + ], + "page_idx": 23 + }, + { + "type": "page_number", + "text": "23", + "bbox": [ + 489, + 879, + 507, + 889 + ], + "page_idx": 23 + }, + { + "type": "image", + "img_path": "images/a1e4180f2a0493ad23fc08481d7cb72ffbfc367e8264189dd05626a0c1ddb953.jpg", + "image_caption": [ + "Figure 12: An overview of one of the example task in ThreeDWorld Transport Challenge [147, 148]." + ], + "image_footnote": [], + "bbox": [ + 110, + 129, + 475, + 275 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Communicative Watch-And-Help (C-WAH) $^{17}$ [99] is a realistic multi-agent simulation environment and an extension of the Watch-And-Help Challenge platform, VirtualHome-Social [98]. C-WAH places a greater emphasis on cooperation and enhances communication between agents compared to VirtualHome-Social. Built on the VirtualHome-Social, C-WAH simulates common household activities where agents must collaborate to complete tasks such as preparing meals, washing dishes, and setting up a dinner table. As shown in Figure 13, C-WAH supports both symbolic and visual observation modes, allowing agents to perceive their surroundings either through detailed object information or egocentric RGB and depth images.", + "bbox": [ + 102, + 335, + 480, + 535 + ], + "page_idx": 24 + }, + { + "type": "image", + "img_path": "images/3ccc87a05c26f57ebdd77411aba1dbe0626dbca6891e3519086570eadf130ab7.jpg", + "image_caption": [ + "Figure 13: An typical object-moving task leveraging LLMs-based embodied agents within the Communicative Watch-And-Help [99]." + ], + "image_footnote": [], + "bbox": [ + 110, + 546, + 475, + 686 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Cuisineworld[18] [149] is a virtual kitchen environment designed to evaluate and enhance multi-agent collaboration and coordination (i.e., the working efficiency) in a gaming context. As shown in Figure 14, in this scenario, multiple agents work together to prepare", + "bbox": [ + 102, + 736, + 482, + 809 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "and complete dish orders within a limited time frame. The tasks range from simple preparations, like chopping ingredients, to complex cooking processes that involve multiple tools and steps. CuisineWorld is equipped with a textual interface, and it supports various levels of difficulty, making it a flexible and robust testbed for assessing the planning and scheduling capabilities of Large Foundation Models (LFMs). 
The environment also introduces a \"Collaboration Score\" (CoS) metric to measure the efficiency of agent coordination as task demands increase, providing a comprehensive benchmark for multi-agent system performance in dynamic and cooperative settings.", + "bbox": [ + 512, + 131, + 892, + 317 + ], + "page_idx": 24 + }, + { + "type": "image", + "img_path": "images/45d8ef4c1e22b8e2f0fce42e73adc7e8d124e191aac3818de1146886c84efdad.jpg", + "image_caption": [ + "Figure 14: An typical multi-agent cooperative scenario in the CuisineWorld platform [149]." + ], + "image_footnote": [], + "bbox": [ + 522, + 331, + 885, + 474 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "AgentScope[100] is a innovative, robust and flexible multi-agent platform designed to empower developers in building advanced multi-agent systems by leveraging the potential of LLMs. At its core, the platform employs a process-based message exchange mechanism, simplifying the complexities of agent communication and collaboration. This approach ensures smooth and efficient agent interaction, enabling developers to focus on designing workflows rather than low-level details. The platform stands out for its comprehensive fault-tolerance infrastructure, which includes retry mechanisms, rule-based corrections, and customizable error-handling configurations. AgentScope also excels in multi-modal support, seamlessly integrating text, images, audio, and video into its workflows. By decoupling data storage and transfer, it optimizes memory usage and enhances scalability, making it ideal for applications requiring rich multi-modal interactions. Additionally, its actor-based distributed framework enables efficient parallel execution and supports hybrid deployments, bridging the gap between local and distributed", + "bbox": [ + 512, + 525, + 892, + 824 + ], + "page_idx": 24 + }, + { + "type": "page_footnote", + "text": "17Communicative Watch-And-Help: https://github.com/xavierpuigf/watch_and_help.", + "bbox": [ + 102, + 834, + 480, + 858 + ], + "page_idx": 24 + }, + { + "type": "page_footnote", + "text": "18Cuisineworld: https://mindagent.github.io/.", + "bbox": [ + 122, + 858, + 415, + 870 + ], + "page_idx": 24 + }, + { + "type": "page_footnote", + "text": "19 AgentScope: https://github.com/modelscope/ agentscope.", + "bbox": [ + 515, + 845, + 892, + 870 + ], + "page_idx": 24 + }, + { + "type": "page_number", + "text": "24", + "bbox": [ + 489, + 878, + 507, + 889 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "environments with ease.", + "bbox": [ + 104, + 131, + 270, + 143 + ], + "page_idx": 25 + }, + { + "type": "image", + "img_path": "images/d6c01b2de5b8288035011a26518bccdfc88b0dd05ecce397647c732cb8a340e5.jpg", + "image_caption": [ + "Figure 15: The official multi-modal interaction Web UI page between agents in the AgentScope platform [100]." + ], + "image_footnote": [], + "bbox": [ + 109, + 155, + 475, + 297 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "Moreover, to improve user interaction with multiple agents, AgentScope assigns distinct colors and icons to each agent, as shown in Figure 15, providing clear visual differentiation in both the terminal and web interface. Designed with user accessibility in mind, AgentScope provides intuitive programming tools, including pipelines and message centers, which streamline the development process. Its interactive user interfaces, both terminal- and web-based, allow developers to monitor performance, track costs, and engage with agents effectively. 
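In code, this message-centric design follows a simple pattern: initialize the framework with a model configuration, construct agents, and pass messages between them explicitly. The sketch below is adapted from the quick-start pattern in the AgentScope repository; the configuration path and argument names such as model_config_name are assumptions that may differ across releases.

```python
import agentscope
from agentscope.agents import DialogAgent, UserAgent

# Point the framework at an LLM backend; the config file path is illustrative.
agentscope.init(model_configs="./model_configs.json")

assistant = DialogAgent(
    name="assistant",
    sys_prompt="You are a helpful assistant.",
    model_config_name="my_model_config",   # must match an entry in the config file
)
user = UserAgent()

# Explicit message exchange: each agent consumes the other's latest message.
msg = None
while msg is None or msg.content != "exit":
    msg = assistant(msg)
    msg = user(msg)
```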
These features position AgentScope as a state-of-the-art platform for advancing multi-agent systems, combining ease of use with cutting-edge technology.", + "bbox": [ + 102, + 350, + 480, + 548 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "RoCoBench $^{20}$ RoCoBench is a benchmark platform, proposed by Mandi et al. [200], designed to evaluate and enhance the collaborative capabilities of multirobot systems powered by LLMs. Built as an extension to the RoCo project $^{21}$ , RoCoBench provides a realistic simulation environment where robotic agents interact and collaborate to complete complex tasks, as shown in Figure 16, such as sorting packages, assembling components, or preparing a workspace. RoCoBench places a strong emphasis on communication-driven collaboration, integrating both symbolic and visual interaction modes to enable robots to perceive and respond to their environment effectively. Each robot is equipped with LLMs-powered reasoning, facilitating real-time dialogue and coordination. Correspondingly, the benchmark introduces a \"Collaboration Efficiency Metric\" (CEM) to evaluate the effectiveness of multirobot teamwork, taking into account factors like task completion time, resource allocation, and the quality", + "bbox": [ + 102, + 549, + 482, + 820 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "of inter-robot communication. RoCoBench serves as a comprehensive platform for evaluating the potential of LLMs in driving dialectic multi-robot collaboration, offering a scalable and flexible environment for developers and researchers alike", + "bbox": [ + 512, + 131, + 894, + 202 + ], + "page_idx": 25 + }, + { + "type": "image", + "img_path": "images/3dd879c3d4d448f314c8aa26b5c93b0fa8235c3e4bcc25f245fe5d866f8dd71e.jpg", + "image_caption": [ + "Figure 16: An overview of RoCoBench, a collection of six multirobot collaboration tasks set in a tabletop manipulation environment. The scenarios encompass a diverse range of collaborative challenges, each demanding distinct communication and coordination strategies between robots, incorporating familiar, intuitive objects designed to align with the semantic understanding capabilities of LLMs [200]." + ], + "image_footnote": [], + "bbox": [ + 522, + 214, + 890, + 336 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "Generative Agents $^{22}$ Park et al. [181, 201] introduces Generative Agents, a groundbreaking framework for simulating human behavior in interactive virtual worlds. These agents exhibit realistic individual and collective behaviors by incorporating dynamic memory, self-reflection, and action planning capabilities. The system leverages LLMs to store, retrieve, and synthesize memories into higher-level reasoning, enabling agents to adapt their actions based on personal experiences and evolving environmental changes. As illustrated in Figure 16, they present an interactive sandbox environment called Smallville, akin to \"The Sims,\" where 25 distinct virtual agents live, interact, and carry out daily activities. Each agent has a detailed initial profile, including personal traits, relationships, and goals, stored as \"seed memories.\" Agents engage in natural language-based dialogues and demonstrate social behaviors such as hosting events, making new acquaintances, and responding to user interventions. 
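The memory retrieval step behind this behavior ranks stored records by a combination of recency, importance, and relevance. The sketch below is a simplified version of that scoring rule; the decay rate, the equal weights, and the hard-coded relevance value are illustrative stand-ins (in the actual system, relevance comes from embedding similarity with the current query).

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    importance: float       # e.g. a 1-10 score assigned by an LLM when the memory is written
    last_access_step: int   # when the memory was last retrieved

def retrieval_score(mem: Memory, relevance: float, now: int,
                    decay: float = 0.995, weights=(1.0, 1.0, 1.0)) -> float:
    """Combine recency, importance, and relevance into a single ranking score."""
    recency = decay ** (now - mem.last_access_step)   # exponential decay since last access
    importance = mem.importance / 10.0                # normalize to [0, 1]
    w_rec, w_imp, w_rel = weights
    return w_rec * recency + w_imp * importance + w_rel * relevance

memories = [
    Memory("Isabella is planning a Valentine's Day party", importance=8, last_access_step=90),
    Memory("Ate breakfast at the cafe", importance=2, last_access_step=99),
]
scores = [retrieval_score(m, relevance=0.7, now=100) for m in memories]
best = max(zip(scores, memories), key=lambda pair: pair[0])[1]
print(best.text)   # the party plan wins despite being accessed less recently
```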
Generative Agents enable interactive applications in fields such as simulating realistic social dynamics for games and training simulations; designing dynamic, non-scripted virtual worlds for interactive systems; and exploring theories and behaviors in a controlled yet realistic virtual setting. The evaluations revealed the critical role of memory retrieval, self-reflection, and action planning in achieving coherent agent behaviors. Common issues,", + "bbox": [ + 512, + 434, + 894, + 819 + ], + "page_idx": 25 + }, + { + "type": "page_footnote", + "text": "$^{20}$ RoCoBench: https://project-roco.github.io/.", + "bbox": [ + 115, + 845, + 435, + 857 + ], + "page_idx": 25 + }, + { + "type": "page_footnote", + "text": "$^{21}$ RoCo Project: https://project-roco.github.io/.", + "bbox": [ + 117, + 857, + 436, + 870 + ], + "page_idx": 25 + }, + { + "type": "page_footnote", + "text": "22Generative Agents: https://youmingyeh.github.io/cs-book/papers/generative-agents.", + "bbox": [ + 515, + 845, + 892, + 870 + ], + "page_idx": 25 + }, + { + "type": "page_number", + "text": "25", + "bbox": [ + 489, + 878, + 507, + 889 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "such as exaggerated responses and overly formal communication, were identified as areas for improvement. Generative Agents push the boundaries of human behavior simulation, offering a robust framework for creating autonomous, memory-driven virtual agents.", + "bbox": [ + 102, + 131, + 480, + 203 + ], + "page_idx": 26 + }, + { + "type": "image", + "img_path": "images/36660833d9b669e811ee508cad96ba8f3a4987062eebdfae9f1f1f18a635a33d.jpg", + "image_caption": [ + "Figure 17: Generative agents serve as realistic simulations of human behavior, designed for interactive applications. In a sandbox environment inspired by The Sims, twenty-five agents engage in activities such as planning their routines, sharing updates, building relationships, and collaborating on group events, while allowing users to observe and interact with them. [181, 201]." + ], + "image_footnote": [], + "bbox": [ + 112, + 215, + 475, + 353 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "SocialAI school $^{23}$ Kovac et al. [202, 203] introduces The SocialAI School, a novel framework designed to explore and develop socio-cognitive abilities in artificial agents. The study emphasizes the importance of sociocognitive skills as foundational to human intelligence and cultural evolution. As shown in Figure 18, the SocialAI School provides a customizable suite of procedurally generated environments that enable systematic research into the socio-cognitive abilities required for artificial agents to interact with and contribute to complex cultures. Built on MiniGrid, it provides procedural environments for RL and LLMs-based agents to study social skills like joint attention, imitation, and scaffolding. Open-source and versatile, it enables diverse research, including generalizing social inferences, role reversal studies, and scaffolded learning. The SocialAI School represents a significant step toward enriching AI systems with socio-cognitive abilities inspired by human development.", + "bbox": [ + 102, + 447, + 482, + 715 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "Welfare Diplomacy[24] [204] is an innovative variant of the zero-sum game Diplomacy, designed to evaluate the cooperative capabilities of multi-agent systems. 
Unlike the original game, which focuses on a single winner, Welfare Diplomacy introduces a general-sum framework where players balance military con", + "bbox": [ + 102, + 717, + 480, + 804 + ], + "page_idx": 26 + }, + { + "type": "image", + "img_path": "images/7d31292bade85ac419cca1f29f6b55c388eca7f69e9ee3bdf8fdeaa392730e54.jpg", + "image_caption": [ + "Figure 18: A clear workflow of an agent acting in the SocialAI school environment [202, 203]." + ], + "image_footnote": [], + "bbox": [ + 526, + 131, + 880, + 300 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "quest with investments in domestic welfare. Players accumulate Welfare Points (WPs) throughout the game by prioritizing welfare over military expansion, and their total utility at the end of the game is determined by these points, removing the notion of a single \"winner\". Welfare Diplomacy enables clearer assessments of cooperation and provides stronger incentives for training cooperative AI agents. Players take on the roles of European powers, negotiating, forming alliances, and strategizing to compete for key supply centers. Orders are submitted and resolved simultaneously, with the goal of controlling a specified number of SCs to achieve victory, emphasizing a balance between cooperation and betrayal. Based on these rules, Welfare Diplomacy implements themselves via an open-source platform, and develops zero-shot baseline agents using advanced language models like GPT-4 [141, 140]. Experiments reveal that while these agents achieve high social welfare through mutual demilitarization, they remain vulnerable to exploitation, highlighting room for future improvement.", + "bbox": [ + 512, + 387, + 894, + 684 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "In summary, these cutting-edge LLMs-powered simulation environments—ranging from task-specific platforms like TDW-MAT [148, 147] and CuisineWorld [149] to socially-driven frameworks such as Generative Agents [201] and the SocialAI School [203]—highlight the transformative potential of integrating advanced AI reasoning and multi-agent systems. By fostering research on collaboration, social cognition, and cooperative decision-making, these tools not only advance our understanding of AI's capabilities but also pave the way for practical applications in dynamic, real-world scenarios.", + "bbox": [ + 512, + 700, + 894, + 869 + ], + "page_idx": 26 + }, + { + "type": "page_footnote", + "text": "23SocialAI school project: https://sites.google.com/view/socialai-school. 24Welfare Diplomacy: https://github.com/mukobi/welfare-diplomacy.", + "bbox": [ + 102, + 822, + 482, + 872 + ], + "page_idx": 26 + }, + { + "type": "page_number", + "text": "26", + "bbox": [ + 489, + 879, + 507, + 890 + ], + "page_idx": 26 + }, + { + "type": "image", + "img_path": "images/027c51c8ae31617b6e731a7c9c023fae96a39cb5a7aa76015ca260b4832f553b.jpg", + "image_caption": [ + "Figure 19: The Balkans in the Diplomacy map in Welfare Diplomacy [204]." + ], + "image_footnote": [], + "bbox": [ + 126, + 129, + 458, + 305 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "4. Practice Applications of Multi-Agent Decision-Making", + "text_level": 1, + "bbox": [ + 102, + 366, + 480, + 397 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "Multi-agent cooperative decision-making has a wide range of practical applications across various domains. 
In this section, we delve into the practical applications of multi-agent decision-making, focusing on how advanced methods, particularly multi-agent MARL, are employed to address complex challenges in dynamic and evolving environments. We explore the contributions of advanced multi-agent systems across domains such as agriculture, disaster rescue, military simulations, traffic management, autonomous driving, and multi-robot collaboration. A broad array of applications applications is presented in Table 3. In the following, we will provide a detailed introduction to these applications, highlighting their impact and potential for future advancements.", + "bbox": [ + 102, + 407, + 480, + 620 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "4.1. MARL-based Intelligent Applications", + "text_level": 1, + "bbox": [ + 102, + 637, + 393, + 652 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "Recently, a variety of MARL methods have been developed, fostering efficient collaboration, strategic learning, and adaptive problem-solving in multi-agent systems [83, 24, 25, 28]. Below, we highlight notable contributions that demonstrate the application of MARL in enhancing multi-agent collaboration and performance", + "bbox": [ + 102, + 657, + 480, + 755 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "In smart precious agriculture and continuous pest disease detection, Seewald et al. [11] addressed the challenge of continuous exploration for multi-agent systems with battery constraints by integrating ergodic search methods with energy-aware coverage. In disaster rescue, Qazzaz et al. [12] proposed a novel technique using a reinforcement learning multi Q-learning algorithm to optimize UAV connectivity operations in", + "bbox": [ + 102, + 757, + 482, + 870 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "challenging terrain. Samad et al. [23] presents a cloud-based multi-agent framework for efficiently managing aerial robots in disaster response scenarios, aiming to optimize rescue efforts by autonomously processing real-time sensory data to locate and assist injured individuals.", + "bbox": [ + 512, + 131, + 890, + 216 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "In military confrontations, Qi et al. [90] designed a distributed MARL framework based on the actor-work-learner architecture, addressing the issues of slow sample collection and low training efficiency in MARL within the MaCA [89] and SMAC 3D real-time gaming [134, 135] military simulation environments. Benke et al. [208] proposed a computational model for agent decision-making that incorporates strategic deception, enhancing the representation of deceptive behaviors in multi-agent simulations for military operations research. Sutagundar et al. [209] proposed a Context Aware Agent based Military Sensor Network (CAMSN) to enhance multi-sensor image fusion, using node and sink-driven contexts, forming an improved infrastructure for multi-sensor image fusion.", + "bbox": [ + 512, + 217, + 892, + 430 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "In efficient limited-bandwidth communication field, Wang et al. [242] proposed a method called IMAC (Informative Multi-Agent Communication) to address the problem of limited-bandwidth communication in MARL.", + "bbox": [ + 512, + 430, + 892, + 500 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "In the research of UAV swarm communications against jamming, Lv et al. 
[93] proposed a MARL-based scheme to optimize relay selection and power allocation. This strategy leverages network topology, channel states, and shared experiences to improve policy exploration and stability, ultimately enhancing anti-jamming performance.", + "bbox": [ + 512, + 501, + 890, + 600 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "In UAV pursuit-evasion [20, 21, 22], Kouzeghar [88] proposed a decentralized heterogeneous UAV swarm approach for multi-target pursuit using MARL technique and introduced a variant of the MADDPG [132] to address pursuit-evasion scenarios in non-stationary environments with random obstacles. Alexopoulos et al. [212] addressed the challenge of pursuit-evasion games involving two pursuing and one evading unmanned aerial vehicle (UAV) by introducing a hierarchical decomposition of the game. Luo et al. [213] proposed a cooperative maneuver decision-making method for multi-UAV pursuit-evasion scenarios using an improved MARL approach, which incorporates an enhanced CommNet network with a communication mechanism to address multi-agent coordination.", + "bbox": [ + 512, + 600, + 892, + 813 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "In large-scale traffic signal/flow control, Wang [214] proposed a curiosity-inspired algorithm to optimize safe and smooth traffic flow in autonomous vehicle on-ramp merging; Chu et al. [92] proposed a fully scalable", + "bbox": [ + 512, + 813, + 892, + 870 + ], + "page_idx": 27 + }, + { + "type": "page_number", + "text": "27", + "bbox": [ + 487, + 879, + 507, + 889 + ], + "page_idx": 27 + }, + { + "type": "table", + "img_path": "images/48f547c67fc780cf871fda26165072e845badaaa642603d3a11c7bf140ae0ce4.jpg", + "table_caption": [ + "Table 3: Categorized Applications of MARL and LLMsin Diverse Domains." + ], + "table_footnote": [], + "table_body": "
CategoryApplication AreaWorks / References
MARL-based MASSmart Precious Agriculture & Disaster RescueSeewald et al. [11], Qazzaz et al. [12], Samad et al. [23], Boubin et al. [205], Li et al. [206], Mahajan et al. [207]
Military ConfrontationsQi et al. [90], Benke et al. [208], Sutagundar et al. [209], Vangaru et al. [210], Wang et al. [211], MaCA [89], SMAC [134], SMAC-v2 [135]
UAV Pursuit-Evasion & Swarm Communications & NavigationKouzeghar [88], Alexopoulos et al. [212], Luo et al. [213], Lv et al. [93], Xue et al. [20], Rezwan et al. [21], Baroomi et al. [22]
Traffic Signal/Flow ControlWang [214], Chu et al. [92], Aboueleneen et al. [215], Yu et al. [216], Sun et al. [217], Azfar et al. [218], Bokade et al. [219], Kwesiga et al. [220], Zhang et al. [221]
Autonomous DrivingXue et al. [222], Liu et al. [19], Wen et al. [18], Jayawardana et al. [17], Liu et al. [223], Formanek et al. [224], Zhang et al. [225], Kotoku et al. [226], Hua et al. [227]
Multiple Robots CollaborativeGeorgios et al. [13], Silva et al. [14], Huang et al. [15], Cena et al. (SMART) [16], Kevin (SCRIMMAGE) [228], Liu et al. [95]
LLMs-based MASMulti-Agent CollaborationWu et al. (AutoGen) [103], Xiao et al. (CoE) [101], Chen et al. (AgentV-erse) [105], Liu et al. (DyLAN) [106], Zhang et al. (CoELA) [96]
Gaming InteractionXu et al. (LLM-Werewolf) [58], Gong et al. (MindAgent) [149], Xie et al. [229], Lin et al. [230], Jia et al. (GameFi) [231], Yin et al. (MIRAGE) [232], Zhang et al. (DVM) [233], Bonorino et al. [234]
Autonomous DrivingZheng et al. (PlanAgent) [113], Luo et al. (SenseRAG) [235], Mahmud et al. [236], Peng et al. (LearningFlow) [237], Karagounis et al. [238], Luo et al. [239], Gao et al. [240], Hegde et al. [241]
Multi-Modal ApplicationWang et al. (LangGraph) [111, 112], Zhang et al. (CrewAI) [114, 115], Zheng et al. (PlanAgent) [113], Wang et al. (MLLM-Tool) [102]
", + "bbox": [ + 134, + 152, + 860, + 598 + ], + "page_idx": 28 + }, + { + "type": "text", + "text": "and decentralized multi-agent deep reinforcement learning algorithm based on the advantage actor-critic (A2C) method.", + "bbox": [ + 102, + 621, + 480, + 664 + ], + "page_idx": 28 + }, + { + "type": "text", + "text": "In autonomous driving area, a large number of superior multi-agent decision-making algorithms and models are continuously being explored and devised. Xue et al. [222] developed a two-stage system framework for improving Multi-Agent Autonomous Driving Systems (MADS) by enabling agents to recognize and understand the Social Value Orientations (SVOs) of other agents. Liu et al. [19] proposed the Personality Modeling Network (PeMN), which includes a cooperation value function and personality parameters to model diverse interactions in highly interactive scenarios, addressing the issue of diverse driving styles in autonomous driving. Wen et al. [18] proposed a safe reinforcement learning algorithm called Parallel", + "bbox": [ + 102, + 671, + 482, + 870 + ], + "page_idx": 28 + }, + { + "type": "text", + "text": "Constrained Policy Optimization (PCPO) based on the actor-critic architecture to address the issues of unexplainable behaviors and lack of safety guarantees in autonomous driving. Jayawardana et al. [17] proposed enabling socially compatible driving by leveraging human driving data to learn a social preference model, integrating it with reinforcement learning-based AV policy synthesis using Social Value Orientation theory.", + "bbox": [ + 512, + 621, + 890, + 736 + ], + "page_idx": 28 + }, + { + "type": "text", + "text": "In multiple robots collaborative fields, Georgios et al. [13] introduces a novel cognitive architecture for large-scale multi-agent Learning from Demonstration (LfD), leveraging Federated Learning (FL) to enable scalable, collaborative, and AI-driven robotic systems in complex environments. Silva et al. [14] address the challenges and limitations in evaluating intelligent collaborative robots for Industry 4.0. The review emphasizes the urgent need for improved evaluation methods and", + "bbox": [ + 512, + 741, + 892, + 870 + ], + "page_idx": 28 + }, + { + "type": "page_number", + "text": "28", + "bbox": [ + 489, + 879, + 507, + 890 + ], + "page_idx": 28 + }, + { + "type": "text", + "text": "standards to account for the complexities posed by human variability, AI integration, and advanced control systems in industrial environments. Huang et al. [15] presents a multi-agent reinforcement learning approach using the MADDPG algorithm, enhanced with an experience sample optimizer, to train swarm robots for autonomous, collaborative exploration on Mars. This approach outperforms traditional DRL algorithms in efficiency as the number of robots and targets increases. The SMART multi-agent robotic system [16] is a comprehensive and advanced platform designed for executing coordinated robotic tasks. It integrates both hardware components, such as robots and IP-C Cameras, and software agents responsible for image processing, path planning, communication, and decision-making. 
By utilizing Work-Flow Petri Nets for modeling and control, the system effectively ensures coordination and successful task execution even in unstructured environments.", + "bbox": [ + 100, + 131, + 480, + 400 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Furthermore, the well-known project, Simulating Collaborative Robots in a Massive Multi-agent Game Environment (SCRIMMAGE)[25] [228], tackles the high costs of field testing robotic systems by offering a flexible and efficient simulation environment specifically designed for mobile robotics research. Unlike many existing simulators that are primarily designed for ground-based systems with high-fidelity multi-body physics models, SCRIMMAGE focuses on simulating large numbers of aerial vehicles, where precise collision detection and complex physics are often unnecessary. SCRIMMAGE is designed to be highly adaptable, with a plugin-based architecture that supports various levels of sensor fidelity, motion models, and network configurations. This flexibility allows the simulation of hundreds of aircraft with low-fidelity models or a smaller number with high-fidelity models on standard consumer hardware. Overall, SCRIMMAGE[26] provides a robust and scalable solution for testing and refining robotic algorithms in a controlled virtual environment, significantly reducing the risks and costs associated with physical testing.", + "bbox": [ + 100, + 401, + 480, + 714 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Liu et al. [95] proposed the Learning before Interaction (LBI) framework, a novel approach designed to enhance multi-agent decision-making through generative world models. Traditional generative models struggle with trial-and-error reasoning, often failing to produce reliable solutions for complex multi-agent tasks. To address this limitation, LBI integrates a language-", + "bbox": [ + 100, + 714, + 480, + 813 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "guided simulator into the MARL pipeline, enabling agents to learn grounded reasoning through simulated experiences. LBI consists of a world model composed of a dynamics model and a reward model. The dynamics model incorporates a vector quantized variational autoencoder (VQ-VAE) [243] for discrete image representation and a causal transformer to autoregressively generate interaction transitions. Meanwhile, the reward model employs a bidirectional transformer trained on expert demonstrations to provide task-specific rewards based on natural language descriptions. LBI further distinguishes itself by generating explainable interaction sequences and reward functions, providing interpretable solutions for multi-agent decision-making problems. By addressing challenges such as the compositional complexity of MARL environments and the scarcity of paired text-image datasets, LBI represents a significant advancement in the field.", + "bbox": [ + 512, + 131, + 894, + 387 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "Ye et al. [244] proposed an adaptive genetic algorithm (AGA) that dynamically adjusts crossover and mutation populations, leveraging the Dubins car model and state-transition strategies to optimize the allocation of heterogeneous UAVs in suppression of enemy air defense missions. Radac et al. combine two modelfree controller tuning techniques linear virtual reference feedback tuning (VRFT) and nonlinear state-feedback Q-learning as a novel mixed VRFT-Q learning method [94]. 
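Since the mixed VRFT-Q scheme just cited (like several methods above) is built around reinforcement Q-learning, the underlying one-step update is worth stating explicitly. The sketch below is the generic tabular rule, not the VRFT-Q controller itself.

```python
import numpy as np

def q_learning_update(Q: np.ndarray, s: int, a: int, r: float, s_next: int,
                      alpha: float = 0.1, gamma: float = 0.99) -> None:
    """One-step Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

Q = np.zeros((5, 3))                       # toy table: 5 states, 3 actions
q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)
print(Q[0])                                # [0.0, 0.1, 0.0]
```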
VRFT is initially employed to determine a stabilizing feedback controller using input-output experimental data within a model reference tracking framework. Subsequently, reinforcement Q-learning is applied in the same framework, utilizing input-state experimental data gathered under perturbed VRFT to ensure effective exploration. Extensive simulations on position control of a two-degrees-of-motion open-loop stable multi input-multi output (MIMO) aerodynamic system (AS) demonstrates the mixed VRFT-Q's significant performance improvement over the Q-learning controllers and the VRFT controllers.", + "bbox": [ + 512, + 387, + 894, + 684 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "To address the lack of a general metric for quantifying policy differences in MARL problems, Hu et al. [84] proposed the Multi-Agent Policy Distance (MAPD), a tool designed to measure policy differences among agents. Additionally, they developed a Multi-Agent Dynamic Parameter Sharing (MADPS) algorithm based on MAPD, demonstrating its effectiveness in enhancing policy diversity and overall performance through extensive experiments. To addresses the challenge of cooperative MARL in scenarios with dynamic team compositions, Wang et al. [245] propose using mutual information as an augmented reward to encourage robustness in agent policies across different team configurations.", + "bbox": [ + 512, + 686, + 895, + 871 + ], + "page_idx": 29 + }, + { + "type": "page_footnote", + "text": "25 SCRIMMAGE Web: http://www.scrimmagesim.org/. 26 SCRIMMAGE project: https://github.com/gtri/ scrimmage.", + "bbox": [ + 102, + 834, + 480, + 870 + ], + "page_idx": 29 + }, + { + "type": "page_number", + "text": "29", + "bbox": [ + 489, + 878, + 507, + 889 + ], + "page_idx": 29 + }, + { + "type": "text", + "text": "They develop a multi-agent policy iteration algorithm with a fixed marginal distribution and demonstrate its strong zero-shot generalization to dynamic team compositions in complex cooperative tasks. Progressive Mutual Information Collaboration (PMIC) $^{27}$ is a novel framework that leverages mutual information (MI) to guide collaboration among agents, thereby enhancing performance in multi-agent cooperative tasks [246]. The key innovation of is its dual MI objectives: maximizing MI associated with superior collaborative behaviors and minimizing MI linked to inferior ones, ensuring more effective learning and avoiding sub-optimal collaborations. Wai et al. [83] proposes a novel double averaging primal-dual optimization algorithm for MARL, specifically targeting decentralized applications like sensor networks and swarm robotics. The MARL algorithm enables agents to collaboratively evaluate policies by incorporating neighboring gradient and local reward information, achieving fast finite-time convergence to the optimal solution in decentralized convex-concave saddle-point problems. To address the challenge of sparse rewards in MARL, Kang et al. [247] introduce the Dual Preferences-based Multi-Agent Reinforcement Learning (DPM) framework. DPM extends preference-based reinforcement learning (PbRL) by incorporating dual preference types-comparing both trajectories and individual agent contributions-thereby optimizing individual reward functions more effectively. DPM also leverages LLMs to gather preferences, mitigating issues associated with human-based preference collection. 
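DPM's dual preferences extend the standard preference-based RL objective, in which a reward model is trained so that the preferred trajectory segment receives the higher cumulative predicted reward (a Bradley-Terry style cross-entropy). The sketch below shows only that generic objective; DPM's agent-level preferences and LLM-generated labels are not modeled here.

```python
import torch

def preference_loss(reward_model, seg_a, seg_b, a_is_preferred):
    """Cross-entropy on P(a > b), a softmax over summed predicted segment rewards."""
    r_a = reward_model(seg_a).sum(dim=1)        # (batch,) total predicted reward of segment a
    r_b = reward_model(seg_b).sum(dim=1)
    logits = torch.stack([r_a, r_b], dim=1)     # (batch, 2)
    labels = (~a_is_preferred).long()           # class 0 if a preferred, class 1 otherwise
    return torch.nn.functional.cross_entropy(logits, labels)

# Toy usage: segments are (batch, timesteps, obs_dim); the reward model maps obs -> scalar.
reward_model = torch.nn.Sequential(torch.nn.Linear(4, 1), torch.nn.Flatten(start_dim=1))
seg_a, seg_b = torch.randn(8, 10, 4), torch.randn(8, 10, 4)
prefs = torch.ones(8, dtype=torch.bool)         # annotator preferred segment a in every pair
print(preference_loss(reward_model, seg_a, seg_b, prefs))
```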
Experimental results in the StarCraft Multi-Agent Challenge (SMAC) [135] demonstrate that DPM significantly outperforms existing baselines, particularly in sparse reward settings.", + "bbox": [ + 105, + 128, + 480, + 612 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Traditional methods like soft attention struggle with scalability and efficiency in LMAS due to the overwhelming number of agent interactions and large observation spaces. To address these challenges of largescale multi-agent systems (LMAS) involving hundreds of agents, University of Chinese Academy of Sciences [248] introduces the Concentration Network (ConcNet), a novel reinforcement learning framework. ConcNet mimics human cognitive processes of concentration by prioritizing and aggregating observations based on motivational indices, such as expected survival time and state value. It allows the system to focus on the most relevant entities, enhancing decision-making efficiency in complex environments. In ConcNet, a novel concentration policy gradient architecture was further proposed,", + "bbox": [ + 105, + 615, + 480, + 828 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "demonstrating its superior performance and scalability in large-scale multi-agent scenarios, such as decentralized collective assault simulations. This research represents a significant advancement in the field, providing a scalable solution for effective decision-making in large-scale multi-agent environments.", + "bbox": [ + 517, + 131, + 890, + 216 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "In conclusion, MARL-based intelligent applications have shown exceptional adaptability across diverse domains such as autonomous driving, UAV systems, disaster response, and collaborative robotics [242, 88, 20, 19, 13, 16]. Key innovations, including communication-enhanced learning [168, 161, 174, 139], adaptive policy optimization, and mutual information [246] frameworks, have significantly advanced the field. While challenges like sparse rewards and scalability remain, these advancements highlight MARL's potential to address dynamic and complex multi-agent environments effectively, paving the way for further impactful developments.", + "bbox": [ + 517, + 218, + 890, + 401 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "4.2. LLMs reasoning-based Intelligent Applications", + "text_level": 1, + "bbox": [ + 517, + 420, + 868, + 435 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "To address diverse and complex challenges, a variety of frameworks leveraging LLMs have been developed, enabling advanced reasoning, collaboration, and task execution in multi-agent systems [116, 30, 57]. Below, we highlight notable contributions that demonstrate the application of LLMs in enhancing multi-agent decision-making and coordination.", + "bbox": [ + 517, + 444, + 890, + 542 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "Wu et al. [103] introduced AutoGen, an open-source framework designed to enable the development of next-generation LLM applications through multi-agent conversations. AutoGen allows for customizable agent interactions and the integration of LLMs, human inputs, and tools to collaboratively solve complex tasks. Xiao et al. [101] proposed Chain-of-Experts (CoE), a novel multi-agent framework designed to enhance reasoning in complex operations research (OR) problems using LLMs. Chen et al. 
[105] presented AgentVerse, a multiagent framework designed to facilitate collaboration among autonomous agents, inspired by human group dynamics. AgentVerse dynamically adjusts the composition and roles of agents throughout the problem-solving process, enhancing their ability to tackle complex tasks across various domains, including text understanding, reasoning, coding, and embodied AI. The framework consists of four stages: Expert Recruitment, Collaborative Decision-Making, Action Execution, and Evaluation. Chen et al. [107] introduced AutoAgents, a framework capable of adaptively generating and coordinating multiple specialized agents based on different tasks, thereby constructing efficient multi-agent teams", + "bbox": [ + 517, + 544, + 890, + 869 + ], + "page_idx": 30 + }, + { + "type": "page_footnote", + "text": "27PMIC code: https://github.com/yeshenpy/PMIC.", + "bbox": [ + 119, + 858, + 431, + 869 + ], + "page_idx": 30 + }, + { + "type": "page_number", + "text": "30", + "bbox": [ + 489, + 879, + 507, + 889 + ], + "page_idx": 30 + }, + { + "type": "text", + "text": "to accomplish complex tasks. Liu et al. [106] proposed the Dynamic LLM-Agent Network (DyLAN), a framework designed to enhance LLM-agent collaboration by enabling agents to interact dynamically based on task requirements, rather than within a static architecture. Xu et al. [58] proposed a novel multi-agent framework that combines LLMs with reinforcement learning to enhance strategic decision-making and communication in the Werewolf game $^{28}$ , effectively overcoming intrinsic biases and achieving human-level performance. Wen et al. [104] introduce the Multi-Agent Transformer (MAT), a novel architecture that frames cooperative MARL as a sequence modeling problem. Experiments on StarCraftII [154, 91, 134], Multi-Agent MuJoCo (MAMuJoCo) [249], Dexterous Hands Manipulation [185, 186], and Google Research Football [55] benchmarks demonstrate that it achieves superior performance and data efficiency by leveraging modern sequence models in an on-policy learning framework. Wang et al. [102] introduced MLLM-Tool $^{29}$ , a multimodal tool agent system that integrates open-source LLMs with multimodal encoders, enabling it to process visual and auditory inputs for selecting appropriate tools based on ambiguous multimodal instructions. Moreover, they introduced ToolMMBench, a novel benchmark with multi-modal inputs and multioption solutions, demonstrating its effectiveness in addressing real-world multimodal multi-agent scenarios. Zhang et al. [96] introduce CoELA, a Cooperative Embodied Language Agent framework that leverages LLMs to enhance multi-agent cooperation in complex, decentralized environments. CoELA integrates LLMs with cognitive-inspired modules for perception, memory and execution, allowing agents to plan, communicate, and collaborate effectively on long-horizon tasks, outperforming traditional planning-based methods such as Multi-Agent Transformer(MAT) [104], and showing promising results in human-agent interaction simulation environments, Communicative Watch-And-Help (C-WAH) [99] and ThreeDWorld Multi-Agent Transport (TDW-MAT) [147, 148]. Gong et al. [149] from Team of Li.FeiFei. introduce MindAgent, a novel infrastructure for evaluating planning and coordination capabilities in gaming interactions, leveraging large foundation models (LFMs) to coordinate multi-agent system (MAS), collaborate with human players, and enable in-context learning. 
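Several of the frameworks surveyed here ship as Python packages; AutoGen, introduced above, builds such workflows out of conversable agents. The minimal two-agent sketch below follows the pyautogen quick-start pattern; the model name, API key handling, and working directory are illustrative and depend on the installed version.

```python
from autogen import AssistantAgent, UserProxyAgent

# llm_config points AutoGen at an OpenAI-compatible endpoint (values illustrative).
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_KEY"}]}

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",                       # fully automated back-and-forth
    code_execution_config={"work_dir": "coding"},   # lets the proxy execute generated code
)

# The proxy drives the conversation, runs any code the assistant writes,
# and feeds the results back until the task is considered solved.
user_proxy.initiate_chat(assistant, message="Plot a sine wave and save it to sine.png")
```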
Their team also present", + "bbox": [ + 102, + 129, + 480, + 800 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "\"Cuisineworld\", a new gaming scenario and benchmark for assessing multi-agent collaboration efficiency. Despite LLMs' success in various collaborative tasks, they struggle with spatial and decentralized decision-making required for flocking. Li et al. [57] explored the challenges faced by LLMs in solving multi-agent flocking tasks, where agents strive to stay close, avoid collisions, and maintain a formation. Sun et al. [180] proposed Corex, a novel framework that enhances complex reasoning by leveraging multi-model collaboration. Inspired by human cognitive processes, Corex employs three collaborative paradigms-Discuss, Review, and Retrieve-where different LLMs act as autonomous agents to collectively solve complex tasks. Corex empowers LLM agents to \"think outside the box\" by facilitating collaborative group discussions, effectively mitigating the cognitive biases inherent in individual LLMs. This approach not only enhances performance but also improves cost-effectiveness and annotation efficiency, offering a significant advantage in complex tasks.", + "bbox": [ + 512, + 131, + 894, + 416 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "Next, we will provide a detailed introduction to some outstanding achievements in the application of LLMs for multi-agent collaborative task execution.", + "bbox": [ + 512, + 416, + 894, + 458 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "MetaGPT: Existing LLMs-based multi-agent systems often struggle with complex tasks due to logical inconsistencies and cumulative hallucinations, leading to biased results. Hong et al. [108, 109] from DeepWisdom $^{31}$ proposed MetaGPT $^{32}$ , an innovative metaprogramming framework designed to enhance the collaboration capabilities of LLMs-based multi-agent systems. MetaGPT integrates Standard Operating Procedures (SOPs) commonly used in human workflows, thereby constructing a more efficient and coherent multi-agent collaboration system. MetaGPT employs an assembly-line approach, breaking down complex tasks into multiple subtasks and assigning them to agents with specific domain expertise. These agents collaborate during task execution through clearly defined roles and structured communication interfaces, reducing the risk of information distortion and misunderstanding. In summary, MetaGPT offers a flexible and powerful platform for developing LLMs-based multi-agent systems. Its unique meta-programming framework and rigorous workflow design enable it to excel in handling complex tasks, greatly advancing the field of multi-agent collaboration research.", + "bbox": [ + 512, + 458, + 895, + 784 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "CoAct: Hou et al. 
[97] proposed CoAct33, a multi-", + "bbox": [ + 531, + 785, + 892, + 800 + ], + "page_idx": 31 + }, + { + "type": "page_footnote", + "text": "28 Werewolf game: https://sites.google.com/view/strategic-language-agents/.", + "bbox": [ + 102, + 822, + 480, + 845 + ], + "page_idx": 31 + }, + { + "type": "page_footnote", + "text": "29MLLM-Tool: https://github.com/MLLM-Tool/MLLM-Tool.", + "bbox": [ + 102, + 846, + 480, + 869 + ], + "page_idx": 31 + }, + { + "type": "page_footnote", + "text": "30Cuisineworld: https://mindagent.github.io/.", + "bbox": [ + 527, + 822, + 826, + 835 + ], + "page_idx": 31 + }, + { + "type": "page_footnote", + "text": "31DeepWisdom: https://www deepwisdom.ai/.", + "bbox": [ + 529, + 835, + 813, + 846 + ], + "page_idx": 31 + }, + { + "type": "page_footnote", + "text": "32MetaGPT: https://github.com/geekan/MetaGPT.", + "bbox": [ + 529, + 846, + 840, + 857 + ], + "page_idx": 31 + }, + { + "type": "page_footnote", + "text": "33CoAct: https://github.com/dxhou/CoAct.", + "bbox": [ + 529, + 857, + 803, + 869 + ], + "page_idx": 31 + }, + { + "type": "page_number", + "text": "31", + "bbox": [ + 487, + 878, + 507, + 890 + ], + "page_idx": 31 + }, + { + "type": "text", + "text": "agent system based on LLMs designed for hierarchical collaboration tasks. The framework consists of six stages: task decomposition, subtask assignment and communication, subtask analysis and execution, feedback collection, progress evaluation, and replanning when necessary. The global planning agent plays a critical role in managing complex tasks. The local execution agent is responsible for executing specific subtasks. This hierarchical framework demonstrates strong adaptability and performance, particularly in complex real-world tasks requiring dynamic replanning and collaborative execution.", + "bbox": [ + 100, + 131, + 480, + 300 + ], + "page_idx": 32 + }, + { + "type": "text", + "text": "AutoGen: Microsoft [103, 146] introduced AutoGen $^{34}$ , a flexible framework for creating and managing multiple autonomous agents to collaboratively complete complex tasks, particularly in programming, planning, and creative writing domains. AutoGen allows users to define distinct agent roles, including specialists, general assistants, and decision-makers, ensuring clear task division and effective coordination. Agents interact in a structured conversational environment, exchanging messages to resolve tasks iteratively. AutoGen introduces feedback loops where agents analyze outputs, refine strategies, and optimize task completion autonomously. Notably, it supports integration with various LLMs, offering developers the flexibility to replace APIs without altering code significantly. In summary, AutoGen facilitates scalable, efficient, and robust multi-agent collaboration, demonstrating potential for applications ranging from enhanced ChatGPT systems to real-world industrial workflows.", + "bbox": [ + 100, + 302, + 480, + 571 + ], + "page_idx": 32 + }, + { + "type": "text", + "text": "XAgent: XAgent Team $^{35}$ [110] developed XAgent $^{36}$ is an open-source, LLMs-driven autonomous agent framework designed for solving complex tasks automatically and efficiently. As shown in Figure 20, it employs a dual-loop architecture: the outer loop for high-level task planning and coordination, and the inner loop for executing subtasks. The PlanAgent in the outer loop generates an initial plan by breaking a complex task into manageable subtasks, organizing them into a task queue. 
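The dual-loop structure described here, an outer planner that decomposes the task into a queue of subtasks and an inner loop that executes them and reports back, can be summarized in a few lines. The following is a conceptual sketch of that pattern only, with stubbed planning and execution; it is not XAgent's actual classes or API.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class SubTask:
    description: str
    result: str = ""
    done: bool = False

def plan(task: str) -> deque:
    """Outer loop: break the task into an ordered queue of subtasks (stubbed)."""
    return deque(SubTask(f"{task} - part {i}") for i in range(3))

def execute(sub: SubTask) -> str:
    """Inner loop: a tool-using agent would carry out the subtask here (stubbed)."""
    return f"output of '{sub.description}'"

def run(task: str) -> list:
    queue, finished = plan(task), []
    while queue:                              # outer loop monitors progress
        sub = queue.popleft()
        sub.result, sub.done = execute(sub), True
        finished.append(sub)
        # A real planner would inspect sub.result here and, if needed, replan by
        # pushing new or revised subtasks back onto the queue.
    return finished

print([s.result for s in run("summarize the benchmark results")])
```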
It iteratively monitors progress, optimizes plans based on feedback from the inner loop, and continues until all subtasks are completed. The inner loop utilizes ToolAgents, which employ various tools like file editors, Python notebooks, web browsers, and shell interfaces within a secure docker environment to execute subtasks. XAgent emphasizes autonomy, safety, and extensibility, allowing users to add new agents or tools", + "bbox": [ + 100, + 571, + 482, + 815 + ], + "page_idx": 32 + }, + { + "type": "image", + "img_path": "images/3b2c1190be538a01a6e3349b9ec31c50208a6352acfb17257690f6b17ec0e736.jpg", + "image_caption": [ + "Figure 20: An structure overview of the XAgents framework, highlighting the Task Node as the starting point, the sequence of Sub-Task Nodes forming the Task Execution Graph (TEG), and the Fusion Node integrating outputs for the final result [110]." + ], + "image_footnote": [], + "bbox": [ + 522, + 129, + 887, + 318 + ], + "page_idx": 32 + }, + { + "type": "text", + "text": "to enhance functionality. Its GUI facilitates user interaction while supporting human collaboration, enabling real-time guidance and assistance for challenging tasks.", + "bbox": [ + 512, + 401, + 892, + 443 + ], + "page_idx": 32 + }, + { + "type": "text", + "text": "PlanAgent: PlanAgent $^{37}$ , developed by Chinese Academy of Sciences and Li Auto [113], introduces a closed-loop motion planning framework for autonomous driving by leveraging multi-modal large language models (MLLMs). The system utilizes MLLM's multi-modal reasoning and commonsense understanding capabilities to generate hierarchical driving commands based on scene information. PlanAgent addresses key limitations of traditional rule-based and learning-based methods, including overfitting in long-tail scenarios and inefficiencies in scene representation. Its novel integration of MLLM-driven reasoning into autonomous driving planning establishes a new benchmark for safety and robustness in real-world applications.", + "bbox": [ + 512, + 443, + 892, + 653 + ], + "page_idx": 32 + }, + { + "type": "text", + "text": "LangGraph: LangChain Inc $^{38}$ [111, 112] introduced LangGraph $^{39}$ , a library designed for building stateful, multi-actor applications with LLMs, enabling the creation of complex agent and multi-agent workflows. Inspired by frameworks like Pregel and Apache Beam, LangGraph provides fine-grained control over workflows and state management while offering advanced features like persistence and human-in-the-loop capabilities. LangGraph stands out for its support of iterative workflows with cycles and branching, which differentiates it from DAG-based frameworks. 
Each", + "bbox": [ + 512, + 656, + 892, + 812 + ], + "page_idx": 32 + }, + { + "type": "page_footnote", + "text": "34 AutoGen: https://github.com/microsoft/autogen.", + "bbox": [ + 115, + 833, + 448, + 846 + ], + "page_idx": 32 + }, + { + "type": "page_footnote", + "text": "35XAgent Team: https://blog.x-agent.net/.", + "bbox": [ + 115, + 846, + 398, + 858 + ], + "page_idx": 32 + }, + { + "type": "page_footnote", + "text": "36XAgent: https://github.com/OpenBMB/XAgent.", + "bbox": [ + 115, + 858, + 421, + 870 + ], + "page_idx": 32 + }, + { + "type": "page_footnote", + "text": "37PlanAgent: http://www.chinasem.cn/planagent.", + "bbox": [ + 527, + 833, + 838, + 846 + ], + "page_idx": 32 + }, + { + "type": "page_footnote", + "text": "38LangChain Inc: https://langchain.ac.cn/langgraph.", + "bbox": [ + 529, + 846, + 870, + 858 + ], + "page_idx": 32 + }, + { + "type": "page_footnote", + "text": "39LangGraph: https://www.langchain.com/langgraph", + "bbox": [ + 529, + 858, + 862, + 870 + ], + "page_idx": 32 + }, + { + "type": "page_number", + "text": "32", + "bbox": [ + 487, + 878, + 507, + 890 + ], + "page_idx": 32 + }, + { + "type": "text", + "text": "graph execution generates a state, dynamically updated by node outputs, enabling reliable and adaptive agent behavior. Its built-in persistence allows workflows to pause and resume at any point, facilitating error recovery and advanced human-agent interactions, including \"time travel\" to modify past actions. LangGraph integrates seamlessly with LangChain [250, 179] but functions independently, offering flexibility for various applications, from dialogue agents and recommendation systems to natural language processing and game development. With streaming support, it processes outputs in real-time, making it suitable for tasks requiring immediate feedback. Its low-level architecture and customizable workflows make LangGraph a powerful tool for creating robust, scalable, and interactive LLMs-based systems.", + "bbox": [ + 100, + 131, + 480, + 359 + ], + "page_idx": 33 + }, + { + "type": "image", + "img_path": "images/1a5057af905a5275003bf194bbea8a1a1f3a4c961a1885581d7f2c3fed924e71.jpg", + "image_caption": [ + "Figure 21: A LangGraph workflow representation demonstrating conditional branching and iterative loops for document retrieval, grading, query transformation, and web search before generating a final output [111, 112]." + ], + "image_footnote": [], + "bbox": [ + 112, + 372, + 473, + 529 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "CrewAI: $\\mathrm{CrewAI^{40}}$ [114, 115] is an open-source framework designed to coordinate AI agents that specialize in role-playing and autonomous operations, enabling efficient collaboration to achieve complex goals. The framework's modular design allows users to create AI teams that operate like real-world teams, with agents assigned specific roles and tasks to ensure clear division of labor and shared objectives. As seen from 22, this framework operates in three primary stages: Agent Creation, where developers define roles with specific goals and tools; Task Management, enabling flexible task assignment and multi-view knowledge enhancement; and Execution and Collaboration, where agents interact in workflows to resolve tasks, with outputs parsed into reusable formats. 
CrewAI integrates seamlessly with the LangChain ecosystem, leveraging its tools and LLM", + "bbox": [ + 102, + 602, + 482, + 832 + ], + "page_idx": 33 + }, + { + "type": "image", + "img_path": "images/1915e555db021c43ce2d26f3d54bee668e602b1321975e13f09e6963a269360d.jpg", + "image_caption": [ + "Figure 22: An overview of the processing workflow for the roleplaying multi-agent framework, CrewAI [114, 115]." + ], + "image_footnote": [], + "bbox": [ + 526, + 136, + 878, + 294 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "capabilities, such as OpenAI and Google Gemini. The framework supports real-time decision-making and task adaptation, with future versions planned to include more advanced collaboration processes, such as consensus-driven workflows and autonomous decision-making. Its innovative features, such as role-based design, dynamic rule generation, and modular task workflows, position CrewAI as a robust and scalable framework for multi-agent collaboration across creative and industrial domains. Overall, CrewAI $^{41}$ offers a cutting-edge approach to multi-agent systems by integrating role-specific autonomy, flexible workflows, and advanced AI toolsets, making it a versatile framework for collaborative AI applications.", + "bbox": [ + 512, + 361, + 892, + 558 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "In summary, these frameworks and applications [108, 97, 103, 110] highlight the rapid advancements in leveraging LLMs for multi-agent collaboration, reasoning, and task execution. Each system introduces unique innovations—ranging from dynamic agent coordination to enhanced reasoning and human-in-the-loop workflows—demonstrating their potential to tackle complex, real-world challenges across various domains [113, 111, 114, 115]. These developments pave the way for more flexible, scalable, and efficient AI-driven solutions.", + "bbox": [ + 512, + 561, + 892, + 702 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "5. Challenges in MARL-based and LLMs-based approaches", + "text_level": 1, + "bbox": [ + 514, + 725, + 890, + 755 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "The extension of single-agent decision-making into multi-agent cooperative contexts introduces several challenges, including developing effective training", + "bbox": [ + 512, + 765, + 892, + 809 + ], + "page_idx": 33 + }, + { + "type": "page_footnote", + "text": "$^{41}$ CrewAI Multi-Agent System platform: https://www.deeplearning.ai/short-courses/ulti-ai-agent-systems-with-crewai/.", + "bbox": [ + 527, + 835, + 890, + 870 + ], + "page_idx": 33 + }, + { + "type": "page_footnote", + "text": "40CrewAI: https://github.com/crewAIInc/crewAI.", + "bbox": [ + 115, + 856, + 435, + 870 + ], + "page_idx": 33 + }, + { + "type": "page_number", + "text": "33", + "bbox": [ + 489, + 878, + 507, + 889 + ], + "page_idx": 33 + }, + { + "type": "text", + "text": "schemes for multiple agents learning and adapting simultaneously, managing increased computational complexity due to the more sophisticated and stochastic environments compared to single-agent settings, and addressing the foundational role of strategic interaction among agents. 
Additionally, ensuring the scalability of algorithms to handle larger observation and action spaces, facilitating coordination and cooperation among agents to achieve consistent goals, and dealing with non-stationary environments where agents' behaviors and strategies continuously evolve are also inevitable and critical challenges.", + "bbox": [ + 100, + 131, + 480, + 300 + ], + "page_idx": 34 + }, + { + "type": "text", + "text": "Applying multi-agent decision-making techniques to real-world problems, which often involve complex and dynamic interactions, further underscores the need for sophisticated and advanced approaches to effectively adapt these ever-increasing complexities. Multi-agent cooperative decision making significantly surpasses single-agent decision-making in terms of environmental stochasticity, complexity, the difficulty of strategy optimization, and so on. As shown in Figure 23, we present a tree diagram summarizing the existing challenges in MARL-based and LLMs-based multi-agent decision-making approaches.", + "bbox": [ + 100, + 302, + 480, + 473 + ], + "page_idx": 34 + }, + { + "type": "text", + "text": "5.1. Challenges in MARL-based multi-agent systems", + "text_level": 1, + "bbox": [ + 102, + 488, + 463, + 502 + ], + "page_idx": 34 + }, + { + "type": "text", + "text": "The advancement of MARL remains in its formative stages, with its potential for enabling effective multiagent coordination and achieving scalability in dynamic environments yet to be fully unlocked [7, 6, 8, 25, 28]. Challenges such as environmental stochasticity, strategy learning difficulty, non-stationarity, scalability, and reward complexity have emerged as major bottlenecks. This section provides an in-depth analysis of these challenges, exploring the current state, technical limitations, and potential solutions in MARL-based multiagent systems to enable more robust, efficient, and scalable decision-making frameworks.", + "bbox": [ + 100, + 505, + 480, + 677 + ], + "page_idx": 34 + }, + { + "type": "text", + "text": "1. Environmental Stochasticity and Complexity: [251, 25] In MARL-based decision-making systems, environmental dynamics are influenced not only by external factors but also by the behaviors and decisions of individual agents. This complex interaction results in high levels of stochasticity and complexity in the environment, making prediction and modeling significantly more difficult. For example, in autonomous driving scenarios [19, 214, 18, 222], the behavior of each vehicle affects the decisions of surrounding vehicles, thereby increasing the overall complexity of the system;", + "bbox": [ + 117, + 686, + 480, + 870 + ], + "page_idx": 34 + }, + { + "type": "text", + "text": "2. Strategy Learning Difficulty: [252, 86, 116] Strategy learning in MARL-based decision-making systems involves multidimensional challenges. Firstly, agents must consider the behaviors of other agents, and this interdependence increases the difficulty of strategy learning. Each agent not only has to optimize its own strategy but also predict and adapt to the strategy changes of others. Additionally, the vast joint action space of multiple agents makes it challenging for any single agent to learn effective joint strategies. The vast joint action space means that each agent needs to explore and learn within a larger decision space, which significantly increases the demands on computational resources and time;", + "bbox": [ + 527, + 131, + 894, + 344 + ], + "page_idx": 34 + }, + { + "type": "text", + "text": "3. 
Complexity of Reward Functions: [251] In MARL-based decision-making systems, reward functions become more complex [85, 150]. The rewards received from the environment in multiagent cooperative techniques are influenced not only by an individual agent's actions and the environment but also by the actions of other agents, which makes the stable policy learning and modeling process more difficult. In other words, an agent's reward depends not only on its own actions but also on the actions of other agents, making it challenging for the reinforcement learning-based multi-agent decision-making policies to converge. This intricate reward mechanism complicates the design and optimization of reward functions. Agents need to evaluate their behaviors' impact on the overall system through complex interactions to learn effective strategies;", + "bbox": [ + 527, + 344, + 894, + 599 + ], + "page_idx": 34 + }, + { + "type": "text", + "text": "4. Coordination and Cooperation: [253, 90, 254, 251] Furthermore, in MARL-based decision-making systems, agents need to coordinate and cooperate to achieve common goals. This requires establishing effective communication and coordination mechanisms among agents to ensure that their actions are globally consistent and complementary [49]. For example, in disaster rescue scenarios [57, 12], multiple drones need to coordinate their actions to cover the maximum area and utilize resources most efficiently;", + "bbox": [ + 527, + 600, + 892, + 756 + ], + "page_idx": 34 + }, + { + "type": "text", + "text": "5. Non-Stationary: [27, 25] The environment of MARL-based decision-making systems is nonstationary because each agent's behavior dynamically changes the state of the environment. This non-stationarity increases the difficulty of strategy learning, as agents must continually adapt to changes in the environment and the behaviors of other agents.", + "bbox": [ + 527, + 757, + 894, + 870 + ], + "page_idx": 34 + }, + { + "type": "page_number", + "text": "34", + "bbox": [ + 489, + 879, + 507, + 889 + ], + "page_idx": 34 + }, + { + "type": "image", + "img_path": "images/3e7322f2e7c56ceef6795b3c9c45da60a4c75359ab1c5eeb3db45b67704bb031.jpg", + "image_caption": [ + "Figure 23: A tree diagram of the challenges in MARL and LLMs-based multi-agent decision-making approaches." + ], + "image_footnote": [], + "bbox": [ + 144, + 127, + 855, + 367 + ], + "page_idx": 35 + }, + { + "type": "text", + "text": "6. Scalability: [87, 252, 25] Addressing scalability in MARL demands innovative approaches to tackle the exponential growth in complexity as the number of agents increases. Techniques that leverage knowledge reuse [172, 138, 168], such as parameter sharing and transfer learning [255, 256], have proven indispensable. When agents share structural similarities, information sharing can streamline the training process, enabling systems to scale more effectively. Transfer learning, in particular, allows agents to adapt knowledge from previous tasks to new, related ones, significantly accelerating learning in dynamic environments. Moreover, curriculum learning [257, 258] plays a pivotal role in tackling the increased complexity of training multiple agents. It enables gradual learning by exposing agents to progressively more challenging tasks, thereby improving policy generalization and accelerating convergence. Robustness remains critical for scalability, as learned policies must withstand environmental disturbances. 
Techniques like policy ensembles and adversarial training [259, 260] enhance resilience by fostering diversity and adaptability in policies. The DTDE paradigm addresses these issues but introduces new complexities [124, 131], such as environmental instability. One promising solution is employing Independent Deep Q-networks (IDQNs) [126, 261, 127], which adapt traditional single-agent approaches to multi-agent contexts.", + "bbox": [ + 115, + 414, + 484, + 843 + ], + "page_idx": 35 + }, + { + "type": "text", + "text": "Overall, the interplay between robustness and scal", + "bbox": [ + 119, + 856, + 482, + 871 + ], + "page_idx": 35 + }, + { + "type": "text", + "text": "ability in MARL is a key area for future exploration. While existing techniques provide strong foundations, integrating methods like meta-learning could offer a way for agents to rapidly adapt to new tasks and environments. Additionally, leveraging recent advances in graph neural networks might enhance the scalability of MARL by modeling agent interactions more efficiently. These directions hold promise for tackling the dynamic and large-scale nature of multi-agent environments.", + "bbox": [ + 512, + 414, + 895, + 544 + ], + "page_idx": 35 + }, + { + "type": "text", + "text": "5.2. Challenges in LLMs reasoning-based multi-agent systems", + "text_level": 1, + "bbox": [ + 514, + 558, + 894, + 586 + ], + "page_idx": 35 + }, + { + "type": "text", + "text": "The development of LLMs-based multi-agent systems is still in its early stages, and its advantages in multi-agent collaboration in cross-domain applications have not been fully realized. In this process, technical bottlenecks, design limitations, and imperfect evaluation methods have revealed numerous challenges. This section provides a comprehensive analysis of these challenges, exploring the current status, bottlenecks, and potential breakthrough directions of LLMs-based multiagent systems in key areas such as multi-modal interaction, system scalability, hallucination control, evaluation, and privacy protection.", + "bbox": [ + 512, + 590, + 894, + 762 + ], + "page_idx": 35 + }, + { + "type": "text", + "text": "1. Expansion of Multi-Modal Environments: [102, 113, 148] Current LLMs-based multi-agent systems primarily focus on text processing and generation, particularly excelling in language-based interactions. However, applications in multi-modal environments remain insufficient. Multi-modal environments require agents to handle various in", + "bbox": [ + 527, + 771, + 895, + 871 + ], + "page_idx": 35 + }, + { + "type": "page_number", + "text": "35", + "bbox": [ + 487, + 878, + 507, + 890 + ], + "page_idx": 35 + }, + { + "type": "text", + "text": "puts from images, audio, video, and physical sensors, while also generating multi-modal outputs, such as descriptions of visual scenes or simulations of physical actions. This cross-modal interaction not only demands stronger model processing capabilities but also requires efficient information fusion between agents. For example, in practical applications, one agent may need to extract visual features from an image and collaborate with other agents through language to accomplish complex tasks, posing new technical challenges. Future research should focus on building unified multi-modal frameworks that enable agents to efficiently understand and collaboratively process various types of data.", + "bbox": [ + 137, + 131, + 478, + 344 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "2. 
Hallucination Problem: [262, 30, 108, 263] The hallucination in LLMs, which involves generating false or inaccurate information, becomes more complex in multi-agent environments. This issue may be triggered within a single agent and further propagated through multi-agent interactions, ultimately negatively impacting the decision-making of the entire system. Because the information flow in multi-agent systems is interconnected, any misjudgment at one node can trigger a chain reaction. This characteristic makes the hallucination problem not only confined to the behavior of individual agents but also poses challenges to the stability of the entire system. Therefore, addressing this issue requires a dual approach: on one hand, improving model training methods to reduce the probability of hallucinations in individual agents; on the other hand, designing information verification mechanisms and propagation management strategies to minimize the spread of hallucinated information within the agent network.", + "bbox": [ + 119, + 346, + 480, + 642 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "3. Acquisition of Collective Intelligence: [34, 176, 253] Current LLMs-based multi-agent systems rely more on real-time feedback for learning rather than offline data, unlike traditional multi-agent systems [264, 265]. This real-time learning approach imposes higher requirements on the interactive environment [262, 266]. Since designing and maintaining a reliable real-time interactive environment is not easy, it limits the scalability of the system. Additionally, existing research mostly focuses on optimizing individual agents, neglecting the potential overall efficiency improvements that could arise from agent collaboration. For example, knowledge sharing and behavioral coordination among agents may create advantages of collective intelligence in certain complex tasks. Fu", + "bbox": [ + 119, + 644, + 480, + 869 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "ture research needs to explore how to fully leverage the potential of collective intelligence by optimizing multi-agent interaction strategies and collaboration mechanisms.", + "bbox": [ + 551, + 131, + 890, + 187 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "4. System Scalability: [262, 267, 253] As the number of agents in LLMs-based multi-agent systems increases, the demand for computational resources grows exponentially, posing challenges in resource-constrained environments. A single LLM agent already requires substantial computational power, and when the system scales to hundreds or thousands of agents, existing hardware and software architectures may not be able to support it. Furthermore, scaling the system introduces new complexities, such as how to efficiently allocate tasks, coordinate, and communicate among numerous agents. Studies have shown that the more agents there are, the more difficult it becomes to coordinate their operations, especially in reducing redundancy and conflicts. Therefore, future work needs to optimize resource utilization through the development of lightweight models and efficient communication protocols, while also exploring the scaling laws for agent expansion.", + "bbox": [ + 529, + 187, + 890, + 470 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "5. Evaluation and Benchmarking: [262, 253] Current evaluation methods and benchmark tests for LLMs-based multi-agent systems are still incomplete. 
Most research focuses solely on the performance of individual agents in specific tasks, neglecting the overall system performance in complex scenarios. Evaluating group behavior is more challenging because the dynamics and complexity of multi-agent systems are difficult to measure with a single metric. Additionally, the lack of a unified testing framework and benchmark data is a major obstacle when comparing the capabilities of different LLMs-based multi-agent systems across domains. Future research needs to develop comprehensive evaluation standards and universal benchmark tests, especially in key fields such as scientific experiments, economic analysis, and urban planning, to provide a basis for system performance comparison and improvement.", + "bbox": [ + 529, + 473, + 890, + 741 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "6. Interaction Efficiency and Cumulative Effects: [262, 268, 105, 254] The complexity of multiagent systems leads to prominent issues of low interaction efficiency and cumulative effects. Low interaction efficiency is mainly reflected in the need for generative agents to frequently query models, making the system inefficient in large-scale applications. On the other hand, because the system state highly depends on the results of the", + "bbox": [ + 529, + 743, + 890, + 869 + ], + "page_idx": 36 + }, + { + "type": "page_number", + "text": "36", + "bbox": [ + 489, + 879, + 507, + 889 + ], + "page_idx": 36 + }, + { + "type": "text", + "text": "previous round, when an error occurs in one round, it may accumulate and propagate to subsequent operations, ultimately degrading the system's overall performance. Future efforts should focus on designing more efficient communication protocols and intermediate result correction mechanisms to reduce interaction costs and the impact of cumulative errors.", + "bbox": [ + 134, + 131, + 480, + 244 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "7. Security and Privacy Issues: [5, 118, 269, 270, 271] Context sharing within multi-agent systems poses risks of introducing noise and privacy leaks. For example, sensitive information shared between agents (such as identities or locations) may be misused by untrusted nodes, thereby threatening the system's security. Addressing this issue requires a two-pronged approach: first, establishing clear organizational structures to restrict information access permissions; second, introducing more advanced trust management mechanisms, such as distributed trust systems based on consensus algorithms, to enhance the system's security and reliability.", + "bbox": [ + 115, + 247, + 480, + 445 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "In summary, LLMs-based multi-agent systems face challenges across multiple domains, including multimodal adaptation, scalability, evaluation methods, collective intelligence development, and privacy protection. These challenges not only reveal the current technological limitations but also provide ample space for future research. With advancements in technology and the deepening of interdisciplinary studies, LLMs-based multi-agent systems are expected to achieve significant breakthroughs both theoretically and in applications.", + "bbox": [ + 102, + 454, + 480, + 596 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "6. 
Future Research Prospects and Emerging Trends", + "text_level": 1, + "bbox": [ + 102, + 615, + 321, + 645 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "Multi-Agent Decision-Making Systems are entering a new era where LLMs are combined with MARL [116]. This combination can improve learning efficiency in complex dynamic environments. It also enables better multi-modal information processing, multi-task collaboration, and long-term planning [7, 25, 30, 28, 266]. In this section, we discuss future prospects and challenges of multi-agent decision-making system (MAS) research from theoretical, technical, application, and ethical perspectives.", + "bbox": [ + 102, + 655, + 480, + 797 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "6.1. Theoretical Development: From Traditional RL to LLMs-Enhanced MARL Framework", + "text_level": 1, + "bbox": [ + 102, + 810, + 480, + 838 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "LLMs-enhanced MARL redefines collaboration in multi-agent systems by introducing natural language", + "bbox": [ + 102, + 841, + 480, + 870 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "understanding and reasoning [14, 253]. Traditional MARL requires agents to learn control strategies in dynamic environments with limited data [8, 5, 27, 24]. However, this approach often faces challenges like low sample efficiency, difficult reward design, and poor generalization. LLMs, with their strong reasoning and knowledge representation capabilities, offer solutions [272, 30]. For example, they can process multi-modal information such as natural language and vision [113, 273, 102, 260], helping agents understand tasks and environments more effectively. This improves learning speed and generalization. Furthermore, LLMs can act as reasoning tools, providing additional context and knowledge to optimize long-term planning.", + "bbox": [ + 512, + 131, + 890, + 330 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "The LLMs-enhanced MARL framework is a groundbreaking integration of LLMs and MARL techniques, which includes roles such as information processor, reward designer, decision-maker, and generator [116]. Figure 24 presents a flowchart illustrating the structure of the LLMs-enhanced MARL framework, highlighting its four key roles. These roles work together to streamline task complexity and improve learning. For instance, LLMs can translate unstructured task descriptions into formal task semantics, reducing learning difficulty. They can also design advanced reward functions to accelerate learning in sparse-reward environments. These roles collectively address the challenges of task complexity, data efficiency, and generalization in MARL [29, 24, 251], while streamlining processes like reward design and policy generation. As shown in Table 4, we summarize recent advancements in LLMs-enhanced MARL methods across these four roles into a comprehensive table for clarity and comparison.", + "bbox": [ + 512, + 331, + 892, + 602 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "6.2. Technical Integration: From Multi-Modal to Multi-Task Optimization", + "text_level": 1, + "bbox": [ + 514, + 620, + 890, + 649 + ], + "page_idx": 37 + }, + { + "type": "text", + "text": "Combining LLMs and MARL significantly improves the ability to handle multi-modal information, multitask learning, and long-term task planning [273, 102, 272, 30]. Traditional MARL often requires separate modules to process visual, textual, or other forms of data. 
In contrast, LLMs can unify this processing, enabling comprehensive environment understanding. For example, in a robot task involving voice commands and visual inputs, LLMs can process both types of data simultaneously and generate actions directly. Additionally, LLMs provide a distinct advantage in multi-task learning due to their pre-trained knowledge [30, 57]. Through knowledge transfer, they help agents share experiences across different tasks, improving adaptability [255, 66]. For long-term planning, LLMs can break", + "bbox": [ + 512, + 657, + 892, + 870 + ], + "page_idx": 37 + }, + { + "type": "page_number", + "text": "37", + "bbox": [ + 487, + 879, + 507, + 889 + ], + "page_idx": 37 + }, + { + "type": "image", + "img_path": "images/ca0f8bf101575fe8e2080e3589b9515f531ad898f95ecfd4f8b80ccff013637c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 107, + 128, + 297, + 221 + ], + "page_idx": 38 + }, + { + "type": "image", + "img_path": "images/27395decb4a73fcf8ca53d914c366bd56d6e466f64923b049c2683ba7aa9ea47.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 302, + 128, + 494, + 221 + ], + "page_idx": 38 + }, + { + "type": "image", + "img_path": "images/001ede390b93b08a552ac07a9cab97f26b1666902dd4fca9440365ed87072e18.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 500, + 128, + 690, + 221 + ], + "page_idx": 38 + }, + { + "type": "image", + "img_path": "images/345c901536a9f3d5c821af422611f5d0a0f5220610e7455f10907432ce192167.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 699, + 128, + 884, + 221 + ], + "page_idx": 38 + }, + { + "type": "image", + "img_path": "images/48d61eca733bb6936526aa1e4ce4e6d1571acab57c3ce7861fae6cf215f23163.jpg", + "image_caption": [ + "a) LLM as information processor." + ], + "image_footnote": [], + "bbox": [ + 105, + 223, + 297, + 315 + ], + "page_idx": 38 + }, + { + "type": "image", + "img_path": "images/f239702c9eb5ab20c1490b9c5fc64499a1d8aee5b19d5946cbdc5157061ff5da.jpg", + "image_caption": [ + "b) LLM as reward designer." + ], + "image_footnote": [], + "bbox": [ + 302, + 223, + 494, + 315 + ], + "page_idx": 38 + }, + { + "type": "image", + "img_path": "images/11d07db349371dc5a08bc551dd94c462b655cd1302d907bb1c69d7a03ac850ac.jpg", + "image_caption": [ + "c) LLM as decision-maker", + "Figure 24: Schematic diagram of the LLMs-enhanced MARL framework based on Cao et al. [116], showcasing its core roles: information processor (a), reward designer (b), decision-maker (c), and generator (d)." + ], + "image_footnote": [], + "bbox": [ + 502, + 223, + 690, + 315 + ], + "page_idx": 38 + }, + { + "type": "image", + "img_path": "images/f0e13440a3152f156a53f69aa98085530dd0b8b475bfa74d08afe44a4dc001b4.jpg", + "image_caption": [ + "d) LLM as generator" + ], + "image_footnote": [], + "bbox": [ + 699, + 223, + 884, + 315 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "down complex tasks into subtasks, addressing challenges like the credit assignment problem. This capability is particularly useful in tasks requiring extended reasoning, such as construction tasks in Minecraft. In optimizing reinforcement learning's sample efficiency [308, 281], the generative capabilities of LLMs can provide agents with additional virtual samples through high-fidelity environment simulations [116, 308]. This not only reduces the cost of real-world learning but also offers high-quality trajectories that serve as valuable references for policy optimization. 
Furthermore, in sparse reward environments, LLMs can accelerate policy learning by automatically designing reward signals.", + "bbox": [ + 100, + 388, + 480, + 574 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "6.3. Application Expansion: Driving Intelligent Collaboration in Complex Scenarios", + "text_level": 1, + "bbox": [ + 102, + 592, + 480, + 620 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "The potential of LLMs-enhanced MARL in practical applications is enormous, especially in scenarios that require complex collaboration and real-time decision-making [116, 300, 183, 171]. For example, in the field of autonomous driving [19, 18, 17], the integration of LLMs with MARL can simultaneously process sensor data and natural language information (such as traffic regulations, passenger instructions, etc. [92]), thereby enhancing the safety and accuracy of decision-making [214, 74]. In the field of collaborative robots, LLMs can help multiple robots form a more intuitive communication mechanism, achieving highly complex task division and dynamic adjustment. In addition, in multi-objective optimization tasks such as smart grids and intelligent healthcare, LLMs can provide domain knowledge and optimization suggestions to assist reinforcement learning, design more practical reward functions, and thus", + "bbox": [ + 100, + 629, + 482, + 870 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "improve the overall efficiency of the system. In dynamic and complex environments such as disaster relief [12], LLMs can dynamically allocate roles and responsibilities according to task requirements, helping multiagent systems quickly adapt to changing environments and highly complex task divisions [296, 295, 297]. This capability provides a solid technical support for a wide range of applications.", + "bbox": [ + 512, + 388, + 892, + 502 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "6.4. Human Society Coordination: Balancing Technology and Ethics", + "text_level": 1, + "bbox": [ + 514, + 514, + 890, + 543 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "The integration of LLMs into MARL opens new avenues for advancing multi-agent systems, while also highlighting exciting research directions in improving technical efficiency and addressing ethical considerations. For instance, enhancing the robustness of LLMs in unfamiliar environments offers the opportunity to develop strategies for minimizing biases and hallucinations, thereby improving decision accuracy. Furthermore, the computational complexity and resource demands of LLMs present a chance to innovate in optimizing inference efficiency and scalability. This is especially relevant in dynamic multi-agent environments where real-time responsiveness is critical.", + "bbox": [ + 512, + 544, + 892, + 728 + ], + "page_idx": 38 + }, + { + "type": "text", + "text": "From an ethical perspective, the incorporation of LLMs calls for advancements in ensuring data privacy, safeguarding against adversarial attacks, and establishing clear accountability frameworks for AI-driven decisions. Sensitive domains such as healthcare and disaster response could particularly benefit from focused research on protecting sensitive information and enhancing system resilience. 
Additionally, improving the transparency and explainability of LLMs-driven decisions is another promising area for exploration, as it", + "bbox": [ + 512, + 728, + 892, + 870 + ], + "page_idx": 38 + }, + { + "type": "page_number", + "text": "38", + "bbox": [ + 487, + 879, + 507, + 889 + ], + "page_idx": 38 + }, + { + "type": "table", + "img_path": "images/e501f1a056b6434b72169c110125e7e9ecb0925eeb6b72e7b0f5a9a2a151b4e4.jpg", + "table_caption": [ + "Table 4: Summary of recent studies categorized by the four key roles of LLMs in MARL: Information Processor, Reward Designer, Decision-Maker, and Generator, highlighting their respective contributions and applications." + ], + "table_footnote": [], + "table_body": "
Method Types (LLM as ...) | Researchers / Works / Refs.
Information Processor | Poudel et al. (ReCoRe) [274], Choi et al. (ConPE) [275], Paischer et al. (HELM) [276] and (Semantic HELM) [277], Radford et al. (CLIP) [278], Oord et al. (CPC) [279], Michael et al. (CURL) [280], Schwarzer et al. (SPR) [281]
Reward Designer | Kwon et al. (LLMrewardRL) [282], Song et al. (Self-Refined LLM) [283], Wu et al. (Read & Reward) [284], Carta et al. (GLAM) [285], Chu et al. (Lafite-RL) [286], Kim et al. (ARP) [287], Yu et al. [288], Adeniji et al. (LAMP) [289], Madaan et al. (Self-Refine) [290], Ma et al. (Eureka) [291], Xie et al. (Text2Reward) [292]
Decision-Maker | Janner et al. (TT-Offline RL) [293], Shi et al. (LaMo) [294], Li et al. (LLM scaffold) [295], Mezghani et al. (text BabyAI) [296], Grigsby et al. (AMAGO) [297], Zitkovich et al. (RT-2) [298], Yao et al. (CALM) [299], Hu et al. (instructRL) [300], Zhou et al. (LLM4Teach) [301]
Generator | Chen et al. (TransDreamer) [302], Das et al. (S2E) [303], Lin et al. (Dynalang) [304], Robine et al. (TWM) [305], Poudel et al. (LanGWM) [306], Lin et al. (HomeGrid) [307]
", + "bbox": [ + 107, + 186, + 500, + 508 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "would increase trust and user confidence in multi-agent systems.", + "bbox": [ + 102, + 558, + 482, + 588 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "By addressing these areas, future research can maximize the potential of LLMs-enhanced MARL systems, ensuring they are both technically effective and ethically sound in diverse, real-world applications.", + "bbox": [ + 102, + 601, + 482, + 659 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "Overall, the combination of LLMs and MARL brings new momentum to research and applications in multiagent systems. By enhancing collaboration through natural language understanding and leveraging large-scale knowledge, these systems can achieve greater efficiency and robustness in complex scenarios. However, fully unlocking their potential requires further exploration in theoretical methods, technological development, and ethical practices. With systematic advancements in these areas, LLMs-enhanced MARL can become the foundation for next-generation intelligent decision-making systems, transforming fields like autonomous driving, collaborative robotics, and healthcare, while shaping the future of AI research.", + "bbox": [ + 102, + 671, + 482, + 870 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "7. Conclusion", + "text_level": 1, + "bbox": [ + 514, + 131, + 621, + 145 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "Multi-agent cooperative decision-making has demonstrated remarkable potential in addressing complex tasks through intelligent collaboration and adaptability. In this survey, we systematically review the evolution of multi-agent systems, highlighting the shift from traditional methods, such as rule-based and game-theory approaches, to advanced paradigms like MARL and LLMs. We differentiate these methods by examining their unique capabilities, challenges, and applications in diverse environments, paying particular attention to dynamic and uncertain settings. In addition, we explore the critical role of simulation environments as a bridge between theoretical advancements and real-world implementation, emphasizing their influence on agent interaction, learning, and decision-making. Practical applications of multi-agent systems in domains such as autonomous driving, disaster response, and robotics further underscore their transformative potential. By summarizing advanced multi-agent decision-making methodologies, datasets, benchmarks, and future research directions, this survey aims to provide a comprehensive resource for researchers and practitioners. We hope it inspires future studies to address existing challenges, such as improving inter-agent communication and adaptability, while leveraging the innovative potential of DRL and LLMs-based approaches to advance multi-agent cooperative decision-making.", + "bbox": [ + 512, + 153, + 894, + 537 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "Acknowledgment", + "text_level": 1, + "bbox": [ + 514, + 554, + 648, + 569 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "The corresponding authors of this survey are B. Zhao and G.Yang from Xi'an Jiaotong University and Imperial College London. 
Guang Yang was supported in part by the ERC IMI (101005122), the H2020 (952172), the MRC (MC/PC/21013), the Royal Society (IEC\\NSFC\\211235), the NVIDIA Academic Hardware Grant Program, the SABER project supported by Boehringer Ingelheim Ltd, NIHR Imperial Biomedical Research Centre (RDA01), Wellcome Leap Dynamic Resilience, UKRI guarantee funding for Horizon Europe MSCA Postdoctoral Fellowships (EP/Z002206/1), and the UKRI Future Leaders Fellowship (MR/V023799/1). The authors would like to thank the editors and anonymous reviewers, who significantly enhanced the quality of the survey.", + "bbox": [ + 512, + 575, + 894, + 790 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "Declaration of Generative AI and AI-assisted Technologies in the Writing Process", + "text_level": 1, + "bbox": [ + 512, + 806, + 892, + 835 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "During the preparation of this work, the authors utilized generative AI and AI-assisted technologies for", + "bbox": [ + 512, + 841, + 894, + 871 + ], + "page_idx": 39 + }, + { + "type": "page_number", + "text": "39", + "bbox": [ + 487, + 878, + 509, + 890 + ], + "page_idx": 39 + }, + { + "type": "text", + "text": "proofreading and enhancing readability and language clarity in certain sections. The authors have carefully reviewed these contents to ensure accuracy and completeness, acknowledging that AI can generate authoritative-sounding output that may be incorrect, incomplete, or biased.", + "bbox": [ + 102, + 131, + 480, + 217 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "Appendix A. Technological Comparisons between Single-Agent and Multi-Agent (Under Reinforcement Learning)", + "text_level": 1, + "bbox": [ + 104, + 237, + 480, + 280 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "Here, we discuss a series of technological comparisons of both DRL-based single-agent and MARL-based multi-agent research.", + "bbox": [ + 102, + 288, + 480, + 330 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "In solving these single-agent sequential decision-making problems, Markov Decision Processes (MDP) is a powerful mathematical modeling framework, especially in uncertain environments. Since the decision-making process of an agent can inherently be modeled as a sequence of decisions, the single-agent decision-making process can be formulated as an typical MDP, similar to a Markov chain.", + "bbox": [ + 102, + 331, + 480, + 443 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "In contrast to single-agent DRL systems, multi-agent systems under the MARL techniques involve multiple agents interacting within a shared environment. POMDP is a powerful mathematical modeling framework. It is an extension of the MDP framework that is particularly well-suited for modeling decision-making in environments where the agent does not have full visibility of the entire state space. POMDPs extend MDPs to environments where the agent cannot fully observe the underlying state. 
Instead, the agent maintains a belief state, which is a probability distribution over the possible states.", + "bbox": [ + 102, + 445, + 480, + 613 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "Figure A.25 provides a comparative illustration of Markov Decision Processes (MDP) and Partially Observable Markov Decision Processes (POMDP), which correspond to single-agent and multi-agent reinforcement learning paradigms, respectively.", + "bbox": [ + 102, + 615, + 480, + 686 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "The left side of Figure A.25 depicts an MDP, which models single-agent decision-making in a fully observable environment. The agent selects an action $a$ from the action space $A$ based on the current state $s$ from the state space $S$ . The environment transitions to a new state $s'$ following the transition probability function $P(s' \\mid s, a)$ , and the agent receives a reward $r$ . The objective is to optimize a policy $\\pi^*$ that maximizes the cumulative reward. Since the entire state is observable, the decision-making process is relatively straightforward.", + "bbox": [ + 102, + 686, + 480, + 827 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "On the right side, the POMDP framework extends MDPs to multi-agent settings where agents operate under partial observability. Each agent $i$ receives only a", + "bbox": [ + 102, + 828, + 480, + 870 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "partial observation $o^i$ rather than the full state $S$ . The agents take individual actions $a^i$ , forming a joint action $a_t$ , which influences state transitions and results in individual rewards $r^i$ . The observations are generated according to the observation function $Z(o \\mid s', a)$ , requiring each agent to infer the missing state information and maintain a belief state for effective decision-making.", + "bbox": [ + 512, + 131, + 892, + 231 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "In summary, MDPs are well-suited for single-agent systems, where the environment is static and fully observable, allowing the agent to make optimal decisions based on complete knowledge of the state. On the other hand, POMDPs are crucial for multi-agent reinforcement learning scenarios, where multiple agents interact dynamically in an uncertain environment with limited information. This setting introduces challenges such as coordination, competition, and reward interdependencies, making decision-making significantly more complex.", + "bbox": [ + 512, + 231, + 892, + 387 + ], + "page_idx": 40 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 515, + 407, + 598, + 420 + ], + "page_idx": 40 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, M. Riedmiller, Playing atari with deep reinforcement learning (2013). arXiv:1312.5602. URL https://arxiv.org/abs/1312.5602", + "[2] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, D. Hassabis, Human-level control through deep reinforcement learning, Nature 518 (7540) (2015) 529-533. doi:10.1038/nature14236. URL https://doi.org/10.1038/nature14236", + "[3] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. 
Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, D. Hassabis, Mastering the game of go with deep neural networks and tree search, Nature 529 (7587) (2016) 484-489. doi:10.1038/nature16961. URL https://doi.org/10.1038/nature16961", + "[4] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, D. Hassabis, Mastering the game of go without human knowledge, Nature 550 (7676) (2017) 354-359. doi:10.1038/nature24270. URL https://doi.org/10.1038/nature24270", + "[5] T. Li, K. Zhu, N. C. Luong, D. Niyato, Q. Wu, Y. Zhang, B. Chen, Applications of multi-agent reinforcement learning in future internet: A comprehensive survey, IEEE Communications Surveys & Tutorials 24 (2) (2022) 1240-1279. doi: 10.1109/COMST.2022.3160697.", + "[6] S. Gronauer, K. Diepold, Multi-agent deep reinforcement learning: a survey, Artificial Intelligence Review 55 (2) (2022) 895-943. doi:10.1007/s10462-021-09996-w. URL https://doi.org/10.1007/s10462-021-09996-w", + "[7] P. Yadav, A. Mishra, S. Kim, A comprehensive survey on multi-agent reinforcement learning for connected and automated vehicles, Sensors 23 (10) (2023). doi:10.3390/" + ], + "bbox": [ + 529, + 430, + 890, + 869 + ], + "page_idx": 40 + }, + { + "type": "page_number", + "text": "40", + "bbox": [ + 487, + 878, + 507, + 889 + ], + "page_idx": 40 + }, + { + "type": "image", + "img_path": "images/424dcd0683a6366cd83793e431932c1cd0e1322fbc5f7fa3309a42502a8009c0.jpg", + "image_caption": [ + "Markov Decision Processes (MDP) $\\langle S, A, P, R, \\gamma \\rangle$" + ], + "image_footnote": [], + "bbox": [ + 136, + 133, + 480, + 300 + ], + "page_idx": 41 + }, + { + "type": "image", + "img_path": "images/82ea2fe70ae861282e612beb352235a6bdbd2b72f4eb15bd6ddd82ee786cb44d.jpg", + "image_caption": [ + "Partially Observable Markov Decision Processes (POMDP) $\\langle S, A, P, R, \\gamma, O, Z \\rangle$", + "Figure A.25: The Markov Decision Process modeling for the single-agent reinforcement learning paradigm (left) and the Partially Observable Markov Decision Process modeling for the multi-agent reinforcement learning paradigm (right)." + ], + "image_footnote": [], + "bbox": [ + 480, + 133, + 860, + 297 + ], + "page_idx": 41 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "s23104710.", + "URL https://www.mdpi.com/1424-8220/23/10/4710", + "[8] J. Orr, A. Dutta, Multi-agent deep reinforcement learning for multi-robot applications: A survey, Sensors 23 (7) (2023). doi:10.3390/s23073625. URL https://www.mdpi.com/1424-8220/23/7/3625", + "[9] W. Jin, B. Zhao, Y. Zhang, J. Huang, H. Yu, Wordtrans-absa: Enhancing aspect-based sentiment analysis with masked language modeling for affective token prediction, Expert Systems with Applications 238 (2024) 122289. doi:https://doi.org/10.1016/j.eswa.2023.122289. URL https://www.sciencedirect.com/science/article/pii/S0957417423027914", + "[10] B. Zhao, W. Jin, Y. Zhang, S. Huang, G. Yang, Prompt learning for metonymy resolution: Enhancing performance with internal prior knowledge of pre-trained language models, Knowledge-Based Systems 279 (2023) 110928. doi:https://doi.org/10.1016/j.knosys.2023.110928. URL https://www.sciencedirect.com/science/article/pii/S0950705123006780", + "[11] A. Seewald, C. J. Lerch, M. Chancan, A. M. Dollar, I. 
Abraham, Energy-aware ergodic search: Continuous exploration for multi-agent systems with battery constraints (2024). arXiv: 2310.09470. URL https://arxiv.org/abs/2310.09470", + "[12] M. M. H. Qazzaz, S. A. R. Zaidi, D. C. McLernon, A. Salama, A. A. Al-Hameed, Optimizing search and rescue UAV connectivity in challenging terrain through multi q-learning (2024). arXiv:2405.10042. URL https://arxiv.org/abs/2405.10042", + "[13] G. T. Papadopoulos, M. Antona, C. Stephanidis, Towards open and expandable cognitive ai architectures for large-scale multi-agent human-robot collaborative learning, IEEE Access 9 (2021) 73890-73909. doi:10.1109/ACCESS.2021.3080517.", + "[14] M. D. Silva, R. Regnier, M. Makarov, G. Avrin, D. Dumur, Evaluation of intelligent collaborative robots: a review, in: 2023 IEEE/SICE International Symposium on System Integration (SII), 2023, pp. 1-7. doi:10.1109/SII55687.2023.10039365.", + "[15] Y. Huang, S. Wu, Z. Mu, X. Long, S. Chu, G. Zhao, A multi" + ], + "bbox": [ + 112, + 400, + 480, + 860 + ], + "page_idx": 41 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "agent reinforcement learning method for swarm robots in space collaborative exploration, in: 2020 6th International Conference on Control, Automation and Robotics (ICCAR), 2020, pp. 139-144. doi:10.1109/ICCAR49639.2020.9107997.", + "[16] C. G. Cena, P. F. Cardenas, R. S. Pazmino, L. Puglisi, R. A. Santonja, A cooperative multi-agent robotics system: Design and modelling, Expert Systems with Applications 40 (12) (2013) 4737-4748. doi:https://doi.org/10.1016/j.eswa.2013.01.048. URL https://www.sciencedirect.com/science/article/pii/S0957417413000791", + "[17] S. Jayawardana, V. Jayawardana, K. Vidanage, C. Wu, Multi-behavior learning for socially compatible autonomous driving, in: 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), 2023, pp. 4422-4427. doi:10.1109/ITSC57777.2023.10422120.", + "[18] L. Wen, J. Duan, S. E. Li, S. Xu, H. Peng, Safe reinforcement learning for autonomous vehicles through parallel constrained policy optimization, in: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), 2020, pp. 1-7. doi:10.1109/ITSC45102.2020.9294262.", + "[19] L. Weiwei, H. Wenxuan, J. Wei, L. Lanxin, G. Lingping, L. Yong, Learning to model diverse driving behaviors in highly interactive autonomous driving scenarios with multi-agent reinforcement learning (2024). arXiv:2402.13481. URL https://arxiv.org/abs/2402.13481", + "[20] Y. Xue, W. Chen, Multi-agent deep reinforcement learning for uavs navigation in unknown complex environment, IEEE Transactions on Intelligent Vehicles 9 (1) (2024) 2290-2303. doi:10.1109/TIV.2023.3298292.", + "[21] S. Rezwan, W. Choi, Artificial intelligence approaches for uav navigation: Recent advances and future challenges, IEEE Access 10 (2022) 26320-26339. doi:10.1109/ACCESS.2022.3157626.", + "[22] B. Al Baroomi, T. Myo, M. R. Ahmed, A. Al Shibli, M. H. Marhaban, M. S. Kaiser, Ant colony optimization-based path planning for uav navigation in dynamic environments, in: 2023 7th International Conference on Automation, Control and Robots (ICACR), 2023, pp. 168-173. doi:10.1109/ICACR59381.2023.10314603.", + "[23] T. Samad, S. Iqbal, A. W. Malik, O. Arif, P. 
Bloodsworth, A" + ], + "bbox": [ + 524, + 400, + 890, + 860 + ], + "page_idx": 41 + }, + { + "type": "page_number", + "text": "41", + "bbox": [ + 487, + 879, + 507, + 889 + ], + "page_idx": 41 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "multi-agent framework for cloud-based management of collaborative robots, International Journal of Advanced Robotic Systems 15 (4) (2018). doi:10.1177/1729881418785073.", + "[24] W. Du, S. Ding, A survey on multi-agent deep reinforcement learning: from the perspective of challenges and applications, Artificial Intelligence Review 54 (5) (2021) 3215-3238. doi: 10.1007/s10462-020-09938-y. URL https://doi.org/10.1007/s10462-020-09938-y", + "[25] Z. Ning, L. Xie, A survey on multi-agent reinforcement learning and its application, Journal of Automation and Intelligence 3 (2) (2024) 73-91. doi:https://doi.org/10.1016/j.jai.2024.02.003. URL https://www.sciencedirect.com/science/article/pii/S2949855424000042", + "[26] Q. Yang, R. Liu, Understanding the application of utility theory in robotics and artificial intelligence: A survey (2023). arXiv: 2306.09445. URL https://arxiv.org/abs/2306.09445", + "[27] P. Hernandez-Leal, M. Kaisers, T. Baarslag, E. M. de Cote, A survey of learning in multiagent environments: Dealing with non-stationarity (2019). arXiv:1707.09183. URL https://arxiv.org/abs/1707.09183", + "[28] C. Zhu, M. Dastani, S. Wang, A survey of multi-agent deep reinforcement learning with communication, Autonomous Agents and Multi-Agent Systems 38 (1) (2024) 4. doi: 10.1007/s10458-023-09633-6. URL https://doi.org/10.1007/s10458-023-09633-6", + "[29] T. T. Nguyen, N. D. Nguyen, S. Nahavandi, Deep reinforcement learning for multiagent systems: A review of challenges, solutions, and applications, IEEE Transactions on Cybernetics 50 (9) (2020) 3826-3839. doi:10.1109/TCYB.2020.2977374.", + "[30] L. Wang, C. Ma, X. Feng, Z. Zhang, H. Yang, J. Zhang, Z. Chen, J. Tang, X. Chen, Y. Lin, W. X. Zhao, Z. Wei, J. Wen, A survey on large language model based autonomous agents, Frontiers of Computer Science 18 (6) (2024) 186345. doi:10.1007/s11704-024-40231-1. URL https://doi.org/10.1007/s11704-024-40231-1", + "[31] B. Zhao, W. Jin, J. Del Ser, G. Yang, Chatagri: Exploring potentials of chatgpt on cross-linguistic agricultural text classification, Neurocomputing 557 (2023) 126708. doi: https://doi.org/10.1016/j.neucom.2023.126708. URL https://www.sciencedirect.com/science/article/pii/S0925231223008317", + "[32] T. Miki, M. Nagao, H. Kobayashi, T. Nakamura, A simple rule based multi-agent control algorithm and its implementation using autonomous mobile robots, in: 2010 World Automation Congress, 2010, pp. 1-6.", + "[33] H. Yarahmadi, H. Navidi, M. Challenger, Improving the resource allocation in IoT systems based on the integration of reinforcement learning and rule-based approaches in multi-agent systems, in: 2024 8th International Conference on Smart Cities, Internet of Things and Applications (SCIIoT), 2024, pp. 135-141. doi:10.1109/SCIIoT62588.2024.10570102.", + "[34] S.-H. Wu, V.-W. Soo, A fuzzy game theoretic approach to multi-agent coordination, in: T. Ishida (Ed.), Multiagent Platforms, Springer Berlin Heidelberg, Berlin, Heidelberg, 1999, pp. 76-87. doi:10.1007/3-540-48826-X\\_6.", + "[35] H. Zhang, J. Zhang, G.-H. Yang, Y. Luo, Leader-based optimal coordination control for the consensus problem of multiagent differential games via fuzzy adaptive dynamic programming, IEEE Transactions on Fuzzy Systems 23 (1) (2015) 152-163. 
doi:10.1109/TFUZZ.2014.2310238.", + "[36] F. Ren, M. Zhang, Q. Bai, A fuzzy-based approach for partner selection in multi-agent systems, in: 6th IEEE/ACIS Interna" + ], + "bbox": [ + 112, + 134, + 480, + 865 + ], + "page_idx": 42 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "tional Conference on Computer and Information Science (ICIS 2007), 2007, pp. 457-462. doi:10.1109/ICIS.2007.21.", + "[37] D. Gu, H. Hu, Fuzzy multi-agent cooperative q-learning, in: 2005 IEEE International Conference on Information Acquisition, 2005, p. 5 pp. doi:10.1109/ICIA.2005.1635080.", + "[38] J. Wang, Y. Hong, J. Wang, J. Xu, Y. Tang, Q.-L. Han, J. Kurths, Cooperative and competitive multi-agent systems: From optimization to games, IEEE/CAA Journal of Automatica Sinica 9 (5) (2022) 763-783. doi:10.1109/JAS.2022.105506.", + "[39] Y. Guo, Q. Pan, Q. Sun, C. Zhao, D. Wang, M. Feng, Cooperative game-based multi-agent path planning with obstacle avoidance, in: 2019 IEEE 28th International Symposium on Industrial Electronics (ISIE), 2019, pp. 1385-1390. doi: 10.1109/ISIE.2019.8781205.", + "[40] D. Schwung, A. Schwung, S. X. Ding, Distributed self-optimization of modular production units: A state-based potential game approach, IEEE Transactions on Cybernetics 52 (4) (2022) 2174–2185. doi:10.1109/TCYB.2020.3006620.", + "[41] X. Wang, J. Wang, J. Chen, Y. Yang, L. Kong, X. Liu, L. Jia, Y. Xu, A game-theoretic learning framework for multi-agent intelligent wireless networks (2019). arXiv:1812.01267. URL https://arxiv.org/abs/1812.01267", + "[42] W. Lin, Y. Chen, Q. Q. Wang, J. Zeng, J. Liu, Multi-agents based distributed-energy-resource management for intelligent microgrid with potential game algorithm, in: IECON 2017 - 43rd Annual Conference of the IEEE Industrial Electronics Society, 2017, pp. 7795-7800. doi:10.1109/IECON.2017.8217366.", + "[43] H. Wang, Z. Ning, H. Luo, Y. Jiang, M. Huo, Game-based adaptive optimization approach for multi-agent systems, in: 2023 IEEE International Conference on Industrial Technology (ICIT), 2023, pp. 1-5. doi:10.1109/ICIT58465.2023.10143172.", + "[44] L. Bull, Evolutionary computing in multi-agent environments: Operators, in: V. W. Porto, N. Saravanan, D. Waagen, A. E. Eiben (Eds.), Evolutionary Programming VII, Springer Berlin Heidelberg, Berlin, Heidelberg, 1998, pp. 43-52.", + "[45] J. Liu, W. Zhong, L. Jiao, Multi-Agent Evolutionary Model for Global Numerical Optimization, Springer Berlin Heidelberg, Berlin, Heidelberg, 2010, pp. 13-48. doi:10.1007/978-3-642-13425-8\\2. URL https://doi.org/10.1007/978-3-642-13425-8_2", + "[46] D. Bloembergen, K. Tuyls, D. Hennes, M. Kaisers, Evolutionary dynamics of multi-agent learning: a survey, J. Artif. Int. Res. 53 (1) (2015) 659-697.", + "[47] D. Klijn, A. E. Eiben, A coevolutionary approach to deep multi-agent reinforcement learning, in: Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '21, Association for Computing Machinery, New York, NY, USA, 2021, p. 283-284. doi:10.1145/3449726.3459576. URL https://doi.org/10.1145/3449726.3459576", + "[48] S. Yuan, K. Song, J. Chen, X. Tan, D. Li, D. Yang, Evoagent: Towards automatic multi-agent generation via evolutionary algorithms (2024). arXiv:2406.14228. URL https://arxiv.org/abs/2406.14228", + "[49] W. Zhang, H. Liu, Evolutionary game analysis of multi-agent cooperation strategy analysis in agricultural water conservancy ppp project under digitization background, Scientific Reports 14 (1) (2024) 22915. doi:10.1038/s41598-024-74065-5. 
URL https://doi.org/10.1038/s41598-024-74065-5", + "[50] T. Rashid, M. Samvelyan, C. Schroeder, G. Farquhar, J. Foerster, S. Whiteson, QMIX: Monotonic value function" + ], + "bbox": [ + 524, + 134, + 890, + 864 + ], + "page_idx": 42 + }, + { + "type": "page_number", + "text": "42", + "bbox": [ + 487, + 879, + 507, + 889 + ], + "page_idx": 42 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "factorisation for deep multi-agent reinforcement learning, in: J. Dy, A. Krause (Eds.), Proceedings of the 35th International Conference on Machine Learning, Vol. 80 of Proceedings of Machine Learning Research, PMLR, 2018, pp. 4295-4304. URL https://proceedings.mlr.press/v80/ rashid18a.html", + "[51] L. Kraemer, B. Banerjee, Multi-agent reinforcement learning as a rehearsal for decentralized planning, Neurocomputing 190 (2016) 82-94. doi:https://doi.org/10.1016/j.neucom.2016.01.031. URL https://www.sciencedirect.com/science/article/pii/S0925231216000783", + "[52] B. Yu, Z. Cai, J. He, Fast-qmix: Accelerating deep multiagent reinforcement learning with virtual weighted q-values, in: 2021 2nd International Conference on Electronics, Communications and Information Technology (CECIT), 2021, pp. 594-599. doi:10.1109/CECIT53797.2021.00110.", + "[53] K. Son, D. Kim, W. J. Kang, D. E. Hostallero, Y. Yi, QTRAN: Learning to factorize with transformation for cooperative multi-agent reinforcement learning, in: K. Chaudhuri, R. Salakhutdinov (Eds.), Proceedings of the 36th International Conference on Machine Learning, Vol. 97 of Proceedings of Machine Learning Research, PMLR, 2019, pp. 5887-5896. URL https://proceedings.mlr.press/v97/son19a.htm1", + "[54] T. Rashid, G. Farquhar, B. Peng, S. Whiteson, Weighted qmix: expanding monotonic value function factorisation for deep multi-agent reinforcement learning, in: Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS '20, Curran Associates Inc., Red Hook, NY, USA, 2020.", + "[55] K. Kurach, A. Raichuk, P. Stanczyk, M. Zajac, O. Bachem, L. Espeholt, C. Riquelme, D. Vincent, M. Michalski, O. Bousquet, S. Gelly, Google research football: A novel reinforcement learning environment, Proceedings of the AAAI Conference on Artificial Intelligence 34 (04) (2020) 4501-4510. doi:10.1609/aaai.v34i04.5878. URL https://doi.org/10.1609/aaai.v34i04.5878", + "[56] J. Wang, Z. Ren, T. Liu, Y. Yu, C. Zhang, QPLEX: duplex dueling multi-agent q-learning, in: 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021, OpenReview.net, 2021. URL https://openreview.net/forum?id=RcmkOxxIQV", + "[57] P. Li, V. Menon, B. Gudiguntla, D. Ting, L. Zhou, Challenges faced by large language models in solving multi-agent flocking (2024). arXiv:2404.04752. URL https://arxiv.org/abs/2404.04752", + "[58] Z. Xu, C. Yu, F. Fang, Y. Wang, Y. Wu, Language agents with reinforcement learning for strategic play in the werewolf game, in: R. Salakhutdinov, Z. Kolter, K. Heller, A. Weller, N. Oliver, J. Scarlett, F. Berkenkamp (Eds.), Proceedings of the 41st International Conference on Machine Learning, Vol. 235 of Proceedings of Machine Learning Research, PMLR, 2024, pp. 55434-55464. URL https://proceedings.mlr.press/v235/xu24ad.htm1", + "[59] I. Mordatch, P. Abbeel, Emergence of grounded compositional language in multi-agent populations, arXiv preprint arXiv:1703.04908 (2017).", + "[60] H. 
Schwartz, An object oriented approach to fuzzy actor-critic learning for multi-agent differential games, in: 2019 IEEE Symposium Series on Computational Intelligence (SSCI), 2019, pp. 183-190. doi:10.1109/SSCI44817.2019.9002707.", + "[61] I. Harmati, Multi-agent coordination for target tracking using" + ], + "bbox": [ + 112, + 134, + 480, + 865 + ], + "page_idx": 43 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "fuzzy inference system in game theoretic framework, in: 2006 IEEE Conference on Computer Aided Control System Design, 2006 IEEE International Conference on Control Applications, 2006 IEEE International Symposium on Intelligent Control, 2006, pp. 2390-2395. doi:10.1109/CACSD-CCA-ISIC.2006.4777014.", + "[62] C. W. Khuen, C. H. Yong, F. Haron, Multi-agent negotiation system using adaptive fuzzy logic in resource allocation, in: The 2nd International Conference on Distributed Frameworks for Multimedia Applications, 2006, pp. 1-7. doi:10.1109/DFMA.2006.296888.", + "[63] L. Yan, J. Liu, C. P. Chen, Y. Zhang, Z. Wu, Z. Liu, Game-based adaptive fuzzy optimal bipartite containment of nonlinear multiagent systems, IEEE Transactions on Fuzzy Systems 32 (3) (2024) 1455-1465. doi:10.1109/TFUZZ.2023.3327699.", + "[64] R. R. P. Vicerra, K. K. A. David, A. R. dela Cruz, E. A. Roxas, K. B. C. Simbulan, A. A. Bandala, E. P. Dadios, A multiple level mimo fuzzy logic based intelligence for multiple agent cooperative robot system, in: TENCON 2015 - 2015 IEEE Region 10 Conference, 2015, pp. 1-7. doi:10.1109/TENCON.2015.7372985.", + "[65] D. Gu, E. Yang, Fuzzy policy gradient reinforcement learning for leader-follower systems, in: IEEE International Conference Mechatronics and Automation, 2005, Vol. 3, 2005, pp. 1557-1561 Vol. 3. doi:10.1109/ICMA.2005.1626787.", + "[66] Y. Maruyama, A reasoning system for fuzzy distributed knowledge representation in multi-agent systems, in: 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2021, pp. 1-6. doi:10.1109/FUZZ45933.2021.9494454.", + "[67] J. Peng, M. Liu, J. Liu, K.-C. Lin, M. Wu, A coordination model using fuzzy reinforcement learning for multi-agent system, in: 2007 International Symposium on Collaborative Technologies and Systems, 2007, pp. 129-136. doi:10.1109/CTS.2007.4621748.", + "[68] E. Yang, D. Gu, A multiagent fuzzy policy reinforcement learning algorithm with application to leader-follower robotic systems, in: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006, pp. 3197-3202. doi: 10.1109/IROS.2006.282421.", + "[69] H. Wang, H. Luo, Y. Jiang, A game-based distributed fault-tolerant control method for multi-agent systems, in: 2022 IEEE 1st Industrial Electronics Society Annual On-Line Conference (ONCON), 2022, pp. 1-5. doi:10.1109/ONCON56984.2022.10126545.", + "[70] M. Lanctot, V. Zambaldi, A. Gruslys, A. Lazaridou, K. Tuyls, A unified game-theoretic approach to multiagent reinforcement learning, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, Curran Associates Inc., Red Hook, NY, USA, 2017, p. 4193-4206.", + "[71] C. Guo, L. Zhong, T. Liu, A study on game theory in decision interaction for multi intelligent agents based on information fusion, in: Y. Wu (Ed.), Computing and Intelligent Systems, Springer Berlin Heidelberg, Berlin, Heidelberg, 2011, pp. 442-452.", + "[72] J. Zhang, J. 
Zhang, The coverage control solutions based on evolutionary game theory in the multi-agent systems, in: 2019 12th Asian Control Conference (ASCC), 2019, pp. 1-6.", + "[73] R. Kong, N. Zhang, X. Bao, X. Lu, Multi-agent distributed optimization based on the game theory and its application in automobile chassis, in: 2018 37th Chinese Control Conference (CCC), 2018, pp. 1107-1111. doi:10.23919/ChiCC.2018.8484212.", + "[74] H. Wang, X. Zhang, H. Luo, X. Qiao, M. Huo, Y. Jiang, Dis" + ], + "bbox": [ + 524, + 134, + 890, + 865 + ], + "page_idx": 43 + }, + { + "type": "page_number", + "text": "43", + "bbox": [ + 487, + 879, + 507, + 889 + ], + "page_idx": 43 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "tributed fault tolerant control for multi-agent systems with sensor faults in non-cooperative games, in: 2023 CAA Symposium on Fault Detection, Supervision and Safety for Technical Processes (SAFEPPROCESS), 2023, pp. 1-6. doi:10.1109/ SAFEPPROCESS58597.2023.10295775.", + "[75] X. Dong, X. Li, S. Cheng, Energy management optimization of microgrid cluster based on multi-agent-system and hierarchical stackelberg game theory, IEEE Access 8 (2020) 206183-206197. doi:10.1109/ACCESS.2020.3037676.", + "[76] P. H. Nguyen, W. L. Kling, P. F. Ribeiro, A game theory strategy to integrate distributed agent-based functions in smart grids, IEEE Transactions on Smart Grid 4 (1) (2013) 568-576. doi:10.1109/TSG.2012.2236657.", + "[77] S. U. Khan, I. Ahmad, A semi-distributed axiomatic game theoretical mechanism for replicating data objects in large distributed computing systems, in: 2007 IEEE International Parallel and Distributed Processing Symposium, 2007, pp. 1-10. doi:10.1109/IPDPS.2007.370279.", + "[78] L. Xu, J. Zhu, B. Chen, Z. Yang, K. Liu, B. Dang, T. Zhang, Y. Yang, R. Huang, A distributed nanocluster based multiagent evolutionary network, Nature Communications 13 (1) (2022) 4698. doi:10.1038/s41467-022-32497-5. URL https://doi.org/10.1038/s41467-022-32497-5", + "[79] F. Seredynski, T. Kulpa, R. Hoffmann, Towards evolutionary self-optimization of large multi-agent systems, in: Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '22, Association for Computing Machinery, New York, NY, USA, 2022, p. 200-203. doi: 10.1145/3520304.3529042. URL https://doi.org/10.1145/3520304.3529042", + "[80] H. Liu, Z. Li, K. Huang, R. Wang, G. Cheng, T. Li, Evolutionary reinforcement learning algorithm for large-scale multi-agent cooperation and confrontation applications, The Journal of Supercomputing 80 (2) (2024) 2319-2346. doi:10.1007/s11227-023-05551-2. URL https://doi.org/10.1007/s11227-023-05551-2", + "[81] Y. Dong, X. Liu, T. Li, C. L. P. Chen, Evolutionary game dynamics of multi-agent systems using local information considering hide right, Complex & Intelligent Systems 10 (1) (2024) 917-925. doi:10.1007/s40747-023-01172-7. URL https://doi.org/10.1007/s40747-023-01172-7", + "[82] H. Chen, C. Hu, Z. Huang, Optimal control of multiagent decision-making based on competence evolution, Discrete Dynamics in Nature and Society 2023 (1) (2023) 2179376. doi: https://doi.org/10.1155/2023/2179376.", + "[83] H.-T. Wai, Z. Yang, Z. Wang, M. Hong, Multi-agent reinforcement learning via double averaging primal-dual optimization, in: Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18, Curran Associates Inc., Red Hook, NY, USA, 2018, pp. 9672-9683.", + "[84] T. Hu, Z. Pu, X. Ai, T. Qiu, J. 
Yi, Measuring policy distance for multi-agent reinforcement learning (2024). arXiv:2401.11257. URL https://arxiv.org/abs/2401.11257", + "[85] P. Sunehag, G. Lever, A. Gruslys, W. M. Czarnecki, V. Zambaldi, M. Jaderberg, M. Lanctot, N. Sonnerat, J. Z. Leibo, K. Tuyls, T. Graepel, Value-decomposition networks for cooperative multi-agent learning based on team reward, in: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '18, International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 2018, pp. 2085-2087.", + "[86] W. J. Yun, J. Park, J. Kim, Quantum multi-agent meta reinforcement learning, in: Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth" + ], + "bbox": [ + 112, + 134, + 480, + 865 + ], + "page_idx": 44 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence, AAAI'23/IAAI'23/EAAI'23, AAAI Press, 2023. doi:10.1609/aaai.v37i9.26313. URL https://doi.org/10.1609/aaai.v37i9.26313", + "[87] W. Mao, H. Qiu, C. Wang, H. Franke, Z. Kalbarczyk, R. K. Iyer, T. Basar, Multi-agent meta-reinforcement learning: sharper convergence rates with task similarity, Curran Associates Inc., Red Hook, NY, USA, 2024.", + "[88] M. Kouzeghar, Y. Song, M. Meghjani, R. Bouffanais, Multi-target pursuit by a decentralized heterogeneous uav swarm using deep multi-agent reinforcement learning (2023). arXiv: 2303.01799. URL https://arxiv.org/abs/2303.01799", + "[89] F. Gao, S. Chen, M. Li, B. Huang, Maca: a multi-agent reinforcement learning platform for collective intelligence, in: 2019 IEEE 10th International Conference on Software Engineering and Service Science (ICSESS), 2019, pp. 108-111. doi:10.1109/ICSESS47205.2019.9040781.", + "[90] S. Qi, S. Zhang, X. Hou, J. Zhang, X. Wang, J. Xiao, Efficient distributed framework for collaborative multi-agent reinforcement learning (2022). arXiv:2205.05248. URL https://arxiv.org/abs/2205.05248", + "[91] O. Vinyals, I. Babuschkin, W. M. Czarnecki, M. Mathieu, A. Dudzik, J. Chung, D. H. Choi, R. Powell, T. Ewalds, P. Georgiev, J. Oh, D. Horgan, M. Kroiss, I. Danihelka, A. Huang, L. Sifre, T. Cai, J. P. Agapiou, M. Jaderberg, A. S. Vezhnevets, R. Leblond, T. Pohlen, V. Dalibard, D. Budden, Y. Sulsky, J. Molloy, T. L. Paine, C. Gulcehre, Z. Wang, T. Pfaff, Y. Wu, R. Ring, D. Yogatama, D. Wünsch, K. McKinney, O. Smith, T. Schaul, T. Lillicrap, K. Kavukcuoglu, D. Hassabis, C. Apps, D. Silver, Grandmaster level in starcraft ii using multi-agent reinforcement learning, Nature 575 (7782) (2019) 350-354. doi:10.1038/s41586-019-1724-z. URL https://doi.org/10.1038/s41586-019-1724-z", + "[92] T. Chu, J. Wang, L. Codeca, Z. Li, Multi-agent deep reinforcement learning for large-scale traffic signal control, IEEE Transactions on Intelligent Transportation Systems 21 (3) (2020) 1086-1095. doi:10.1109/TITS.2019.2901791.", + "[93] Z. Lv, L. Xiao, Y. Du, G. Niu, C. Xing, W. Xu, Multiagent reinforcement learning based uav swarm communications against jamming, Trans. Wireless. Comm. 22 (12) (2023) 9063-9075. doi:10.1109/TWC.2023.3268082. URL https://doi.org/10.1109/TWC.2023.3268082", + "[94] R.-E. P. Mircea-Bogdan Radac, R.-C. Roman, Model-free control performance improvement using virtual reference feedback tuning and reinforcement q-learning, International Journal of Systems Science 48 (5) (2017) 1071-1083. 
doi:10.1080/00207721.2016.1236423.", + "[95] Z. Liu, X. Yang, S. Sun, L. Qian, L. Wan, X. Chen, X. Lan, Grounded answers for multi-agent decision-making problem through generative world model, in: Advances in Neural Information Processing Systems (NeurIPS), 2024.", + "[96] H. Zhang, W. Du, J. Shan, Q. Zhou, Y. Du, J. B. Tenenbaum, T. Shu, C. Gan, Building cooperative embodied agents modularly with large language models, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=EnXJfQqyOK", + "[97] X. Hou, M. Yang, W. Jiao, X. Wang, Z. Tu, W. X. Zhao, Coact: A global-local hierarchy for autonomous agent collaboration (2024). arXiv:2406.13381. URL https://arxiv.org/abs/2406.13381", + "[98] X. Puig, K. K. Ra, M. Boben, J. Li, T. Wang, S. Fidler, A. Torralba, Virtualhome: Simulating household activities via programs, 2018 IEEE/CVF Conference on Computer Vision" + ], + "bbox": [ + 524, + 134, + 890, + 865 + ], + "page_idx": 44 + }, + { + "type": "page_number", + "text": "44", + "bbox": [ + 489, + 879, + 509, + 889 + ], + "page_idx": 44 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "and Pattern Recognition (2018) 8494-8502.", + "URL https://api_semanticscholar.org/CorpusID: 49317780", + "[99] X. Puig, T. Shu, S. Li, Z. Wang, Y.-H. Liao, J. B. Tenenbaum, S. Fidler, A. Torralba, Watch-and-help: A challenge for social perception and human-{ai} collaboration, in: International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=w_7JMpGZRhO", + "[100] D. Gao, Z. Li, X. Pan, W. Kuang, Z. Ma, B. Qian, F. Wei, W. Zhang, Y. Xie, D. Chen, L. Yao, H. Peng, Z. Zhang, L. Zhu, C. Cheng, H. Shi, Y. Li, B. Ding, J. Zhou, Agentscope: A flexible yet robust multi-agent platform (2024). arXiv:2402.14034. URL https://arxiv.org/abs/2402.14034", + "[101] Z. Xiao, D. Zhang, Y. Wu, L. Xu, Y. J. Wang, X. Han, X. Fu, T. Zhong, J. Zeng, M. Song, G. Chen, Chain-of-experts: When LLMs meet complex operations research problems, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=HobyL1B9CZ", + "[102] C. Wang, W. Luo, Q. Chen, H. Mai, J. Guo, S. Dong, X. M. Xuan, Z. Li, L. Ma, S. Gao, Mllm-tool: A multimodal large language model for tool agent learning, arXiv preprint arXiv:2401.10727 (2024).", + "[103] Q. Wu, G. Bansal, J. Zhang, Y. Wu, S. Zhang, E. E. Zhu, B. Li, L. Jiang, X. Zhang, C. Wang, Autogen: Enabling next-gen llm applications via multi-agent conversation, Tech. Rep. MSR-TR-2023-33, Microsoft (August 2023).", + "[104] M. Wen, J. Kuba, R. Lin, W. Zhang, Y. Wen, J. Wang, Y. Yang, Multi-agent reinforcement learning is a sequence modeling problem, in: S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh (Eds.), Advances in Neural Information Processing Systems, Vol. 35, Curran Associates, Inc., 2022, pp. 16509-16521. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/69413f87e5a34897cd010ca698097d0a-Paper-Conference.pdf", + "[105] W. Chen, Y. Su, J. Zuo, C. Yang, C. Yuan, C.-M. Chan, H. Yu, Y. Lu, Y.-H. Hung, C. Qian, Y. Qin, X. Cong, R. Xie, Z. Liu, M. Sun, J. Zhou, Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=EHg5GDnyq1", + "[106] Z. Liu, Y. Zhang, P. Li, Y. Liu, D. Yang, Dynamic LLM-agent network: An LLM-agent collaboration framework with agent team optimization (2024). 
URL https://openreview.net/forum?id=i43XCU54Br", + "[107] G. Chen, S. Dong, Y. Shu, G. Zhang, J. Sesay, B. Karlsson, J. Fu, Y. Shi, Autoagents: A framework for automatic agent generation, in: K. Larson (Ed.), Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24, International Joint Conferences on Artificial Intelligence Organization, 2024, pp. 22–30, main Track. doi:10.24963/ijcai.2024/3.", + "URL https://doi.org/10.24963/ijcai.2024/3 \n[108] S. Hong, M. Zhuge, J. Chen, X. Zheng, Y. Cheng, J. Wang, C. Zhang, Z. Wang, S. K. S. Yau, Z. Lin, L. Zhou, C. Ran, L. Xiao, C. Wu, J. Schmidhuber, MetaGPT: Meta programming for a multi-agent collaborative framework, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=VtmBAGCN7o", + "[109] S. Hong, Y. Lin, B. Liu, B. Liu, B. Wu, D. Li, J. Chen," + ], + "bbox": [ + 105, + 134, + 500, + 864 + ], + "page_idx": 45 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "J. Zhang, J. Wang, L. Zhang, L. Zhang, M. Yang, M. Zhuge, T. Guo, T. Zhou, W. Tao, W. Wang, X. Tang, X. Lu, X. Zheng, X. Liang, Y. Fei, Y. Cheng, Z. Xu, C. Wu, Data interpreter: An llm agent for data science (2024). arXiv:2402.18679.", + "[110] X. Team, Xagent: An autonomous agent for complex task solving (2023).", + "[111] J. Wang, Z. Duan, Intelligent spark agents: A modular lang-graph framework for scalable, visualized, and enhanced big data machine learning workflows (2024). arXiv:2412.01490. URL https://arxiv.org/abs/2412.01490", + "[112] J. Wang, Z. Duan, Agent ai with langgraph: A modular framework for enhancing machine translation using large language models (2024). arXiv:2412.03801. URL https://arxiv.org/abs/2412.03801", + "[113] Y. Zheng, Z. Xing, Q. Zhang, B. Jin, P. Li, Y. Zheng, Z. Xia, K. Zhan, X. Lang, Y. Chen, D. Zhao, Planagent: A multimodal large language agent for closed-loop vehicle motion planning (2024). arXiv:2406.01587. URL https://arxiv.org/abs/2406.01587", + "[114] L. Zhang, Z. Ji, B. Chen, Crew: Facilitating human-ai teaming research (2025). arXiv:2408.00170. URL https://arxiv.org/abs/2408.00170", + "[115] Z. Duan, J. Wang, Exploration of llm multi-agent application implementation based on langgraph+crewai (2024). arXiv: 2411.18241. URL https://arxiv.org/abs/2411.18241", + "[116] Y. Cao, H. Zhao, Y. Cheng, T. Shu, Y. Chen, G. Liu, G. Liang, J. Zhao, J. Yan, Y. Li, Survey on large language model-enhanced reinforcement learning: Concept, taxonomy, and methods, IEEE Transactions on Neural Networks and Learning Systems (2024) 1-21doi:10.1109/TNNLS.2024.3497992.", + "[117] I. Marti, V. R. Tomas, A. Saez, J. J. Martinez, A rule-based multi-agent system for road traffic management, in: 2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology, Vol. 3, 2009, pp. 595-598. doi:10.1109/WI-IAT.2009.358.", + "[118] V. V. Nekhai, E. Trunova, I. Bilous, I. Bohdan, M. Voitsekhovska, Fuzzy game-theoretic modeling of a multi-agent cybersecurity management system for an agricultural enterprise, in: S. Shkarlet, A. Morozov, A. Palagin, D. Vinnikov, N. Stoianov, M. Zhelezniak, V. Kazymyr (Eds.), Mathematical Modeling and Simulation of Systems, Springer International Publishing, Cham, 2022, pp. 423-434.", + "[119] A. Ramezani, M. R. Andalibizadeh, S. Bahrampour, H. Ramezani, B. 
Moshiri, Select reliable strategy in multiagent systems using fuzzy logic-based fusion, in: 2008 Second Asia International Conference on Modelling & Simulation (AMS), 2008, pp. 13-17. doi:10.1109/AMS.2008.63.", + "[120] M. E. H. Charaf, M. Benattou, S. Azzouzi, A rule-based multi-agent system for testing distributed applications, in: 2012 International Conference on Multimedia Computing and Systems, 2012, pp. 967-972. doi:10.1109/ICMCS.2012.6320205.", + "[121] A. Daeichian, A. Haghani, Fuzzy q-learning-based multi-agent system for intelligent traffic control by a game theory approach, Arabian Journal for Science and Engineering 43 (6) (2018) 3241-3247. doi:10.1007/s13369-017-3018-9. URL https://doi.org/10.1007/s13369-017-3018-9", + "[122] C. Amato, A first introduction to cooperative multi-agent reinforcement learning (2024). arXiv:2405.06161. URL https://arxiv.org/abs/2405.06161", + "[123] Y. Zhou, S. Liu, Y. Qing, K. Chen, T. Zheng, Y. Huang, J. Song, M. Song, Is centralized training with decentralized execution framework centralized enough for marl? (2023)." + ], + "bbox": [ + 517, + 134, + 890, + 864 + ], + "page_idx": 45 + }, + { + "type": "page_number", + "text": "45", + "bbox": [ + 487, + 879, + 507, + 889 + ], + "page_idx": 45 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "arXiv:2305.17352.", + "URL https://arxiv.org/abs/2305.17352", + "[124] C. Amato, An introduction to centralized training for decentralized execution in cooperative multi-agent reinforcement learning (2024). arXiv:2409.03052. URL https://arxiv.org/abs/2409.03052", + "[125] P. K. Sharma, E. G. Zaroukian, R. Fernandez, A. Basak, D. E. Asher, Survey of recent multi-agent reinforcement learning algorithms utilizing centralized training, in: T. Pham, L. Solomon, M. E. Hohil (Eds.), Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications III, SPIE, 2021, p. 84. doi:10.1117/12.2585808. URL http://dx.doi.org/10.1117/12.2585808", + "[126] J. K. Gupta, M. Egorov, M. Kochenderfer, Cooperative multiagent control using deep reinforcement learning, in: G. Sukthankar, J. A. Rodriguez-Aguilar (Eds.), Autonomous Agents and Multiagent Systems, Springer International Publishing, Cham, 2017, pp. 66-83.", + "[127] C. Claus, C. Boutilier, The dynamics of reinforcement learning in cooperative multiagent systems, in: Proceedings of the fifteenth national/tenth conference on Artificial intelligence/Innovative applications of artificial intelligence, AAAI Press, 1998, pp. 746-752.", + "[128] M. Lauer, M. A. Riedmiller, An algorithm for distributed reinforcement learning in cooperative multi-agent systems, in: Proceedings of the Seventeenth International Conference on Machine Learning, ICML '00, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2000, p. 535-542.", + "[129] M. Bowling, M. Veloso, Multiagent learning using a variable learning rate, Artificial Intelligence 136 (2) (2002) 215-250. doi:https://doi.org/10.1016/S0004-3702(02)00121-2. URL https://www.sciencedirect.com/science/article/pii/S0004370202001212", + "[130] T. Ikeda, T. Shibuya, Centralized training with decentralized execution reinforcement learning for cooperative multi-agent systems with communication delay, in: 2022 61st Annual Conference of the Society of Instrument and Control Engineers (SICE), 2022, pp. 135-140. doi:10.23919/SICE56594.2022.9905866.", + "[131] Y. Zhou, S. Liu, Y. Qing, K. Chen, T. Zheng, Y. Huang, J. Song, M. 
Song, Is centralized training with decentralized execution framework centralized enough for marl? (2023). arXiv:2305.17352. URL https://arxiv.org/abs/2305.17352", + "[132] R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, I. Mordatch, Multi-agent actor-critic for mixed cooperative-competitive environments, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, Curran Associates Inc., Red Hook, NY, USA, 2017, pp. 6382-6393.", + "[133] C. Yu, A. Velu, E. Vinitsky, J. Gao, Y. Wang, A. Bayen, Y. Wu, The surprising effectiveness of ppo in cooperative multi-agent games, in: Advances in Neural Information Processing Systems, Vol. 35, 2022, pp. 29914-29928. URL https://proceedings.neurips.cc/paper_files/paper/2022/bitize/9c1535a02f0ce079433344e14d910597-Paper-Datasets and_Benchmarks.pdf", + "[134] M. Samvelyan, T. Rashid, C. S. de Witt, G. Farquhar, N. Nardelli, T. G. J. Rudner, C.-M. Hung, P. H. S. Torr, J. Foerster, S. Whiteson, The StarCraft Multi-Agent Challenge, CoRR abs/1902.04043 (2019).", + "[135] B. Ellis, J. Cook, S. Moalla, M. Samvelyan, M. Sun, A. Mahajan, J. Foerster, S. Whiteson, Smacv2: An improved" + ], + "bbox": [ + 105, + 134, + 480, + 865 + ], + "page_idx": 46 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "benchmark for cooperative multi-agent reinforcement learning, in: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine (Eds.), Advances in Neural Information Processing Systems, Vol. 36, Curran Associates, Inc., 2023, pp. 37567-37593. \nURL https://proceedings.neurips.cc/paper_files/paper/2023/file/764c18ad230f9e7bf6a77ffc2312c55e-Paper-Datasets and_Benchmarks.pdf", + "[136] R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, I. Mordatch, Multi-agent actor-critic for mixed cooperative-competitive environments, Neural Information Processing Systems (NIPS) (2017).", + "[137] T. Malloy, C. R. Sims, T. Klinger, M. Liu, M. Riemer, G. Tesauro, Capacity-limited decentralized actor-critic for multi-agent games, in: 2021 IEEE Conference on Games (CoG), 2021, pp. 1-8. doi:10.1109/CoG52621.2021.9619081.", + "[138] J. Jiang, Z. Lu, Learning attentional communication for multi-agent cooperation, in: Advances in Neural Information Processing Systems, Vol. 31, 2018, pp. 7254-7264. URL https://proceedings.neurips.cc/paper/2018/file/6a8018b3a00b69c008601b8becae392b-Paper.pdf", + "[139] C. Sun, Z. Zang, J. Li, J. Li, X. Xu, R. Wang, C. Zheng, T2mac: targeted and trusted multi-agent communication through selective engagement and evidence-driven integration, in: Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence and Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence and Fourteenth Symposium on Educational Advances in Artificial Intelligence, AAAI'24/IAAI'24/EAAI'24, AAAI Press, 2025. doi:10.1609/aaai.v38i13.29438. URL https://doi.org/10.1609/aaai.v38i13.29438", + "[140] OpenAI, Gpt-4 technical report, arXiv preprint arXiv:2303.08774 (2023). URL https://arxiv.org/abs/2303.08774", + "[141] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. Christiano, J. Leike, R. Lowe, Training language models to follow instructions with human feedback, in: Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Curran Associates Inc., Red Hook, NY, USA, 2024.", + "[142] R. Mao, G. Chen, X. Zhang, F. 
Guerin, E. Cambria, GPTEval: A survey on assessments of ChatGPT and GPT-4, in: N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, N. Xue (Eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), ELRA and ICCL, Torino, Italia, 2024, pp. 7844-7866. URL https://aclanthology.org/2024.lrec-main.693", + "[143] A. Grattafori, A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Vaughan, A. Yang, A. Fan, A. Goyal, A. Hartshorn, A. Yang, et al., The llama 3 herd of models (2024). arXiv:2407.21783. URL https://arxiv.org/abs/2407.21783", + "[144] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Roziere, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, G. Lample, Llama: Open and efficient foundation language models (2023). arXiv:2302.13971. URL https://arxiv.org/abs/2302.13971" + ], + "bbox": [ + 517, + 134, + 892, + 865 + ], + "page_idx": 46 + }, + { + "type": "page_number", + "text": "46", + "bbox": [ + 487, + 879, + 507, + 889 + ], + "page_idx": 46 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[145] G. Team, Gemini: A family of highly capable multimodal models (2024). arXiv:2312.11805.", + "URL https://arxiv.org/abs/2312.11805", + "[146] V. Dibia, J. Chen, G. Bansal, S. Syed, A. Fourney, E. Zhu, C. Wang, S. Amershi, Autogen studio: A no-code developer tool for building and debugging multi-agent systems (2024). arXiv:2408.15247.", + "URL https://arxiv.org/abs/2408.15247", + "[147] C. Gan, S. Zhou, J. Schwartz, S. Alter, A. Bhandwaldar, D. Gutfreund, D. L. Yamins, J. J. DiCarlo, J. McDermott, A. Torralba, J. B. Tenenbaum, The threadworld transport challenge: A visually guided task-and-motion planning benchmark towards physically realistic embodied ai, in: 2022 International Conference on Robotics and Automation (ICRA), 2022, pp. 8847-8854. doi:10.1109/ICRA46639.2022.9812329.", + "[148] C. Gan, J. Schwartz, S. Alter, D. Mrowca, M. Schrimpf, J. Traer, J. De Freitas, J. Kubilius, A. Bhandwaldar, N. Haber, M. Sano, K. Kim, E. Wang, M. Lingelbach, A. Curtis, K. Feigelis, D. Bear, D. Gutfreund, D. Cox, A. Torralba, J. J. DiCarlo, J. Tenenbaum, J. McDermott, D. Yamins, Threadworld: A platform for interactive multi-modal physical simulation, in: J. Vanschoeren, S. Yeung (Eds.), Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, Vol. 1, 2021.", + "URL https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/735b90b4568125ed6c3f678819b6e058-Paper-round1.pdf", + "[149] R. Gong, Q. Huang, X. Ma, Y. Noda, Z. Durante, Z. Zheng, D. Terzopoulos, L. Fei-Fei, J. Gao, H. Vo, MindAgent: Emergent gaming interaction, in: K. Duh, H. Gomez, S. Bethard (Eds.), Findings of the Association for Computational Linguistics: NAACL 2024, Association for Computational Linguistics, Mexico City, Mexico, 2024, pp. 3154-3183. doi:10.18653/v1/2024-findings-naacl.200.", + "URL https://aclanthology.org/2024.findings-naacl.200", + "[150] K. Shao, Y. Zhu, Z. Tang, D. Zhao, Cooperative multiagent deep reinforcement learning with counterfactual reward, in: 2020 International Joint Conference on Neural Networks (IJCNN), 2020, pp. 1-8. doi:10.1109/IJCNN48605.2020.9207169.", + "[151] K. R. Chandra, S. 
Borugadda, Multi agent deep reinforcement learning with deep q-network based energy efficiency and resource allocation in nomawireless systems, in: 2023 Second International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT), 2023, pp. 1-8. doi:10.1109/ICEEICT56924.2023.10157052.", + "[152] A. M. Hafiz, G. M. Bhat, Deep q-network based multi-agent reinforcement learning with binary action agents (2020). arXiv: 2008.04109.", + "URL https://arxiv.org/abs/2008.04109", + "[153] C. S. de Witt, T. Gupta, D. Makoviichuk, V. Makoviychuk, P. H. S. Torr, M. Sun, S. Whiteson, Is independent learning all you need in the starcraft multi-agent challenge? (2020). arXiv:2011.09533.", + "URL https://arxiv.org/abs/2011.09533", + "[154] O. Vinyals, T. Ewalds, S. Bartunov, P. Georgiev, A. S. Vezhnevets, M. Yeo, A. Makhzani, H. Kuttler, J. Agapiou, J. Schrittwieser, J. Quan, S. Gaffney, S. Petersen, K. Simonyan, T. Schaul, H. van Hasselt, D. Silver, T. Lillicrap, K. Calderone, P. Keet, A. Brunasso, D. Lawrence, A. Ekermo, J. Repp, R. Tsing, Starcraft ii: A new challenge for reinforcement learning (2017). arXiv:1708.04782.", + "URL https://arxiv.org/abs/1708.04782" + ], + "bbox": [ + 105, + 133, + 480, + 865 + ], + "page_idx": 47 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[155] H. Shen, K. Zhang, M. Hong, T. Chen, Towards understanding asynchronous advantage actor-critic: Convergence and linear speedup, IEEE Transactions on Signal Processing 71 (2023) 2579-2594. doi:10.1109/tsp.2023.3268475. URL http://dx.doi.org/10.1109/TSP.2023.3268475", + "[156] S. Li, Y. Wu, X. Cui, H. Dong, F. Fang, S. Russell, Robust multi-agent reinforcement learning via minimax deep deterministic policy gradient, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, AAAI Press, 2019, pp. 4213-4220. doi:10.1609/aaai.v33i01.33014213.", + "[157] J. N. Foerster, G. Farquhar, T. Afouras, N. Nardelli, S. Whiteson, Counterfactual multi-agent policy gradients, in: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18, AAAI Press, 2018.", + "[158] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, O. Klimov, Proximal policy optimization algorithms (2017). arXiv: 1707.06347. URL https://arxiv.org/abs/1707.06347", + "[159] J. Schulman, S. Levine, P. Moritz, M. Jordan, P. Abbeel, Trust region policy optimization, in: Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, JMLR.org, 2015, p. 1889-1897.", + "[160] J. G. Kuba, R. Chen, M. Wen, Y. Wen, F. Sun, J. Wang, Y. Yang, Trust region policy optimisation in multi-agent reinforcement learning, in: International Conference on Learning Representations, 2022. URL https://arxiv.org/abs/2109.11251", + "[161] S. Sukhbaatar, A. Szlam, R. Fergus, Learning multiagent communication with backpropagation, in: Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, Curran Associates Inc., Red Hook, NY, USA, 2016, pp. 2252-2260.", + "[162] L. Matignon, G. J. Laurent, N. Le Fort-Piat, Hysteresic q-learning: an algorithm for decentralized reinforcement learning in cooperative multi-agent teams, in: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007, pp. 64-69. 
doi:10.1109/IROS.2007.4399095.", + "[163] E. Wei, S. Luke, Lenient learning in independent-learner stochastic cooperative games, J. Mach. Learn. Res. 17 (1) (2016) 2914–2955.", + "[164] M. Hausknecht, P. Stone, Deep recurrent q-learning for partially observable mdps (2017). arXiv:1507.06527. URL https://arxiv.org/abs/1507.06527", + "[165] A. Tampuu, T. Matiisen, D. Kodelja, I. Kuzovkin, K. Korjus, J. Aru, J. Aru, R. Vicente, Multiagent cooperation and competition with deep reinforcement learning, PLOS ONE 12 (4) (2017) 1-15. doi:10.1371/journal.pone.0172395. URL https://doi.org/10.1371/journal.pone.0172395", + "[166] S. Omidshafiei, J. Pazis, C. Amato, J. P. How, J. Vian, Deep decentralized multi-task multi-agent reinforcement learning under partial observability, in: Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, JMLR.org, 2017, p. 2681-2690.", + "[167] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, W. Zaremba, Openai gym (2016). arXiv: 1606.01540. URL https://arxiv.org/abs/1606.01540", + "[168] J. N. Foerster, Y. M. Assael, N. de Freitas, S. Whiteson, Learning to communicate with deep multi-agent reinforcement learning (2016). arXiv:1605.06676." + ], + "bbox": [ + 517, + 133, + 890, + 865 + ], + "page_idx": 47 + }, + { + "type": "page_number", + "text": "47", + "bbox": [ + 487, + 879, + 507, + 889 + ], + "page_idx": 47 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "URL https://arxiv.org/abs/1605.06676", + "[169] J. N. Foerster, Y. M. Assael, N. de Freitas, S. Whiteson, Learning to communicate to solve riddles with deep distributed recurrent q-networks (2016). arXiv:1602.02672. URL https://arxiv.org/abs/1602.02672", + "[170] L. Fan, Y.-y. Liu, S. Zhang, Partially observable multi-agent rl with enhanced deep distributed recurrent q-network, in: 2018 5th International Conference on Information Science and Control Engineering (ICISCE), 2018, pp. 375-379. doi:10.1109/ICISCE.2018.00085.", + "[171] S. Sukhbaatar, A. Szlam, R. Fergus, Learning multiagent communication with backpropagation, in: Advances in Neural Information Processing Systems, Vol. 29, 2016, pp. 2244-2252. URL https://proceedings.neurips.cc/paper/2016/file/55b1927fdafe39c48e5b73b5d61ea60-Paper.pdf", + "[172] P. Peng, Y. Wen, Y. Yang, Q. Yuan, Z. Tang, H. Long, J. Wang, Multiagent bidirectionally-coordinated nets: Emergence of human-level coordination in learning to play starcraft combat games (2017). arXiv:1703.10069. URL https://arxiv.org/abs/1703.10069", + "[173] G. Wang, D. Shi, C. Xue, H. Jiang, Y. Wang, Bic-ddpg: Bidirectionally-coordinated nets for deep multi-agent reinforcement learning, in: H. Gao, X. Wang, M. Iqbal, Y. Yin, J. Yin, N. Gu (Eds.), Collaborative Computing: Networking, Applications and Worksharing, Springer International Publishing, Cham, 2021, pp. 337-354.", + "[174] D. Fan, H. Shen, L. Dong, Multi-agent distributed deep deterministic policy gradient for partially observable tracking, Actuators 10 (10) (2021). doi:10.3390/act10100268. URL https://www.mdpi.com/2076-0825/10/10/268", + "[175] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, K. Kavukcuoglu, Asynchronous methods for deep reinforcement learning, in: Proceedings of The 33rd International Conference on Machine Learning, PMLR, 2016, pp. 1928-1937. URL https://proceedings.mlr.press/v48/mniha16.html", + "[176] L. Yuan, C. Wang, J. Wang, F. Zhang, F. Chen, C. Guan, Z. Zhang, C. Zhang, Y. 
Yu, Multi-agent concentrative coordination with decentralized task representation, in: L. D. Raedt (Ed.), Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, International Joint Conferences on Artificial Intelligence Organization, 2022, pp. 599–605, main Track. doi:10.24963/ijcai.2022/85. URL https://doi.org/10.24963/ijcai.2022/85", + "[177] S. Vanneste, A. Vanneste, K. Mets, T. D. Schepper, A. Anwar, S. Mercelis, P. Hellinckx, Learning to communicate using a communication critic and counterfactual reasoning, Neural Computing and Applications N/A (N/A) (2025) N/A. doi: 10.1007/s00521-024-10598-0. URL https://doi.org/10.1007/s00521-024-10598-0", + "[178] A. Singh, T. Jain, S. Sukhbaatar, Learning when to communicate at scale in multiagent cooperative and competitive tasks, in: International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bk1Wt24tvH", + "[179] D. Das, R. L. Rath, T. Singh, S. Mishra, V. Malik, R. Sobti, B. Brahma, LIm-based custom chatbot using langchain, in: A. E. Hassanien, S. Anand, A. Jaiswal, P. Kumar (Eds.), Innovative Computing and Communications, Springer Nature Singapore, Singapore, 2024, pp. 257-267.", + "[180] Q. Sun, Z. Yin, X. Li, Z. Wu, X. Qiu, L. Kong, Corex: Pushing the boundaries of complex reasoning through multi-model" + ], + "bbox": [ + 105, + 134, + 480, + 865 + ], + "page_idx": 48 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "collaboration (2023). arXiv:2310.00280.", + "[181] J. S. Park, L. Popowski, C. Cai, M. R. Morris, P. Liang, M. S. Bernstein, Social simulacra: Creating populated prototypes for social computing systems, in: Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, UIST '22, Association for Computing Machinery, New York, NY, USA, 2022. doi:10.1145/3526113.3545616. URL https://doi.org/10.1145/3526113.3545616", + "[182] J. Terry, B. Black, N. Grammel, M. Jayakumar, A. Hari, R. Sullivan, L. Santos, R. Perez, C. Horsch, C. Dieffendahl, et al., Pettingzoo: A standard api for multi-agent reinforcement learning, in: Proceedings of the 35th Conference on Neural Information Processing Systems, in: Advances in Neural Information Processing Systems, Vol. 34, 2021, pp. 15032-15043.", + "[183] S. Huang, S. Ontañón, C. Bamford, L. Grela, Gym-µrts: Toward affordable full game real-time strategy games research with deep reinforcement learning, in: 2021 IEEE Conference on Games (CoG), Copenhagen, Denmark, August 17-20, 2021, IEEE, 2021, pp. 1-8. doi:10.1109/CoG52621.2021.9619076. URL https://doi.org/10.1109/CoG52621.2021.9619076", + "[184] L. Zheng, J. Yang, H. Cai, M. Zhou, W. Zhang, J. Wang, Y. Yu, Magent: A many-agent reinforcement learning platform for artificial collective intelligence, Proceedings of the AAAI Conference on Artificial Intelligence 32 (1) (2018). doi:10.1609/aaai.v32i1.11371. URL https://ojs.aaii.org/index.php/AAAI/article/view/11371", + "[185] C. Yu, P. Wang, Dexterous manipulation for multifingered robotic hands with reinforcement learning: A review, Frontiers in Neurorobotics 16 (2022). doi:10.3389/fnbot.2022.861825. URL https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2022.861825", + "[186] O. M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, J. Schneider, S. Sidor, J. Tobin, P. Welinder, L. Weng, W. 
Zaremba, Learning dexterous in-hand manipulation, The International Journal of Robotics Research 39 (1) (2020) 3-20. arXiv:https://doi.org/10.1177/0278364919887447. doi:10.1177/0278364919887447. URL https://doi.org/10.1177/0278364919887447", + "[187] M. Chevalier-Boisvert, B. Dai, M. Towers, R. de Lazcano, L. Willems, S. Lahlou, S. Pal, P. S. Castro, J. Terry, Minigrid & miniworld: Modular & customizable reinforcement learning environments for goal-oriented tasks (2023). arXiv: 2306.13831. URL https://arxiv.org/abs/2306.13831", + "[188] J. Z. Leibo, E. Duñez-Guzmán, A. S. Vezhnevets, J. P. Agapiou, P. Sunehag, R. Koster, J. Matyas, C. Beattie, I. Mordatch, T. Graepel, Scalable evaluation of multi-agent reinforcement learning with melting pot (2021). arXiv:2107.06857. URL https://arxiv.org/abs/2107.06857", + "[189] C. Mehlman, J. Abramov, G. Falco, Cat-and-mouse satellite dynamics: Divergent adversarial reinforcement learning for contested multi-agent space operations (2024). arXiv: 2409.17443. URL https://arxiv.org/abs/2409.17443", + "[190] Y. Dong, X. Zhu, Z. Pan, L. Zhu, Y. Yang, VillagerAgent: A graph-based multi-agent framework for coordinating complex task dependencies in Minecraft, in: L.-W. Ku, A. Martins, V. Srikumar (Eds.), Findings of the Association for Computational Linguistics: ACL 2024, Association for Computational" + ], + "bbox": [ + 517, + 134, + 892, + 865 + ], + "page_idx": 48 + }, + { + "type": "page_number", + "text": "48", + "bbox": [ + 487, + 879, + 507, + 889 + ], + "page_idx": 48 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Linguistics, Bangkok, Thailand, 2024, pp. 16290-16314. doi:10.18653/v1/2024-findings-acl.964. URL https://aclanthology.org/2024.findings-acl.964/", + "[191] H. Li, X. Yang, Z. Wang, X. Zhu, J. Zhou, Y. Qiao, X. Wang, H. Li, L. Lu, J. Dai, Auto mc-reward: Automated dense reward design with large language models for mycraft (2024). arXiv:2312.09238. URL https://arxiv.org/abs/2312.09238", + "[192] X. Zhu, Y. Chen, H. Tian, C. Tao, W. Su, C. Yang, G. Huang, B. Li, L. Lu, X. Wang, Y. Qiao, Z. Zhang, J. Dai, Ghost in the minecraft: Generally capable agents for open-world environments via large language models with text-based knowledge and memory (2023). arXiv:2305.17144. URL https://arxiv.org/abs/2305.17144", + "[193] S. C. Duncan, Minecraft, beyond construction and survival, Well Played 1 (1) (2011) 1-22.", + "[194] A. Juliani, V.-P. Berges, E. Teng, A. Cohen, J. Harper, C. Elion, C. Goy, Y. Gao, H. Henry, M. Mattar, D. Lange, Unity: A general platform for intelligent agents (2020). arXiv:1809.02627. URL https://arxiv.org/abs/1809.02627", + "[195] L. Hu, J. Tang, G. Zou, Z. Li, J. Zeng, M. Li, Simulation optimization of highway hard shoulder running based on multi-agent deep deterministic policy gradient algorithm, Alexandria Engineering Journal 117 (2025) 99-115. doi:https://doi.org/10.1016/j.aej.2024.12.110. URL https://www.sciencedirect.com/science/article/pii/S1110016824017095", + "[196] Anonymous, A generalist hanabi agent, in: Submitted to The Thirteenth International Conference on Learning Representations, 2024, under review. URL https://openreview.net/forum?id=pCj2sLNoJq", + "[197] F. Bredell, H. A. Engelbrecht, J. C. Schoeman, Augmenting the action space with conventions to improve multi-agent cooperation in hanabi (2024). arXiv:2412.06333. URL https://arxiv.org/abs/2412.06333", + "[198] M. Kölle, Y. Erpelding, F. Ritz, T. Phan, S. Illium, C. 
Linnhoff-Popien, Aquarium: A comprehensive framework for exploring predator-prey dynamics through multi-agent reinforcement learning algorithms (2024). arXiv:2401.07056. URL https://arxiv.org/abs/2401.07056", + "[199] A. Chatterjee, M. A. Abbasi, E. Venturino, J. Zhen, M. Haque, A predator-prey model with prey refuge: under a stochastic and deterministic environment, Nonlinear Dynamics 112 (15) (2024) 13667-13693. doi:10.1007/s11071-024-09756-9. URL https://doi.org/10.1007/s11071-024-09756-9", + "[200] Z. Mandi, S. Jain, S. Song, Roco: Dialectic multi-robot collaboration with large language models, in: 2024 IEEE International Conference on Robotics and Automation (ICRA), 2024, pp. 286-299. doi:10.1109/ICRA57147.2024.10610855.", + "[201] J. S. Park, J. O'Brien, C. J. Cai, M. R. Morris, P. Liang, M. S. Bernstein, Generative agents: Interactive simulacra of human behavior, in: Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST '23), ACM, ACM, 2023, p. October 2023. doi:10.1145/3586183.3606763.", + "[202] G. Kovac, R. Portelas, P. F. Dominey, P.-Y. Oudeyer, The social AI school: Insights from developmental psychology towards artificial socio-cultural agents, in: First Workshop on Theory of Mind in Communicating Agents, 2023. URL https://openreview.net/forum?id=Y5r8Wa670b", + "[203] G. Kovač, R. Portelas, P. F. Dominey, P.-Y. Oudeyer, The socialai school: a framework leveraging developmental psychol-" + ], + "bbox": [ + 105, + 133, + 480, + 865 + ], + "page_idx": 49 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "ogy toward artificial socio-cultural agents, Frontiers in Neurorobotics 18, part of the Research Topic: Theory of Mind in Robots and Intelligent Systems (2024). doi:10.3389/fnbot.2024.1396359.", + "[204] G. Mukobi, H. Erlebach, N. Lauffer, L. Hammond, A. Chan, J. Clifton, Welfare diplomacy: Benchmarking language model cooperation (2024). URL https://openreview.net/forum?id=AKJLnDgzkm", + "[205] J. Boubin, C. Burley, P. Han, B. Li, B. Porter, C. Stewart, Programming and deployment of autonomous swarms using multi-agent reinforcement learning (2021). arXiv:2105.10605. URL https://arxiv.org/abs/2105.10605", + "[206] T. Li, F. Xie, Y. Xiong, Q. Feng, Multi-arm robot task planning for fruit harvesting using multi-agent reinforcement learning (2023). arXiv:2303.00460. URL https://arxiv.org/abs/2303.00460", + "[207] A. Mahajan, S. Hegde, E. Shay, D. Wu, A. Prins, Comparative analysis of multi-agent reinforcement learning policies for crop planning decision support (2024). arXiv:2412.02057. URL https://arxiv.org/abs/2412.02057", + "[208] L. Benke, M. Papasimeon, T. Miller, Modelling strategic deceptive planning in adversarial multi-agent systems, in: S. Sarkadi, B. Wright, P. Masters, P. McBurney (Eds.), Deceptive AI, Springer International Publishing, Cham, 2021, pp. 76-83.", + "[209] A. V. Sutagundar, S. Manvi, Context aware multisensor image fusion for military sensor networks using multi-agent system, International Journal of Ad hoc, Sensor Ubiquitous Computing 2 (1) (2011) 147-167. doi:10.5121/ijasuc.2011.2113. URL http://dx.doi.org/10.5121/ijasuc.2011.2113", + "[210] S. Vangaru, D. Rosen, D. Green, R. Rodriguez, M. Wiecek, A. Johnson, A. M. Jones, W. C. Headley, A multi-agent reinforcement learning testbed for cognitive radio applications (2024). arXiv:2410.21521. URL https://arxiv.org/abs/2410.21521", + "[211] Z. Wang, L. Wang, Q. Yi, Y. 
Liu, A marl based multi-target tracking algorithm under jamming against radar (2024). arXiv:2412.12547. URL https://arxiv.org/abs/2412.12547", + "[212] A. Alexopoulos, T. Schmidt, E. Badreddin, Cooperative pursue in pursuit-evasion games with unmanned aerial vehicles, in: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015, pp. 4538-4543. doi:10.1109/IROS.2015.7354022.", + "[213] D. Luo, Z. Fan, Z. Yang, Y. Xu, Multi-uav cooperative maneuver decision-making for pursuit-evasion using improved madrl, Defence Technology 35 (2024) 187-197. doi:https://doi.org/10.1016/j.dt.2023.11.013. URL https://www.sciencedirect.com/science/article/pii/S221491472300301X", + "[214] D. Wang, Multi-agent reinforcement learning for safe driving in on-ramp merging of autonomous vehicles, in: 2024 14th International Conference on Cloud Computing, Data Science & Engineering (Confluence), 2024, pp. 644-651. doi:10.1109/Confluence60223.2024.10463500.", + "[215] N. Aboueleneen, Y. Bello, A. Albaseer, A. R. Hussein, M. Abdallah, E. Hossain, Distributed traffic control in complex dynamic roadblocks: A multi-agent deep rl approach (2024). arXiv:2501.00211. URL https://arxiv.org/abs/2501.00211", + "[216] J. Yu, L. Liang, C. Guo, Z. Guo, S. Jin, G. Y. Li, Heterogeneous multi-agent reinforcement learning for distributed channel access in wlans (2024). arXiv:2412.14218. URL https://arxiv.org/abs/2412.14218" + ], + "bbox": [ + 517, + 134, + 892, + 865 + ], + "page_idx": 49 + }, + { + "type": "page_number", + "text": "49", + "bbox": [ + 487, + 878, + 507, + 889 + ], + "page_idx": 49 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[217] K. Sun, H. Yu, Reinforcement learning for freeway lane-change regulation via connected vehicles (2024). arXiv: 2412.04341. URL https://arxiv.org/abs/2412.04341", + "[218] T. Azfar, R. Ke, Traffic co-simulation framework empowered by infrastructure camera sensing and reinforcement learning (2024). arXiv:2412.03925. URL https://arxiv.org/abs/2412.03925", + "[219] R. Bokade, X. Jin, Offlight: An offline multi-agent reinforcement learning framework for traffic signal control (2024). arXiv:2411.06601. URL https://arxiv.org/abs/2411.06601", + "[220] D. K. Kwesiga, S. C. Vishnoi, A. Guin, M. Hunter, Integrating transit signal priority into multi-agent reinforcement learning based traffic signal control (2024). arXiv:2411.19359. URL https://arxiv.org/abs/2411.19359", + "[221] Y. Zhang, G. Zheng, Z. Liu, Q. Li, H. Zeng, Marlens: Understanding multi-agent reinforcement learning for traffic signal control via visual analytics, IEEE Transactions on Visualization and Computer Graphics (2024) 1-16doi:10.1109/tvcg.2024.3392587. URL http://dx.doi.org/10.1109/TVCG.2024.3392587", + "[222] J. Xue, D. Zhang, R. Xiong, Y. Wang, E. Liu, A two-stage based social preference recognition in multi-agent autonomous driving system (2023). arXiv:2310.03303. URL https://arxiv.org/abs/2310.03303", + "[223] J. Liu, P. Hang, X. Na, C. Huang, J. Sun, Cooperative decision-making for cavs at unsignalized intersections: A marl approach with attention and hierarchical game priors (2024). arXiv: 2409.05712. URL https://arxiv.org/abs/2409.05712", + "[224] C. Formanek, L. Beyers, C. R. Tilbury, J. P. Shock, A. Pretorius, Putting data at the centre of offline multi-agent reinforcement learning (2024). arXiv:2409.12001. URL https://arxiv.org/abs/2409.12001", + "[225] R. Zhang, J. Hou, F. Walter, S. Gu, J. Guan, F. Röhrbein, Y. Du, P. Cai, G. Chen, A. 
Knoll, Multi-agent reinforcement learning for autonomous driving: A survey (2024). arXiv: 2408.09675. URL https://arxiv.org/abs/2408.09675", + "[226] S. Kotoku, T. Mihana, A. Röhm, R. Horisaki, Decentralized multi-agent reinforcement learning algorithm using a cluster-synchronized laser network (2024). arXiv:2407.09124. URL https://arxiv.org/abs/2407.09124", + "[227] M. Hua, D. Chen, K. Jiang, F. Zhang, J. Wang, B. Wang, Q. Zhou, H. Xu, Communication-efficient marl for platoon stability and energy-efficiency co-optimization in cooperative adaptive cruise control of cavs (2024). arXiv:2406.11653. URL https://arxiv.org/abs/2406.11653", + "[228] K. DeMarco, E. Squires, M. Day, C. Pippin, Simulating collaborative robots in a massive multi-agent game environment (scrimmage), in: N. Correll, M. Schwager, M. Otte (Eds.), Distributed Autonomous Robotic Systems, Springer International Publishing, Cham, 2019, pp. 283-297.", + "[229] Y. Xie, Y. Liu, Z. Ma, L. Shi, X. Wang, W. Yuan, M. O. Jackson, Q. Mei, How different ai chatbots behave? benchmarking large language models in behavioral economics games (2024). arXiv:2412.12362. URL https://arxiv.org/abs/2412.12362", + "[230] W. Lin, J. Roberts, Y. Yang, S. Albanie, Z. Lu, K. Han, Beyond outcomes: Transparent assessment of llm reasoning in games (2024). arXiv:2412.13602. URL https://arxiv.org/abs/2412.13602", + "[231] F. Jia, J. Zheng, F. Li, Decentralized intelligence in game:" + ], + "bbox": [ + 105, + 134, + 480, + 865 + ], + "page_idx": 50 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Embodied ai agents and the convergence of defi and virtual ecosystems (2024). arXiv:2412.18601. \nURL https://arxiv.org/abs/2412.18601", + "[232] C. Yin, G. Zhouhong, D. Zhaohan, Y. Zheyu, C. Shaosheng, X. Yiqian, F. Hongwei, C. Ping, Mirage: Exploring how large language models perform in complex social interactive environments (2025). arXiv:2501.01652. URL https://arxiv.org/abs/2501.01652", + "[233] Z. Zhang, Y. Lan, Y. Chen, L. Wang, X. Wang, H. Wang, Dvm: Towards controllable llm agents in social deduction games (2025). arXiv:2501.06695. URL https://arxiv.org/abs/2501.06695", + "[234] A. Gonzalez-Bonorino, M. Capra, E. Pantoja, Llms model nonweird populations: Experiments with synthetic cultural agents (2025). arXiv:2501.06834. URL https://arxiv.org/abs/2501.06834", + "[235] X. Luo, F. Ding, F. Yang, Y. Zhou, J. Loo, H. H. Tew, C. Liu, Sensorag: Constructing environmental knowledge bases with proactive querying for llm-based autonomous driving (2025). arXiv:2501.03535. URL https://arxiv.org/abs/2501.03535", + "[236] D. Mahmud, H. Hajmohamed, S. Almentheri, S. Alqaydi, L. Aldaheri, R. A. Khalil, N. Saeed, Integrating llms with its: Recent advances, potentials, challenges, and future directions (2025). arXiv:2501.04437. URL https://arxiv.org/abs/2501.04437", + "[237] Z. Peng, Y. Wang, X. Han, L. Zheng, J. Ma, Learningflow: Automated policy learning workflow for urban driving with large language models (2025). arXiv:2501.05057. URL https://arxiv.org/abs/2501.05057", + "[238] A. Karagounis, Leveraging large language models for enhancing autonomous vehicle perception (2024). arXiv:2412.20230. URL https://arxiv.org/abs/2412.20230", + "[239] X. Luo, F. Ding, R. Chen, R. Panda, J. Loo, S. Zhang, \"what's happening\"- a human-centered multimodal interpreter explaining the actions of autonomous vehicles (2025). arXiv:2501.05322. URL https://arxiv.org/abs/2501.05322", + "[240] H. Gao, Y. 
Zhao, Application of vision-language model to pedestrians behavior and scene understanding in autonomous driving (2025). arXiv:2501.06680. URL https://arxiv.org/abs/2501.06680", + "[241] D. Hegde, R. Yasarla, H. Cai, S. Han, A. Bhattacharyya, S. Mahajan, L. Liu, R. Garrepalli, V. M. Patel, F. Porkli, Distilling multi-modal large language models for autonomous driving (2025). arXiv:2501.09757. URL https://arxiv.org/abs/2501.09757", + "[242] R. Wang, X. He, R. Yu, W. Qiu, B. An, Z. Rabinovich, Learning efficient multi-agent communication: An information bottleneck approach, in: H. D. III, A. Singh (Eds.), Proceedings of the 37th International Conference on Machine Learning, Vol. 119 of Proceedings of Machine Learning Research, PMLR, 2020, pp. 9908-9918. URL https://proceedings.mlr.press/v119/wang20i.html", + "[243] A. van den Oord, O. Vinyals, K. Kavukcuoglu, Neural discrete representation learning, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, Curran Associates Inc., Red Hook, NY, USA, 2017, p. 6309-6318.", + "[244] F. Ye, J. Chen, Y. Tian, T. Jiang, Cooperative task assignment of a heterogeneous multi-uav system using an adaptive genetic algorithm, Electronics 9 (4) (2020). doi:10.3390/electronics9040687." + ], + "bbox": [ + 517, + 134, + 890, + 863 + ], + "page_idx": 50 + }, + { + "type": "page_number", + "text": "50", + "bbox": [ + 489, + 878, + 509, + 889 + ], + "page_idx": 50 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "URL https://www.mdpi.com/2079-9292/9/4/687", + "[245] J. Wang, D. Ye, Z. Lu, Mutual-information regularized multiagent policy iteration, in: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine (Eds.), Advances in Neural Information Processing Systems, Vol. 36, Curran Associates, Inc., 2023, pp. 2617-2635. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/0799492e7be38b66d10ead5e8809616d-Paper-Conference.pdf", + "[246] P. Li, H. Tang, T. Yang, X. Hao, T. Sang, Y. Zheng, J. Hao, M. E. Taylor, Z. Wang, Pmic: Improving multi-agent reinforcement learning with progressive mutual information collaboration, ICML 2022 (2022).", + "[247] S. Kang, Y. Lee, S.-Y. Yun, DPM: Dual preferences-based multi-agent reinforcement learning, in: ICML 2024 Workshop on Models of Human Feedback for AI Alignment, 2024. URL https://openreview.net/forum?id=TW3DIP2h5p", + "[248] Q. Fu, T. Qiu, J. Yi, Z. Pu, S. Wu, Concentration network for reinforcement learning of large-scale multi-agent systems, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36, AAAI Press, Beijing, China, 2022, p. 21165. doi:10.1609/aaai.v36i9.21165.", + "[249] B. Peng, T. Rashid, C. A. S. de Witt, P.-A. Kamienny, P. H. S. Torr, W. Böhmer, S. Whiteson, Facmac: Factored multi-agent centralised policy gradients (2021). arXiv:2003.06709. URL https://arxiv.org/abs/2003.06709", + "[250] K. Pandya, M. Holia, Automating customer service using langchain: Building custom open-source gpt chatbot for organizations (2023). arXiv:2310.05421. URL https://arxiv.org/abs/2310.05421", + "[251] H. Du, S. Thudumu, R. Vasa, K. Mouzakis, A survey on context-aware multi-agent systems: Techniques, challenges and future directions (2024). arXiv:2402.01968. URL https://arxiv.org/abs/2402.01968", + "[252] K. Hu, M. Li, Z. Song, K. Xu, Q. Xia, N. Sun, P. Zhou, M. Xia, A review of research on reinforcement learning algorithms for multi-agents, Neurocomputing 599 (2024) 128068. 
doi: https://doi.org/10.1016/j.neucom.2024.128068. URL https://www.sciencedirect.com/science/article/pii/S0925231224008397", + "[253] K.-T. Tran, D. Dao, M.-D. Nguyen, Q.-V. Pham, B. O'Sullivan, H. D. Nguyen, Multi-agent collaboration mechanisms: A survey of llms (2025). arXiv:2501.06322. URL https://arxiv.org/abs/2501.06322", + "[254] Y. Talebirad, A. Nadiri, Multi-agent collaboration: Harnessing the power of intelligent llm agents (2023). arXiv:2306.03314. URL https://arxiv.org/abs/2306.03314", + "[255] F. L. D. Silva, M. E. Taylor, A. H. R. Costa, Autonomously reusing knowledge in multiagent reinforcement learning, in: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, 2018, pp. 5487-5493.", + "[256] F. L. Da Silva, A. H. R. Costa, A survey on transfer learning for multiagent reinforcement learning systems, J. Artif. Int. Res. 64 (1) (2019) 645-703. doi:10.1613/jair.1.11396. URL https://doi.org/10.1613/jair.1.11396", + "[257] Y. Bengio, J. Louradour, R. Collobert, J. Weston, Curriculum learning, in: Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, Association for Computing Machinery, New York, NY, USA, 2009, p. 41-48. doi:10.1145/1553374.1553380. URL https://doi.org/10.1145/1553374.1553380", + "[258] X. Wang, Y. Chen, W. Zhu, A survey on curriculum learn" + ], + "bbox": [ + 105, + 133, + 500, + 865 + ], + "page_idx": 51 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "ing, IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (9) (2022) 4555-4576. doi:10.1109/TPAMI.2021.3069908.", + "[259] L. Pinto, J. Davidson, R. Sukthankar, A. Gupta, Robust adversarial reinforcement learning, in: Proceedings of the 34th International Conference on Machine Learning, PMLR, 2017, pp. 2817-2826.", + "[260] W. Jin, N. Wang, T. Tao, B. Shi, H. Bi, B. Zhao, H. Wu, H. Duan, G. Yang, A veracity dissemination consistency-based few-shot fake news detection framework by synergizing adversarial and contrastive self-supervised learning, Scientific Reports 14 (1) (2024) 19470. doi:10.1038/s41598-024-70039-9. URL https://doi.org/10.1038/s41598-024-70039-9", + "[261] J. Foerster, N. Nardelli, G. Farquhar, T. Afouras, P. H. S. Torr, P. Kohli, S. Whiteson, Stabilising experience replay for deep multi-agent reinforcement learning, in: Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, JMLR.org, 2017, p. 1146-1155.", + "[262] T. Guo, X. Chen, Y. Wang, R. Chang, S. Pei, N. V. Chawla, O. Wiest, X. Zhang, Large language model based multi-agents: A survey of progress and challenges, in: Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI '24), IJCAI, 2024, pp. 8048-8057. doi: 10.24963/ijcai.2024/890. URL https://doi.org/10.24963/ijcai.2024/890", + "[263] L. Huang, W. Yu, W. Ma, W. Zhong, Z. Feng, H. Wang, Q. Chen, W. Peng, X. Feng, B. Qin, T. Liu, A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions, ACM Transactions on Information Systems Accepted on 24 September 2024 (2024). doi: 10.1145/3703155. URL https://doi.org/10.1145/3703155", + "[264] E. Alonso, D. Kudenko, D. Kazakov (Eds.), Adaptive agents and multi-agent systems: adaptation and multi-agent learning, Springer-Verlag, Berlin, Heidelberg, 2003.", + "[265] K. Tuyls, P. J. Hoen, K. Verbeeck, S. 
Sen (Eds.), Learning and Adaption in Multi-Agent Systems, Vol. 3898 of Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, 2006. URL https://link.springer.com/book/10.1007/978-3-540-33316-2", + "[266] S. Chen, Y. Liu, W. Han, W. Zhang, T. Liu, A survey on llmbased multi-agent system: Recent advances and new frontiers in application (2025). arXiv:2412.17481. URL https://arxiv.org/abs/2412.17481", + "[267] V. Dibia, Multi-agent llm applications — a review of current research, tools, and challenges, accessed from Victor Dibia's newsletter (2023). URL https://newsletter.victordibia.com/p/multi-agent-llm-applications-a-review", + "[268] A. Chan, R. Salganik, A. Markelius, C. Pang, N. Rajkumar, D. Krasheninnikov, L. Langosco, Z. He, Y. Duan, M. Carroll, M. Lin, A. Mayhew, K. Collins, M. Molamohammadi, J. Burden, W. Zhao, S. Rismani, K. Voudouris, U. Bhatt, A. Weller, D. Krueger, T. Maharaj, Harms from increasingly agentic algorithmic systems, in: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT '23, Association for Computing Machinery, New York, NY, USA, 2023, p. 651-666. doi:10.1145/3593013.3594033. URL https://doi.org/10.1145/3593013.3594033", + "[269] A. Amirkhani, A. H. Barshooi, Consensus in multi-agent systems: A review, Artificial Intelligence Review 55 (5) (2022) 3897-3935, accessed: 2022-06-01. doi:10.1007/s10462-021-10097-x. URL https://doi.org/10.1007/s10462-021-10097-x" + ], + "bbox": [ + 517, + 134, + 890, + 865 + ], + "page_idx": 51 + }, + { + "type": "page_number", + "text": "51", + "bbox": [ + 489, + 879, + 507, + 889 + ], + "page_idx": 51 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[270] G. Beydoun, G. Low, H. Mouratidis, B. Henderson-Sellers, A security-aware metamodel for multi-agent systems (mas), Information and Software Technology 51 (5) (2009) 832-845, sPECIAL ISSUE: Model-Driven Development for Secure Information Systems. doi:https://doi.org/10.1016/j.infsof.2008.05.003. URL https://www.sciencedirect.com/science/article/pii/S0950584908000724", + "[271] X. Xie, C. Zhang, Y. Zhu, Y. N. Wu, S.-C. Zhu, Congestion-aware multi-agent trajectory prediction for collision avoidance, in: 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 13693-13700. doi:10.1109/ICRA48506.2021.9560994.", + "[272] A. Stooke, K. Lee, P. Abbeel, M. Laskin, Decoupling representation learning from reinforcement learning, in: Proceedings of the 38th International Conference on Machine Learning, PMLR, 2021, pp. 9870-9879. URL https://proceedings.mlr.press/v139/stooke21a.html", + "[273] J. Luketina, N. Nardelli, G. Farquhar, J. Foerster, J. Andreas, E. Grefenstette, S. Whiteson, T. Rocktäschel, A survey of reinforcement learning informed by natural language, in: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, 2019, pp. 6309-6317. doi:10.24963/ijcai.2019/880. URL https://doi.org/10.24963/ijcai.2019/880", + "[274] R. P. Poudel, H. Pandya, S. Liwicki, R. Cipolla, Recore: Regularized contrastive representation learning of world model, in: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 22904-22913. doi: 10.1109/CVPR52733.2024.02161.", + "[275] W. Choi, W. K. Kim, S. Kim, H. 
Woo, Efficient policy adaptation with contrastive prompt ensemble for embodied agents, in: Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Curran Associates Inc., Red Hook, NY, USA, 2024.", + "[276] F. Paischer, T. Adler, V. Patil, A. Bitto-Nemling, M. Holzleitner, S. Lehner, H. Eghbal-Zadeh, S. Hochreiter, History compression via language models in reinforcement learning, in: K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, S. Sabato (Eds.), Proceedings of the 39th International Conference on Machine Learning, Vol. 162 of Proceedings of Machine Learning Research, PMLR, 2022, pp. 17156-17185. URL https://proceedings.mlr.press/v162/paischer22a.html", + "[277] F. Paischer, T. Adler, M. Hofmacher, S. Hochreiter, Semantic HELM: A human-readable memory for reinforcement learning, in: Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=ebMPmx5mr7", + "[278] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, I. Sutskever, Learning transferable visual models from natural language supervision, in: M. Meila, T. Zhang (Eds.), Proceedings of the 38th International Conference on Machine Learning, Vol. 139 of Proceedings of Machine Learning Research, PMLR, 2021, pp. 8748-8763. URL https://proceedings.mlr.press/v139/radford21a.html", + "[279] A. van den Oord, Y. Li, O. Vinyals, Representation learning with contrastive predictive coding (2019). arXiv:1807.03748. URL https://arxiv.org/abs/1807.03748", + "[280] M. Laskin, A. Srinivas, P. Abbeel, Curl: contrastive unsupervised representations for reinforcement learning, in: Proceed-" + ], + "bbox": [ + 105, + 133, + 480, + 865 + ], + "page_idx": 52 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "ings of the 37th International Conference on Machine Learning, ICML'20, JMLR.org, 2020.", + "[281] M. Schwarzer, A. Anand, R. Goel, R. D. Hjelm, A. Courville, P. Bachman, Data-efficient reinforcement learning with self-predictive representations, in: International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=uCQfPZwRaUu", + "[282] M. Kwon, S. M. Xie, K. Bullard, D. Sadigh, Reward design with language models, in: The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=10uNUgI5K1", + "[283] J. Song, Z. Zhou, J. Liu, C. Fang, Z. Shu, L. Ma, Self-refined large language model as automated reward function designer for deep reinforcement learning in robotics (2023). arXiv: 2309.06687. URL https://arxiv.org/abs/2309.06687", + "[284] Y. Wu, Y. Fan, P. P. Liang, A. Azaria, Y. Li, T. M. Mitchell, Read and reap the rewards: Learning to play atari with the help of instruction manuals, in: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine (Eds.), Advances in Neural Information Processing Systems, Vol. 36, Curran Associates, Inc., 2023, pp. 1009-1023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/034d7bfeace2a9a258648b16fc626298-Paper-Conference.pdf", + "[285] T. Carta, C. Romac, T. Wolf, S. Lamprier, O. Sigaud, P.-Y. Oudeyer, Grounding large language models in interactive environments with online reinforcement learning, in: Proceedings of the 40th International Conference on Machine Learning, ICML'23, JMLR.org, 2023.", + "[286] K. Chu, X. Zhao, C. Weber, M. Li, S. 
Wermter, Accelerating reinforcement learning of robotic manipulations via feedback from large language models, in: Bridging the Gap between Cognitive Science and Robot Learning in the Real World: Progresses and New Directions, 2024. URL https://openreview.net/forum?id=MBeeqmD8Zk", + "[287] C. Kim, Y. Seo, H. Liu, L. Lee, J. Shin, H. Lee, K. Lee, Guide your agent with adaptive multimodal rewards, in: Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Curran Associates Inc., Red Hook, NY, USA, 2024.", + "[288] W. Yu, N. Gileadi, C. Fu, S. Kirmani, K.-H. Lee, M. G. Arenas, H.-T. L. Chiang, T. Erez, L. Hasenclever, J. Humplik, B. Ichter, T. Xiao, P. Xu, A. Zeng, T. Zhang, N. Heess, D. Sadigh, J. Tan, Y. Tassa, F. Xia, Language to rewards for robotic skill synthesis (2023). arXiv:2306.08647. URL https://arxiv.org/abs/2306.08647", + "[289] A. Adeniji, A. Xie, C. Sferrazza, Y. Seo, S. James, P. Abbeel, Language reward modulation for pretraining reinforcement learning (2024). URL https://openreview.net/forum?id $\\equiv$ SWRFC2Eup0", + "[290] A. Madaan, N. Tandon, P. Gupta, S. Hallinan, L. Gao, S. Wegreffe, U. Alon, N. Dziri, S. Prabhumoye, Y. Yang, S. Gupta, B. P. Majumder, K. Hermann, S. Welleck, A. Yazdanbakhsh, P. Clark, Self-refine: Iterative refinement with self-feedback (2023). arXiv:2303.17651. URL https://arxiv.org/abs/2303.17651", + "[291] Y. J. Ma, W. Liang, G. Wang, D.-A. Huang, O. Bastani, D. Jayaraman, Y. Zhu, L. Fan, A. Anandkumar, Eureka: Human-level reward design via coding large language models, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=IEduRU055F" + ], + "bbox": [ + 517, + 134, + 910, + 865 + ], + "page_idx": 52 + }, + { + "type": "page_number", + "text": "52", + "bbox": [ + 489, + 879, + 509, + 889 + ], + "page_idx": 52 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[292] T. Xie, S. Zhao, C. H. Wu, Y. Liu, Q. Luo, V. Zhong, Y. Yang, T. Yu, Text2reward: Reward shaping with language models for reinforcement learning, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=tUM39YTRxH", + "[293] M. Janner, Q. Li, S. Levine, Offline reinforcement learning as one big sequence modeling problem, in: Proceedings of the 35th International Conference on Neural Information Processing Systems, NIPS '21, Curran Associates Inc., Red Hook, NY, USA, 2024.", + "[294] R. Shi, Y. Liu, Y. Ze, S. S. Du, H. Xu, Unleashing the power of pre-trained language models for offline reinforcement learning, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=AY6aM13gGF", + "[295] S. Li, X. Puig, C. Paxton, Y. Du, C. Wang, L. Fan, T. Chen, D.-A. Huang, E. Akyurek, A. Anandkumar, J. Andreas, I. Mordatch, A. Torralba, Y. Zhu, Pre-trained language models for interactive decision-making, in: Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Curran Associates Inc., Red Hook, NY, USA, 2024.", + "[296] L. Mezghani, P. Bojanowski, K. Alahari, S. Sukhbaatar, Think before you act: Unified policy for interleaving language reasoning with actions, in: Workshop on Reincarnating Reinforcement Learning at ICLR 2023, 2023. URL https://openreview.net/forum?id=HQmhQIi6mN", + "[297] J. Grigsby, L. Fan, Y. Zhu, Amago: Scalable in-context reinforcement learning for adaptive agents (2024). arXiv:2310.09971. 
URL https://arxiv.org/abs/2310.09971", + "[298] B. Zitkovich, T. Yu, S. Xu, P. Xu, T. Xiao, F. Xia, J. Wu, P. Wohlhart, S. Welker, A. Wahid, Q. Vuong, Vanhoucke, Rt-2: Vision-language-action models transfer web knowledge to robotic control, in: J. Tan, M. Toussaint, K. Darvish (Eds.), Proceedings of The 7th Conference on Robot Learning, Vol. 229 of Proceedings of Machine Learning Research, PMLR, 2023, pp. 2165–2183.", + "[299] S. Yao, R. Rao, M. Hausknecht, K. Narasimhan, Keep CALM and explore: Language models for action generation in text-based games, in: B. Webber, T. Cohn, Y. He, Y. Liu (Eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Online, 2020, pp. 8736-8754. doi:10.18653/v1/2020.emnlp-main.704. URL https://aclanthology.org/2020.emnlp-main.704/", + "[300] H. Hu, D. Sadigh, Language instructed reinforcement learning for human-ai coordination, in: Proceedings of the 40th International Conference on Machine Learning, ICML'23, JMLR.org, 2023.", + "[301] Z. Zhou, B. Hu, C. Zhao, P. Zhang, B. Liu, Large language model as a policy teacher for training reinforcement learning agents, in: Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI '24, 2025. doi: 10.24963/ijcai.2024/627. URL https://doi.org/10.24963/ijcai.2024/627", + "[302] C. Chen, J. Yoon, Y.-F. Wu, S. Ahn, Transdreamer: Reinforcement learning with transformer world models, in: Deep RL Workshop NeurIPS 2021, 2021. URL https://openreview.net/forum?id=sVrzVAL90sA", + "[303] D. Das, S. Chernova, B. Kim, State2explanation: Concept-based explanations to benefit agent learning and user understanding, in: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine (Eds.), Advances in Neural" + ], + "bbox": [ + 105, + 133, + 480, + 864 + ], + "page_idx": 53 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Information Processing Systems, Vol. 36, Curran Associates, Inc., 2023, pp. 67156-67182. \nURL https://proceedings.neurips.cc/paper_files/paper/2023/file/d4387c37b3b06e55f86eccdb8cd1f829-Paper-Conference.pdf", + "[304] J. Lin, Y. Du, O. Watkins, D. Hafner, P. Abbeel, D. Klein, A. Dragan, Learning to model the world with language (2024). arXiv:2308.01399. URL https://arxiv.org/abs/2308.01399", + "[305] J. Robine, M. Höftmann, T. Uelwer, S. Harmeling, Transformer-based world models are happy with 100k interactions, in: ICLR, 2023. URL https://openreview.net/forum?id=TdBaDGCpjly", + "[306] R. P. K. Poudel, H. Pandya, C. Zhang, R. Cipolla, Langwm: Language grounded world model (2023). arXiv:2311.17593. URL https://arxiv.org/abs/2311.17593", + "[307] J. Lin, Y. Du, O. Watkins, D. Hafner, P. Abbeel, D. Klein, A. Dragan, Learning to model the world with language (2024). URL https://openreview.net/forum?id=eWL0oaShEH", + "[308] Z. Yang, K. Ren, X. Luo, M. Liu, W. Liu, J. Bian, W. Zhang, D. Li, Towards applicable reinforcement learning: Improving the generalization and sample efficiency with policy ensemble, in: L. D. Raedt (Ed.), Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, International Joint Conferences on Artificial Intelligence Organization, 2022, pp. 3659-3665, main Track. doi: 10.24963/ijcai.2022/508. 
URL https://doi.org/10.24963/ijcai.2022/508" + ], + "bbox": [ + 517, + 134, + 912, + 483 + ], + "page_idx": 53 + }, + { + "type": "page_number", + "text": "53", + "bbox": [ + 489, + 879, + 507, + 890 + ], + "page_idx": 53 + } +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13415/d6bcf737-8470-4ed5-949a-6d04e53f6bf7_model.json b/data/2025/2503_13xxx/2503.13415/d6bcf737-8470-4ed5-949a-6d04e53f6bf7_model.json new file mode 100644 index 0000000000000000000000000000000000000000..6eac50db608e38eda32771f9508f59eea2e67d5e --- /dev/null +++ b/data/2025/2503_13xxx/2503.13415/d6bcf737-8470-4ed5-949a-6d04e53f6bf7_model.json @@ -0,0 +1,10241 @@ +[ + [ + { + "type": "title", + "bbox": [ + 0.105, + 0.129, + 0.212, + 0.149 + ], + "angle": 0, + "content": "Highlights" + }, + { + "type": "title", + "bbox": [ + 0.103, + 0.178, + 0.893, + 0.208 + ], + "angle": 0, + "content": "A Comprehensive Survey on Multi-Agent Cooperative Decision-Making: Scenarios, Approaches, Challenges and Perspectives" + }, + { + "type": "text", + "bbox": [ + 0.104, + 0.231, + 0.655, + 0.248 + ], + "angle": 0, + "content": "Weiqiang Jin, Hongyang Du, Biao Zhao, Xingwu Tian, Bohang Shi, Guang Yang" + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.367, + 0.483, + 0.396 + ], + "angle": 0, + "content": "- Provides a comprehensive survey of multi-agent decision-making methods." + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.474, + 0.483, + 0.503 + ], + "angle": 0, + "content": "- Analyzes key simulation environments for multi-agent reinforcement learning." + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.581, + 0.483, + 0.611 + ], + "angle": 0, + "content": "- Investigate decision-making approaches, including MARL and large language models." + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.688, + 0.483, + 0.717 + ], + "angle": 0, + "content": "- Identifies challenges and future research directions in multi-agent collaboration." + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.796, + 0.482, + 0.825 + ], + "angle": 0, + "content": "- Reviews real-world applications in transportation, aerial systems, and automation." 
+ }, + { + "type": "list", + "bbox": [ + 0.121, + 0.367, + 0.483, + 0.825 + ], + "angle": 0, + "content": null + }, + { + "type": "aside_text", + "bbox": [ + 0.023, + 0.299, + 0.061, + 0.725 + ], + "angle": 270, + "content": "arXiv:2503.13415v1 [cs.MA] 17 Mar 2025" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.143, + 0.14, + 0.853, + 0.181 + ], + "angle": 0, + "content": "A Comprehensive Survey on Multi-Agent Cooperative Decision-Making: Scenarios, Approaches, Challenges and Perspectives" + }, + { + "type": "text", + "bbox": [ + 0.186, + 0.2, + 0.812, + 0.216 + ], + "angle": 0, + "content": "Weiqiang Jina, Hongyang Du, Biao Zhaoa,\\*, Xingwu Tiana, Bohang Shi a, Guang Yang c,d,e,f,*" + }, + { + "type": "text", + "bbox": [ + 0.125, + 0.225, + 0.872, + 0.294 + ], + "angle": 0, + "content": "\\(^{a}\\)School of Information and Communications Engineering, Xi'an Jiaotong University, Innovation Harbour, Xi'an, 710049, Shaanxi, China \n\\(^{b}\\)Department of Electrical and Electronic Engineering, The University of Hong Kong (HKU), Hong Kong, Hong Kong, China \n\\(^{c}\\)Bioengineering Department and Imperial-X, Imperial College London, London, W12 7SL, UK \n\\(^{d}\\)National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK \n\\(^{e}\\)Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK \n\\(^{f}\\)School of Biomedical Engineering & Imaging Sciences, King's College London, London, WC2R 2LS, UK" + }, + { + "type": "title", + "bbox": [ + 0.105, + 0.348, + 0.172, + 0.36 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.368, + 0.893, + 0.596 + ], + "angle": 0, + "content": "With the rapid development of artificial intelligence, intelligent decision-making techniques have gradually surpassed human levels in various human-machine competitions, especially in complex multi-agent cooperative task scenarios. Multi-agent cooperative decision-making involves multiple agents working together to complete established tasks and achieve specific objectives. These techniques are widely applicable in real-world scenarios such as autonomous driving, drone navigation, disaster rescue, and simulated military confrontations. This paper begins with a comprehensive survey of the leading simulation environments and platforms used for multi-agent cooperative decision-making. Specifically, we provide an in-depth analysis for these simulation environments from various perspectives, including task formats, reward allocation, and the underlying technologies employed. Subsequently, we provide a comprehensive overview of the mainstream intelligent decision-making approaches, algorithms and models for multi-agent systems (MAS). These approaches can be broadly categorized into five types: rule-based (primarily fuzzy logic), game theory-based, evolutionary algorithms-based, deep multi-agent reinforcement learning (MARL)-based, and large language models (LLMs) reasoning-based. Given the significant advantages of MARL and LLMs-based decision-making methods over the traditional rule, game theory, and evolutionary algorithms, this paper focuses on these multi-agent methods utilizing MARL and LLMs-based techniques. We provide an in-depth discussion of these approaches, highlighting their methodology taxonomies, advantages, and drawbacks. Further, several prominent research directions in the future and potential challenges of multi-agent cooperative decision-making are also detailed." 
+ }, + { + "type": "title", + "bbox": [ + 0.104, + 0.605, + 0.179, + 0.618 + ], + "angle": 0, + "content": "Keywords:" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.619, + 0.888, + 0.647 + ], + "angle": 0, + "content": "Intelligent decision-making, Multi-agent systems, Multi-agent cooperative environments, Multi-agent reinforcement learning, Large language models." + }, + { + "type": "title", + "bbox": [ + 0.104, + 0.674, + 0.221, + 0.687 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "title", + "bbox": [ + 0.104, + 0.706, + 0.481, + 0.734 + ], + "angle": 0, + "content": "1.1. Research Backgrounds of Multi-Agent Decision-Making" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.746, + 0.481, + 0.774 + ], + "angle": 0, + "content": "With the continuous advancement of science and technology, intelligent decision-making technology has" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.674, + 0.892, + 0.815 + ], + "angle": 0, + "content": "made rapid progress. These technologies have gradually surpassed human capabilities in various human-machine game competitions, even exceeding the top human levels. Over the past few decades, especially following the successful application of Deep Q-Networks (DQN) [1, 2] in the Arita game and the victories of AlphaGo and AlphaZero [3, 4] over top human opponents, these landmark achievements have significantly propelled the advancement of intelligent decision-making research." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.829, + 0.892, + 0.871 + ], + "angle": 0, + "content": "To meet the growing complexity of real-world applications and the increasing demand for more sophisticated, reliable, and efficient intelligent systems, multi" + }, + { + "type": "page_footnote", + "bbox": [ + 0.103, + 0.802, + 0.48, + 0.871 + ], + "angle": 0, + "content": "*Corresponding authors: Biao Zhao and Guang Yang. Email addresses: weiqiangjin@stu.xjtu.edu.cn (Weiqiang Jin), duhy@hku.hk (Hongyang Du), biaozhao@xjtu.edu.cn (Biao Zhao), txw_xjtu@163.com (Xingwu Tian), Bh_567@stu.xjtu.edu.cn (Bohang Shi), g.yang@imperial.ac.uk (Guang Yang)" + }, + { + "type": "footer", + "bbox": [ + 0.104, + 0.88, + 0.327, + 0.892 + ], + "angle": 0, + "content": "Preprint submitted to Information fusion" + }, + { + "type": "footer", + "bbox": [ + 0.803, + 0.88, + 0.892, + 0.891 + ], + "angle": 0, + "content": "March 18, 2025" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.107, + 0.128, + 0.896, + 0.397 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.154, + 0.408, + 0.842, + 0.421 + ], + "angle": 0, + "content": "Figure 1: An overview of the evolution of scenarios and methods in decision-making from single-agent to multi-agent systems." + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.444, + 0.483, + 0.557 + ], + "angle": 0, + "content": "agent cooperative decision-making has rapidly evolved from simple single-agent scenarios [5, 6, 7, 8]. Multiagent cooperative decision-making is a crucial subfield within machine learning (ML) [9] and artificial intelligence (AI) [10]. It involves multiple interacting agents working together to complete established tasks across diverse well-designed dynamic simulated environments and various complex real-world systems." 
+ }, + { + "type": "text", + "bbox": [ + 0.102, + 0.558, + 0.483, + 0.814 + ], + "angle": 0, + "content": "As depicted in Figure 1, the evolution research progress from single-agent to multi-agent decision-making systems, along with methodological comparisons, highlights that this rapidly advancing field is a crucial step toward achieving human-level AI and the Artificial General Intelligence (AGI) age. Multiagent cooperative decision-making has a wide range of practical applications and many fundamental theoretical works across various domains. The service scenarios are extensive, encompassing smart agriculture management [11, 12], intelligent collaborative robots [13, 14, 15, 16], self-driving collaborative obstacle avoidance [17, 18, 19], autonomous navigation [20, 21, 22] as well as joint rescue tasks [12, 23]. Correspondingly, considering the rapid pace of technological advancement and the multifaceted needs of the real world, in this work, we focus on the comprehensive study of multi-agent cooperative decision-making." + }, + { + "type": "title", + "bbox": [ + 0.104, + 0.826, + 0.429, + 0.841 + ], + "angle": 0, + "content": "1.2. Overview of Previous Multi-Agent Surveys" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.843, + 0.483, + 0.871 + ], + "angle": 0, + "content": "Concurrent with the fast-paced advancements in multi-agent cooperative decision-making, there has" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.444, + 0.893, + 0.514 + ], + "angle": 0, + "content": "been a marked increase in systematic literature reviews in this domain [24, 6, 8, 25]. These reviews have covered a wide range of topics, from theoretical innovations to practical applications, providing a comprehensive overview of the state-of-the-art." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.516, + 0.895, + 0.872 + ], + "angle": 0, + "content": "Ning et al. [25] provided a comprehensive overview of the evolution, challenges, and applications of multi-agent reinforcement learning (MARL)-based intelligent agents, including its practical implementation aspects. Gronauer et al. [6] provided an overview of recent developments in multi-agent deep reinforcement learning, focusing on training schemes, emergent agent behaviors, and the unique challenges of the multi-agent domain, while also discussing future research directions. Yang et al. [26] explored the utility theory application in AI robotics, focusing on how utility AI models can guide decision-making and cooperation in multi-agent/robot systems. Orr et al. [8] reviewed recent advancements in MARL, particularly its applications in multi-robot systems, while discussing current challenges and potential future applications. Du et al. [24] provided a systematic overview of multiagent deep reinforcement learning for MAS, focusing on its challenges, methodologies, and applications. Pamul et al. [7] provided a comprehensive analysis of the application of MARL in connected and automated vehicles (CAVs), identifying current developments, existing research directions, and challenges. Hernandez-Leal et al. 
[27] provided a comprehensive overview of approaches to addressing opponent-induced non" + }, + { + "type": "page_number", + "bbox": [ + 0.493, + 0.879, + 0.506, + 0.891 + ], + "angle": 0, + "content": "2" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.102, + 0.133, + 0.482, + 0.262 + ], + "angle": 0, + "content": "stationarity in multi-agent learning, categorizing algorithms into a new framework and offering insights into their effectiveness across different environments. The survey by Zhu et al. [28] provided a systematic classification and analysis of MARL systems that incorporate communication, encompassing recent advanced CommMARL research and identifying key dimensions that influence the design and development of these multi-agent systems." + }, + { + "type": "title", + "bbox": [ + 0.105, + 0.275, + 0.373, + 0.289 + ], + "angle": 0, + "content": "1.3. Motivations of the Current Survey" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.293, + 0.482, + 0.378 + ], + "angle": 0, + "content": "However, despite the growing body of work in this field, existing surveys often have noticeable limitations [24, 6, 25, 28]. Specifically, our thorough investigation reveals that most current reviews and surveys share several common and significant significant drawbacks and limitations:" + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.388, + 0.482, + 0.459 + ], + "angle": 0, + "content": "- Limited Research Scope: Previous literature reviews [27, 28] predominantly remain within the primary framework of reinforcement learning and have not broken through theoretical limitations, resulting in a lack of comprehensive coverage." + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.469, + 0.482, + 0.554 + ], + "angle": 0, + "content": "- Neglect of Environments: Previous literature reviews [29, 6, 30] have largely concentrated on methodological and algorithmic advancements, frequently overlooking the essential role of simulation environments and platforms in multi-agent intelligent decision-making." + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.563, + 0.482, + 0.649 + ], + "angle": 0, + "content": "- Under-emphasis of Project Implementation: Prior surveys [25, 28, 30] often focus on theoretical models and overlook detailed implementation aspects, including code-bases and project architectures. This gap limits readers' ability to fully understand and apply the findings." + }, + { + "type": "list", + "bbox": [ + 0.121, + 0.388, + 0.482, + 0.649 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.658, + 0.482, + 0.715 + ], + "angle": 0, + "content": "To address the aforementioned limitations and challenges, we recognize the need for more systematic and comprehensive reviews in the multi-agent intelligent decision-making field." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.715, + 0.482, + 0.871 + ], + "angle": 0, + "content": "Firstly, current reviews overly emphasize deep reinforcement learning and fail to adequately consider other potentially effective intelligent decision-making methods [24, 5, 7, 28]. Secondly, with the rapid development of large language models (LLMs), their potential in natural language processing, knowledge representation, and complex decision-making has become increasingly apparent. However, current surveys have largely overlooked their integration. Additionally, existing reviews often neglect the critical role of simulation environments in the development of multi-agent systems." 
+ }, + { + "type": "text", + "bbox": [ + 0.513, + 0.133, + 0.894, + 0.346 + ], + "angle": 0, + "content": "However, simulation environments are not merely auxiliary tools but are an integral part of the MAS development and evaluation process. The agents' learning and decision-making processes are influenced and constrained by these environments, making it equally important to understand and develop these environments as it is to focus on the algorithms themselves. Finally, the lack of attention to practical implementation details in current reviews has resulted in a disconnect between theory and practice. This survey will delve into the specifics of project implementation, including code structures, system architecture, and the challenges encountered during development, to enhance research reproducibility and facilitate the effective translation of theoretical research into practical applications." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.347, + 0.893, + 0.543 + ], + "angle": 0, + "content": "Building on the motivations outlined earlier, this survey extends beyond the scope of previous reviews, which were often limited to specific areas of discussion. We treat multi-agent environments as equally important components, alongside the methods and techniques, and provide a thorough introduction to the most advanced algorithms and simulation environments. Moreover, we categorize various multi-agent cooperative decision-making methods from a more fundamental implementation perspective. In summary, this survey seeks to provide a more comprehensive and practical framework for research in multi-agent cooperative decision-making, thereby advancing the continuous development of this critical field." + }, + { + "type": "title", + "bbox": [ + 0.515, + 0.558, + 0.862, + 0.571 + ], + "angle": 0, + "content": "1.4. The Survey Overview/Contents Organization" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.573, + 0.895, + 0.872 + ], + "angle": 0, + "content": "As depicted in Figure 2, we have structured the survey to reflect our research approach, with each main and sub-branch corresponding to a specific part: First, in Section 1, we introduce the research background of multi-agent cooperative decision-making, discuss the drawbacks of previous surveys, and outline the organizational structure of this survey. Given that MARL and LLMs-based intelligent decision-making methods demonstrate significant advantages and future potential, our primary attentions are placed on Deep MARL-based and LLMs-based methods due to their superior ability to manage dynamic and uncertain environments. In Section 2, we then delve into mainstream intelligent decision-making approaches, algorithms, and models. We categorize these approaches, with a continued focus on MARL-based and LLMs-based methods, discussing their methodologies, advantages, and limitations. Following this, in Section 3, we provide an in-depth analysis of the leading simulation environments and platforms for multi-agent cooperative decision-making, again focusing on Deep MARL-based and LLMs-based" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.88, + 0.505, + 0.89 + ], + "angle": 0, + "content": "3" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.107, + 0.131, + 0.896, + 0.438 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.457, + 0.893, + 0.482 + ], + "angle": 0, + "content": "Figure 2: Illustration of our systematic review of multi-agent intelligent decision-making research. 
Compared to previous reviews, we have incorporated comprehensive introduction and analysis, with each segment corresponding to a specific chapter in the survey." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.504, + 0.482, + 0.591 + ], + "angle": 0, + "content": "methods. Furthermore, in Section 4, we discuss the practical applications of multi-agent decision-making systems, such as autonomous driving, UAV navigation, and collaborative robotics. Finally, in Sections 5 and 6, we explore the potential challenges and future research directions of multi-agent cooperative decision-making." + }, + { + "type": "title", + "bbox": [ + 0.105, + 0.605, + 0.312, + 0.619 + ], + "angle": 0, + "content": "1.5. How to read this survey?" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.622, + 0.482, + 0.707 + ], + "angle": 0, + "content": "This survey caters to a diverse readership, each with varying levels of expertise and interest in different aspects of multi-agent decision-making systems. To help readers efficiently find the content that interests them, we offer the following guide, providing direction based on different topics:" + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.72, + 0.482, + 0.833 + ], + "angle": 0, + "content": "- For those interested in rule (fuzzy logic)-based, game theory-based, and evolutionary algorithm-based decision-making research, please refer to Section 2.2.1, 2.2.2, and 2.2.3. This section provides a comprehensive analysis of the rule and game-based methods in multi-agent systems, detailing their corresponding technological taxonomies, features, and limitations." + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.843, + 0.482, + 0.871 + ], + "angle": 0, + "content": "- For those interested in MARL-based decision-making research, please refer to Section 2.3. This" + }, + { + "type": "list", + "bbox": [ + 0.121, + 0.72, + 0.482, + 0.871 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.547, + 0.504, + 0.893, + 0.561 + ], + "angle": 0, + "content": "section provides a comprehensive analysis of the deep MARL-based methods in multi-agent systems, detailing their corresponding technological taxonomies, advantages, and limitations." + }, + { + "type": "text", + "bbox": [ + 0.533, + 0.57, + 0.893, + 0.67 + ], + "angle": 0, + "content": "- If you are focused on decision-making based on LLMs, Section 2.4 will offer you an in-depth exploration, with the corresponding technological taxonomies, advantages, and limitations. This part discusses the unique capabilities of LLMs in multiagent environments and their potential applications, especially in reasoning and decision-making." + }, + { + "type": "text", + "bbox": [ + 0.533, + 0.678, + 0.893, + 0.763 + ], + "angle": 0, + "content": "- For readers focused on the well-known simulation environments of MAS, we suggest reading Section 3, which primarily covers an introduction to MARL-based Simulation Environments (Section 3.1) and LLMs Reasoning-based Simulation Environments (Section 3.2)." + }, + { + "type": "text", + "bbox": [ + 0.533, + 0.772, + 0.893, + 0.87 + ], + "angle": 0, + "content": "- If your interest lies in the practical applications of multi-agent decision-making systems, Section 4 will be of particular relevance. This section offers a detailed discussion of how these systems are applied across various fields, such as autonomous driving, UAV navigation, and collaborative robotics." 
+ }, + { + "type": "list", + "bbox": [ + 0.533, + 0.57, + 0.893, + 0.87 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.88, + 0.505, + 0.89 + ], + "angle": 0, + "content": "4" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.121, + 0.133, + 0.481, + 0.218 + ], + "angle": 0, + "content": "- If you are interested in the challenges and problems faced by existing multi-agent decision-making methods, Section 5 provides an in-depth discussion, exploring the limitations of current approaches and unresolved issues in the field, offering insights into these challenges." + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.228, + 0.481, + 0.326 + ], + "angle": 0, + "content": "- Finally, if you wish to learn about future research directions and the prospects for multi-agent decision-making technique, we recommend reading Section 6. This section looks ahead to future research trends and potential breakthroughs, exploring key directions that could drive the field forward." + }, + { + "type": "title", + "bbox": [ + 0.104, + 0.349, + 0.434, + 0.364 + ], + "angle": 0, + "content": "2. Multi-Agent Decision-Making Taxonomies" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.374, + 0.481, + 0.685 + ], + "angle": 0, + "content": "This section discusses the taxonomies of decision-making in multi-agent systems and their related techniques. The multi-agent cooperative decision-making methods can be broadly classified into five categories: rule-based (primarily fuzzy logic), game theory-based, evolutionary algorithms-based methods, MARL-based approaches, and LLMs-based methods [31]. Although these rule-based, game theory-based, and evolutionary algorithms-based solutions demonstrate a degree of effectiveness, they typically rely heavily on pre-designed strategies and assumptions. This dependence limits their adaptability to changing and complex environments and ill-suited for handling highly dynamic and uncertain scenarios. In contrast, DRL-based and LLMs reasoning-based solutions offer more dynamic and flexible approaches, capable of learning and adapting to new strategies on the fly. Therefore, these methods have significant advantages in dealing with dynamic and uncertain environments. Thus, special research attentions are placed on DRL-based and LLMs-based methods due to their significant advantages in handling dynamic and uncertain environments." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.687, + 0.481, + 0.8 + ], + "angle": 0, + "content": "The analysis is conducted from multiple perspectives, including agent interaction dynamics, mainstream paradigms of cooperative decision-making, MARL (multi-agent reinforcement learning), and LLM (large language model)-driven multi-agent systems, aiming to provide a systematic framework and technical foundation for the design and optimization of multi-agent decision-making." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.8, + 0.481, + 0.871 + ], + "angle": 0, + "content": "Specifically, Section 2.1 analyzes agent interaction dynamics in MAS, categorizing them into four typical types: fully cooperative, fully competitive, mixed cooperative-competitive, and self-interested, while discussing their impact on overall system behavior. 
Subse" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.133, + 0.893, + 0.19 + ], + "angle": 0, + "content": "sequently, Sections 2.2, 2.3, and 2.4 introduce mainstream paradigms of cooperative decision-making, MARL-based decision-making methods, and LLMs-based multi-agent systems, respectively." + }, + { + "type": "title", + "bbox": [ + 0.515, + 0.203, + 0.893, + 0.231 + ], + "angle": 0, + "content": "2.1. Agent Interaction Dynamics for Multi-Agent Systems" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.235, + 0.893, + 0.349 + ], + "angle": 0, + "content": "In multi-agent systems, the nature of interaction among agents can be categorized into distinct relational dynamics, i.e., agent interaction dynamics, each influencing the overall system behavior and outcomes. These dynamics are critical to understanding and designing intelligent systems where agents operate within shared environments. Below, we detail the primary types of agent relationships:" + }, + { + "type": "text", + "bbox": [ + 0.529, + 0.356, + 0.893, + 0.485 + ], + "angle": 0, + "content": "1. Fully Cooperative: In this scenario setting, all agents have aligned objectives, meaning they share identical reward structures and work towards a common goal. The agents operate with complete cooperation, aiming to maximize collective benefits. This relationship is typical in systems where synergy is essential, and the success of one agent directly contributes to the success of others, ensuring mutual reinforcement of strategies and actions." + }, + { + "type": "text", + "bbox": [ + 0.529, + 0.486, + 0.893, + 0.626 + ], + "angle": 0, + "content": "2. Fully Competitive: This relationship is characterized by a zero-sum game dynamic, where the gain of one agent directly translates into the loss of another. Agents are in direct opposition, with their objectives fundamentally misaligned. This is commonly observed in competitive environments such as robotic competitions, where agents are designed to outperform each other, and success is measured relative to the failure or underperformance of others." + }, + { + "type": "text", + "bbox": [ + 0.529, + 0.629, + 0.893, + 0.798 + ], + "angle": 0, + "content": "3. Mixed Cooperative and Competitive: In most real-world scenarios, agents may engage in both cooperation and competition simultaneously. This type of relationship is evident in team-based environments like robotic soccer, where agents within the same team cooperate to achieve a shared objective (e.g., scoring goals), but simultaneously compete against agents from opposing teams. The complexity of such systems lies in balancing internal cooperation with external competition, often requiring sophisticated strategies to optimize outcomes at both individual and collective levels." + }, + { + "type": "text", + "bbox": [ + 0.528, + 0.8, + 0.893, + 0.871 + ], + "angle": 0, + "content": "4. Self-Interested: In self-interested dynamics, each agent acts primarily to maximize its own utility, with little regard for the impact on others. An agent's actions might incidentally benefit or harm other agents, but these effects are not a concern for" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.88, + 0.505, + 0.89 + ], + "angle": 0, + "content": "5" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.137, + 0.133, + 0.481, + 0.233 + ], + "angle": 0, + "content": "the self-interested agent. 
This relationship is pertinent in scenarios where agents are designed to prioritize personal gain over collective welfare, often leading to outcomes where the overall system efficiency is not necessarily optimized, as individual agents do not account for the potential externalities of their actions on the environment or other agents." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.242, + 0.481, + 0.354 + ], + "angle": 0, + "content": "Overall, these agent interaction dynamics is crucial for the design and analysis of multi-agent systems, as they directly impact the strategies employed by agents and the overall system performance. The complexity of agent interactions in mixed or self-interested relationships often requires advanced coordination mechanisms and incentive structures to manage potential conflicts and ensure desired outcomes." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.356, + 0.481, + 0.468 + ], + "angle": 0, + "content": "While the overarching concept of agent interaction dynamics holds some value, this survey focuses more on analyzing the characteristics of these methods from a technical and scientific perspective. Therefore, distinguishing relationships is not the primary emphasis of this study. Instead, we will proceed with a more comprehensive taxonomy of Multi-Agent Decision-making Systems." + }, + { + "type": "title", + "bbox": [ + 0.104, + 0.483, + 0.481, + 0.512 + ], + "angle": 0, + "content": "2.2. Mainstream Paradigms of Multi-Agent Cooperative Decision-Making" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.515, + 0.483, + 0.743 + ], + "angle": 0, + "content": "In multi-agent cooperative decision-making, several mainstream paradigms exist, each leveraging different techniques to tackle challenges associated with coordination, learning, adaptability, and optimization among autonomous agents. These paradigms utilize diverse approaches, including rule-based (primarily fuzzy logic) systems [32, 33, 34, 35, 36, 37], game theory-based [38, 39, 40, 41, 42, 43], evolutionary algorithms-based [44, 45, 46, 47, 48, 49], MARL-based [50, 51, 52, 53, 54, 55, 56], and LLMs-based [57, 30, 58, 59] multiagent decision-making systems. Each of these methods has distinct strengths and applications, depending on the problem context and the complexity of interactions between agents. For a comprehensive overview, please refer to Table 1, which provides a detailed classification of these paradigms." + }, + { + "type": "title", + "bbox": [ + 0.104, + 0.757, + 0.406, + 0.77 + ], + "angle": 0, + "content": "2.2.1. Rule-Based (Primarily Fuzzy Logic)" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.772, + 0.481, + 0.87 + ], + "angle": 0, + "content": "Rule-based decision-making, particularly fuzzy logic, has been widely adopted in multi-agent systems (MAS) due to its ability to handle uncertainty, imprecise data, and dynamic environments [117, 118, 36, 119]. Fuzzy logic enables agents to make adaptive, human-like decisions by mapping inputs to linguistic rules rather than strict mathematical models." + }, + { + "type": "text", + "bbox": [ + 0.515, + 0.133, + 0.894, + 0.872 + ], + "angle": 0, + "content": "Miki et al. [32] presented a rule-based multi-agent control algorithm that utilizes local information instead of absolute coordinates, making it more practical for real-world applications. Charaf et al. [120] introduced a rule-based multi-agent system to address coordination challenges, such as controllability and observability, in distributed testing environments. 
Yarahmadi et al. [33] reviewed the applications of multi-agent systems in Cyber-Physical Systems (CPS) and the Internet of Things (IoT), proposing a combination of learning and rule-based reasoning to improve decision-making in MAS. Marti et al. [117] presented an expert rule-based system using multi-agent technology to support traffic management during weather-related issues. Daeichian et al. [121] used fuzzy logic in combination with Q-learning and game theory to control traffic lights autonomously. Wu et al. [34] introduced a fuzzy-theoretic game framework that integrates fuzzy logic with game theory to handle uncertainty in utility values during multi-agent decision making. Nekhai et al. [118] devised a cybersecurity management model for agricultural enterprises using a multi-agent system (MAS) based on fuzzy logical reasoning. Ramezani et al. [119] applied fuzzy logic to multi-agent decision-making in soccer robot teams, combining cooperative and non-cooperative game strategies. Zhang et al. [35] introduced a new online method for optimal coordination control in multi-agent differential games, combining fuzzy logic, and adaptive dynamic programming. Ren et al. [36] presented a fuzzy logic-based approach for partner selection in multi-agent systems, emphasizing flexibility and adaptability in dynamic environments. Gu et al. [37] introduced a cooperative reinforcement learning algorithm for multi-agent systems using a leader-follower framework, modeled as a Stackelberg game. Schwartz et al. [60] introduced a multi-agent fuzzy actor-critic learning algorithm for differential games. Harmati et al. [61] proposed a game-theoretic model for coordinating multiple robots in target tracking, using a semi-cooperative Stackelberg equilibrium and a fuzzy inference system for high-level cost tuning. Khuen et al. [62] introduced an Adaptive Fuzzy Logic (AFL) approach for multi-agent systems with negotiation capabilities, focusing on resource allocation. Yan et al. [63] proposed a graphical game-based adaptive fuzzy optimal bipartite containment control scheme for high-order nonlinear multi-agent systems (MASs). Vicerra et al. [64] proposed a multi-agent robot system using a pure fuzzy logic-based artificial intelligence model. Gu et al. [65] presented a fuzzy logic-based policy gradient multi-agent reinforcement learning algorithm for leader-follower systems, where fuzzy logic" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.88, + 0.505, + 0.89 + ], + "angle": 0, + "content": "6" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.21, + 0.141, + 0.786, + 0.152 + ], + "angle": 0, + "content": "Table 1: Representative Methods in Mainstream Paradigms of Multi-Agent Cooperative Decision-Making." + }, + { + "type": "table", + "bbox": [ + 0.133, + 0.153, + 0.865, + 0.459 + ], + "angle": 0, + "content": "
Paradigm | Representative Methods and Key References
Rule-Based (Primarily Fuzzy Logic) | Miki et al. [32], Yarahmadi et al. [33], Wu et al. [34], Zhang et al. [35], Ren et al. [36], Gu et al. [37], Schwartz et al. [60], Harmati et al. [61], Khuen et al. [62], Yan et al. [63], Vicerra et al. [64], Gu et al. [65], Maruyama et al. [66], Peng et al. [67], Yang et al. [68]
Game Theory-based | Wang et al. [38], Guo et al. [39], Schwung et al. [40], Wang et al. [41], Lin et al. [42], Wang et al. [43], Wang et al. [69], Lanctot et al. [70], Guo et al. [71], Zhang et al. [72], Kong et al. [73], Wang et al. [74], Dong et al. [75], Nguyen et al. [76], Schwung et al. [40], Khan et al. [77]
Evolutionary Algorithms-based | Liu et al. [45], Xu et al. [78], Daan et al. [46], Franciszek et al. [79], Larry et al. [44], Daan et al. [47], Liu et al. [80], Yuan et al. [48], Dong et al. [81], Chen et al. [82], Zhang et al. [49]
MARL-based | Wai et al. [83], Hu et al. [84], Son et al. [53], Yu et al. [52], Rashid et al. [50], Rashid et al. [54], Sunehag et al. [85], Huang et al. [15], Xu et al. [58], Yun et al. [86], Mao et al. [87], Kraemer et al. [51], Kouzeghar et al. [88], Gao et al. [89], Liu et al. [19], Qi et al. [90], Vinyals et al. [91], Lu et al. [18], Chu et al. [92], et al. [20], Kurach et al. [55], Lv et al. [93], Radac et al. [94], Wang et al. [56], Liu et al. [95]
LLMs-based | Mordatch et al. [59], Zhang et al. [96], Xu et al. [58], Li et al. [57], Wang et al. [30], Zhao et al. [10], Hou et al. [97], Puig et al. [98, 99], Gao et al. [100], Xiao et al. [101], Wang et al. [102], Wu et al. [103], Wen et al. [104], Chen et al. [105], Liu et al. [106], Chen et al. [107], Hong et al. [108, 109], XAgent Team [110], Wang et al. [111, 112], Zheng et al. [113], Zhang et al. [114, 115], Cao et al. [116]
" + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.483, + 0.481, + 0.637 + ], + "angle": 0, + "content": "controllers act as policies. Maruyama et al. [66] extended the classical framework for reasoning about distributed knowledge, incorporating fuzzy logic to handle uncertainty and degrees of certainty within multi-agent systems. Peng et al. [67] proposed a two-layer coordination model for multi-agent systems using fuzzy reinforcement learning. Yang et al. [68] presented a multiagent reinforcement learning algorithm with fuzzy policy to address control challenges in cooperative multi-agent systems, particularly for autonomous robotic formations." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.64, + 0.481, + 0.738 + ], + "angle": 0, + "content": "Overall, fuzzy logic remains a foundational approach for rule-based decision-making in MAS, offering interpretability and robustness in uncertain environments. In the future, fuzzy logic will be further integrated with LLMs, hierarchical decision architectures, and multiagent planning, enabling more precise and adaptive decision-making in complex real-world scenarios." + }, + { + "type": "title", + "bbox": [ + 0.104, + 0.756, + 0.29, + 0.77 + ], + "angle": 0, + "content": "2.2.2. Game theory-based" + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.772, + 0.481, + 0.871 + ], + "angle": 0, + "content": "Game theory provides a structured framework for analyzing strategic interactions in multi-agent systems. It enables agents to make rational decisions in cooperative, competitive, or mixed scenarios through equilibrium-based optimization [5, 29]. Traditional methods such as Nash equilibrium and Stackelberg games form the foundation, while modern approaches" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.483, + 0.892, + 0.511 + ], + "angle": 0, + "content": "integrate reinforcement learning and Bayesian inference to enhance adaptability in dynamic environments." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.513, + 0.893, + 0.627 + ], + "angle": 0, + "content": "Wang et al. [38] provided a broad discussion on game-theoretic approaches in multi-agent systems, covering cooperative and non-cooperative scenarios. Guo et al. [39] applied game theory to multi-agent path planning, leveraging Nash equilibrium to optimize navigation and obstacle avoidance. Zhang et al. [72] developed a distributed control algorithm that ensures optimal coverage while maintaining network connectivity." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.628, + 0.894, + 0.798 + ], + "angle": 0, + "content": "Beyond fundamental decision-making, game theory has been applied in communication networks and energy systems. Wang et al. [41] utilized game-theoretic learning to enhance resource allocation in wireless networks while countering adversarial actions like jamming. Lin et al. [42] introduced potential game theory to optimize distributed energy management in microgrids, where agents autonomously coordinate power distribution. Dong et al. [75] further extended this approach using a hierarchical Stackelberg model for energy trading, balancing incentives between microgrids and individual agents." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.8, + 0.894, + 0.871 + ], + "angle": 0, + "content": "Incorporating machine learning with game theory has also led to advances in multi-agent optimization. Schwung et al. [40] combined potential game theory with reinforcement learning for adaptive production scheduling, while Wang et al. 
[74] designed a" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.88, + 0.505, + 0.89 + ], + "angle": 0, + "content": "7" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.103, + 0.133, + 0.481, + 0.218 + ], + "angle": 0, + "content": "Nash equilibrium-based fault-tolerant control strategy for multi-agent systems. Additionally, game-theoretic methods have been explored for distributed computing, as shown by Khan et al. [77], who developed a replica placement strategy to minimize data access delays in distributed systems." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.219, + 0.481, + 0.318 + ], + "angle": 0, + "content": "Overall, game theory remains a cornerstone of multiagent decision-making, offering well-defined theoretical guarantees while enabling dynamic adaptation through hybrid approaches. Future research will likely focus on integrating game theory with deep learning and large language models to enhance strategic reasoning in high-dimensional, uncertain environments." + }, + { + "type": "title", + "bbox": [ + 0.104, + 0.34, + 0.373, + 0.355 + ], + "angle": 0, + "content": "2.2.3. Evolutionary Algorithms-based" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.357, + 0.481, + 0.469 + ], + "angle": 0, + "content": "Evolutionary algorithms (EAs) provide a bio-inspired approach to optimization in multi-agent systems by leveraging principles such as natural selection, mutation, and recombination [79, 47, 81]. By allowing agents to evolve their strategies iteratively, EAs are particularly effective for problems requiring continuous learning, large-scale coordination, and self-organized behavior." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.471, + 0.481, + 0.612 + ], + "angle": 0, + "content": "Liu et al. [45] introduced the Multi-Agent Genetic Algorithm (MAGA), where agents interact through competition and cooperation to optimize global solutions. Xu et al. [78] extended this idea to hardware-based multi-agent systems, using nanoclusters as physical agents to achieve large-scale parallel computation. Daan et al. [46] explored the role of evolutionary strategies in dynamic environments such as financial markets, smart grids, and robotics, demonstrating how adaptive algorithms can handle real-world uncertainties." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.614, + 0.481, + 0.77 + ], + "angle": 0, + "content": "Franciszek et al. [79] proposed a self-optimization model integrating cellular automata and game theory, simulating competitive evolutionary interactions among agents. Larry et al. [44] analyzed the trade-offs between mutation and recombination, showing that mutation can sometimes outperform traditional recombination strategies in evolutionary computing. To further enhance adaptability, Daan et al. [47] introduced Deep Neuroevolution (DNE), applying coevolutionary techniques to complex multi-agent scenarios, including Atari games." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.772, + 0.481, + 0.871 + ], + "angle": 0, + "content": "Recent studies have focused on scaling evolutionary learning to larger agent populations. Liu et al. [80] developed Evolutionary Reinforcement Learning (ERL), a scalable approach that partitions learning into multiple stages, ensuring better adaptability in multiagent environments. Yuan et al. 
[48] introduced EvoAgent, a framework that extends LLMs-based au" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.133, + 0.892, + 0.16 + ], + "angle": 0, + "content": "tonomous agents into multi-agent systems using evolutionary techniques like mutation and selection." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.162, + 0.893, + 0.331 + ], + "angle": 0, + "content": "Evolutionary game theory has also been explored to improve cooperative behavior. Dong et al. [81] designed a three-strategy decision model, where agents adopt conservative or adaptive strategies based on their interactions with neighbors, fostering long-term cooperation. Chen et al. [82] proposed a kinetic decision-making model grounded in rarefied gas dynamics, offering a new perspective on agent evolution using the Boltzmann equation. Zhang et al. [49] applied evolutionary game theory to policy optimization, analyzing cooperation strategies among governments, enterprises, and farmers in agricultural water conservation projects." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.332, + 0.892, + 0.431 + ], + "angle": 0, + "content": "Overall, evolutionary algorithms provide a robust framework for decentralized decision-making, allowing agents to self-improve and adapt in uncertain environments. In the future, evolutionary algorithms will be further integrated with deep learning, hierarchical evolution, and large-scale multi-agent coordination, enabling more adaptive, autonomous MAS." + }, + { + "type": "title", + "bbox": [ + 0.515, + 0.444, + 0.807, + 0.458 + ], + "angle": 0, + "content": "2.2.4. MARL-based Multi-Agent Systems" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.459, + 0.892, + 0.559 + ], + "angle": 0, + "content": "Before introducing the MARL-based multi-agent systems (MAS), we provide a detailed discussion in Appendix A on the key technological comparisons and methodological principles of both DRL-based single-agent systems and MARL-based MAS. This helps readers build the necessary background knowledge for better understanding the following discussions." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.56, + 0.893, + 0.742 + ], + "angle": 0, + "content": "Multi-Agent Reinforcement Learning offers a structured framework to tackle decision-making in MAS, where autonomous agents interact with each other and a shared environment. The MAS research in MARL is broadly divided into three paradigms: Centralized Training with Centralized Execution (CTCE) [122, 123], Decentralized Training with Decentralized Execution (DTDE) [122], and Centralized Training with Decentralized Execution (CTDE) [51, 124]. Each paradigm is designed to address specific challenges such as coordination, scalability, and policy optimization, providing tailored solutions for diverse scenarios." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.744, + 0.893, + 0.871 + ], + "angle": 0, + "content": "Centralized Training with Centralized Execution The CTCE paradigm [125, 123] relies on a central controller that governs all agents by aggregating their observations, actions, and rewards to make joint decisions. While this paradigm enables high levels of coordination, its scalability is limited in large-scale systems. Multi-Agent DQN (MADQN) [126, 1, 2] is a representative method, employing parameter-sharing mechanisms to handle cooperative tasks effectively. 
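A minimal sketch of the fully centralized control loop described above is given below; the linear joint-action value function, the tensor sizes, and the greedy decision rule are illustrative placeholders rather than the MADQN implementation.

```python
import numpy as np

# Sketch of CTCE: one central controller maps the concatenated observations of
# all agents to a joint action. Sizes and the random linear "joint-Q" are illustrative.
n_agents, obs_dim, n_actions = 3, 4, 2
rng = np.random.default_rng(0)
W = rng.normal(size=(n_agents * obs_dim, n_actions ** n_agents))  # joint-action values

def central_controller(observations):
    """observations: list of n_agents vectors, each of length obs_dim."""
    joint_obs = np.concatenate(observations)   # central information fusion
    q_joint = joint_obs @ W                    # one value per joint action
    flat = int(np.argmax(q_joint))             # greedy joint decision
    # Decode the flat index into one action per agent (base-n_actions digits).
    return [(flat // n_actions ** i) % n_actions for i in range(n_agents)]

obs = [rng.normal(size=obs_dim) for _ in range(n_agents)]
print(central_controller(obs))  # e.g. [1, 0, 1]: one action per agent
```

The joint-action table already hints at the scalability issue noted above: its size grows exponentially with the number of agents.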
However," + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.88, + 0.505, + 0.89 + ], + "angle": 0, + "content": "8" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.104, + 0.133, + 0.481, + 0.162 + ], + "angle": 0, + "content": "its reliance on centralized control restricts its applicability in dynamic environments with numerous agents." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.168, + 0.483, + 0.368 + ], + "angle": 0, + "content": "Decentralized Training with Decentralized Execution The DTDE paradigm [122] emphasizes independent learning and execution, where each agent interacts with the environment individually and updates its policy based solely on local observations and rewards. This paradigm excels in scalability and robustness, especially in scenarios with limited communication. Notable methods include Independent Q-Learning (IQL) [127, 128] and Decentralized REINFORCE [129], which allow agents to learn autonomously. Despite its advantages, DTDE faces challenges such as learning nonstationarity, where the environment changes as other agents adapt, and difficulty in addressing the credit assignment problem in cooperative settings." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.374, + 0.483, + 0.63 + ], + "angle": 0, + "content": "Centralized Training with Decentralized Execution The CTDE paradigm [51, 130, 131] combines the strengths of centralized training and decentralized execution, making it the most prominent paradigm in MARL research. During training, a central controller aggregates information from all agents to optimize their policies, but during execution, each agent operates independently based on its own observations. CTDE addresses key challenges like non-stationarity and scalability, with methods such as Value Decomposition Networks (VDN) [85] and QMIX [50, 52] for value-based learning, Multi-Agent Deep Deterministic Policy Gradient (MADDPG) [132] for actor-critic frameworks, and Multi-Agent Proximal Policy Optimization (MAPPO) [133] for policy gradient optimization. These approaches are widely applied in complex environments like StarCraft II [134, 135] and the Multi-Agent Particle Environment (MPE) [136, 137]." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.637, + 0.483, + 0.765 + ], + "angle": 0, + "content": "Communication-based MARL Algorithms Additionally, communication-based MARL algorithms have emerged to enhance coordination by enabling agents to share critical information during training and execution. Examples include Attentional Communication (ATOC) [138] and Targeted Multi-Agent Communication (TarMAC) [139], which use advanced mechanisms to improve the efficiency and effectiveness of inter-agent communication in cooperative tasks." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.772, + 0.483, + 0.871 + ], + "angle": 0, + "content": "By structuring MARL methods within these paradigms, researchers provide a clear framework for addressing the diverse challenges of multi-agent decision-making. From autonomous driving fleets to resource allocation systems, MARL continues to push the boundaries of what distributed intelligent systems can achieve [124]." + }, + { + "type": "title", + "bbox": [ + 0.515, + 0.133, + 0.803, + 0.147 + ], + "angle": 0, + "content": "2.2.5. 
LLMs-based Multi-Agent Systems" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.15, + 0.894, + 0.421 + ], + "angle": 0, + "content": "Although LLMs like GPT [140, 141, 142], Llama [143, 144], and Gemini [145] support very long input contexts, their ability to understand complex inputs still varies. In this context, multi-agent collaboration optimizes task execution through role assignment, enabling better performance through collaboration among multiple agents compared to a single agent. Each agent has an independent workflow, memory, and can seek help from other agents when necessary. LLMs-based Multi-Agent Systems represent a relatively new multi-agent decision-making model that leverages the powerful capabilities of language models, to enhance communication and collaboration between autonomous agents. In an LLMs-based multi-agent system, agents communicate via natural language or symbolic representations, breaking down complex tasks into smaller, more manageable subtasks. One important feature of LLMs-based systems is the hierarchical organization of agents, typically consisting of two levels [30, 116]:" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.423, + 0.893, + 0.466 + ], + "angle": 0, + "content": "1) Global planning agents, responsible for high-level decisions such as task decomposition, resource allocation, and overall strategy management." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.468, + 0.894, + 0.596 + ], + "angle": 0, + "content": "2) Local execution agents, which are responsible for executing specific subtasks and providing feedback to the global planning agent. These agents are generally more focused on local tasks but communicate progress and challenges with the global level for adjustments. This decomposition makes distributed problem solving possible, with agents sharing information, strategies, and goals through language, thus advancing task execution together." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.599, + 0.894, + 0.784 + ], + "angle": 0, + "content": "For example, frameworks like AutoGen [103, 146, 107], Crew AI [114, 115], and LangGraph [111, 112] provide rich tools for building multi-agent solutions, supporting efficient cooperation and interaction between agents. Through these frameworks, developers can build virtual teams that leverage the strengths of different agents in distributed tasks. Additionally, LLMs-based multi-agent systems possess adaptive re-planning capabilities, enabling them to adjust in dynamic environments. When agents encounter changes or new information, they can quickly update strategies or re-plan tasks using language models, ensuring the system remains aligned with changing goals." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.786, + 0.894, + 0.871 + ], + "angle": 0, + "content": "Firstly, LLMs-based multi-agent environments have emerged as pivotal platforms for advancing research in multi-agent collaboration, reasoning, and task execution. For instance, environments such as Three-DWorld Multi-Agent Transport (TDW-MAT) [147, 148], Communicative Watch-And-Help (C-WAH) [99]," + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.88, + 0.505, + 0.89 + ], + "angle": 0, + "content": "9" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.102, + 0.132, + 0.482, + 0.345 + ], + "angle": 0, + "content": "Cuisineworld [149], and AgentScope [100] offer diverse settings for evaluating and enhancing multi-agent systems in various contexts, from household chores to gaming interactions and beyond. 
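The two-level organization outlined above, with a global planning agent that decomposes a task and local execution agents that carry out subtasks and report back, can be sketched as follows; call_llm is a hypothetical placeholder for any chat-completion backend, and the prompts and parsing are illustrative.

```python
# Minimal sketch of a two-level LLM-based multi-agent workflow.
# `call_llm` is a stand-in for an arbitrary chat-completion backend.
def call_llm(prompt: str) -> str:
    return f"[model output for: {prompt[:40]}...]"   # stub for illustration only

def global_planner(task: str) -> list[str]:
    plan = call_llm(f"Decompose the task into numbered subtasks: {task}")
    return [line for line in plan.splitlines() if line.strip()]  # naive parsing

def local_executor(subtask: str) -> str:
    return call_llm(f"Carry out this subtask and report the result: {subtask}")

def run(task: str) -> list[str]:
    reports = [local_executor(s) for s in global_planner(task)]
    # In a full system the planner would review the reports and re-plan if needed.
    return reports

print(run("Prepare a short survey of MARL communication methods"))
```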
For instance, MindAgent [149] is a novel infrastructure for evaluating planning and coordination capabilities in gaming interactions, leveraging large foundation models (LFMs) to coordinate multi-agent systems, collaborate with human players. Communicative Watch-And-Help (C-WAH) [98, 99] is a realistic multi-agent simulation environment and an extension of the Watch-And-Help Challenge platform, VirtualHome-Social. AgentScope [100] is a robust and flexible multi-agent platform designed to empower developers in building advanced multi-agent systems by leveraging the potential of LLMs." + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.346, + 0.483, + 0.871 + ], + "angle": 0, + "content": "Meanwhile, LLMs-based multi-agent systems have broad applications and great prospects [30, 116]. They can collaborate in robotic teams to perform complex tasks, such as product assembly or joint research, ensuring seamless interaction and cooperation [58, 59, 57]. In autonomous driving, LLMs help vehicles communicate, sharing real-time data and navigation strategies to achieve coordinated actions. Moreover, LLMs can support agents (such as drones) in disaster response, transmitting critical information to help systems efficiently respond to crises. Wu et al. [103] proposed AutoGen, an open-source framework for developing next-generation LLM applications through multi-agent conversations, allowing customizable agent interactions and integration of LLMs, human inputs, and tools. Xiao et al. [101] introduced Chain-of-Experts (CoE), a multi-agent framework that enhances reasoning in complex operations research (OR) problems using LLMs, with domain-specific roles and a conductor for guidance. Chen et al. [105] presented AgentVerse, a multi-agent framework inspired by human group dynamics, dynamically adjusting agent roles and composition to enhance complex task-solving across various domains. Chen et al. [107] developed AutoAgents, a framework that adaptively generates and coordinates multiple specialized agents for efficient task completion. Liu et al. [106] proposed Dynamic LLM-Agent Network (DyLAN), a framework that enhances LLM-agent collaboration through dynamic interactions based on task requirements. Zhang et al. [96] introduced CoELA, a Cooperative Embodied Language Agent framework that leverages LLMs to enhance multi-agent cooperation in complex, decentralized environments. Hong et al. [108, 109] proposed MetaGPT, a meta-programming framework that enhances LLMs-based multi-agent system collaboration using Standard Operating Procedures (SOPs). XAgent Team [110] developed XAgent, an" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.133, + 0.895, + 0.417 + ], + "angle": 0, + "content": "open-source, LLM-driven autonomous agent framework for solving complex tasks using a dual-loop architecture for task planning and execution. Zheng et al. [113] introduced PlanAgent, a closed-loop motion planning framework for autonomous driving using multi-modal LLMs to generate hierarchical driving commands. Wang et al. [111, 112] developed LangGraph, a library for building stateful, multi-actor applications with LLMs, offering fine-grained control over workflows and state management. Zhang et al. [114, 115] introduced CrewAI, an open-source framework for coordinating AI agents in role-playing and autonomous operations, with a modular design for efficient collaboration. Hou et al. [97] proposed CoAct, a hierarchical multi-agent system leveraging LLMs for collaborative task execution. 
It features a global planning agent for task decomposition and strategy management, and a local execution agent for subtask implementation, feedback collection, and adaptive replanning, ensuring alignment with overarching goals." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.417, + 0.895, + 0.601 + ], + "angle": 0, + "content": "Despite the strong capabilities of LLMs in small to medium-sized multi-agent systems, scalability remains an open issue, particularly in maintaining coherent communication between large numbers of agents in large environments. As the number of agents increases, the complexity of coordinating their behaviors through language also intensifies. Finding a balance between agent autonomy and effective collaboration is a significant challenge. Additionally, LLMs are often seen as black-box models, meaning understanding the reasoning process behind an agent's decision-making can be difficult. The lack of transparency poses challenges for trust and debugging." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.601, + 0.894, + 0.716 + ], + "angle": 0, + "content": "In summary, LLMs-based multi-agent systems hold great potential in a variety of applications, offering an advanced way to model and solve complex decision-making problems that require high levels of coordination, adaptability, and communication between agents. By optimizing task decomposition, collaboration, and feedback mechanisms, LLMs bring unprecedented efficiency and flexibility to multi-agent systems." + }, + { + "type": "title", + "bbox": [ + 0.515, + 0.727, + 0.892, + 0.755 + ], + "angle": 0, + "content": "2.3. MARL-based Multi-Agent Decision-Making Tax-onomies" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.758, + 0.895, + 0.871 + ], + "angle": 0, + "content": "In multi-agent systems, where multiple autonomous agents interact with a shared environment and often with each other, the complexity of decision-making increases significantly. To achieve optimal performance, agents need to learn not only how to act individually but also how to coordinate with others. One of the central challenges in MARL-based multi-agent systems is determining how much information should be" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.879, + 0.51, + 0.891 + ], + "angle": 0, + "content": "10" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.103, + 0.133, + 0.482, + 0.248 + ], + "angle": 0, + "content": "shared among agents during different phases of learning and deployment. The MARL research is typically structured into three primary paradigms: CTCE [150, 151, 152, 126, 123], DTDE [122, 153], and CTDE [124, 130, 51]. As illustrated in Figure 3, we will next explain the principles and differences of the three methods in conjunction with this conceptual framework diagram." + }, + { + "type": "title", + "bbox": [ + 0.104, + 0.269, + 0.482, + 0.297 + ], + "angle": 0, + "content": "2.3.1. Centralized Training with Decentralized Execution (CTDE)" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.299, + 0.482, + 0.4 + ], + "angle": 0, + "content": "As shown in the left of Figure 3, CTDE is a hybrid MARL approach that combines the strengths of both centralized and decentralized systems [124]. In CTDE, each agent possesses its own policy network, which is trained under the guidance of a central controller. This approach is characterized by a two-phase process: centralized training followed by decentralized execution." 
+ }, + { + "type": "text", + "bbox": [ + 0.117, + 0.414, + 0.482, + 0.512 + ], + "angle": 0, + "content": "1. Centralized Training (Phase 1): During the training phase, the central controller collects data from all agents, including their observations, actions, and rewards. This centralized data aggregation allows the controller to oversee the learning process and facilitate the training of each agent's policy network." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.516, + 0.482, + 0.602 + ], + "angle": 0, + "content": "2. Decentralized Execution (Phase 2): Once the training is complete, the central controller's involvement ceases, and each agent operates independently. At execution, agents make decisions based on their own observations using their trained policy networks." + }, + { + "type": "list", + "bbox": [ + 0.117, + 0.414, + 0.482, + 0.602 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.616, + 0.483, + 0.871 + ], + "angle": 0, + "content": "In some communication-constrained scenarios, agents often cannot share or fully share their observations of the environment. Instead, they must make decisions independently based on their own local observations and policies, which limits the applicability of fully centralized methods. To overcome this challenge, Kraemer et al. [124, 51] proposed the CTDE learning paradigm. The CTDE agents have access to global environmental state information and the observations of other agents during the training phase, allowing them to learn a joint policy together. However, during the execution phase, each agent relies solely on its own observations and the trained policy to make independent decisions. It combines the advantages of fully decentralized and fully centralized methods, effectively mitigating issues such as learning non-stationarity and the curse of dimensionality, making it the dominant paradigm in current MARL solutions." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.133, + 0.894, + 0.218 + ], + "angle": 0, + "content": "Under CTDE, MARL algorithms can primarily be categorized into three types based on their technical implementations: value function decomposition-based algorithms, actor-critic-based algorithms, and algorithms based on policy gradient methods, such as proximal policy optimization (PPO)." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.218, + 0.893, + 0.488 + ], + "angle": 0, + "content": "1. Value Decomposition-based Algorithms Value decomposition-based algorithms mainly address the challenge of estimating the joint state-action value function (Q-value) in multi-agent systems, which is difficult due to the high dimensionality of the joint action space. Instead of directly estimating this joint value function, these algorithms decompose it into more manageable individual state-action value functions (Q-value) for each agent. During execution, each agent selects its action based on its own value function. In training, the joint value function is computed from individual value functions, and the temporal difference error of the joint value guides the learning of the individual functions. A key principle these algorithms must satisfy is the Individual-Global-Max (IGM) principle, ensuring that the actions maximizing the joint value are consistent with those maximizing individual values. Different algorithms use various methods to approximate or satisfy the IGM principle." 
+ }, + { + "type": "text", + "bbox": [ + 0.513, + 0.488, + 0.894, + 0.7 + ], + "angle": 0, + "content": "Value Decomposition Networks (VDN) [85] is one of the earliest value decomposition-based algorithms in CTDE-based MARL models. VDN simplifies the estimation of the joint state-action value function by assuming that it can be represented as the sum of the individual state-action value functions of all agents. It means that the joint value function is obtained by simply adding up the individual value functions, which does not take into account the varying contributions of each agent's Q-value. However, the assumption made by VDN is a sufficient but not necessary condition for satisfying the IGM principle, which can limit its applicability. Additionally, VDN does not utilize global state information during training, further restricting its effectiveness in more complex environments." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.7, + 0.894, + 0.871 + ], + "angle": 0, + "content": "To address this issue, Rashid et al. [50] proposed the QMIX algorithm within the CTDE paradigm. QMIX assumes a monotonic nonlinear relationship between the joint state-action value function and the individual state-action value functions of agents. To implement this, QMIX introduces a mixing network that computes the joint state-action value function based on the individual Q-values of all agents. This mixing network is designed with non-negative parameters to ensure that the monotonicity assumption is met. QMIX has been successfully applied in various scenarios and is considered one of the most successful value decomposition al" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.88, + 0.508, + 0.89 + ], + "angle": 0, + "content": "11" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.109, + 0.14, + 0.361, + 0.294 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.37, + 0.14, + 0.616, + 0.293 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.62, + 0.14, + 0.891, + 0.293 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.309, + 0.893, + 0.334 + ], + "angle": 0, + "content": "Figure 3: The paradigms visualization of CTDE (left), DTDE (centre), and CTCE (right), consisting of three crucial elements: agent (i.e., algorithm or model), environment, central controller (Optional)." + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.357, + 0.481, + 0.485 + ], + "angle": 0, + "content": "gorithms to date. By enforcing a monotonic relationship between the joint action Q-values and individual Q-values, QMIX simplifies the policy decomposition process, facilitating decentralized decision-making. However, the monotonicity assumption, while sufficient for ensuring the Individual-Global-Max (IGM) principle, is not a necessary condition. This limitation restricts the algorithm's applicability in situations where an agent's optimal action depends on the actions of other agents." + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.486, + 0.483, + 0.785 + ], + "angle": 0, + "content": "Weighted QMIX [54] builds upon QMIX and addresses this limitation by introducing a novel weighting mechanism during the projection of Q-values, which is widely used for cooperative MARL scenarios. 
In QMIX, the projection of Q-learning targets into the representable space is done with equal weighting across all joint actions, which can lead to suboptimal policy representations, even if the true optimal Q-values \\((\\mathbf{Q}^{*})\\) are known. To overcome this, Weighted QMIX introduces two weighting schemes-Centrally-Weighted (CW) QMIX and Optimistically-Weighted (OW) QMIX-that place greater emphasis on the better joint actions during this projection process. The weighting schemes ensure that the correct maximal action is recovered for any set of joint action Q-values, effectively improving the algorithm's ability to learn optimal policies. These schemes in Weighted QMIX enhances the representational capacity of QMIX, demonstrating improved results on both predator-prey scenarios of Multi-Agent Particle Environment (MPE) [137] and the challenging StarCraft II benchmarks [154, 91, 134]." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.786, + 0.483, + 0.871 + ], + "angle": 0, + "content": "Since then, numerous methods building on value function decomposition have been developed. QPLEX [56] introduces a novel duplex dueling network architecture for multi-agent Q-learning, designed to nonlinearly decompose the joint state-action value function while embedding the IGM principle directly into the" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.357, + 0.893, + 0.57 + ], + "angle": 0, + "content": "network structure. Fast-QMIX [52] enhances the original Q-MIX by dynamically assigning virtual weighted Q-values with an additional network, improving convergence speed, stability, and overall performance in cooperative multi-agent scenarios. QTRAN [53] introduces a more flexible factorization method that overcomes the structural limitations of Q-MIX, where the joint Q-value is constrained to be a monotonic function of the individual Q-values, thereby imposing a specific structural form on the factorization. Specifically, QTRAN introduces a necessary and sufficient condition for the IGM principle and incorporates two additional loss terms into the loss function to constrain the training of individual Q-value functions, ensuring they satisfy this IGM principle." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.572, + 0.895, + 0.799 + ], + "angle": 0, + "content": "2. Actor-Critic-based Algorithms: Actor-Critic-based algorithms [132, 60, 155] represent a foundational class of methods within the CTDE paradigm, offering a flexible and effective approach for tackling the challenges of multi-agent environments. These algorithms combine the strengths of policy optimization (actor) with value estimation (critic), allowing agents to learn robust and adaptive strategies in both cooperative and competitive settings. By leveraging a centralized critic during training, Actor-Critic-based methods [136, 137, 92] address key issues such as environmental non-stationarity and credit assignment, enabling effective policy optimization in dynamic and complex multiagent scenarios. Below, we discuss several prominent Actor-Critic-based approaches and their contributions to advancing MARL." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.8, + 0.894, + 0.871 + ], + "angle": 0, + "content": "MADDPG [132] is a typical Actor-Critic-based CTDE approach specifically designed to address the challenges of multi-agent environments, where agents engage in both cooperative and competitive interactions. 
Traditional reinforcement learning algorithms, such as" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.88, + 0.509, + 0.89 + ], + "angle": 0, + "content": "12" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.107, + 0.133, + 0.48, + 0.529 + ], + "angle": 0, + "content": "Q-learning and policy gradient methods, struggle in multi-agent settings due to issues like non-stationarity where the environment constantly changes as other agents learn-and increased variance with the growing number of agents. MADDPG adapts the actor-critic framework by incorporating a centralized critic during training, which has access to the actions and observations of all agents. This centralized critic helps mitigate the non-stationarity problem by learning a more stable value function that considers the joint action space. During execution, however, each agent independently follows its policy (actor) based on local observations, enabling decentralized decision-making. It allows each agent to successfully learn and execute complex coordination strategies, outperforming existing methods in both cooperative and competitive multiagent environments. To address the computational challenges of continuous action spaces, Li et al. [156] extend the MADDPG algorithm to Multi-Agent Mutual Information Maximization Deep Deterministic Policy Gradient (M3DDPG) by incorporating a minimax approach to enhance robustness in multi-agent environments. M3DDPG introduce Multi-Agent Adversarial Learning (MAAL), which efficiently solves the minimax formulation, ensuring agents can generalize even when opponents' policies change and leading to significant improvements over existing baselines in mixed cooperative-competitive scenarios." + }, + { + "type": "text", + "bbox": [ + 0.107, + 0.531, + 0.48, + 0.743 + ], + "angle": 0, + "content": "Counterfactual Multi-Agent Policy Gradient (COMA) [157] is a cooperative algorithm based on the Actor-Critic framework that uses centralized learning to address the credit assignment problem in multi-agent settings. COMA employs a centralized critic to compute advantage functions for each agent, using counterfactual baselines to reduce policy dependencies among agents and improve learning efficiency. Each agent has its own policy network, but the shared centralized critic evaluates joint Q-values by considering the collective state and action information of all agents. This approach minimizes the negative impacts of policy dependencies and allows for a more comprehensive assessment of each agent's behavior, enhancing overall policy optimization." + }, + { + "type": "text", + "bbox": [ + 0.107, + 0.744, + 0.48, + 0.87 + ], + "angle": 0, + "content": "3. Proximal Policy Optimization-based Algorithms: Proximal Policy Optimization (PPO) [158] is a widely used CTDE reinforcement learning algorithm that has been adapted and extended to address challenges in MARL. Within the CTDE paradigm, PPO and its multi-agent variants have shown remarkable effectiveness in balancing policy optimization efficiency and stability. PPO was introduced by Schulman et al. [158] as an efficient policy gradient algorithm designed to im" + }, + { + "type": "text", + "bbox": [ + 0.519, + 0.133, + 0.892, + 0.259 + ], + "angle": 0, + "content": "prove upon the trust region policy optimization (TRPO) framework [159]. PPO employs a clipped surrogate objective function that simplifies the trust region constraint in TRPO, allowing for stable updates without overly restrictive computational overhead. 
The key innovation of PPO lies in its ability to control the magnitude of policy updates through the clipping mechanism, which ensures that policies do not deviate excessively from their previous versions." + }, + { + "type": "text", + "bbox": [ + 0.519, + 0.261, + 0.892, + 0.487 + ], + "angle": 0, + "content": "In MARL, Multi-Agent PPO (MAPPO) [133] extends PPO to the centralized critic paradigm. MAPPO uses a centralized value function (critic) that evaluates joint states and actions during training, while agents execute independently using their decentralized policies. MAPPO has demonstrated superior performance in various cooperative and competitive multi-agent environments, such as the StarCraft II [134, 135] and Multi-Agent Particle Environment (MPE) [59, 136, 137] benchmarks. The centralized critic allows for improved credit assignment and non-stationarity handling during training, while the decentralized execution ensures scalability. While MAPPO leverages parameter sharing among agents, this assumption may not hold in heterogeneous-agent systems where agents differ in capabilities, objectives, or action spaces." + }, + { + "type": "text", + "bbox": [ + 0.519, + 0.488, + 0.892, + 0.742 + ], + "angle": 0, + "content": "To address this, Kuba et al. [160] proposed Heterogeneous-Agent Trust Region Policy Optimization (HATRPO) and Heterogeneous-Agent Proximal Policy Optimization (HAPPO). These algorithms remove the parameter-sharing assumption, allowing for individualized policy networks for each agent. HATRPO builds upon TRPO by introducing a sequential update scheme, where only one agent updates its policy at a time while the policies of other agents remain fixed. This approach ensures monotonic improvement in joint policies, as it approximates the Nash equilibrium under certain conditions, such as full observability and deterministic environments. HAPPO extends PPO in a similar vein, replacing parameter sharing with individualized policies. Like HATRPO, HAPPO employs a sequential update mechanism, but it retains the computational efficiency and practical simplicity of PPO's clipped objective function." + }, + { + "type": "text", + "bbox": [ + 0.519, + 0.743, + 0.892, + 0.87 + ], + "angle": 0, + "content": "Both HATRPO and HAPPO utilize a sequential update process where one agent updates its policy while others remain fixed. This prevents conflicts during policy optimization and ensures theoretical convergence to a stable joint policy. Moreover, HATRPO and HAPPO provide monotonic improvement guarantees under specific conditions. By removing the parameter-sharing constraint, these algorithms enable agents to learn tailored policies that account for their unique roles and" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.88, + 0.508, + 0.89 + ], + "angle": 0, + "content": "13" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.108, + 0.132, + 0.484, + 0.189 + ], + "angle": 0, + "content": "capabilities. Both algorithms perform competitively in benchmark tasks, demonstrating their ability to scale to high-dimensional state-action spaces while maintaining robust coordination among agents." + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.19, + 0.484, + 0.303 + ], + "angle": 0, + "content": "PPO-based algorithms, including MAPPO [133], HATRPO [160], and HAPPO [160], have revolutionized multi-agent reinforcement learning by combining the stability of PPO with the coordination benefits of centralized critics. 
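The clipped surrogate objective that these multi-agent variants inherit from PPO can be sketched compactly; the probability ratios and advantage estimates below are random placeholders standing in for quantities produced by the (possibly centralized) critic.

```python
import torch

# Sketch of the PPO clipped surrogate objective reused by MAPPO/HAPPO-style methods.
# `ratio` stands in for pi_new(a|s) / pi_old(a|s) and `adv` for the advantage estimate.
def ppo_clip_loss(ratio, adv, eps=0.2):
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * adv
    # Taking the element-wise minimum keeps updates conservative when the ratio drifts.
    return -torch.min(unclipped, clipped).mean()

ratio = torch.exp(torch.randn(32) * 0.1)   # placeholder ratios near 1
adv = torch.randn(32)                      # placeholder advantages
print(ppo_clip_loss(ratio, adv).item())
```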
These algorithms have proven effective across a wide array of cooperative and competitive MARL tasks, offering strong performance and scalability." + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.304, + 0.484, + 0.473 + ], + "angle": 0, + "content": "3. Other Categories of Algorithms within the CTDE Paradigm: In addition to the well-established categories of Value Decomposition-based, Actor-Critic-based, and Proximal Policy Optimization (PPO)-based algorithms, the MARL research has seen significant advancements through innovative optimizations and enhancements within CTDE paradigm that are not confined to these traditional classifications. These approaches aim to address the inherent challenges of multi-agent environments, such as non-stationarity and limited communication, to improve overall cooperation and policy learning efficiency." + }, + { + "type": "text", + "bbox": [ + 0.108, + 0.475, + 0.484, + 0.87 + ], + "angle": 0, + "content": "For example, Centralized Advising and Decentralized Pruning (CADP) is a novel framework introduced by Zhou et al. [131] to address limitations in the CTDE paradigm. CADP enhances the training process by allowing agents to explicitly communicate and exchange advice during centralized training, thus improving joint-policy exploration. To maintain decentralized execution, CADP incorporates a smooth model pruning mechanism that gradually restricts agent communication without compromising their cooperative capabilities, demonstrating its superior performance on multiagent StarCraft II SMAC and Google Research Football benchmarks. CommNet [161] introduces a neural model where multiple agents learn to communicate continuously and collaboratively through a shared communication channel, optimizing their performance on fully cooperative tasks. The method allows agents to develop their own communication protocols during training, leading to improved coordination and task-solving capabilities. Mao et al. [87] introduced a novel Meta-MARL framework by integrating game-theoretical meta-learning with MARL algorithms using the CTDE's framework, such as the Actor-Critic-based COMA [157]. This framework offers initialization-dependent convergence guarantees and significantly improves convergence rates by addressing related tasks collectively. Yun et al. [86] proposed a novel approach called Quantum Meta Multi-Agent Reinforce" + }, + { + "type": "text", + "bbox": [ + 0.519, + 0.133, + 0.894, + 0.331 + ], + "angle": 0, + "content": "ment Learning (QM2ARL), achieving high rewards, fast convergence, and quick adaptation in dynamic environments. QM2ARL leverages the unique dual-dimensional trainability of Quantum Neural Networks (QNNs) to enhance MARL. Liu et al. [95] proposed the Learning before Interaction (LBI) framework, which integrates a language-guided simulator into the multiagent reinforcement learning pipeline to address complex decision-making problems. By leveraging a generative world model with dynamics and reward components, LBI generates trial-and-error experiences to improve policy learning, demonstrating superior performance and generalization on the StarCraft Multi-Agent Challenge benchmark [134, 135]." + }, + { + "type": "title", + "bbox": [ + 0.519, + 0.345, + 0.894, + 0.372 + ], + "angle": 0, + "content": "2.3.2. 
Decentralized Training with Decentralized Execution (DTDE)" + }, + { + "type": "text", + "bbox": [ + 0.519, + 0.374, + 0.894, + 0.516 + ], + "angle": 0, + "content": "As shown in the centre of Figure 3, DTDE represents a fully decentralized mechanism where each agent interacts independently with the environment and updates its own policy based on its own observations and rewards [122]. In this framework, each agent trains and operates completely independently, relying only on its own observations and rewards to update its strategy. DTDE is particularly suited for environments with limited communication or no global coordination, offering strong scalability and robustness [123]." + }, + { + "type": "text", + "bbox": [ + 0.519, + 0.517, + 0.894, + 0.743 + ], + "angle": 0, + "content": "The core idea behind DTDE is the independence of agents [122]. Each agent interacts with its environment and learns without requiring information from others. This makes DTDE scalable, but it also introduces challenges such as non-stationarity, where the environment appears to change as other agents adapt their strategies. This characteristic makes DTDE a valuable and challenging area of research. The theoretical foundation of DTDE is often based on Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs). As described by Amato et al. [122, 124], a Dec-POMDP models a decentralized decision-making environment where agents operate independently with limited observations while aiming to maximize a shared reward. The decentralized nature of DTDE requires each agent to learn optimal strategies based on local information only." + }, + { + "type": "text", + "bbox": [ + 0.519, + 0.744, + 0.894, + 0.87 + ], + "angle": 0, + "content": "First and foremost, one of the earliest DTDE approaches is Independent Q-Learning (IQL) by et al. [127]. Here, each agent applies Q-learning independently, maintaining its own Q-function and updating it based on local observations and rewards. However, IQL faces several challenges, such as the non-stationary nature of the environment caused by other agents learning simultaneously. It also struggles with credit assignment, where it is hard to determine how an individual agent" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.88, + 0.51, + 0.89 + ], + "angle": 0, + "content": "14" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.104, + 0.133, + 0.481, + 0.162 + ], + "angle": 0, + "content": "contributes to the team's success. To address these issues, several extensions of IQL have been proposed:" + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.171, + 0.481, + 0.255 + ], + "angle": 0, + "content": "- Distributed Q-Learning [128] optimistically assumes other agents always take optimal actions, focusing on learning from high-reward interactions. While effective in deterministic settings, it can be overly optimistic in environments with randomness." + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.265, + 0.481, + 0.336 + ], + "angle": 0, + "content": "- Hysteresic Q-Learning [162] By introducing two learning rates—one for positive updates and another, smaller rate for negative updates—hysteretic Q-learning balances optimism with robustness in stochastic environments." + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.345, + 0.481, + 0.431 + ], + "angle": 0, + "content": "- Lenient Q-Learning [163] dynamically adjusts how lenient the agent is in updating its values, depending on how frequently specific state-action pairs are encountered. 
It allows for more exploration in the early stages of learning while focusing on optimization later." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.44, + 0.481, + 0.681 + ], + "angle": 0, + "content": "As MARL problems become more complex, DTDE methods have been extended to deep learning. Deep Q-Networks [150, 151, 152, 126] and Deep Recurrent Q-Networks [164] have been adapted for decentralized settings, enabling agents to handle high-dimensional state and action spaces. Independent DRQN (IDRQN) [165], for example, combines DRQN with independent learning, but its asynchronous experience replay can cause instability. To solve this, Concurrent Experience Replay Trajectories (CERTs) [166] synchronize experience replay among agents, reducing non-stationarity and improving learning efficiency. Other advancements include Deep Hysteretic DRQN (Dec-HDRQN) [166], which combines hysteretic updates with deep neural networks and uses concurrent buffers to handle decentralized data. These methods have shown robust performance in partially observable environments." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.682, + 0.481, + 0.754 + ], + "angle": 0, + "content": "In the DTDE paradigm, policy gradient methods offer an alternative to value-based approaches, particularly for scenarios involving continuous action spaces [122]. Several policy gradient DTDE methods have been proposed:" + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.763, + 0.481, + 0.82 + ], + "angle": 0, + "content": "- Decentralized REINFORCE [129] independently optimizes each agent's policy using gradient ascent based on rewards observed during episodes. While simple, it is less sample-efficient." + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.829, + 0.481, + 0.871 + ], + "angle": 0, + "content": "- Independent Actor-Critic (IAC) [157] Combining value estimation (critic) and policy optimization (actor), IAC enables agents to learn faster" + }, + { + "type": "text", + "bbox": [ + 0.548, + 0.133, + 0.892, + 0.161 + ], + "angle": 0, + "content": "and update more frequently, improving sample efficiency." + }, + { + "type": "text", + "bbox": [ + 0.534, + 0.172, + 0.892, + 0.243 + ], + "angle": 0, + "content": "- Independent Proximal Policy Optimization (IPPO) [158, 153, 133] Extending Proximal Policy Optimization (PPO) to decentralized settings, IPPO improves policy stability by limiting how much policies can change between updates." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.256, + 0.893, + 0.44 + ], + "angle": 0, + "content": "Despite its advantages, DTDE still faces significant challenges [122, 123]: 1. Non-Stationarity: As other agents learn and adapt, the environment appears dynamic and unstable to each agent, making convergence difficult; 2. Credit Assignment: It is hard to determine how each agent's actions contribute to the team's overall reward in cooperative tasks; 3. Trade-Offs Between Scalability and Performance: While DTDE scales well, its performance may be limited in tasks requiring high levels of coordination. To overcome these challenges, future research could focus on improving communication strategies during training and more robust strategies for dynamic environments." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.441, + 0.893, + 0.569 + ], + "angle": 0, + "content": "In conclusion, the DTDE paradigm provides a powerful framework for solving distributed decision-making problems, balancing scalability, independence, and efficiency. 
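As a concrete illustration of the independent, local updates that underlie this paradigm, the hysteretic Q-learning step discussed above can be sketched as follows; the state and action sizes, learning rates, and the sample transition are illustrative.

```python
import numpy as np

# Sketch of a hysteretic Q-learning update for one independent (DTDE) agent:
# improvements are learned with a large rate, deteriorations with a smaller one,
# which damps the noise caused by other agents that are still learning.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, beta, gamma = 0.5, 0.05, 0.95        # alpha > beta: optimism toward improvements

def hysteretic_update(s, a, r, s_next):
    td_error = r + gamma * Q[s_next].max() - Q[s, a]
    lr = alpha if td_error >= 0 else beta   # two learning rates = hysteresis
    Q[s, a] += lr * td_error

hysteretic_update(s=0, a=1, r=1.0, s_next=2)   # one illustrative transition
print(Q[0, 1])
```

Each agent maintains and updates such a table in isolation, which is what gives the DTDE paradigm its scalability.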
It has been successfully applied in areas such as autonomous driving, distributed energy management, and swarm robotics. As research continues, DTDE is expected to play a larger role in real-world multi-agent systems, especially in scenarios requiring robust, independent learning." + }, + { + "type": "title", + "bbox": [ + 0.515, + 0.585, + 0.892, + 0.613 + ], + "angle": 0, + "content": "2.3.3. Centralized Training with Centralized Execution (CTCE)" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.615, + 0.892, + 0.757 + ], + "angle": 0, + "content": "As shown in the right of Figure 3, Centralized Training with Centralized Execution (CTCE) stands out as a fully centralized mechanism to MARL decision-making, where all agents transmit their information to a central controller [150, 151, 152, 126]. This central controller has access to the observations, actions, and rewards of all agents. The agents themselves do not possess policy networks and are not responsible for making decisions. Instead, they simply execute the directives issued by the central controller [125, 123]." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.758, + 0.893, + 0.871 + ], + "angle": 0, + "content": "Multi-Agent DQN [126] is a classic example of the CTCE paradigm, where DQN is combined with a parameter-sharing mechanism to address tasks in multi-agent environments. Gupta et al. [126] firstly extends three single-agent DRL algorithms (DQN [1, 2], TRPO, and A3C) to multi-agent systems, resulting in Multi-Agent-DQN, Multi-Agent-TRPO, and Multi-Agent-A3C. These approaches were designed to learn" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.88, + 0.508, + 0.89 + ], + "angle": 0, + "content": "15" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.107, + 0.129, + 0.485, + 0.657 + ], + "angle": 0, + "content": "cooperative policies in complex, partially observable environments without requiring explicit communication between agents. The DQN algorithm based on multi-agent settings, also known as PS-DQN (Parameter Sharing DQN), effectively utilizes curriculum learning to handle increasing task complexity. By starting with fewer agents and gradually increasing the number, the model scales well to more complex scenarios. Further, this foundational work has led to numerous enhancements and variants based on Multi-Agent DQN, each designed to address specific challenges in multi-agent systems, such as CoRe [150], MARL-DQN [151], and [152]. CoRe [150] introduces a counterfactual reward mechanism into MARL to address the credit assignment problem in cooperative settings. By computing the difference in global rewards when an agent hypothetically changes its action while others keep theirs fixed, CoRe enhances the standard DQN framework, significantly improving learning efficiency and performance in cooperative tasks. MARL-DQN [151] optimizes energy efficiency and resource allocation in NOMA wireless systems by using MARL framework combined with Deep Q-Networks. By combining MARL with DQN, it dynamically adjusts power and time allocation to minimize energy consumption while ensuring quality of service, outperforming traditional methods in terms of efficiency and performance. Hafiz et al. [152] proposed a simplified and efficient multi-agent DQN-based multi-agent system (MAS) that addresses the challenges of complexity, resource demands, and training difficulties inherent in more advanced MARL frameworks. 
The work introduced a shared state and reward system while maintaining agent-specific actions, which streamlines the experience replay process. The significant improvements in tasks such as Cartpole-v1\\(^1\\), LunarLander-v2\\(^2\\), and Maze Traversal\\(^3\\) from OpenAI Gym [167] demonstrates the model's effectiveness and superiority." + }, + { + "type": "title", + "bbox": [ + 0.104, + 0.676, + 0.484, + 0.704 + ], + "angle": 0, + "content": "2.3.4. Addition Taxonomies and Efforts of Communication-based MARL Algorithms" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.706, + 0.482, + 0.763 + ], + "angle": 0, + "content": "As outlined above, three primary paradigms—CTCE, DTDE, and CTDE—have emerged in the MARL domain to tackle the challenges associated with training and execution in multi-agent systems. Each of these" + }, + { + "type": "page_footnote", + "bbox": [ + 0.103, + 0.789, + 0.481, + 0.813 + ], + "angle": 0, + "content": "1Cartpole-v1 game: https://www.gymlibrary.dev/environments/classic_control/cart_pole/." + }, + { + "type": "page_footnote", + "bbox": [ + 0.103, + 0.813, + 0.48, + 0.847 + ], + "angle": 0, + "content": "2LunarLander-v2 game: https://www.gymlibrary.dev/environments/box2d/lunar_lander/ and https://github.com/topics/lunarlander-v2." + }, + { + "type": "page_footnote", + "bbox": [ + 0.104, + 0.847, + 0.48, + 0.87 + ], + "angle": 0, + "content": "3Maze Traversal game: https://github.com/vision-mini/MazeSolverLLM." + }, + { + "type": "list", + "bbox": [ + 0.103, + 0.789, + 0.481, + 0.87 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.132, + 0.893, + 0.188 + ], + "angle": 0, + "content": "paradigms has its strengths and limitations, yet all face inherent difficulties in handling communication among agents, which is critical for effective collaboration and decision-making." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.189, + 0.893, + 0.401 + ], + "angle": 0, + "content": "Specifically, the CTCE paradigm, while providing a fully integrated framework for learning and execution, struggles with scalability as the system size grows. The DTDE paradigm, on the other hand, allows for independent agent training and execution, but often lacks the necessary coordination required for global task optimization. The CTDE paradigm has emerged as a widely adopted approach due to its ability to leverage centralized information during training to learn effective policies, while enabling decentralized execution to operate efficiently in distributed environments. However, even in CTDE, the communication between agents during execution is a bottleneck, prompting researchers to focus on improving communication strategies to enhance system performance." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.402, + 0.893, + 0.672 + ], + "angle": 0, + "content": "Communication-based MARL algorithms have made significant progress in overcoming these challenges. From the perspective of communication protocols and languages, communication-based MARL methods can be categorized into three types: broadcasting communication, targeted communication, and networked communication, as shown in Figure 4. From the technical angle, we provide an overview of these communication-based MARL advancements, categorizing the algorithms into three main groups based on their focus: (1) value function-based Communication-based MARL, (2) policy search-based Communication-based MARL, and (3) Communication-based MARL algorithms designed to improve communication efficiency. 
These approaches represent the forefront of research in enabling agents to effectively share information, coordinate actions, and optimize performance in complex environments. Here, we provide a detailed introduction to these approaches." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.672, + 0.894, + 0.856 + ], + "angle": 0, + "content": "Value Function-Based Communication-based MARL: For Value Function-Based Communication-based MARL Algorithms, several notable works include Differentiable Inter-Agent Learning (DIAL) [168] and Deep Distributed Recurrent Q-Networks (DDRQN) [169, 170]. Among them, DIAL facilitates effective collaboration and optimization of joint action policies by enabling the exchange of gradients of Q-functions between agents. On the other hand, DDRQN leverages recurrent neural networks to address partially observable environments, allowing agents to share critical Q-values or hidden states and make adaptive decisions in dynamic settings." + }, + { + "type": "text", + "bbox": [ + 0.531, + 0.857, + 0.893, + 0.871 + ], + "angle": 0, + "content": "Policy Search-Based Communication-based" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.88, + 0.509, + 0.89 + ], + "angle": 0, + "content": "16" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.116, + 0.134, + 0.341, + 0.283 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.131, + 0.303, + 0.336, + 0.316 + ], + "angle": 0, + "content": "(1) Broadcasting communication" + }, + { + "type": "image", + "bbox": [ + 0.382, + 0.134, + 0.597, + 0.299 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.399, + 0.303, + 0.578, + 0.316 + ], + "angle": 0, + "content": "(2) Targeted communication" + }, + { + "type": "image", + "bbox": [ + 0.644, + 0.135, + 0.874, + 0.299 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.667, + 0.303, + 0.86, + 0.316 + ], + "angle": 0, + "content": "(3) Networked communication" + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.343, + 0.894, + 0.4 + ], + "angle": 0, + "content": "Figure 4: A schematic representation of three distinct communication methods among agents, with arrows indicating the direction of message transmission. (a) Broadcasting communication: The activated agent transmits messages to all other agents within the communication network. (b) Targeted communication: Agents selectively communicate with specific target agents based on a supervisory mechanism that regulates the timing, content, and recipients of the messages. (c) Networked communication: Agents engage in localized interactions with their neighboring agents within the network." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.421, + 0.482, + 0.791 + ], + "angle": 0, + "content": "MARL: For Policy Search-Based Communication-based MARL Algorithms, significant progress has been made with approaches such as Communication Networks (CommNet) [171], Bidirectional Coordinated Network (BiCNet) [172, 173], Multi-Agent Distributed MADDPG (MD-MADDPG) [174, 132], Intrinsic A3C [175, 155], and Multi-Agent Communication and Coordination (MACC) [176, 177]. Among them, CommNet [171] proposes a centralized but differentiable communication framework where agents share encoded signals to form a global context, improving policy decisions. BiCNet [172, 173] enhances coordination among agents by employing bidirectional recurrent layers, making it suitable for complex tasks. 
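The communication step at the core of CommNet can be written in a few lines: each agent encodes its observation into a hidden vector, the hidden vectors of the other agents are averaged into a shared message, and that message is mixed back into every agent's next hidden state. The following PyTorch fragment is a schematic rendering of this averaging step under an assumed hidden size, not the published CommNet code; the policy-search methods discussed next rely on different mechanisms.

```python
# Schematic CommNet-style communication round (illustrative; the hidden size is assumed
# and this is not the reference CommNet implementation).
import torch
import torch.nn as nn

class CommStep(nn.Module):
    """h_i' = tanh(W_h h_i + W_c c_i), where c_i is the mean hidden state of the other agents."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.w_h = nn.Linear(hidden, hidden, bias=False)
        self.w_c = nn.Linear(hidden, hidden, bias=False)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: [n_agents, hidden] hidden states produced from local observations.
        n = h.shape[0]
        comm = (h.sum(dim=0, keepdim=True) - h) / max(n - 1, 1)  # average over the *other* agents
        return torch.tanh(self.w_h(h) + self.w_c(comm))

# Two communication rounds over the encoded observations of four agents.
step = CommStep(hidden=32)
h = torch.randn(4, 32)
h = step(step(h))   # each hidden state now carries information broadcast by all agents
```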
MD-MADDPG [174, 132] combines centralized training and decentralized execution, enabling agents to exchange critical state-action information during training for robust policy learning. Intrinsic A3C [175, 155] introduces intrinsic motivation to encourage effective exploration in sparse-reward scenarios, with agents sharing intrinsic rewards through communication to boost performance. Finally, Multi-Agent Communication and Coordination (MACC) [176, 177] focuses on adaptive communication mechanisms, providing stable and secure coordination to enhance training and execution in dynamic multi-agent environments." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.8, + 0.482, + 0.871 + ], + "angle": 0, + "content": "Communication-based MARL Enhancing Communication Efficiency: For algorithms aimed at enhancing communication efficiency, several outstanding approaches include Attentional Communication (ATOC) [138], Targeted Multi-Agent Communication" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.421, + 0.893, + 0.606 + ], + "angle": 0, + "content": "(TarMAC) [139], Inter-Agent Centralized Communication (IC3Net) [178]. Attentional Communication (ATOC) [138] employs an attention mechanism to dynamically determine when communication is necessary, achieving a balance between efficiency and coordination. Targeted Multi-Agent Communication (TarMAC) [139] introduces targeted attention mechanisms to direct messages to relevant teammates, minimizing redundant communication, and improving overall performance. Inter-Agent Centralized Communication (IC3Net) [178] incorporates a gating mechanism that allows agents to learn when and how to communicate, optimizing both the frequency and quality of interactions." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.608, + 0.892, + 0.706 + ], + "angle": 0, + "content": "These research advances in Communication-based MARL methods demonstrate significant strides in enabling agents to share information and achieve coordinated decision-making in MAS. These advancements will pave the way for deploying MARL in real-world scenarios where efficient and effective communication is essential." + }, + { + "type": "title", + "bbox": [ + 0.515, + 0.731, + 0.857, + 0.744 + ], + "angle": 0, + "content": "2.4. LLMs-based Multi-Agent System Taxonomies" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.758, + 0.892, + 0.87 + ], + "angle": 0, + "content": "The field of LLMs-based multi-agent systems has seen significant advancements, with researchers exploring various aspects of these systems to enhance their capabilities and applications [116, 30]. A comprehensive taxonomy can help categorize and understand the different dimensions of LLMs-based multi-agent systems, including architectural design, application domains, evaluation methods, and future research directions." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.88, + 0.509, + 0.89 + ], + "angle": 0, + "content": "17" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.104, + 0.133, + 0.295, + 0.146 + ], + "angle": 0, + "content": "2.4.1. Architectural Design" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.148, + 0.481, + 0.276 + ], + "angle": 0, + "content": "The design of architectures for LLMs-based multiagent systems is a critical component in harnessing the full potential of LLMs to enhance the capabilities of autonomous agents. Architectural design encompasses the framework and mechanisms that enable agents to interact, adapt, and make decisions in complex and dynamic environments. 
This section explores two primary levels of autonomy within these systems: Adaptive Autonomy and Self-Organizing Autonomy." + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.286, + 0.481, + 0.513 + ], + "angle": 0, + "content": "- Adaptive Autonomy: [103, 146, 113, 110] Adaptive autonomy refers to systems where agents can adjust their behavior within a predefined framework. These agents are designed to operate within the constraints set by the system architects but can adapt their actions based on the specific requirements of the task at hand. For example, in a task-specific adaptation scenario, an agent might adjust its search strategy in an information retrieval task based on the relevance of the results. In a context-aware adaptation scenario, an agent might change its communication style based on the social context of the interaction. This level of autonomy is crucial for agents that need to operate in dynamic environments where the task requirements can change over time." + }, + { + "type": "text", + "bbox": [ + 0.122, + 0.522, + 0.481, + 0.735 + ], + "angle": 0, + "content": "- Self-Organizing Autonomy: [101, 103, 105, 102, 104, 154] Self-organizing autonomy represents a higher level of autonomy where agents can dynamically adapt their behavior without predefined structures. This allows for more flexible and context-aware interactions among agents. For instance, in dynamic task allocation, agents can assign tasks to each other based on the current state of the environment and their individual skills. Emergent behavior is another key feature at this level, where agents can form coalitions or develop new strategies to solve complex problems. This level of autonomy is essential for multi-agent systems that need to operate in highly dynamic and unpredictable environments." + }, + { + "type": "title", + "bbox": [ + 0.104, + 0.749, + 0.24, + 0.762 + ], + "angle": 0, + "content": "2.4.2. Applications" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.763, + 0.481, + 0.821 + ], + "angle": 0, + "content": "In the social sciences [148, 99, 112], LLMs-based agents have been used to simulate various social phenomena, providing insights into human behavior and social dynamics." + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.829, + 0.481, + 0.871 + ], + "angle": 0, + "content": "1) Economic Agents: [179, 101] LLMs can be used to model economic agents, similar to how economists use the concept of homo economicus." + }, + { + "type": "text", + "bbox": [ + 0.547, + 0.133, + 0.892, + 0.274 + ], + "angle": 0, + "content": "Experiments have shown that LLMs can produce results qualitatively similar to those of traditional economic models, making them a promising tool for exploring new social science insights. For example, in market simulation, LLMs can predict market trends and the impact of economic policies. In behavioral economics, LLMs can model individual and group decision-making processes, providing a more nuanced understanding of economic behavior." + }, + { + "type": "text", + "bbox": [ + 0.533, + 0.283, + 0.892, + 0.452 + ], + "angle": 0, + "content": "- 2) Social Network Simulation: [147, 148, 99, 180] The Social-network Simulation System (S3) uses LLMs-based agents to simulate social networks, accurately replicating individual attitudes, emotions, and behaviors. This system can model the propagation of information, attitudes, and emotions at the population level, providing valuable insights into social dynamics. 
For example, it can simulate how information spreads through social networks and identify influential nodes, or model the evolution of social norms and behaviors over time." + }, + { + "type": "text", + "bbox": [ + 0.533, + 0.463, + 0.892, + 0.605 + ], + "angle": 0, + "content": "- 3) User Behavior Analysis: [97, 111, 112] LLMs are employed for user simulation in recommender systems, demonstrating superiority over baseline simulation systems. They can generate reliable user behaviors, improving the accuracy of recommendations. For example, in personalized recommendations, LLMs can generate user profiles and behaviors to optimize recommendation algorithms. In user engagement, LLMs can simulate user interactions to optimize user retention and engagement." + }, + { + "type": "text", + "bbox": [ + 0.515, + 0.613, + 0.892, + 0.669 + ], + "angle": 0, + "content": "In the natural sciences [100, 149, 181], LLMs-based agents have been used to simulate complex systems and processes, providing insights into natural phenomena and scientific theories." + }, + { + "type": "text", + "bbox": [ + 0.533, + 0.678, + 0.892, + 0.833 + ], + "angle": 0, + "content": "1) Macroeconomic Simulation: LLMs-based agents are used for macroeconomic simulation, making realistic decisions and reproducing classic macroeconomic phenomena. These agents can simulate the impact of economic policies on the macroeconomy, providing a more accurate and dynamic model of economic behavior. For example, they can simulate the interactions between different economic sectors and their impact on the overall economy, helping policymakers make more informed decisions." + }, + { + "type": "text", + "bbox": [ + 0.533, + 0.843, + 0.892, + 0.871 + ], + "angle": 0, + "content": "2) Generative Agent-Based Modeling: This approach couples mechanistic models with genera" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.88, + 0.508, + 0.89 + ], + "angle": 0, + "content": "18" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.136, + 0.133, + 0.481, + 0.26 + ], + "angle": 0, + "content": "tive artificial intelligence to unveil social system dynamics, such as norm diffusion and social dynamics. By combining the strengths of both approaches, researchers can model complex social systems and predict their behavior over time. For example, they can model the spread of diseases in a population, the impact of environmental changes on ecosystems, or the evolution of social norms in a community." + }, + { + "type": "text", + "bbox": [ + 0.104, + 0.269, + 0.481, + 0.312 + ], + "angle": 0, + "content": "In engineering [103, 146, 114, 115], LLMs-based agents have been used to develop and optimize complex systems, improving efficiency and performance." + }, + { + "type": "text", + "bbox": [ + 0.121, + 0.32, + 0.481, + 0.463 + ], + "angle": 0, + "content": "- 1) Software Development: LLMs-based agents are used for software development, facilitating sophisticated interactions and decision-making in a wide range of contexts. These agents can assist in code generation, bug detection, and system optimization, improving the productivity and quality of software development. For example, they can generate code snippets based on natural language descriptions, detect bugs in code, and suggest optimizations to improve performance." 
+ }, + { + "type": "text", + "bbox": [ + 0.121, + 0.471, + 0.483, + 0.613 + ], + "angle": 0, + "content": "- 2) Multi-Robot Systems: LLMs-based multi-agent systems are used to simulate complex real-world environments effectively, enabling interactions among diverse agents to solve various tasks. These systems can coordinate the actions of multiple robots, optimizing their behavior to achieve common goals. For example, they can be used in search and rescue operations, where multiple robots need to coordinate their actions to locate and rescue victims." + }, + { + "type": "title", + "bbox": [ + 0.104, + 0.634, + 0.481, + 0.663 + ], + "angle": 0, + "content": "3. Simulation Environments of Multi-Agent Decision-Making" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.673, + 0.483, + 0.871 + ], + "angle": 0, + "content": "First and foremost, the designs and implementations of multi-agent cooperative simulation environments are crucial in the historical research of multi-agent decision-making, which are widely utilized in practical applications and production. These simulation environments form the foundation for conducting efficient and effective studies in multi-agent cooperative decision-making. Specifically, a dynamic multi-agent cooperative decision-making environment refers to predetermined scenarios and platforms where multiple agents collaborate to solve problems, complete tasks, and achieve goals. Such environments provide not only a platform for testing and validating various intelligent decision-making algorithms but also help" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.133, + 0.895, + 0.39 + ], + "angle": 0, + "content": "us better understand the behaviors and interactions of agents in dynamic settings. By simulating these interactions, researchers can gain insights into how agents coordinate and adapt to changing conditions, thereby improving the robustness and efficiency of multi-agent systems in real-world applications. Consequently, the importance of these simulation environments cannot be overstated. They serve as a testing ground for theoretical models, allowing researchers to observe the practical implications of their intelligent algorithms. Additionally, these platforms help in identifying potential issues and refining strategies before deployment in actual scenarios, ensuring that the agents are well-prepared to handle the complexities of real-world environments. In Table 2, a wide range of simulated environments is listed. Next, we will delve into these environments one by one, emphasizing their significance and features for future development." + }, + { + "type": "title", + "bbox": [ + 0.515, + 0.403, + 0.816, + 0.416 + ], + "angle": 0, + "content": "3.1. MARL-based Simulation Environments" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.421, + 0.894, + 0.52 + ], + "angle": 0, + "content": "This section provides an overview of several widely-used simulation environments designed for MARL. These platforms, such as Multi-Agent Particle Environment [59, 136, 137], and PettingZoo [182], offer diverse scenarios and functionalities for exploring cooperative and competitive agent interactions in both simple and complex tasks." 
+ }, + { + "type": "image", + "bbox": [ + 0.524, + 0.533, + 0.644, + 0.624 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.541, + 0.624, + 0.623, + 0.633 + ], + "angle": 0, + "content": "Communication" + }, + { + "type": "image", + "bbox": [ + 0.645, + 0.533, + 0.764, + 0.622 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.685, + 0.624, + 0.722, + 0.633 + ], + "angle": 0, + "content": "Spread" + }, + { + "type": "image", + "bbox": [ + 0.768, + 0.533, + 0.885, + 0.622 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.799, + 0.624, + 0.85, + 0.633 + ], + "angle": 0, + "content": "Reference" + }, + { + "type": "image_caption", + "bbox": [ + 0.515, + 0.647, + 0.894, + 0.671 + ], + "angle": 0, + "content": "Figure 5: Typical Scenarios in Multi-Agent Particle Environment (MPE)." + }, + { + "type": "title", + "bbox": [ + 0.515, + 0.699, + 0.868, + 0.713 + ], + "angle": 0, + "content": "3.1.1. Several Widely-used Environments on MARL" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.715, + 0.894, + 0.828 + ], + "angle": 0, + "content": "Multi-Agent Particle Environment (MPE) [59, 136, 137] is a versatile and widely-used MARL platform designed for research in both cooperative and competitive settings. Developed by OpenAI, it is primarily known for being the testing environment of the MAD-DPG algorithm [136]. MPE is a time-discrete, space-continuous 2D platform designed for evaluating MARL algorithms." + }, + { + "type": "text", + "bbox": [ + 0.515, + 0.829, + 0.894, + 0.871 + ], + "angle": 0, + "content": "Figure 5, initially derived from Malloy et al. [137], illustrates various scenarios within the Multi-Agent Particle Environment (MPE), including tasks" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.879, + 0.509, + 0.89 + ], + "angle": 0, + "content": "19" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.227, + 0.14, + 0.77, + 0.152 + ], + "angle": 0, + "content": "Table 2: Diverse MARL-based and LLMs-based Simulated Environments for Multi-Agent Systems." + }, + { + "type": "table", + "bbox": [ + 0.215, + 0.153, + 0.785, + 0.313 + ], + "angle": 0, + "content": "
Categories | Multi-Agent System Environments
MARL-based | Multi-Agent Particle Environment (MPE) [59, 136, 137], PettingZoo [182], SMAC [134], SMAC-v2 [135], GFootball [55], Gym-Microrts [183], MAgent [184], Dexterous Hands [185, 186], OpenAI Gym [167], Gym-MiniGrid [187]\\(^{4}\\), Melting Pot [188]\\(^{5}\\), Capture The Flag\\(^{6}\\) [189], VillagerAgent [190], Minecraft [191, 192, 193], Unity ML-Agents [194], SUMO\\(^{7}\\) [195], Hanabi Learning [196, 197], Predator-Prey [198, 199]
LLMs-based | TDW-MAT [147, 148], C-WAH [99], Cuisineworld [149], AgentScope [100], RoCoBench [200], Generative Agents [181, 201], SocialAI school [202, 203], Welfare Diplomacy [204]
" + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.337, + 0.482, + 0.563 + ], + "angle": 0, + "content": "such as adversarial interactions, cooperative crypto, object pushing, and team-based world navigation. Compatible with the widely-used Gym interface, it supports a variety of tasks ranging from fully cooperative to mixed cooperative-competitive scenarios, such as simple_adversary, simple_crypto, simple_spread, simple-speaker Listener, and simple_world_comm8. Each scenario highlights distinct cooperative and competitive dynamics among agents. MPE allows agents to interact and strategize within a visually simplistic UI where particles represent different entities. MPE is a open-source platform that widely adopted in the multi-agent system research, enabling extensive customization and contributing to its role as a standard tool for studying complex multi-agent dynamics." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.564, + 0.482, + 0.648 + ], + "angle": 0, + "content": "Overall, MPE is a pivotal resource in the MARL community, offering a well-rounded platform for experimentation and algorithm comparison. Its design and functionality have made it an indispensable tool for researchers seeking to push the boundaries of what is possible in multi-agent systems." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.648, + 0.483, + 0.777 + ], + "angle": 0, + "content": "StarCraft Multi-Agent Challenge (SMAC)[134] is a widely-used benchmark for MARL that focuses on decentralized micromanagement tasks in the popular real-time strategy game StarCraft II[10]. In SMAC, multiple agents control individual units and must learn to cooperate and coordinate actions based on local, partial observations. The agents face complex challenges, including coordinating combat techniques like focus fire, kiting, and positioning, while the opponent" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.337, + 0.895, + 0.565 + ], + "angle": 0, + "content": "is controlled by the built-in StarCraft II AI. SMAC emphasizes problems such as partial observability, decentralized decision-making, and multi-agent credit assignment. The environment is structured to simulate real-world scenarios where agents must learn to collaborate without full knowledge of the global state. Agents' observations are restricted to a limited field of view, forcing them to rely on local information for decision-making. As shown in Figure 6, these multi-agent cooperative decision-making environments are respectively 2s vs 3z, 5m vs 6m, 6h vs 8z, MM2, where the inside numbers represent the number of units and the letters represent the unit types in general. In recent years, SMAC has become a standard benchmark for evaluating MARL algorithms, offering a rigorous and challenging environment for advancing the field." 
+ }, + { + "type": "image", + "bbox": [ + 0.522, + 0.574, + 0.703, + 0.655 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.59, + 0.658, + 0.642, + 0.668 + ], + "angle": 0, + "content": "2s_vs_3z" + }, + { + "type": "image", + "bbox": [ + 0.708, + 0.574, + 0.888, + 0.655 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.767, + 0.658, + 0.83, + 0.669 + ], + "angle": 0, + "content": "5m_vs_6m" + }, + { + "type": "image", + "bbox": [ + 0.522, + 0.675, + 0.702, + 0.758 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.589, + 0.76, + 0.638, + 0.769 + ], + "angle": 0, + "content": "6h_vs_8z" + }, + { + "type": "image", + "bbox": [ + 0.706, + 0.675, + 0.887, + 0.758 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.787, + 0.76, + 0.825, + 0.769 + ], + "angle": 0, + "content": "MMM2" + }, + { + "type": "image_caption", + "bbox": [ + 0.515, + 0.783, + 0.892, + 0.807 + ], + "angle": 0, + "content": "Figure 6: Several Typical Scenarios in StarCraft Multi-Agent Challenge (SMAC)." + }, + { + "type": "text", + "bbox": [ + 0.515, + 0.817, + 0.892, + 0.847 + ], + "angle": 0, + "content": "StarCraft Multi-Agent Challenge 2 (SMACv2)[1134, 154, 91] However, SMAC [134] has limitations," + }, + { + "type": "page_footnote", + "bbox": [ + 0.529, + 0.858, + 0.892, + 0.871 + ], + "angle": 0, + "content": "11StarCraft Multi-Agent Challenge 2 (SMACv2): https://" + }, + { + "type": "page_footnote", + "bbox": [ + 0.103, + 0.8, + 0.479, + 0.836 + ], + "angle": 0, + "content": "\\(^{8}\\)Multi-Agent Particle Environment: https://github.com/openai/multiagent-particle-envs/tree/master/multiagent/scenarios." + }, + { + "type": "page_footnote", + "bbox": [ + 0.103, + 0.836, + 0.478, + 0.858 + ], + "angle": 0, + "content": "\\(^{9}\\)StarCraft Multi-Agent Challenge (SMAC): https://github.com/oxwhirl/smac." + }, + { + "type": "page_footnote", + "bbox": [ + 0.123, + 0.858, + 0.433, + 0.871 + ], + "angle": 0, + "content": "10StarCraft II: https://starcraft2.blizzard.com/." + }, + { + "type": "list", + "bbox": [ + 0.103, + 0.8, + 0.479, + 0.871 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.879, + 0.509, + 0.89 + ], + "angle": 0, + "content": "20" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.102, + 0.132, + 0.482, + 0.46 + ], + "angle": 0, + "content": "including insufficient stochasticity and partial observability, which allows agents to perform well with simple open-loop policies. To address these shortcomings, SMACv2 introduces procedural content generation (PCG), randomizing team compositions and agent positions, ensuring agents face novel, diverse scenarios. Several multi-agent decision-making scenarios are depicted in Figure 7, which are from Benjamin et al. [135]. This requires more sophisticated, closed-loop policies that condition on both ally and enemy information. Additionally, SMACv2 includes the Extended Partial Observability Challenge (EPO), where enemy observations are masked stochastically, forcing agents to adapt to incomplete information and communicate more effectively. SMACv2 thus represents a major evolution of the original benchmark, addressing key gaps such as the lack of stochasticity and meaningful partial observability. 
These changes make SMACv2 a more challenging environment, requiring agents to generalize across varied settings and improve coordination, communication, and decentralized decision-making. Overall, SMACv2 provides a more rigorous testbed for advancing the field of cooperative MARL." + }, + { + "type": "image", + "bbox": [ + 0.111, + 0.469, + 0.293, + 0.57 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.175, + 0.574, + 0.227, + 0.584 + ], + "angle": 0, + "content": "(a) Terran" + }, + { + "type": "image", + "bbox": [ + 0.298, + 0.469, + 0.475, + 0.57 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.354, + 0.574, + 0.414, + 0.584 + ], + "angle": 0, + "content": "(b) Protoss" + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.599, + 0.481, + 0.621 + ], + "angle": 0, + "content": "Figure 7: Several scenarios from SMACv2 showing agents battling the built-in AI." + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.632, + 0.483, + 0.846 + ], + "angle": 0, + "content": "Google Research Football Environment (GFootball) [55] is a state-of-the-art multi-agent simulation environment developed by the Google Research Brain Team. It is specifically designed for reinforcement learning research and is built on top of the open-source football game, GamePlay Football. GFootball is compatible with the OpenAI Gym API, making it a versatile tool not only for training intelligent agents but also for allowing players to interact with the built-in AI or trained agents using a keyboard or game controller. GFootball features an advanced, physics-based 3D football simulator where agents can be trained to play football, offering a challenging yet highly customizable platform for testing novel reinforcement learning algorithms and ideas. GFootball is tailored for multi-agent" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.133, + 0.894, + 0.219 + ], + "angle": 0, + "content": "experiments and multiplayer scenarios, enabling the exploration of more complex interactions and strategies. GFootball supports various scenarios, including fullgame simulations with varying difficulty levels, as well as simpler tasks in the Football Academy that focus on specific skills like passing or scoring." + }, + { + "type": "image", + "bbox": [ + 0.527, + 0.232, + 0.707, + 0.31 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.709, + 0.232, + 0.882, + 0.31 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.526, + 0.311, + 0.707, + 0.389 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.709, + 0.311, + 0.882, + 0.39 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.515, + 0.404, + 0.893, + 0.428 + ], + "angle": 0, + "content": "Figure 8: Typical examples of Training Scenarios in Football Academy." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.442, + 0.893, + 0.725 + ], + "angle": 0, + "content": "Moreover, training agents for the \"Football Benchmark\" can be quite challenging. To help researchers efficiently test and iterate on new ideas, researchers provide a toolset called \"Football Academy\", as illustrated in Figure 8, which includes a series of scenarios with varying levels of difficulty. 
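Because GFootball exposes a Gym-style interface, a Football Academy drill can be loaded and stepped through in a handful of lines. The snippet below follows the gfootball Python package as commonly used; the scenario name, the representation string, and the keyword arguments are assumptions that may differ across package versions.

```python
# Minimal GFootball rollout sketch (scenario name and keyword arguments are assumed;
# check the installed gfootball version for the exact API).
import gfootball.env as football_env

env = football_env.create_environment(
    env_name="academy_empty_goal_close",   # one of the Football Academy drills
    representation="simple115",            # compact float-vector observations
    render=False,
)

obs = env.reset()
done, episode_return = False, 0.0
while not done:
    action = env.action_space.sample()     # random policy as a placeholder
    obs, reward, done, info = env.step(action)
    episode_return += reward
print("episode return:", episode_return)
```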
These scenarios range from simple setups, such as a single player scoring against an open goal (e.g., approaching an open goal, scoring in an open goal, or scoring while running), to more complex team-based setups, where a controlled team must break through specific defensive formations (e.g., scoring while running against a goalkeeper, passing and shooting against a goalkeeper, and 3v1 against a goalkeeper). Additionally, the toolset covers common situations in football matches, such as corner kicks, simple counterattacks, and complex counterattacks. Lastly, as an famous open-source GitHub project12, it offers a unique opportunity for researchers and pushes the boundaries of AI research in a reproducible and scalable manner." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.726, + 0.894, + 0.799 + ], + "angle": 0, + "content": "Unity Machine Learning-Agents Toolkit[13] [194] is an open-source platform designed to enable games and simulations to serve as environments for training intelligent agents. Built on Unity's powerful game engine, it supports a wide range of AI and machine learn" + }, + { + "type": "page_footnote", + "bbox": [ + 0.515, + 0.823, + 0.722, + 0.847 + ], + "angle": 0, + "content": "12Google Research Football: google-research/football." + }, + { + "type": "page_footnote", + "bbox": [ + 0.515, + 0.847, + 0.725, + 0.871 + ], + "angle": 0, + "content": "13Unity ML-Agents Toolkit: Unity-Technologies/ml-agents." + }, + { + "type": "list", + "bbox": [ + 0.515, + 0.823, + 0.725, + 0.871 + ], + "angle": 0, + "content": null + }, + { + "type": "page_footnote", + "bbox": [ + 0.753, + 0.825, + 0.892, + 0.837 + ], + "angle": 0, + "content": "https://github.com/" + }, + { + "type": "page_footnote", + "bbox": [ + 0.753, + 0.848, + 0.892, + 0.859 + ], + "angle": 0, + "content": "https://github.com/" + }, + { + "type": "list", + "bbox": [ + 0.753, + 0.825, + 0.892, + 0.859 + ], + "angle": 0, + "content": null + }, + { + "type": "footer", + "bbox": [ + 0.104, + 0.859, + 0.29, + 0.87 + ], + "angle": 0, + "content": "github.com/oxwhirl/smacv2." + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.879, + 0.507, + 0.89 + ], + "angle": 0, + "content": "21" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.103, + 0.132, + 0.482, + 0.218 + ], + "angle": 0, + "content": "ing methods, including reinforcement learning, imitation learning, and neuroevolution, through an intuitive Python API. The platform includes state-of-the-art algorithm implementations (based on PyTorch), allowing researchers and developers to train agents for 2D, 3D, and VR/AR applications." + }, + { + "type": "image", + "bbox": [ + 0.112, + 0.233, + 0.477, + 0.476 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.488, + 0.483, + 0.557 + ], + "angle": 0, + "content": "Figure 9: Typical Training Scenarios in Unity Machine Learning-Agents Toolkit (released version: v0.11). From Left-to-right, up-to-down: (a) Basic, (b) 3DBall, (c) Crawler, (d) Push Block, (e) Tennis, (f) Worm, (g) Bouncer, (h) Grid World, (i) Walker, (j) Reacher, (k) Food Collector, (l) Pyramids, (m) Wall Jump, (n) Hallway, (o) Soccer Twos [194]." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.572, + 0.483, + 0.713 + ], + "angle": 0, + "content": "ML-Agents is particularly useful for training NPC behaviors in diverse scenarios, automated testing of game builds, and evaluating game design decisions. 
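For reference, connecting to a built Unity environment through the toolkit's Python API typically looks like the sketch below. It follows the mlagents_envs package as documented in recent releases; the file name is a placeholder and individual method names may vary between toolkit versions.

```python
# Sketch of driving a Unity build from Python via mlagents_envs (method names follow
# recent releases of the toolkit and may differ in older versions; file_name is a placeholder).
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="3DBall")           # path to a compiled Unity environment
env.reset()
behavior_name = list(env.behavior_specs)[0]          # name of the first registered behavior
spec = env.behavior_specs[behavior_name]

for _ in range(10):                                  # a few environment steps with random actions
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    action = spec.action_spec.random_action(len(decision_steps))
    env.set_actions(behavior_name, action)
    env.step()

env.close()
```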
It features a highly flexible simulation environment with realistic visuals, physics-driven interactions, and rich task complexity. By integrating tools for creating custom environments and supporting multi-agent and adversarial settings, the toolkit bridges the gap between AI research and practical applications in game development." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.715, + 0.483, + 0.871 + ], + "angle": 0, + "content": "As seen from Figure 9, it depicts several typical multi-agent environments from the previous work of Juliani et al. [194]. The platform also provides key components such as a Python API, Unity SDK, and pre-built environments, enabling users to customize and evaluate their algorithms in Unity's interactive and visually rich settings. With its versatility and accessibility, Unity ML-Agents Toolkit has become an indispensable resource for both AI researchers and game developers, driving innovation in artificial intelligence and simulation-based learning." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.132, + 0.895, + 0.332 + ], + "angle": 0, + "content": "Gym-Microrts\\(^{14}\\) [183] (pronounced \"Gym-micro-RTS\") is a fast and affordable reinforcement learning (RL) platform designed to facilitate research in fullgame Real-Time Strategy (RTS) games. Unlike traditional RTS research that demands extensive computational resources, Gym- \\(\\mu\\)RTS allows training advanced agents using limited hardware, such as a single GPU and CPU setup, within reasonable timeframes. Figure 10 showcases a match between our best-trained agent (top-left) and CoacAI (bottom-right), the \\(2020\\mu\\)RTS AI competition champion. The agent employs an efficient strategy, starting with resource harvesting and a worker rush to damage the enemy base, transitioning into midgame combat unit production to secure victory." + }, + { + "type": "image", + "bbox": [ + 0.539, + 0.341, + 0.695, + 0.447 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.699, + 0.343, + 0.871, + 0.443 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.539, + 0.449, + 0.703, + 0.556 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.708, + 0.444, + 0.871, + 0.556 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.514, + 0.572, + 0.894, + 0.607 + ], + "angle": 0, + "content": "Figure 10: Screenshot of our best-trained agent (top-left) playing against CoacAI (bottom-right), the \\(2020\\mu \\mathrm{RTS}\\) AI competition champion [183]." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.618, + 0.895, + 0.775 + ], + "angle": 0, + "content": "The platform offers a simplified RTS environment that captures the core challenges of RTS games, including combinatorial action spaces, real-time decision-making, and partial observability. Gym- \\(\\mu\\)RTS employs a low-level action space, enabling fine-grained control over individual units without AI assistance, which poses unique challenges and opportunities for RL algorithms. It supports Proximal Policy Optimization (PPO) and incorporates techniques like invalid action masking, action composition, and diverse training opponents to enhance training efficiency and agent performance." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.775, + 0.894, + 0.833 + ], + "angle": 0, + "content": "Gym- \\(\\mu\\)RTS has demonstrated its effectiveness by producing state-of-the-art DRL agents capable of defeating top competition bots, such as CoacAI. 
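Among the techniques listed above, invalid action masking is the simplest to state in code: the logits of actions that are illegal in the current state are set to negative infinity before the softmax, so the policy can never sample them. The fragment below is a generic PyTorch illustration of the idea, not the Gym-microRTS training code.

```python
# Generic invalid-action-masking sketch (illustrative; not the Gym-microRTS code).
import torch

def masked_categorical(logits: torch.Tensor, valid_mask: torch.Tensor) -> torch.distributions.Categorical:
    """Zero out the probability of illegal actions.

    logits:     [batch, n_actions] raw policy outputs.
    valid_mask: [batch, n_actions] boolean, True where an action is legal.
    """
    masked_logits = logits.masked_fill(~valid_mask, float("-inf"))
    return torch.distributions.Categorical(logits=masked_logits)

# Usage: action 1 is illegal, so it gets zero probability and is never sampled.
logits = torch.tensor([[1.0, 5.0, 0.5]])
mask = torch.tensor([[True, False, True]])
dist = masked_categorical(logits, mask)
print(dist.probs)      # roughly [[0.62, 0.00, 0.38]]
print(dist.sample())   # always 0 or 2
```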
The platform is open-source and provides all necessary tools for" + }, + { + "type": "page_footnote", + "bbox": [ + 0.516, + 0.847, + 0.625, + 0.871 + ], + "angle": 0, + "content": "\\(^{14}\\)Gym-Microrts: gym-microrts." + }, + { + "type": "page_footnote", + "bbox": [ + 0.703, + 0.847, + 0.892, + 0.859 + ], + "angle": 0, + "content": "https://github.com/kered9/" + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.879, + 0.509, + 0.89 + ], + "angle": 0, + "content": "22" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.103, + 0.133, + 0.481, + 0.175 + ], + "angle": 0, + "content": "researchers to explore and advance RL techniques in RTS games, making it a valuable resource for both AI researchers and gaming enthusiasts." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.175, + 0.481, + 0.288 + ], + "angle": 0, + "content": "\\(\\mathbf{MAgent}^{15}\\) [184] is an open-source platform specifically designed to support large-scale MARL research, with a focus on exploring Artificial Collective Intelligence (ACI). Unlike traditional MARL platforms, MAgent excels in handling scenarios involving hundreds to millions of agents, making it ideal for studying complex interactions and emergent behaviors in large populations." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.29, + 0.481, + 0.432 + ], + "angle": 0, + "content": "For instance, as shown in Figure 11, the \"Pursuit\" scenario is a classic example designed to showcase the emergent cooperative behaviors of agents in a predator-prey environment. In this setup, predators work together to capture preys while the preys attempt to evade capture. Each predator receives rewards for successfully attacking a prey, while preys are penalized if caught. Over time, predators learn to form collaborative strategies, such as surrounding and trapping preys, highlighting the emergence of local cooperation." + }, + { + "type": "image", + "bbox": [ + 0.118, + 0.447, + 0.291, + 0.579 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.149, + 0.58, + 0.258, + 0.589 + ], + "angle": 0, + "content": "Adversarial Pursuit" + }, + { + "type": "image", + "bbox": [ + 0.297, + 0.448, + 0.47, + 0.577 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.339, + 0.578, + 0.432, + 0.587 + ], + "angle": 0, + "content": "Combined Arms" + }, + { + "type": "image", + "bbox": [ + 0.118, + 0.597, + 0.291, + 0.721 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.179, + 0.723, + 0.219, + 0.733 + ], + "angle": 0, + "content": "Gather" + }, + { + "type": "image", + "bbox": [ + 0.297, + 0.593, + 0.47, + 0.721 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.347, + 0.723, + 0.408, + 0.734 + ], + "angle": 0, + "content": "Tiger Deer" + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.75, + 0.481, + 0.773 + ], + "angle": 0, + "content": "Figure 11: Illustrations of one of the typical running example in MAGent, called \"Pursuit\" [184]." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.788, + 0.481, + 0.831 + ], + "angle": 0, + "content": "The platform is based on a grid-world model where agents can perform actions such as moving, turning, or attacking, while perceiving both local and global" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.133, + 0.892, + 0.26 + ], + "angle": 0, + "content": "information. 
Through a flexible Python interface, researchers can easily customize the state space, action space, and reward mechanisms, enabling the rapid creation of complex cooperative or competitive environments. MAgent comes with several built-in scenarios, such as pursuit, resource gathering, and team-based battles, which highlight emergent social behaviors like cooperative strategies, competitive dynamics, and resource monopolization." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.261, + 0.893, + 0.374 + ], + "angle": 0, + "content": "MAgent is highly scalable, leveraging GPU-based parallelism to simulate large-scale interactions efficiently. It also provides intuitive visualization tools for real-time observation of agent behaviors, facilitating analysis and debugging. Its flexibility and scalability make MAgent a powerful tool for MARL research, enabling the study of large-scale agent interactions, emergent behaviors, and the dynamics of artificial societies." + }, + { + "type": "title", + "bbox": [ + 0.515, + 0.39, + 0.886, + 0.403 + ], + "angle": 0, + "content": "3.2. LLMs Reasoning-based Simulation Environments" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.408, + 0.892, + 0.578 + ], + "angle": 0, + "content": "LLMs-based multi-agent systems have become an essential tool for enhancing the collaboration, reasoning, and decision-making capabilities of autonomous agents [116]. By integrating LLMs with simulation platforms, researchers can create complex test environments to explore the interactions of multi-agent systems in various tasks and scenarios. These simulation environments not only provide rich dynamic testing scenarios but also promote the widespread application of LLMs in task planning, coordination, and execution. The following will introduce several widely used simulation platforms for LLM multi-agent systems." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.579, + 0.893, + 0.82 + ], + "angle": 0, + "content": "ThreeDWorld Multi-Agent Transport (TDW-MAT)\\(^{16}\\) [147, 148] is a simulation environment for multi-agent embodied task, which is extended from the ThreeDWorld Transport Challenge [147] and is designed for visually-guided task-and-motion planning in physically realistic settings. It operates within the ThreeDWorld (TDW) platform, which offers high-fidelity sensory data, real-time physics-driven interactions, and near-photorealistic rendering. In TDW-MAT, embodied agents are tasked with transporting objects scattered throughout a simulated home environment using containers, emphasizing the need for coordination, physics awareness, and efficient planning. For instance, in the common scenario shown in Figure 12, the agent must transport objects scattered across multiple rooms and place them on the bed (marked with a green bounding box) in the bedroom." + }, + { + "type": "page_footnote", + "bbox": [ + 0.117, + 0.857, + 0.424, + 0.871 + ], + "angle": 0, + "content": "15MAgent: https://github.com/geek-ai/MAgent." + }, + { + "type": "page_footnote", + "bbox": [ + 0.515, + 0.847, + 0.892, + 0.87 + ], + "angle": 0, + "content": "16ThreeDWorld Multi-Agent Transport: https://github.com/ threadworld-mit/tdw." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.88, + 0.508, + 0.89 + ], + "angle": 0, + "content": "23" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.111, + 0.13, + 0.476, + 0.277 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.104, + 0.289, + 0.481, + 0.313 + ], + "angle": 0, + "content": "Figure 12: An overview of one of the example task in ThreeDWorld Transport Challenge [147, 148]." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.336, + 0.481, + 0.536 + ], + "angle": 0, + "content": "Communicative Watch-And-Help (C-WAH)\\(^{17}\\) [99] is a realistic multi-agent simulation environment and an extension of the Watch-And-Help Challenge platform, VirtualHome-Social [98]. C-WAH places a greater emphasis on cooperation and enhances communication between agents compared to VirtualHome-Social. Built on the VirtualHome-Social, C-WAH simulates common household activities where agents must collaborate to complete tasks such as preparing meals, washing dishes, and setting up a dinner table. As shown in Figure 13, C-WAH supports both symbolic and visual observation modes, allowing agents to perceive their surroundings either through detailed object information or egocentric RGB and depth images." + }, + { + "type": "image", + "bbox": [ + 0.111, + 0.548, + 0.476, + 0.687 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.699, + 0.481, + 0.723 + ], + "angle": 0, + "content": "Figure 13: An typical object-moving task leveraging LLMs-based embodied agents within the Communicative Watch-And-Help [99]." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.737, + 0.483, + 0.81 + ], + "angle": 0, + "content": "Cuisineworld[18] [149] is a virtual kitchen environment designed to evaluate and enhance multi-agent collaboration and coordination (i.e., the working efficiency) in a gaming context. As shown in Figure 14, in this scenario, multiple agents work together to prepare" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.133, + 0.894, + 0.318 + ], + "angle": 0, + "content": "and complete dish orders within a limited time frame. The tasks range from simple preparations, like chopping ingredients, to complex cooking processes that involve multiple tools and steps. CuisineWorld is equipped with a textual interface, and it supports various levels of difficulty, making it a flexible and robust testbed for assessing the planning and scheduling capabilities of Large Foundation Models (LFMs). The environment also introduces a \"Collaboration Score\" (CoS) metric to measure the efficiency of agent coordination as task demands increase, providing a comprehensive benchmark for multi-agent system performance in dynamic and cooperative settings." + }, + { + "type": "image", + "bbox": [ + 0.524, + 0.332, + 0.886, + 0.475 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.515, + 0.489, + 0.893, + 0.513 + ], + "angle": 0, + "content": "Figure 14: An typical multi-agent cooperative scenario in the CuisineWorld platform [149]." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.526, + 0.893, + 0.825 + ], + "angle": 0, + "content": "AgentScope[100] is a innovative, robust and flexible multi-agent platform designed to empower developers in building advanced multi-agent systems by leveraging the potential of LLMs. At its core, the platform employs a process-based message exchange mechanism, simplifying the complexities of agent communication and collaboration. 
This approach ensures smooth and efficient agent interaction, enabling developers to focus on designing workflows rather than low-level details. The platform stands out for its comprehensive fault-tolerance infrastructure, which includes retry mechanisms, rule-based corrections, and customizable error-handling configurations. AgentScope also excels in multi-modal support, seamlessly integrating text, images, audio, and video into its workflows. By decoupling data storage and transfer, it optimizes memory usage and enhances scalability, making it ideal for applications requiring rich multi-modal interactions. Additionally, its actor-based distributed framework enables efficient parallel execution and supports hybrid deployments, bridging the gap between local and distributed" + }, + { + "type": "page_footnote", + "bbox": [ + 0.103, + 0.835, + 0.481, + 0.859 + ], + "angle": 0, + "content": "17Communicative Watch-And-Help: https://github.com/xavierpuigf/watch_and_help." + }, + { + "type": "page_footnote", + "bbox": [ + 0.123, + 0.859, + 0.416, + 0.871 + ], + "angle": 0, + "content": "18Cuisineworld: https://mindagent.github.io/." + }, + { + "type": "list", + "bbox": [ + 0.103, + 0.835, + 0.481, + 0.871 + ], + "angle": 0, + "content": null + }, + { + "type": "page_footnote", + "bbox": [ + 0.516, + 0.846, + 0.893, + 0.871 + ], + "angle": 0, + "content": "19 AgentScope: https://github.com/modelscope/ agentscope." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.879, + 0.509, + 0.89 + ], + "angle": 0, + "content": "24" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.105, + 0.133, + 0.271, + 0.145 + ], + "angle": 0, + "content": "environments with ease." + }, + { + "type": "image", + "bbox": [ + 0.11, + 0.156, + 0.477, + 0.298 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.104, + 0.31, + 0.481, + 0.334 + ], + "angle": 0, + "content": "Figure 15: The official multi-modal interaction Web UI page between agents in the AgentScope platform [100]." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.351, + 0.481, + 0.549 + ], + "angle": 0, + "content": "Moreover, to improve user interaction with multiple agents, AgentScope assigns distinct colors and icons to each agent, as shown in Figure 15, providing clear visual differentiation in both the terminal and web interface. Designed with user accessibility in mind, AgentScope provides intuitive programming tools, including pipelines and message centers, which streamline the development process. Its interactive user interfaces, both terminal- and web-based, allow developers to monitor performance, track costs, and engage with agents effectively. These features position AgentScope as a state-of-the-art platform for advancing multi-agent systems, combining ease of use with cutting-edge technology." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.55, + 0.483, + 0.821 + ], + "angle": 0, + "content": "RoCoBench\\(^{20}\\) RoCoBench is a benchmark platform, proposed by Mandi et al. [200], designed to evaluate and enhance the collaborative capabilities of multirobot systems powered by LLMs. Built as an extension to the RoCo project\\(^{21}\\), RoCoBench provides a realistic simulation environment where robotic agents interact and collaborate to complete complex tasks, as shown in Figure 16, such as sorting packages, assembling components, or preparing a workspace. 
RoCoBench places a strong emphasis on communication-driven collaboration, integrating both symbolic and visual interaction modes to enable robots to perceive and respond to their environment effectively. Each robot is equipped with LLMs-powered reasoning, facilitating real-time dialogue and coordination. Correspondingly, the benchmark introduces a \"Collaboration Efficiency Metric\" (CEM) to evaluate the effectiveness of multirobot teamwork, taking into account factors like task completion time, resource allocation, and the quality" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.133, + 0.895, + 0.203 + ], + "angle": 0, + "content": "of inter-robot communication. RoCoBench serves as a comprehensive platform for evaluating the potential of LLMs in driving dialectic multi-robot collaboration, offering a scalable and flexible environment for developers and researchers alike" + }, + { + "type": "image", + "bbox": [ + 0.523, + 0.215, + 0.891, + 0.337 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.514, + 0.348, + 0.894, + 0.416 + ], + "angle": 0, + "content": "Figure 16: An overview of RoCoBench, a collection of six multirobot collaboration tasks set in a tabletop manipulation environment. The scenarios encompass a diverse range of collaborative challenges, each demanding distinct communication and coordination strategies between robots, incorporating familiar, intuitive objects designed to align with the semantic understanding capabilities of LLMs [200]." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.435, + 0.895, + 0.82 + ], + "angle": 0, + "content": "Generative Agents\\(^{22}\\) Park et al. [181, 201] introduces Generative Agents, a groundbreaking framework for simulating human behavior in interactive virtual worlds. These agents exhibit realistic individual and collective behaviors by incorporating dynamic memory, self-reflection, and action planning capabilities. The system leverages LLMs to store, retrieve, and synthesize memories into higher-level reasoning, enabling agents to adapt their actions based on personal experiences and evolving environmental changes. As illustrated in Figure 16, they present an interactive sandbox environment called Smallville, akin to \"The Sims,\" where 25 distinct virtual agents live, interact, and carry out daily activities. Each agent has a detailed initial profile, including personal traits, relationships, and goals, stored as \"seed memories.\" Agents engage in natural language-based dialogues and demonstrate social behaviors such as hosting events, making new acquaintances, and responding to user interventions. Generative Agents enable interactive applications in fields such as simulating realistic social dynamics for games and training simulations; designing dynamic, non-scripted virtual worlds for interactive systems; and exploring theories and behaviors in a controlled yet realistic virtual setting. The evaluations revealed the critical role of memory retrieval, self-reflection, and action planning in achieving coherent agent behaviors. Common issues," + }, + { + "type": "page_footnote", + "bbox": [ + 0.116, + 0.846, + 0.436, + 0.858 + ], + "angle": 0, + "content": "\\(^{20}\\)RoCoBench: https://project-roco.github.io/." + }, + { + "type": "page_footnote", + "bbox": [ + 0.119, + 0.858, + 0.438, + 0.871 + ], + "angle": 0, + "content": "\\(^{21}\\)RoCo Project: https://project-roco.github.io/." 
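To illustrate the memory-retrieval step on which Generative Agents depend, the following plain-Python sketch scores each memory by combining recency, importance, and relevance, as described by Park et al. [181, 201]; the decay rate, the equal weighting, and the embedding representation are simplifying assumptions rather than the released implementation.

```python
# Schematic Generative-Agents-style memory retrieval (decay rate, weights, and the
# embedding representation are assumptions; see Park et al. [181, 201] for the original).
import math
from dataclasses import dataclass
from typing import List

@dataclass
class Memory:
    text: str
    created_at: float        # timestamp, in hours since the simulation started
    importance: float        # e.g. an LLM-assigned score in [0, 10]
    embedding: List[float]   # vector representation of the text

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def retrieval_score(m: Memory, query_emb: List[float], now: float, decay: float = 0.995) -> float:
    recency = decay ** (now - m.created_at)      # exponential decay per hour
    importance = m.importance / 10.0             # normalised to [0, 1]
    relevance = cosine(m.embedding, query_emb)
    return recency + importance + relevance      # equal weights for simplicity

def retrieve(memories: List[Memory], query_emb: List[float], now: float, k: int = 3) -> List[Memory]:
    return sorted(memories, key=lambda m: retrieval_score(m, query_emb, now), reverse=True)[:k]
```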
+ }, + { + "type": "list", + "bbox": [ + 0.116, + 0.846, + 0.438, + 0.871 + ], + "angle": 0, + "content": null + }, + { + "type": "page_footnote", + "bbox": [ + 0.516, + 0.846, + 0.894, + 0.871 + ], + "angle": 0, + "content": "22Generative Agents: https://youmingyeh.github.io/cs-book/papers/generative-agents." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.879, + 0.508, + 0.89 + ], + "angle": 0, + "content": "25" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.103, + 0.133, + 0.482, + 0.204 + ], + "angle": 0, + "content": "such as exaggerated responses and overly formal communication, were identified as areas for improvement. Generative Agents push the boundaries of human behavior simulation, offering a robust framework for creating autonomous, memory-driven virtual agents." + }, + { + "type": "image", + "bbox": [ + 0.113, + 0.216, + 0.476, + 0.354 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.367, + 0.483, + 0.437 + ], + "angle": 0, + "content": "Figure 17: Generative agents serve as realistic simulations of human behavior, designed for interactive applications. In a sandbox environment inspired by The Sims, twenty-five agents engage in activities such as planning their routines, sharing updates, building relationships, and collaborating on group events, while allowing users to observe and interact with them. [181, 201]." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.448, + 0.483, + 0.717 + ], + "angle": 0, + "content": "SocialAI school\\(^{23}\\) Kovac et al. [202, 203] introduces The SocialAI School, a novel framework designed to explore and develop socio-cognitive abilities in artificial agents. The study emphasizes the importance of sociocognitive skills as foundational to human intelligence and cultural evolution. As shown in Figure 18, the SocialAI School provides a customizable suite of procedurally generated environments that enable systematic research into the socio-cognitive abilities required for artificial agents to interact with and contribute to complex cultures. Built on MiniGrid, it provides procedural environments for RL and LLMs-based agents to study social skills like joint attention, imitation, and scaffolding. Open-source and versatile, it enables diverse research, including generalizing social inferences, role reversal studies, and scaffolded learning. The SocialAI School represents a significant step toward enriching AI systems with socio-cognitive abilities inspired by human development." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.718, + 0.482, + 0.805 + ], + "angle": 0, + "content": "Welfare Diplomacy[24] [204] is an innovative variant of the zero-sum game Diplomacy, designed to evaluate the cooperative capabilities of multi-agent systems. Unlike the original game, which focuses on a single winner, Welfare Diplomacy introduces a general-sum framework where players balance military con" + }, + { + "type": "image", + "bbox": [ + 0.527, + 0.132, + 0.881, + 0.301 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.515, + 0.315, + 0.893, + 0.339 + ], + "angle": 0, + "content": "Figure 18: A clear workflow of an agent acting in the SocialAI school environment [202, 203]." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.388, + 0.895, + 0.686 + ], + "angle": 0, + "content": "quest with investments in domestic welfare. 
Players accumulate Welfare Points (WPs) throughout the game by prioritizing welfare over military expansion, and their total utility at the end of the game is determined by these points, removing the notion of a single \"winner\". Welfare Diplomacy enables clearer assessments of cooperation and provides stronger incentives for training cooperative AI agents. Players take on the roles of European powers, negotiating, forming alliances, and strategizing to compete for key supply centers. Orders are submitted and resolved simultaneously, with the goal of controlling a specified number of SCs to achieve victory, emphasizing a balance between cooperation and betrayal. Based on these rules, Welfare Diplomacy implements themselves via an open-source platform, and develops zero-shot baseline agents using advanced language models like GPT-4 [141, 140]. Experiments reveal that while these agents achieve high social welfare through mutual demilitarization, they remain vulnerable to exploitation, highlighting room for future improvement." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.701, + 0.895, + 0.87 + ], + "angle": 0, + "content": "In summary, these cutting-edge LLMs-powered simulation environments—ranging from task-specific platforms like TDW-MAT [148, 147] and CuisineWorld [149] to socially-driven frameworks such as Generative Agents [201] and the SocialAI School [203]—highlight the transformative potential of integrating advanced AI reasoning and multi-agent systems. By fostering research on collaboration, social cognition, and cooperative decision-making, these tools not only advance our understanding of AI's capabilities but also pave the way for practical applications in dynamic, real-world scenarios." + }, + { + "type": "page_footnote", + "bbox": [ + 0.103, + 0.823, + 0.483, + 0.873 + ], + "angle": 0, + "content": "23SocialAI school project: https://sites.google.com/view/socialai-school. 24Welfare Diplomacy: https://github.com/mukobi/welfare-diplomacy." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.88, + 0.509, + 0.891 + ], + "angle": 0, + "content": "26" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.127, + 0.13, + 0.46, + 0.306 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.104, + 0.32, + 0.481, + 0.345 + ], + "angle": 0, + "content": "Figure 19: The Balkans in the Diplomacy map in Welfare Diplomacy [204]." + }, + { + "type": "title", + "bbox": [ + 0.104, + 0.367, + 0.481, + 0.398 + ], + "angle": 0, + "content": "4. Practice Applications of Multi-Agent Decision-Making" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.409, + 0.481, + 0.621 + ], + "angle": 0, + "content": "Multi-agent cooperative decision-making has a wide range of practical applications across various domains. In this section, we delve into the practical applications of multi-agent decision-making, focusing on how advanced methods, particularly multi-agent MARL, are employed to address complex challenges in dynamic and evolving environments. We explore the contributions of advanced multi-agent systems across domains such as agriculture, disaster rescue, military simulations, traffic management, autonomous driving, and multi-robot collaboration. A broad array of applications applications is presented in Table 3. In the following, we will provide a detailed introduction to these applications, highlighting their impact and potential for future advancements." 
+ }, + { + "type": "title", + "bbox": [ + 0.104, + 0.638, + 0.394, + 0.653 + ], + "angle": 0, + "content": "4.1. MARL-based Intelligent Applications" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.658, + 0.481, + 0.756 + ], + "angle": 0, + "content": "Recently, a variety of MARL methods have been developed, fostering efficient collaboration, strategic learning, and adaptive problem-solving in multi-agent systems [83, 24, 25, 28]. Below, we highlight notable contributions that demonstrate the application of MARL in enhancing multi-agent collaboration and performance" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.758, + 0.483, + 0.871 + ], + "angle": 0, + "content": "In smart precious agriculture and continuous pest disease detection, Seewald et al. [11] addressed the challenge of continuous exploration for multi-agent systems with battery constraints by integrating ergodic search methods with energy-aware coverage. In disaster rescue, Qazzaz et al. [12] proposed a novel technique using a reinforcement learning multi Q-learning algorithm to optimize UAV connectivity operations in" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.133, + 0.892, + 0.217 + ], + "angle": 0, + "content": "challenging terrain. Samad et al. [23] presents a cloud-based multi-agent framework for efficiently managing aerial robots in disaster response scenarios, aiming to optimize rescue efforts by autonomously processing real-time sensory data to locate and assist injured individuals." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.218, + 0.893, + 0.431 + ], + "angle": 0, + "content": "In military confrontations, Qi et al. [90] designed a distributed MARL framework based on the actor-work-learner architecture, addressing the issues of slow sample collection and low training efficiency in MARL within the MaCA [89] and SMAC 3D real-time gaming [134, 135] military simulation environments. Benke et al. [208] proposed a computational model for agent decision-making that incorporates strategic deception, enhancing the representation of deceptive behaviors in multi-agent simulations for military operations research. Sutagundar et al. [209] proposed a Context Aware Agent based Military Sensor Network (CAMSN) to enhance multi-sensor image fusion, using node and sink-driven contexts, forming an improved infrastructure for multi-sensor image fusion." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.431, + 0.893, + 0.501 + ], + "angle": 0, + "content": "In efficient limited-bandwidth communication field, Wang et al. [242] proposed a method called IMAC (Informative Multi-Agent Communication) to address the problem of limited-bandwidth communication in MARL." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.502, + 0.892, + 0.601 + ], + "angle": 0, + "content": "In the research of UAV swarm communications against jamming, Lv et al. [93] proposed a MARL-based scheme to optimize relay selection and power allocation. This strategy leverages network topology, channel states, and shared experiences to improve policy exploration and stability, ultimately enhancing anti-jamming performance." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.601, + 0.893, + 0.814 + ], + "angle": 0, + "content": "In UAV pursuit-evasion [20, 21, 22], Kouzeghar [88] proposed a decentralized heterogeneous UAV swarm approach for multi-target pursuit using MARL technique and introduced a variant of the MADDPG [132] to address pursuit-evasion scenarios in non-stationary environments with random obstacles. Alexopoulos et al. 
[212] addressed the challenge of pursuit-evasion games involving two pursuing and one evading unmanned aerial vehicle (UAV) by introducing a hierarchical decomposition of the game. Luo et al. [213] proposed a cooperative maneuver decision-making method for multi-UAV pursuit-evasion scenarios using an improved MARL approach, which incorporates an enhanced CommNet network with a communication mechanism to address multi-agent coordination." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.814, + 0.893, + 0.871 + ], + "angle": 0, + "content": "In large-scale traffic signal/flow control, Wang [214] proposed a curiosity-inspired algorithm to optimize safe and smooth traffic flow in autonomous vehicle on-ramp merging; Chu et al. [92] proposed a fully scalable" + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.88, + 0.508, + 0.89 + ], + "angle": 0, + "content": "27" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.291, + 0.141, + 0.706, + 0.152 + ], + "angle": 0, + "content": "Table 3: Categorized Applications of MARL and LLMsin Diverse Domains." + }, + { + "type": "table", + "bbox": [ + 0.136, + 0.153, + 0.861, + 0.599 + ], + "angle": 0, + "content": "
Category | Application Area | Works / References
MARL-based MAS | Smart Precision Agriculture & Disaster Rescue | Seewald et al. [11], Qazzaz et al. [12], Samad et al. [23], Boubin et al. [205], Li et al. [206], Mahajan et al. [207]
Military Confrontations | Qi et al. [90], Benke et al. [208], Sutagundar et al. [209], Vangaru et al. [210], Wang et al. [211], MaCA [89], SMAC [134], SMAC-v2 [135]
UAV Pursuit-Evasion & Swarm Communications & Navigation | Kouzeghar [88], Alexopoulos et al. [212], Luo et al. [213], Lv et al. [93], Xue et al. [20], Rezwan et al. [21], Baroomi et al. [22]
Traffic Signal/Flow Control | Wang [214], Chu et al. [92], Aboueleneen et al. [215], Yu et al. [216], Sun et al. [217], Azfar et al. [218], Bokade et al. [219], Kwesiga et al. [220], Zhang et al. [221]
Autonomous Driving | Xue et al. [222], Liu et al. [19], Wen et al. [18], Jayawardana et al. [17], Liu et al. [223], Formanek et al. [224], Zhang et al. [225], Kotoku et al. [226], Hua et al. [227]
Multi-Robot Collaboration | Georgios et al. [13], Silva et al. [14], Huang et al. [15], Cena et al. (SMART) [16], Kevin (SCRIMMAGE) [228], Liu et al. [95]
LLMs-based MAS | Multi-Agent Collaboration | Wu et al. (AutoGen) [103], Xiao et al. (CoE) [101], Chen et al. (AgentVerse) [105], Liu et al. (DyLAN) [106], Zhang et al. (CoELA) [96]
Gaming Interaction | Xu et al. (LLM-Werewolf) [58], Gong et al. (MindAgent) [149], Xie et al. [229], Lin et al. [230], Jia et al. (GameFi) [231], Yin et al. (MIRAGE) [232], Zhang et al. (DVM) [233], Bonorino et al. [234]
Autonomous Driving | Zheng et al. (PlanAgent) [113], Luo et al. (SenseRAG) [235], Mahmud et al. [236], Peng et al. (LearningFlow) [237], Karagounis et al. [238], Luo et al. [239], Gao et al. [240], Hegde et al. [241]
Multi-Modal Application | Wang et al. (LangGraph) [111, 112], Zhang et al. (CrewAI) [114, 115], Zheng et al. (PlanAgent) [113], Wang et al. (MLLM-Tool) [102]
" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.623, + 0.482, + 0.665 + ], + "angle": 0, + "content": "and decentralized multi-agent deep reinforcement learning algorithm based on the advantage actor-critic (A2C) method." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.673, + 0.483, + 0.871 + ], + "angle": 0, + "content": "In autonomous driving area, a large number of superior multi-agent decision-making algorithms and models are continuously being explored and devised. Xue et al. [222] developed a two-stage system framework for improving Multi-Agent Autonomous Driving Systems (MADS) by enabling agents to recognize and understand the Social Value Orientations (SVOs) of other agents. Liu et al. [19] proposed the Personality Modeling Network (PeMN), which includes a cooperation value function and personality parameters to model diverse interactions in highly interactive scenarios, addressing the issue of diverse driving styles in autonomous driving. Wen et al. [18] proposed a safe reinforcement learning algorithm called Parallel" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.623, + 0.892, + 0.737 + ], + "angle": 0, + "content": "Constrained Policy Optimization (PCPO) based on the actor-critic architecture to address the issues of unexplainable behaviors and lack of safety guarantees in autonomous driving. Jayawardana et al. [17] proposed enabling socially compatible driving by leveraging human driving data to learn a social preference model, integrating it with reinforcement learning-based AV policy synthesis using Social Value Orientation theory." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.743, + 0.893, + 0.871 + ], + "angle": 0, + "content": "In multiple robots collaborative fields, Georgios et al. [13] introduces a novel cognitive architecture for large-scale multi-agent Learning from Demonstration (LfD), leveraging Federated Learning (FL) to enable scalable, collaborative, and AI-driven robotic systems in complex environments. Silva et al. [14] address the challenges and limitations in evaluating intelligent collaborative robots for Industry 4.0. The review emphasizes the urgent need for improved evaluation methods and" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.88, + 0.509, + 0.891 + ], + "angle": 0, + "content": "28" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.102, + 0.132, + 0.482, + 0.401 + ], + "angle": 0, + "content": "standards to account for the complexities posed by human variability, AI integration, and advanced control systems in industrial environments. Huang et al. [15] presents a multi-agent reinforcement learning approach using the MADDPG algorithm, enhanced with an experience sample optimizer, to train swarm robots for autonomous, collaborative exploration on Mars. This approach outperforms traditional DRL algorithms in efficiency as the number of robots and targets increases. The SMART multi-agent robotic system [16] is a comprehensive and advanced platform designed for executing coordinated robotic tasks. It integrates both hardware components, such as robots and IP-C Cameras, and software agents responsible for image processing, path planning, communication, and decision-making. By utilizing Work-Flow Petri Nets for modeling and control, the system effectively ensures coordination and successful task execution even in unstructured environments." 
+ }, + { + "type": "text", + "bbox": [ + 0.102, + 0.402, + 0.482, + 0.715 + ], + "angle": 0, + "content": "Furthermore, the well-known project, Simulating Collaborative Robots in a Massive Multi-agent Game Environment (SCRIMMAGE)[25] [228], tackles the high costs of field testing robotic systems by offering a flexible and efficient simulation environment specifically designed for mobile robotics research. Unlike many existing simulators that are primarily designed for ground-based systems with high-fidelity multi-body physics models, SCRIMMAGE focuses on simulating large numbers of aerial vehicles, where precise collision detection and complex physics are often unnecessary. SCRIMMAGE is designed to be highly adaptable, with a plugin-based architecture that supports various levels of sensor fidelity, motion models, and network configurations. This flexibility allows the simulation of hundreds of aircraft with low-fidelity models or a smaller number with high-fidelity models on standard consumer hardware. Overall, SCRIMMAGE[26] provides a robust and scalable solution for testing and refining robotic algorithms in a controlled virtual environment, significantly reducing the risks and costs associated with physical testing." + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.715, + 0.482, + 0.814 + ], + "angle": 0, + "content": "Liu et al. [95] proposed the Learning before Interaction (LBI) framework, a novel approach designed to enhance multi-agent decision-making through generative world models. Traditional generative models struggle with trial-and-error reasoning, often failing to produce reliable solutions for complex multi-agent tasks. To address this limitation, LBI integrates a language-" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.133, + 0.895, + 0.388 + ], + "angle": 0, + "content": "guided simulator into the MARL pipeline, enabling agents to learn grounded reasoning through simulated experiences. LBI consists of a world model composed of a dynamics model and a reward model. The dynamics model incorporates a vector quantized variational autoencoder (VQ-VAE) [243] for discrete image representation and a causal transformer to autoregressively generate interaction transitions. Meanwhile, the reward model employs a bidirectional transformer trained on expert demonstrations to provide task-specific rewards based on natural language descriptions. LBI further distinguishes itself by generating explainable interaction sequences and reward functions, providing interpretable solutions for multi-agent decision-making problems. By addressing challenges such as the compositional complexity of MARL environments and the scarcity of paired text-image datasets, LBI represents a significant advancement in the field." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.388, + 0.895, + 0.686 + ], + "angle": 0, + "content": "Ye et al. [244] proposed an adaptive genetic algorithm (AGA) that dynamically adjusts crossover and mutation populations, leveraging the Dubins car model and state-transition strategies to optimize the allocation of heterogeneous UAVs in suppression of enemy air defense missions. Radac et al. combine two modelfree controller tuning techniques linear virtual reference feedback tuning (VRFT) and nonlinear state-feedback Q-learning as a novel mixed VRFT-Q learning method [94]. VRFT is initially employed to determine a stabilizing feedback controller using input-output experimental data within a model reference tracking framework. 
Subsequently, reinforcement Q-learning is applied in the same framework, utilizing input-state experimental data gathered under perturbed VRFT to ensure effective exploration. Extensive simulations on position control of a two-degrees-of-motion open-loop stable multi input-multi output (MIMO) aerodynamic system (AS) demonstrates the mixed VRFT-Q's significant performance improvement over the Q-learning controllers and the VRFT controllers." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.687, + 0.896, + 0.872 + ], + "angle": 0, + "content": "To address the lack of a general metric for quantifying policy differences in MARL problems, Hu et al. [84] proposed the Multi-Agent Policy Distance (MAPD), a tool designed to measure policy differences among agents. Additionally, they developed a Multi-Agent Dynamic Parameter Sharing (MADPS) algorithm based on MAPD, demonstrating its effectiveness in enhancing policy diversity and overall performance through extensive experiments. To addresses the challenge of cooperative MARL in scenarios with dynamic team compositions, Wang et al. [245] propose using mutual information as an augmented reward to encourage robustness in agent policies across different team configurations." + }, + { + "type": "page_footnote", + "bbox": [ + 0.104, + 0.835, + 0.482, + 0.871 + ], + "angle": 0, + "content": "25 SCRIMMAGE Web: http://www.scrimmagesim.org/. 26 SCRIMMAGE project: https://github.com/gtri/ scrimmage." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.879, + 0.509, + 0.89 + ], + "angle": 0, + "content": "29" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.107, + 0.129, + 0.481, + 0.613 + ], + "angle": 0, + "content": "They develop a multi-agent policy iteration algorithm with a fixed marginal distribution and demonstrate its strong zero-shot generalization to dynamic team compositions in complex cooperative tasks. Progressive Mutual Information Collaboration (PMIC)\\(^{27}\\) is a novel framework that leverages mutual information (MI) to guide collaboration among agents, thereby enhancing performance in multi-agent cooperative tasks [246]. The key innovation of is its dual MI objectives: maximizing MI associated with superior collaborative behaviors and minimizing MI linked to inferior ones, ensuring more effective learning and avoiding sub-optimal collaborations. Wai et al. [83] proposes a novel double averaging primal-dual optimization algorithm for MARL, specifically targeting decentralized applications like sensor networks and swarm robotics. The MARL algorithm enables agents to collaboratively evaluate policies by incorporating neighboring gradient and local reward information, achieving fast finite-time convergence to the optimal solution in decentralized convex-concave saddle-point problems. To address the challenge of sparse rewards in MARL, Kang et al. [247] introduce the Dual Preferences-based Multi-Agent Reinforcement Learning (DPM) framework. DPM extends preference-based reinforcement learning (PbRL) by incorporating dual preference types-comparing both trajectories and individual agent contributions-thereby optimizing individual reward functions more effectively. DPM also leverages LLMs to gather preferences, mitigating issues associated with human-based preference collection. Experimental results in the StarCraft Multi-Agent Challenge (SMAC) [135] demonstrate that DPM significantly outperforms existing baselines, particularly in sparse reward settings." 
+ }, + { + "type": "text", + "bbox": [ + 0.107, + 0.617, + 0.481, + 0.829 + ], + "angle": 0, + "content": "Traditional methods like soft attention struggle with scalability and efficiency in LMAS due to the overwhelming number of agent interactions and large observation spaces. To address these challenges of largescale multi-agent systems (LMAS) involving hundreds of agents, University of Chinese Academy of Sciences [248] introduces the Concentration Network (ConcNet), a novel reinforcement learning framework. ConcNet mimics human cognitive processes of concentration by prioritizing and aggregating observations based on motivational indices, such as expected survival time and state value. It allows the system to focus on the most relevant entities, enhancing decision-making efficiency in complex environments. In ConcNet, a novel concentration policy gradient architecture was further proposed," + }, + { + "type": "text", + "bbox": [ + 0.519, + 0.133, + 0.892, + 0.217 + ], + "angle": 0, + "content": "demonstrating its superior performance and scalability in large-scale multi-agent scenarios, such as decentralized collective assault simulations. This research represents a significant advancement in the field, providing a scalable solution for effective decision-making in large-scale multi-agent environments." + }, + { + "type": "text", + "bbox": [ + 0.519, + 0.219, + 0.892, + 0.402 + ], + "angle": 0, + "content": "In conclusion, MARL-based intelligent applications have shown exceptional adaptability across diverse domains such as autonomous driving, UAV systems, disaster response, and collaborative robotics [242, 88, 20, 19, 13, 16]. Key innovations, including communication-enhanced learning [168, 161, 174, 139], adaptive policy optimization, and mutual information [246] frameworks, have significantly advanced the field. While challenges like sparse rewards and scalability remain, these advancements highlight MARL's potential to address dynamic and complex multi-agent environments effectively, paving the way for further impactful developments." + }, + { + "type": "title", + "bbox": [ + 0.519, + 0.422, + 0.869, + 0.436 + ], + "angle": 0, + "content": "4.2. LLMs reasoning-based Intelligent Applications" + }, + { + "type": "text", + "bbox": [ + 0.519, + 0.445, + 0.892, + 0.543 + ], + "angle": 0, + "content": "To address diverse and complex challenges, a variety of frameworks leveraging LLMs have been developed, enabling advanced reasoning, collaboration, and task execution in multi-agent systems [116, 30, 57]. Below, we highlight notable contributions that demonstrate the application of LLMs in enhancing multi-agent decision-making and coordination." + }, + { + "type": "text", + "bbox": [ + 0.519, + 0.545, + 0.892, + 0.87 + ], + "angle": 0, + "content": "Wu et al. [103] introduced AutoGen, an open-source framework designed to enable the development of next-generation LLM applications through multi-agent conversations. AutoGen allows for customizable agent interactions and the integration of LLMs, human inputs, and tools to collaboratively solve complex tasks. Xiao et al. [101] proposed Chain-of-Experts (CoE), a novel multi-agent framework designed to enhance reasoning in complex operations research (OR) problems using LLMs. Chen et al. [105] presented AgentVerse, a multiagent framework designed to facilitate collaboration among autonomous agents, inspired by human group dynamics. 
AgentVerse dynamically adjusts the composition and roles of agents throughout the problem-solving process, enhancing their ability to tackle complex tasks across various domains, including text understanding, reasoning, coding, and embodied AI. The framework consists of four stages: Expert Recruitment, Collaborative Decision-Making, Action Execution, and Evaluation. Chen et al. [107] introduced AutoAgents, a framework capable of adaptively generating and coordinating multiple specialized agents based on different tasks, thereby constructing efficient multi-agent teams" + }, + { + "type": "page_footnote", + "bbox": [ + 0.12, + 0.859, + 0.432, + 0.87 + ], + "angle": 0, + "content": "27PMIC code: https://github.com/yeshenpy/PMIC." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.88, + 0.508, + 0.89 + ], + "angle": 0, + "content": "30" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.103, + 0.13, + 0.482, + 0.801 + ], + "angle": 0, + "content": "to accomplish complex tasks. Liu et al. [106] proposed the Dynamic LLM-Agent Network (DyLAN), a framework designed to enhance LLM-agent collaboration by enabling agents to interact dynamically based on task requirements, rather than within a static architecture. Xu et al. [58] proposed a novel multi-agent framework that combines LLMs with reinforcement learning to enhance strategic decision-making and communication in the Werewolf game\\(^{28}\\), effectively overcoming intrinsic biases and achieving human-level performance. Wen et al. [104] introduce the Multi-Agent Transformer (MAT), a novel architecture that frames cooperative MARL as a sequence modeling problem. Experiments on StarCraftII [154, 91, 134], Multi-Agent MuJoCo (MAMuJoCo) [249], Dexterous Hands Manipulation [185, 186], and Google Research Football [55] benchmarks demonstrate that it achieves superior performance and data efficiency by leveraging modern sequence models in an on-policy learning framework. Wang et al. [102] introduced MLLM-Tool\\(^{29}\\), a multimodal tool agent system that integrates open-source LLMs with multimodal encoders, enabling it to process visual and auditory inputs for selecting appropriate tools based on ambiguous multimodal instructions. Moreover, they introduced ToolMMBench, a novel benchmark with multi-modal inputs and multioption solutions, demonstrating its effectiveness in addressing real-world multimodal multi-agent scenarios. Zhang et al. [96] introduce CoELA, a Cooperative Embodied Language Agent framework that leverages LLMs to enhance multi-agent cooperation in complex, decentralized environments. CoELA integrates LLMs with cognitive-inspired modules for perception, memory and execution, allowing agents to plan, communicate, and collaborate effectively on long-horizon tasks, outperforming traditional planning-based methods such as Multi-Agent Transformer(MAT) [104], and showing promising results in human-agent interaction simulation environments, Communicative Watch-And-Help (C-WAH) [99] and ThreeDWorld Multi-Agent Transport (TDW-MAT) [147, 148]. Gong et al. [149] from Team of Li.FeiFei. introduce MindAgent, a novel infrastructure for evaluating planning and coordination capabilities in gaming interactions, leveraging large foundation models (LFMs) to coordinate multi-agent system (MAS), collaborate with human players, and enable in-context learning. 
Their team also present" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.132, + 0.895, + 0.417 + ], + "angle": 0, + "content": "\"Cuisineworld\", a new gaming scenario and benchmark for assessing multi-agent collaboration efficiency. Despite LLMs' success in various collaborative tasks, they struggle with spatial and decentralized decision-making required for flocking. Li et al. [57] explored the challenges faced by LLMs in solving multi-agent flocking tasks, where agents strive to stay close, avoid collisions, and maintain a formation. Sun et al. [180] proposed Corex, a novel framework that enhances complex reasoning by leveraging multi-model collaboration. Inspired by human cognitive processes, Corex employs three collaborative paradigms-Discuss, Review, and Retrieve-where different LLMs act as autonomous agents to collectively solve complex tasks. Corex empowers LLM agents to \"think outside the box\" by facilitating collaborative group discussions, effectively mitigating the cognitive biases inherent in individual LLMs. This approach not only enhances performance but also improves cost-effectiveness and annotation efficiency, offering a significant advantage in complex tasks." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.417, + 0.895, + 0.459 + ], + "angle": 0, + "content": "Next, we will provide a detailed introduction to some outstanding achievements in the application of LLMs for multi-agent collaborative task execution." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.459, + 0.896, + 0.785 + ], + "angle": 0, + "content": "MetaGPT: Existing LLMs-based multi-agent systems often struggle with complex tasks due to logical inconsistencies and cumulative hallucinations, leading to biased results. Hong et al. [108, 109] from DeepWisdom\\(^{31}\\) proposed MetaGPT\\(^{32}\\), an innovative metaprogramming framework designed to enhance the collaboration capabilities of LLMs-based multi-agent systems. MetaGPT integrates Standard Operating Procedures (SOPs) commonly used in human workflows, thereby constructing a more efficient and coherent multi-agent collaboration system. MetaGPT employs an assembly-line approach, breaking down complex tasks into multiple subtasks and assigning them to agents with specific domain expertise. These agents collaborate during task execution through clearly defined roles and structured communication interfaces, reducing the risk of information distortion and misunderstanding. In summary, MetaGPT offers a flexible and powerful platform for developing LLMs-based multi-agent systems. Its unique meta-programming framework and rigorous workflow design enable it to excel in handling complex tasks, greatly advancing the field of multi-agent collaboration research." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.786, + 0.894, + 0.801 + ], + "angle": 0, + "content": "CoAct: Hou et al. [97] proposed CoAct33, a multi-" + }, + { + "type": "page_footnote", + "bbox": [ + 0.103, + 0.823, + 0.482, + 0.846 + ], + "angle": 0, + "content": "28 Werewolf game: https://sites.google.com/view/strategic-language-agents/." + }, + { + "type": "page_footnote", + "bbox": [ + 0.103, + 0.847, + 0.482, + 0.87 + ], + "angle": 0, + "content": "29MLLM-Tool: https://github.com/MLLM-Tool/MLLM-Tool." + }, + { + "type": "list", + "bbox": [ + 0.103, + 0.823, + 0.482, + 0.87 + ], + "angle": 0, + "content": null + }, + { + "type": "page_footnote", + "bbox": [ + 0.529, + 0.823, + 0.828, + 0.836 + ], + "angle": 0, + "content": "30Cuisineworld: https://mindagent.github.io/." 
+ }, + { + "type": "page_footnote", + "bbox": [ + 0.53, + 0.836, + 0.815, + 0.847 + ], + "angle": 0, + "content": "31DeepWisdom: https://www deepwisdom.ai/." + }, + { + "type": "page_footnote", + "bbox": [ + 0.53, + 0.847, + 0.842, + 0.858 + ], + "angle": 0, + "content": "32MetaGPT: https://github.com/geekan/MetaGPT." + }, + { + "type": "page_footnote", + "bbox": [ + 0.53, + 0.858, + 0.805, + 0.87 + ], + "angle": 0, + "content": "33CoAct: https://github.com/dxhou/CoAct." + }, + { + "type": "list", + "bbox": [ + 0.529, + 0.823, + 0.842, + 0.87 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.879, + 0.508, + 0.891 + ], + "angle": 0, + "content": "31" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.102, + 0.132, + 0.482, + 0.302 + ], + "angle": 0, + "content": "agent system based on LLMs designed for hierarchical collaboration tasks. The framework consists of six stages: task decomposition, subtask assignment and communication, subtask analysis and execution, feedback collection, progress evaluation, and replanning when necessary. The global planning agent plays a critical role in managing complex tasks. The local execution agent is responsible for executing specific subtasks. This hierarchical framework demonstrates strong adaptability and performance, particularly in complex real-world tasks requiring dynamic replanning and collaborative execution." + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.303, + 0.482, + 0.572 + ], + "angle": 0, + "content": "AutoGen: Microsoft [103, 146] introduced AutoGen\\(^{34}\\), a flexible framework for creating and managing multiple autonomous agents to collaboratively complete complex tasks, particularly in programming, planning, and creative writing domains. AutoGen allows users to define distinct agent roles, including specialists, general assistants, and decision-makers, ensuring clear task division and effective coordination. Agents interact in a structured conversational environment, exchanging messages to resolve tasks iteratively. AutoGen introduces feedback loops where agents analyze outputs, refine strategies, and optimize task completion autonomously. Notably, it supports integration with various LLMs, offering developers the flexibility to replace APIs without altering code significantly. In summary, AutoGen facilitates scalable, efficient, and robust multi-agent collaboration, demonstrating potential for applications ranging from enhanced ChatGPT systems to real-world industrial workflows." + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.572, + 0.483, + 0.816 + ], + "angle": 0, + "content": "XAgent: XAgent Team\\(^{35}\\) [110] developed XAgent\\(^{36}\\) is an open-source, LLMs-driven autonomous agent framework designed for solving complex tasks automatically and efficiently. As shown in Figure 20, it employs a dual-loop architecture: the outer loop for high-level task planning and coordination, and the inner loop for executing subtasks. The PlanAgent in the outer loop generates an initial plan by breaking a complex task into manageable subtasks, organizing them into a task queue. It iteratively monitors progress, optimizes plans based on feedback from the inner loop, and continues until all subtasks are completed. The inner loop utilizes ToolAgents, which employ various tools like file editors, Python notebooks, web browsers, and shell interfaces within a secure docker environment to execute subtasks. 
XAgent emphasizes autonomy, safety, and extensibility, allowing users to add new agents or tools" + }, + { + "type": "image", + "bbox": [ + 0.523, + 0.13, + 0.888, + 0.319 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.514, + 0.332, + 0.894, + 0.38 + ], + "angle": 0, + "content": "Figure 20: An structure overview of the XAgents framework, highlighting the Task Node as the starting point, the sequence of Sub-Task Nodes forming the Task Execution Graph (TEG), and the Fusion Node integrating outputs for the final result [110]." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.402, + 0.893, + 0.444 + ], + "angle": 0, + "content": "to enhance functionality. Its GUI facilitates user interaction while supporting human collaboration, enabling real-time guidance and assistance for challenging tasks." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.444, + 0.893, + 0.655 + ], + "angle": 0, + "content": "PlanAgent: PlanAgent\\(^{37}\\), developed by Chinese Academy of Sciences and Li Auto [113], introduces a closed-loop motion planning framework for autonomous driving by leveraging multi-modal large language models (MLLMs). The system utilizes MLLM's multi-modal reasoning and commonsense understanding capabilities to generate hierarchical driving commands based on scene information. PlanAgent addresses key limitations of traditional rule-based and learning-based methods, including overfitting in long-tail scenarios and inefficiencies in scene representation. Its novel integration of MLLM-driven reasoning into autonomous driving planning establishes a new benchmark for safety and robustness in real-world applications." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.657, + 0.894, + 0.813 + ], + "angle": 0, + "content": "LangGraph: LangChain Inc\\(^{38}\\) [111, 112] introduced LangGraph\\(^{39}\\), a library designed for building stateful, multi-actor applications with LLMs, enabling the creation of complex agent and multi-agent workflows. Inspired by frameworks like Pregel and Apache Beam, LangGraph provides fine-grained control over workflows and state management while offering advanced features like persistence and human-in-the-loop capabilities. LangGraph stands out for its support of iterative workflows with cycles and branching, which differentiates it from DAG-based frameworks. Each" + }, + { + "type": "page_footnote", + "bbox": [ + 0.116, + 0.834, + 0.449, + 0.847 + ], + "angle": 0, + "content": "34 AutoGen: https://github.com/microsoft/autogen." + }, + { + "type": "page_footnote", + "bbox": [ + 0.117, + 0.847, + 0.399, + 0.859 + ], + "angle": 0, + "content": "35XAgent Team: https://blog.x-agent.net/." + }, + { + "type": "page_footnote", + "bbox": [ + 0.117, + 0.859, + 0.422, + 0.871 + ], + "angle": 0, + "content": "36XAgent: https://github.com/OpenBMB/XAgent." + }, + { + "type": "list", + "bbox": [ + 0.116, + 0.834, + 0.449, + 0.871 + ], + "angle": 0, + "content": null + }, + { + "type": "page_footnote", + "bbox": [ + 0.529, + 0.834, + 0.84, + 0.847 + ], + "angle": 0, + "content": "37PlanAgent: http://www.chinasem.cn/planagent." + }, + { + "type": "page_footnote", + "bbox": [ + 0.531, + 0.847, + 0.871, + 0.859 + ], + "angle": 0, + "content": "38LangChain Inc: https://langchain.ac.cn/langgraph." 
+ }, + { + "type": "page_footnote", + "bbox": [ + 0.531, + 0.859, + 0.863, + 0.871 + ], + "angle": 0, + "content": "39LangGraph: https://www.langchain.com/langgraph" + }, + { + "type": "list", + "bbox": [ + 0.529, + 0.834, + 0.871, + 0.871 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.879, + 0.509, + 0.891 + ], + "angle": 0, + "content": "32" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.102, + 0.133, + 0.482, + 0.36 + ], + "angle": 0, + "content": "graph execution generates a state, dynamically updated by node outputs, enabling reliable and adaptive agent behavior. Its built-in persistence allows workflows to pause and resume at any point, facilitating error recovery and advanced human-agent interactions, including \"time travel\" to modify past actions. LangGraph integrates seamlessly with LangChain [250, 179] but functions independently, offering flexibility for various applications, from dialogue agents and recommendation systems to natural language processing and game development. With streaming support, it processes outputs in real-time, making it suitable for tasks requiring immediate feedback. Its low-level architecture and customizable workflows make LangGraph a powerful tool for creating robust, scalable, and interactive LLMs-based systems." + }, + { + "type": "image", + "bbox": [ + 0.113, + 0.373, + 0.475, + 0.53 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.545, + 0.483, + 0.592 + ], + "angle": 0, + "content": "Figure 21: A LangGraph workflow representation demonstrating conditional branching and iterative loops for document retrieval, grading, query transformation, and web search before generating a final output [111, 112]." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.604, + 0.483, + 0.833 + ], + "angle": 0, + "content": "CrewAI: \\(\\mathrm{CrewAI^{40}}\\) [114, 115] is an open-source framework designed to coordinate AI agents that specialize in role-playing and autonomous operations, enabling efficient collaboration to achieve complex goals. The framework's modular design allows users to create AI teams that operate like real-world teams, with agents assigned specific roles and tasks to ensure clear division of labor and shared objectives. As seen from 22, this framework operates in three primary stages: Agent Creation, where developers define roles with specific goals and tools; Task Management, enabling flexible task assignment and multi-view knowledge enhancement; and Execution and Collaboration, where agents interact in workflows to resolve tasks, with outputs parsed into reusable formats. CrewAI integrates seamlessly with the LangChain ecosystem, leveraging its tools and LLM" + }, + { + "type": "image", + "bbox": [ + 0.527, + 0.137, + 0.88, + 0.295 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.515, + 0.315, + 0.893, + 0.34 + ], + "angle": 0, + "content": "Figure 22: An overview of the processing workflow for the roleplaying multi-agent framework, CrewAI [114, 115]." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.362, + 0.893, + 0.56 + ], + "angle": 0, + "content": "capabilities, such as OpenAI and Google Gemini. The framework supports real-time decision-making and task adaptation, with future versions planned to include more advanced collaboration processes, such as consensus-driven workflows and autonomous decision-making. 
Its innovative features, such as role-based design, dynamic rule generation, and modular task workflows, position CrewAI as a robust and scalable framework for multi-agent collaboration across creative and industrial domains. Overall, CrewAI \\(^{41}\\) offers a cutting-edge approach to multi-agent systems by integrating role-specific autonomy, flexible workflows, and advanced AI toolsets, making it a versatile framework for collaborative AI applications." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.562, + 0.894, + 0.703 + ], + "angle": 0, + "content": "In summary, these frameworks and applications [108, 97, 103, 110] highlight the rapid advancements in leveraging LLMs for multi-agent collaboration, reasoning, and task execution. Each system introduces unique innovations—ranging from dynamic agent coordination to enhanced reasoning and human-in-the-loop workflows—demonstrating their potential to tackle complex, real-world challenges across various domains [113, 111, 114, 115]. These developments pave the way for more flexible, scalable, and efficient AI-driven solutions." + }, + { + "type": "title", + "bbox": [ + 0.515, + 0.726, + 0.892, + 0.756 + ], + "angle": 0, + "content": "5. Challenges in MARL-based and LLMs-based approaches" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.766, + 0.893, + 0.81 + ], + "angle": 0, + "content": "The extension of single-agent decision-making into multi-agent cooperative contexts introduces several challenges, including developing effective training" + }, + { + "type": "page_footnote", + "bbox": [ + 0.529, + 0.836, + 0.892, + 0.871 + ], + "angle": 0, + "content": "\\(^{41}\\)CrewAI Multi-Agent System platform: https://www.deeplearning.ai/short-courses/ulti-ai-agent-systems-with-crewai/." + }, + { + "type": "page_footnote", + "bbox": [ + 0.117, + 0.857, + 0.436, + 0.871 + ], + "angle": 0, + "content": "40CrewAI: https://github.com/crewAIInc/crewAI." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.879, + 0.508, + 0.89 + ], + "angle": 0, + "content": "33" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.102, + 0.133, + 0.482, + 0.302 + ], + "angle": 0, + "content": "schemes for multiple agents learning and adapting simultaneously, managing increased computational complexity due to the more sophisticated and stochastic environments compared to single-agent settings, and addressing the foundational role of strategic interaction among agents. Additionally, ensuring the scalability of algorithms to handle larger observation and action spaces, facilitating coordination and cooperation among agents to achieve consistent goals, and dealing with non-stationary environments where agents' behaviors and strategies continuously evolve are also inevitable and critical challenges." + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.303, + 0.482, + 0.474 + ], + "angle": 0, + "content": "Applying multi-agent decision-making techniques to real-world problems, which often involve complex and dynamic interactions, further underscores the need for sophisticated and advanced approaches to effectively adapt these ever-increasing complexities. Multi-agent cooperative decision making significantly surpasses single-agent decision-making in terms of environmental stochasticity, complexity, the difficulty of strategy optimization, and so on. As shown in Figure 23, we present a tree diagram summarizing the existing challenges in MARL-based and LLMs-based multi-agent decision-making approaches." 
+ }, + { + "type": "title", + "bbox": [ + 0.104, + 0.489, + 0.465, + 0.503 + ], + "angle": 0, + "content": "5.1. Challenges in MARL-based multi-agent systems" + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.506, + 0.482, + 0.678 + ], + "angle": 0, + "content": "The advancement of MARL remains in its formative stages, with its potential for enabling effective multiagent coordination and achieving scalability in dynamic environments yet to be fully unlocked [7, 6, 8, 25, 28]. Challenges such as environmental stochasticity, strategy learning difficulty, non-stationarity, scalability, and reward complexity have emerged as major bottlenecks. This section provides an in-depth analysis of these challenges, exploring the current state, technical limitations, and potential solutions in MARL-based multiagent systems to enable more robust, efficient, and scalable decision-making frameworks." + }, + { + "type": "text", + "bbox": [ + 0.118, + 0.687, + 0.482, + 0.871 + ], + "angle": 0, + "content": "1. Environmental Stochasticity and Complexity: [251, 25] In MARL-based decision-making systems, environmental dynamics are influenced not only by external factors but also by the behaviors and decisions of individual agents. This complex interaction results in high levels of stochasticity and complexity in the environment, making prediction and modeling significantly more difficult. For example, in autonomous driving scenarios [19, 214, 18, 222], the behavior of each vehicle affects the decisions of surrounding vehicles, thereby increasing the overall complexity of the system;" + }, + { + "type": "text", + "bbox": [ + 0.529, + 0.133, + 0.895, + 0.345 + ], + "angle": 0, + "content": "2. Strategy Learning Difficulty: [252, 86, 116] Strategy learning in MARL-based decision-making systems involves multidimensional challenges. Firstly, agents must consider the behaviors of other agents, and this interdependence increases the difficulty of strategy learning. Each agent not only has to optimize its own strategy but also predict and adapt to the strategy changes of others. Additionally, the vast joint action space of multiple agents makes it challenging for any single agent to learn effective joint strategies. The vast joint action space means that each agent needs to explore and learn within a larger decision space, which significantly increases the demands on computational resources and time;" + }, + { + "type": "text", + "bbox": [ + 0.529, + 0.346, + 0.895, + 0.6 + ], + "angle": 0, + "content": "3. Complexity of Reward Functions: [251] In MARL-based decision-making systems, reward functions become more complex [85, 150]. The rewards received from the environment in multiagent cooperative techniques are influenced not only by an individual agent's actions and the environment but also by the actions of other agents, which makes the stable policy learning and modeling process more difficult. In other words, an agent's reward depends not only on its own actions but also on the actions of other agents, making it challenging for the reinforcement learning-based multi-agent decision-making policies to converge. This intricate reward mechanism complicates the design and optimization of reward functions. Agents need to evaluate their behaviors' impact on the overall system through complex interactions to learn effective strategies;" + }, + { + "type": "text", + "bbox": [ + 0.529, + 0.601, + 0.894, + 0.757 + ], + "angle": 0, + "content": "4. 
Coordination and Cooperation: [253, 90, 254, 251] Furthermore, in MARL-based decision-making systems, agents need to coordinate and cooperate to achieve common goals. This requires establishing effective communication and coordination mechanisms among agents to ensure that their actions are globally consistent and complementary [49]. For example, in disaster rescue scenarios [57, 12], multiple drones need to coordinate their actions to cover the maximum area and utilize resources most efficiently;" + }, + { + "type": "text", + "bbox": [ + 0.529, + 0.758, + 0.895, + 0.871 + ], + "angle": 0, + "content": "5. Non-Stationary: [27, 25] The environment of MARL-based decision-making systems is nonstationary because each agent's behavior dynamically changes the state of the environment. This non-stationarity increases the difficulty of strategy learning, as agents must continually adapt to changes in the environment and the behaviors of other agents." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.88, + 0.509, + 0.89 + ], + "angle": 0, + "content": "34" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.145, + 0.128, + 0.856, + 0.368 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.19, + 0.38, + 0.807, + 0.394 + ], + "angle": 0, + "content": "Figure 23: A tree diagram of the challenges in MARL and LLMs-based multi-agent decision-making approaches." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.416, + 0.485, + 0.844 + ], + "angle": 0, + "content": "6. Scalability: [87, 252, 25] Addressing scalability in MARL demands innovative approaches to tackle the exponential growth in complexity as the number of agents increases. Techniques that leverage knowledge reuse [172, 138, 168], such as parameter sharing and transfer learning [255, 256], have proven indispensable. When agents share structural similarities, information sharing can streamline the training process, enabling systems to scale more effectively. Transfer learning, in particular, allows agents to adapt knowledge from previous tasks to new, related ones, significantly accelerating learning in dynamic environments. Moreover, curriculum learning [257, 258] plays a pivotal role in tackling the increased complexity of training multiple agents. It enables gradual learning by exposing agents to progressively more challenging tasks, thereby improving policy generalization and accelerating convergence. Robustness remains critical for scalability, as learned policies must withstand environmental disturbances. Techniques like policy ensembles and adversarial training [259, 260] enhance resilience by fostering diversity and adaptability in policies. The DTDE paradigm addresses these issues but introduces new complexities [124, 131], such as environmental instability. One promising solution is employing Independent Deep Q-networks (IDQNs) [126, 261, 127], which adapt traditional single-agent approaches to multi-agent contexts." + }, + { + "type": "text", + "bbox": [ + 0.12, + 0.857, + 0.483, + 0.872 + ], + "angle": 0, + "content": "Overall, the interplay between robustness and scal" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.416, + 0.896, + 0.545 + ], + "angle": 0, + "content": "ability in MARL is a key area for future exploration. While existing techniques provide strong foundations, integrating methods like meta-learning could offer a way for agents to rapidly adapt to new tasks and environments. 
Additionally, leveraging recent advances in graph neural networks might enhance the scalability of MARL by modeling agent interactions more efficiently. These directions hold promise for tackling the dynamic and large-scale nature of multi-agent environments." + }, + { + "type": "title", + "bbox": [ + 0.515, + 0.559, + 0.895, + 0.587 + ], + "angle": 0, + "content": "5.2. Challenges in LLMs reasoning-based multi-agent systems" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.591, + 0.895, + 0.763 + ], + "angle": 0, + "content": "The development of LLMs-based multi-agent systems is still in its early stages, and its advantages in multi-agent collaboration in cross-domain applications have not been fully realized. In this process, technical bottlenecks, design limitations, and imperfect evaluation methods have revealed numerous challenges. This section provides a comprehensive analysis of these challenges, exploring the current status, bottlenecks, and potential breakthrough directions of LLMs-based multiagent systems in key areas such as multi-modal interaction, system scalability, hallucination control, evaluation, and privacy protection." + }, + { + "type": "text", + "bbox": [ + 0.529, + 0.772, + 0.896, + 0.872 + ], + "angle": 0, + "content": "1. Expansion of Multi-Modal Environments: [102, 113, 148] Current LLMs-based multi-agent systems primarily focus on text processing and generation, particularly excelling in language-based interactions. However, applications in multi-modal environments remain insufficient. Multi-modal environments require agents to handle various in" + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.879, + 0.509, + 0.891 + ], + "angle": 0, + "content": "35" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.139, + 0.133, + 0.48, + 0.345 + ], + "angle": 0, + "content": "puts from images, audio, video, and physical sensors, while also generating multi-modal outputs, such as descriptions of visual scenes or simulations of physical actions. This cross-modal interaction not only demands stronger model processing capabilities but also requires efficient information fusion between agents. For example, in practical applications, one agent may need to extract visual features from an image and collaborate with other agents through language to accomplish complex tasks, posing new technical challenges. Future research should focus on building unified multi-modal frameworks that enable agents to efficiently understand and collaboratively process various types of data." + }, + { + "type": "text", + "bbox": [ + 0.12, + 0.347, + 0.481, + 0.643 + ], + "angle": 0, + "content": "2. Hallucination Problem: [262, 30, 108, 263] The hallucination in LLMs, which involves generating false or inaccurate information, becomes more complex in multi-agent environments. This issue may be triggered within a single agent and further propagated through multi-agent interactions, ultimately negatively impacting the decision-making of the entire system. Because the information flow in multi-agent systems is interconnected, any misjudgment at one node can trigger a chain reaction. This characteristic makes the hallucination problem not only confined to the behavior of individual agents but also poses challenges to the stability of the entire system. 
Therefore, addressing this issue requires a dual approach: on one hand, improving model training methods to reduce the probability of hallucinations in individual agents; on the other hand, designing information verification mechanisms and propagation management strategies to minimize the spread of hallucinated information within the agent network." + }, + { + "type": "text", + "bbox": [ + 0.12, + 0.645, + 0.481, + 0.87 + ], + "angle": 0, + "content": "3. Acquisition of Collective Intelligence: [34, 176, 253] Current LLMs-based multi-agent systems rely more on real-time feedback for learning rather than offline data, unlike traditional multi-agent systems [264, 265]. This real-time learning approach imposes higher requirements on the interactive environment [262, 266]. Since designing and maintaining a reliable real-time interactive environment is not easy, it limits the scalability of the system. Additionally, existing research mostly focuses on optimizing individual agents, neglecting the potential overall efficiency improvements that could arise from agent collaboration. For example, knowledge sharing and behavioral coordination among agents may create advantages of collective intelligence in certain complex tasks. Fu" + }, + { + "type": "text", + "bbox": [ + 0.552, + 0.133, + 0.892, + 0.188 + ], + "angle": 0, + "content": "ture research needs to explore how to fully leverage the potential of collective intelligence by optimizing multi-agent interaction strategies and collaboration mechanisms." + }, + { + "type": "text", + "bbox": [ + 0.531, + 0.189, + 0.892, + 0.472 + ], + "angle": 0, + "content": "4. System Scalability: [262, 267, 253] As the number of agents in LLMs-based multi-agent systems increases, the demand for computational resources grows exponentially, posing challenges in resource-constrained environments. A single LLM agent already requires substantial computational power, and when the system scales to hundreds or thousands of agents, existing hardware and software architectures may not be able to support it. Furthermore, scaling the system introduces new complexities, such as how to efficiently allocate tasks, coordinate, and communicate among numerous agents. Studies have shown that the more agents there are, the more difficult it becomes to coordinate their operations, especially in reducing redundancy and conflicts. Therefore, future work needs to optimize resource utilization through the development of lightweight models and efficient communication protocols, while also exploring the scaling laws for agent expansion." + }, + { + "type": "text", + "bbox": [ + 0.531, + 0.474, + 0.892, + 0.743 + ], + "angle": 0, + "content": "5. Evaluation and Benchmarking: [262, 253] Current evaluation methods and benchmark tests for LLMs-based multi-agent systems are still incomplete. Most research focuses solely on the performance of individual agents in specific tasks, neglecting the overall system performance in complex scenarios. Evaluating group behavior is more challenging because the dynamics and complexity of multi-agent systems are difficult to measure with a single metric. Additionally, the lack of a unified testing framework and benchmark data is a major obstacle when comparing the capabilities of different LLMs-based multi-agent systems across domains. 
Future research needs to develop comprehensive evaluation standards and universal benchmark tests, especially in key fields such as scientific experiments, economic analysis, and urban planning, to provide a basis for system performance comparison and improvement." + }, + { + "type": "text", + "bbox": [ + 0.531, + 0.744, + 0.892, + 0.87 + ], + "angle": 0, + "content": "6. Interaction Efficiency and Cumulative Effects: [262, 268, 105, 254] The complexity of multiagent systems leads to prominent issues of low interaction efficiency and cumulative effects. Low interaction efficiency is mainly reflected in the need for generative agents to frequently query models, making the system inefficient in large-scale applications. On the other hand, because the system state highly depends on the results of the" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.88, + 0.508, + 0.89 + ], + "angle": 0, + "content": "36" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.136, + 0.133, + 0.481, + 0.246 + ], + "angle": 0, + "content": "previous round, when an error occurs in one round, it may accumulate and propagate to subsequent operations, ultimately degrading the system's overall performance. Future efforts should focus on designing more efficient communication protocols and intermediate result correction mechanisms to reduce interaction costs and the impact of cumulative errors." + }, + { + "type": "text", + "bbox": [ + 0.117, + 0.248, + 0.481, + 0.446 + ], + "angle": 0, + "content": "7. Security and Privacy Issues: [5, 118, 269, 270, 271] Context sharing within multi-agent systems poses risks of introducing noise and privacy leaks. For example, sensitive information shared between agents (such as identities or locations) may be misused by untrusted nodes, thereby threatening the system's security. Addressing this issue requires a two-pronged approach: first, establishing clear organizational structures to restrict information access permissions; second, introducing more advanced trust management mechanisms, such as distributed trust systems based on consensus algorithms, to enhance the system's security and reliability." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.455, + 0.481, + 0.597 + ], + "angle": 0, + "content": "In summary, LLMs-based multi-agent systems face challenges across multiple domains, including multimodal adaptation, scalability, evaluation methods, collective intelligence development, and privacy protection. These challenges not only reveal the current technological limitations but also provide ample space for future research. With advancements in technology and the deepening of interdisciplinary studies, LLMs-based multi-agent systems are expected to achieve significant breakthroughs both theoretically and in applications." + }, + { + "type": "title", + "bbox": [ + 0.104, + 0.617, + 0.322, + 0.646 + ], + "angle": 0, + "content": "6. Future Research Prospects and Emerging Trends" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.656, + 0.481, + 0.798 + ], + "angle": 0, + "content": "Multi-Agent Decision-Making Systems are entering a new era where LLMs are combined with MARL [116]. This combination can improve learning efficiency in complex dynamic environments. It also enables better multi-modal information processing, multi-task collaboration, and long-term planning [7, 25, 30, 28, 266]. In this section, we discuss future prospects and challenges of multi-agent decision-making system (MAS) research from theoretical, technical, application, and ethical perspectives." 
+ }, + { + "type": "title", + "bbox": [ + 0.104, + 0.811, + 0.481, + 0.839 + ], + "angle": 0, + "content": "6.1. Theoretical Development: From Traditional RL to LLMs-Enhanced MARL Framework" + }, + { + "type": "text", + "bbox": [ + 0.104, + 0.842, + 0.481, + 0.871 + ], + "angle": 0, + "content": "LLMs-enhanced MARL redefines collaboration in multi-agent systems by introducing natural language" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.133, + 0.892, + 0.331 + ], + "angle": 0, + "content": "understanding and reasoning [14, 253]. Traditional MARL requires agents to learn control strategies in dynamic environments with limited data [8, 5, 27, 24]. However, this approach often faces challenges like low sample efficiency, difficult reward design, and poor generalization. LLMs, with their strong reasoning and knowledge representation capabilities, offer solutions [272, 30]. For example, they can process multi-modal information such as natural language and vision [113, 273, 102, 260], helping agents understand tasks and environments more effectively. This improves learning speed and generalization. Furthermore, LLMs can act as reasoning tools, providing additional context and knowledge to optimize long-term planning." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.332, + 0.893, + 0.603 + ], + "angle": 0, + "content": "The LLMs-enhanced MARL framework is a groundbreaking integration of LLMs and MARL techniques, which includes roles such as information processor, reward designer, decision-maker, and generator [116]. Figure 24 presents a flowchart illustrating the structure of the LLMs-enhanced MARL framework, highlighting its four key roles. These roles work together to streamline task complexity and improve learning. For instance, LLMs can translate unstructured task descriptions into formal task semantics, reducing learning difficulty. They can also design advanced reward functions to accelerate learning in sparse-reward environments. These roles collectively address the challenges of task complexity, data efficiency, and generalization in MARL [29, 24, 251], while streamlining processes like reward design and policy generation. As shown in Table 4, we summarize recent advancements in LLMs-enhanced MARL methods across these four roles into a comprehensive table for clarity and comparison." + }, + { + "type": "title", + "bbox": [ + 0.515, + 0.621, + 0.892, + 0.65 + ], + "angle": 0, + "content": "6.2. Technical Integration: From Multi-Modal to Multi-Task Optimization" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.658, + 0.893, + 0.871 + ], + "angle": 0, + "content": "Combining LLMs and MARL significantly improves the ability to handle multi-modal information, multitask learning, and long-term task planning [273, 102, 272, 30]. Traditional MARL often requires separate modules to process visual, textual, or other forms of data. In contrast, LLMs can unify this processing, enabling comprehensive environment understanding. For example, in a robot task involving voice commands and visual inputs, LLMs can process both types of data simultaneously and generate actions directly. Additionally, LLMs provide a distinct advantage in multi-task learning due to their pre-trained knowledge [30, 57]. Through knowledge transfer, they help agents share experiences across different tasks, improving adaptability [255, 66]. 
For long-term planning, LLMs can break" + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.88, + 0.508, + 0.89 + ], + "angle": 0, + "content": "37" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.108, + 0.129, + 0.298, + 0.222 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.304, + 0.129, + 0.495, + 0.222 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.502, + 0.129, + 0.692, + 0.222 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.7, + 0.129, + 0.885, + 0.222 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.107, + 0.224, + 0.298, + 0.316 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.125, + 0.318, + 0.282, + 0.328 + ], + "angle": 0, + "content": "a) LLM as information processor." + }, + { + "type": "image", + "bbox": [ + 0.304, + 0.224, + 0.495, + 0.316 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.326, + 0.318, + 0.456, + 0.328 + ], + "angle": 0, + "content": "b) LLM as reward designer." + }, + { + "type": "image", + "bbox": [ + 0.503, + 0.224, + 0.692, + 0.316 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.529, + 0.318, + 0.653, + 0.327 + ], + "angle": 0, + "content": "c) LLM as decision-maker" + }, + { + "type": "image", + "bbox": [ + 0.7, + 0.224, + 0.885, + 0.316 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.749, + 0.317, + 0.846, + 0.327 + ], + "angle": 0, + "content": "d) LLM as generator" + }, + { + "type": "image_caption", + "bbox": [ + 0.102, + 0.343, + 0.894, + 0.367 + ], + "angle": 0, + "content": "Figure 24: Schematic diagram of the LLMs-enhanced MARL framework based on Cao et al. [116], showcasing its core roles: information processor (a), reward designer (b), decision-maker (c), and generator (d)." + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.39, + 0.481, + 0.575 + ], + "angle": 0, + "content": "down complex tasks into subtasks, addressing challenges like the credit assignment problem. This capability is particularly useful in tasks requiring extended reasoning, such as construction tasks in Minecraft. In optimizing reinforcement learning's sample efficiency [308, 281], the generative capabilities of LLMs can provide agents with additional virtual samples through high-fidelity environment simulations [116, 308]. This not only reduces the cost of real-world learning but also offers high-quality trajectories that serve as valuable references for policy optimization. Furthermore, in sparse reward environments, LLMs can accelerate policy learning by automatically designing reward signals." + }, + { + "type": "title", + "bbox": [ + 0.103, + 0.593, + 0.481, + 0.621 + ], + "angle": 0, + "content": "6.3. Application Expansion: Driving Intelligent Collaboration in Complex Scenarios" + }, + { + "type": "text", + "bbox": [ + 0.102, + 0.63, + 0.483, + 0.871 + ], + "angle": 0, + "content": "The potential of LLMs-enhanced MARL in practical applications is enormous, especially in scenarios that require complex collaboration and real-time decision-making [116, 300, 183, 171]. For example, in the field of autonomous driving [19, 18, 17], the integration of LLMs with MARL can simultaneously process sensor data and natural language information (such as traffic regulations, passenger instructions, etc. [92]), thereby enhancing the safety and accuracy of decision-making [214, 74]. 
In the field of collaborative robots, LLMs can help multiple robots form a more intuitive communication mechanism, achieving highly complex task division and dynamic adjustment. In addition, in multi-objective optimization tasks such as smart grids and intelligent healthcare, LLMs can provide domain knowledge and optimization suggestions to assist reinforcement learning, design more practical reward functions, and thus" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.39, + 0.893, + 0.504 + ], + "angle": 0, + "content": "improve the overall efficiency of the system. In dynamic and complex environments such as disaster relief [12], LLMs can dynamically allocate roles and responsibilities according to task requirements, helping multiagent systems quickly adapt to changing environments and highly complex task divisions [296, 295, 297]. This capability provides a solid technical support for a wide range of applications." + }, + { + "type": "title", + "bbox": [ + 0.515, + 0.515, + 0.892, + 0.544 + ], + "angle": 0, + "content": "6.4. Human Society Coordination: Balancing Technology and Ethics" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.545, + 0.893, + 0.729 + ], + "angle": 0, + "content": "The integration of LLMs into MARL opens new avenues for advancing multi-agent systems, while also highlighting exciting research directions in improving technical efficiency and addressing ethical considerations. For instance, enhancing the robustness of LLMs in unfamiliar environments offers the opportunity to develop strategies for minimizing biases and hallucinations, thereby improving decision accuracy. Furthermore, the computational complexity and resource demands of LLMs present a chance to innovate in optimizing inference efficiency and scalability. This is especially relevant in dynamic multi-agent environments where real-time responsiveness is critical." + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.73, + 0.894, + 0.871 + ], + "angle": 0, + "content": "From an ethical perspective, the incorporation of LLMs calls for advancements in ensuring data privacy, safeguarding against adversarial attacks, and establishing clear accountability frameworks for AI-driven decisions. Sensitive domains such as healthcare and disaster response could particularly benefit from focused research on protecting sensitive information and enhancing system resilience. Additionally, improving the transparency and explainability of LLMs-driven decisions is another promising area for exploration, as it" + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.88, + 0.508, + 0.89 + ], + "angle": 0, + "content": "38" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.104, + 0.141, + 0.483, + 0.186 + ], + "angle": 0, + "content": "Table 4: Summary of recent studies categorized by the four key roles of LLMs in MARL: Information Processor, Reward Designer, Decision-Maker, and Generator, highlighting their respective contributions and applications." + }, + { + "type": "table", + "bbox": [ + 0.108, + 0.187, + 0.501, + 0.51 + ], + "angle": 0, + "content": "
Method Types (LLM as ...): Researchers / Works / Refs.
Information Processor: Poudel et al. (ReCoRe) [274], Choi et al. (ConPE) [275], Paischer et al. (HELM) [276] and (Semantic HELM) [277], Radford et al. (CLIP) [278], Oord et al. (CPC) [279], Michael et al. (CURL) [280], Schwarzer et al. (SPR) [281]
Reward Designer: Kwon et al. (LLMrewardRL) [282], Song et al. (Self-Refined LLM) [283], Wu et al. (Read & Reward) [284], Carta et al. (GLAM) [285], Chu et al. (Lafite-RL) [286], Kim et al. (ARP) [287], Yu et al. [288], Adeniji et al. (LAMP) [289], Madaan et al. (Self-Refine) [290], Ma et al. (Eureka) [291], Xie et al. (Text2Reward) [292]
Decision-Maker: Janner et al. (TT-Offline RL) [293], Shi et al. (LaMo) [294], Li et al. (LLM scaffold) [295], Mezghani et al. (text BabyAI) [296], Grigsby et al. (AMAGO) [297], Zitkovich et al. (RT-2) [298], Yao et al. (CALM) [299], Hu et al. (instructRL) [300], Zhou et al. (LLM4Teach) [301]
Generator: Chen et al. (TransDreamer) [302], Das et al. (S2E) [303], Lin et al. (Dynalang) [304], Robine et al. (TWM) [305], Poudel et al. (LanGWM) [306], Lin et al. (HomeGrid) [307]
" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.56, + 0.483, + 0.589 + ], + "angle": 0, + "content": "would increase trust and user confidence in multi-agent systems." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.602, + 0.483, + 0.66 + ], + "angle": 0, + "content": "By addressing these areas, future research can maximize the potential of LLMs-enhanced MARL systems, ensuring they are both technically effective and ethically sound in diverse, real-world applications." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.672, + 0.483, + 0.871 + ], + "angle": 0, + "content": "Overall, the combination of LLMs and MARL brings new momentum to research and applications in multiagent systems. By enhancing collaboration through natural language understanding and leveraging large-scale knowledge, these systems can achieve greater efficiency and robustness in complex scenarios. However, fully unlocking their potential requires further exploration in theoretical methods, technological development, and ethical practices. With systematic advancements in these areas, LLMs-enhanced MARL can become the foundation for next-generation intelligent decision-making systems, transforming fields like autonomous driving, collaborative robotics, and healthcare, while shaping the future of AI research." + }, + { + "type": "title", + "bbox": [ + 0.515, + 0.133, + 0.623, + 0.146 + ], + "angle": 0, + "content": "7. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.154, + 0.895, + 0.538 + ], + "angle": 0, + "content": "Multi-agent cooperative decision-making has demonstrated remarkable potential in addressing complex tasks through intelligent collaboration and adaptability. In this survey, we systematically review the evolution of multi-agent systems, highlighting the shift from traditional methods, such as rule-based and game-theory approaches, to advanced paradigms like MARL and LLMs. We differentiate these methods by examining their unique capabilities, challenges, and applications in diverse environments, paying particular attention to dynamic and uncertain settings. In addition, we explore the critical role of simulation environments as a bridge between theoretical advancements and real-world implementation, emphasizing their influence on agent interaction, learning, and decision-making. Practical applications of multi-agent systems in domains such as autonomous driving, disaster response, and robotics further underscore their transformative potential. By summarizing advanced multi-agent decision-making methodologies, datasets, benchmarks, and future research directions, this survey aims to provide a comprehensive resource for researchers and practitioners. We hope it inspires future studies to address existing challenges, such as improving inter-agent communication and adaptability, while leveraging the innovative potential of DRL and LLMs-based approaches to advance multi-agent cooperative decision-making." + }, + { + "type": "title", + "bbox": [ + 0.515, + 0.555, + 0.649, + 0.57 + ], + "angle": 0, + "content": "Acknowledgment" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.576, + 0.895, + 0.791 + ], + "angle": 0, + "content": "The corresponding authors of this survey are B. Zhao and G.Yang from Xi'an Jiaotong University and Imperial College London. 
Guang Yang was supported in part by the ERC IMI (101005122), the H2020 (952172), the MRC (MC/PC/21013), the Royal Society (IEC\\NSFC\\211235), the NVIDIA Academic Hardware Grant Program, the SABER project supported by Boehringer Ingelheim Ltd, NIHR Imperial Biomedical Research Centre (RDA01), Wellcome Leap Dynamic Resilience, UKRI guarantee funding for Horizon Europe MSCA Postdoctoral Fellowships (EP/Z002206/1), and the UKRI Future Leaders Fellowship (MR/V023799/1). The authors would like to thank the editors and anonymous reviewers, who significantly enhanced the quality of the survey." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.807, + 0.893, + 0.837 + ], + "angle": 0, + "content": "Declaration of Generative AI and AI-assisted Technologies in the Writing Process" + }, + { + "type": "text", + "bbox": [ + 0.514, + 0.843, + 0.895, + 0.872 + ], + "angle": 0, + "content": "During the preparation of this work, the authors utilized generative AI and AI-assisted technologies for" + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.879, + 0.51, + 0.891 + ], + "angle": 0, + "content": "39" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.103, + 0.133, + 0.482, + 0.218 + ], + "angle": 0, + "content": "proofreading and enhancing readability and language clarity in certain sections. The authors have carefully reviewed these contents to ensure accuracy and completeness, acknowledging that AI can generate authoritative-sounding output that may be incorrect, incomplete, or biased." + }, + { + "type": "title", + "bbox": [ + 0.105, + 0.238, + 0.482, + 0.281 + ], + "angle": 0, + "content": "Appendix A. Technological Comparisons between Single-Agent and Multi-Agent (Under Reinforcement Learning)" + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.289, + 0.482, + 0.331 + ], + "angle": 0, + "content": "Here, we discuss a series of technological comparisons of both DRL-based single-agent and MARL-based multi-agent research." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.332, + 0.482, + 0.444 + ], + "angle": 0, + "content": "In solving these single-agent sequential decision-making problems, Markov Decision Processes (MDP) is a powerful mathematical modeling framework, especially in uncertain environments. Since the decision-making process of an agent can inherently be modeled as a sequence of decisions, the single-agent decision-making process can be formulated as an typical MDP, similar to a Markov chain." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.446, + 0.482, + 0.614 + ], + "angle": 0, + "content": "In contrast to single-agent DRL systems, multi-agent systems under the MARL techniques involve multiple agents interacting within a shared environment. POMDP is a powerful mathematical modeling framework. It is an extension of the MDP framework that is particularly well-suited for modeling decision-making in environments where the agent does not have full visibility of the entire state space. POMDPs extend MDPs to environments where the agent cannot fully observe the underlying state. Instead, the agent maintains a belief state, which is a probability distribution over the possible states." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.616, + 0.482, + 0.687 + ], + "angle": 0, + "content": "Figure A.25 provides a comparative illustration of Markov Decision Processes (MDP) and Partially Observable Markov Decision Processes (POMDP), which correspond to single-agent and multi-agent reinforcement learning paradigms, respectively." 
+ }, + { + "type": "text", + "bbox": [ + 0.103, + 0.687, + 0.482, + 0.828 + ], + "angle": 0, + "content": "The left side of Figure A.25 depicts an MDP, which models single-agent decision-making in a fully observable environment. The agent selects an action \\(a\\) from the action space \\(A\\) based on the current state \\(s\\) from the state space \\(S\\). The environment transitions to a new state \\(s'\\) following the transition probability function \\(P(s' \\mid s, a)\\), and the agent receives a reward \\(r\\). The objective is to optimize a policy \\(\\pi^*\\) that maximizes the cumulative reward. Since the entire state is observable, the decision-making process is relatively straightforward." + }, + { + "type": "text", + "bbox": [ + 0.103, + 0.829, + 0.482, + 0.871 + ], + "angle": 0, + "content": "On the right side, the POMDP framework extends MDPs to multi-agent settings where agents operate under partial observability. Each agent \\(i\\) receives only a" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.133, + 0.894, + 0.232 + ], + "angle": 0, + "content": "partial observation \\(o^i\\) rather than the full state \\(S\\). The agents take individual actions \\(a^i\\), forming a joint action \\(a_t\\), which influences state transitions and results in individual rewards \\(r^i\\). The observations are generated according to the observation function \\(Z(o \\mid s', a)\\), requiring each agent to infer the missing state information and maintain a belief state for effective decision-making." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.233, + 0.894, + 0.388 + ], + "angle": 0, + "content": "In summary, MDPs are well-suited for single-agent systems, where the environment is static and fully observable, allowing the agent to make optimal decisions based on complete knowledge of the state. On the other hand, POMDPs are crucial for multi-agent reinforcement learning scenarios, where multiple agents interact dynamically in an uncertain environment with limited information. This setting introduces challenges such as coordination, competition, and reward interdependencies, making decision-making significantly more complex." + }, + { + "type": "title", + "bbox": [ + 0.517, + 0.409, + 0.599, + 0.421 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.531, + 0.431, + 0.892, + 0.477 + ], + "angle": 0, + "content": "[1] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, M. Riedmiller, Playing atari with deep reinforcement learning (2013). arXiv:1312.5602. URL https://arxiv.org/abs/1312.5602" + }, + { + "type": "ref_text", + "bbox": [ + 0.532, + 0.478, + 0.892, + 0.566 + ], + "angle": 0, + "content": "[2] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, D. Hassabis, Human-level control through deep reinforcement learning, Nature 518 (7540) (2015) 529-533. doi:10.1038/nature14236. URL https://doi.org/10.1038/nature14236" + }, + { + "type": "ref_text", + "bbox": [ + 0.532, + 0.567, + 0.892, + 0.656 + ], + "angle": 0, + "content": "[3] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, D. 
Hassabis, Mastering the game of go with deep neural networks and tree search, Nature 529 (7587) (2016) 484-489. doi:10.1038/nature16961. URL https://doi.org/10.1038/nature16961" + }, + { + "type": "ref_text", + "bbox": [ + 0.532, + 0.657, + 0.892, + 0.735 + ], + "angle": 0, + "content": "[4] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, D. Hassabis, Mastering the game of go without human knowledge, Nature 550 (7676) (2017) 354-359. doi:10.1038/nature24270. URL https://doi.org/10.1038/nature24270" + }, + { + "type": "ref_text", + "bbox": [ + 0.532, + 0.736, + 0.892, + 0.791 + ], + "angle": 0, + "content": "[5] T. Li, K. Zhu, N. C. Luong, D. Niyato, Q. Wu, Y. Zhang, B. Chen, Applications of multi-agent reinforcement learning in future internet: A comprehensive survey, IEEE Communications Surveys & Tutorials 24 (2) (2022) 1240-1279. doi: 10.1109/COMST.2022.3160697." + }, + { + "type": "ref_text", + "bbox": [ + 0.532, + 0.792, + 0.892, + 0.835 + ], + "angle": 0, + "content": "[6] S. Gronauer, K. Diepold, Multi-agent deep reinforcement learning: a survey, Artificial Intelligence Review 55 (2) (2022) 895-943. doi:10.1007/s10462-021-09996-w. URL https://doi.org/10.1007/s10462-021-09996-w" + }, + { + "type": "ref_text", + "bbox": [ + 0.532, + 0.837, + 0.892, + 0.87 + ], + "angle": 0, + "content": "[7] P. Yadav, A. Mishra, S. Kim, A comprehensive survey on multi-agent reinforcement learning for connected and automated vehicles, Sensors 23 (10) (2023). doi:10.3390/" + }, + { + "type": "list", + "bbox": [ + 0.531, + 0.431, + 0.892, + 0.87 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.879, + 0.509, + 0.89 + ], + "angle": 0, + "content": "40" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.137, + 0.134, + 0.481, + 0.302 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.138, + 0.305, + 0.448, + 0.32 + ], + "angle": 0, + "content": "Markov Decision Processes (MDP) \\(\\langle S, A, P, R, \\gamma \\rangle\\)" + }, + { + "type": "image", + "bbox": [ + 0.481, + 0.134, + 0.862, + 0.298 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.534, + 0.304, + 0.781, + 0.334 + ], + "angle": 0, + "content": "Partially Observable Markov Decision Processes (POMDP) \\(\\langle S, A, P, R, \\gamma, O, Z \\rangle\\)" + }, + { + "type": "image_caption", + "bbox": [ + 0.103, + 0.351, + 0.894, + 0.376 + ], + "angle": 0, + "content": "Figure A.25: The Markov Decision Process modeling for the single-agent reinforcement learning paradigm (left) and the Partially Observable Markov Decision Process modeling for the multi-agent reinforcement learning paradigm (right)." + }, + { + "type": "text", + "bbox": [ + 0.144, + 0.401, + 0.214, + 0.412 + ], + "angle": 0, + "content": "s23104710." + }, + { + "type": "text", + "bbox": [ + 0.144, + 0.412, + 0.469, + 0.423 + ], + "angle": 0, + "content": "URL https://www.mdpi.com/1424-8220/23/10/4710" + }, + { + "type": "ref_text", + "bbox": [ + 0.12, + 0.423, + 0.481, + 0.467 + ], + "angle": 0, + "content": "[8] J. Orr, A. Dutta, Multi-agent deep reinforcement learning for multi-robot applications: A survey, Sensors 23 (7) (2023). doi:10.3390/s23073625. URL https://www.mdpi.com/1424-8220/23/7/3625" + }, + { + "type": "ref_text", + "bbox": [ + 0.12, + 0.468, + 0.481, + 0.546 + ], + "angle": 0, + "content": "[9] W. Jin, B. 
Zhao, Y. Zhang, J. Huang, H. Yu, Wordtrans-absa: Enhancing aspect-based sentiment analysis with masked language modeling for affective token prediction, Expert Systems with Applications 238 (2024) 122289. doi:https://doi.org/10.1016/j.eswa.2023.122289. URL https://www.sciencedirect.com/science/article/pii/S0957417423027914" + }, + { + "type": "ref_text", + "bbox": [ + 0.114, + 0.547, + 0.481, + 0.625 + ], + "angle": 0, + "content": "[10] B. Zhao, W. Jin, Y. Zhang, S. Huang, G. Yang, Prompt learning for metonymy resolution: Enhancing performance with internal prior knowledge of pre-trained language models, Knowledge-Based Systems 279 (2023) 110928. doi:https://doi.org/10.1016/j.knosys.2023.110928. URL https://www.sciencedirect.com/science/article/pii/S0950705123006780" + }, + { + "type": "ref_text", + "bbox": [ + 0.114, + 0.626, + 0.481, + 0.682 + ], + "angle": 0, + "content": "[11] A. Seewald, C. J. Lerch, M. Chancan, A. M. Dollar, I. Abraham, Energy-aware ergodic search: Continuous exploration for multi-agent systems with battery constraints (2024). arXiv: 2310.09470. URL https://arxiv.org/abs/2310.09470" + }, + { + "type": "ref_text", + "bbox": [ + 0.114, + 0.682, + 0.481, + 0.738 + ], + "angle": 0, + "content": "[12] M. M. H. Qazzaz, S. A. R. Zaidi, D. C. McLernon, A. Salama, A. A. Al-Hameed, Optimizing search and rescue UAV connectivity in challenging terrain through multi q-learning (2024). arXiv:2405.10042. URL https://arxiv.org/abs/2405.10042" + }, + { + "type": "ref_text", + "bbox": [ + 0.114, + 0.738, + 0.481, + 0.793 + ], + "angle": 0, + "content": "[13] G. T. Papadopoulos, M. Antona, C. Stephanidis, Towards open and expandable cognitive ai architectures for large-scale multi-agent human-robot collaborative learning, IEEE Access 9 (2021) 73890-73909. doi:10.1109/ACCESS.2021.3080517." + }, + { + "type": "ref_text", + "bbox": [ + 0.114, + 0.794, + 0.481, + 0.849 + ], + "angle": 0, + "content": "[14] M. D. Silva, R. Regnier, M. Makarov, G. Avrin, D. Dumur, Evaluation of intelligent collaborative robots: a review, in: 2023 IEEE/SICE International Symposium on System Integration (SII), 2023, pp. 1-7. doi:10.1109/SII55687.2023.10039365." + }, + { + "type": "ref_text", + "bbox": [ + 0.114, + 0.85, + 0.481, + 0.862 + ], + "angle": 0, + "content": "[15] Y. Huang, S. Wu, Z. Mu, X. Long, S. Chu, G. Zhao, A multi" + }, + { + "type": "list", + "bbox": [ + 0.114, + 0.401, + 0.481, + 0.862 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.553, + 0.401, + 0.892, + 0.445 + ], + "angle": 0, + "content": "agent reinforcement learning method for swarm robots in space collaborative exploration, in: 2020 6th International Conference on Control, Automation and Robotics (ICCAR), 2020, pp. 139-144. doi:10.1109/ICCAR49639.2020.9107997." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.446, + 0.892, + 0.524 + ], + "angle": 0, + "content": "[16] C. G. Cena, P. F. Cardenas, R. S. Pazmino, L. Puglisi, R. A. Santonja, A cooperative multi-agent robotics system: Design and modelling, Expert Systems with Applications 40 (12) (2013) 4737-4748. doi:https://doi.org/10.1016/j.eswa.2013.01.048. URL https://www.sciencedirect.com/science/article/pii/S0957417413000791" + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.525, + 0.892, + 0.58 + ], + "angle": 0, + "content": "[17] S. Jayawardana, V. Jayawardana, K. Vidanage, C. 
Wu, Multi-behavior learning for socially compatible autonomous driving, in: 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), 2023, pp. 4422-4427. doi:10.1109/ITSC57777.2023.10422120." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.58, + 0.892, + 0.636 + ], + "angle": 0, + "content": "[18] L. Wen, J. Duan, S. E. Li, S. Xu, H. Peng, Safe reinforcement learning for autonomous vehicles through parallel constrained policy optimization, in: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), 2020, pp. 1-7. doi:10.1109/ITSC45102.2020.9294262." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.637, + 0.892, + 0.693 + ], + "angle": 0, + "content": "[19] L. Weiwei, H. Wenxuan, J. Wei, L. Lanxin, G. Lingping, L. Yong, Learning to model diverse driving behaviors in highly interactive autonomous driving scenarios with multi-agent reinforcement learning (2024). arXiv:2402.13481. URL https://arxiv.org/abs/2402.13481" + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.693, + 0.892, + 0.738 + ], + "angle": 0, + "content": "[20] Y. Xue, W. Chen, Multi-agent deep reinforcement learning for uavs navigation in unknown complex environment, IEEE Transactions on Intelligent Vehicles 9 (1) (2024) 2290-2303. doi:10.1109/TIV.2023.3298292." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.738, + 0.892, + 0.782 + ], + "angle": 0, + "content": "[21] S. Rezwan, W. Choi, Artificial intelligence approaches for uav navigation: Recent advances and future challenges, IEEE Access 10 (2022) 26320-26339. doi:10.1109/ACCESS.2022.3157626." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.783, + 0.892, + 0.849 + ], + "angle": 0, + "content": "[22] B. Al Baroomi, T. Myo, M. R. Ahmed, A. Al Shibli, M. H. Marhaban, M. S. Kaiser, Ant colony optimization-based path planning for uav navigation in dynamic environments, in: 2023 7th International Conference on Automation, Control and Robots (ICACR), 2023, pp. 168-173. doi:10.1109/ICACR59381.2023.10314603." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.85, + 0.892, + 0.862 + ], + "angle": 0, + "content": "[23] T. Samad, S. Iqbal, A. W. Malik, O. Arif, P. Bloodsworth, A" + }, + { + "type": "list", + "bbox": [ + 0.526, + 0.401, + 0.892, + 0.862 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.88, + 0.508, + 0.89 + ], + "angle": 0, + "content": "41" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.142, + 0.135, + 0.482, + 0.168 + ], + "angle": 0, + "content": "multi-agent framework for cloud-based management of collaborative robots, International Journal of Advanced Robotic Systems 15 (4) (2018). doi:10.1177/1729881418785073." + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.169, + 0.482, + 0.224 + ], + "angle": 0, + "content": "[24] W. Du, S. Ding, A survey on multi-agent deep reinforcement learning: from the perspective of challenges and applications, Artificial Intelligence Review 54 (5) (2021) 3215-3238. doi: 10.1007/s10462-020-09938-y. URL https://doi.org/10.1007/s10462-020-09938-y" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.225, + 0.482, + 0.292 + ], + "angle": 0, + "content": "[25] Z. Ning, L. Xie, A survey on multi-agent reinforcement learning and its application, Journal of Automation and Intelligence 3 (2) (2024) 73-91. doi:https://doi.org/10.1016/j.jai.2024.02.003. 
URL https://www.sciencedirect.com/science/article/pii/S2949855424000042" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.293, + 0.481, + 0.337 + ], + "angle": 0, + "content": "[26] Q. Yang, R. Liu, Understanding the application of utility theory in robotics and artificial intelligence: A survey (2023). arXiv: 2306.09445. URL https://arxiv.org/abs/2306.09445" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.338, + 0.481, + 0.382 + ], + "angle": 0, + "content": "[27] P. Hernandez-Leal, M. Kaisers, T. Baarslag, E. M. de Cote, A survey of learning in multiagent environments: Dealing with non-stationarity (2019). arXiv:1707.09183. URL https://arxiv.org/abs/1707.09183" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.383, + 0.481, + 0.438 + ], + "angle": 0, + "content": "[28] C. Zhu, M. Dastani, S. Wang, A survey of multi-agent deep reinforcement learning with communication, Autonomous Agents and Multi-Agent Systems 38 (1) (2024) 4. doi: 10.1007/s10458-023-09633-6. URL https://doi.org/10.1007/s10458-023-09633-6" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.439, + 0.481, + 0.493 + ], + "angle": 0, + "content": "[29] T. T. Nguyen, N. D. Nguyen, S. Nahavandi, Deep reinforcement learning for multiagent systems: A review of challenges, solutions, and applications, IEEE Transactions on Cybernetics 50 (9) (2020) 3826-3839. doi:10.1109/TCYB.2020.2977374." + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.495, + 0.481, + 0.562 + ], + "angle": 0, + "content": "[30] L. Wang, C. Ma, X. Feng, Z. Zhang, H. Yang, J. Zhang, Z. Chen, J. Tang, X. Chen, Y. Lin, W. X. Zhao, Z. Wei, J. Wen, A survey on large language model based autonomous agents, Frontiers of Computer Science 18 (6) (2024) 186345. doi:10.1007/s11704-024-40231-1. URL https://doi.org/10.1007/s11704-024-40231-1" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.563, + 0.481, + 0.629 + ], + "angle": 0, + "content": "[31] B. Zhao, W. Jin, J. Del Ser, G. Yang, Chatagri: Exploring potentials of chatgpt on cross-linguistic agricultural text classification, Neurocomputing 557 (2023) 126708. doi: https://doi.org/10.1016/j.neucom.2023.126708. URL https://www.sciencedirect.com/science/article/pii/S0925231223008317" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.63, + 0.481, + 0.674 + ], + "angle": 0, + "content": "[32] T. Miki, M. Nagao, H. Kobayashi, T. Nakamura, A simple rule based multi-agent control algorithm and its implementation using autonomous mobile robots, in: 2010 World Automation Congress, 2010, pp. 1-6." + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.675, + 0.481, + 0.741 + ], + "angle": 0, + "content": "[33] H. Yarahmadi, H. Navidi, M. Challenger, Improving the resource allocation in IoT systems based on the integration of reinforcement learning and rule-based approaches in multi-agent systems, in: 2024 8th International Conference on Smart Cities, Internet of Things and Applications (SCIIoT), 2024, pp. 135-141. doi:10.1109/SCIIoT62588.2024.10570102." + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.742, + 0.481, + 0.787 + ], + "angle": 0, + "content": "[34] S.-H. Wu, V.-W. Soo, A fuzzy game theoretic approach to multi-agent coordination, in: T. Ishida (Ed.), Multiagent Platforms, Springer Berlin Heidelberg, Berlin, Heidelberg, 1999, pp. 76-87. doi:10.1007/3-540-48826-X\\_6." + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.788, + 0.481, + 0.842 + ], + "angle": 0, + "content": "[35] H. Zhang, J. Zhang, G.-H. Yang, Y. 
Luo, Leader-based optimal coordination control for the consensus problem of multiagent differential games via fuzzy adaptive dynamic programming, IEEE Transactions on Fuzzy Systems 23 (1) (2015) 152-163. doi:10.1109/TFUZZ.2014.2310238." + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.843, + 0.481, + 0.866 + ], + "angle": 0, + "content": "[36] F. Ren, M. Zhang, Q. Bai, A fuzzy-based approach for partner selection in multi-agent systems, in: 6th IEEE/ACIS Interna" + }, + { + "type": "list", + "bbox": [ + 0.113, + 0.135, + 0.482, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.554, + 0.135, + 0.892, + 0.157 + ], + "angle": 0, + "content": "tional Conference on Computer and Information Science (ICIS 2007), 2007, pp. 457-462. doi:10.1109/ICIS.2007.21." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.159, + 0.892, + 0.191 + ], + "angle": 0, + "content": "[37] D. Gu, H. Hu, Fuzzy multi-agent cooperative q-learning, in: 2005 IEEE International Conference on Information Acquisition, 2005, p. 5 pp. doi:10.1109/ICIA.2005.1635080." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.192, + 0.892, + 0.246 + ], + "angle": 0, + "content": "[38] J. Wang, Y. Hong, J. Wang, J. Xu, Y. Tang, Q.-L. Han, J. Kurths, Cooperative and competitive multi-agent systems: From optimization to games, IEEE/CAA Journal of Automatica Sinica 9 (5) (2022) 763-783. doi:10.1109/JAS.2022.105506." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.248, + 0.892, + 0.303 + ], + "angle": 0, + "content": "[39] Y. Guo, Q. Pan, Q. Sun, C. Zhao, D. Wang, M. Feng, Cooperative game-based multi-agent path planning with obstacle avoidance, in: 2019 IEEE 28th International Symposium on Industrial Electronics (ISIE), 2019, pp. 1385-1390. doi: 10.1109/ISIE.2019.8781205." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.304, + 0.892, + 0.348 + ], + "angle": 0, + "content": "[40] D. Schwung, A. Schwung, S. X. Ding, Distributed self-optimization of modular production units: A state-based potential game approach, IEEE Transactions on Cybernetics 52 (4) (2022) 2174–2185. doi:10.1109/TCYB.2020.3006620." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.349, + 0.892, + 0.393 + ], + "angle": 0, + "content": "[41] X. Wang, J. Wang, J. Chen, Y. Yang, L. Kong, X. Liu, L. Jia, Y. Xu, A game-theoretic learning framework for multi-agent intelligent wireless networks (2019). arXiv:1812.01267. URL https://arxiv.org/abs/1812.01267" + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.394, + 0.892, + 0.46 + ], + "angle": 0, + "content": "[42] W. Lin, Y. Chen, Q. Q. Wang, J. Zeng, J. Liu, Multi-agents based distributed-energy-resource management for intelligent microgrid with potential game algorithm, in: IECON 2017 - 43rd Annual Conference of the IEEE Industrial Electronics Society, 2017, pp. 7795-7800. doi:10.1109/IECON.2017.8217366." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.461, + 0.892, + 0.516 + ], + "angle": 0, + "content": "[43] H. Wang, Z. Ning, H. Luo, Y. Jiang, M. Huo, Game-based adaptive optimization approach for multi-agent systems, in: 2023 IEEE International Conference on Industrial Technology (ICIT), 2023, pp. 1-5. doi:10.1109/ICIT58465.2023.10143172." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.518, + 0.892, + 0.562 + ], + "angle": 0, + "content": "[44] L. Bull, Evolutionary computing in multi-agent environments: Operators, in: V. W. Porto, N. Saravanan, D. Waagen, A. E. 
Eiben (Eds.), Evolutionary Programming VII, Springer Berlin Heidelberg, Berlin, Heidelberg, 1998, pp. 43-52." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.563, + 0.892, + 0.628 + ], + "angle": 0, + "content": "[45] J. Liu, W. Zhong, L. Jiao, Multi-Agent Evolutionary Model for Global Numerical Optimization, Springer Berlin Heidelberg, Berlin, Heidelberg, 2010, pp. 13-48. doi:10.1007/978-3-642-13425-8\\2. URL https://doi.org/10.1007/978-3-642-13425-8_2" + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.63, + 0.892, + 0.662 + ], + "angle": 0, + "content": "[46] D. Bloembergen, K. Tuyls, D. Hennes, M. Kaisers, Evolutionary dynamics of multi-agent learning: a survey, J. Artif. Int. Res. 53 (1) (2015) 659-697." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.663, + 0.892, + 0.741 + ], + "angle": 0, + "content": "[47] D. Klijn, A. E. Eiben, A coevolutionary approach to deep multi-agent reinforcement learning, in: Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '21, Association for Computing Machinery, New York, NY, USA, 2021, p. 283-284. doi:10.1145/3449726.3459576. URL https://doi.org/10.1145/3449726.3459576" + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.742, + 0.892, + 0.787 + ], + "angle": 0, + "content": "[48] S. Yuan, K. Song, J. Chen, X. Tan, D. Li, D. Yang, Evoagent: Towards automatic multi-agent generation via evolutionary algorithms (2024). arXiv:2406.14228. URL https://arxiv.org/abs/2406.14228" + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.788, + 0.892, + 0.843 + ], + "angle": 0, + "content": "[49] W. Zhang, H. Liu, Evolutionary game analysis of multi-agent cooperation strategy analysis in agricultural water conservancy ppp project under digitization background, Scientific Reports 14 (1) (2024) 22915. doi:10.1038/s41598-024-74065-5. URL https://doi.org/10.1038/s41598-024-74065-5" + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.844, + 0.892, + 0.865 + ], + "angle": 0, + "content": "[50] T. Rashid, M. Samvelyan, C. Schroeder, G. Farquhar, J. Foerster, S. Whiteson, QMIX: Monotonic value function" + }, + { + "type": "list", + "bbox": [ + 0.526, + 0.135, + 0.892, + 0.865 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.88, + 0.509, + 0.89 + ], + "angle": 0, + "content": "42" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.142, + 0.135, + 0.481, + 0.202 + ], + "angle": 0, + "content": "factorisation for deep multi-agent reinforcement learning, in: J. Dy, A. Krause (Eds.), Proceedings of the 35th International Conference on Machine Learning, Vol. 80 of Proceedings of Machine Learning Research, PMLR, 2018, pp. 4295-4304. URL https://proceedings.mlr.press/v80/ rashid18a.html" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.203, + 0.481, + 0.27 + ], + "angle": 0, + "content": "[51] L. Kraemer, B. Banerjee, Multi-agent reinforcement learning as a rehearsal for decentralized planning, Neurocomputing 190 (2016) 82-94. doi:https://doi.org/10.1016/j.neucom.2016.01.031. URL https://www.sciencedirect.com/science/article/pii/S0925231216000783" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.271, + 0.481, + 0.325 + ], + "angle": 0, + "content": "[52] B. Yu, Z. Cai, J. He, Fast-qmix: Accelerating deep multiagent reinforcement learning with virtual weighted q-values, in: 2021 2nd International Conference on Electronics, Communications and Information Technology (CECIT), 2021, pp. 594-599. doi:10.1109/CECIT53797.2021.00110." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.327, + 0.481, + 0.415 + ], + "angle": 0, + "content": "[53] K. Son, D. Kim, W. J. Kang, D. E. Hostallero, Y. Yi, QTRAN: Learning to factorize with transformation for cooperative multi-agent reinforcement learning, in: K. Chaudhuri, R. Salakhutdinov (Eds.), Proceedings of the 36th International Conference on Machine Learning, Vol. 97 of Proceedings of Machine Learning Research, PMLR, 2019, pp. 5887-5896. URL https://proceedings.mlr.press/v97/son19a.htm1" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.416, + 0.481, + 0.483 + ], + "angle": 0, + "content": "[54] T. Rashid, G. Farquhar, B. Peng, S. Whiteson, Weighted qmix: expanding monotonic value function factorisation for deep multi-agent reinforcement learning, in: Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS '20, Curran Associates Inc., Red Hook, NY, USA, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.484, + 0.481, + 0.562 + ], + "angle": 0, + "content": "[55] K. Kurach, A. Raichuk, P. Stanczyk, M. Zajac, O. Bachem, L. Espeholt, C. Riquelme, D. Vincent, M. Michalski, O. Bousquet, S. Gelly, Google research football: A novel reinforcement learning environment, Proceedings of the AAAI Conference on Artificial Intelligence 34 (04) (2020) 4501-4510. doi:10.1609/aaai.v34i04.5878. URL https://doi.org/10.1609/aaai.v34i04.5878" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.563, + 0.481, + 0.619 + ], + "angle": 0, + "content": "[56] J. Wang, Z. Ren, T. Liu, Y. Yu, C. Zhang, QPLEX: duplex dueling multi-agent q-learning, in: 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021, OpenReview.net, 2021. URL https://openreview.net/forum?id=RcmkOxxIQV" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.62, + 0.481, + 0.663 + ], + "angle": 0, + "content": "[57] P. Li, V. Menon, B. Gudiguntla, D. Ting, L. Zhou, Challenges faced by large language models in solving multi-agent flocking (2024). arXiv:2404.04752. URL https://arxiv.org/abs/2404.04752" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.664, + 0.481, + 0.764 + ], + "angle": 0, + "content": "[58] Z. Xu, C. Yu, F. Fang, Y. Wang, Y. Wu, Language agents with reinforcement learning for strategic play in the werewolf game, in: R. Salakhutdinov, Z. Kolter, K. Heller, A. Weller, N. Oliver, J. Scarlett, F. Berkenkamp (Eds.), Proceedings of the 41st International Conference on Machine Learning, Vol. 235 of Proceedings of Machine Learning Research, PMLR, 2024, pp. 55434-55464. URL https://proceedings.mlr.press/v235/xu24ad.htm1" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.765, + 0.481, + 0.798 + ], + "angle": 0, + "content": "[59] I. Mordatch, P. Abbeel, Emergence of grounded compositional language in multi-agent populations, arXiv preprint arXiv:1703.04908 (2017)." + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.799, + 0.481, + 0.853 + ], + "angle": 0, + "content": "[60] H. Schwartz, An object oriented approach to fuzzy actor-critic learning for multi-agent differential games, in: 2019 IEEE Symposium Series on Computational Intelligence (SSCI), 2019, pp. 183-190. doi:10.1109/SSCI44817.2019.9002707." + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.854, + 0.481, + 0.866 + ], + "angle": 0, + "content": "[61] I. 
Harmati, Multi-agent coordination for target tracking using" + }, + { + "type": "list", + "bbox": [ + 0.113, + 0.135, + 0.481, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.554, + 0.135, + 0.892, + 0.202 + ], + "angle": 0, + "content": "fuzzy inference system in game theoretic framework, in: 2006 IEEE Conference on Computer Aided Control System Design, 2006 IEEE International Conference on Control Applications, 2006 IEEE International Symposium on Intelligent Control, 2006, pp. 2390-2395. doi:10.1109/CACSD-CCA-ISIC.2006.4777014." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.203, + 0.892, + 0.257 + ], + "angle": 0, + "content": "[62] C. W. Khuen, C. H. Yong, F. Haron, Multi-agent negotiation system using adaptive fuzzy logic in resource allocation, in: The 2nd International Conference on Distributed Frameworks for Multimedia Applications, 2006, pp. 1-7. doi:10.1109/DFMA.2006.296888." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.259, + 0.892, + 0.313 + ], + "angle": 0, + "content": "[63] L. Yan, J. Liu, C. P. Chen, Y. Zhang, Z. Wu, Z. Liu, Game-based adaptive fuzzy optimal bipartite containment of nonlinear multiagent systems, IEEE Transactions on Fuzzy Systems 32 (3) (2024) 1455-1465. doi:10.1109/TFUZZ.2023.3327699." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.315, + 0.892, + 0.381 + ], + "angle": 0, + "content": "[64] R. R. P. Vicerra, K. K. A. David, A. R. dela Cruz, E. A. Roxas, K. B. C. Simbulan, A. A. Bandala, E. P. Dadios, A multiple level mimo fuzzy logic based intelligence for multiple agent cooperative robot system, in: TENCON 2015 - 2015 IEEE Region 10 Conference, 2015, pp. 1-7. doi:10.1109/TENCON.2015.7372985." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.382, + 0.892, + 0.427 + ], + "angle": 0, + "content": "[65] D. Gu, E. Yang, Fuzzy policy gradient reinforcement learning for leader-follower systems, in: IEEE International Conference Mechatronics and Automation, 2005, Vol. 3, 2005, pp. 1557-1561 Vol. 3. doi:10.1109/ICMA.2005.1626787." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.428, + 0.892, + 0.471 + ], + "angle": 0, + "content": "[66] Y. Maruyama, A reasoning system for fuzzy distributed knowledge representation in multi-agent systems, in: 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2021, pp. 1-6. doi:10.1109/FUZZ45933.2021.9494454." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.472, + 0.892, + 0.527 + ], + "angle": 0, + "content": "[67] J. Peng, M. Liu, J. Liu, K.-C. Lin, M. Wu, A coordination model using fuzzy reinforcement learning for multi-agent system, in: 2007 International Symposium on Collaborative Technologies and Systems, 2007, pp. 129-136. doi:10.1109/CTS.2007.4621748." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.529, + 0.892, + 0.583 + ], + "angle": 0, + "content": "[68] E. Yang, D. Gu, A multiagent fuzzy policy reinforcement learning algorithm with application to leader-follower robotic systems, in: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006, pp. 3197-3202. doi: 10.1109/IROS.2006.282421." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.584, + 0.892, + 0.64 + ], + "angle": 0, + "content": "[69] H. Wang, H. Luo, Y. Jiang, A game-based distributed fault-tolerant control method for multi-agent systems, in: 2022 IEEE 1st Industrial Electronics Society Annual On-Line Conference (ONCON), 2022, pp. 1-5. doi:10.1109/ONCON56984.2022.10126545." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.641, + 0.892, + 0.707 + ], + "angle": 0, + "content": "[70] M. Lanctot, V. Zambaldi, A. Gruslys, A. Lazaridou, K. Tuyls, A unified game-theoretic approach to multiagent reinforcement learning, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, Curran Associates Inc., Red Hook, NY, USA, 2017, p. 4193-4206." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.709, + 0.892, + 0.763 + ], + "angle": 0, + "content": "[71] C. Guo, L. Zhong, T. Liu, A study on game theory in decision interaction for multi intelligent agents based on information fusion, in: Y. Wu (Ed.), Computing and Intelligent Systems, Springer Berlin Heidelberg, Berlin, Heidelberg, 2011, pp. 442-452." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.765, + 0.892, + 0.798 + ], + "angle": 0, + "content": "[72] J. Zhang, J. Zhang, The coverage control solutions based on evolutionary game theory in the multi-agent systems, in: 2019 12th Asian Control Conference (ASCC), 2019, pp. 1-6." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.799, + 0.892, + 0.853 + ], + "angle": 0, + "content": "[73] R. Kong, N. Zhang, X. Bao, X. Lu, Multi-agent distributed optimization based on the game theory and its application in automobile chassis, in: 2018 37th Chinese Control Conference (CCC), 2018, pp. 1107-1111. doi:10.23919/ChiCC.2018.8484212." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.854, + 0.892, + 0.866 + ], + "angle": 0, + "content": "[74] H. Wang, X. Zhang, H. Luo, X. Qiao, M. Huo, Y. Jiang, Dis" + }, + { + "type": "list", + "bbox": [ + 0.526, + 0.135, + 0.892, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.88, + 0.509, + 0.89 + ], + "angle": 0, + "content": "43" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.142, + 0.135, + 0.481, + 0.19 + ], + "angle": 0, + "content": "tributed fault tolerant control for multi-agent systems with sensor faults in non-cooperative games, in: 2023 CAA Symposium on Fault Detection, Supervision and Safety for Technical Processes (SAFEPPROCESS), 2023, pp. 1-6. doi:10.1109/ SAFEPPROCESS58597.2023.10295775." + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.191, + 0.482, + 0.236 + ], + "angle": 0, + "content": "[75] X. Dong, X. Li, S. Cheng, Energy management optimization of microgrid cluster based on multi-agent-system and hierarchical stackelberg game theory, IEEE Access 8 (2020) 206183-206197. doi:10.1109/ACCESS.2020.3037676." + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.237, + 0.481, + 0.28 + ], + "angle": 0, + "content": "[76] P. H. Nguyen, W. L. Kling, P. F. Ribeiro, A game theory strategy to integrate distributed agent-based functions in smart grids, IEEE Transactions on Smart Grid 4 (1) (2013) 568-576. doi:10.1109/TSG.2012.2236657." + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.281, + 0.481, + 0.336 + ], + "angle": 0, + "content": "[77] S. U. Khan, I. Ahmad, A semi-distributed axiomatic game theoretical mechanism for replicating data objects in large distributed computing systems, in: 2007 IEEE International Parallel and Distributed Processing Symposium, 2007, pp. 1-10. doi:10.1109/IPDPS.2007.370279." + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.337, + 0.481, + 0.393 + ], + "angle": 0, + "content": "[78] L. Xu, J. Zhu, B. Chen, Z. Yang, K. Liu, B. Dang, T. Zhang, Y. Yang, R. 
Huang, A distributed nanocluster based multiagent evolutionary network, Nature Communications 13 (1) (2022) 4698. doi:10.1038/s41467-022-32497-5. URL https://doi.org/10.1038/s41467-022-32497-5" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.394, + 0.481, + 0.472 + ], + "angle": 0, + "content": "[79] F. Seredynski, T. Kulpa, R. Hoffmann, Towards evolutionary self-optimization of large multi-agent systems, in: Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '22, Association for Computing Machinery, New York, NY, USA, 2022, p. 200-203. doi: 10.1145/3520304.3529042. URL https://doi.org/10.1145/3520304.3529042" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.473, + 0.481, + 0.54 + ], + "angle": 0, + "content": "[80] H. Liu, Z. Li, K. Huang, R. Wang, G. Cheng, T. Li, Evolutionary reinforcement learning algorithm for large-scale multi-agent cooperation and confrontation applications, The Journal of Supercomputing 80 (2) (2024) 2319-2346. doi:10.1007/s11227-023-05551-2. URL https://doi.org/10.1007/s11227-023-05551-2" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.54, + 0.481, + 0.596 + ], + "angle": 0, + "content": "[81] Y. Dong, X. Liu, T. Li, C. L. P. Chen, Evolutionary game dynamics of multi-agent systems using local information considering hide right, Complex & Intelligent Systems 10 (1) (2024) 917-925. doi:10.1007/s40747-023-01172-7. URL https://doi.org/10.1007/s40747-023-01172-7" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.596, + 0.481, + 0.641 + ], + "angle": 0, + "content": "[82] H. Chen, C. Hu, Z. Huang, Optimal control of multiagent decision-making based on competence evolution, Discrete Dynamics in Nature and Society 2023 (1) (2023) 2179376. doi: https://doi.org/10.1155/2023/2179376." + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.642, + 0.481, + 0.697 + ], + "angle": 0, + "content": "[83] H.-T. Wai, Z. Yang, Z. Wang, M. Hong, Multi-agent reinforcement learning via double averaging primal-dual optimization, in: Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18, Curran Associates Inc., Red Hook, NY, USA, 2018, pp. 9672-9683." + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.698, + 0.481, + 0.742 + ], + "angle": 0, + "content": "[84] T. Hu, Z. Pu, X. Ai, T. Qiu, J. Yi, Measuring policy distance for multi-agent reinforcement learning (2024). arXiv:2401.11257. URL https://arxiv.org/abs/2401.11257" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.742, + 0.481, + 0.831 + ], + "angle": 0, + "content": "[85] P. Sunehag, G. Lever, A. Gruslys, W. M. Czarnecki, V. Zambaldi, M. Jaderberg, M. Lanctot, N. Sonnerat, J. Z. Leibo, K. Tuyls, T. Graepel, Value-decomposition networks for cooperative multi-agent learning based on team reward, in: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '18, International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 2018, pp. 2085-2087." + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.832, + 0.481, + 0.866 + ], + "angle": 0, + "content": "[86] W. J. Yun, J. Park, J. 
Kim, Quantum multi-agent meta reinforcement learning, in: Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth" + }, + { + "type": "list", + "bbox": [ + 0.113, + 0.135, + 0.482, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.554, + 0.135, + 0.892, + 0.191 + ], + "angle": 0, + "content": "Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence, AAAI'23/IAAI'23/EAAI'23, AAAI Press, 2023. doi:10.1609/aaai.v37i9.26313. URL https://doi.org/10.1609/aaai.v37i9.26313" + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.192, + 0.892, + 0.236 + ], + "angle": 0, + "content": "[87] W. Mao, H. Qiu, C. Wang, H. Franke, Z. Kalbarczyk, R. K. Iyer, T. Basar, Multi-agent meta-reinforcement learning: sharper convergence rates with task similarity, Curran Associates Inc., Red Hook, NY, USA, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.237, + 0.892, + 0.292 + ], + "angle": 0, + "content": "[88] M. Kouzeghar, Y. Song, M. Meghjani, R. Bouffanais, Multi-target pursuit by a decentralized heterogeneous uav swarm using deep multi-agent reinforcement learning (2023). arXiv: 2303.01799. URL https://arxiv.org/abs/2303.01799" + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.293, + 0.892, + 0.348 + ], + "angle": 0, + "content": "[89] F. Gao, S. Chen, M. Li, B. Huang, Maca: a multi-agent reinforcement learning platform for collective intelligence, in: 2019 IEEE 10th International Conference on Software Engineering and Service Science (ICSESS), 2019, pp. 108-111. doi:10.1109/ICSESS47205.2019.9040781." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.349, + 0.892, + 0.393 + ], + "angle": 0, + "content": "[90] S. Qi, S. Zhang, X. Hou, J. Zhang, X. Wang, J. Xiao, Efficient distributed framework for collaborative multi-agent reinforcement learning (2022). arXiv:2205.05248. URL https://arxiv.org/abs/2205.05248" + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.394, + 0.892, + 0.528 + ], + "angle": 0, + "content": "[91] O. Vinyals, I. Babuschkin, W. M. Czarnecki, M. Mathieu, A. Dudzik, J. Chung, D. H. Choi, R. Powell, T. Ewalds, P. Georgiev, J. Oh, D. Horgan, M. Kroiss, I. Danihelka, A. Huang, L. Sifre, T. Cai, J. P. Agapiou, M. Jaderberg, A. S. Vezhnevets, R. Leblond, T. Pohlen, V. Dalibard, D. Budden, Y. Sulsky, J. Molloy, T. L. Paine, C. Gulcehre, Z. Wang, T. Pfaff, Y. Wu, R. Ring, D. Yogatama, D. Wünsch, K. McKinney, O. Smith, T. Schaul, T. Lillicrap, K. Kavukcuoglu, D. Hassabis, C. Apps, D. Silver, Grandmaster level in starcraft ii using multi-agent reinforcement learning, Nature 575 (7782) (2019) 350-354. doi:10.1038/s41586-019-1724-z. URL https://doi.org/10.1038/s41586-019-1724-z" + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.529, + 0.892, + 0.573 + ], + "angle": 0, + "content": "[92] T. Chu, J. Wang, L. Codeca, Z. Li, Multi-agent deep reinforcement learning for large-scale traffic signal control, IEEE Transactions on Intelligent Transportation Systems 21 (3) (2020) 1086-1095. doi:10.1109/TITS.2019.2901791." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.574, + 0.892, + 0.629 + ], + "angle": 0, + "content": "[93] Z. Lv, L. Xiao, Y. Du, G. Niu, C. Xing, W. Xu, Multiagent reinforcement learning based uav swarm communications against jamming, Trans. Wireless. Comm. 22 (12) (2023) 9063-9075. doi:10.1109/TWC.2023.3268082. 
URL https://doi.org/10.1109/TWC.2023.3268082" + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.63, + 0.892, + 0.685 + ], + "angle": 0, + "content": "[94] R.-E. P. Mircea-Bogdan Radac, R.-C. Roman, Model-free control performance improvement using virtual reference feedback tuning and reinforcement q-learning, International Journal of Systems Science 48 (5) (2017) 1071-1083. doi:10.1080/00207721.2016.1236423." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.686, + 0.892, + 0.731 + ], + "angle": 0, + "content": "[95] Z. Liu, X. Yang, S. Sun, L. Qian, L. Wan, X. Chen, X. Lan, Grounded answers for multi-agent decision-making problem through generative world model, in: Advances in Neural Information Processing Systems (NeurIPS), 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.731, + 0.892, + 0.787 + ], + "angle": 0, + "content": "[96] H. Zhang, W. Du, J. Shan, Q. Zhou, Y. Du, J. B. Tenenbaum, T. Shu, C. Gan, Building cooperative embodied agents modularly with large language models, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=EnXJfQqyOK" + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.788, + 0.892, + 0.832 + ], + "angle": 0, + "content": "[97] X. Hou, M. Yang, W. Jiao, X. Wang, Z. Tu, W. X. Zhao, Coact: A global-local hierarchy for autonomous agent collaboration (2024). arXiv:2406.13381. URL https://arxiv.org/abs/2406.13381" + }, + { + "type": "ref_text", + "bbox": [ + 0.526, + 0.833, + 0.892, + 0.866 + ], + "angle": 0, + "content": "[98] X. Puig, K. K. Ra, M. Boben, J. Li, T. Wang, S. Fidler, A. Torralba, Virtualhome: Simulating household activities via programs, 2018 IEEE/CVF Conference on Computer Vision" + }, + { + "type": "list", + "bbox": [ + 0.526, + 0.135, + 0.892, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.88, + 0.51, + 0.89 + ], + "angle": 0, + "content": "44" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.135, + 0.382, + 0.146 + ], + "angle": 0, + "content": "and Pattern Recognition (2018) 8494-8502." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.147, + 0.478, + 0.168 + ], + "angle": 0, + "content": "URL https://api_semanticscholar.org/CorpusID: 49317780" + }, + { + "type": "ref_text", + "bbox": [ + 0.113, + 0.169, + 0.48, + 0.236 + ], + "angle": 0, + "content": "[99] X. Puig, T. Shu, S. Li, Z. Wang, Y.-H. Liao, J. B. Tenenbaum, S. Fidler, A. Torralba, Watch-and-help: A challenge for social perception and human-{ai} collaboration, in: International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=w_7JMpGZRhO" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.237, + 0.48, + 0.303 + ], + "angle": 0, + "content": "[100] D. Gao, Z. Li, X. Pan, W. Kuang, Z. Ma, B. Qian, F. Wei, W. Zhang, Y. Xie, D. Chen, L. Yao, H. Peng, Z. Zhang, L. Zhu, C. Cheng, H. Shi, Y. Li, B. Ding, J. Zhou, Agentscope: A flexible yet robust multi-agent platform (2024). arXiv:2402.14034. URL https://arxiv.org/abs/2402.14034" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.304, + 0.48, + 0.371 + ], + "angle": 0, + "content": "[101] Z. Xiao, D. Zhang, Y. Wu, L. Xu, Y. J. Wang, X. Han, X. Fu, T. Zhong, J. Zeng, M. Song, G. Chen, Chain-of-experts: When LLMs meet complex operations research problems, in: The Twelfth International Conference on Learning Representations, 2024. 
URL https://openreview.net/forum?id=HobyL1B9CZ" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.372, + 0.48, + 0.415 + ], + "angle": 0, + "content": "[102] C. Wang, W. Luo, Q. Chen, H. Mai, J. Guo, S. Dong, X. M. Xuan, Z. Li, L. Ma, S. Gao, Mllm-tool: A multimodal large language model for tool agent learning, arXiv preprint arXiv:2401.10727 (2024)." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.416, + 0.48, + 0.461 + ], + "angle": 0, + "content": "[103] Q. Wu, G. Bansal, J. Zhang, Y. Wu, S. Zhang, E. E. Zhu, B. Li, L. Jiang, X. Zhang, C. Wang, Autogen: Enabling next-gen llm applications via multi-agent conversation, Tech. Rep. MSR-TR-2023-33, Microsoft (August 2023)." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.461, + 0.501, + 0.574 + ], + "angle": 0, + "content": "[104] M. Wen, J. Kuba, R. Lin, W. Zhang, Y. Wen, J. Wang, Y. Yang, Multi-agent reinforcement learning is a sequence modeling problem, in: S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh (Eds.), Advances in Neural Information Processing Systems, Vol. 35, Curran Associates, Inc., 2022, pp. 16509-16521. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/69413f87e5a34897cd010ca698097d0a-Paper-Conference.pdf" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.574, + 0.48, + 0.641 + ], + "angle": 0, + "content": "[105] W. Chen, Y. Su, J. Zuo, C. Yang, C. Yuan, C.-M. Chan, H. Yu, Y. Lu, Y.-H. Hung, C. Qian, Y. Qin, X. Cong, R. Xie, Z. Liu, M. Sun, J. Zhou, Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=EHg5GDnyq1" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.642, + 0.48, + 0.686 + ], + "angle": 0, + "content": "[106] Z. Liu, Y. Zhang, P. Li, Y. Liu, D. Yang, Dynamic LLM-agent network: An LLM-agent collaboration framework with agent team optimization (2024). URL https://openreview.net/forum?id=i43XCU54Br" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.686, + 0.48, + 0.764 + ], + "angle": 0, + "content": "[107] G. Chen, S. Dong, Y. Shu, G. Zhang, J. Sesay, B. Karlsson, J. Fu, Y. Shi, Autoagents: A framework for automatic agent generation, in: K. Larson (Ed.), Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24, International Joint Conferences on Artificial Intelligence Organization, 2024, pp. 22–30, main Track. doi:10.24963/ijcai.2024/3." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.765, + 0.48, + 0.853 + ], + "angle": 0, + "content": "URL https://doi.org/10.24963/ijcai.2024/3 \n[108] S. Hong, M. Zhuge, J. Chen, X. Zheng, Y. Cheng, J. Wang, C. Zhang, Z. Wang, S. K. S. Yau, Z. Lin, L. Zhou, C. Ran, L. Xiao, C. Wu, J. Schmidhuber, MetaGPT: Meta programming for a multi-agent collaborative framework, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=VtmBAGCN7o" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.854, + 0.48, + 0.865 + ], + "angle": 0, + "content": "[109] S. Hong, Y. Lin, B. Liu, B. Liu, B. Wu, D. Li, J. Chen," + }, + { + "type": "list", + "bbox": [ + 0.107, + 0.135, + 0.501, + 0.865 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.554, + 0.135, + 0.892, + 0.18 + ], + "angle": 0, + "content": "J. Zhang, J. Wang, L. Zhang, L. Zhang, M. Yang, M. Zhuge, T. Guo, T. Zhou, W. Tao, W. Wang, X. Tang, X. Lu, X. Zheng, X. Liang, Y. Fei, Y. Cheng, Z. 
Xu, C. Wu, Data interpreter: An llm agent for data science (2024). arXiv:2402.18679." + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.181, + 0.892, + 0.202 + ], + "angle": 0, + "content": "[110] X. Team, Xagent: An autonomous agent for complex task solving (2023)." + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.203, + 0.892, + 0.258 + ], + "angle": 0, + "content": "[111] J. Wang, Z. Duan, Intelligent spark agents: A modular lang-graph framework for scalable, visualized, and enhanced big data machine learning workflows (2024). arXiv:2412.01490. URL https://arxiv.org/abs/2412.01490" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.259, + 0.892, + 0.303 + ], + "angle": 0, + "content": "[112] J. Wang, Z. Duan, Agent ai with langgraph: A modular framework for enhancing machine translation using large language models (2024). arXiv:2412.03801. URL https://arxiv.org/abs/2412.03801" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.304, + 0.892, + 0.359 + ], + "angle": 0, + "content": "[113] Y. Zheng, Z. Xing, Q. Zhang, B. Jin, P. Li, Y. Zheng, Z. Xia, K. Zhan, X. Lang, Y. Chen, D. Zhao, Planagent: A multimodal large language agent for closed-loop vehicle motion planning (2024). arXiv:2406.01587. URL https://arxiv.org/abs/2406.01587" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.36, + 0.892, + 0.393 + ], + "angle": 0, + "content": "[114] L. Zhang, Z. Ji, B. Chen, Crew: Facilitating human-ai teaming research (2025). arXiv:2408.00170. URL https://arxiv.org/abs/2408.00170" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.394, + 0.892, + 0.438 + ], + "angle": 0, + "content": "[115] Z. Duan, J. Wang, Exploration of llm multi-agent application implementation based on langgraph+crewai (2024). arXiv: 2411.18241. URL https://arxiv.org/abs/2411.18241" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.439, + 0.892, + 0.495 + ], + "angle": 0, + "content": "[116] Y. Cao, H. Zhao, Y. Cheng, T. Shu, Y. Chen, G. Liu, G. Liang, J. Zhao, J. Yan, Y. Li, Survey on large language model-enhanced reinforcement learning: Concept, taxonomy, and methods, IEEE Transactions on Neural Networks and Learning Systems (2024) 1-21doi:10.1109/TNNLS.2024.3497992." + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.495, + 0.892, + 0.55 + ], + "angle": 0, + "content": "[117] I. Marti, V. R. Tomas, A. Saez, J. J. Martinez, A rule-based multi-agent system for road traffic management, in: 2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology, Vol. 3, 2009, pp. 595-598. doi:10.1109/WI-IAT.2009.358." + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.551, + 0.892, + 0.629 + ], + "angle": 0, + "content": "[118] V. V. Nekhai, E. Trunova, I. Bilous, I. Bohdan, M. Voitsekhovska, Fuzzy game-theoretic modeling of a multi-agent cybersecurity management system for an agricultural enterprise, in: S. Shkarlet, A. Morozov, A. Palagin, D. Vinnikov, N. Stoianov, M. Zhelezniak, V. Kazymyr (Eds.), Mathematical Modeling and Simulation of Systems, Springer International Publishing, Cham, 2022, pp. 423-434." + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.63, + 0.892, + 0.686 + ], + "angle": 0, + "content": "[119] A. Ramezani, M. R. Andalibizadeh, S. Bahrampour, H. Ramezani, B. Moshiri, Select reliable strategy in multiagent systems using fuzzy logic-based fusion, in: 2008 Second Asia International Conference on Modelling & Simulation (AMS), 2008, pp. 13-17. doi:10.1109/AMS.2008.63." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.686, + 0.892, + 0.741 + ], + "angle": 0, + "content": "[120] M. E. H. Charaf, M. Benattou, S. Azzouzi, A rule-based multi-agent system for testing distributed applications, in: 2012 International Conference on Multimedia Computing and Systems, 2012, pp. 967-972. doi:10.1109/ICMCS.2012.6320205." + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.742, + 0.892, + 0.798 + ], + "angle": 0, + "content": "[121] A. Daeichian, A. Haghani, Fuzzy q-learning-based multi-agent system for intelligent traffic control by a game theory approach, Arabian Journal for Science and Engineering 43 (6) (2018) 3241-3247. doi:10.1007/s13369-017-3018-9. URL https://doi.org/10.1007/s13369-017-3018-9" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.799, + 0.892, + 0.831 + ], + "angle": 0, + "content": "[122] C. Amato, A first introduction to cooperative multi-agent reinforcement learning (2024). arXiv:2405.06161. URL https://arxiv.org/abs/2405.06161" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.832, + 0.892, + 0.865 + ], + "angle": 0, + "content": "[123] Y. Zhou, S. Liu, Y. Qing, K. Chen, T. Zheng, Y. Huang, J. Song, M. Song, Is centralized training with decentralized execution framework centralized enough for marl? (2023)." + }, + { + "type": "list", + "bbox": [ + 0.518, + 0.135, + 0.892, + 0.865 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.88, + 0.508, + 0.89 + ], + "angle": 0, + "content": "45" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.135, + 0.264, + 0.145 + ], + "angle": 0, + "content": "arXiv:2305.17352." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.147, + 0.405, + 0.157 + ], + "angle": 0, + "content": "URL https://arxiv.org/abs/2305.17352" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.159, + 0.482, + 0.202 + ], + "angle": 0, + "content": "[124] C. Amato, An introduction to centralized training for decentralized execution in cooperative multi-agent reinforcement learning (2024). arXiv:2409.03052. URL https://arxiv.org/abs/2409.03052" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.203, + 0.482, + 0.281 + ], + "angle": 0, + "content": "[125] P. K. Sharma, E. G. Zaroukian, R. Fernandez, A. Basak, D. E. Asher, Survey of recent multi-agent reinforcement learning algorithms utilizing centralized training, in: T. Pham, L. Solomon, M. E. Hohil (Eds.), Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications III, SPIE, 2021, p. 84. doi:10.1117/12.2585808. URL http://dx.doi.org/10.1117/12.2585808" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.281, + 0.482, + 0.337 + ], + "angle": 0, + "content": "[126] J. K. Gupta, M. Egorov, M. Kochenderfer, Cooperative multiagent control using deep reinforcement learning, in: G. Sukthankar, J. A. Rodriguez-Aguilar (Eds.), Autonomous Agents and Multiagent Systems, Springer International Publishing, Cham, 2017, pp. 66-83." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.338, + 0.482, + 0.393 + ], + "angle": 0, + "content": "[127] C. Claus, C. Boutilier, The dynamics of reinforcement learning in cooperative multiagent systems, in: Proceedings of the fifteenth national/tenth conference on Artificial intelligence/Innovative applications of artificial intelligence, AAAI Press, 1998, pp. 746-752." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.394, + 0.482, + 0.45 + ], + "angle": 0, + "content": "[128] M. Lauer, M. A. 
Riedmiller, An algorithm for distributed reinforcement learning in cooperative multi-agent systems, in: Proceedings of the Seventeenth International Conference on Machine Learning, ICML '00, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2000, p. 535-542." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.451, + 0.482, + 0.517 + ], + "angle": 0, + "content": "[129] M. Bowling, M. Veloso, Multiagent learning using a variable learning rate, Artificial Intelligence 136 (2) (2002) 215-250. doi:https://doi.org/10.1016/S0004-3702(02)00121-2. URL https://www.sciencedirect.com/science/article/pii/S0004370202001212" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.518, + 0.482, + 0.583 + ], + "angle": 0, + "content": "[130] T. Ikeda, T. Shibuya, Centralized training with decentralized execution reinforcement learning for cooperative multi-agent systems with communication delay, in: 2022 61st Annual Conference of the Society of Instrument and Control Engineers (SICE), 2022, pp. 135-140. doi:10.23919/SICE56594.2022.9905866." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.584, + 0.482, + 0.641 + ], + "angle": 0, + "content": "[131] Y. Zhou, S. Liu, Y. Qing, K. Chen, T. Zheng, Y. Huang, J. Song, M. Song, Is centralized training with decentralized execution framework centralized enough for marl? (2023). arXiv:2305.17352. URL https://arxiv.org/abs/2305.17352" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.642, + 0.482, + 0.707 + ], + "angle": 0, + "content": "[132] R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, I. Mordatch, Multi-agent actor-critic for mixed cooperative-competitive environments, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, Curran Associates Inc., Red Hook, NY, USA, 2017, pp. 6382-6393." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.708, + 0.482, + 0.798 + ], + "angle": 0, + "content": "[133] C. Yu, A. Velu, E. Vinitsky, J. Gao, Y. Wang, A. Bayen, Y. Wu, The surprising effectiveness of ppo in cooperative multi-agent games, in: Advances in Neural Information Processing Systems, Vol. 35, 2022, pp. 29914-29928. URL https://proceedings.neurips.cc/paper_files/paper/2022/bitize/9c1535a02f0ce079433344e14d910597-Paper-Datasets and_Benchmarks.pdf" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.799, + 0.482, + 0.842 + ], + "angle": 0, + "content": "[134] M. Samvelyan, T. Rashid, C. S. de Witt, G. Farquhar, N. Nardelli, T. G. J. Rudner, C.-M. Hung, P. H. S. Torr, J. Foerster, S. Whiteson, The StarCraft Multi-Agent Challenge, CoRR abs/1902.04043 (2019)." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.843, + 0.482, + 0.866 + ], + "angle": 0, + "content": "[135] B. Ellis, J. Cook, S. Moalla, M. Samvelyan, M. Sun, A. Mahajan, J. Foerster, S. Whiteson, Smacv2: An improved" + }, + { + "type": "list", + "bbox": [ + 0.107, + 0.135, + 0.482, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.553, + 0.135, + 0.894, + 0.236 + ], + "angle": 0, + "content": "benchmark for cooperative multi-agent reinforcement learning, in: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine (Eds.), Advances in Neural Information Processing Systems, Vol. 36, Curran Associates, Inc., 2023, pp. 37567-37593. \nURL https://proceedings.neurips.cc/paper_files/paper/2023/file/764c18ad230f9e7bf6a77ffc2312c55e-Paper-Datasets and_Benchmarks.pdf" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.237, + 0.894, + 0.28 + ], + "angle": 0, + "content": "[136] R. 
Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, I. Mordatch, Multi-agent actor-critic for mixed cooperative-competitive environments, Neural Information Processing Systems (NIPS) (2017)." + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.281, + 0.894, + 0.336 + ], + "angle": 0, + "content": "[137] T. Malloy, C. R. Sims, T. Klinger, M. Liu, M. Riemer, G. Tesauro, Capacity-limited decentralized actor-critic for multi-agent games, in: 2021 IEEE Conference on Games (CoG), 2021, pp. 1-8. doi:10.1109/CoG52621.2021.9619081." + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.337, + 0.893, + 0.405 + ], + "angle": 0, + "content": "[138] J. Jiang, Z. Lu, Learning attentional communication for multi-agent cooperation, in: Advances in Neural Information Processing Systems, Vol. 31, 2018, pp. 7254-7264. URL https://proceedings.neurips.cc/paper/2018/file/6a8018b3a00b69c008601b8becae392b-Paper.pdf" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.405, + 0.893, + 0.518 + ], + "angle": 0, + "content": "[139] C. Sun, Z. Zang, J. Li, J. Li, X. Xu, R. Wang, C. Zheng, T2mac: targeted and trusted multi-agent communication through selective engagement and evidence-driven integration, in: Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence and Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence and Fourteenth Symposium on Educational Advances in Artificial Intelligence, AAAI'24/IAAI'24/EAAI'24, AAAI Press, 2025. doi:10.1609/aaai.v38i13.29438. URL https://doi.org/10.1609/aaai.v38i13.29438" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.518, + 0.893, + 0.551 + ], + "angle": 0, + "content": "[140] OpenAI, Gpt-4 technical report, arXiv preprint arXiv:2303.08774 (2023). URL https://arxiv.org/abs/2303.08774" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.552, + 0.893, + 0.64 + ], + "angle": 0, + "content": "[141] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. Christiano, J. Leike, R. Lowe, Training language models to follow instructions with human feedback, in: Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Curran Associates Inc., Red Hook, NY, USA, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.641, + 0.893, + 0.741 + ], + "angle": 0, + "content": "[142] R. Mao, G. Chen, X. Zhang, F. Guerin, E. Cambria, GPTEval: A survey on assessments of ChatGPT and GPT-4, in: N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, N. Xue (Eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), ELRA and ICCL, Torino, Italia, 2024, pp. 7844-7866. URL https://aclanthology.org/2024.lrec-main.693" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.742, + 0.893, + 0.798 + ], + "angle": 0, + "content": "[143] A. Grattafori, A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Vaughan, A. Yang, A. Fan, A. Goyal, A. Hartshorn, A. Yang, et al., The llama 3 herd of models (2024). arXiv:2407.21783. URL https://arxiv.org/abs/2407.21783" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.799, + 0.893, + 0.866 + ], + "angle": 0, + "content": "[144] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Roziere, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, G. 
Lample, Llama: Open and efficient foundation language models (2023). arXiv:2302.13971. URL https://arxiv.org/abs/2302.13971" + }, + { + "type": "list", + "bbox": [ + 0.518, + 0.135, + 0.894, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.88, + 0.509, + 0.89 + ], + "angle": 0, + "content": "46" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.134, + 0.482, + 0.157 + ], + "angle": 0, + "content": "[145] G. Team, Gemini: A family of highly capable multimodal models (2024). arXiv:2312.11805." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.158, + 0.405, + 0.169 + ], + "angle": 0, + "content": "URL https://arxiv.org/abs/2312.11805" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.17, + 0.48, + 0.213 + ], + "angle": 0, + "content": "[146] V. Dibia, J. Chen, G. Bansal, S. Syed, A. Fourney, E. Zhu, C. Wang, S. Amershi, Autogen studio: A no-code developer tool for building and debugging multi-agent systems (2024). arXiv:2408.15247." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.214, + 0.404, + 0.224 + ], + "angle": 0, + "content": "URL https://arxiv.org/abs/2408.15247" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.225, + 0.481, + 0.303 + ], + "angle": 0, + "content": "[147] C. Gan, S. Zhou, J. Schwartz, S. Alter, A. Bhandwaldar, D. Gutfreund, D. L. Yamins, J. J. DiCarlo, J. McDermott, A. Torralba, J. B. Tenenbaum, The threadworld transport challenge: A visually guided task-and-motion planning benchmark towards physically realistic embodied ai, in: 2022 International Conference on Robotics and Automation (ICRA), 2022, pp. 8847-8854. doi:10.1109/ICRA46639.2022.9812329." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.304, + 0.481, + 0.404 + ], + "angle": 0, + "content": "[148] C. Gan, J. Schwartz, S. Alter, D. Mrowca, M. Schrimpf, J. Traer, J. De Freitas, J. Kubilius, A. Bhandwaldar, N. Haber, M. Sano, K. Kim, E. Wang, M. Lingelbach, A. Curtis, K. Feigelis, D. Bear, D. Gutfreund, D. Cox, A. Torralba, J. J. DiCarlo, J. Tenenbaum, J. McDermott, D. Yamins, Threadworld: A platform for interactive multi-modal physical simulation, in: J. Vanschoeren, S. Yeung (Eds.), Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, Vol. 1, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.405, + 0.478, + 0.449 + ], + "angle": 0, + "content": "URL https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/735b90b4568125ed6c3f678819b6e058-Paper-round1.pdf" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.45, + 0.481, + 0.528 + ], + "angle": 0, + "content": "[149] R. Gong, Q. Huang, X. Ma, Y. Noda, Z. Durante, Z. Zheng, D. Terzopoulos, L. Fei-Fei, J. Gao, H. Vo, MindAgent: Emergent gaming interaction, in: K. Duh, H. Gomez, S. Bethard (Eds.), Findings of the Association for Computational Linguistics: NAACL 2024, Association for Computational Linguistics, Mexico City, Mexico, 2024, pp. 3154-3183. doi:10.18653/v1/2024-findings-naacl.200." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.529, + 0.478, + 0.55 + ], + "angle": 0, + "content": "URL https://aclanthology.org/2024.findings-naacl.200" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.551, + 0.481, + 0.606 + ], + "angle": 0, + "content": "[150] K. Shao, Y. Zhu, Z. Tang, D. Zhao, Cooperative multiagent deep reinforcement learning with counterfactual reward, in: 2020 International Joint Conference on Neural Networks (IJCNN), 2020, pp. 1-8. 
doi:10.1109/IJCNN48605.2020.9207169." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.608, + 0.481, + 0.674 + ], + "angle": 0, + "content": "[151] K. R. Chandra, S. Borugadda, Multi agent deep reinforcement learning with deep q-network based energy efficiency and resource allocation in nomawireless systems, in: 2023 Second International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT), 2023, pp. 1-8. doi:10.1109/ICEEICT56924.2023.10157052." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.675, + 0.48, + 0.707 + ], + "angle": 0, + "content": "[152] A. M. Hafiz, G. M. Bhat, Deep q-network based multi-agent reinforcement learning with binary action agents (2020). arXiv: 2008.04109." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.709, + 0.404, + 0.719 + ], + "angle": 0, + "content": "URL https://arxiv.org/abs/2008.04109" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.72, + 0.481, + 0.764 + ], + "angle": 0, + "content": "[153] C. S. de Witt, T. Gupta, D. Makoviichuk, V. Makoviychuk, P. H. S. Torr, M. Sun, S. Whiteson, Is independent learning all you need in the starcraft multi-agent challenge? (2020). arXiv:2011.09533." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.766, + 0.404, + 0.776 + ], + "angle": 0, + "content": "URL https://arxiv.org/abs/2011.09533" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.777, + 0.481, + 0.853 + ], + "angle": 0, + "content": "[154] O. Vinyals, T. Ewalds, S. Bartunov, P. Georgiev, A. S. Vezhnevets, M. Yeo, A. Makhzani, H. Kuttler, J. Agapiou, J. Schrittwieser, J. Quan, S. Gaffney, S. Petersen, K. Simonyan, T. Schaul, H. van Hasselt, D. Silver, T. Lillicrap, K. Calderone, P. Keet, A. Brunasso, D. Lawrence, A. Ekermo, J. Repp, R. Tsing, Starcraft ii: A new challenge for reinforcement learning (2017). arXiv:1708.04782." + }, + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.855, + 0.404, + 0.866 + ], + "angle": 0, + "content": "URL https://arxiv.org/abs/1708.04782" + }, + { + "type": "list", + "bbox": [ + 0.107, + 0.134, + 0.482, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.134, + 0.892, + 0.191 + ], + "angle": 0, + "content": "[155] H. Shen, K. Zhang, M. Hong, T. Chen, Towards understanding asynchronous advantage actor-critic: Convergence and linear speedup, IEEE Transactions on Signal Processing 71 (2023) 2579-2594. doi:10.1109/tsp.2023.3268475. URL http://dx.doi.org/10.1109/TSP.2023.3268475" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.192, + 0.892, + 0.247 + ], + "angle": 0, + "content": "[156] S. Li, Y. Wu, X. Cui, H. Dong, F. Fang, S. Russell, Robust multi-agent reinforcement learning via minimax deep deterministic policy gradient, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, AAAI Press, 2019, pp. 4213-4220. doi:10.1609/aaai.v33i01.33014213." + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.248, + 0.892, + 0.325 + ], + "angle": 0, + "content": "[157] J. N. Foerster, G. Farquhar, T. Afouras, N. Nardelli, S. Whiteson, Counterfactual multi-agent policy gradients, in: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18, AAAI Press, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.327, + 0.892, + 0.371 + ], + "angle": 0, + "content": "[158] J. Schulman, F. 
Wolski, P. Dhariwal, A. Radford, O. Klimov, Proximal policy optimization algorithms (2017). arXiv: 1707.06347. URL https://arxiv.org/abs/1707.06347" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.372, + 0.892, + 0.427 + ], + "angle": 0, + "content": "[159] J. Schulman, S. Levine, P. Moritz, M. Jordan, P. Abbeel, Trust region policy optimization, in: Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, JMLR.org, 2015, p. 1889-1897." + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.428, + 0.892, + 0.483 + ], + "angle": 0, + "content": "[160] J. G. Kuba, R. Chen, M. Wen, Y. Wen, F. Sun, J. Wang, Y. Yang, Trust region policy optimisation in multi-agent reinforcement learning, in: International Conference on Learning Representations, 2022. URL https://arxiv.org/abs/2109.11251" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.485, + 0.892, + 0.539 + ], + "angle": 0, + "content": "[161] S. Sukhbaatar, A. Szlam, R. Fergus, Learning multiagent communication with backpropagation, in: Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, Curran Associates Inc., Red Hook, NY, USA, 2016, pp. 2252-2260." + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.54, + 0.892, + 0.596 + ], + "angle": 0, + "content": "[162] L. Matignon, G. J. Laurent, N. Le Fort-Piat, Hysteresic q-learning: an algorithm for decentralized reinforcement learning in cooperative multi-agent teams, in: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007, pp. 64-69. doi:10.1109/IROS.2007.4399095." + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.597, + 0.892, + 0.629 + ], + "angle": 0, + "content": "[163] E. Wei, S. Luke, Lenient learning in independent-learner stochastic cooperative games, J. Mach. Learn. Res. 17 (1) (2016) 2914–2955." + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.63, + 0.892, + 0.663 + ], + "angle": 0, + "content": "[164] M. Hausknecht, P. Stone, Deep recurrent q-learning for partially observable mdps (2017). arXiv:1507.06527. URL https://arxiv.org/abs/1507.06527" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.664, + 0.892, + 0.731 + ], + "angle": 0, + "content": "[165] A. Tampuu, T. Matiisen, D. Kodelja, I. Kuzovkin, K. Korjus, J. Aru, J. Aru, R. Vicente, Multiagent cooperation and competition with deep reinforcement learning, PLOS ONE 12 (4) (2017) 1-15. doi:10.1371/journal.pone.0172395. URL https://doi.org/10.1371/journal.pone.0172395" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.731, + 0.892, + 0.787 + ], + "angle": 0, + "content": "[166] S. Omidshafiei, J. Pazis, C. Amato, J. P. How, J. Vian, Deep decentralized multi-task multi-agent reinforcement learning under partial observability, in: Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, JMLR.org, 2017, p. 2681-2690." + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.788, + 0.892, + 0.832 + ], + "angle": 0, + "content": "[167] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, W. Zaremba, Openai gym (2016). arXiv: 1606.01540. URL https://arxiv.org/abs/1606.01540" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.833, + 0.892, + 0.866 + ], + "angle": 0, + "content": "[168] J. N. Foerster, Y. M. Assael, N. de Freitas, S. Whiteson, Learning to communicate with deep multi-agent reinforcement learning (2016). arXiv:1605.06676." 
+ }, + { + "type": "list", + "bbox": [ + 0.518, + 0.134, + 0.892, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.88, + 0.508, + 0.89 + ], + "angle": 0, + "content": "47" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.135, + 0.404, + 0.146 + ], + "angle": 0, + "content": "URL https://arxiv.org/abs/1605.06676" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.147, + 0.482, + 0.191 + ], + "angle": 0, + "content": "[169] J. N. Foerster, Y. M. Assael, N. de Freitas, S. Whiteson, Learning to communicate to solve riddles with deep distributed recurrent q-networks (2016). arXiv:1602.02672. URL https://arxiv.org/abs/1602.02672" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.192, + 0.482, + 0.247 + ], + "angle": 0, + "content": "[170] L. Fan, Y.-y. Liu, S. Zhang, Partially observable multi-agent rl with enhanced deep distributed recurrent q-network, in: 2018 5th International Conference on Information Science and Control Engineering (ICISCE), 2018, pp. 375-379. doi:10.1109/ICISCE.2018.00085." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.248, + 0.482, + 0.326 + ], + "angle": 0, + "content": "[171] S. Sukhbaatar, A. Szlam, R. Fergus, Learning multiagent communication with backpropagation, in: Advances in Neural Information Processing Systems, Vol. 29, 2016, pp. 2244-2252. URL https://proceedings.neurips.cc/paper/2016/file/55b1927fdafe39c48e5b73b5d61ea60-Paper.pdf" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.327, + 0.482, + 0.383 + ], + "angle": 0, + "content": "[172] P. Peng, Y. Wen, Y. Yang, Q. Yuan, Z. Tang, H. Long, J. Wang, Multiagent bidirectionally-coordinated nets: Emergence of human-level coordination in learning to play starcraft combat games (2017). arXiv:1703.10069. URL https://arxiv.org/abs/1703.10069" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.383, + 0.482, + 0.45 + ], + "angle": 0, + "content": "[173] G. Wang, D. Shi, C. Xue, H. Jiang, Y. Wang, Bic-ddpg: Bidirectionally-coordinated nets for deep multi-agent reinforcement learning, in: H. Gao, X. Wang, M. Iqbal, Y. Yin, J. Yin, N. Gu (Eds.), Collaborative Computing: Networking, Applications and Worksharing, Springer International Publishing, Cham, 2021, pp. 337-354." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.45, + 0.482, + 0.495 + ], + "angle": 0, + "content": "[174] D. Fan, H. Shen, L. Dong, Multi-agent distributed deep deterministic policy gradient for partially observable tracking, Actuators 10 (10) (2021). doi:10.3390/act10100268. URL https://www.mdpi.com/2076-0825/10/10/268" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.496, + 0.482, + 0.573 + ], + "angle": 0, + "content": "[175] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, K. Kavukcuoglu, Asynchronous methods for deep reinforcement learning, in: Proceedings of The 33rd International Conference on Machine Learning, PMLR, 2016, pp. 1928-1937. URL https://proceedings.mlr.press/v48/mniha16.html" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.574, + 0.482, + 0.663 + ], + "angle": 0, + "content": "[176] L. Yuan, C. Wang, J. Wang, F. Zhang, F. Chen, C. Guan, Z. Zhang, C. Zhang, Y. Yu, Multi-agent concentrative coordination with decentralized task representation, in: L. D. Raedt (Ed.), Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, International Joint Conferences on Artificial Intelligence Organization, 2022, pp. 599–605, main Track. 
doi:10.24963/ijcai.2022/85. URL https://doi.org/10.24963/ijcai.2022/85" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.664, + 0.482, + 0.731 + ], + "angle": 0, + "content": "[177] S. Vanneste, A. Vanneste, K. Mets, T. D. Schepper, A. Anwar, S. Mercelis, P. Hellinckx, Learning to communicate using a communication critic and counterfactual reasoning, Neural Computing and Applications N/A (N/A) (2025) N/A. doi: 10.1007/s00521-024-10598-0. URL https://doi.org/10.1007/s00521-024-10598-0" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.731, + 0.482, + 0.787 + ], + "angle": 0, + "content": "[178] A. Singh, T. Jain, S. Sukhbaatar, Learning when to communicate at scale in multiagent cooperative and competitive tasks, in: International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bk1Wt24tvH" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.788, + 0.482, + 0.844 + ], + "angle": 0, + "content": "[179] D. Das, R. L. Rath, T. Singh, S. Mishra, V. Malik, R. Sobti, B. Brahma, LIm-based custom chatbot using langchain, in: A. E. Hassanien, S. Anand, A. Jaiswal, P. Kumar (Eds.), Innovative Computing and Communications, Springer Nature Singapore, Singapore, 2024, pp. 257-267." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.845, + 0.482, + 0.866 + ], + "angle": 0, + "content": "[180] Q. Sun, Z. Yin, X. Li, Z. Wu, X. Qiu, L. Kong, Corex: Pushing the boundaries of complex reasoning through multi-model" + }, + { + "type": "list", + "bbox": [ + 0.107, + 0.135, + 0.482, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.554, + 0.135, + 0.793, + 0.146 + ], + "angle": 0, + "content": "collaboration (2023). arXiv:2310.00280." + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.147, + 0.894, + 0.225 + ], + "angle": 0, + "content": "[181] J. S. Park, L. Popowski, C. Cai, M. R. Morris, P. Liang, M. S. Bernstein, Social simulacra: Creating populated prototypes for social computing systems, in: Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, UIST '22, Association for Computing Machinery, New York, NY, USA, 2022. doi:10.1145/3526113.3545616. URL https://doi.org/10.1145/3526113.3545616" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.226, + 0.894, + 0.292 + ], + "angle": 0, + "content": "[182] J. Terry, B. Black, N. Grammel, M. Jayakumar, A. Hari, R. Sullivan, L. Santos, R. Perez, C. Horsch, C. Dieffendahl, et al., Pettingzoo: A standard api for multi-agent reinforcement learning, in: Proceedings of the 35th Conference on Neural Information Processing Systems, in: Advances in Neural Information Processing Systems, Vol. 34, 2021, pp. 15032-15043." + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.293, + 0.893, + 0.382 + ], + "angle": 0, + "content": "[183] S. Huang, S. Ontañón, C. Bamford, L. Grela, Gym-µrts: Toward affordable full game real-time strategy games research with deep reinforcement learning, in: 2021 IEEE Conference on Games (CoG), Copenhagen, Denmark, August 17-20, 2021, IEEE, 2021, pp. 1-8. doi:10.1109/CoG52621.2021.9619076. URL https://doi.org/10.1109/CoG52621.2021.9619076" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.383, + 0.893, + 0.461 + ], + "angle": 0, + "content": "[184] L. Zheng, J. Yang, H. Cai, M. Zhou, W. Zhang, J. Wang, Y. Yu, Magent: A many-agent reinforcement learning platform for artificial collective intelligence, Proceedings of the AAAI Conference on Artificial Intelligence 32 (1) (2018). doi:10.1609/aaai.v32i1.11371. 
URL https://ojs.aaii.org/index.php/AAAI/article/view/11371" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.462, + 0.893, + 0.538 + ], + "angle": 0, + "content": "[185] C. Yu, P. Wang, Dexterous manipulation for multifingered robotic hands with reinforcement learning: A review, Frontiers in Neurorobotics 16 (2022). doi:10.3389/fnbot.2022.861825. URL https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2022.861825" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.54, + 0.893, + 0.629 + ], + "angle": 0, + "content": "[186] O. M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, J. Schneider, S. Sidor, J. Tobin, P. Welinder, L. Weng, W. Zaremba, Learning dexterous in-hand manipulation, The International Journal of Robotics Research 39 (1) (2020) 3-20. arXiv:https://doi.org/10.1177/0278364919887447. doi:10.1177/0278364919887447. URL https://doi.org/10.1177/0278364919887447" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.63, + 0.893, + 0.696 + ], + "angle": 0, + "content": "[187] M. Chevalier-Boisvert, B. Dai, M. Towers, R. de Lazcano, L. Willems, S. Lahlou, S. Pal, P. S. Castro, J. Terry, Minigrid & miniworld: Modular & customizable reinforcement learning environments for goal-oriented tasks (2023). arXiv: 2306.13831. URL https://arxiv.org/abs/2306.13831" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.698, + 0.893, + 0.753 + ], + "angle": 0, + "content": "[188] J. Z. Leibo, E. Duñez-Guzmán, A. S. Vezhnevets, J. P. Agapiou, P. Sunehag, R. Koster, J. Matyas, C. Beattie, I. Mordatch, T. Graepel, Scalable evaluation of multi-agent reinforcement learning with melting pot (2021). arXiv:2107.06857. URL https://arxiv.org/abs/2107.06857" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.755, + 0.893, + 0.809 + ], + "angle": 0, + "content": "[189] C. Mehlman, J. Abramov, G. Falco, Cat-and-mouse satellite dynamics: Divergent adversarial reinforcement learning for contested multi-agent space operations (2024). arXiv: 2409.17443. URL https://arxiv.org/abs/2409.17443" + }, + { + "type": "ref_text", + "bbox": [ + 0.519, + 0.81, + 0.893, + 0.866 + ], + "angle": 0, + "content": "[190] Y. Dong, X. Zhu, Z. Pan, L. Zhu, Y. Yang, VillagerAgent: A graph-based multi-agent framework for coordinating complex task dependencies in Minecraft, in: L.-W. Ku, A. Martins, V. Srikumar (Eds.), Findings of the Association for Computational Linguistics: ACL 2024, Association for Computational" + }, + { + "type": "list", + "bbox": [ + 0.519, + 0.135, + 0.894, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.88, + 0.509, + 0.89 + ], + "angle": 0, + "content": "48" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.142, + 0.134, + 0.48, + 0.18 + ], + "angle": 0, + "content": "Linguistics, Bangkok, Thailand, 2024, pp. 16290-16314. doi:10.18653/v1/2024-findings-acl.964. URL https://aclanthology.org/2024.findings-acl.964/" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.18, + 0.482, + 0.236 + ], + "angle": 0, + "content": "[191] H. Li, X. Yang, Z. Wang, X. Zhu, J. Zhou, Y. Qiao, X. Wang, H. Li, L. Lu, J. Dai, Auto mc-reward: Automated dense reward design with large language models for mycraft (2024). arXiv:2312.09238. URL https://arxiv.org/abs/2312.09238" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.236, + 0.482, + 0.304 + ], + "angle": 0, + "content": "[192] X. Zhu, Y. Chen, H. Tian, C. Tao, W. Su, C. Yang, G. Huang, B. Li, L. 
Lu, X. Wang, Y. Qiao, Z. Zhang, J. Dai, Ghost in the minecraft: Generally capable agents for open-world environments via large language models with text-based knowledge and memory (2023). arXiv:2305.17144. URL https://arxiv.org/abs/2305.17144" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.304, + 0.481, + 0.326 + ], + "angle": 0, + "content": "[193] S. C. Duncan, Minecraft, beyond construction and survival, Well Played 1 (1) (2011) 1-22." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.326, + 0.481, + 0.383 + ], + "angle": 0, + "content": "[194] A. Juliani, V.-P. Berges, E. Teng, A. Cohen, J. Harper, C. Elion, C. Goy, Y. Gao, H. Henry, M. Mattar, D. Lange, Unity: A general platform for intelligent agents (2020). arXiv:1809.02627. URL https://arxiv.org/abs/1809.02627" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.383, + 0.481, + 0.461 + ], + "angle": 0, + "content": "[195] L. Hu, J. Tang, G. Zou, Z. Li, J. Zeng, M. Li, Simulation optimization of highway hard shoulder running based on multi-agent deep deterministic policy gradient algorithm, Alexandria Engineering Journal 117 (2025) 99-115. doi:https://doi.org/10.1016/j.aej.2024.12.110. URL https://www.sciencedirect.com/science/article/pii/S1110016824017095" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.461, + 0.481, + 0.507 + ], + "angle": 0, + "content": "[196] Anonymous, A generalist hanabi agent, in: Submitted to The Thirteenth International Conference on Learning Representations, 2024, under review. URL https://openreview.net/forum?id=pCj2sLNoJq" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.507, + 0.481, + 0.551 + ], + "angle": 0, + "content": "[197] F. Bredell, H. A. Engelbrecht, J. C. Schoeman, Augmenting the action space with conventions to improve multi-agent cooperation in hanabi (2024). arXiv:2412.06333. URL https://arxiv.org/abs/2412.06333" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.551, + 0.481, + 0.607 + ], + "angle": 0, + "content": "[198] M. Kölle, Y. Erpelding, F. Ritz, T. Phan, S. Illium, C. Linnhoff-Popien, Aquarium: A comprehensive framework for exploring predator-prey dynamics through multi-agent reinforcement learning algorithms (2024). arXiv:2401.07056. URL https://arxiv.org/abs/2401.07056" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.608, + 0.481, + 0.675 + ], + "angle": 0, + "content": "[199] A. Chatterjee, M. A. Abbasi, E. Venturino, J. Zhen, M. Haque, A predator-prey model with prey refuge: under a stochastic and deterministic environment, Nonlinear Dynamics 112 (15) (2024) 13667-13693. doi:10.1007/s11071-024-09756-9. URL https://doi.org/10.1007/s11071-024-09756-9" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.675, + 0.481, + 0.719 + ], + "angle": 0, + "content": "[200] Z. Mandi, S. Jain, S. Song, Roco: Dialectic multi-robot collaboration with large language models, in: 2024 IEEE International Conference on Robotics and Automation (ICRA), 2024, pp. 286-299. doi:10.1109/ICRA57147.2024.10610855." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.72, + 0.481, + 0.786 + ], + "angle": 0, + "content": "[201] J. S. Park, J. O'Brien, C. J. Cai, M. R. Morris, P. Liang, M. S. Bernstein, Generative agents: Interactive simulacra of human behavior, in: Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST '23), ACM, ACM, 2023, p. October 2023. doi:10.1145/3586183.3606763." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.787, + 0.481, + 0.844 + ], + "angle": 0, + "content": "[202] G. Kovac, R. 
Portelas, P. F. Dominey, P.-Y. Oudeyer, The social AI school: Insights from developmental psychology towards artificial socio-cultural agents, in: First Workshop on Theory of Mind in Communicating Agents, 2023. URL https://openreview.net/forum?id=Y5r8Wa670b" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.844, + 0.481, + 0.866 + ], + "angle": 0, + "content": "[203] G. Kovač, R. Portelas, P. F. Dominey, P.-Y. Oudeyer, The socialai school: a framework leveraging developmental psychol-" + }, + { + "type": "list", + "bbox": [ + 0.107, + 0.134, + 0.482, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.553, + 0.135, + 0.893, + 0.179 + ], + "angle": 0, + "content": "ogy toward artificial socio-cultural agents, Frontiers in Neurorobotics 18, part of the Research Topic: Theory of Mind in Robots and Intelligent Systems (2024). doi:10.3389/fnbot.2024.1396359." + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.18, + 0.893, + 0.225 + ], + "angle": 0, + "content": "[204] G. Mukobi, H. Erlebach, N. Lauffer, L. Hammond, A. Chan, J. Clifton, Welfare diplomacy: Benchmarking language model cooperation (2024). URL https://openreview.net/forum?id=AKJLnDgzkm" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.225, + 0.893, + 0.281 + ], + "angle": 0, + "content": "[205] J. Boubin, C. Burley, P. Han, B. Li, B. Porter, C. Stewart, Programming and deployment of autonomous swarms using multi-agent reinforcement learning (2021). arXiv:2105.10605. URL https://arxiv.org/abs/2105.10605" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.281, + 0.893, + 0.326 + ], + "angle": 0, + "content": "[206] T. Li, F. Xie, Y. Xiong, Q. Feng, Multi-arm robot task planning for fruit harvesting using multi-agent reinforcement learning (2023). arXiv:2303.00460. URL https://arxiv.org/abs/2303.00460" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.326, + 0.893, + 0.371 + ], + "angle": 0, + "content": "[207] A. Mahajan, S. Hegde, E. Shay, D. Wu, A. Prins, Comparative analysis of multi-agent reinforcement learning policies for crop planning decision support (2024). arXiv:2412.02057. URL https://arxiv.org/abs/2412.02057" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.371, + 0.893, + 0.427 + ], + "angle": 0, + "content": "[208] L. Benke, M. Papasimeon, T. Miller, Modelling strategic deceptive planning in adversarial multi-agent systems, in: S. Sarkadi, B. Wright, P. Masters, P. McBurney (Eds.), Deceptive AI, Springer International Publishing, Cham, 2021, pp. 76-83." + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.427, + 0.893, + 0.483 + ], + "angle": 0, + "content": "[209] A. V. Sutagundar, S. Manvi, Context aware multisensor image fusion for military sensor networks using multi-agent system, International Journal of Ad hoc, Sensor Ubiquitous Computing 2 (1) (2011) 147-167. doi:10.5121/ijasuc.2011.2113. URL http://dx.doi.org/10.5121/ijasuc.2011.2113" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.484, + 0.893, + 0.54 + ], + "angle": 0, + "content": "[210] S. Vangaru, D. Rosen, D. Green, R. Rodriguez, M. Wiecek, A. Johnson, A. M. Jones, W. C. Headley, A multi-agent reinforcement learning testbed for cognitive radio applications (2024). arXiv:2410.21521. URL https://arxiv.org/abs/2410.21521" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.54, + 0.893, + 0.585 + ], + "angle": 0, + "content": "[211] Z. Wang, L. Wang, Q. Yi, Y. Liu, A marl based multi-target tracking algorithm under jamming against radar (2024). arXiv:2412.12547. 
URL https://arxiv.org/abs/2412.12547" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.584, + 0.893, + 0.64 + ], + "angle": 0, + "content": "[212] A. Alexopoulos, T. Schmidt, E. Badreddin, Cooperative pursue in pursuit-evasion games with unmanned aerial vehicles, in: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015, pp. 4538-4543. doi:10.1109/IROS.2015.7354022." + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.641, + 0.893, + 0.708 + ], + "angle": 0, + "content": "[213] D. Luo, Z. Fan, Z. Yang, Y. Xu, Multi-uav cooperative maneuver decision-making for pursuit-evasion using improved madrl, Defence Technology 35 (2024) 187-197. doi:https://doi.org/10.1016/j.dt.2023.11.013. URL https://www.sciencedirect.com/science/article/pii/S221491472300301X" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.708, + 0.893, + 0.764 + ], + "angle": 0, + "content": "[214] D. Wang, Multi-agent reinforcement learning for safe driving in on-ramp merging of autonomous vehicles, in: 2024 14th International Conference on Cloud Computing, Data Science & Engineering (Confluence), 2024, pp. 644-651. doi:10.1109/Confluence60223.2024.10463500." + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.764, + 0.893, + 0.821 + ], + "angle": 0, + "content": "[215] N. Aboueleneen, Y. Bello, A. Albaseer, A. R. Hussein, M. Abdallah, E. Hossain, Distributed traffic control in complex dynamic roadblocks: A multi-agent deep rl approach (2024). arXiv:2501.00211. URL https://arxiv.org/abs/2501.00211" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.821, + 0.893, + 0.866 + ], + "angle": 0, + "content": "[216] J. Yu, L. Liang, C. Guo, Z. Guo, S. Jin, G. Y. Li, Heterogeneous multi-agent reinforcement learning for distributed channel access in wlans (2024). arXiv:2412.14218. URL https://arxiv.org/abs/2412.14218" + }, + { + "type": "list", + "bbox": [ + 0.518, + 0.135, + 0.893, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.489, + 0.879, + 0.509, + 0.89 + ], + "angle": 0, + "content": "49" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.135, + 0.482, + 0.18 + ], + "angle": 0, + "content": "[217] K. Sun, H. Yu, Reinforcement learning for freeway lane-change regulation via connected vehicles (2024). arXiv: 2412.04341. URL https://arxiv.org/abs/2412.04341" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.181, + 0.482, + 0.224 + ], + "angle": 0, + "content": "[218] T. Azfar, R. Ke, Traffic co-simulation framework empowered by infrastructure camera sensing and reinforcement learning (2024). arXiv:2412.03925. URL https://arxiv.org/abs/2412.03925" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.225, + 0.482, + 0.27 + ], + "angle": 0, + "content": "[219] R. Bokade, X. Jin, Offlight: An offline multi-agent reinforcement learning framework for traffic signal control (2024). arXiv:2411.06601. URL https://arxiv.org/abs/2411.06601" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.271, + 0.482, + 0.315 + ], + "angle": 0, + "content": "[220] D. K. Kwesiga, S. C. Vishnoi, A. Guin, M. Hunter, Integrating transit signal priority into multi-agent reinforcement learning based traffic signal control (2024). arXiv:2411.19359. URL https://arxiv.org/abs/2411.19359" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.316, + 0.482, + 0.392 + ], + "angle": 0, + "content": "[221] Y. Zhang, G. Zheng, Z. Liu, Q. Li, H. 
Zeng, Marlens: Understanding multi-agent reinforcement learning for traffic signal control via visual analytics, IEEE Transactions on Visualization and Computer Graphics (2024) 1-16doi:10.1109/tvcg.2024.3392587. URL http://dx.doi.org/10.1109/TVCG.2024.3392587" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.393, + 0.482, + 0.438 + ], + "angle": 0, + "content": "[222] J. Xue, D. Zhang, R. Xiong, Y. Wang, E. Liu, A two-stage based social preference recognition in multi-agent autonomous driving system (2023). arXiv:2310.03303. URL https://arxiv.org/abs/2310.03303" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.439, + 0.482, + 0.495 + ], + "angle": 0, + "content": "[223] J. Liu, P. Hang, X. Na, C. Huang, J. Sun, Cooperative decision-making for cavs at unsignalized intersections: A marl approach with attention and hierarchical game priors (2024). arXiv: 2409.05712. URL https://arxiv.org/abs/2409.05712" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.496, + 0.482, + 0.54 + ], + "angle": 0, + "content": "[224] C. Formanek, L. Beyers, C. R. Tilbury, J. P. Shock, A. Pretorius, Putting data at the centre of offline multi-agent reinforcement learning (2024). arXiv:2409.12001. URL https://arxiv.org/abs/2409.12001" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.54, + 0.482, + 0.596 + ], + "angle": 0, + "content": "[225] R. Zhang, J. Hou, F. Walter, S. Gu, J. Guan, F. Röhrbein, Y. Du, P. Cai, G. Chen, A. Knoll, Multi-agent reinforcement learning for autonomous driving: A survey (2024). arXiv: 2408.09675. URL https://arxiv.org/abs/2408.09675" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.597, + 0.482, + 0.641 + ], + "angle": 0, + "content": "[226] S. Kotoku, T. Mihana, A. Röhm, R. Horisaki, Decentralized multi-agent reinforcement learning algorithm using a cluster-synchronized laser network (2024). arXiv:2407.09124. URL https://arxiv.org/abs/2407.09124" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.642, + 0.482, + 0.697 + ], + "angle": 0, + "content": "[227] M. Hua, D. Chen, K. Jiang, F. Zhang, J. Wang, B. Wang, Q. Zhou, H. Xu, Communication-efficient marl for platoon stability and energy-efficiency co-optimization in cooperative adaptive cruise control of cavs (2024). arXiv:2406.11653. URL https://arxiv.org/abs/2406.11653" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.698, + 0.482, + 0.753 + ], + "angle": 0, + "content": "[228] K. DeMarco, E. Squires, M. Day, C. Pippin, Simulating collaborative robots in a massive multi-agent game environment (scrimmage), in: N. Correll, M. Schwager, M. Otte (Eds.), Distributed Autonomous Robotic Systems, Springer International Publishing, Cham, 2019, pp. 283-297." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.755, + 0.482, + 0.809 + ], + "angle": 0, + "content": "[229] Y. Xie, Y. Liu, Z. Ma, L. Shi, X. Wang, W. Yuan, M. O. Jackson, Q. Mei, How different ai chatbots behave? benchmarking large language models in behavioral economics games (2024). arXiv:2412.12362. URL https://arxiv.org/abs/2412.12362" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.81, + 0.482, + 0.855 + ], + "angle": 0, + "content": "[230] W. Lin, J. Roberts, Y. Yang, S. Albanie, Z. Lu, K. Han, Beyond outcomes: Transparent assessment of llm reasoning in games (2024). arXiv:2412.13602. URL https://arxiv.org/abs/2412.13602" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.855, + 0.482, + 0.866 + ], + "angle": 0, + "content": "[231] F. Jia, J. Zheng, F. 
Li, Decentralized intelligence in game:" + }, + { + "type": "list", + "bbox": [ + 0.107, + 0.135, + 0.482, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.554, + 0.135, + 0.892, + 0.169 + ], + "angle": 0, + "content": "Embodied ai agents and the convergence of defi and virtual ecosystems (2024). arXiv:2412.18601. \nURL https://arxiv.org/abs/2412.18601" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.17, + 0.892, + 0.224 + ], + "angle": 0, + "content": "[232] C. Yin, G. Zhouhong, D. Zhaohan, Y. Zheyu, C. Shaosheng, X. Yiqian, F. Hongwei, C. Ping, Mirage: Exploring how large language models perform in complex social interactive environments (2025). arXiv:2501.01652. URL https://arxiv.org/abs/2501.01652" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.225, + 0.892, + 0.27 + ], + "angle": 0, + "content": "[233] Z. Zhang, Y. Lan, Y. Chen, L. Wang, X. Wang, H. Wang, Dvm: Towards controllable llm agents in social deduction games (2025). arXiv:2501.06695. URL https://arxiv.org/abs/2501.06695" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.271, + 0.892, + 0.315 + ], + "angle": 0, + "content": "[234] A. Gonzalez-Bonorino, M. Capra, E. Pantoja, Llms model nonweird populations: Experiments with synthetic cultural agents (2025). arXiv:2501.06834. URL https://arxiv.org/abs/2501.06834" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.316, + 0.892, + 0.371 + ], + "angle": 0, + "content": "[235] X. Luo, F. Ding, F. Yang, Y. Zhou, J. Loo, H. H. Tew, C. Liu, Sensorag: Constructing environmental knowledge bases with proactive querying for llm-based autonomous driving (2025). arXiv:2501.03535. URL https://arxiv.org/abs/2501.03535" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.372, + 0.892, + 0.427 + ], + "angle": 0, + "content": "[236] D. Mahmud, H. Hajmohamed, S. Almentheri, S. Alqaydi, L. Aldaheri, R. A. Khalil, N. Saeed, Integrating llms with its: Recent advances, potentials, challenges, and future directions (2025). arXiv:2501.04437. URL https://arxiv.org/abs/2501.04437" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.428, + 0.892, + 0.472 + ], + "angle": 0, + "content": "[237] Z. Peng, Y. Wang, X. Han, L. Zheng, J. Ma, Learningflow: Automated policy learning workflow for urban driving with large language models (2025). arXiv:2501.05057. URL https://arxiv.org/abs/2501.05057" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.473, + 0.892, + 0.517 + ], + "angle": 0, + "content": "[238] A. Karagounis, Leveraging large language models for enhancing autonomous vehicle perception (2024). arXiv:2412.20230. URL https://arxiv.org/abs/2412.20230" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.518, + 0.892, + 0.573 + ], + "angle": 0, + "content": "[239] X. Luo, F. Ding, R. Chen, R. Panda, J. Loo, S. Zhang, \"what's happening\"- a human-centered multimodal interpreter explaining the actions of autonomous vehicles (2025). arXiv:2501.05322. URL https://arxiv.org/abs/2501.05322" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.574, + 0.892, + 0.618 + ], + "angle": 0, + "content": "[240] H. Gao, Y. Zhao, Application of vision-language model to pedestrians behavior and scene understanding in autonomous driving (2025). arXiv:2501.06680. URL https://arxiv.org/abs/2501.06680" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.619, + 0.892, + 0.674 + ], + "angle": 0, + "content": "[241] D. Hegde, R. Yasarla, H. Cai, S. Han, A. Bhattacharyya, S. Mahajan, L. Liu, R. Garrepalli, V. M. Patel, F. 
Porkli, Distilling multi-modal large language models for autonomous driving (2025). arXiv:2501.09757. URL https://arxiv.org/abs/2501.09757" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.675, + 0.892, + 0.764 + ], + "angle": 0, + "content": "[242] R. Wang, X. He, R. Yu, W. Qiu, B. An, Z. Rabinovich, Learning efficient multi-agent communication: An information bottleneck approach, in: H. D. III, A. Singh (Eds.), Proceedings of the 37th International Conference on Machine Learning, Vol. 119 of Proceedings of Machine Learning Research, PMLR, 2020, pp. 9908-9918. URL https://proceedings.mlr.press/v119/wang20i.html" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.765, + 0.892, + 0.82 + ], + "angle": 0, + "content": "[243] A. van den Oord, O. Vinyals, K. Kavukcuoglu, Neural discrete representation learning, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, Curran Associates Inc., Red Hook, NY, USA, 2017, p. 6309-6318." + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.821, + 0.892, + 0.864 + ], + "angle": 0, + "content": "[244] F. Ye, J. Chen, Y. Tian, T. Jiang, Cooperative task assignment of a heterogeneous multi-uav system using an adaptive genetic algorithm, Electronics 9 (4) (2020). doi:10.3390/electronics9040687." + }, + { + "type": "list", + "bbox": [ + 0.518, + 0.135, + 0.892, + 0.864 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.879, + 0.51, + 0.89 + ], + "angle": 0, + "content": "50" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.144, + 0.134, + 0.448, + 0.146 + ], + "angle": 0, + "content": "URL https://www.mdpi.com/2079-9292/9/4/687" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.147, + 0.501, + 0.246 + ], + "angle": 0, + "content": "[245] J. Wang, D. Ye, Z. Lu, Mutual-information regularized multiagent policy iteration, in: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine (Eds.), Advances in Neural Information Processing Systems, Vol. 36, Curran Associates, Inc., 2023, pp. 2617-2635. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/0799492e7be38b66d10ead5e8809616d-Paper-Conference.pdf" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.247, + 0.482, + 0.292 + ], + "angle": 0, + "content": "[246] P. Li, H. Tang, T. Yang, X. Hao, T. Sang, Y. Zheng, J. Hao, M. E. Taylor, Z. Wang, Pmic: Improving multi-agent reinforcement learning with progressive mutual information collaboration, ICML 2022 (2022)." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.293, + 0.482, + 0.337 + ], + "angle": 0, + "content": "[247] S. Kang, Y. Lee, S.-Y. Yun, DPM: Dual preferences-based multi-agent reinforcement learning, in: ICML 2024 Workshop on Models of Human Feedback for AI Alignment, 2024. URL https://openreview.net/forum?id=TW3DIP2h5p" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.338, + 0.481, + 0.393 + ], + "angle": 0, + "content": "[248] Q. Fu, T. Qiu, J. Yi, Z. Pu, S. Wu, Concentration network for reinforcement learning of large-scale multi-agent systems, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36, AAAI Press, Beijing, China, 2022, p. 21165. doi:10.1609/aaai.v36i9.21165." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.394, + 0.481, + 0.438 + ], + "angle": 0, + "content": "[249] B. Peng, T. Rashid, C. A. S. de Witt, P.-A. Kamienny, P. H. S. Torr, W. Böhmer, S. Whiteson, Facmac: Factored multi-agent centralised policy gradients (2021). arXiv:2003.06709. 
URL https://arxiv.org/abs/2003.06709" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.439, + 0.481, + 0.483 + ], + "angle": 0, + "content": "[250] K. Pandya, M. Holia, Automating customer service using langchain: Building custom open-source gpt chatbot for organizations (2023). arXiv:2310.05421. URL https://arxiv.org/abs/2310.05421" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.485, + 0.481, + 0.528 + ], + "angle": 0, + "content": "[251] H. Du, S. Thudumu, R. Vasa, K. Mouzakis, A survey on context-aware multi-agent systems: Techniques, challenges and future directions (2024). arXiv:2402.01968. URL https://arxiv.org/abs/2402.01968" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.529, + 0.481, + 0.595 + ], + "angle": 0, + "content": "[252] K. Hu, M. Li, Z. Song, K. Xu, Q. Xia, N. Sun, P. Zhou, M. Xia, A review of research on reinforcement learning algorithms for multi-agents, Neurocomputing 599 (2024) 128068. doi: https://doi.org/10.1016/j.neucom.2024.128068. URL https://www.sciencedirect.com/science/article/pii/S0925231224008397" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.597, + 0.481, + 0.641 + ], + "angle": 0, + "content": "[253] K.-T. Tran, D. Dao, M.-D. Nguyen, Q.-V. Pham, B. O'Sullivan, H. D. Nguyen, Multi-agent collaboration mechanisms: A survey of llms (2025). arXiv:2501.06322. URL https://arxiv.org/abs/2501.06322" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.642, + 0.481, + 0.686 + ], + "angle": 0, + "content": "[254] Y. Talebirad, A. Nadiri, Multi-agent collaboration: Harnessing the power of intelligent llm agents (2023). arXiv:2306.03314. URL https://arxiv.org/abs/2306.03314" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.687, + 0.481, + 0.742 + ], + "angle": 0, + "content": "[255] F. L. D. Silva, M. E. Taylor, A. H. R. Costa, Autonomously reusing knowledge in multiagent reinforcement learning, in: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, 2018, pp. 5487-5493." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.743, + 0.481, + 0.787 + ], + "angle": 0, + "content": "[256] F. L. Da Silva, A. H. R. Costa, A survey on transfer learning for multiagent reinforcement learning systems, J. Artif. Int. Res. 64 (1) (2019) 645-703. doi:10.1613/jair.1.11396. URL https://doi.org/10.1613/jair.1.11396" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.788, + 0.481, + 0.854 + ], + "angle": 0, + "content": "[257] Y. Bengio, J. Louradour, R. Collobert, J. Weston, Curriculum learning, in: Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, Association for Computing Machinery, New York, NY, USA, 2009, p. 41-48. doi:10.1145/1553374.1553380. URL https://doi.org/10.1145/1553374.1553380" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.855, + 0.481, + 0.866 + ], + "angle": 0, + "content": "[258] X. Wang, Y. Chen, W. Zhu, A survey on curriculum learn" + }, + { + "type": "list", + "bbox": [ + 0.107, + 0.134, + 0.501, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.554, + 0.135, + 0.892, + 0.168 + ], + "angle": 0, + "content": "ing, IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (9) (2022) 4555-4576. doi:10.1109/TPAMI.2021.3069908." + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.169, + 0.892, + 0.213 + ], + "angle": 0, + "content": "[259] L. Pinto, J. Davidson, R. Sukthankar, A. 
Gupta, Robust adversarial reinforcement learning, in: Proceedings of the 34th International Conference on Machine Learning, PMLR, 2017, pp. 2817-2826." + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.214, + 0.892, + 0.292 + ], + "angle": 0, + "content": "[260] W. Jin, N. Wang, T. Tao, B. Shi, H. Bi, B. Zhao, H. Wu, H. Duan, G. Yang, A veracity dissemination consistency-based few-shot fake news detection framework by synergizing adversarial and contrastive self-supervised learning, Scientific Reports 14 (1) (2024) 19470. doi:10.1038/s41598-024-70039-9. URL https://doi.org/10.1038/s41598-024-70039-9" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.293, + 0.892, + 0.349 + ], + "angle": 0, + "content": "[261] J. Foerster, N. Nardelli, G. Farquhar, T. Afouras, P. H. S. Torr, P. Kohli, S. Whiteson, Stabilising experience replay for deep multi-agent reinforcement learning, in: Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, JMLR.org, 2017, p. 1146-1155." + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.349, + 0.892, + 0.427 + ], + "angle": 0, + "content": "[262] T. Guo, X. Chen, Y. Wang, R. Chang, S. Pei, N. V. Chawla, O. Wiest, X. Zhang, Large language model based multi-agents: A survey of progress and challenges, in: Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI '24), IJCAI, 2024, pp. 8048-8057. doi: 10.24963/ijcai.2024/890. URL https://doi.org/10.24963/ijcai.2024/890" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.428, + 0.892, + 0.505 + ], + "angle": 0, + "content": "[263] L. Huang, W. Yu, W. Ma, W. Zhong, Z. Feng, H. Wang, Q. Chen, W. Peng, X. Feng, B. Qin, T. Liu, A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions, ACM Transactions on Information Systems Accepted on 24 September 2024 (2024). doi: 10.1145/3703155. URL https://doi.org/10.1145/3703155" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.507, + 0.892, + 0.539 + ], + "angle": 0, + "content": "[264] E. Alonso, D. Kudenko, D. Kazakov (Eds.), Adaptive agents and multi-agent systems: adaptation and multi-agent learning, Springer-Verlag, Berlin, Heidelberg, 2003." + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.54, + 0.892, + 0.595 + ], + "angle": 0, + "content": "[265] K. Tuyls, P. J. Hoen, K. Verbeeck, S. Sen (Eds.), Learning and Adaption in Multi-Agent Systems, Vol. 3898 of Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, 2006. URL https://link.springer.com/book/10.1007/978-3-540-33316-2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.597, + 0.892, + 0.641 + ], + "angle": 0, + "content": "[266] S. Chen, Y. Liu, W. Han, W. Zhang, T. Liu, A survey on llmbased multi-agent system: Recent advances and new frontiers in application (2025). arXiv:2412.17481. URL https://arxiv.org/abs/2412.17481" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.642, + 0.892, + 0.696 + ], + "angle": 0, + "content": "[267] V. Dibia, Multi-agent llm applications — a review of current research, tools, and challenges, accessed from Victor Dibia's newsletter (2023). URL https://newsletter.victordibia.com/p/multi-agent-llm-applications-a-review" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.697, + 0.892, + 0.809 + ], + "angle": 0, + "content": "[268] A. Chan, R. Salganik, A. Markelius, C. Pang, N. Rajkumar, D. Krasheninnikov, L. Langosco, Z. He, Y. Duan, M. Carroll, M. Lin, A. Mayhew, K. Collins, M. Molamohammadi, J. Burden, W. 
Zhao, S. Rismani, K. Voudouris, U. Bhatt, A. Weller, D. Krueger, T. Maharaj, Harms from increasingly agentic algorithmic systems, in: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT '23, Association for Computing Machinery, New York, NY, USA, 2023, p. 651-666. doi:10.1145/3593013.3594033. URL https://doi.org/10.1145/3593013.3594033" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.81, + 0.892, + 0.866 + ], + "angle": 0, + "content": "[269] A. Amirkhani, A. H. Barshooi, Consensus in multi-agent systems: A review, Artificial Intelligence Review 55 (5) (2022) 3897-3935, accessed: 2022-06-01. doi:10.1007/s10462-021-10097-x. URL https://doi.org/10.1007/s10462-021-10097-x" + }, + { + "type": "list", + "bbox": [ + 0.518, + 0.135, + 0.892, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.88, + 0.508, + 0.89 + ], + "angle": 0, + "content": "51" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.134, + 0.482, + 0.224 + ], + "angle": 0, + "content": "[270] G. Beydoun, G. Low, H. Mouratidis, B. Henderson-Sellers, A security-aware metamodel for multi-agent systems (mas), Information and Software Technology 51 (5) (2009) 832-845, sPECIAL ISSUE: Model-Driven Development for Secure Information Systems. doi:https://doi.org/10.1016/j.infsof.2008.05.003. URL https://www.sciencedirect.com/science/article/pii/S0950584908000724" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.225, + 0.482, + 0.28 + ], + "angle": 0, + "content": "[271] X. Xie, C. Zhang, Y. Zhu, Y. N. Wu, S.-C. Zhu, Congestion-aware multi-agent trajectory prediction for collision avoidance, in: 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 13693-13700. doi:10.1109/ICRA48506.2021.9560994." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.281, + 0.482, + 0.348 + ], + "angle": 0, + "content": "[272] A. Stooke, K. Lee, P. Abbeel, M. Laskin, Decoupling representation learning from reinforcement learning, in: Proceedings of the 38th International Conference on Machine Learning, PMLR, 2021, pp. 9870-9879. URL https://proceedings.mlr.press/v139/stooke21a.html" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.349, + 0.482, + 0.427 + ], + "angle": 0, + "content": "[273] J. Luketina, N. Nardelli, G. Farquhar, J. Foerster, J. Andreas, E. Grefenstette, S. Whiteson, T. Rocktäschel, A survey of reinforcement learning informed by natural language, in: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, 2019, pp. 6309-6317. doi:10.24963/ijcai.2019/880. URL https://doi.org/10.24963/ijcai.2019/880" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.428, + 0.482, + 0.483 + ], + "angle": 0, + "content": "[274] R. P. Poudel, H. Pandya, S. Liwicki, R. Cipolla, Recore: Regularized contrastive representation learning of world model, in: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 22904-22913. doi: 10.1109/CVPR52733.2024.02161." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.484, + 0.482, + 0.539 + ], + "angle": 0, + "content": "[275] W. Choi, W. K. Kim, S. Kim, H. Woo, Efficient policy adaptation with contrastive prompt ensemble for embodied agents, in: Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Curran Associates Inc., Red Hook, NY, USA, 2024." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.54, + 0.482, + 0.641 + ], + "angle": 0, + "content": "[276] F. Paischer, T. Adler, V. Patil, A. Bitto-Nemling, M. Holzleitner, S. Lehner, H. Eghbal-Zadeh, S. Hochreiter, History compression via language models in reinforcement learning, in: K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, S. Sabato (Eds.), Proceedings of the 39th International Conference on Machine Learning, Vol. 162 of Proceedings of Machine Learning Research, PMLR, 2022, pp. 17156-17185. URL https://proceedings.mlr.press/v162/paischer22a.html" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.641, + 0.482, + 0.698 + ], + "angle": 0, + "content": "[277] F. Paischer, T. Adler, M. Hofmacher, S. Hochreiter, Semantic HELM: A human-readable memory for reinforcement learning, in: Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=ebMPmx5mr7" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.698, + 0.482, + 0.797 + ], + "angle": 0, + "content": "[278] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, I. Sutskever, Learning transferable visual models from natural language supervision, in: M. Meila, T. Zhang (Eds.), Proceedings of the 38th International Conference on Machine Learning, Vol. 139 of Proceedings of Machine Learning Research, PMLR, 2021, pp. 8748-8763. URL https://proceedings.mlr.press/v139/radford21a.html" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.798, + 0.482, + 0.844 + ], + "angle": 0, + "content": "[279] A. van den Oord, Y. Li, O. Vinyals, Representation learning with contrastive predictive coding (2019). arXiv:1807.03748. URL https://arxiv.org/abs/1807.03748" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.844, + 0.482, + 0.866 + ], + "angle": 0, + "content": "[280] M. Laskin, A. Srinivas, P. Abbeel, Curl: contrastive unsupervised representations for reinforcement learning, in: Proceed-" + }, + { + "type": "list", + "bbox": [ + 0.107, + 0.134, + 0.482, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.554, + 0.135, + 0.892, + 0.158 + ], + "angle": 0, + "content": "ings of the 37th International Conference on Machine Learning, ICML'20, JMLR.org, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.158, + 0.892, + 0.224 + ], + "angle": 0, + "content": "[281] M. Schwarzer, A. Anand, R. Goel, R. D. Hjelm, A. Courville, P. Bachman, Data-efficient reinforcement learning with self-predictive representations, in: International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=uCQfPZwRaUu" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.225, + 0.892, + 0.271 + ], + "angle": 0, + "content": "[282] M. Kwon, S. M. Xie, K. Bullard, D. Sadigh, Reward design with language models, in: The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=10uNUgI5K1" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.271, + 0.892, + 0.326 + ], + "angle": 0, + "content": "[283] J. Song, Z. Zhou, J. Liu, C. Fang, Z. Shu, L. Ma, Self-refined large language model as automated reward function designer for deep reinforcement learning in robotics (2023). arXiv: 2309.06687. URL https://arxiv.org/abs/2309.06687" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.326, + 0.912, + 0.438 + ], + "angle": 0, + "content": "[284] Y. Wu, Y. Fan, P. P. Liang, A. Azaria, Y. Li, T. M. 
Mitchell, Read and reap the rewards: Learning to play atari with the help of instruction manuals, in: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine (Eds.), Advances in Neural Information Processing Systems, Vol. 36, Curran Associates, Inc., 2023, pp. 1009-1023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/034d7bfeace2a9a258648b16fc626298-Paper-Conference.pdf" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.439, + 0.892, + 0.495 + ], + "angle": 0, + "content": "[285] T. Carta, C. Romac, T. Wolf, S. Lamprier, O. Sigaud, P.-Y. Oudeyer, Grounding large language models in interactive environments with online reinforcement learning, in: Proceedings of the 40th International Conference on Machine Learning, ICML'23, JMLR.org, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.495, + 0.892, + 0.562 + ], + "angle": 0, + "content": "[286] K. Chu, X. Zhao, C. Weber, M. Li, S. Wermter, Accelerating reinforcement learning of robotic manipulations via feedback from large language models, in: Bridging the Gap between Cognitive Science and Robot Learning in the Real World: Progresses and New Directions, 2024. URL https://openreview.net/forum?id=MBeeqmD8Zk" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.562, + 0.892, + 0.618 + ], + "angle": 0, + "content": "[287] C. Kim, Y. Seo, H. Liu, L. Lee, J. Shin, H. Lee, K. Lee, Guide your agent with adaptive multimodal rewards, in: Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Curran Associates Inc., Red Hook, NY, USA, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.619, + 0.892, + 0.686 + ], + "angle": 0, + "content": "[288] W. Yu, N. Gileadi, C. Fu, S. Kirmani, K.-H. Lee, M. G. Arenas, H.-T. L. Chiang, T. Erez, L. Hasenclever, J. Humplik, B. Ichter, T. Xiao, P. Xu, A. Zeng, T. Zhang, N. Heess, D. Sadigh, J. Tan, Y. Tassa, F. Xia, Language to rewards for robotic skill synthesis (2023). arXiv:2306.08647. URL https://arxiv.org/abs/2306.08647" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.686, + 0.892, + 0.732 + ], + "angle": 0, + "content": "[289] A. Adeniji, A. Xie, C. Sferrazza, Y. Seo, S. James, P. Abbeel, Language reward modulation for pretraining reinforcement learning (2024). URL https://openreview.net/forum?id \\(\\equiv\\) SWRFC2Eup0" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.732, + 0.892, + 0.798 + ], + "angle": 0, + "content": "[290] A. Madaan, N. Tandon, P. Gupta, S. Hallinan, L. Gao, S. Wegreffe, U. Alon, N. Dziri, S. Prabhumoye, Y. Yang, S. Gupta, B. P. Majumder, K. Hermann, S. Welleck, A. Yazdanbakhsh, P. Clark, Self-refine: Iterative refinement with self-feedback (2023). arXiv:2303.17651. URL https://arxiv.org/abs/2303.17651" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.799, + 0.892, + 0.866 + ], + "angle": 0, + "content": "[291] Y. J. Ma, W. Liang, G. Wang, D.-A. Huang, O. Bastani, D. Jayaraman, Y. Zhu, L. Fan, A. Anandkumar, Eureka: Human-level reward design via coding large language models, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=IEduRU055F" + }, + { + "type": "list", + "bbox": [ + 0.518, + 0.135, + 0.912, + 0.866 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.88, + 0.51, + 0.89 + ], + "angle": 0, + "content": "52" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.134, + 0.482, + 0.19 + ], + "angle": 0, + "content": "[292] T. Xie, S. Zhao, C. H. 
Wu, Y. Liu, Q. Luo, V. Zhong, Y. Yang, T. Yu, Text2reward: Reward shaping with language models for reinforcement learning, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=tUM39YTRxH" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.191, + 0.482, + 0.246 + ], + "angle": 0, + "content": "[293] M. Janner, Q. Li, S. Levine, Offline reinforcement learning as one big sequence modeling problem, in: Proceedings of the 35th International Conference on Neural Information Processing Systems, NIPS '21, Curran Associates Inc., Red Hook, NY, USA, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.247, + 0.482, + 0.304 + ], + "angle": 0, + "content": "[294] R. Shi, Y. Liu, Y. Ze, S. S. Du, H. Xu, Unleashing the power of pre-trained language models for offline reinforcement learning, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=AY6aM13gGF" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.305, + 0.482, + 0.371 + ], + "angle": 0, + "content": "[295] S. Li, X. Puig, C. Paxton, Y. Du, C. Wang, L. Fan, T. Chen, D.-A. Huang, E. Akyurek, A. Anandkumar, J. Andreas, I. Mordatch, A. Torralba, Y. Zhu, Pre-trained language models for interactive decision-making, in: Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Curran Associates Inc., Red Hook, NY, USA, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.371, + 0.482, + 0.427 + ], + "angle": 0, + "content": "[296] L. Mezghani, P. Bojanowski, K. Alahari, S. Sukhbaatar, Think before you act: Unified policy for interleaving language reasoning with actions, in: Workshop on Reincarnating Reinforcement Learning at ICLR 2023, 2023. URL https://openreview.net/forum?id=HQmhQIi6mN" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.428, + 0.482, + 0.471 + ], + "angle": 0, + "content": "[297] J. Grigsby, L. Fan, Y. Zhu, Amago: Scalable in-context reinforcement learning for adaptive agents (2024). arXiv:2310.09971. URL https://arxiv.org/abs/2310.09971" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.472, + 0.482, + 0.55 + ], + "angle": 0, + "content": "[298] B. Zitkovich, T. Yu, S. Xu, P. Xu, T. Xiao, F. Xia, J. Wu, P. Wohlhart, S. Welker, A. Wahid, Q. Vuong, Vanhoucke, Rt-2: Vision-language-action models transfer web knowledge to robotic control, in: J. Tan, M. Toussaint, K. Darvish (Eds.), Proceedings of The 7th Conference on Robot Learning, Vol. 229 of Proceedings of Machine Learning Research, PMLR, 2023, pp. 2165–2183." + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.551, + 0.482, + 0.652 + ], + "angle": 0, + "content": "[299] S. Yao, R. Rao, M. Hausknecht, K. Narasimhan, Keep CALM and explore: Language models for action generation in text-based games, in: B. Webber, T. Cohn, Y. He, Y. Liu (Eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Online, 2020, pp. 8736-8754. doi:10.18653/v1/2020.emnlp-main.704. URL https://aclanthology.org/2020.emnlp-main.704/" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.653, + 0.482, + 0.696 + ], + "angle": 0, + "content": "[300] H. Hu, D. Sadigh, Language instructed reinforcement learning for human-ai coordination, in: Proceedings of the 40th International Conference on Machine Learning, ICML'23, JMLR.org, 2023." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.697, + 0.482, + 0.764 + ], + "angle": 0, + "content": "[301] Z. Zhou, B. Hu, C. Zhao, P. Zhang, B. Liu, Large language model as a policy teacher for training reinforcement learning agents, in: Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI '24, 2025. doi: 10.24963/ijcai.2024/627. URL https://doi.org/10.24963/ijcai.2024/627" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.765, + 0.482, + 0.82 + ], + "angle": 0, + "content": "[302] C. Chen, J. Yoon, Y.-F. Wu, S. Ahn, Transdreamer: Reinforcement learning with transformer world models, in: Deep RL Workshop NeurIPS 2021, 2021. URL https://openreview.net/forum?id=sVrzVAL90sA" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.821, + 0.482, + 0.865 + ], + "angle": 0, + "content": "[303] D. Das, S. Chernova, B. Kim, State2explanation: Concept-based explanations to benefit agent learning and user understanding, in: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine (Eds.), Advances in Neural" + }, + { + "type": "list", + "bbox": [ + 0.107, + 0.134, + 0.482, + 0.865 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.553, + 0.135, + 0.913, + 0.203 + ], + "angle": 0, + "content": "Information Processing Systems, Vol. 36, Curran Associates, Inc., 2023, pp. 67156-67182. \nURL https://proceedings.neurips.cc/paper_files/paper/2023/file/d4387c37b3b06e55f86eccdb8cd1f829-Paper-Conference.pdf" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.203, + 0.893, + 0.247 + ], + "angle": 0, + "content": "[304] J. Lin, Y. Du, O. Watkins, D. Hafner, P. Abbeel, D. Klein, A. Dragan, Learning to model the world with language (2024). arXiv:2308.01399. URL https://arxiv.org/abs/2308.01399" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.248, + 0.893, + 0.304 + ], + "angle": 0, + "content": "[305] J. Robine, M. Höftmann, T. Uelwer, S. Harmeling, Transformer-based world models are happy with 100k interactions, in: ICLR, 2023. URL https://openreview.net/forum?id=TdBaDGCpjly" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.305, + 0.893, + 0.348 + ], + "angle": 0, + "content": "[306] R. P. K. Poudel, H. Pandya, C. Zhang, R. Cipolla, Langwm: Language grounded world model (2023). arXiv:2311.17593. URL https://arxiv.org/abs/2311.17593" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.349, + 0.893, + 0.383 + ], + "angle": 0, + "content": "[307] J. Lin, Y. Du, O. Watkins, D. Hafner, P. Abbeel, D. Klein, A. Dragan, Learning to model the world with language (2024). URL https://openreview.net/forum?id=eWL0oaShEH" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.383, + 0.893, + 0.484 + ], + "angle": 0, + "content": "[308] Z. Yang, K. Ren, X. Luo, M. Liu, W. Liu, J. Bian, W. Zhang, D. Li, Towards applicable reinforcement learning: Improving the generalization and sample efficiency with policy ensemble, in: L. D. Raedt (Ed.), Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, International Joint Conferences on Artificial Intelligence Organization, 2022, pp. 3659-3665, main Track. doi: 10.24963/ijcai.2022/508. 
URL https://doi.org/10.24963/ijcai.2022/508" + }, + { + "type": "list", + "bbox": [ + 0.518, + 0.135, + 0.913, + 0.484 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.88, + 0.509, + 0.891 + ], + "angle": 0, + "content": "53" + } + ] +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13415/d6bcf737-8470-4ed5-949a-6d04e53f6bf7_origin.pdf b/data/2025/2503_13xxx/2503.13415/d6bcf737-8470-4ed5-949a-6d04e53f6bf7_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d79124a4161b5f747d0c805b8b9ca06936ce2db3 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13415/d6bcf737-8470-4ed5-949a-6d04e53f6bf7_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20523402a8d58ab93c1cb436ab38e43f3a276bad100ce4e592c8dc586bad7d46 +size 11544534 diff --git a/data/2025/2503_13xxx/2503.13415/full.md b/data/2025/2503_13xxx/2503.13415/full.md new file mode 100644 index 0000000000000000000000000000000000000000..fddc04967538b16091c3f2027504cd0209edcd80 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13415/full.md @@ -0,0 +1,1208 @@ +# Highlights + +# A Comprehensive Survey on Multi-Agent Cooperative Decision-Making: Scenarios, Approaches, Challenges and Perspectives + +Weiqiang Jin, Hongyang Du, Biao Zhao, Xingwu Tian, Bohang Shi, Guang Yang + +- Provides a comprehensive survey of multi-agent decision-making methods. +- Analyzes key simulation environments for multi-agent reinforcement learning. +- Investigate decision-making approaches, including MARL and large language models. +- Identifies challenges and future research directions in multi-agent collaboration. +- Reviews real-world applications in transportation, aerial systems, and automation. + +# A Comprehensive Survey on Multi-Agent Cooperative Decision-Making: Scenarios, Approaches, Challenges and Perspectives + +Weiqiang Jina, Hongyang Du, Biao Zhaoa,\*, Xingwu Tiana, Bohang Shi a, Guang Yang c,d,e,f,* + +$^{a}$ School of Information and Communications Engineering, Xi'an Jiaotong University, Innovation Harbour, Xi'an, 710049, Shaanxi, China + $^{b}$ Department of Electrical and Electronic Engineering, The University of Hong Kong (HKU), Hong Kong, Hong Kong, China + $^{c}$ Bioengineering Department and Imperial-X, Imperial College London, London, W12 7SL, UK + $^{d}$ National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK + $^{e}$ Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK + $^{f}$ School of Biomedical Engineering & Imaging Sciences, King's College London, London, WC2R 2LS, UK + +# Abstract + +With the rapid development of artificial intelligence, intelligent decision-making techniques have gradually surpassed human levels in various human-machine competitions, especially in complex multi-agent cooperative task scenarios. Multi-agent cooperative decision-making involves multiple agents working together to complete established tasks and achieve specific objectives. These techniques are widely applicable in real-world scenarios such as autonomous driving, drone navigation, disaster rescue, and simulated military confrontations. This paper begins with a comprehensive survey of the leading simulation environments and platforms used for multi-agent cooperative decision-making. Specifically, we provide an in-depth analysis for these simulation environments from various perspectives, including task formats, reward allocation, and the underlying technologies employed. 
Subsequently, we provide a comprehensive overview of the mainstream intelligent decision-making approaches, algorithms and models for multi-agent systems (MAS). These approaches can be broadly categorized into five types: rule-based (primarily fuzzy logic), game theory-based, evolutionary algorithms-based, deep multi-agent reinforcement learning (MARL)-based, and large language models (LLMs) reasoning-based. Given the significant advantages of MARL and LLMs-based decision-making methods over the traditional rule, game theory, and evolutionary algorithms, this paper focuses on these multi-agent methods utilizing MARL and LLMs-based techniques. We provide an in-depth discussion of these approaches, highlighting their methodology taxonomies, advantages, and drawbacks. Further, several prominent research directions in the future and potential challenges of multi-agent cooperative decision-making are also detailed. + +# Keywords: + +Intelligent decision-making, Multi-agent systems, Multi-agent cooperative environments, Multi-agent reinforcement learning, Large language models. + +# 1. Introduction + +# 1.1. Research Backgrounds of Multi-Agent Decision-Making + +With the continuous advancement of science and technology, intelligent decision-making technology has + +made rapid progress. These technologies have gradually surpassed human capabilities in various human-machine game competitions, even exceeding the top human levels. Over the past few decades, especially following the successful application of Deep Q-Networks (DQN) [1, 2] in the Arita game and the victories of AlphaGo and AlphaZero [3, 4] over top human opponents, these landmark achievements have significantly propelled the advancement of intelligent decision-making research. + +To meet the growing complexity of real-world applications and the increasing demand for more sophisticated, reliable, and efficient intelligent systems, multi + +![](images/06418d1620b5a8ff60e8838d831282725f05ecef98c24bf5bc1a890f42541a06.jpg) +Figure 1: An overview of the evolution of scenarios and methods in decision-making from single-agent to multi-agent systems. + +agent cooperative decision-making has rapidly evolved from simple single-agent scenarios [5, 6, 7, 8]. Multiagent cooperative decision-making is a crucial subfield within machine learning (ML) [9] and artificial intelligence (AI) [10]. It involves multiple interacting agents working together to complete established tasks across diverse well-designed dynamic simulated environments and various complex real-world systems. + +As depicted in Figure 1, the evolution research progress from single-agent to multi-agent decision-making systems, along with methodological comparisons, highlights that this rapidly advancing field is a crucial step toward achieving human-level AI and the Artificial General Intelligence (AGI) age. Multiagent cooperative decision-making has a wide range of practical applications and many fundamental theoretical works across various domains. The service scenarios are extensive, encompassing smart agriculture management [11, 12], intelligent collaborative robots [13, 14, 15, 16], self-driving collaborative obstacle avoidance [17, 18, 19], autonomous navigation [20, 21, 22] as well as joint rescue tasks [12, 23]. Correspondingly, considering the rapid pace of technological advancement and the multifaceted needs of the real world, in this work, we focus on the comprehensive study of multi-agent cooperative decision-making. + +# 1.2. 
Overview of Previous Multi-Agent Surveys + +Concurrent with the fast-paced advancements in multi-agent cooperative decision-making, there has + +been a marked increase in systematic literature reviews in this domain [24, 6, 8, 25]. These reviews have covered a wide range of topics, from theoretical innovations to practical applications, providing a comprehensive overview of the state-of-the-art. + +Ning et al. [25] provided a comprehensive overview of the evolution, challenges, and applications of multi-agent reinforcement learning (MARL)-based intelligent agents, including its practical implementation aspects. Gronauer et al. [6] provided an overview of recent developments in multi-agent deep reinforcement learning, focusing on training schemes, emergent agent behaviors, and the unique challenges of the multi-agent domain, while also discussing future research directions. Yang et al. [26] explored the utility theory application in AI robotics, focusing on how utility AI models can guide decision-making and cooperation in multi-agent/robot systems. Orr et al. [8] reviewed recent advancements in MARL, particularly its applications in multi-robot systems, while discussing current challenges and potential future applications. Du et al. [24] provided a systematic overview of multiagent deep reinforcement learning for MAS, focusing on its challenges, methodologies, and applications. Pamul et al. [7] provided a comprehensive analysis of the application of MARL in connected and automated vehicles (CAVs), identifying current developments, existing research directions, and challenges. Hernandez-Leal et al. [27] provided a comprehensive overview of approaches to addressing opponent-induced non + +stationarity in multi-agent learning, categorizing algorithms into a new framework and offering insights into their effectiveness across different environments. The survey by Zhu et al. [28] provided a systematic classification and analysis of MARL systems that incorporate communication, encompassing recent advanced CommMARL research and identifying key dimensions that influence the design and development of these multi-agent systems. + +# 1.3. Motivations of the Current Survey + +However, despite the growing body of work in this field, existing surveys often have noticeable limitations [24, 6, 25, 28]. Specifically, our thorough investigation reveals that most current reviews and surveys share several common and significant significant drawbacks and limitations: + +- Limited Research Scope: Previous literature reviews [27, 28] predominantly remain within the primary framework of reinforcement learning and have not broken through theoretical limitations, resulting in a lack of comprehensive coverage. +- Neglect of Environments: Previous literature reviews [29, 6, 30] have largely concentrated on methodological and algorithmic advancements, frequently overlooking the essential role of simulation environments and platforms in multi-agent intelligent decision-making. +- Under-emphasis of Project Implementation: Prior surveys [25, 28, 30] often focus on theoretical models and overlook detailed implementation aspects, including code-bases and project architectures. This gap limits readers' ability to fully understand and apply the findings. + +To address the aforementioned limitations and challenges, we recognize the need for more systematic and comprehensive reviews in the multi-agent intelligent decision-making field. 
+ +Firstly, current reviews overly emphasize deep reinforcement learning and fail to adequately consider other potentially effective intelligent decision-making methods [24, 5, 7, 28]. Secondly, with the rapid development of large language models (LLMs), their potential in natural language processing, knowledge representation, and complex decision-making has become increasingly apparent. However, current surveys have largely overlooked their integration. Additionally, existing reviews often neglect the critical role of simulation environments in the development of multi-agent systems. + +However, simulation environments are not merely auxiliary tools but are an integral part of the MAS development and evaluation process. The agents' learning and decision-making processes are influenced and constrained by these environments, making it equally important to understand and develop these environments as it is to focus on the algorithms themselves. Finally, the lack of attention to practical implementation details in current reviews has resulted in a disconnect between theory and practice. This survey will delve into the specifics of project implementation, including code structures, system architecture, and the challenges encountered during development, to enhance research reproducibility and facilitate the effective translation of theoretical research into practical applications. + +Building on the motivations outlined earlier, this survey extends beyond the scope of previous reviews, which were often limited to specific areas of discussion. We treat multi-agent environments as equally important components, alongside the methods and techniques, and provide a thorough introduction to the most advanced algorithms and simulation environments. Moreover, we categorize various multi-agent cooperative decision-making methods from a more fundamental implementation perspective. In summary, this survey seeks to provide a more comprehensive and practical framework for research in multi-agent cooperative decision-making, thereby advancing the continuous development of this critical field. + +# 1.4. The Survey Overview/Contents Organization + +As depicted in Figure 2, we have structured the survey to reflect our research approach, with each main and sub-branch corresponding to a specific part: First, in Section 1, we introduce the research background of multi-agent cooperative decision-making, discuss the drawbacks of previous surveys, and outline the organizational structure of this survey. Given that MARL and LLMs-based intelligent decision-making methods demonstrate significant advantages and future potential, our primary attentions are placed on Deep MARL-based and LLMs-based methods due to their superior ability to manage dynamic and uncertain environments. In Section 2, we then delve into mainstream intelligent decision-making approaches, algorithms, and models. We categorize these approaches, with a continued focus on MARL-based and LLMs-based methods, discussing their methodologies, advantages, and limitations. Following this, in Section 3, we provide an in-depth analysis of the leading simulation environments and platforms for multi-agent cooperative decision-making, again focusing on Deep MARL-based and LLMs-based + +![](images/6fd6b397e7b96d468eedec8a8c9751022174d5ea5d13ed48ce08682d7cec5141.jpg) +Figure 2: Illustration of our systematic review of multi-agent intelligent decision-making research. 
Compared to previous reviews, we have incorporated comprehensive introduction and analysis, with each segment corresponding to a specific chapter in the survey. + +methods. Furthermore, in Section 4, we discuss the practical applications of multi-agent decision-making systems, such as autonomous driving, UAV navigation, and collaborative robotics. Finally, in Sections 5 and 6, we explore the potential challenges and future research directions of multi-agent cooperative decision-making. + +# 1.5. How to read this survey? + +This survey caters to a diverse readership, each with varying levels of expertise and interest in different aspects of multi-agent decision-making systems. To help readers efficiently find the content that interests them, we offer the following guide, providing direction based on different topics: + +- For those interested in rule (fuzzy logic)-based, game theory-based, and evolutionary algorithm-based decision-making research, please refer to Section 2.2.1, 2.2.2, and 2.2.3. This section provides a comprehensive analysis of the rule and game-based methods in multi-agent systems, detailing their corresponding technological taxonomies, features, and limitations. +- For those interested in MARL-based decision-making research, please refer to Section 2.3. This + +section provides a comprehensive analysis of the deep MARL-based methods in multi-agent systems, detailing their corresponding technological taxonomies, advantages, and limitations. + +- If you are focused on decision-making based on LLMs, Section 2.4 will offer you an in-depth exploration, with the corresponding technological taxonomies, advantages, and limitations. This part discusses the unique capabilities of LLMs in multiagent environments and their potential applications, especially in reasoning and decision-making. +- For readers focused on the well-known simulation environments of MAS, we suggest reading Section 3, which primarily covers an introduction to MARL-based Simulation Environments (Section 3.1) and LLMs Reasoning-based Simulation Environments (Section 3.2). +- If your interest lies in the practical applications of multi-agent decision-making systems, Section 4 will be of particular relevance. This section offers a detailed discussion of how these systems are applied across various fields, such as autonomous driving, UAV navigation, and collaborative robotics. + +- If you are interested in the challenges and problems faced by existing multi-agent decision-making methods, Section 5 provides an in-depth discussion, exploring the limitations of current approaches and unresolved issues in the field, offering insights into these challenges. + +- Finally, if you wish to learn about future research directions and the prospects for multi-agent decision-making technique, we recommend reading Section 6. This section looks ahead to future research trends and potential breakthroughs, exploring key directions that could drive the field forward. + +# 2. Multi-Agent Decision-Making Taxonomies + +This section discusses the taxonomies of decision-making in multi-agent systems and their related techniques. The multi-agent cooperative decision-making methods can be broadly classified into five categories: rule-based (primarily fuzzy logic), game theory-based, evolutionary algorithms-based methods, MARL-based approaches, and LLMs-based methods [31]. 
Although these rule-based, game theory-based, and evolutionary algorithms-based solutions demonstrate a degree of effectiveness, they typically rely heavily on pre-designed strategies and assumptions. This dependence limits their adaptability to changing and complex environments and ill-suited for handling highly dynamic and uncertain scenarios. In contrast, DRL-based and LLMs reasoning-based solutions offer more dynamic and flexible approaches, capable of learning and adapting to new strategies on the fly. Therefore, these methods have significant advantages in dealing with dynamic and uncertain environments. Thus, special research attentions are placed on DRL-based and LLMs-based methods due to their significant advantages in handling dynamic and uncertain environments. + +The analysis is conducted from multiple perspectives, including agent interaction dynamics, mainstream paradigms of cooperative decision-making, MARL (multi-agent reinforcement learning), and LLM (large language model)-driven multi-agent systems, aiming to provide a systematic framework and technical foundation for the design and optimization of multi-agent decision-making. + +Specifically, Section 2.1 analyzes agent interaction dynamics in MAS, categorizing them into four typical types: fully cooperative, fully competitive, mixed cooperative-competitive, and self-interested, while discussing their impact on overall system behavior. Subse + +sequently, Sections 2.2, 2.3, and 2.4 introduce mainstream paradigms of cooperative decision-making, MARL-based decision-making methods, and LLMs-based multi-agent systems, respectively. + +# 2.1. Agent Interaction Dynamics for Multi-Agent Systems + +In multi-agent systems, the nature of interaction among agents can be categorized into distinct relational dynamics, i.e., agent interaction dynamics, each influencing the overall system behavior and outcomes. These dynamics are critical to understanding and designing intelligent systems where agents operate within shared environments. Below, we detail the primary types of agent relationships: + +1. Fully Cooperative: In this scenario setting, all agents have aligned objectives, meaning they share identical reward structures and work towards a common goal. The agents operate with complete cooperation, aiming to maximize collective benefits. This relationship is typical in systems where synergy is essential, and the success of one agent directly contributes to the success of others, ensuring mutual reinforcement of strategies and actions. + +2. Fully Competitive: This relationship is characterized by a zero-sum game dynamic, where the gain of one agent directly translates into the loss of another. Agents are in direct opposition, with their objectives fundamentally misaligned. This is commonly observed in competitive environments such as robotic competitions, where agents are designed to outperform each other, and success is measured relative to the failure or underperformance of others. + +3. Mixed Cooperative and Competitive: In most real-world scenarios, agents may engage in both cooperation and competition simultaneously. This type of relationship is evident in team-based environments like robotic soccer, where agents within the same team cooperate to achieve a shared objective (e.g., scoring goals), but simultaneously compete against agents from opposing teams. 
The complexity of such systems lies in balancing internal cooperation with external competition, often requiring sophisticated strategies to optimize outcomes at both individual and collective levels. + +4. Self-Interested: In self-interested dynamics, each agent acts primarily to maximize its own utility, with little regard for the impact on others. An agent's actions might incidentally benefit or harm other agents, but these effects are not a concern for + +the self-interested agent. This relationship is pertinent in scenarios where agents are designed to prioritize personal gain over collective welfare, often leading to outcomes where the overall system efficiency is not necessarily optimized, as individual agents do not account for the potential externalities of their actions on the environment or other agents. + +Overall, these agent interaction dynamics is crucial for the design and analysis of multi-agent systems, as they directly impact the strategies employed by agents and the overall system performance. The complexity of agent interactions in mixed or self-interested relationships often requires advanced coordination mechanisms and incentive structures to manage potential conflicts and ensure desired outcomes. + +While the overarching concept of agent interaction dynamics holds some value, this survey focuses more on analyzing the characteristics of these methods from a technical and scientific perspective. Therefore, distinguishing relationships is not the primary emphasis of this study. Instead, we will proceed with a more comprehensive taxonomy of Multi-Agent Decision-making Systems. + +# 2.2. Mainstream Paradigms of Multi-Agent Cooperative Decision-Making + +In multi-agent cooperative decision-making, several mainstream paradigms exist, each leveraging different techniques to tackle challenges associated with coordination, learning, adaptability, and optimization among autonomous agents. These paradigms utilize diverse approaches, including rule-based (primarily fuzzy logic) systems [32, 33, 34, 35, 36, 37], game theory-based [38, 39, 40, 41, 42, 43], evolutionary algorithms-based [44, 45, 46, 47, 48, 49], MARL-based [50, 51, 52, 53, 54, 55, 56], and LLMs-based [57, 30, 58, 59] multiagent decision-making systems. Each of these methods has distinct strengths and applications, depending on the problem context and the complexity of interactions between agents. For a comprehensive overview, please refer to Table 1, which provides a detailed classification of these paradigms. + +# 2.2.1. Rule-Based (Primarily Fuzzy Logic) + +Rule-based decision-making, particularly fuzzy logic, has been widely adopted in multi-agent systems (MAS) due to its ability to handle uncertainty, imprecise data, and dynamic environments [117, 118, 36, 119]. Fuzzy logic enables agents to make adaptive, human-like decisions by mapping inputs to linguistic rules rather than strict mathematical models. + +Miki et al. [32] presented a rule-based multi-agent control algorithm that utilizes local information instead of absolute coordinates, making it more practical for real-world applications. Charaf et al. [120] introduced a rule-based multi-agent system to address coordination challenges, such as controllability and observability, in distributed testing environments. Yarahmadi et al. [33] reviewed the applications of multi-agent systems in Cyber-Physical Systems (CPS) and the Internet of Things (IoT), proposing a combination of learning and rule-based reasoning to improve decision-making in MAS. Marti et al. 
[117] presented an expert rule-based system using multi-agent technology to support traffic management during weather-related issues. Daeichian et al. [121] used fuzzy logic in combination with Q-learning and game theory to control traffic lights autonomously. Wu et al. [34] introduced a fuzzy-theoretic game framework that integrates fuzzy logic with game theory to handle uncertainty in utility values during multi-agent decision making. Nekhai et al. [118] devised a cybersecurity management model for agricultural enterprises using a multi-agent system (MAS) based on fuzzy logical reasoning. Ramezani et al. [119] applied fuzzy logic to multi-agent decision-making in soccer robot teams, combining cooperative and non-cooperative game strategies. Zhang et al. [35] introduced a new online method for optimal coordination control in multi-agent differential games, combining fuzzy logic, and adaptive dynamic programming. Ren et al. [36] presented a fuzzy logic-based approach for partner selection in multi-agent systems, emphasizing flexibility and adaptability in dynamic environments. Gu et al. [37] introduced a cooperative reinforcement learning algorithm for multi-agent systems using a leader-follower framework, modeled as a Stackelberg game. Schwartz et al. [60] introduced a multi-agent fuzzy actor-critic learning algorithm for differential games. Harmati et al. [61] proposed a game-theoretic model for coordinating multiple robots in target tracking, using a semi-cooperative Stackelberg equilibrium and a fuzzy inference system for high-level cost tuning. Khuen et al. [62] introduced an Adaptive Fuzzy Logic (AFL) approach for multi-agent systems with negotiation capabilities, focusing on resource allocation. Yan et al. [63] proposed a graphical game-based adaptive fuzzy optimal bipartite containment control scheme for high-order nonlinear multi-agent systems (MASs). Vicerra et al. [64] proposed a multi-agent robot system using a pure fuzzy logic-based artificial intelligence model. Gu et al. [65] presented a fuzzy logic-based policy gradient multi-agent reinforcement learning algorithm for leader-follower systems, where fuzzy logic + +Table 1: Representative Methods in Mainstream Paradigms of Multi-Agent Cooperative Decision-Making. + +
| Paradigm | Representative Methods and Key References |
| --- | --- |
| Rule-Based (Primarily Fuzzy Logic) | Miki et al. [32], Yarahmadi et al. [33], Wu et al. [34], Zhang et al. [35], Ren et al. [36], Gu et al. [37], Schwartz et al. [60], Harmati et al. [61], Khuen et al. [62], Yan et al. [63], Vicerra et al. [64], Gu et al. [65], Maruyama et al. [66], Peng et al. [67], Yang et al. [68] |
| Game Theory-based | Wang et al. [38], Guo et al. [39], Schwung et al. [40], Wang et al. [41], Lin et al. [42], Wang et al. [43], Wang et al. [69], Lanctot et al. [70], Guo et al. [71], Zhang et al. [72], Kong et al. [73], Wang et al. [74], Dong et al. [75], Nguyen et al. [76], Khan et al. [77] |
| Evolutionary Algorithms-based | Liu et al. [45], Xu et al. [78], Daan et al. [46], Franciszek et al. [79], Larry et al. [44], Daan et al. [47], Liu et al. [80], Yuan et al. [48], Dong et al. [81], Chen et al. [82], Zhang et al. [49] |
| MARL-based | Wai et al. [83], Hu et al. [84], Son et al. [53], Yu et al. [52], Rashid et al. [50], Rashid et al. [54], Sunehag et al. [85], Huang et al. [15], Xu et al. [58], Yun et al. [86], Mao et al. [87], Kraemer et al. [51], Kouzeghar et al. [88], Gao et al. [89], Liu et al. [19], Qi et al. [90], Vinyals et al. [91], Lu et al. [18], Chu et al. [92], [20], Kurach et al. [55], Lv et al. [93], Radac et al. [94], Wang et al. [56], Liu et al. [95] |
| LLMs-based | Mordatch et al. [59], Zhang et al. [96], Xu et al. [58], Li et al. [57], Wang et al. [30], Zhao et al. [10], Hou et al. [97], Puig et al. [98, 99], Gao et al. [100], Xiao et al. [101], Wang et al. [102], Wu et al. [103], Wen et al. [104], Chen et al. [105], Liu et al. [106], Chen et al. [107], Hong et al. [108, 109], XAgent Team [110], Wang et al. [111, 112], Zheng et al. [113], Zhang et al. [114, 115], Cao et al. [116] |
+ +controllers act as policies. Maruyama et al. [66] extended the classical framework for reasoning about distributed knowledge, incorporating fuzzy logic to handle uncertainty and degrees of certainty within multi-agent systems. Peng et al. [67] proposed a two-layer coordination model for multi-agent systems using fuzzy reinforcement learning. Yang et al. [68] presented a multiagent reinforcement learning algorithm with fuzzy policy to address control challenges in cooperative multi-agent systems, particularly for autonomous robotic formations. + +Overall, fuzzy logic remains a foundational approach for rule-based decision-making in MAS, offering interpretability and robustness in uncertain environments. In the future, fuzzy logic will be further integrated with LLMs, hierarchical decision architectures, and multiagent planning, enabling more precise and adaptive decision-making in complex real-world scenarios. + +# 2.2.2. Game theory-based + +Game theory provides a structured framework for analyzing strategic interactions in multi-agent systems. It enables agents to make rational decisions in cooperative, competitive, or mixed scenarios through equilibrium-based optimization [5, 29]. Traditional methods such as Nash equilibrium and Stackelberg games form the foundation, while modern approaches + +integrate reinforcement learning and Bayesian inference to enhance adaptability in dynamic environments. + +Wang et al. [38] provided a broad discussion on game-theoretic approaches in multi-agent systems, covering cooperative and non-cooperative scenarios. Guo et al. [39] applied game theory to multi-agent path planning, leveraging Nash equilibrium to optimize navigation and obstacle avoidance. Zhang et al. [72] developed a distributed control algorithm that ensures optimal coverage while maintaining network connectivity. + +Beyond fundamental decision-making, game theory has been applied in communication networks and energy systems. Wang et al. [41] utilized game-theoretic learning to enhance resource allocation in wireless networks while countering adversarial actions like jamming. Lin et al. [42] introduced potential game theory to optimize distributed energy management in microgrids, where agents autonomously coordinate power distribution. Dong et al. [75] further extended this approach using a hierarchical Stackelberg model for energy trading, balancing incentives between microgrids and individual agents. + +Incorporating machine learning with game theory has also led to advances in multi-agent optimization. Schwung et al. [40] combined potential game theory with reinforcement learning for adaptive production scheduling, while Wang et al. [74] designed a + +Nash equilibrium-based fault-tolerant control strategy for multi-agent systems. Additionally, game-theoretic methods have been explored for distributed computing, as shown by Khan et al. [77], who developed a replica placement strategy to minimize data access delays in distributed systems. + +Overall, game theory remains a cornerstone of multiagent decision-making, offering well-defined theoretical guarantees while enabling dynamic adaptation through hybrid approaches. Future research will likely focus on integrating game theory with deep learning and large language models to enhance strategic reasoning in high-dimensional, uncertain environments. + +# 2.2.3. 
Evolutionary Algorithms-based + +Evolutionary algorithms (EAs) provide a bio-inspired approach to optimization in multi-agent systems by leveraging principles such as natural selection, mutation, and recombination [79, 47, 81]. By allowing agents to evolve their strategies iteratively, EAs are particularly effective for problems requiring continuous learning, large-scale coordination, and self-organized behavior. + +Liu et al. [45] introduced the Multi-Agent Genetic Algorithm (MAGA), where agents interact through competition and cooperation to optimize global solutions. Xu et al. [78] extended this idea to hardware-based multi-agent systems, using nanoclusters as physical agents to achieve large-scale parallel computation. Daan et al. [46] explored the role of evolutionary strategies in dynamic environments such as financial markets, smart grids, and robotics, demonstrating how adaptive algorithms can handle real-world uncertainties. + +Franciszek et al. [79] proposed a self-optimization model integrating cellular automata and game theory, simulating competitive evolutionary interactions among agents. Larry et al. [44] analyzed the trade-offs between mutation and recombination, showing that mutation can sometimes outperform traditional recombination strategies in evolutionary computing. To further enhance adaptability, Daan et al. [47] introduced Deep Neuroevolution (DNE), applying coevolutionary techniques to complex multi-agent scenarios, including Atari games. + +Recent studies have focused on scaling evolutionary learning to larger agent populations. Liu et al. [80] developed Evolutionary Reinforcement Learning (ERL), a scalable approach that partitions learning into multiple stages, ensuring better adaptability in multi-agent environments. Yuan et al. [48] introduced EvoAgent, a framework that extends LLMs-based autonomous agents into multi-agent systems using evolutionary techniques like mutation and selection. + +Evolutionary game theory has also been explored to improve cooperative behavior. Dong et al. [81] designed a three-strategy decision model, where agents adopt conservative or adaptive strategies based on their interactions with neighbors, fostering long-term cooperation. Chen et al. [82] proposed a kinetic decision-making model grounded in rarefied gas dynamics, offering a new perspective on agent evolution using the Boltzmann equation. Zhang et al. [49] applied evolutionary game theory to policy optimization, analyzing cooperation strategies among governments, enterprises, and farmers in agricultural water conservation projects. + +Overall, evolutionary algorithms provide a robust framework for decentralized decision-making, allowing agents to self-improve and adapt in uncertain environments. In the future, evolutionary algorithms will be further integrated with deep learning, hierarchical evolution, and large-scale multi-agent coordination, enabling more adaptive, autonomous MAS. + +# 2.2.4. MARL-based Multi-Agent Systems + +Before introducing the MARL-based multi-agent systems (MAS), we provide a detailed discussion in Appendix A on the key technological comparisons and methodological principles of both DRL-based single-agent systems and MARL-based MAS. This helps readers build the necessary background knowledge for better understanding the following discussions. + +Multi-Agent Reinforcement Learning offers a structured framework to tackle decision-making in MAS, where autonomous agents interact with each other and a shared environment.
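To ground the paradigm discussion that follows, the sketch below shows the bare interaction loop this setting assumes for a cooperative MAS: each agent observes locally, a joint action is applied, and the environment returns new local observations and a shared team reward. The names (`RandomPolicy`, the `env` interface) are illustrative placeholders under assumed conventions, not the API of any surveyed framework.

```python
import random

class RandomPolicy:
    """Placeholder per-agent policy: samples uniformly from a discrete action set."""

    def __init__(self, n_actions: int):
        self.n_actions = n_actions

    def act(self, observation):
        return random.randrange(self.n_actions)

def run_episode(env, policies: dict, max_steps: int = 100) -> float:
    """Generic cooperative loop for a Dec-POMDP-style environment.

    Assumes (hypothetically) that env.reset() returns {agent_id: local_obs} and
    env.step(joint_action) returns (next_obs_dict, team_reward, done).
    """
    observations = env.reset()
    episode_return = 0.0
    for _ in range(max_steps):
        # Decentralized execution: each agent acts on its own observation only.
        joint_action = {aid: policies[aid].act(obs) for aid, obs in observations.items()}
        observations, team_reward, done = env.step(joint_action)
        episode_return += team_reward
        if done:
            break
    return episode_return
```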
The MAS research in MARL is broadly divided into three paradigms: Centralized Training with Centralized Execution (CTCE) [122, 123], Decentralized Training with Decentralized Execution (DTDE) [122], and Centralized Training with Decentralized Execution (CTDE) [51, 124]. Each paradigm is designed to address specific challenges such as coordination, scalability, and policy optimization, providing tailored solutions for diverse scenarios. + +Centralized Training with Centralized Execution The CTCE paradigm [125, 123] relies on a central controller that governs all agents by aggregating their observations, actions, and rewards to make joint decisions. While this paradigm enables high levels of coordination, its scalability is limited in large-scale systems. Multi-Agent DQN (MADQN) [126, 1, 2] is a representative method, employing parameter-sharing mechanisms to handle cooperative tasks effectively. However, + +its reliance on centralized control restricts its applicability in dynamic environments with numerous agents. + +Decentralized Training with Decentralized Execution The DTDE paradigm [122] emphasizes independent learning and execution, where each agent interacts with the environment individually and updates its policy based solely on local observations and rewards. This paradigm excels in scalability and robustness, especially in scenarios with limited communication. Notable methods include Independent Q-Learning (IQL) [127, 128] and Decentralized REINFORCE [129], which allow agents to learn autonomously. Despite its advantages, DTDE faces challenges such as learning nonstationarity, where the environment changes as other agents adapt, and difficulty in addressing the credit assignment problem in cooperative settings. + +Centralized Training with Decentralized Execution The CTDE paradigm [51, 130, 131] combines the strengths of centralized training and decentralized execution, making it the most prominent paradigm in MARL research. During training, a central controller aggregates information from all agents to optimize their policies, but during execution, each agent operates independently based on its own observations. CTDE addresses key challenges like non-stationarity and scalability, with methods such as Value Decomposition Networks (VDN) [85] and QMIX [50, 52] for value-based learning, Multi-Agent Deep Deterministic Policy Gradient (MADDPG) [132] for actor-critic frameworks, and Multi-Agent Proximal Policy Optimization (MAPPO) [133] for policy gradient optimization. These approaches are widely applied in complex environments like StarCraft II [134, 135] and the Multi-Agent Particle Environment (MPE) [136, 137]. + +Communication-based MARL Algorithms Additionally, communication-based MARL algorithms have emerged to enhance coordination by enabling agents to share critical information during training and execution. Examples include Attentional Communication (ATOC) [138] and Targeted Multi-Agent Communication (TarMAC) [139], which use advanced mechanisms to improve the efficiency and effectiveness of inter-agent communication in cooperative tasks. + +By structuring MARL methods within these paradigms, researchers provide a clear framework for addressing the diverse challenges of multi-agent decision-making. From autonomous driving fleets to resource allocation systems, MARL continues to push the boundaries of what distributed intelligent systems can achieve [124]. + +# 2.2.5. 
LLMs-based Multi-Agent Systems + +Although LLMs like GPT [140, 141, 142], Llama [143, 144], and Gemini [145] support very long input contexts, their ability to understand complex inputs still varies. In this context, multi-agent collaboration optimizes task execution through role assignment, enabling better performance through collaboration among multiple agents compared to a single agent. Each agent has an independent workflow, memory, and can seek help from other agents when necessary. LLMs-based Multi-Agent Systems represent a relatively new multi-agent decision-making model that leverages the powerful capabilities of language models, to enhance communication and collaboration between autonomous agents. In an LLMs-based multi-agent system, agents communicate via natural language or symbolic representations, breaking down complex tasks into smaller, more manageable subtasks. One important feature of LLMs-based systems is the hierarchical organization of agents, typically consisting of two levels [30, 116]: + +1) Global planning agents, responsible for high-level decisions such as task decomposition, resource allocation, and overall strategy management. + +2) Local execution agents, which are responsible for executing specific subtasks and providing feedback to the global planning agent. These agents are generally more focused on local tasks but communicate progress and challenges with the global level for adjustments. This decomposition makes distributed problem solving possible, with agents sharing information, strategies, and goals through language, thus advancing task execution together. + +For example, frameworks like AutoGen [103, 146, 107], Crew AI [114, 115], and LangGraph [111, 112] provide rich tools for building multi-agent solutions, supporting efficient cooperation and interaction between agents. Through these frameworks, developers can build virtual teams that leverage the strengths of different agents in distributed tasks. Additionally, LLMs-based multi-agent systems possess adaptive re-planning capabilities, enabling them to adjust in dynamic environments. When agents encounter changes or new information, they can quickly update strategies or re-plan tasks using language models, ensuring the system remains aligned with changing goals. + +Firstly, LLMs-based multi-agent environments have emerged as pivotal platforms for advancing research in multi-agent collaboration, reasoning, and task execution. For instance, environments such as Three-DWorld Multi-Agent Transport (TDW-MAT) [147, 148], Communicative Watch-And-Help (C-WAH) [99], + +Cuisineworld [149], and AgentScope [100] offer diverse settings for evaluating and enhancing multi-agent systems in various contexts, from household chores to gaming interactions and beyond. For instance, MindAgent [149] is a novel infrastructure for evaluating planning and coordination capabilities in gaming interactions, leveraging large foundation models (LFMs) to coordinate multi-agent systems, collaborate with human players. Communicative Watch-And-Help (C-WAH) [98, 99] is a realistic multi-agent simulation environment and an extension of the Watch-And-Help Challenge platform, VirtualHome-Social. AgentScope [100] is a robust and flexible multi-agent platform designed to empower developers in building advanced multi-agent systems by leveraging the potential of LLMs. + +Meanwhile, LLMs-based multi-agent systems have broad applications and great prospects [30, 116]. 
They can collaborate in robotic teams to perform complex tasks, such as product assembly or joint research, ensuring seamless interaction and cooperation [58, 59, 57]. In autonomous driving, LLMs help vehicles communicate, sharing real-time data and navigation strategies to achieve coordinated actions. Moreover, LLMs can support agents (such as drones) in disaster response, transmitting critical information to help systems efficiently respond to crises. Wu et al. [103] proposed AutoGen, an open-source framework for developing next-generation LLM applications through multi-agent conversations, allowing customizable agent interactions and integration of LLMs, human inputs, and tools. Xiao et al. [101] introduced Chain-of-Experts (CoE), a multi-agent framework that enhances reasoning in complex operations research (OR) problems using LLMs, with domain-specific roles and a conductor for guidance. Chen et al. [105] presented AgentVerse, a multi-agent framework inspired by human group dynamics, dynamically adjusting agent roles and composition to enhance complex task-solving across various domains. Chen et al. [107] developed AutoAgents, a framework that adaptively generates and coordinates multiple specialized agents for efficient task completion. Liu et al. [106] proposed Dynamic LLM-Agent Network (DyLAN), a framework that enhances LLM-agent collaboration through dynamic interactions based on task requirements. Zhang et al. [96] introduced CoELA, a Cooperative Embodied Language Agent framework that leverages LLMs to enhance multi-agent cooperation in complex, decentralized environments. Hong et al. [108, 109] proposed MetaGPT, a meta-programming framework that enhances LLMs-based multi-agent system collaboration using Standard Operating Procedures (SOPs). XAgent Team [110] developed XAgent, an + +open-source, LLM-driven autonomous agent framework for solving complex tasks using a dual-loop architecture for task planning and execution. Zheng et al. [113] introduced PlanAgent, a closed-loop motion planning framework for autonomous driving using multi-modal LLMs to generate hierarchical driving commands. Wang et al. [111, 112] developed LangGraph, a library for building stateful, multi-actor applications with LLMs, offering fine-grained control over workflows and state management. Zhang et al. [114, 115] introduced CrewAI, an open-source framework for coordinating AI agents in role-playing and autonomous operations, with a modular design for efficient collaboration. Hou et al. [97] proposed CoAct, a hierarchical multi-agent system leveraging LLMs for collaborative task execution. It features a global planning agent for task decomposition and strategy management, and a local execution agent for subtask implementation, feedback collection, and adaptive replanning, ensuring alignment with overarching goals. + +Despite the strong capabilities of LLMs in small to medium-sized multi-agent systems, scalability remains an open issue, particularly in maintaining coherent communication between large numbers of agents in large environments. As the number of agents increases, the complexity of coordinating their behaviors through language also intensifies. Finding a balance between agent autonomy and effective collaboration is a significant challenge. Additionally, LLMs are often seen as black-box models, meaning understanding the reasoning process behind an agent's decision-making can be difficult. The lack of transparency poses challenges for trust and debugging. 
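As a concrete illustration of the two-level organization described above (global planning agents plus local execution agents), the following minimal sketch wires a planner and an executor around a generic chat-completion call. The function `call_llm`, the prompts, and the success check are hypothetical placeholders, not the interfaces of AutoGen, CoAct, LangGraph, or any other cited framework.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call to any LLM backend."""
    raise NotImplementedError("plug in a concrete model client here")

class GlobalPlanner:
    """Global planning agent: decomposes the task and re-plans on failure reports."""

    def plan(self, task: str) -> list[str]:
        reply = call_llm(f"Decompose the following task into numbered subtasks:\n{task}")
        return [line.strip() for line in reply.splitlines() if line.strip()]

    def replan(self, task: str, feedback: str) -> list[str]:
        return self.plan(f"{task}\nThe previous attempt failed because: {feedback}")

class LocalExecutor:
    """Local execution agent: carries out one subtask and reports the outcome upward."""

    def __init__(self, role: str):
        self.role = role

    def execute(self, subtask: str) -> tuple[bool, str]:
        report = call_llm(f"As a {self.role}, carry out this subtask and report any error:\n{subtask}")
        return "error" not in report.lower(), report

def solve(task: str, planner: GlobalPlanner, executor: LocalExecutor, max_rounds: int = 3) -> bool:
    """One planner/executor collaboration loop with simple feedback-driven re-planning."""
    subtasks = planner.plan(task)
    for _ in range(max_rounds):
        for sub in subtasks:
            ok, report = executor.execute(sub)
            if not ok:
                subtasks = planner.replan(task, report)  # escalate the failure to the global level
                break
        else:
            return True  # every subtask succeeded
    return False
```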
+ 

In summary, LLMs-based multi-agent systems hold great potential in a variety of applications, offering an advanced way to model and solve complex decision-making problems that require high levels of coordination, adaptability, and communication between agents. By optimizing task decomposition, collaboration, and feedback mechanisms, LLMs bring unprecedented efficiency and flexibility to multi-agent systems. + +# 2.3. MARL-based Multi-Agent Decision-Making Taxonomies + +In multi-agent systems, where multiple autonomous agents interact with a shared environment and often with each other, the complexity of decision-making increases significantly. To achieve optimal performance, agents need to learn not only how to act individually but also how to coordinate with others. One of the central challenges in MARL-based multi-agent systems is determining how much information should be shared among agents during different phases of learning and deployment. The MARL research is typically structured into three primary paradigms: CTCE [150, 151, 152, 126, 123], DTDE [122, 153], and CTDE [124, 130, 51]. As illustrated in Figure 3, we will next explain the principles and differences of the three methods in conjunction with this conceptual framework diagram. + +![](images/a7c702553a9d5fa72fd71c9d66c7618d6f75fff815db9d11a1cdaba21c0183ca.jpg)
![](images/0bca6a2597b2b0575317349d7a3b6f6c565e49cb2666036c98971bd5711bbe0f.jpg)
![](images/4524d0bfb9ac0487743344e13bb71efc4a8b1faea35f3507db44ab1a20d4252d.jpg)
Figure 3: The paradigms visualization of CTDE (left), DTDE (centre), and CTCE (right), consisting of three crucial elements: agent (i.e., algorithm or model), environment, central controller (Optional). + +# 2.3.1. Centralized Training with Decentralized Execution (CTDE) + +As shown in the left of Figure 3, CTDE is a hybrid MARL approach that combines the strengths of both centralized and decentralized systems [124]. In CTDE, each agent possesses its own policy network, which is trained under the guidance of a central controller. This approach is characterized by a two-phase process: centralized training followed by decentralized execution. + +1. Centralized Training (Phase 1): During the training phase, the central controller collects data from all agents, including their observations, actions, and rewards. This centralized data aggregation allows the controller to oversee the learning process and facilitate the training of each agent's policy network.
2. Decentralized Execution (Phase 2): Once the training is complete, the central controller's involvement ceases, and each agent operates independently. At execution, agents make decisions based on their own observations using their trained policy networks. + +In some communication-constrained scenarios, agents often cannot share or fully share their observations of the environment. Instead, they must make decisions independently based on their own local observations and policies, which limits the applicability of fully centralized methods. To overcome this challenge, Kraemer et al. [124, 51] proposed the CTDE learning paradigm. The CTDE agents have access to global environmental state information and the observations of other agents during the training phase, allowing them to learn a joint policy together. However, during the execution phase, each agent relies solely on its own observations and the trained policy to make independent decisions. It combines the advantages of fully decentralized and fully centralized methods, effectively mitigating issues such as learning non-stationarity and the curse of dimensionality, making it the dominant paradigm in current MARL solutions. + +Under CTDE, MARL algorithms can primarily be categorized into three types based on their technical implementations: value function decomposition-based algorithms, actor-critic-based algorithms, and algorithms based on policy gradient methods, such as proximal policy optimization (PPO).
1. Value Decomposition-based Algorithms: Value decomposition-based algorithms mainly address the challenge of estimating the joint state-action value function (Q-value) in multi-agent systems, which is difficult due to the high dimensionality of the joint action space. Instead of directly estimating this joint value function, these algorithms decompose it into more manageable individual state-action value functions (Q-value) for each agent. During execution, each agent selects its action based on its own value function. In training, the joint value function is computed from individual value functions, and the temporal difference error of the joint value guides the learning of the individual functions. A key principle these algorithms must satisfy is the Individual-Global-Max (IGM) principle, ensuring that the actions maximizing the joint value are consistent with those maximizing individual values. Different algorithms use various methods to approximate or satisfy the IGM principle. + +Value Decomposition Networks (VDN) [85] is one of the earliest value decomposition-based algorithms in CTDE-based MARL models. VDN simplifies the estimation of the joint state-action value function by assuming that it can be represented as the sum of the individual state-action value functions of all agents. It means that the joint value function is obtained by simply adding up the individual value functions, which does not take into account the varying contributions of each agent's Q-value. However, the assumption made by VDN is a sufficient but not necessary condition for satisfying the IGM principle, which can limit its applicability. Additionally, VDN does not utilize global state information during training, further restricting its effectiveness in more complex environments. + +To address this issue, Rashid et al. [50] proposed the QMIX algorithm within the CTDE paradigm. QMIX assumes a monotonic nonlinear relationship between the joint state-action value function and the individual state-action value functions of agents. To implement this, QMIX introduces a mixing network that computes the joint state-action value function based on the individual Q-values of all agents. This mixing network is designed with non-negative parameters to ensure that the monotonicity assumption is met. QMIX has been successfully applied in various scenarios and is considered one of the most successful value decomposition algorithms to date. By enforcing a monotonic relationship between the joint action Q-values and individual Q-values, QMIX simplifies the policy decomposition process, facilitating decentralized decision-making. However, the monotonicity assumption, while sufficient for ensuring the Individual-Global-Max (IGM) principle, is not a necessary condition. This limitation restricts the algorithm's applicability in situations where an agent's optimal action depends on the actions of other agents.
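To make the contrast concrete, the following PyTorch sketch juxtaposes the two mixing rules just described: a VDN-style sum and a QMIX-style mixing network whose weights are generated from the global state and forced non-negative so that the joint value stays monotonic in every per-agent Q-value. Layer sizes and the single-layer hypernetworks are simplifications for illustration, not the reference implementations of [85] or [50].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def vdn_mix(agent_qs: torch.Tensor) -> torch.Tensor:
    """VDN-style mixing: Q_tot is the plain sum of the agents' chosen-action Q-values.
    agent_qs has shape (batch, n_agents)."""
    return agent_qs.sum(dim=1, keepdim=True)

class QMixer(nn.Module):
    """QMIX-style mixing: Q_tot is a state-conditioned monotonic function of the
    per-agent Q-values; abs() on the generated weights keeps dQ_tot/dQ_i >= 0."""

    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        # Hypernetworks map the global state (available during centralized training
        # only) to the weights and biases of the mixing network.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Linear(state_dim, 1)
        self.n_agents, self.embed_dim = n_agents, embed_dim

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        b = agent_qs.size(0)
        qs = agent_qs.view(b, 1, self.n_agents)
        w1 = torch.abs(self.hyper_w1(state)).view(b, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(b, 1, self.embed_dim)
        hidden = F.elu(torch.bmm(qs, w1) + b1)                      # (b, 1, embed_dim)
        w2 = torch.abs(self.hyper_w2(state)).view(b, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(b, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(b, 1)              # Q_tot: (b, 1)

# Example with 3 agents and a 16-dimensional global state.
qs, state = torch.randn(8, 3), torch.randn(8, 16)
print(vdn_mix(qs).shape, QMixer(3, 16)(qs, state).shape)  # torch.Size([8, 1]) twice
```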
+ +Weighted QMIX [54] builds upon QMIX and addresses this limitation by introducing a novel weighting mechanism during the projection of Q-values, which is widely used for cooperative MARL scenarios. In QMIX, the projection of Q-learning targets into the representable space is done with equal weighting across all joint actions, which can lead to suboptimal policy representations, even if the true optimal Q-values $(\mathbf{Q}^{*})$ are known. To overcome this, Weighted QMIX introduces two weighting schemes-Centrally-Weighted (CW) QMIX and Optimistically-Weighted (OW) QMIX-that place greater emphasis on the better joint actions during this projection process. The weighting schemes ensure that the correct maximal action is recovered for any set of joint action Q-values, effectively improving the algorithm's ability to learn optimal policies. These schemes in Weighted QMIX enhances the representational capacity of QMIX, demonstrating improved results on both predator-prey scenarios of Multi-Agent Particle Environment (MPE) [137] and the challenging StarCraft II benchmarks [154, 91, 134]. + +Since then, numerous methods building on value function decomposition have been developed. QPLEX [56] introduces a novel duplex dueling network architecture for multi-agent Q-learning, designed to nonlinearly decompose the joint state-action value function while embedding the IGM principle directly into the + +network structure. Fast-QMIX [52] enhances the original Q-MIX by dynamically assigning virtual weighted Q-values with an additional network, improving convergence speed, stability, and overall performance in cooperative multi-agent scenarios. QTRAN [53] introduces a more flexible factorization method that overcomes the structural limitations of Q-MIX, where the joint Q-value is constrained to be a monotonic function of the individual Q-values, thereby imposing a specific structural form on the factorization. Specifically, QTRAN introduces a necessary and sufficient condition for the IGM principle and incorporates two additional loss terms into the loss function to constrain the training of individual Q-value functions, ensuring they satisfy this IGM principle. + +2. Actor-Critic-based Algorithms: Actor-Critic-based algorithms [132, 60, 155] represent a foundational class of methods within the CTDE paradigm, offering a flexible and effective approach for tackling the challenges of multi-agent environments. These algorithms combine the strengths of policy optimization (actor) with value estimation (critic), allowing agents to learn robust and adaptive strategies in both cooperative and competitive settings. By leveraging a centralized critic during training, Actor-Critic-based methods [136, 137, 92] address key issues such as environmental non-stationarity and credit assignment, enabling effective policy optimization in dynamic and complex multiagent scenarios. Below, we discuss several prominent Actor-Critic-based approaches and their contributions to advancing MARL. + +MADDPG [132] is a typical Actor-Critic-based CTDE approach specifically designed to address the challenges of multi-agent environments, where agents engage in both cooperative and competitive interactions. Traditional reinforcement learning algorithms, such as + +Q-learning and policy gradient methods, struggle in multi-agent settings due to issues like non-stationarity where the environment constantly changes as other agents learn-and increased variance with the growing number of agents. 
MADDPG adapts the actor-critic framework by incorporating a centralized critic during training, which has access to the actions and observations of all agents. This centralized critic helps mitigate the non-stationarity problem by learning a more stable value function that considers the joint action space. During execution, however, each agent independently follows its policy (actor) based on local observations, enabling decentralized decision-making. It allows each agent to successfully learn and execute complex coordination strategies, outperforming existing methods in both cooperative and competitive multiagent environments. To address the computational challenges of continuous action spaces, Li et al. [156] extend the MADDPG algorithm to Multi-Agent Mutual Information Maximization Deep Deterministic Policy Gradient (M3DDPG) by incorporating a minimax approach to enhance robustness in multi-agent environments. M3DDPG introduce Multi-Agent Adversarial Learning (MAAL), which efficiently solves the minimax formulation, ensuring agents can generalize even when opponents' policies change and leading to significant improvements over existing baselines in mixed cooperative-competitive scenarios. + +Counterfactual Multi-Agent Policy Gradient (COMA) [157] is a cooperative algorithm based on the Actor-Critic framework that uses centralized learning to address the credit assignment problem in multi-agent settings. COMA employs a centralized critic to compute advantage functions for each agent, using counterfactual baselines to reduce policy dependencies among agents and improve learning efficiency. Each agent has its own policy network, but the shared centralized critic evaluates joint Q-values by considering the collective state and action information of all agents. This approach minimizes the negative impacts of policy dependencies and allows for a more comprehensive assessment of each agent's behavior, enhancing overall policy optimization. + +3. Proximal Policy Optimization-based Algorithms: Proximal Policy Optimization (PPO) [158] is a widely used CTDE reinforcement learning algorithm that has been adapted and extended to address challenges in MARL. Within the CTDE paradigm, PPO and its multi-agent variants have shown remarkable effectiveness in balancing policy optimization efficiency and stability. PPO was introduced by Schulman et al. [158] as an efficient policy gradient algorithm designed to im + +prove upon the trust region policy optimization (TRPO) framework [159]. PPO employs a clipped surrogate objective function that simplifies the trust region constraint in TRPO, allowing for stable updates without overly restrictive computational overhead. The key innovation of PPO lies in its ability to control the magnitude of policy updates through the clipping mechanism, which ensures that policies do not deviate excessively from their previous versions. + +In MARL, Multi-Agent PPO (MAPPO) [133] extends PPO to the centralized critic paradigm. MAPPO uses a centralized value function (critic) that evaluates joint states and actions during training, while agents execute independently using their decentralized policies. MAPPO has demonstrated superior performance in various cooperative and competitive multi-agent environments, such as the StarCraft II [134, 135] and Multi-Agent Particle Environment (MPE) [59, 136, 137] benchmarks. The centralized critic allows for improved credit assignment and non-stationarity handling during training, while the decentralized execution ensures scalability. 
While MAPPO leverages parameter sharing among agents, this assumption may not hold in heterogeneous-agent systems where agents differ in capabilities, objectives, or action spaces. + +To address this, Kuba et al. [160] proposed Heterogeneous-Agent Trust Region Policy Optimization (HATRPO) and Heterogeneous-Agent Proximal Policy Optimization (HAPPO). These algorithms remove the parameter-sharing assumption, allowing for individualized policy networks for each agent. HATRPO builds upon TRPO by introducing a sequential update scheme, where only one agent updates its policy at a time while the policies of other agents remain fixed. This approach ensures monotonic improvement in joint policies, as it approximates the Nash equilibrium under certain conditions, such as full observability and deterministic environments. HAPPO extends PPO in a similar vein, replacing parameter sharing with individualized policies. Like HATRPO, HAPPO employs a sequential update mechanism, but it retains the computational efficiency and practical simplicity of PPO's clipped objective function. + +Both HATRPO and HAPPO utilize a sequential update process where one agent updates its policy while others remain fixed. This prevents conflicts during policy optimization and ensures theoretical convergence to a stable joint policy. Moreover, HATRPO and HAPPO provide monotonic improvement guarantees under specific conditions. By removing the parameter-sharing constraint, these algorithms enable agents to learn tailored policies that account for their unique roles and + +capabilities. Both algorithms perform competitively in benchmark tasks, demonstrating their ability to scale to high-dimensional state-action spaces while maintaining robust coordination among agents. + +PPO-based algorithms, including MAPPO [133], HATRPO [160], and HAPPO [160], have revolutionized multi-agent reinforcement learning by combining the stability of PPO with the coordination benefits of centralized critics. These algorithms have proven effective across a wide array of cooperative and competitive MARL tasks, offering strong performance and scalability. + +3. Other Categories of Algorithms within the CTDE Paradigm: In addition to the well-established categories of Value Decomposition-based, Actor-Critic-based, and Proximal Policy Optimization (PPO)-based algorithms, the MARL research has seen significant advancements through innovative optimizations and enhancements within CTDE paradigm that are not confined to these traditional classifications. These approaches aim to address the inherent challenges of multi-agent environments, such as non-stationarity and limited communication, to improve overall cooperation and policy learning efficiency. + +For example, Centralized Advising and Decentralized Pruning (CADP) is a novel framework introduced by Zhou et al. [131] to address limitations in the CTDE paradigm. CADP enhances the training process by allowing agents to explicitly communicate and exchange advice during centralized training, thus improving joint-policy exploration. To maintain decentralized execution, CADP incorporates a smooth model pruning mechanism that gradually restricts agent communication without compromising their cooperative capabilities, demonstrating its superior performance on multiagent StarCraft II SMAC and Google Research Football benchmarks. 
CommNet [161] introduces a neural model where multiple agents learn to communicate continuously and collaboratively through a shared communication channel, optimizing their performance on fully cooperative tasks. The method allows agents to develop their own communication protocols during training, leading to improved coordination and task-solving capabilities. Mao et al. [87] introduced a novel Meta-MARL framework by integrating game-theoretical meta-learning with MARL algorithms under the CTDE framework, such as the Actor-Critic-based COMA [157]. This framework offers initialization-dependent convergence guarantees and significantly improves convergence rates by addressing related tasks collectively. Yun et al. [86] proposed a novel approach called Quantum Meta Multi-Agent Reinforcement Learning (QM2ARL), achieving high rewards, fast convergence, and quick adaptation in dynamic environments. QM2ARL leverages the unique dual-dimensional trainability of Quantum Neural Networks (QNNs) to enhance MARL. Liu et al. [95] proposed the Learning before Interaction (LBI) framework, which integrates a language-guided simulator into the multi-agent reinforcement learning pipeline to address complex decision-making problems. By leveraging a generative world model with dynamics and reward components, LBI generates trial-and-error experiences to improve policy learning, demonstrating superior performance and generalization on the StarCraft Multi-Agent Challenge benchmark [134, 135]. + +# 2.3.2. Decentralized Training with Decentralized Execution (DTDE) + +As shown in the centre of Figure 3, DTDE represents a fully decentralized mechanism where each agent interacts independently with the environment and updates its own policy based on its own observations and rewards [122]. In this framework, each agent trains and operates completely independently, relying only on its own observations and rewards to update its strategy. DTDE is particularly suited for environments with limited communication or no global coordination, offering strong scalability and robustness [123]. + +The core idea behind DTDE is the independence of agents [122]. Each agent interacts with its environment and learns without requiring information from others. This makes DTDE scalable, but it also introduces challenges such as non-stationarity, where the environment appears to change as other agents adapt their strategies. This characteristic makes DTDE a valuable and challenging area of research. The theoretical foundation of DTDE is often based on Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs). As described by Amato et al. [122, 124], a Dec-POMDP models a decentralized decision-making environment where agents operate independently with limited observations while aiming to maximize a shared reward. The decentralized nature of DTDE requires each agent to learn optimal strategies based on local information only. + +First and foremost, one of the earliest DTDE approaches is Independent Q-Learning (IQL) [127]. Here, each agent applies Q-learning independently, maintaining its own Q-function and updating it based on local observations and rewards. However, IQL faces several challenges, such as the non-stationary nature of the environment caused by other agents learning simultaneously. It also struggles with credit assignment, where it is hard to determine how an individual agent contributes to the team's success.
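The per-agent update just described can be written down in a few lines. The sketch below is a tabular illustration with hypothetical names; the IQL extensions discussed next (distributed, hysteretic, and lenient Q-learning) mainly modify the `update` step, for example by using different learning rates for positive and negative temporal-difference errors.

```python
from collections import defaultdict
import random

class IndependentQLearner:
    """Tabular independent Q-learning: each agent keeps its own Q-table and updates it
    from its local observations and rewards only, ignoring what other agents do."""

    def __init__(self, n_actions: int, alpha: float = 0.1, gamma: float = 0.95, epsilon: float = 0.1):
        self.q = defaultdict(lambda: [0.0] * n_actions)  # keyed by (hashable) local observation
        self.n_actions, self.alpha, self.gamma, self.epsilon = n_actions, alpha, gamma, epsilon

    def act(self, obs):
        # Epsilon-greedy over this agent's own Q-values.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[obs][a])

    def update(self, obs, action, reward, next_obs):
        # Standard one-step TD update; other agents appear only implicitly through the
        # (non-stationary) rewards and transitions this agent observes.
        target = reward + self.gamma * max(self.q[next_obs])
        self.q[obs][action] += self.alpha * (target - self.q[obs][action])
```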
To address these issues, several extensions of IQL have been proposed: + +- Distributed Q-Learning [128] optimistically assumes other agents always take optimal actions, focusing on learning from high-reward interactions. While effective in deterministic settings, it can be overly optimistic in environments with randomness. + +- Hysteretic Q-Learning [162] introduces two learning rates, one for positive updates and another, smaller rate for negative updates, balancing optimism with robustness in stochastic environments. + +- Lenient Q-Learning [163] dynamically adjusts how lenient the agent is in updating its values, depending on how frequently specific state-action pairs are encountered. It allows for more exploration in the early stages of learning while focusing on optimization later. + +As MARL problems become more complex, DTDE methods have been extended to deep learning. Deep Q-Networks [150, 151, 152, 126] and Deep Recurrent Q-Networks [164] have been adapted for decentralized settings, enabling agents to handle high-dimensional state and action spaces. Independent DRQN (IDRQN) [165], for example, combines DRQN with independent learning, but its asynchronous experience replay can cause instability. To solve this, Concurrent Experience Replay Trajectories (CERTs) [166] synchronize experience replay among agents, reducing non-stationarity and improving learning efficiency. Other advancements include Decentralized Hysteretic DRQN (Dec-HDRQN) [166], which combines hysteretic updates with deep neural networks and uses concurrent buffers to handle decentralized data. These methods have shown robust performance in partially observable environments. + +In the DTDE paradigm, policy gradient methods offer an alternative to value-based approaches, particularly for scenarios involving continuous action spaces [122]. Several policy gradient DTDE methods have been proposed: + +- Decentralized REINFORCE [129] independently optimizes each agent's policy using gradient ascent based on rewards observed during episodes. While simple, it is less sample-efficient. + +- Independent Actor-Critic (IAC) [157] combines value estimation (critic) with policy optimization (actor), enabling agents to learn faster and update more frequently, improving sample efficiency. + +- Independent Proximal Policy Optimization (IPPO) [158, 153, 133] extends Proximal Policy Optimization (PPO) to decentralized settings, improving policy stability by limiting how much policies can change between updates. + +Despite its advantages, DTDE still faces significant challenges [122, 123]: 1. Non-Stationarity: As other agents learn and adapt, the environment appears dynamic and unstable to each agent, making convergence difficult; 2. Credit Assignment: It is hard to determine how each agent's actions contribute to the team's overall reward in cooperative tasks; 3. Trade-Offs Between Scalability and Performance: While DTDE scales well, its performance may be limited in tasks requiring high levels of coordination. To overcome these challenges, future research could focus on improving communication strategies during training and more robust strategies for dynamic environments. + +In conclusion, the DTDE paradigm provides a powerful framework for solving distributed decision-making problems, balancing scalability, independence, and efficiency. It has been successfully applied in areas such as autonomous driving, distributed energy management, and swarm robotics.
As research continues, DTDE is expected to play a larger role in real-world multi-agent systems, especially in scenarios requiring robust, independent learning. + +# 2.3.3. Centralized Training with Centralized Execution (CTCE) + +As shown in the right of Figure 3, Centralized Training with Centralized Execution (CTCE) stands out as a fully centralized mechanism to MARL decision-making, where all agents transmit their information to a central controller [150, 151, 152, 126]. This central controller has access to the observations, actions, and rewards of all agents. The agents themselves do not possess policy networks and are not responsible for making decisions. Instead, they simply execute the directives issued by the central controller [125, 123]. + +Multi-Agent DQN [126] is a classic example of the CTCE paradigm, where DQN is combined with a parameter-sharing mechanism to address tasks in multi-agent environments. Gupta et al. [126] firstly extends three single-agent DRL algorithms (DQN [1, 2], TRPO, and A3C) to multi-agent systems, resulting in Multi-Agent-DQN, Multi-Agent-TRPO, and Multi-Agent-A3C. These approaches were designed to learn + +cooperative policies in complex, partially observable environments without requiring explicit communication between agents. The DQN algorithm based on multi-agent settings, also known as PS-DQN (Parameter Sharing DQN), effectively utilizes curriculum learning to handle increasing task complexity. By starting with fewer agents and gradually increasing the number, the model scales well to more complex scenarios. Further, this foundational work has led to numerous enhancements and variants based on Multi-Agent DQN, each designed to address specific challenges in multi-agent systems, such as CoRe [150], MARL-DQN [151], and [152]. CoRe [150] introduces a counterfactual reward mechanism into MARL to address the credit assignment problem in cooperative settings. By computing the difference in global rewards when an agent hypothetically changes its action while others keep theirs fixed, CoRe enhances the standard DQN framework, significantly improving learning efficiency and performance in cooperative tasks. MARL-DQN [151] optimizes energy efficiency and resource allocation in NOMA wireless systems by using MARL framework combined with Deep Q-Networks. By combining MARL with DQN, it dynamically adjusts power and time allocation to minimize energy consumption while ensuring quality of service, outperforming traditional methods in terms of efficiency and performance. Hafiz et al. [152] proposed a simplified and efficient multi-agent DQN-based multi-agent system (MAS) that addresses the challenges of complexity, resource demands, and training difficulties inherent in more advanced MARL frameworks. The work introduced a shared state and reward system while maintaining agent-specific actions, which streamlines the experience replay process. The significant improvements in tasks such as Cartpole-v1 $^1$ , LunarLander-v2 $^2$ , and Maze Traversal $^3$ from OpenAI Gym [167] demonstrates the model's effectiveness and superiority. + +# 2.3.4. Addition Taxonomies and Efforts of Communication-based MARL Algorithms + +As outlined above, three primary paradigms—CTCE, DTDE, and CTDE—have emerged in the MARL domain to tackle the challenges associated with training and execution in multi-agent systems. 
Each of these + +paradigms has its strengths and limitations, yet all face inherent difficulties in handling communication among agents, which is critical for effective collaboration and decision-making. + +Specifically, the CTCE paradigm, while providing a fully integrated framework for learning and execution, struggles with scalability as the system size grows. The DTDE paradigm, on the other hand, allows for independent agent training and execution, but often lacks the necessary coordination required for global task optimization. The CTDE paradigm has emerged as a widely adopted approach due to its ability to leverage centralized information during training to learn effective policies, while enabling decentralized execution to operate efficiently in distributed environments. However, even in CTDE, the communication between agents during execution is a bottleneck, prompting researchers to focus on improving communication strategies to enhance system performance. + +Communication-based MARL algorithms have made significant progress in overcoming these challenges. From the perspective of communication protocols and languages, communication-based MARL methods can be categorized into three types: broadcasting communication, targeted communication, and networked communication, as shown in Figure 4. From the technical angle, we provide an overview of these communication-based MARL advancements, categorizing the algorithms into three main groups based on their focus: (1) value function-based Communication-based MARL, (2) policy search-based Communication-based MARL, and (3) Communication-based MARL algorithms designed to improve communication efficiency. These approaches represent the forefront of research in enabling agents to effectively share information, coordinate actions, and optimize performance in complex environments. Here, we provide a detailed introduction to these approaches. + +Value Function-Based Communication-based MARL: For Value Function-Based Communication-based MARL Algorithms, several notable works include Differentiable Inter-Agent Learning (DIAL) [168] and Deep Distributed Recurrent Q-Networks (DDRQN) [169, 170]. Among them, DIAL facilitates effective collaboration and optimization of joint action policies by enabling the exchange of gradients of Q-functions between agents. On the other hand, DDRQN leverages recurrent neural networks to address partially observable environments, allowing agents to share critical Q-values or hidden states and make adaptive decisions in dynamic settings. + +Policy Search-Based Communication-based + +![](images/9250e939563f539e78b13d5c0bf791e94784531aabd5105d5e00674c0b29e491.jpg) +(1) Broadcasting communication + +![](images/f382e9eeb15ff5c88549bd6c0b57199c910ae4030f046b9be51bb2700944e725.jpg) +(2) Targeted communication + +![](images/494d99f68b7befe7e5aeb7bcab2ce12458baeb66a4548b45686ea81abbdb200e.jpg) +(3) Networked communication +Figure 4: A schematic representation of three distinct communication methods among agents, with arrows indicating the direction of message transmission. (a) Broadcasting communication: The activated agent transmits messages to all other agents within the communication network. (b) Targeted communication: Agents selectively communicate with specific target agents based on a supervisory mechanism that regulates the timing, content, and recipients of the messages. (c) Networked communication: Agents engage in localized interactions with their neighboring agents within the network. 
+ +MARL: For Policy Search-Based Communication-based MARL Algorithms, significant progress has been made with approaches such as Communication Networks (CommNet) [171], Bidirectional Coordinated Network (BiCNet) [172, 173], Multi-Agent Distributed MADDPG (MD-MADDPG) [174, 132], Intrinsic A3C [175, 155], and Multi-Agent Communication and Coordination (MACC) [176, 177]. Among them, CommNet [171] proposes a centralized but differentiable communication framework where agents share encoded signals to form a global context, improving policy decisions. BiCNet [172, 173] enhances coordination among agents by employing bidirectional recurrent layers, making it suitable for complex tasks. MD-MADDPG [174, 132] combines centralized training and decentralized execution, enabling agents to exchange critical state-action information during training for robust policy learning. Intrinsic A3C [175, 155] introduces intrinsic motivation to encourage effective exploration in sparse-reward scenarios, with agents sharing intrinsic rewards through communication to boost performance. Finally, Multi-Agent Communication and Coordination (MACC) [176, 177] focuses on adaptive communication mechanisms, providing stable and secure coordination to enhance training and execution in dynamic multi-agent environments. + +Communication-based MARL Enhancing Communication Efficiency: For algorithms aimed at enhancing communication efficiency, several outstanding approaches include Attentional Communication (ATOC) [138], Targeted Multi-Agent Communication + +(TarMAC) [139], Inter-Agent Centralized Communication (IC3Net) [178]. Attentional Communication (ATOC) [138] employs an attention mechanism to dynamically determine when communication is necessary, achieving a balance between efficiency and coordination. Targeted Multi-Agent Communication (TarMAC) [139] introduces targeted attention mechanisms to direct messages to relevant teammates, minimizing redundant communication, and improving overall performance. Inter-Agent Centralized Communication (IC3Net) [178] incorporates a gating mechanism that allows agents to learn when and how to communicate, optimizing both the frequency and quality of interactions. + +These research advances in Communication-based MARL methods demonstrate significant strides in enabling agents to share information and achieve coordinated decision-making in MAS. These advancements will pave the way for deploying MARL in real-world scenarios where efficient and effective communication is essential. + +# 2.4. LLMs-based Multi-Agent System Taxonomies + +The field of LLMs-based multi-agent systems has seen significant advancements, with researchers exploring various aspects of these systems to enhance their capabilities and applications [116, 30]. A comprehensive taxonomy can help categorize and understand the different dimensions of LLMs-based multi-agent systems, including architectural design, application domains, evaluation methods, and future research directions. + +# 2.4.1. Architectural Design + +The design of architectures for LLMs-based multiagent systems is a critical component in harnessing the full potential of LLMs to enhance the capabilities of autonomous agents. Architectural design encompasses the framework and mechanisms that enable agents to interact, adapt, and make decisions in complex and dynamic environments. This section explores two primary levels of autonomy within these systems: Adaptive Autonomy and Self-Organizing Autonomy. 
+ +- Adaptive Autonomy: [103, 146, 113, 110] Adaptive autonomy refers to systems where agents can adjust their behavior within a predefined framework. These agents are designed to operate within the constraints set by the system architects but can adapt their actions based on the specific requirements of the task at hand. For example, in a task-specific adaptation scenario, an agent might adjust its search strategy in an information retrieval task based on the relevance of the results. In a context-aware adaptation scenario, an agent might change its communication style based on the social context of the interaction. This level of autonomy is crucial for agents that need to operate in dynamic environments where the task requirements can change over time. + +- Self-Organizing Autonomy: [101, 103, 105, 102, 104, 154] Self-organizing autonomy represents a higher level of autonomy where agents can dynamically adapt their behavior without predefined structures. This allows for more flexible and context-aware interactions among agents. For instance, in dynamic task allocation, agents can assign tasks to each other based on the current state of the environment and their individual skills. Emergent behavior is another key feature at this level, where agents can form coalitions or develop new strategies to solve complex problems. This level of autonomy is essential for multi-agent systems that need to operate in highly dynamic and unpredictable environments. + +# 2.4.2. Applications + +In the social sciences [148, 99, 112], LLMs-based agents have been used to simulate various social phenomena, providing insights into human behavior and social dynamics. + +1) Economic Agents: [179, 101] LLMs can be used to model economic agents, similar to how economists use the concept of homo economicus. + +Experiments have shown that LLMs can produce results qualitatively similar to those of traditional economic models, making them a promising tool for exploring new social science insights. For example, in market simulation, LLMs can predict market trends and the impact of economic policies. In behavioral economics, LLMs can model individual and group decision-making processes, providing a more nuanced understanding of economic behavior. + +- 2) Social Network Simulation: [147, 148, 99, 180] The Social-network Simulation System (S3) uses LLMs-based agents to simulate social networks, accurately replicating individual attitudes, emotions, and behaviors. This system can model the propagation of information, attitudes, and emotions at the population level, providing valuable insights into social dynamics. For example, it can simulate how information spreads through social networks and identify influential nodes, or model the evolution of social norms and behaviors over time. + +- 3) User Behavior Analysis: [97, 111, 112] LLMs are employed for user simulation in recommender systems, demonstrating superiority over baseline simulation systems. They can generate reliable user behaviors, improving the accuracy of recommendations. For example, in personalized recommendations, LLMs can generate user profiles and behaviors to optimize recommendation algorithms. In user engagement, LLMs can simulate user interactions to optimize user retention and engagement. + +In the natural sciences [100, 149, 181], LLMs-based agents have been used to simulate complex systems and processes, providing insights into natural phenomena and scientific theories. 
+ +1) Macroeconomic Simulation: LLMs-based agents are used for macroeconomic simulation, making realistic decisions and reproducing classic macroeconomic phenomena. These agents can simulate the impact of economic policies on the macroeconomy, providing a more accurate and dynamic model of economic behavior. For example, they can simulate the interactions between different economic sectors and their impact on the overall economy, helping policymakers make more informed decisions. + +2) Generative Agent-Based Modeling: This approach couples mechanistic models with genera + +tive artificial intelligence to unveil social system dynamics, such as norm diffusion and social dynamics. By combining the strengths of both approaches, researchers can model complex social systems and predict their behavior over time. For example, they can model the spread of diseases in a population, the impact of environmental changes on ecosystems, or the evolution of social norms in a community. + +In engineering [103, 146, 114, 115], LLMs-based agents have been used to develop and optimize complex systems, improving efficiency and performance. + +- 1) Software Development: LLMs-based agents are used for software development, facilitating sophisticated interactions and decision-making in a wide range of contexts. These agents can assist in code generation, bug detection, and system optimization, improving the productivity and quality of software development. For example, they can generate code snippets based on natural language descriptions, detect bugs in code, and suggest optimizations to improve performance. + +- 2) Multi-Robot Systems: LLMs-based multi-agent systems are used to simulate complex real-world environments effectively, enabling interactions among diverse agents to solve various tasks. These systems can coordinate the actions of multiple robots, optimizing their behavior to achieve common goals. For example, they can be used in search and rescue operations, where multiple robots need to coordinate their actions to locate and rescue victims. + +# 3. Simulation Environments of Multi-Agent Decision-Making + +First and foremost, the designs and implementations of multi-agent cooperative simulation environments are crucial in the historical research of multi-agent decision-making, which are widely utilized in practical applications and production. These simulation environments form the foundation for conducting efficient and effective studies in multi-agent cooperative decision-making. Specifically, a dynamic multi-agent cooperative decision-making environment refers to predetermined scenarios and platforms where multiple agents collaborate to solve problems, complete tasks, and achieve goals. Such environments provide not only a platform for testing and validating various intelligent decision-making algorithms but also help + +us better understand the behaviors and interactions of agents in dynamic settings. By simulating these interactions, researchers can gain insights into how agents coordinate and adapt to changing conditions, thereby improving the robustness and efficiency of multi-agent systems in real-world applications. Consequently, the importance of these simulation environments cannot be overstated. They serve as a testing ground for theoretical models, allowing researchers to observe the practical implications of their intelligent algorithms. 
Additionally, these platforms help in identifying potential issues and refining strategies before deployment in actual scenarios, ensuring that the agents are well-prepared to handle the complexities of real-world environments. In Table 2, a wide range of simulated environments is listed. Next, we will delve into these environments one by one, emphasizing their significance and features for future development.

# 3.1. MARL-based Simulation Environments

This section provides an overview of several widely-used simulation environments designed for MARL. These platforms, such as the Multi-Agent Particle Environment [59, 136, 137] and PettingZoo [182], offer diverse scenarios and functionalities for exploring cooperative and competitive agent interactions in both simple and complex tasks.

Table 2: Diverse MARL-based and LLMs-based Simulated Environments for Multi-Agent Systems.

| Categories | Multi-Agent System Environments |
| --- | --- |
| MARL-based | Multi-Agent Particle Environment (MPE) [59, 136, 137], PettingZoo [182], SMAC [134], SMAC-v2 [135], GFootball [55], Gym-Microrts [183], MAgent [184], Dexterous Hands [185, 186], OpenAI Gym [167], Gym-MiniGrid [187]$^{4}$, Melting Pot [188]$^{5}$, Capture The Flag$^{6}$ [189], VillagerAgent [190], Minecraft [191, 192, 193], Unity ML-Agents [194], SUMO$^{7}$ [195], Hanabi Learning [196, 197], Predator-Prey [198, 199] |
| LLMs-based | TDW-MAT [147, 148], C-WAH [99], Cuisineworld [149], AgentScope [100], RoCoBench [200], Generative Agents [181, 201], SocialAI school [202, 203], Welfare Diplomacy [204] |

![](images/dcd89d2a1cccb373c66e14cf63b7c26091146804c5ab4fa00049a238a73e0215.jpg)
![](images/aaea582c151a81fb3696d62ff4d12a2da361daeb71969d0cef534808282b728a.jpg)
![](images/c5a5536b752b9767e384cd17a453f522e02db73bbd898ee3b1d32025f3d71938.jpg)
Figure 5: Typical scenarios in the Multi-Agent Particle Environment (MPE): Communication, Spread, and Reference.

# 3.1.1. Several Widely-used Environments on MARL

Multi-Agent Particle Environment (MPE) [59, 136, 137] is a versatile and widely-used MARL platform designed for research in both cooperative and competitive settings. Developed by OpenAI, it is primarily known for being the testing environment of the MADDPG algorithm [136]. MPE is a time-discrete, space-continuous 2D platform designed for evaluating MARL algorithms.

Figure 5, initially derived from Malloy et al. [137], illustrates various scenarios within MPE, including tasks such as adversarial interactions, cooperative crypto, object pushing, and team-based world navigation.
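A minimal random-policy rollout over one of these scenarios can be written against the PettingZoo port of MPE. The sketch below is illustrative only; version suffixes (e.g., `simple_spread_v3`) and the exact `reset` signature vary across PettingZoo releases.

```python
from pettingzoo.mpe import simple_spread_v3  # scenario module; version suffix depends on the release

# Parallel API: every live agent acts simultaneously at each step.
env = simple_spread_v3.parallel_env(N=3, max_cycles=25)
observations, infos = env.reset(seed=42)

while env.agents:  # the agent list empties once all agents terminate or truncate
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)

env.close()
```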
Compatible with the widely-used Gym interface, it supports a variety of tasks ranging from fully cooperative to mixed cooperative-competitive scenarios, such as simple_adversary, simple_crypto, simple_spread, simple_speaker_listener, and simple_world_comm$^{8}$. Each scenario highlights distinct cooperative and competitive dynamics among agents. MPE allows agents to interact and strategize within a visually simplistic UI where particles represent different entities. MPE is an open-source platform that is widely adopted in multi-agent system research, enabling extensive customization and contributing to its role as a standard tool for studying complex multi-agent dynamics.

Overall, MPE is a pivotal resource in the MARL community, offering a well-rounded platform for experimentation and algorithm comparison. Its design and functionality have made it an indispensable tool for researchers seeking to push the boundaries of what is possible in multi-agent systems.

StarCraft Multi-Agent Challenge (SMAC) [134] is a widely-used benchmark for MARL that focuses on decentralized micromanagement tasks in the popular real-time strategy game StarCraft II [10]. In SMAC, multiple agents control individual units and must learn to cooperate and coordinate actions based on local, partial observations. The agents face complex challenges, including coordinating combat techniques like focus fire, kiting, and positioning, while the opponent is controlled by the built-in StarCraft II AI. SMAC emphasizes problems such as partial observability, decentralized decision-making, and multi-agent credit assignment. The environment is structured to simulate real-world scenarios where agents must learn to collaborate without full knowledge of the global state. Agents' observations are restricted to a limited field of view, forcing them to rely on local information for decision-making. As shown in Figure 6, typical scenarios include 2s_vs_3z, 5m_vs_6m, 6h_vs_8z, and MMM2, where the numbers denote how many units are involved and the letters denote the unit types. In recent years, SMAC has become a standard benchmark for evaluating MARL algorithms, offering a rigorous and challenging environment for advancing the field.

![](images/8ce921f3f1029feaf7e4f9920d087711055b3f3b730fa94019ee7dfcc46a6eb1.jpg)
![](images/9ffcd120820e280580ba8b4135920ff1bb425175bb4ae45fd0b6b96459a05cdf.jpg)
![](images/c441759cf5f1f64bf32be940a7213d4e62acd6483c83f9d24fc127762b5ee44f.jpg)
![](images/062a4fad536eb0ec8c96ad5812fb9f6ca97a26bdff81ad9a24d287f63775e317.jpg)
Figure 6: Several typical scenarios in the StarCraft Multi-Agent Challenge (SMAC): 2s_vs_3z, 5m_vs_6m, 6h_vs_8z, and MMM2.

StarCraft Multi-Agent Challenge 2 (SMACv2) [135, 154, 91]: SMAC [134], however, has limitations, including insufficient stochasticity and partial observability, which allow agents to perform well with simple open-loop policies. To address these shortcomings, SMACv2 introduces procedural content generation (PCG), randomizing team compositions and agent positions, ensuring agents face novel, diverse scenarios. Several multi-agent decision-making scenarios are depicted in Figure 7, which are from Ellis et al. [135]. This requires more sophisticated, closed-loop policies that condition on both ally and enemy information.
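Both SMAC and SMACv2 expose the same decentralized interaction loop: each agent selects one of its currently available actions from a local observation, while a global state is reserved for centralized training. A minimal random-policy sketch, assuming the open-source `smac` package (a local StarCraft II installation and the SMAC map files are required):

```python
import numpy as np
from smac.env import StarCraft2Env

env = StarCraft2Env(map_name="5m_vs_6m")           # one of the standard SMAC scenarios
env_info = env.get_env_info()
n_agents = env_info["n_agents"]

for episode in range(3):
    env.reset()
    terminated, episode_reward = False, 0.0
    while not terminated:
        obs = env.get_obs()                        # per-agent local observations
        state = env.get_state()                    # global state, used only for centralized training
        actions = []
        for agent_id in range(n_agents):
            avail = env.get_avail_agent_actions(agent_id)
            actions.append(np.random.choice(np.nonzero(avail)[0]))  # pick a random available action
        reward, terminated, info = env.step(actions)
        episode_reward += reward
    print(f"episode {episode}: shared reward {episode_reward:.1f}")

env.close()
```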
Additionally, SMACv2 includes the Extended Partial Observability Challenge (EPO), where enemy observations are masked stochastically, forcing agents to adapt to incomplete information and communicate more effectively. SMACv2 thus represents a major evolution of the original benchmark, addressing key gaps such as the lack of stochasticity and meaningful partial observability. These changes make SMACv2 a more challenging environment, requiring agents to generalize across varied settings and improve coordination, communication, and decentralized decision-making. Overall, SMACv2 provides a more rigorous testbed for advancing the field of cooperative MARL.

![](images/5977daaa3a65dbecf1ae4ebd5d4273f9270a5994fc32afaecaa06a046981ff38.jpg)
![](images/cc1555bf214608175b7ecc0036ea0248cb323f3aa8b093c0775f6f0247e1a6cb.jpg)
Figure 7: Several scenarios from SMACv2 showing agents battling the built-in AI: (a) Terran, (b) Protoss.

Google Research Football Environment (GFootball) [55] is a state-of-the-art multi-agent simulation environment developed by the Google Research Brain Team. It is specifically designed for reinforcement learning research and is built on top of the open-source football game GamePlay Football. GFootball is compatible with the OpenAI Gym API, making it a versatile tool not only for training intelligent agents but also for allowing players to interact with the built-in AI or trained agents using a keyboard or game controller. GFootball features an advanced, physics-based 3D football simulator where agents can be trained to play football, offering a challenging yet highly customizable platform for testing novel reinforcement learning algorithms and ideas. GFootball is tailored for multi-agent experiments and multiplayer scenarios, enabling the exploration of more complex interactions and strategies. GFootball supports various scenarios, including full-game simulations with varying difficulty levels, as well as simpler tasks in the Football Academy that focus on specific skills like passing or scoring.

![](images/fe125f7ad7146c8d20a68dd683d2c487a0c2aa784c5c3f04bb080f05675d86ff.jpg)
![](images/e5ddbb87534db84e6d03407faae41692168b868eafe9607c3e2c250685d3d32e.jpg)
![](images/92ae51be2b8df48d17be35b8d6b2ad59fdf7b1204444fa6c3a8abf9fe3652419.jpg)
![](images/cd543be592269ea50a5b717ea12435ce30b9687643df4f91e24b491f70ec8d60.jpg)
Figure 8: Typical examples of training scenarios in the Football Academy.

Moreover, training agents for the "Football Benchmark" can be quite challenging. To help researchers efficiently test and iterate on new ideas, the authors provide a toolset called "Football Academy", as illustrated in Figure 8, which includes a series of scenarios with varying levels of difficulty. These scenarios range from simple setups, such as a single player scoring against an open goal (e.g., approaching an open goal, scoring in an open goal, or scoring while running), to more complex team-based setups, where a controlled team must break through specific defensive formations (e.g., scoring while running against a goalkeeper, passing and shooting against a goalkeeper, and 3v1 against a goalkeeper). Additionally, the toolset covers common situations in football matches, such as corner kicks, simple counterattacks, and complex counterattacks. Lastly, as a well-known open-source GitHub project$^{12}$, it offers a unique opportunity for researchers and pushes the boundaries of AI research in a reproducible and scalable manner.
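As a concrete illustration, a Football Academy drill can be instantiated through the public `gfootball` API roughly as follows. The scenario and keyword-argument names follow the open-source release and may differ between versions; the three-player drill is just an example.

```python
import gfootball.env as football_env

# Three controlled left-side players in an Academy drill with shaped rewards.
env = football_env.create_environment(
    env_name="academy_3_vs_1_with_keeper",
    representation="simple115v2",               # compact float-vector observations
    rewards="scoring,checkpoints",              # goal reward plus field-progression bonuses
    number_of_left_players_agent_controls=3,    # multi-agent control of one team
)

obs = env.reset()
done = False
while not done:
    actions = env.action_space.sample()         # one discrete action per controlled player
    obs, reward, done, info = env.step(actions)
env.close()
```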
+ +Unity Machine Learning-Agents Toolkit[13] [194] is an open-source platform designed to enable games and simulations to serve as environments for training intelligent agents. Built on Unity's powerful game engine, it supports a wide range of AI and machine learn + +ing methods, including reinforcement learning, imitation learning, and neuroevolution, through an intuitive Python API. The platform includes state-of-the-art algorithm implementations (based on PyTorch), allowing researchers and developers to train agents for 2D, 3D, and VR/AR applications. + +![](images/53875bfadf5937371eb9290dac80d5ffbbc6ef1faa61797072bc5b31078641a6.jpg) +Figure 9: Typical Training Scenarios in Unity Machine Learning-Agents Toolkit (released version: v0.11). From Left-to-right, up-to-down: (a) Basic, (b) 3DBall, (c) Crawler, (d) Push Block, (e) Tennis, (f) Worm, (g) Bouncer, (h) Grid World, (i) Walker, (j) Reacher, (k) Food Collector, (l) Pyramids, (m) Wall Jump, (n) Hallway, (o) Soccer Twos [194]. + +ML-Agents is particularly useful for training NPC behaviors in diverse scenarios, automated testing of game builds, and evaluating game design decisions. It features a highly flexible simulation environment with realistic visuals, physics-driven interactions, and rich task complexity. By integrating tools for creating custom environments and supporting multi-agent and adversarial settings, the toolkit bridges the gap between AI research and practical applications in game development. + +As seen from Figure 9, it depicts several typical multi-agent environments from the previous work of Juliani et al. [194]. The platform also provides key components such as a Python API, Unity SDK, and pre-built environments, enabling users to customize and evaluate their algorithms in Unity's interactive and visually rich settings. With its versatility and accessibility, Unity ML-Agents Toolkit has become an indispensable resource for both AI researchers and game developers, driving innovation in artificial intelligence and simulation-based learning. + +Gym-Microrts $^{14}$ [183] (pronounced "Gym-micro-RTS") is a fast and affordable reinforcement learning (RL) platform designed to facilitate research in fullgame Real-Time Strategy (RTS) games. Unlike traditional RTS research that demands extensive computational resources, Gym- $\mu$ RTS allows training advanced agents using limited hardware, such as a single GPU and CPU setup, within reasonable timeframes. Figure 10 showcases a match between our best-trained agent (top-left) and CoacAI (bottom-right), the $2020\mu$ RTS AI competition champion. The agent employs an efficient strategy, starting with resource harvesting and a worker rush to damage the enemy base, transitioning into midgame combat unit production to secure victory. + +![](images/f72b8927d7ad0f20dd420e3c0537c50cb2a61fc6a171923ecd883bbc96ce2f69.jpg) + +![](images/7ed8fe37724e932a2a3ab825bea994f6e768276e6ff3027446672d60c35542b5.jpg) + +![](images/b462ee5279fc07101d5d0eb3e7f398608f911c11288edf318519293f4d90f54c.jpg) +Figure 10: Screenshot of our best-trained agent (top-left) playing against CoacAI (bottom-right), the $2020\mu \mathrm{RTS}$ AI competition champion [183]. + +![](images/bf1aad07e6b6dfae82304cdea60c98c659bb3ab718316201042409f300a4afad.jpg) + +The platform offers a simplified RTS environment that captures the core challenges of RTS games, including combinatorial action spaces, real-time decision-making, and partial observability. 
Gym- $\mu$ RTS employs a low-level action space, enabling fine-grained control over individual units without AI assistance, which poses unique challenges and opportunities for RL algorithms. It supports Proximal Policy Optimization (PPO) and incorporates techniques like invalid action masking, action composition, and diverse training opponents to enhance training efficiency and agent performance. + +Gym- $\mu$ RTS has demonstrated its effectiveness by producing state-of-the-art DRL agents capable of defeating top competition bots, such as CoacAI. The platform is open-source and provides all necessary tools for + +researchers to explore and advance RL techniques in RTS games, making it a valuable resource for both AI researchers and gaming enthusiasts. + +$\mathbf{MAgent}^{15}$ [184] is an open-source platform specifically designed to support large-scale MARL research, with a focus on exploring Artificial Collective Intelligence (ACI). Unlike traditional MARL platforms, MAgent excels in handling scenarios involving hundreds to millions of agents, making it ideal for studying complex interactions and emergent behaviors in large populations. + +For instance, as shown in Figure 11, the "Pursuit" scenario is a classic example designed to showcase the emergent cooperative behaviors of agents in a predator-prey environment. In this setup, predators work together to capture preys while the preys attempt to evade capture. Each predator receives rewards for successfully attacking a prey, while preys are penalized if caught. Over time, predators learn to form collaborative strategies, such as surrounding and trapping preys, highlighting the emergence of local cooperation. + +![](images/a8b8ab058ef58b895e471b30f19076c894ad38342e7c3636432125fa0ff8fe44.jpg) +Adversarial Pursuit + +![](images/6a7a5cc929d96a2e693b7eb6dd6b52ad97c671cee999876ac5f06b04b6595221.jpg) +Combined Arms + +![](images/a02b8f1699d713c9af9acb477efb8ff6330f22c91869f18ec626c6f6d413cd20.jpg) +Gather +Figure 11: Illustrations of one of the typical running example in MAGent, called "Pursuit" [184]. + +![](images/fecce6d79d7fc54c61690b86b34ca2b5b7e04c3550cec69330b8de4f7f92f10e.jpg) +Tiger Deer + +The platform is based on a grid-world model where agents can perform actions such as moving, turning, or attacking, while perceiving both local and global + +information. Through a flexible Python interface, researchers can easily customize the state space, action space, and reward mechanisms, enabling the rapid creation of complex cooperative or competitive environments. MAgent comes with several built-in scenarios, such as pursuit, resource gathering, and team-based battles, which highlight emergent social behaviors like cooperative strategies, competitive dynamics, and resource monopolization. + +MAgent is highly scalable, leveraging GPU-based parallelism to simulate large-scale interactions efficiently. It also provides intuitive visualization tools for real-time observation of agent behaviors, facilitating analysis and debugging. Its flexibility and scalability make MAgent a powerful tool for MARL research, enabling the study of large-scale agent interactions, emergent behaviors, and the dynamics of artificial societies. + +# 3.2. LLMs Reasoning-based Simulation Environments + +LLMs-based multi-agent systems have become an essential tool for enhancing the collaboration, reasoning, and decision-making capabilities of autonomous agents [116]. 
By integrating LLMs with simulation platforms, researchers can create complex test environments to explore the interactions of multi-agent systems in various tasks and scenarios. These simulation environments not only provide rich dynamic testing scenarios but also promote the widespread application of LLMs in task planning, coordination, and execution. The following will introduce several widely used simulation platforms for LLM multi-agent systems. + +ThreeDWorld Multi-Agent Transport (TDW-MAT) $^{16}$ [147, 148] is a simulation environment for multi-agent embodied task, which is extended from the ThreeDWorld Transport Challenge [147] and is designed for visually-guided task-and-motion planning in physically realistic settings. It operates within the ThreeDWorld (TDW) platform, which offers high-fidelity sensory data, real-time physics-driven interactions, and near-photorealistic rendering. In TDW-MAT, embodied agents are tasked with transporting objects scattered throughout a simulated home environment using containers, emphasizing the need for coordination, physics awareness, and efficient planning. For instance, in the common scenario shown in Figure 12, the agent must transport objects scattered across multiple rooms and place them on the bed (marked with a green bounding box) in the bedroom. + +![](images/a1e4180f2a0493ad23fc08481d7cb72ffbfc367e8264189dd05626a0c1ddb953.jpg) +Figure 12: An overview of one of the example task in ThreeDWorld Transport Challenge [147, 148]. + +Communicative Watch-And-Help (C-WAH) $^{17}$ [99] is a realistic multi-agent simulation environment and an extension of the Watch-And-Help Challenge platform, VirtualHome-Social [98]. C-WAH places a greater emphasis on cooperation and enhances communication between agents compared to VirtualHome-Social. Built on the VirtualHome-Social, C-WAH simulates common household activities where agents must collaborate to complete tasks such as preparing meals, washing dishes, and setting up a dinner table. As shown in Figure 13, C-WAH supports both symbolic and visual observation modes, allowing agents to perceive their surroundings either through detailed object information or egocentric RGB and depth images. + +![](images/3ccc87a05c26f57ebdd77411aba1dbe0626dbca6891e3519086570eadf130ab7.jpg) +Figure 13: An typical object-moving task leveraging LLMs-based embodied agents within the Communicative Watch-And-Help [99]. + +Cuisineworld[18] [149] is a virtual kitchen environment designed to evaluate and enhance multi-agent collaboration and coordination (i.e., the working efficiency) in a gaming context. As shown in Figure 14, in this scenario, multiple agents work together to prepare + +and complete dish orders within a limited time frame. The tasks range from simple preparations, like chopping ingredients, to complex cooking processes that involve multiple tools and steps. CuisineWorld is equipped with a textual interface, and it supports various levels of difficulty, making it a flexible and robust testbed for assessing the planning and scheduling capabilities of Large Foundation Models (LFMs). The environment also introduces a "Collaboration Score" (CoS) metric to measure the efficiency of agent coordination as task demands increase, providing a comprehensive benchmark for multi-agent system performance in dynamic and cooperative settings. + +![](images/45d8ef4c1e22b8e2f0fce42e73adc7e8d124e191aac3818de1146886c84efdad.jpg) +Figure 14: An typical multi-agent cooperative scenario in the CuisineWorld platform [149]. 
+ +AgentScope[100] is a innovative, robust and flexible multi-agent platform designed to empower developers in building advanced multi-agent systems by leveraging the potential of LLMs. At its core, the platform employs a process-based message exchange mechanism, simplifying the complexities of agent communication and collaboration. This approach ensures smooth and efficient agent interaction, enabling developers to focus on designing workflows rather than low-level details. The platform stands out for its comprehensive fault-tolerance infrastructure, which includes retry mechanisms, rule-based corrections, and customizable error-handling configurations. AgentScope also excels in multi-modal support, seamlessly integrating text, images, audio, and video into its workflows. By decoupling data storage and transfer, it optimizes memory usage and enhances scalability, making it ideal for applications requiring rich multi-modal interactions. Additionally, its actor-based distributed framework enables efficient parallel execution and supports hybrid deployments, bridging the gap between local and distributed + +environments with ease. + +![](images/d6c01b2de5b8288035011a26518bccdfc88b0dd05ecce397647c732cb8a340e5.jpg) +Figure 15: The official multi-modal interaction Web UI page between agents in the AgentScope platform [100]. + +Moreover, to improve user interaction with multiple agents, AgentScope assigns distinct colors and icons to each agent, as shown in Figure 15, providing clear visual differentiation in both the terminal and web interface. Designed with user accessibility in mind, AgentScope provides intuitive programming tools, including pipelines and message centers, which streamline the development process. Its interactive user interfaces, both terminal- and web-based, allow developers to monitor performance, track costs, and engage with agents effectively. These features position AgentScope as a state-of-the-art platform for advancing multi-agent systems, combining ease of use with cutting-edge technology. + +RoCoBench $^{20}$ RoCoBench is a benchmark platform, proposed by Mandi et al. [200], designed to evaluate and enhance the collaborative capabilities of multirobot systems powered by LLMs. Built as an extension to the RoCo project $^{21}$ , RoCoBench provides a realistic simulation environment where robotic agents interact and collaborate to complete complex tasks, as shown in Figure 16, such as sorting packages, assembling components, or preparing a workspace. RoCoBench places a strong emphasis on communication-driven collaboration, integrating both symbolic and visual interaction modes to enable robots to perceive and respond to their environment effectively. Each robot is equipped with LLMs-powered reasoning, facilitating real-time dialogue and coordination. Correspondingly, the benchmark introduces a "Collaboration Efficiency Metric" (CEM) to evaluate the effectiveness of multirobot teamwork, taking into account factors like task completion time, resource allocation, and the quality + +of inter-robot communication. RoCoBench serves as a comprehensive platform for evaluating the potential of LLMs in driving dialectic multi-robot collaboration, offering a scalable and flexible environment for developers and researchers alike + +![](images/3dd879c3d4d448f314c8aa26b5c93b0fa8235c3e4bcc25f245fe5d866f8dd71e.jpg) +Figure 16: An overview of RoCoBench, a collection of six multirobot collaboration tasks set in a tabletop manipulation environment. 
The scenarios encompass a diverse range of collaborative challenges, each demanding distinct communication and coordination strategies between robots, incorporating familiar, intuitive objects designed to align with the semantic understanding capabilities of LLMs [200]. + +Generative Agents $^{22}$ Park et al. [181, 201] introduces Generative Agents, a groundbreaking framework for simulating human behavior in interactive virtual worlds. These agents exhibit realistic individual and collective behaviors by incorporating dynamic memory, self-reflection, and action planning capabilities. The system leverages LLMs to store, retrieve, and synthesize memories into higher-level reasoning, enabling agents to adapt their actions based on personal experiences and evolving environmental changes. As illustrated in Figure 16, they present an interactive sandbox environment called Smallville, akin to "The Sims," where 25 distinct virtual agents live, interact, and carry out daily activities. Each agent has a detailed initial profile, including personal traits, relationships, and goals, stored as "seed memories." Agents engage in natural language-based dialogues and demonstrate social behaviors such as hosting events, making new acquaintances, and responding to user interventions. Generative Agents enable interactive applications in fields such as simulating realistic social dynamics for games and training simulations; designing dynamic, non-scripted virtual worlds for interactive systems; and exploring theories and behaviors in a controlled yet realistic virtual setting. The evaluations revealed the critical role of memory retrieval, self-reflection, and action planning in achieving coherent agent behaviors. Common issues, + +such as exaggerated responses and overly formal communication, were identified as areas for improvement. Generative Agents push the boundaries of human behavior simulation, offering a robust framework for creating autonomous, memory-driven virtual agents. + +![](images/36660833d9b669e811ee508cad96ba8f3a4987062eebdfae9f1f1f18a635a33d.jpg) +Figure 17: Generative agents serve as realistic simulations of human behavior, designed for interactive applications. In a sandbox environment inspired by The Sims, twenty-five agents engage in activities such as planning their routines, sharing updates, building relationships, and collaborating on group events, while allowing users to observe and interact with them. [181, 201]. + +SocialAI school $^{23}$ Kovac et al. [202, 203] introduces The SocialAI School, a novel framework designed to explore and develop socio-cognitive abilities in artificial agents. The study emphasizes the importance of sociocognitive skills as foundational to human intelligence and cultural evolution. As shown in Figure 18, the SocialAI School provides a customizable suite of procedurally generated environments that enable systematic research into the socio-cognitive abilities required for artificial agents to interact with and contribute to complex cultures. Built on MiniGrid, it provides procedural environments for RL and LLMs-based agents to study social skills like joint attention, imitation, and scaffolding. Open-source and versatile, it enables diverse research, including generalizing social inferences, role reversal studies, and scaffolded learning. The SocialAI School represents a significant step toward enriching AI systems with socio-cognitive abilities inspired by human development. 
+ +Welfare Diplomacy[24] [204] is an innovative variant of the zero-sum game Diplomacy, designed to evaluate the cooperative capabilities of multi-agent systems. Unlike the original game, which focuses on a single winner, Welfare Diplomacy introduces a general-sum framework where players balance military con + +![](images/7d31292bade85ac419cca1f29f6b55c388eca7f69e9ee3bdf8fdeaa392730e54.jpg) +Figure 18: A clear workflow of an agent acting in the SocialAI school environment [202, 203]. + +quest with investments in domestic welfare. Players accumulate Welfare Points (WPs) throughout the game by prioritizing welfare over military expansion, and their total utility at the end of the game is determined by these points, removing the notion of a single "winner". Welfare Diplomacy enables clearer assessments of cooperation and provides stronger incentives for training cooperative AI agents. Players take on the roles of European powers, negotiating, forming alliances, and strategizing to compete for key supply centers. Orders are submitted and resolved simultaneously, with the goal of controlling a specified number of SCs to achieve victory, emphasizing a balance between cooperation and betrayal. Based on these rules, Welfare Diplomacy implements themselves via an open-source platform, and develops zero-shot baseline agents using advanced language models like GPT-4 [141, 140]. Experiments reveal that while these agents achieve high social welfare through mutual demilitarization, they remain vulnerable to exploitation, highlighting room for future improvement. + +In summary, these cutting-edge LLMs-powered simulation environments—ranging from task-specific platforms like TDW-MAT [148, 147] and CuisineWorld [149] to socially-driven frameworks such as Generative Agents [201] and the SocialAI School [203]—highlight the transformative potential of integrating advanced AI reasoning and multi-agent systems. By fostering research on collaboration, social cognition, and cooperative decision-making, these tools not only advance our understanding of AI's capabilities but also pave the way for practical applications in dynamic, real-world scenarios. + +![](images/027c51c8ae31617b6e731a7c9c023fae96a39cb5a7aa76015ca260b4832f553b.jpg) +Figure 19: The Balkans in the Diplomacy map in Welfare Diplomacy [204]. + +# 4. Practice Applications of Multi-Agent Decision-Making + +Multi-agent cooperative decision-making has a wide range of practical applications across various domains. In this section, we delve into the practical applications of multi-agent decision-making, focusing on how advanced methods, particularly multi-agent MARL, are employed to address complex challenges in dynamic and evolving environments. We explore the contributions of advanced multi-agent systems across domains such as agriculture, disaster rescue, military simulations, traffic management, autonomous driving, and multi-robot collaboration. A broad array of applications applications is presented in Table 3. In the following, we will provide a detailed introduction to these applications, highlighting their impact and potential for future advancements. + +# 4.1. MARL-based Intelligent Applications + +Recently, a variety of MARL methods have been developed, fostering efficient collaboration, strategic learning, and adaptive problem-solving in multi-agent systems [83, 24, 25, 28]. 
Below, we highlight notable contributions that demonstrate the application of MARL in enhancing multi-agent collaboration and performance.

Table 3: Categorized Applications of MARL and LLMs in Diverse Domains.

| Category | Application Area | Works / References |
| --- | --- | --- |
| MARL-based MAS | Smart Precision Agriculture & Disaster Rescue | Seewald et al. [11], Qazzaz et al. [12], Samad et al. [23], Boubin et al. [205], Li et al. [206], Mahajan et al. [207] |
| | Military Confrontations | Qi et al. [90], Benke et al. [208], Sutagundar et al. [209], Vangaru et al. [210], Wang et al. [211], MaCA [89], SMAC [134], SMAC-v2 [135] |
| | UAV Pursuit-Evasion & Swarm Communications & Navigation | Kouzeghar [88], Alexopoulos et al. [212], Luo et al. [213], Lv et al. [93], Xue et al. [20], Rezwan et al. [21], Baroomi et al. [22] |
| | Traffic Signal/Flow Control | Wang [214], Chu et al. [92], Aboueleneen et al. [215], Yu et al. [216], Sun et al. [217], Azfar et al. [218], Bokade et al. [219], Kwesiga et al. [220], Zhang et al. [221] |
| | Autonomous Driving | Xue et al. [222], Liu et al. [19], Wen et al. [18], Jayawardana et al. [17], Liu et al. [223], Formanek et al. [224], Zhang et al. [225], Kotoku et al. [226], Hua et al. [227] |
| | Multi-Robot Collaboration | Georgios et al. [13], Silva et al. [14], Huang et al. [15], Cena et al. (SMART) [16], Kevin (SCRIMMAGE) [228], Liu et al. [95] |
| LLMs-based MAS | Multi-Agent Collaboration | Wu et al. (AutoGen) [103], Xiao et al. (CoE) [101], Chen et al. (AgentVerse) [105], Liu et al. (DyLAN) [106], Zhang et al. (CoELA) [96] |
| | Gaming Interaction | Xu et al. (LLM-Werewolf) [58], Gong et al. (MindAgent) [149], Xie et al. [229], Lin et al. [230], Jia et al. (GameFi) [231], Yin et al. (MIRAGE) [232], Zhang et al. (DVM) [233], Bonorino et al. [234] |
| | Autonomous Driving | Zheng et al. (PlanAgent) [113], Luo et al. (SenseRAG) [235], Mahmud et al. [236], Peng et al. (LearningFlow) [237], Karagounis et al. [238], Luo et al. [239], Gao et al. [240], Hegde et al. [241] |
| | Multi-Modal Application | Wang et al. (LangGraph) [111, 112], Zhang et al. (CrewAI) [114, 115], Zheng et al. (PlanAgent) [113], Wang et al. (MLLM-Tool) [102] |

In smart precision agriculture and continuous pest and disease detection, Seewald et al. [11] addressed the challenge of continuous exploration for multi-agent systems with battery constraints by integrating ergodic search methods with energy-aware coverage. In disaster rescue, Qazzaz et al. [12] proposed a novel technique using a multi-Q-learning reinforcement learning algorithm to optimize UAV connectivity operations in challenging terrain. Samad et al. [23] present a cloud-based multi-agent framework for efficiently managing aerial robots in disaster response scenarios, aiming to optimize rescue efforts by autonomously processing real-time sensory data to locate and assist injured individuals.

In military confrontations, Qi et al. [90] designed a distributed MARL framework based on the actor-worker-learner architecture, addressing the issues of slow sample collection and low training efficiency in MARL within the MaCA [89] and SMAC [134, 135] 3D real-time military simulation environments. Benke et al. [208] proposed a computational model for agent decision-making that incorporates strategic deception, enhancing the representation of deceptive behaviors in multi-agent simulations for military operations research. Sutagundar et al. [209] proposed a Context Aware Agent based Military Sensor Network (CAMSN) that uses node- and sink-driven contexts to form an improved infrastructure for multi-sensor image fusion.

In the field of efficient limited-bandwidth communication, Wang et al. [242] proposed a method called IMAC (Informative Multi-Agent Communication) to address the problem of limited-bandwidth communication in MARL.

In research on UAV swarm communications against jamming, Lv et al. [93] proposed a MARL-based scheme to optimize relay selection and power allocation. This strategy leverages network topology, channel states, and shared experiences to improve policy exploration and stability, ultimately enhancing anti-jamming performance.

In UAV pursuit-evasion [20, 21, 22], Kouzeghar [88] proposed a decentralized heterogeneous UAV swarm approach for multi-target pursuit using MARL techniques and introduced a variant of MADDPG [132] to address pursuit-evasion scenarios in non-stationary environments with random obstacles. Alexopoulos et al. [212] addressed the challenge of pursuit-evasion games involving two pursuing and one evading unmanned aerial vehicle (UAV) by introducing a hierarchical decomposition of the game. Luo et al. [213] proposed a cooperative maneuver decision-making method for multi-UAV pursuit-evasion scenarios using an improved MARL approach, which incorporates an enhanced CommNet network with a communication mechanism to address multi-agent coordination.

In large-scale traffic signal/flow control, Wang [214] proposed a curiosity-inspired algorithm to optimize safe and smooth traffic flow in autonomous vehicle on-ramp merging; Chu et al. [92] proposed a fully scalable
+ +and decentralized multi-agent deep reinforcement learning algorithm based on the advantage actor-critic (A2C) method. + +In autonomous driving area, a large number of superior multi-agent decision-making algorithms and models are continuously being explored and devised. Xue et al. [222] developed a two-stage system framework for improving Multi-Agent Autonomous Driving Systems (MADS) by enabling agents to recognize and understand the Social Value Orientations (SVOs) of other agents. Liu et al. [19] proposed the Personality Modeling Network (PeMN), which includes a cooperation value function and personality parameters to model diverse interactions in highly interactive scenarios, addressing the issue of diverse driving styles in autonomous driving. Wen et al. [18] proposed a safe reinforcement learning algorithm called Parallel + +Constrained Policy Optimization (PCPO) based on the actor-critic architecture to address the issues of unexplainable behaviors and lack of safety guarantees in autonomous driving. Jayawardana et al. [17] proposed enabling socially compatible driving by leveraging human driving data to learn a social preference model, integrating it with reinforcement learning-based AV policy synthesis using Social Value Orientation theory. + +In multiple robots collaborative fields, Georgios et al. [13] introduces a novel cognitive architecture for large-scale multi-agent Learning from Demonstration (LfD), leveraging Federated Learning (FL) to enable scalable, collaborative, and AI-driven robotic systems in complex environments. Silva et al. [14] address the challenges and limitations in evaluating intelligent collaborative robots for Industry 4.0. The review emphasizes the urgent need for improved evaluation methods and + +standards to account for the complexities posed by human variability, AI integration, and advanced control systems in industrial environments. Huang et al. [15] presents a multi-agent reinforcement learning approach using the MADDPG algorithm, enhanced with an experience sample optimizer, to train swarm robots for autonomous, collaborative exploration on Mars. This approach outperforms traditional DRL algorithms in efficiency as the number of robots and targets increases. The SMART multi-agent robotic system [16] is a comprehensive and advanced platform designed for executing coordinated robotic tasks. It integrates both hardware components, such as robots and IP-C Cameras, and software agents responsible for image processing, path planning, communication, and decision-making. By utilizing Work-Flow Petri Nets for modeling and control, the system effectively ensures coordination and successful task execution even in unstructured environments. + +Furthermore, the well-known project, Simulating Collaborative Robots in a Massive Multi-agent Game Environment (SCRIMMAGE)[25] [228], tackles the high costs of field testing robotic systems by offering a flexible and efficient simulation environment specifically designed for mobile robotics research. Unlike many existing simulators that are primarily designed for ground-based systems with high-fidelity multi-body physics models, SCRIMMAGE focuses on simulating large numbers of aerial vehicles, where precise collision detection and complex physics are often unnecessary. SCRIMMAGE is designed to be highly adaptable, with a plugin-based architecture that supports various levels of sensor fidelity, motion models, and network configurations. 
This flexibility allows the simulation of hundreds of aircraft with low-fidelity models or a smaller number with high-fidelity models on standard consumer hardware. Overall, SCRIMMAGE[26] provides a robust and scalable solution for testing and refining robotic algorithms in a controlled virtual environment, significantly reducing the risks and costs associated with physical testing. + +Liu et al. [95] proposed the Learning before Interaction (LBI) framework, a novel approach designed to enhance multi-agent decision-making through generative world models. Traditional generative models struggle with trial-and-error reasoning, often failing to produce reliable solutions for complex multi-agent tasks. To address this limitation, LBI integrates a language- + +guided simulator into the MARL pipeline, enabling agents to learn grounded reasoning through simulated experiences. LBI consists of a world model composed of a dynamics model and a reward model. The dynamics model incorporates a vector quantized variational autoencoder (VQ-VAE) [243] for discrete image representation and a causal transformer to autoregressively generate interaction transitions. Meanwhile, the reward model employs a bidirectional transformer trained on expert demonstrations to provide task-specific rewards based on natural language descriptions. LBI further distinguishes itself by generating explainable interaction sequences and reward functions, providing interpretable solutions for multi-agent decision-making problems. By addressing challenges such as the compositional complexity of MARL environments and the scarcity of paired text-image datasets, LBI represents a significant advancement in the field. + +Ye et al. [244] proposed an adaptive genetic algorithm (AGA) that dynamically adjusts crossover and mutation populations, leveraging the Dubins car model and state-transition strategies to optimize the allocation of heterogeneous UAVs in suppression of enemy air defense missions. Radac et al. combine two modelfree controller tuning techniques linear virtual reference feedback tuning (VRFT) and nonlinear state-feedback Q-learning as a novel mixed VRFT-Q learning method [94]. VRFT is initially employed to determine a stabilizing feedback controller using input-output experimental data within a model reference tracking framework. Subsequently, reinforcement Q-learning is applied in the same framework, utilizing input-state experimental data gathered under perturbed VRFT to ensure effective exploration. Extensive simulations on position control of a two-degrees-of-motion open-loop stable multi input-multi output (MIMO) aerodynamic system (AS) demonstrates the mixed VRFT-Q's significant performance improvement over the Q-learning controllers and the VRFT controllers. + +To address the lack of a general metric for quantifying policy differences in MARL problems, Hu et al. [84] proposed the Multi-Agent Policy Distance (MAPD), a tool designed to measure policy differences among agents. Additionally, they developed a Multi-Agent Dynamic Parameter Sharing (MADPS) algorithm based on MAPD, demonstrating its effectiveness in enhancing policy diversity and overall performance through extensive experiments. To addresses the challenge of cooperative MARL in scenarios with dynamic team compositions, Wang et al. [245] propose using mutual information as an augmented reward to encourage robustness in agent policies across different team configurations. 
+ +They develop a multi-agent policy iteration algorithm with a fixed marginal distribution and demonstrate its strong zero-shot generalization to dynamic team compositions in complex cooperative tasks. Progressive Mutual Information Collaboration (PMIC) $^{27}$ is a novel framework that leverages mutual information (MI) to guide collaboration among agents, thereby enhancing performance in multi-agent cooperative tasks [246]. The key innovation of is its dual MI objectives: maximizing MI associated with superior collaborative behaviors and minimizing MI linked to inferior ones, ensuring more effective learning and avoiding sub-optimal collaborations. Wai et al. [83] proposes a novel double averaging primal-dual optimization algorithm for MARL, specifically targeting decentralized applications like sensor networks and swarm robotics. The MARL algorithm enables agents to collaboratively evaluate policies by incorporating neighboring gradient and local reward information, achieving fast finite-time convergence to the optimal solution in decentralized convex-concave saddle-point problems. To address the challenge of sparse rewards in MARL, Kang et al. [247] introduce the Dual Preferences-based Multi-Agent Reinforcement Learning (DPM) framework. DPM extends preference-based reinforcement learning (PbRL) by incorporating dual preference types-comparing both trajectories and individual agent contributions-thereby optimizing individual reward functions more effectively. DPM also leverages LLMs to gather preferences, mitigating issues associated with human-based preference collection. Experimental results in the StarCraft Multi-Agent Challenge (SMAC) [135] demonstrate that DPM significantly outperforms existing baselines, particularly in sparse reward settings. + +Traditional methods like soft attention struggle with scalability and efficiency in LMAS due to the overwhelming number of agent interactions and large observation spaces. To address these challenges of largescale multi-agent systems (LMAS) involving hundreds of agents, University of Chinese Academy of Sciences [248] introduces the Concentration Network (ConcNet), a novel reinforcement learning framework. ConcNet mimics human cognitive processes of concentration by prioritizing and aggregating observations based on motivational indices, such as expected survival time and state value. It allows the system to focus on the most relevant entities, enhancing decision-making efficiency in complex environments. In ConcNet, a novel concentration policy gradient architecture was further proposed, + +demonstrating its superior performance and scalability in large-scale multi-agent scenarios, such as decentralized collective assault simulations. This research represents a significant advancement in the field, providing a scalable solution for effective decision-making in large-scale multi-agent environments. + +In conclusion, MARL-based intelligent applications have shown exceptional adaptability across diverse domains such as autonomous driving, UAV systems, disaster response, and collaborative robotics [242, 88, 20, 19, 13, 16]. Key innovations, including communication-enhanced learning [168, 161, 174, 139], adaptive policy optimization, and mutual information [246] frameworks, have significantly advanced the field. While challenges like sparse rewards and scalability remain, these advancements highlight MARL's potential to address dynamic and complex multi-agent environments effectively, paving the way for further impactful developments. 
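Many of the methods surveyed above, MADDPG and its variants in particular, share the centralized-training-with-decentralized-execution (CTDE) pattern: each actor acts from its local observation, while a critic scores the joint observation-action during training. The PyTorch sketch below illustrates only this structural idea with toy dimensions and untrained networks; it is not a reproduction of any specific cited implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Actor(nn.Module):
    """Decentralized actor: maps one agent's local observation to action logits."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, obs):
        return self.net(obs)

class CentralizedCritic(nn.Module):
    """Centralized critic: scores the joint observation-action, used only during training."""
    def __init__(self, joint_obs_dim, joint_act_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(joint_obs_dim + joint_act_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

# Toy dimensions: 3 agents, 10-dim local observations, 5 discrete actions each.
n_agents, obs_dim, n_actions = 3, 10, 5
actors = [Actor(obs_dim, n_actions) for _ in range(n_agents)]
critic = CentralizedCritic(n_agents * obs_dim, n_agents * n_actions)

obs = torch.randn(1, n_agents, obs_dim)                          # one joint observation
logits = [actors[i](obs[:, i]) for i in range(n_agents)]         # decentralized execution
acts = [F.one_hot(l.argmax(dim=-1), n_actions).float() for l in logits]
q_joint = critic(obs.reshape(1, -1), torch.cat(acts, dim=-1))    # centralized evaluation
print(q_joint.shape)  # torch.Size([1, 1])
```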
+ +# 4.2. LLMs reasoning-based Intelligent Applications + +To address diverse and complex challenges, a variety of frameworks leveraging LLMs have been developed, enabling advanced reasoning, collaboration, and task execution in multi-agent systems [116, 30, 57]. Below, we highlight notable contributions that demonstrate the application of LLMs in enhancing multi-agent decision-making and coordination. + +Wu et al. [103] introduced AutoGen, an open-source framework designed to enable the development of next-generation LLM applications through multi-agent conversations. AutoGen allows for customizable agent interactions and the integration of LLMs, human inputs, and tools to collaboratively solve complex tasks. Xiao et al. [101] proposed Chain-of-Experts (CoE), a novel multi-agent framework designed to enhance reasoning in complex operations research (OR) problems using LLMs. Chen et al. [105] presented AgentVerse, a multiagent framework designed to facilitate collaboration among autonomous agents, inspired by human group dynamics. AgentVerse dynamically adjusts the composition and roles of agents throughout the problem-solving process, enhancing their ability to tackle complex tasks across various domains, including text understanding, reasoning, coding, and embodied AI. The framework consists of four stages: Expert Recruitment, Collaborative Decision-Making, Action Execution, and Evaluation. Chen et al. [107] introduced AutoAgents, a framework capable of adaptively generating and coordinating multiple specialized agents based on different tasks, thereby constructing efficient multi-agent teams + +to accomplish complex tasks. Liu et al. [106] proposed the Dynamic LLM-Agent Network (DyLAN), a framework designed to enhance LLM-agent collaboration by enabling agents to interact dynamically based on task requirements, rather than within a static architecture. Xu et al. [58] proposed a novel multi-agent framework that combines LLMs with reinforcement learning to enhance strategic decision-making and communication in the Werewolf game $^{28}$ , effectively overcoming intrinsic biases and achieving human-level performance. Wen et al. [104] introduce the Multi-Agent Transformer (MAT), a novel architecture that frames cooperative MARL as a sequence modeling problem. Experiments on StarCraftII [154, 91, 134], Multi-Agent MuJoCo (MAMuJoCo) [249], Dexterous Hands Manipulation [185, 186], and Google Research Football [55] benchmarks demonstrate that it achieves superior performance and data efficiency by leveraging modern sequence models in an on-policy learning framework. Wang et al. [102] introduced MLLM-Tool $^{29}$ , a multimodal tool agent system that integrates open-source LLMs with multimodal encoders, enabling it to process visual and auditory inputs for selecting appropriate tools based on ambiguous multimodal instructions. Moreover, they introduced ToolMMBench, a novel benchmark with multi-modal inputs and multioption solutions, demonstrating its effectiveness in addressing real-world multimodal multi-agent scenarios. Zhang et al. [96] introduce CoELA, a Cooperative Embodied Language Agent framework that leverages LLMs to enhance multi-agent cooperation in complex, decentralized environments. 
CoELA integrates LLMs with cognitive-inspired modules for perception, memory and execution, allowing agents to plan, communicate, and collaborate effectively on long-horizon tasks, outperforming traditional planning-based methods such as Multi-Agent Transformer(MAT) [104], and showing promising results in human-agent interaction simulation environments, Communicative Watch-And-Help (C-WAH) [99] and ThreeDWorld Multi-Agent Transport (TDW-MAT) [147, 148]. Gong et al. [149] from Team of Li.FeiFei. introduce MindAgent, a novel infrastructure for evaluating planning and coordination capabilities in gaming interactions, leveraging large foundation models (LFMs) to coordinate multi-agent system (MAS), collaborate with human players, and enable in-context learning. Their team also present + +"Cuisineworld", a new gaming scenario and benchmark for assessing multi-agent collaboration efficiency. Despite LLMs' success in various collaborative tasks, they struggle with spatial and decentralized decision-making required for flocking. Li et al. [57] explored the challenges faced by LLMs in solving multi-agent flocking tasks, where agents strive to stay close, avoid collisions, and maintain a formation. Sun et al. [180] proposed Corex, a novel framework that enhances complex reasoning by leveraging multi-model collaboration. Inspired by human cognitive processes, Corex employs three collaborative paradigms-Discuss, Review, and Retrieve-where different LLMs act as autonomous agents to collectively solve complex tasks. Corex empowers LLM agents to "think outside the box" by facilitating collaborative group discussions, effectively mitigating the cognitive biases inherent in individual LLMs. This approach not only enhances performance but also improves cost-effectiveness and annotation efficiency, offering a significant advantage in complex tasks. + +Next, we will provide a detailed introduction to some outstanding achievements in the application of LLMs for multi-agent collaborative task execution. + +MetaGPT: Existing LLMs-based multi-agent systems often struggle with complex tasks due to logical inconsistencies and cumulative hallucinations, leading to biased results. Hong et al. [108, 109] from DeepWisdom $^{31}$ proposed MetaGPT $^{32}$ , an innovative metaprogramming framework designed to enhance the collaboration capabilities of LLMs-based multi-agent systems. MetaGPT integrates Standard Operating Procedures (SOPs) commonly used in human workflows, thereby constructing a more efficient and coherent multi-agent collaboration system. MetaGPT employs an assembly-line approach, breaking down complex tasks into multiple subtasks and assigning them to agents with specific domain expertise. These agents collaborate during task execution through clearly defined roles and structured communication interfaces, reducing the risk of information distortion and misunderstanding. In summary, MetaGPT offers a flexible and powerful platform for developing LLMs-based multi-agent systems. Its unique meta-programming framework and rigorous workflow design enable it to excel in handling complex tasks, greatly advancing the field of multi-agent collaboration research. + +CoAct: Hou et al. [97] proposed CoAct33, a multi- + +agent system based on LLMs designed for hierarchical collaboration tasks. The framework consists of six stages: task decomposition, subtask assignment and communication, subtask analysis and execution, feedback collection, progress evaluation, and replanning when necessary. 
The global planning agent plays a critical role in managing complex tasks. The local execution agent is responsible for executing specific subtasks. This hierarchical framework demonstrates strong adaptability and performance, particularly in complex real-world tasks requiring dynamic replanning and collaborative execution. + +AutoGen: Microsoft [103, 146] introduced AutoGen $^{34}$ , a flexible framework for creating and managing multiple autonomous agents to collaboratively complete complex tasks, particularly in programming, planning, and creative writing domains. AutoGen allows users to define distinct agent roles, including specialists, general assistants, and decision-makers, ensuring clear task division and effective coordination. Agents interact in a structured conversational environment, exchanging messages to resolve tasks iteratively. AutoGen introduces feedback loops where agents analyze outputs, refine strategies, and optimize task completion autonomously. Notably, it supports integration with various LLMs, offering developers the flexibility to replace APIs without altering code significantly. In summary, AutoGen facilitates scalable, efficient, and robust multi-agent collaboration, demonstrating potential for applications ranging from enhanced ChatGPT systems to real-world industrial workflows. + +XAgent: XAgent Team $^{35}$ [110] developed XAgent $^{36}$ is an open-source, LLMs-driven autonomous agent framework designed for solving complex tasks automatically and efficiently. As shown in Figure 20, it employs a dual-loop architecture: the outer loop for high-level task planning and coordination, and the inner loop for executing subtasks. The PlanAgent in the outer loop generates an initial plan by breaking a complex task into manageable subtasks, organizing them into a task queue. It iteratively monitors progress, optimizes plans based on feedback from the inner loop, and continues until all subtasks are completed. The inner loop utilizes ToolAgents, which employ various tools like file editors, Python notebooks, web browsers, and shell interfaces within a secure docker environment to execute subtasks. XAgent emphasizes autonomy, safety, and extensibility, allowing users to add new agents or tools + +![](images/3b2c1190be538a01a6e3349b9ec31c50208a6352acfb17257690f6b17ec0e736.jpg) +Figure 20: An structure overview of the XAgents framework, highlighting the Task Node as the starting point, the sequence of Sub-Task Nodes forming the Task Execution Graph (TEG), and the Fusion Node integrating outputs for the final result [110]. + +to enhance functionality. Its GUI facilitates user interaction while supporting human collaboration, enabling real-time guidance and assistance for challenging tasks. + +PlanAgent: PlanAgent $^{37}$ , developed by Chinese Academy of Sciences and Li Auto [113], introduces a closed-loop motion planning framework for autonomous driving by leveraging multi-modal large language models (MLLMs). The system utilizes MLLM's multi-modal reasoning and commonsense understanding capabilities to generate hierarchical driving commands based on scene information. PlanAgent addresses key limitations of traditional rule-based and learning-based methods, including overfitting in long-tail scenarios and inefficiencies in scene representation. Its novel integration of MLLM-driven reasoning into autonomous driving planning establishes a new benchmark for safety and robustness in real-world applications. 
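To make the conversational pattern behind AutoGen concrete, the sketch below wires an assistant to a user proxy that executes the code the assistant writes. It assumes the open-source `pyautogen` package; the model name, API key, and working directory are placeholders, and option names vary across versions.

```python
from autogen import AssistantAgent, UserProxyAgent

# Placeholder model configuration; any OpenAI-compatible endpoint can be supplied here.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",                                        # fully autonomous loop
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# The proxy relays the task, runs any generated code, and feeds the results back
# until the assistant signals that the task is finished.
user_proxy.initiate_chat(assistant, message="Plot y = sin(x) and save it as sine.png.")
```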
LangGraph: LangChain Inc. $^{38}$ [111, 112] introduced LangGraph $^{39}$ , a library designed for building stateful, multi-actor applications with LLMs, enabling the creation of complex agent and multi-agent workflows. Inspired by frameworks like Pregel and Apache Beam, LangGraph provides fine-grained control over workflows and state management while offering advanced features like persistence and human-in-the-loop capabilities. LangGraph stands out for its support of iterative workflows with cycles and branching, which differentiates it from DAG-based frameworks. Each graph execution generates a state, dynamically updated by node outputs, enabling reliable and adaptive agent behavior. Its built-in persistence allows workflows to pause and resume at any point, facilitating error recovery and advanced human-agent interactions, including "time travel" to modify past actions. LangGraph integrates seamlessly with LangChain [250, 179] but functions independently, offering flexibility for various applications, from dialogue agents and recommendation systems to natural language processing and game development. With streaming support, it processes outputs in real time, making it suitable for tasks requiring immediate feedback. Its low-level architecture and customizable workflows make LangGraph a powerful tool for creating robust, scalable, and interactive LLMs-based systems. A minimal sketch of this cycles-and-branching execution pattern is given after the CrewAI description below.

![](images/1a5057af905a5275003bf194bbea8a1a1f3a4c961a1885581d7f2c3fed924e71.jpg)
Figure 21: A LangGraph workflow representation demonstrating conditional branching and iterative loops for document retrieval, grading, query transformation, and web search before generating a final output [111, 112].

CrewAI: CrewAI $^{40}$ [114, 115] is an open-source framework designed to coordinate AI agents that specialize in role-playing and autonomous operations, enabling efficient collaboration to achieve complex goals. The framework's modular design allows users to create AI teams that operate like real-world teams, with agents assigned specific roles and tasks to ensure clear division of labor and shared objectives. As seen in Figure 22, the framework operates in three primary stages: Agent Creation, where developers define roles with specific goals and tools; Task Management, enabling flexible task assignment and multi-view knowledge enhancement; and Execution and Collaboration, where agents interact in workflows to resolve tasks, with outputs parsed into reusable formats. CrewAI integrates seamlessly with the LangChain ecosystem, leveraging its tools and LLM capabilities, such as OpenAI and Google Gemini. The framework supports real-time decision-making and task adaptation, with future versions planned to include more advanced collaboration processes, such as consensus-driven workflows and autonomous decision-making. Its innovative features, such as role-based design, dynamic rule generation, and modular task workflows, position CrewAI as a robust and scalable framework for multi-agent collaboration across creative and industrial domains. Overall, CrewAI $^{41}$ offers a cutting-edge approach to multi-agent systems by integrating role-specific autonomy, flexible workflows, and advanced AI toolsets, making it a versatile framework for collaborative AI applications.

![](images/1915e555db021c43ce2d26f3d54bee668e602b1321975e13f09e6963a269360d.jpg)
Figure 22: An overview of the processing workflow for the role-playing multi-agent framework, CrewAI [114, 115].
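The LangGraph workflow in Figure 21 is distinguished from a plain DAG pipeline by two things: conditional edges and cycles over a shared, evolving state. The sketch below imitates that control-flow shape in plain Python rather than through the langgraph package itself (LangGraph expresses the same structure declaratively, with persistence and streaming handled by the library); the node names and `State` fields are illustrative assumptions.

```python
from typing import Callable

State = dict  # shared, mutable state passed between nodes

def retrieve(state: State) -> str:
    state["docs"] = ["doc about " + state["question"]]
    return "grade"                                       # next node

def grade(state: State) -> str:
    relevant = len(state["docs"]) > 0 and state["attempts"] < 2
    return "generate" if relevant else "rewrite_query"   # conditional edge

def rewrite_query(state: State) -> str:
    state["question"] = state["question"] + " (rephrased)"
    state["attempts"] += 1
    return "retrieve"                                     # cycle back into retrieval

def generate(state: State) -> str:
    state["answer"] = f"Answer based on {state['docs']}"
    return "END"

NODES: dict[str, Callable[[State], str]] = {
    "retrieve": retrieve, "grade": grade,
    "rewrite_query": rewrite_query, "generate": generate,
}

def run(question: str) -> State:
    state: State = {"question": question, "attempts": 0}
    node = "retrieve"
    while node != "END":                                  # graph execution with cycles
        node = NODES[node](state)
    return state

# print(run("multi-agent reinforcement learning")["answer"])
```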
In summary, these frameworks and applications [108, 97, 103, 110] highlight the rapid advancements in leveraging LLMs for multi-agent collaboration, reasoning, and task execution. Each system introduces unique innovations, ranging from dynamic agent coordination to enhanced reasoning and human-in-the-loop workflows, demonstrating their potential to tackle complex, real-world challenges across various domains [113, 111, 114, 115]. These developments pave the way for more flexible, scalable, and efficient AI-driven solutions.

# 5. Challenges in MARL-based and LLMs-based approaches

The extension of single-agent decision-making into multi-agent cooperative contexts introduces several challenges, including developing effective training schemes for multiple agents learning and adapting simultaneously, managing the increased computational complexity of environments that are more sophisticated and stochastic than single-agent settings, and addressing the foundational role of strategic interaction among agents. Additionally, ensuring the scalability of algorithms to handle larger observation and action spaces, facilitating coordination and cooperation among agents to achieve consistent goals, and dealing with non-stationary environments where agents' behaviors and strategies continuously evolve are also inevitable and critical challenges.

Applying multi-agent decision-making techniques to real-world problems, which often involve complex and dynamic interactions, further underscores the need for sophisticated and advanced approaches that can adapt to these ever-increasing complexities. Multi-agent cooperative decision-making significantly surpasses single-agent decision-making in terms of environmental stochasticity, complexity, and the difficulty of strategy optimization. As shown in Figure 23, we present a tree diagram summarizing the existing challenges in MARL-based and LLMs-based multi-agent decision-making approaches.

# 5.1. Challenges in MARL-based multi-agent systems

The advancement of MARL remains in its formative stages, with its potential for enabling effective multi-agent coordination and achieving scalability in dynamic environments yet to be fully unlocked [7, 6, 8, 25, 28]. Challenges such as environmental stochasticity, strategy learning difficulty, non-stationarity, scalability, and reward complexity have emerged as major bottlenecks. This section provides an in-depth analysis of these challenges, exploring the current state, technical limitations, and potential solutions in MARL-based multi-agent systems to enable more robust, efficient, and scalable decision-making frameworks.

1. Environmental Stochasticity and Complexity: [251, 25] In MARL-based decision-making systems, environmental dynamics are influenced not only by external factors but also by the behaviors and decisions of individual agents. This complex interaction results in high levels of stochasticity and complexity in the environment, making prediction and modeling significantly more difficult. For example, in autonomous driving scenarios [19, 214, 18, 222], the behavior of each vehicle affects the decisions of surrounding vehicles, thereby increasing the overall complexity of the system;

2. Strategy Learning Difficulty: [252, 86, 116] Strategy learning in MARL-based decision-making systems involves multidimensional challenges. Firstly, agents must consider the behaviors of other agents, and this interdependence increases the difficulty of strategy learning.
Each agent not only has to optimize its own strategy but also predict and adapt to the strategy changes of others. Additionally, the vast joint action space of multiple agents makes it challenging for any single agent to learn effective joint strategies: each agent must explore and learn within a much larger decision space (for instance, ten agents with five actions each already induce $5^{10} \approx 9.8 \times 10^{6}$ joint actions), which significantly increases the demands on computational resources and time;

3. Complexity of Reward Functions: [251] In MARL-based decision-making systems, reward functions become more complex [85, 150]. The rewards received from the environment in multi-agent cooperative techniques are influenced not only by an individual agent's actions and the environment but also by the actions of other agents, which makes stable policy learning and modeling more difficult. In other words, an agent's reward depends not only on its own actions but also on the actions of other agents, making it challenging for reinforcement learning-based multi-agent policies to converge. This intricate reward mechanism complicates the design and optimization of reward functions. Agents need to evaluate their behaviors' impact on the overall system through complex interactions to learn effective strategies;

4. Coordination and Cooperation: [253, 90, 254, 251] Furthermore, in MARL-based decision-making systems, agents need to coordinate and cooperate to achieve common goals. This requires establishing effective communication and coordination mechanisms among agents to ensure that their actions are globally consistent and complementary [49]. For example, in disaster rescue scenarios [57, 12], multiple drones need to coordinate their actions to cover the maximum area and utilize resources most efficiently;

5. Non-Stationarity: [27, 25] The environment of MARL-based decision-making systems is non-stationary because each agent's behavior dynamically changes the state of the environment. This non-stationarity increases the difficulty of strategy learning, as agents must continually adapt to changes in the environment and the behaviors of other agents.

![](images/3e7322f2e7c56ceef6795b3c9c45da60a4c75359ab1c5eeb3db45b67704bb031.jpg)
Figure 23: A tree diagram of the challenges in MARL and LLMs-based multi-agent decision-making approaches.

6. Scalability: [87, 252, 25] Addressing scalability in MARL demands innovative approaches to tackle the exponential growth in complexity as the number of agents increases. Techniques that leverage knowledge reuse [172, 138, 168], such as parameter sharing and transfer learning [255, 256], have proven indispensable. When agents share structural similarities, information sharing can streamline the training process, enabling systems to scale more effectively. Transfer learning, in particular, allows agents to adapt knowledge from previous tasks to new, related ones, significantly accelerating learning in dynamic environments. Moreover, curriculum learning [257, 258] plays a pivotal role in tackling the increased complexity of training multiple agents. It enables gradual learning by exposing agents to progressively more challenging tasks, thereby improving policy generalization and accelerating convergence. Robustness remains critical for scalability, as learned policies must withstand environmental disturbances. Techniques like policy ensembles and adversarial training [259, 260] enhance resilience by fostering diversity and adaptability in policies.
The DTDE paradigm addresses these issues but introduces new complexities [124, 131], such as environmental instability. One promising solution is employing Independent Deep Q-networks (IDQNs) [126, 261, 127], which adapt traditional single-agent approaches to multi-agent contexts.

Overall, the interplay between robustness and scalability in MARL is a key area for future exploration. While existing techniques provide strong foundations, integrating methods like meta-learning could offer a way for agents to rapidly adapt to new tasks and environments. Additionally, leveraging recent advances in graph neural networks might enhance the scalability of MARL by modeling agent interactions more efficiently. These directions hold promise for tackling the dynamic and large-scale nature of multi-agent environments.

# 5.2. Challenges in LLMs reasoning-based multi-agent systems

The development of LLMs-based multi-agent systems is still in its early stages, and their advantages for multi-agent collaboration in cross-domain applications have not been fully realized. In this process, technical bottlenecks, design limitations, and imperfect evaluation methods have revealed numerous challenges. This section provides a comprehensive analysis of these challenges, exploring the current status, bottlenecks, and potential breakthrough directions of LLMs-based multi-agent systems in key areas such as multi-modal interaction, system scalability, hallucination control, evaluation, and privacy protection.

1. Expansion of Multi-Modal Environments: [102, 113, 148] Current LLMs-based multi-agent systems primarily focus on text processing and generation, particularly excelling in language-based interactions. However, applications in multi-modal environments remain insufficient. Multi-modal environments require agents to handle various inputs from images, audio, video, and physical sensors, while also generating multi-modal outputs, such as descriptions of visual scenes or simulations of physical actions. This cross-modal interaction not only demands stronger model processing capabilities but also requires efficient information fusion between agents. For example, in practical applications, one agent may need to extract visual features from an image and collaborate with other agents through language to accomplish complex tasks, posing new technical challenges. Future research should focus on building unified multi-modal frameworks that enable agents to efficiently understand and collaboratively process various types of data.

2. Hallucination Problem: [262, 30, 108, 263] Hallucination in LLMs, i.e., generating false or inaccurate information, becomes more complex in multi-agent environments. This issue may be triggered within a single agent and further propagated through multi-agent interactions, ultimately negatively impacting the decision-making of the entire system. Because the information flow in multi-agent systems is interconnected, any misjudgment at one node can trigger a chain reaction. As a result, the hallucination problem is not confined to the behavior of individual agents but also threatens the stability of the entire system.
Therefore, addressing this issue requires a dual approach: on one hand, improving model training methods to reduce the probability of hallucinations in individual agents; on the other hand, designing information verification mechanisms and propagation management strategies to minimize the spread of hallucinated information within the agent network.

3. Acquisition of Collective Intelligence: [34, 176, 253] Current LLMs-based multi-agent systems rely more on real-time feedback for learning than on offline data, unlike traditional multi-agent systems [264, 265]. This real-time learning approach imposes higher requirements on the interactive environment [262, 266]. Since designing and maintaining a reliable real-time interactive environment is not easy, it limits the scalability of the system. Additionally, existing research mostly focuses on optimizing individual agents, neglecting the potential overall efficiency improvements that could arise from agent collaboration. For example, knowledge sharing and behavioral coordination among agents may create advantages of collective intelligence in certain complex tasks. Future research needs to explore how to fully leverage the potential of collective intelligence by optimizing multi-agent interaction strategies and collaboration mechanisms.

4. System Scalability: [262, 267, 253] As the number of agents in LLMs-based multi-agent systems increases, the demand for computational resources grows exponentially, posing challenges in resource-constrained environments. A single LLM agent already requires substantial computational power, and when the system scales to hundreds or thousands of agents, existing hardware and software architectures may not be able to support it. Furthermore, scaling the system introduces new complexities, such as how to efficiently allocate tasks, coordinate, and communicate among numerous agents. Studies have shown that the more agents there are, the more difficult it becomes to coordinate their operations, especially in reducing redundancy and conflicts. Therefore, future work needs to optimize resource utilization through the development of lightweight models and efficient communication protocols, while also exploring the scaling laws for agent expansion.

5. Evaluation and Benchmarking: [262, 253] Current evaluation methods and benchmark tests for LLMs-based multi-agent systems are still incomplete. Most research focuses solely on the performance of individual agents in specific tasks, neglecting the overall system performance in complex scenarios. Evaluating group behavior is more challenging because the dynamics and complexity of multi-agent systems are difficult to measure with a single metric. Additionally, the lack of a unified testing framework and benchmark data is a major obstacle when comparing the capabilities of different LLMs-based multi-agent systems across domains. Future research needs to develop comprehensive evaluation standards and universal benchmark tests, especially in key fields such as scientific experiments, economic analysis, and urban planning, to provide a basis for system performance comparison and improvement.

6. Interaction Efficiency and Cumulative Effects: [262, 268, 105, 254] The complexity of multi-agent systems leads to prominent issues of low interaction efficiency and cumulative effects. Low interaction efficiency is mainly reflected in the need for generative agents to frequently query models, making the system inefficient in large-scale applications.
On the other hand, because the system state highly depends on the results of the previous round, an error in one round may accumulate and propagate to subsequent operations, ultimately degrading the system's overall performance. Future efforts should focus on designing more efficient communication protocols and intermediate-result correction mechanisms to reduce interaction costs and the impact of cumulative errors.

7. Security and Privacy Issues: [5, 118, 269, 270, 271] Context sharing within multi-agent systems poses risks of introducing noise and privacy leaks. For example, sensitive information shared between agents (such as identities or locations) may be misused by untrusted nodes, thereby threatening the system's security. Addressing this issue requires a two-pronged approach: first, establishing clear organizational structures to restrict information access permissions; second, introducing more advanced trust management mechanisms, such as distributed trust systems based on consensus algorithms, to enhance the system's security and reliability.

In summary, LLMs-based multi-agent systems face challenges across multiple domains, including multi-modal adaptation, scalability, evaluation methods, collective intelligence development, and privacy protection. These challenges not only reveal the current technological limitations but also provide ample space for future research. With advancements in technology and the deepening of interdisciplinary studies, LLMs-based multi-agent systems are expected to achieve significant breakthroughs both theoretically and in applications.

# 6. Future Research Prospects and Emerging Trends

Multi-agent decision-making systems are entering a new era where LLMs are combined with MARL [116]. This combination can improve learning efficiency in complex dynamic environments. It also enables better multi-modal information processing, multi-task collaboration, and long-term planning [7, 25, 30, 28, 266]. In this section, we discuss future prospects and challenges of multi-agent decision-making systems (MAS) research from theoretical, technical, application, and ethical perspectives.

# 6.1. Theoretical Development: From Traditional RL to LLMs-Enhanced MARL Framework

LLMs-enhanced MARL redefines collaboration in multi-agent systems by introducing natural language understanding and reasoning [14, 253]. Traditional MARL requires agents to learn control strategies in dynamic environments with limited data [8, 5, 27, 24]. However, this approach often faces challenges like low sample efficiency, difficult reward design, and poor generalization. LLMs, with their strong reasoning and knowledge representation capabilities, offer solutions [272, 30]. For example, they can process multi-modal information such as natural language and vision [113, 273, 102, 260], helping agents understand tasks and environments more effectively. This improves learning speed and generalization. Furthermore, LLMs can act as reasoning tools, providing additional context and knowledge to optimize long-term planning.

The LLMs-enhanced MARL framework is a groundbreaking integration of LLMs and MARL techniques, which includes roles such as information processor, reward designer, decision-maker, and generator [116]. Figure 24 presents a flowchart illustrating the structure of the LLMs-enhanced MARL framework, highlighting its four key roles. These roles work together to streamline task complexity and improve learning. For instance, LLMs can translate unstructured task descriptions into formal task semantics, reducing learning difficulty. They can also design advanced reward functions to accelerate learning in sparse-reward environments. These roles collectively address the challenges of task complexity, data efficiency, and generalization in MARL [29, 24, 251], while streamlining processes like reward design and policy generation. As shown in Table 4, we summarize recent advancements in LLMs-enhanced MARL methods across these four roles into a comprehensive table for clarity and comparison.

![](images/48d61eca733bb6936526aa1e4ce4e6d1571acab57c3ce7861fae6cf215f23163.jpg)
a) LLM as information processor.

![](images/f239702c9eb5ab20c1490b9c5fc64499a1d8aee5b19d5946cbdc5157061ff5da.jpg)
b) LLM as reward designer.

![](images/11d07db349371dc5a08bc551dd94c462b655cd1302d907bb1c69d7a03ac850ac.jpg)
c) LLM as decision-maker.

![](images/f0e13440a3152f156a53f69aa98085530dd0b8b475bfa74d08afe44a4dc001b4.jpg)
d) LLM as generator.

Figure 24: Schematic diagram of the LLMs-enhanced MARL framework based on Cao et al. [116], showcasing its core roles: information processor (a), reward designer (b), decision-maker (c), and generator (d).
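As a concrete illustration of the information-processor role in Figure 24(a), the sketch below shows an LLM turning an unstructured task description into formal task semantics that conventional (non-LLM) MARL policies can condition on. The `llm_json` helper, the JSON schema, and the field names are assumptions made for illustration; they are not taken from Cao et al. [116].

```python
import json

def llm_json(prompt: str) -> str:
    """Hypothetical helper that queries an LLM and returns a JSON string."""
    raise NotImplementedError("plug in a real LLM client here")

TASK_SCHEMA = (
    'Return JSON with fields: "goal" (str), "constraints" (list of str), '
    '"roles" (mapping of agent id to responsibility).'
)

def parse_task(task_description: str) -> dict:
    """Turn an unstructured instruction into structured task semantics that a
    MARL controller can consume (illustrative schema only)."""
    raw = llm_json(f"{TASK_SCHEMA}\n\nTask: {task_description}")
    return json.loads(raw)

def build_agent_observations(task: dict, env_obs: dict) -> dict:
    """Attach each agent's responsibility to its raw environment observation,
    so the (non-LLM) policies condition on language-derived structure."""
    return {
        agent_id: {"obs": env_obs[agent_id], "role": role, "goal": task["goal"]}
        for agent_id, role in task["roles"].items()
    }
```

In this arrangement the slow LLM call in `parse_task` runs only when the task changes, while `build_agent_observations` is cheap enough to run at every environment step.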
# 6.2. Technical Integration: From Multi-Modal to Multi-Task Optimization

Combining LLMs and MARL significantly improves the ability to handle multi-modal information, multi-task learning, and long-term task planning [273, 102, 272, 30]. Traditional MARL often requires separate modules to process visual, textual, or other forms of data. In contrast, LLMs can unify this processing, enabling comprehensive environment understanding. For example, in a robot task involving voice commands and visual inputs, LLMs can process both types of data simultaneously and generate actions directly. Additionally, LLMs provide a distinct advantage in multi-task learning due to their pre-trained knowledge [30, 57]. Through knowledge transfer, they help agents share experiences across different tasks, improving adaptability [255, 66]. For long-term planning, LLMs can break down complex tasks into subtasks, addressing challenges like the credit assignment problem. This capability is particularly useful in tasks requiring extended reasoning, such as construction tasks in Minecraft. In optimizing reinforcement learning's sample efficiency [308, 281], the generative capabilities of LLMs can provide agents with additional virtual samples through high-fidelity environment simulations [116, 308]. This not only reduces the cost of real-world learning but also offers high-quality trajectories that serve as valuable references for policy optimization. Furthermore, in sparse-reward environments, LLMs can accelerate policy learning by automatically designing reward signals.
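Among the mechanisms just listed, automatic reward design is the simplest to sketch: the LLM proposes a dense shaping term that is added to the sparse environment reward. The snippet below is a deliberately simplified, hypothetical version of the reward-designer role; the `llm` helper, the feature set, and the JSON-weights interface are assumptions (methods such as Eureka [291] and Text2Reward [292] instead have the LLM write executable reward code directly).

```python
import json

def llm(prompt: str) -> str:
    """Hypothetical LLM call; returns a JSON mapping of feature name -> weight."""
    raise NotImplementedError("plug in a real LLM client here")

# Dense, hand-coded state features the designer may weight (illustrative).
FEATURES = {
    "distance_to_goal": lambda s: -s["distance_to_goal"],
    "teammates_nearby": lambda s: s["teammates_nearby"],
    "collisions":       lambda s: -s["collisions"],
}

def design_reward_weights(task_description: str) -> dict:
    """LLM-as-reward-designer: choose weights over known features for a task
    whose environment reward is sparse (e.g., 1.0 only on success)."""
    raw = llm(
        f"Task: {task_description}\nAvailable features: {list(FEATURES)}\n"
        "Return JSON weights in [0, 1] for each feature."
    )
    return json.loads(raw)

def shaped_reward(env_reward: float, state: dict, weights: dict, coeff: float = 0.1) -> float:
    # The sparse environment reward stays dominant; the LLM-weighted term densifies it.
    shaping = sum(w * FEATURES[name](state) for name, w in weights.items())
    return env_reward + coeff * shaping
```

A small shaping coefficient keeps the sparse environment reward dominant while giving the agents a denser learning signal.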
# 6.3. Application Expansion: Driving Intelligent Collaboration in Complex Scenarios

The potential of LLMs-enhanced MARL in practical applications is enormous, especially in scenarios that require complex collaboration and real-time decision-making [116, 300, 183, 171]. For example, in the field of autonomous driving [19, 18, 17], the integration of LLMs with MARL can simultaneously process sensor data and natural language information (such as traffic regulations and passenger instructions [92]), thereby enhancing the safety and accuracy of decision-making [214, 74]. In the field of collaborative robots, LLMs can help multiple robots form a more intuitive communication mechanism, achieving highly complex task division and dynamic adjustment. In addition, in multi-objective optimization tasks such as smart grids and intelligent healthcare, LLMs can provide domain knowledge and optimization suggestions to assist reinforcement learning, design more practical reward functions, and thus improve the overall efficiency of the system. In dynamic and complex environments such as disaster relief [12], LLMs can dynamically allocate roles and responsibilities according to task requirements, helping multi-agent systems quickly adapt to changing environments and highly complex task divisions [296, 295, 297]. This capability provides solid technical support for a wide range of applications.

# 6.4. Human Society Coordination: Balancing Technology and Ethics

The integration of LLMs into MARL opens new avenues for advancing multi-agent systems, while also highlighting exciting research directions in improving technical efficiency and addressing ethical considerations. For instance, enhancing the robustness of LLMs in unfamiliar environments offers the opportunity to develop strategies for minimizing biases and hallucinations, thereby improving decision accuracy. Furthermore, the computational complexity and resource demands of LLMs present a chance to innovate in optimizing inference efficiency and scalability. This is especially relevant in dynamic multi-agent environments where real-time responsiveness is critical.

From an ethical perspective, the incorporation of LLMs calls for advancements in ensuring data privacy, safeguarding against adversarial attacks, and establishing clear accountability frameworks for AI-driven decisions. Sensitive domains such as healthcare and disaster response could particularly benefit from focused research on protecting sensitive information and enhancing system resilience. Additionally, improving the transparency and explainability of LLMs-driven decisions is another promising area for exploration, as it would increase trust and user confidence in multi-agent systems.

Table 4: Summary of recent studies categorized by the four key roles of LLMs in MARL: Information Processor, Reward Designer, Decision-Maker, and Generator, highlighting their respective contributions and applications.
| Method Types (LLM as ...) | Researchers / Works / Refs. |
| --- | --- |
| Information Processor | Poudel et al. (ReCoRe) [274], Choi et al. (ConPE) [275], Paischer et al. (HELM) [276] and (Semantic HELM) [277], Radford et al. (CLIP) [278], Oord et al. (CPC) [279], Michael et al. (CURL) [280], Schwarzer et al. (SPR) [281] |
| Reward Designer | Kwon et al. (LLMrewardRL) [282], Song et al. (Self-Refined LLM) [283], Wu et al. (Read & Reward) [284], Carta et al. (GLAM) [285], Chu et al. (Lafite-RL) [286], Kim et al. (ARP) [287], Yu et al. [288], Adeniji et al. (LAMP) [289], Madaan et al. (Self-Refine) [290], Ma et al. (Eureka) [291], Xie et al. (Text2Reward) [292] |
| Decision-Maker | Janner et al. (TT-Offline RL) [293], Shi et al. (LaMo) [294], Li et al. (LLM scaffold) [295], Mezghani et al. (text BabyAI) [296], Grigsby et al. (AMAGO) [297], Zitkovich et al. (RT-2) [298], Yao et al. (CALM) [299], Hu et al. (instructRL) [300], Zhou et al. (LLM4Teach) [301] |
| Generator | Chen et al. (TransDreamer) [302], Das et al. (S2E) [303], Lin et al. (Dynalang) [304], Robine et al. (TWM) [305], Poudel et al. (LanGWM) [306], Lin et al. (HomeGrid) [307] |
By addressing these areas, future research can maximize the potential of LLMs-enhanced MARL systems, ensuring they are both technically effective and ethically sound in diverse, real-world applications.

Overall, the combination of LLMs and MARL brings new momentum to research and applications in multi-agent systems. By enhancing collaboration through natural language understanding and leveraging large-scale knowledge, these systems can achieve greater efficiency and robustness in complex scenarios. However, fully unlocking their potential requires further exploration in theoretical methods, technological development, and ethical practices. With systematic advancements in these areas, LLMs-enhanced MARL can become the foundation for next-generation intelligent decision-making systems, transforming fields like autonomous driving, collaborative robotics, and healthcare, while shaping the future of AI research.

# 7. Conclusion

Multi-agent cooperative decision-making has demonstrated remarkable potential in addressing complex tasks through intelligent collaboration and adaptability. In this survey, we systematically review the evolution of multi-agent systems, highlighting the shift from traditional methods, such as rule-based and game-theory approaches, to advanced paradigms like MARL and LLMs. We differentiate these methods by examining their unique capabilities, challenges, and applications in diverse environments, paying particular attention to dynamic and uncertain settings. In addition, we explore the critical role of simulation environments as a bridge between theoretical advancements and real-world implementation, emphasizing their influence on agent interaction, learning, and decision-making. Practical applications of multi-agent systems in domains such as autonomous driving, disaster response, and robotics further underscore their transformative potential. By summarizing advanced multi-agent decision-making methodologies, datasets, benchmarks, and future research directions, this survey aims to provide a comprehensive resource for researchers and practitioners. We hope it inspires future studies to address existing challenges, such as improving inter-agent communication and adaptability, while leveraging the innovative potential of DRL and LLMs-based approaches to advance multi-agent cooperative decision-making.

# Acknowledgment

The corresponding authors of this survey are B. Zhao and G. Yang from Xi'an Jiaotong University and Imperial College London. Guang Yang was supported in part by the ERC IMI (101005122), the H2020 (952172), the MRC (MC/PC/21013), the Royal Society (IEC\NSFC\211235), the NVIDIA Academic Hardware Grant Program, the SABER project supported by Boehringer Ingelheim Ltd, NIHR Imperial Biomedical Research Centre (RDA01), Wellcome Leap Dynamic Resilience, UKRI guarantee funding for Horizon Europe MSCA Postdoctoral Fellowships (EP/Z002206/1), and the UKRI Future Leaders Fellowship (MR/V023799/1). The authors would like to thank the editors and anonymous reviewers, who significantly enhanced the quality of the survey.

# Declaration of Generative AI and AI-assisted Technologies in the Writing Process

During the preparation of this work, the authors utilized generative AI and AI-assisted technologies for
The authors have carefully reviewed these contents to ensure accuracy and completeness, acknowledging that AI can generate authoritative-sounding output that may be incorrect, incomplete, or biased. + +# Appendix A. Technological Comparisons between Single-Agent and Multi-Agent (Under Reinforcement Learning) + +Here, we discuss a series of technological comparisons of both DRL-based single-agent and MARL-based multi-agent research. + +In solving these single-agent sequential decision-making problems, Markov Decision Processes (MDP) is a powerful mathematical modeling framework, especially in uncertain environments. Since the decision-making process of an agent can inherently be modeled as a sequence of decisions, the single-agent decision-making process can be formulated as an typical MDP, similar to a Markov chain. + +In contrast to single-agent DRL systems, multi-agent systems under the MARL techniques involve multiple agents interacting within a shared environment. POMDP is a powerful mathematical modeling framework. It is an extension of the MDP framework that is particularly well-suited for modeling decision-making in environments where the agent does not have full visibility of the entire state space. POMDPs extend MDPs to environments where the agent cannot fully observe the underlying state. Instead, the agent maintains a belief state, which is a probability distribution over the possible states. + +Figure A.25 provides a comparative illustration of Markov Decision Processes (MDP) and Partially Observable Markov Decision Processes (POMDP), which correspond to single-agent and multi-agent reinforcement learning paradigms, respectively. + +The left side of Figure A.25 depicts an MDP, which models single-agent decision-making in a fully observable environment. The agent selects an action $a$ from the action space $A$ based on the current state $s$ from the state space $S$ . The environment transitions to a new state $s'$ following the transition probability function $P(s' \mid s, a)$ , and the agent receives a reward $r$ . The objective is to optimize a policy $\pi^*$ that maximizes the cumulative reward. Since the entire state is observable, the decision-making process is relatively straightforward. + +On the right side, the POMDP framework extends MDPs to multi-agent settings where agents operate under partial observability. Each agent $i$ receives only a + +partial observation $o^i$ rather than the full state $S$ . The agents take individual actions $a^i$ , forming a joint action $a_t$ , which influences state transitions and results in individual rewards $r^i$ . The observations are generated according to the observation function $Z(o \mid s', a)$ , requiring each agent to infer the missing state information and maintain a belief state for effective decision-making. + +In summary, MDPs are well-suited for single-agent systems, where the environment is static and fully observable, allowing the agent to make optimal decisions based on complete knowledge of the state. On the other hand, POMDPs are crucial for multi-agent reinforcement learning scenarios, where multiple agents interact dynamically in an uncertain environment with limited information. This setting introduces challenges such as coordination, competition, and reward interdependencies, making decision-making significantly more complex. + +# References + +[1] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, M. Riedmiller, Playing atari with deep reinforcement learning (2013). arXiv:1312.5602. 
URL https://arxiv.org/abs/1312.5602 +[2] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, D. Hassabis, Human-level control through deep reinforcement learning, Nature 518 (7540) (2015) 529-533. doi:10.1038/nature14236. URL https://doi.org/10.1038/nature14236 +[3] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, D. Hassabis, Mastering the game of go with deep neural networks and tree search, Nature 529 (7587) (2016) 484-489. doi:10.1038/nature16961. URL https://doi.org/10.1038/nature16961 +[4] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, D. Hassabis, Mastering the game of go without human knowledge, Nature 550 (7676) (2017) 354-359. doi:10.1038/nature24270. URL https://doi.org/10.1038/nature24270 +[5] T. Li, K. Zhu, N. C. Luong, D. Niyato, Q. Wu, Y. Zhang, B. Chen, Applications of multi-agent reinforcement learning in future internet: A comprehensive survey, IEEE Communications Surveys & Tutorials 24 (2) (2022) 1240-1279. doi: 10.1109/COMST.2022.3160697. +[6] S. Gronauer, K. Diepold, Multi-agent deep reinforcement learning: a survey, Artificial Intelligence Review 55 (2) (2022) 895-943. doi:10.1007/s10462-021-09996-w. URL https://doi.org/10.1007/s10462-021-09996-w +[7] P. Yadav, A. Mishra, S. Kim, A comprehensive survey on multi-agent reinforcement learning for connected and automated vehicles, Sensors 23 (10) (2023). doi:10.3390/ + +![](images/424dcd0683a6366cd83793e431932c1cd0e1322fbc5f7fa3309a42502a8009c0.jpg) +Markov Decision Processes (MDP) $\langle S, A, P, R, \gamma \rangle$ + +![](images/82ea2fe70ae861282e612beb352235a6bdbd2b72f4eb15bd6ddd82ee786cb44d.jpg) +Partially Observable Markov Decision Processes (POMDP) $\langle S, A, P, R, \gamma, O, Z \rangle$ +Figure A.25: The Markov Decision Process modeling for the single-agent reinforcement learning paradigm (left) and the Partially Observable Markov Decision Process modeling for the multi-agent reinforcement learning paradigm (right). + +s23104710. +URL https://www.mdpi.com/1424-8220/23/10/4710 +[8] J. Orr, A. Dutta, Multi-agent deep reinforcement learning for multi-robot applications: A survey, Sensors 23 (7) (2023). doi:10.3390/s23073625. URL https://www.mdpi.com/1424-8220/23/7/3625 +[9] W. Jin, B. Zhao, Y. Zhang, J. Huang, H. Yu, Wordtrans-absa: Enhancing aspect-based sentiment analysis with masked language modeling for affective token prediction, Expert Systems with Applications 238 (2024) 122289. doi:https://doi.org/10.1016/j.eswa.2023.122289. URL https://www.sciencedirect.com/science/article/pii/S0957417423027914 +[10] B. Zhao, W. Jin, Y. Zhang, S. Huang, G. Yang, Prompt learning for metonymy resolution: Enhancing performance with internal prior knowledge of pre-trained language models, Knowledge-Based Systems 279 (2023) 110928. doi:https://doi.org/10.1016/j.knosys.2023.110928. URL https://www.sciencedirect.com/science/article/pii/S0950705123006780 +[11] A. Seewald, C. J. Lerch, M. Chancan, A. M. Dollar, I. 
Abraham, Energy-aware ergodic search: Continuous exploration for multi-agent systems with battery constraints (2024). arXiv: 2310.09470. URL https://arxiv.org/abs/2310.09470 +[12] M. M. H. Qazzaz, S. A. R. Zaidi, D. C. McLernon, A. Salama, A. A. Al-Hameed, Optimizing search and rescue UAV connectivity in challenging terrain through multi q-learning (2024). arXiv:2405.10042. URL https://arxiv.org/abs/2405.10042 +[13] G. T. Papadopoulos, M. Antona, C. Stephanidis, Towards open and expandable cognitive ai architectures for large-scale multi-agent human-robot collaborative learning, IEEE Access 9 (2021) 73890-73909. doi:10.1109/ACCESS.2021.3080517. +[14] M. D. Silva, R. Regnier, M. Makarov, G. Avrin, D. Dumur, Evaluation of intelligent collaborative robots: a review, in: 2023 IEEE/SICE International Symposium on System Integration (SII), 2023, pp. 1-7. doi:10.1109/SII55687.2023.10039365. +[15] Y. Huang, S. Wu, Z. Mu, X. Long, S. Chu, G. Zhao, A multi + +agent reinforcement learning method for swarm robots in space collaborative exploration, in: 2020 6th International Conference on Control, Automation and Robotics (ICCAR), 2020, pp. 139-144. doi:10.1109/ICCAR49639.2020.9107997. +[16] C. G. Cena, P. F. Cardenas, R. S. Pazmino, L. Puglisi, R. A. Santonja, A cooperative multi-agent robotics system: Design and modelling, Expert Systems with Applications 40 (12) (2013) 4737-4748. doi:https://doi.org/10.1016/j.eswa.2013.01.048. URL https://www.sciencedirect.com/science/article/pii/S0957417413000791 +[17] S. Jayawardana, V. Jayawardana, K. Vidanage, C. Wu, Multi-behavior learning for socially compatible autonomous driving, in: 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), 2023, pp. 4422-4427. doi:10.1109/ITSC57777.2023.10422120. +[18] L. Wen, J. Duan, S. E. Li, S. Xu, H. Peng, Safe reinforcement learning for autonomous vehicles through parallel constrained policy optimization, in: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), 2020, pp. 1-7. doi:10.1109/ITSC45102.2020.9294262. +[19] L. Weiwei, H. Wenxuan, J. Wei, L. Lanxin, G. Lingping, L. Yong, Learning to model diverse driving behaviors in highly interactive autonomous driving scenarios with multi-agent reinforcement learning (2024). arXiv:2402.13481. URL https://arxiv.org/abs/2402.13481 +[20] Y. Xue, W. Chen, Multi-agent deep reinforcement learning for uavs navigation in unknown complex environment, IEEE Transactions on Intelligent Vehicles 9 (1) (2024) 2290-2303. doi:10.1109/TIV.2023.3298292. +[21] S. Rezwan, W. Choi, Artificial intelligence approaches for uav navigation: Recent advances and future challenges, IEEE Access 10 (2022) 26320-26339. doi:10.1109/ACCESS.2022.3157626. +[22] B. Al Baroomi, T. Myo, M. R. Ahmed, A. Al Shibli, M. H. Marhaban, M. S. Kaiser, Ant colony optimization-based path planning for uav navigation in dynamic environments, in: 2023 7th International Conference on Automation, Control and Robots (ICACR), 2023, pp. 168-173. doi:10.1109/ICACR59381.2023.10314603. +[23] T. Samad, S. Iqbal, A. W. Malik, O. Arif, P. Bloodsworth, A + +multi-agent framework for cloud-based management of collaborative robots, International Journal of Advanced Robotic Systems 15 (4) (2018). doi:10.1177/1729881418785073. +[24] W. Du, S. Ding, A survey on multi-agent deep reinforcement learning: from the perspective of challenges and applications, Artificial Intelligence Review 54 (5) (2021) 3215-3238. doi: 10.1007/s10462-020-09938-y. 
URL https://doi.org/10.1007/s10462-020-09938-y +[25] Z. Ning, L. Xie, A survey on multi-agent reinforcement learning and its application, Journal of Automation and Intelligence 3 (2) (2024) 73-91. doi:https://doi.org/10.1016/j.jai.2024.02.003. URL https://www.sciencedirect.com/science/article/pii/S2949855424000042 +[26] Q. Yang, R. Liu, Understanding the application of utility theory in robotics and artificial intelligence: A survey (2023). arXiv: 2306.09445. URL https://arxiv.org/abs/2306.09445 +[27] P. Hernandez-Leal, M. Kaisers, T. Baarslag, E. M. de Cote, A survey of learning in multiagent environments: Dealing with non-stationarity (2019). arXiv:1707.09183. URL https://arxiv.org/abs/1707.09183 +[28] C. Zhu, M. Dastani, S. Wang, A survey of multi-agent deep reinforcement learning with communication, Autonomous Agents and Multi-Agent Systems 38 (1) (2024) 4. doi: 10.1007/s10458-023-09633-6. URL https://doi.org/10.1007/s10458-023-09633-6 +[29] T. T. Nguyen, N. D. Nguyen, S. Nahavandi, Deep reinforcement learning for multiagent systems: A review of challenges, solutions, and applications, IEEE Transactions on Cybernetics 50 (9) (2020) 3826-3839. doi:10.1109/TCYB.2020.2977374. +[30] L. Wang, C. Ma, X. Feng, Z. Zhang, H. Yang, J. Zhang, Z. Chen, J. Tang, X. Chen, Y. Lin, W. X. Zhao, Z. Wei, J. Wen, A survey on large language model based autonomous agents, Frontiers of Computer Science 18 (6) (2024) 186345. doi:10.1007/s11704-024-40231-1. URL https://doi.org/10.1007/s11704-024-40231-1 +[31] B. Zhao, W. Jin, J. Del Ser, G. Yang, Chatagri: Exploring potentials of chatgpt on cross-linguistic agricultural text classification, Neurocomputing 557 (2023) 126708. doi: https://doi.org/10.1016/j.neucom.2023.126708. URL https://www.sciencedirect.com/science/article/pii/S0925231223008317 +[32] T. Miki, M. Nagao, H. Kobayashi, T. Nakamura, A simple rule based multi-agent control algorithm and its implementation using autonomous mobile robots, in: 2010 World Automation Congress, 2010, pp. 1-6. +[33] H. Yarahmadi, H. Navidi, M. Challenger, Improving the resource allocation in IoT systems based on the integration of reinforcement learning and rule-based approaches in multi-agent systems, in: 2024 8th International Conference on Smart Cities, Internet of Things and Applications (SCIIoT), 2024, pp. 135-141. doi:10.1109/SCIIoT62588.2024.10570102. +[34] S.-H. Wu, V.-W. Soo, A fuzzy game theoretic approach to multi-agent coordination, in: T. Ishida (Ed.), Multiagent Platforms, Springer Berlin Heidelberg, Berlin, Heidelberg, 1999, pp. 76-87. doi:10.1007/3-540-48826-X\_6. +[35] H. Zhang, J. Zhang, G.-H. Yang, Y. Luo, Leader-based optimal coordination control for the consensus problem of multiagent differential games via fuzzy adaptive dynamic programming, IEEE Transactions on Fuzzy Systems 23 (1) (2015) 152-163. doi:10.1109/TFUZZ.2014.2310238. +[36] F. Ren, M. Zhang, Q. Bai, A fuzzy-based approach for partner selection in multi-agent systems, in: 6th IEEE/ACIS Interna + +tional Conference on Computer and Information Science (ICIS 2007), 2007, pp. 457-462. doi:10.1109/ICIS.2007.21. +[37] D. Gu, H. Hu, Fuzzy multi-agent cooperative q-learning, in: 2005 IEEE International Conference on Information Acquisition, 2005, p. 5 pp. doi:10.1109/ICIA.2005.1635080. +[38] J. Wang, Y. Hong, J. Wang, J. Xu, Y. Tang, Q.-L. Han, J. Kurths, Cooperative and competitive multi-agent systems: From optimization to games, IEEE/CAA Journal of Automatica Sinica 9 (5) (2022) 763-783. doi:10.1109/JAS.2022.105506. +[39] Y. Guo, Q. Pan, Q. 
Sun, C. Zhao, D. Wang, M. Feng, Cooperative game-based multi-agent path planning with obstacle avoidance, in: 2019 IEEE 28th International Symposium on Industrial Electronics (ISIE), 2019, pp. 1385-1390. doi: 10.1109/ISIE.2019.8781205. +[40] D. Schwung, A. Schwung, S. X. Ding, Distributed self-optimization of modular production units: A state-based potential game approach, IEEE Transactions on Cybernetics 52 (4) (2022) 2174–2185. doi:10.1109/TCYB.2020.3006620. +[41] X. Wang, J. Wang, J. Chen, Y. Yang, L. Kong, X. Liu, L. Jia, Y. Xu, A game-theoretic learning framework for multi-agent intelligent wireless networks (2019). arXiv:1812.01267. URL https://arxiv.org/abs/1812.01267 +[42] W. Lin, Y. Chen, Q. Q. Wang, J. Zeng, J. Liu, Multi-agents based distributed-energy-resource management for intelligent microgrid with potential game algorithm, in: IECON 2017 - 43rd Annual Conference of the IEEE Industrial Electronics Society, 2017, pp. 7795-7800. doi:10.1109/IECON.2017.8217366. +[43] H. Wang, Z. Ning, H. Luo, Y. Jiang, M. Huo, Game-based adaptive optimization approach for multi-agent systems, in: 2023 IEEE International Conference on Industrial Technology (ICIT), 2023, pp. 1-5. doi:10.1109/ICIT58465.2023.10143172. +[44] L. Bull, Evolutionary computing in multi-agent environments: Operators, in: V. W. Porto, N. Saravanan, D. Waagen, A. E. Eiben (Eds.), Evolutionary Programming VII, Springer Berlin Heidelberg, Berlin, Heidelberg, 1998, pp. 43-52. +[45] J. Liu, W. Zhong, L. Jiao, Multi-Agent Evolutionary Model for Global Numerical Optimization, Springer Berlin Heidelberg, Berlin, Heidelberg, 2010, pp. 13-48. doi:10.1007/978-3-642-13425-8\2. URL https://doi.org/10.1007/978-3-642-13425-8_2 +[46] D. Bloembergen, K. Tuyls, D. Hennes, M. Kaisers, Evolutionary dynamics of multi-agent learning: a survey, J. Artif. Int. Res. 53 (1) (2015) 659-697. +[47] D. Klijn, A. E. Eiben, A coevolutionary approach to deep multi-agent reinforcement learning, in: Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '21, Association for Computing Machinery, New York, NY, USA, 2021, p. 283-284. doi:10.1145/3449726.3459576. URL https://doi.org/10.1145/3449726.3459576 +[48] S. Yuan, K. Song, J. Chen, X. Tan, D. Li, D. Yang, Evoagent: Towards automatic multi-agent generation via evolutionary algorithms (2024). arXiv:2406.14228. URL https://arxiv.org/abs/2406.14228 +[49] W. Zhang, H. Liu, Evolutionary game analysis of multi-agent cooperation strategy analysis in agricultural water conservancy ppp project under digitization background, Scientific Reports 14 (1) (2024) 22915. doi:10.1038/s41598-024-74065-5. URL https://doi.org/10.1038/s41598-024-74065-5 +[50] T. Rashid, M. Samvelyan, C. Schroeder, G. Farquhar, J. Foerster, S. Whiteson, QMIX: Monotonic value function + +factorisation for deep multi-agent reinforcement learning, in: J. Dy, A. Krause (Eds.), Proceedings of the 35th International Conference on Machine Learning, Vol. 80 of Proceedings of Machine Learning Research, PMLR, 2018, pp. 4295-4304. URL https://proceedings.mlr.press/v80/ rashid18a.html +[51] L. Kraemer, B. Banerjee, Multi-agent reinforcement learning as a rehearsal for decentralized planning, Neurocomputing 190 (2016) 82-94. doi:https://doi.org/10.1016/j.neucom.2016.01.031. URL https://www.sciencedirect.com/science/article/pii/S0925231216000783 +[52] B. Yu, Z. Cai, J. 
He, Fast-qmix: Accelerating deep multiagent reinforcement learning with virtual weighted q-values, in: 2021 2nd International Conference on Electronics, Communications and Information Technology (CECIT), 2021, pp. 594-599. doi:10.1109/CECIT53797.2021.00110. +[53] K. Son, D. Kim, W. J. Kang, D. E. Hostallero, Y. Yi, QTRAN: Learning to factorize with transformation for cooperative multi-agent reinforcement learning, in: K. Chaudhuri, R. Salakhutdinov (Eds.), Proceedings of the 36th International Conference on Machine Learning, Vol. 97 of Proceedings of Machine Learning Research, PMLR, 2019, pp. 5887-5896. URL https://proceedings.mlr.press/v97/son19a.htm1 +[54] T. Rashid, G. Farquhar, B. Peng, S. Whiteson, Weighted qmix: expanding monotonic value function factorisation for deep multi-agent reinforcement learning, in: Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS '20, Curran Associates Inc., Red Hook, NY, USA, 2020. +[55] K. Kurach, A. Raichuk, P. Stanczyk, M. Zajac, O. Bachem, L. Espeholt, C. Riquelme, D. Vincent, M. Michalski, O. Bousquet, S. Gelly, Google research football: A novel reinforcement learning environment, Proceedings of the AAAI Conference on Artificial Intelligence 34 (04) (2020) 4501-4510. doi:10.1609/aaai.v34i04.5878. URL https://doi.org/10.1609/aaai.v34i04.5878 +[56] J. Wang, Z. Ren, T. Liu, Y. Yu, C. Zhang, QPLEX: duplex dueling multi-agent q-learning, in: 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021, OpenReview.net, 2021. URL https://openreview.net/forum?id=RcmkOxxIQV +[57] P. Li, V. Menon, B. Gudiguntla, D. Ting, L. Zhou, Challenges faced by large language models in solving multi-agent flocking (2024). arXiv:2404.04752. URL https://arxiv.org/abs/2404.04752 +[58] Z. Xu, C. Yu, F. Fang, Y. Wang, Y. Wu, Language agents with reinforcement learning for strategic play in the werewolf game, in: R. Salakhutdinov, Z. Kolter, K. Heller, A. Weller, N. Oliver, J. Scarlett, F. Berkenkamp (Eds.), Proceedings of the 41st International Conference on Machine Learning, Vol. 235 of Proceedings of Machine Learning Research, PMLR, 2024, pp. 55434-55464. URL https://proceedings.mlr.press/v235/xu24ad.htm1 +[59] I. Mordatch, P. Abbeel, Emergence of grounded compositional language in multi-agent populations, arXiv preprint arXiv:1703.04908 (2017). +[60] H. Schwartz, An object oriented approach to fuzzy actor-critic learning for multi-agent differential games, in: 2019 IEEE Symposium Series on Computational Intelligence (SSCI), 2019, pp. 183-190. doi:10.1109/SSCI44817.2019.9002707. +[61] I. Harmati, Multi-agent coordination for target tracking using + +fuzzy inference system in game theoretic framework, in: 2006 IEEE Conference on Computer Aided Control System Design, 2006 IEEE International Conference on Control Applications, 2006 IEEE International Symposium on Intelligent Control, 2006, pp. 2390-2395. doi:10.1109/CACSD-CCA-ISIC.2006.4777014. +[62] C. W. Khuen, C. H. Yong, F. Haron, Multi-agent negotiation system using adaptive fuzzy logic in resource allocation, in: The 2nd International Conference on Distributed Frameworks for Multimedia Applications, 2006, pp. 1-7. doi:10.1109/DFMA.2006.296888. +[63] L. Yan, J. Liu, C. P. Chen, Y. Zhang, Z. Wu, Z. Liu, Game-based adaptive fuzzy optimal bipartite containment of nonlinear multiagent systems, IEEE Transactions on Fuzzy Systems 32 (3) (2024) 1455-1465. doi:10.1109/TFUZZ.2023.3327699. +[64] R. R. P. Vicerra, K. K. A. David, A. R. 
dela Cruz, E. A. Roxas, K. B. C. Simbulan, A. A. Bandala, E. P. Dadios, A multiple level mimo fuzzy logic based intelligence for multiple agent cooperative robot system, in: TENCON 2015 - 2015 IEEE Region 10 Conference, 2015, pp. 1-7. doi:10.1109/TENCON.2015.7372985. +[65] D. Gu, E. Yang, Fuzzy policy gradient reinforcement learning for leader-follower systems, in: IEEE International Conference Mechatronics and Automation, 2005, Vol. 3, 2005, pp. 1557-1561 Vol. 3. doi:10.1109/ICMA.2005.1626787. +[66] Y. Maruyama, A reasoning system for fuzzy distributed knowledge representation in multi-agent systems, in: 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2021, pp. 1-6. doi:10.1109/FUZZ45933.2021.9494454. +[67] J. Peng, M. Liu, J. Liu, K.-C. Lin, M. Wu, A coordination model using fuzzy reinforcement learning for multi-agent system, in: 2007 International Symposium on Collaborative Technologies and Systems, 2007, pp. 129-136. doi:10.1109/CTS.2007.4621748. +[68] E. Yang, D. Gu, A multiagent fuzzy policy reinforcement learning algorithm with application to leader-follower robotic systems, in: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006, pp. 3197-3202. doi: 10.1109/IROS.2006.282421. +[69] H. Wang, H. Luo, Y. Jiang, A game-based distributed fault-tolerant control method for multi-agent systems, in: 2022 IEEE 1st Industrial Electronics Society Annual On-Line Conference (ONCON), 2022, pp. 1-5. doi:10.1109/ONCON56984.2022.10126545. +[70] M. Lanctot, V. Zambaldi, A. Gruslys, A. Lazaridou, K. Tuyls, A unified game-theoretic approach to multiagent reinforcement learning, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, Curran Associates Inc., Red Hook, NY, USA, 2017, p. 4193-4206. +[71] C. Guo, L. Zhong, T. Liu, A study on game theory in decision interaction for multi intelligent agents based on information fusion, in: Y. Wu (Ed.), Computing and Intelligent Systems, Springer Berlin Heidelberg, Berlin, Heidelberg, 2011, pp. 442-452. +[72] J. Zhang, J. Zhang, The coverage control solutions based on evolutionary game theory in the multi-agent systems, in: 2019 12th Asian Control Conference (ASCC), 2019, pp. 1-6. +[73] R. Kong, N. Zhang, X. Bao, X. Lu, Multi-agent distributed optimization based on the game theory and its application in automobile chassis, in: 2018 37th Chinese Control Conference (CCC), 2018, pp. 1107-1111. doi:10.23919/ChiCC.2018.8484212. +[74] H. Wang, X. Zhang, H. Luo, X. Qiao, M. Huo, Y. Jiang, Dis + +tributed fault tolerant control for multi-agent systems with sensor faults in non-cooperative games, in: 2023 CAA Symposium on Fault Detection, Supervision and Safety for Technical Processes (SAFEPPROCESS), 2023, pp. 1-6. doi:10.1109/ SAFEPPROCESS58597.2023.10295775. +[75] X. Dong, X. Li, S. Cheng, Energy management optimization of microgrid cluster based on multi-agent-system and hierarchical stackelberg game theory, IEEE Access 8 (2020) 206183-206197. doi:10.1109/ACCESS.2020.3037676. +[76] P. H. Nguyen, W. L. Kling, P. F. Ribeiro, A game theory strategy to integrate distributed agent-based functions in smart grids, IEEE Transactions on Smart Grid 4 (1) (2013) 568-576. doi:10.1109/TSG.2012.2236657. +[77] S. U. Khan, I. Ahmad, A semi-distributed axiomatic game theoretical mechanism for replicating data objects in large distributed computing systems, in: 2007 IEEE International Parallel and Distributed Processing Symposium, 2007, pp. 1-10. doi:10.1109/IPDPS.2007.370279. +[78] L. Xu, J. 
Zhu, B. Chen, Z. Yang, K. Liu, B. Dang, T. Zhang, Y. Yang, R. Huang, A distributed nanocluster based multiagent evolutionary network, Nature Communications 13 (1) (2022) 4698. doi:10.1038/s41467-022-32497-5. URL https://doi.org/10.1038/s41467-022-32497-5 +[79] F. Seredynski, T. Kulpa, R. Hoffmann, Towards evolutionary self-optimization of large multi-agent systems, in: Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '22, Association for Computing Machinery, New York, NY, USA, 2022, p. 200-203. doi: 10.1145/3520304.3529042. URL https://doi.org/10.1145/3520304.3529042 +[80] H. Liu, Z. Li, K. Huang, R. Wang, G. Cheng, T. Li, Evolutionary reinforcement learning algorithm for large-scale multi-agent cooperation and confrontation applications, The Journal of Supercomputing 80 (2) (2024) 2319-2346. doi:10.1007/s11227-023-05551-2. URL https://doi.org/10.1007/s11227-023-05551-2 +[81] Y. Dong, X. Liu, T. Li, C. L. P. Chen, Evolutionary game dynamics of multi-agent systems using local information considering hide right, Complex & Intelligent Systems 10 (1) (2024) 917-925. doi:10.1007/s40747-023-01172-7. URL https://doi.org/10.1007/s40747-023-01172-7 +[82] H. Chen, C. Hu, Z. Huang, Optimal control of multiagent decision-making based on competence evolution, Discrete Dynamics in Nature and Society 2023 (1) (2023) 2179376. doi: https://doi.org/10.1155/2023/2179376. +[83] H.-T. Wai, Z. Yang, Z. Wang, M. Hong, Multi-agent reinforcement learning via double averaging primal-dual optimization, in: Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18, Curran Associates Inc., Red Hook, NY, USA, 2018, pp. 9672-9683. +[84] T. Hu, Z. Pu, X. Ai, T. Qiu, J. Yi, Measuring policy distance for multi-agent reinforcement learning (2024). arXiv:2401.11257. URL https://arxiv.org/abs/2401.11257 +[85] P. Sunehag, G. Lever, A. Gruslys, W. M. Czarnecki, V. Zambaldi, M. Jaderberg, M. Lanctot, N. Sonnerat, J. Z. Leibo, K. Tuyls, T. Graepel, Value-decomposition networks for cooperative multi-agent learning based on team reward, in: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '18, International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 2018, pp. 2085-2087. +[86] W. J. Yun, J. Park, J. Kim, Quantum multi-agent meta reinforcement learning, in: Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth + +Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence, AAAI'23/IAAI'23/EAAI'23, AAAI Press, 2023. doi:10.1609/aaai.v37i9.26313. URL https://doi.org/10.1609/aaai.v37i9.26313 +[87] W. Mao, H. Qiu, C. Wang, H. Franke, Z. Kalbarczyk, R. K. Iyer, T. Basar, Multi-agent meta-reinforcement learning: sharper convergence rates with task similarity, Curran Associates Inc., Red Hook, NY, USA, 2024. +[88] M. Kouzeghar, Y. Song, M. Meghjani, R. Bouffanais, Multi-target pursuit by a decentralized heterogeneous uav swarm using deep multi-agent reinforcement learning (2023). arXiv: 2303.01799. URL https://arxiv.org/abs/2303.01799 +[89] F. Gao, S. Chen, M. Li, B. Huang, Maca: a multi-agent reinforcement learning platform for collective intelligence, in: 2019 IEEE 10th International Conference on Software Engineering and Service Science (ICSESS), 2019, pp. 108-111. doi:10.1109/ICSESS47205.2019.9040781. +[90] S. Qi, S. Zhang, X. Hou, J. Zhang, X. Wang, J. 
Xiao, Efficient distributed framework for collaborative multi-agent reinforcement learning (2022). arXiv:2205.05248. URL https://arxiv.org/abs/2205.05248 +[91] O. Vinyals, I. Babuschkin, W. M. Czarnecki, M. Mathieu, A. Dudzik, J. Chung, D. H. Choi, R. Powell, T. Ewalds, P. Georgiev, J. Oh, D. Horgan, M. Kroiss, I. Danihelka, A. Huang, L. Sifre, T. Cai, J. P. Agapiou, M. Jaderberg, A. S. Vezhnevets, R. Leblond, T. Pohlen, V. Dalibard, D. Budden, Y. Sulsky, J. Molloy, T. L. Paine, C. Gulcehre, Z. Wang, T. Pfaff, Y. Wu, R. Ring, D. Yogatama, D. Wünsch, K. McKinney, O. Smith, T. Schaul, T. Lillicrap, K. Kavukcuoglu, D. Hassabis, C. Apps, D. Silver, Grandmaster level in starcraft ii using multi-agent reinforcement learning, Nature 575 (7782) (2019) 350-354. doi:10.1038/s41586-019-1724-z. URL https://doi.org/10.1038/s41586-019-1724-z +[92] T. Chu, J. Wang, L. Codeca, Z. Li, Multi-agent deep reinforcement learning for large-scale traffic signal control, IEEE Transactions on Intelligent Transportation Systems 21 (3) (2020) 1086-1095. doi:10.1109/TITS.2019.2901791. +[93] Z. Lv, L. Xiao, Y. Du, G. Niu, C. Xing, W. Xu, Multiagent reinforcement learning based uav swarm communications against jamming, Trans. Wireless. Comm. 22 (12) (2023) 9063-9075. doi:10.1109/TWC.2023.3268082. URL https://doi.org/10.1109/TWC.2023.3268082 +[94] R.-E. P. Mircea-Bogdan Radac, R.-C. Roman, Model-free control performance improvement using virtual reference feedback tuning and reinforcement q-learning, International Journal of Systems Science 48 (5) (2017) 1071-1083. doi:10.1080/00207721.2016.1236423. +[95] Z. Liu, X. Yang, S. Sun, L. Qian, L. Wan, X. Chen, X. Lan, Grounded answers for multi-agent decision-making problem through generative world model, in: Advances in Neural Information Processing Systems (NeurIPS), 2024. +[96] H. Zhang, W. Du, J. Shan, Q. Zhou, Y. Du, J. B. Tenenbaum, T. Shu, C. Gan, Building cooperative embodied agents modularly with large language models, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=EnXJfQqyOK +[97] X. Hou, M. Yang, W. Jiao, X. Wang, Z. Tu, W. X. Zhao, Coact: A global-local hierarchy for autonomous agent collaboration (2024). arXiv:2406.13381. URL https://arxiv.org/abs/2406.13381 +[98] X. Puig, K. K. Ra, M. Boben, J. Li, T. Wang, S. Fidler, A. Torralba, Virtualhome: Simulating household activities via programs, 2018 IEEE/CVF Conference on Computer Vision + +and Pattern Recognition (2018) 8494-8502. +URL https://api_semanticscholar.org/CorpusID: 49317780 +[99] X. Puig, T. Shu, S. Li, Z. Wang, Y.-H. Liao, J. B. Tenenbaum, S. Fidler, A. Torralba, Watch-and-help: A challenge for social perception and human-{ai} collaboration, in: International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=w_7JMpGZRhO +[100] D. Gao, Z. Li, X. Pan, W. Kuang, Z. Ma, B. Qian, F. Wei, W. Zhang, Y. Xie, D. Chen, L. Yao, H. Peng, Z. Zhang, L. Zhu, C. Cheng, H. Shi, Y. Li, B. Ding, J. Zhou, Agentscope: A flexible yet robust multi-agent platform (2024). arXiv:2402.14034. URL https://arxiv.org/abs/2402.14034 +[101] Z. Xiao, D. Zhang, Y. Wu, L. Xu, Y. J. Wang, X. Han, X. Fu, T. Zhong, J. Zeng, M. Song, G. Chen, Chain-of-experts: When LLMs meet complex operations research problems, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=HobyL1B9CZ +[102] C. Wang, W. Luo, Q. Chen, H. Mai, J. Guo, S. Dong, X. M. Xuan, Z. Li, L. Ma, S. 
Gao, Mllm-tool: A multimodal large language model for tool agent learning, arXiv preprint arXiv:2401.10727 (2024). +[103] Q. Wu, G. Bansal, J. Zhang, Y. Wu, S. Zhang, E. E. Zhu, B. Li, L. Jiang, X. Zhang, C. Wang, Autogen: Enabling next-gen llm applications via multi-agent conversation, Tech. Rep. MSR-TR-2023-33, Microsoft (August 2023). +[104] M. Wen, J. Kuba, R. Lin, W. Zhang, Y. Wen, J. Wang, Y. Yang, Multi-agent reinforcement learning is a sequence modeling problem, in: S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh (Eds.), Advances in Neural Information Processing Systems, Vol. 35, Curran Associates, Inc., 2022, pp. 16509-16521. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/69413f87e5a34897cd010ca698097d0a-Paper-Conference.pdf +[105] W. Chen, Y. Su, J. Zuo, C. Yang, C. Yuan, C.-M. Chan, H. Yu, Y. Lu, Y.-H. Hung, C. Qian, Y. Qin, X. Cong, R. Xie, Z. Liu, M. Sun, J. Zhou, Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=EHg5GDnyq1 +[106] Z. Liu, Y. Zhang, P. Li, Y. Liu, D. Yang, Dynamic LLM-agent network: An LLM-agent collaboration framework with agent team optimization (2024). URL https://openreview.net/forum?id=i43XCU54Br +[107] G. Chen, S. Dong, Y. Shu, G. Zhang, J. Sesay, B. Karlsson, J. Fu, Y. Shi, Autoagents: A framework for automatic agent generation, in: K. Larson (Ed.), Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24, International Joint Conferences on Artificial Intelligence Organization, 2024, pp. 22–30, main Track. doi:10.24963/ijcai.2024/3. +URL https://doi.org/10.24963/ijcai.2024/3 +[108] S. Hong, M. Zhuge, J. Chen, X. Zheng, Y. Cheng, J. Wang, C. Zhang, Z. Wang, S. K. S. Yau, Z. Lin, L. Zhou, C. Ran, L. Xiao, C. Wu, J. Schmidhuber, MetaGPT: Meta programming for a multi-agent collaborative framework, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=VtmBAGCN7o +[109] S. Hong, Y. Lin, B. Liu, B. Liu, B. Wu, D. Li, J. Chen, + +J. Zhang, J. Wang, L. Zhang, L. Zhang, M. Yang, M. Zhuge, T. Guo, T. Zhou, W. Tao, W. Wang, X. Tang, X. Lu, X. Zheng, X. Liang, Y. Fei, Y. Cheng, Z. Xu, C. Wu, Data interpreter: An llm agent for data science (2024). arXiv:2402.18679. +[110] X. Team, Xagent: An autonomous agent for complex task solving (2023). +[111] J. Wang, Z. Duan, Intelligent spark agents: A modular lang-graph framework for scalable, visualized, and enhanced big data machine learning workflows (2024). arXiv:2412.01490. URL https://arxiv.org/abs/2412.01490 +[112] J. Wang, Z. Duan, Agent ai with langgraph: A modular framework for enhancing machine translation using large language models (2024). arXiv:2412.03801. URL https://arxiv.org/abs/2412.03801 +[113] Y. Zheng, Z. Xing, Q. Zhang, B. Jin, P. Li, Y. Zheng, Z. Xia, K. Zhan, X. Lang, Y. Chen, D. Zhao, Planagent: A multimodal large language agent for closed-loop vehicle motion planning (2024). arXiv:2406.01587. URL https://arxiv.org/abs/2406.01587 +[114] L. Zhang, Z. Ji, B. Chen, Crew: Facilitating human-ai teaming research (2025). arXiv:2408.00170. URL https://arxiv.org/abs/2408.00170 +[115] Z. Duan, J. Wang, Exploration of llm multi-agent application implementation based on langgraph+crewai (2024). arXiv: 2411.18241. URL https://arxiv.org/abs/2411.18241 +[116] Y. Cao, H. Zhao, Y. Cheng, T. Shu, Y. Chen, G. Liu, G. Liang, J. Zhao, J. Yan, Y. 
Li, Survey on large language model-enhanced reinforcement learning: Concept, taxonomy, and methods, IEEE Transactions on Neural Networks and Learning Systems (2024) 1-21doi:10.1109/TNNLS.2024.3497992. +[117] I. Marti, V. R. Tomas, A. Saez, J. J. Martinez, A rule-based multi-agent system for road traffic management, in: 2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology, Vol. 3, 2009, pp. 595-598. doi:10.1109/WI-IAT.2009.358. +[118] V. V. Nekhai, E. Trunova, I. Bilous, I. Bohdan, M. Voitsekhovska, Fuzzy game-theoretic modeling of a multi-agent cybersecurity management system for an agricultural enterprise, in: S. Shkarlet, A. Morozov, A. Palagin, D. Vinnikov, N. Stoianov, M. Zhelezniak, V. Kazymyr (Eds.), Mathematical Modeling and Simulation of Systems, Springer International Publishing, Cham, 2022, pp. 423-434. +[119] A. Ramezani, M. R. Andalibizadeh, S. Bahrampour, H. Ramezani, B. Moshiri, Select reliable strategy in multiagent systems using fuzzy logic-based fusion, in: 2008 Second Asia International Conference on Modelling & Simulation (AMS), 2008, pp. 13-17. doi:10.1109/AMS.2008.63. +[120] M. E. H. Charaf, M. Benattou, S. Azzouzi, A rule-based multi-agent system for testing distributed applications, in: 2012 International Conference on Multimedia Computing and Systems, 2012, pp. 967-972. doi:10.1109/ICMCS.2012.6320205. +[121] A. Daeichian, A. Haghani, Fuzzy q-learning-based multi-agent system for intelligent traffic control by a game theory approach, Arabian Journal for Science and Engineering 43 (6) (2018) 3241-3247. doi:10.1007/s13369-017-3018-9. URL https://doi.org/10.1007/s13369-017-3018-9 +[122] C. Amato, A first introduction to cooperative multi-agent reinforcement learning (2024). arXiv:2405.06161. URL https://arxiv.org/abs/2405.06161 +[123] Y. Zhou, S. Liu, Y. Qing, K. Chen, T. Zheng, Y. Huang, J. Song, M. Song, Is centralized training with decentralized execution framework centralized enough for marl? (2023). + +arXiv:2305.17352. +URL https://arxiv.org/abs/2305.17352 +[124] C. Amato, An introduction to centralized training for decentralized execution in cooperative multi-agent reinforcement learning (2024). arXiv:2409.03052. URL https://arxiv.org/abs/2409.03052 +[125] P. K. Sharma, E. G. Zaroukian, R. Fernandez, A. Basak, D. E. Asher, Survey of recent multi-agent reinforcement learning algorithms utilizing centralized training, in: T. Pham, L. Solomon, M. E. Hohil (Eds.), Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications III, SPIE, 2021, p. 84. doi:10.1117/12.2585808. URL http://dx.doi.org/10.1117/12.2585808 +[126] J. K. Gupta, M. Egorov, M. Kochenderfer, Cooperative multiagent control using deep reinforcement learning, in: G. Sukthankar, J. A. Rodriguez-Aguilar (Eds.), Autonomous Agents and Multiagent Systems, Springer International Publishing, Cham, 2017, pp. 66-83. +[127] C. Claus, C. Boutilier, The dynamics of reinforcement learning in cooperative multiagent systems, in: Proceedings of the fifteenth national/tenth conference on Artificial intelligence/Innovative applications of artificial intelligence, AAAI Press, 1998, pp. 746-752. +[128] M. Lauer, M. A. Riedmiller, An algorithm for distributed reinforcement learning in cooperative multi-agent systems, in: Proceedings of the Seventeenth International Conference on Machine Learning, ICML '00, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2000, p. 535-542. +[129] M. Bowling, M. 
Veloso, Multiagent learning using a variable learning rate, Artificial Intelligence 136 (2) (2002) 215-250. doi:https://doi.org/10.1016/S0004-3702(02)00121-2. URL https://www.sciencedirect.com/science/article/pii/S0004370202001212 +[130] T. Ikeda, T. Shibuya, Centralized training with decentralized execution reinforcement learning for cooperative multi-agent systems with communication delay, in: 2022 61st Annual Conference of the Society of Instrument and Control Engineers (SICE), 2022, pp. 135-140. doi:10.23919/SICE56594.2022.9905866. +[131] Y. Zhou, S. Liu, Y. Qing, K. Chen, T. Zheng, Y. Huang, J. Song, M. Song, Is centralized training with decentralized execution framework centralized enough for marl? (2023). arXiv:2305.17352. URL https://arxiv.org/abs/2305.17352 +[132] R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, I. Mordatch, Multi-agent actor-critic for mixed cooperative-competitive environments, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, Curran Associates Inc., Red Hook, NY, USA, 2017, pp. 6382-6393. +[133] C. Yu, A. Velu, E. Vinitsky, J. Gao, Y. Wang, A. Bayen, Y. Wu, The surprising effectiveness of ppo in cooperative multi-agent games, in: Advances in Neural Information Processing Systems, Vol. 35, 2022, pp. 29914-29928. URL https://proceedings.neurips.cc/paper_files/paper/2022/bitize/9c1535a02f0ce079433344e14d910597-Paper-Datasets and_Benchmarks.pdf +[134] M. Samvelyan, T. Rashid, C. S. de Witt, G. Farquhar, N. Nardelli, T. G. J. Rudner, C.-M. Hung, P. H. S. Torr, J. Foerster, S. Whiteson, The StarCraft Multi-Agent Challenge, CoRR abs/1902.04043 (2019). +[135] B. Ellis, J. Cook, S. Moalla, M. Samvelyan, M. Sun, A. Mahajan, J. Foerster, S. Whiteson, Smacv2: An improved + +benchmark for cooperative multi-agent reinforcement learning, in: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine (Eds.), Advances in Neural Information Processing Systems, Vol. 36, Curran Associates, Inc., 2023, pp. 37567-37593. +URL https://proceedings.neurips.cc/paper_files/paper/2023/file/764c18ad230f9e7bf6a77ffc2312c55e-Paper-Datasets and_Benchmarks.pdf +[136] R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, I. Mordatch, Multi-agent actor-critic for mixed cooperative-competitive environments, Neural Information Processing Systems (NIPS) (2017). +[137] T. Malloy, C. R. Sims, T. Klinger, M. Liu, M. Riemer, G. Tesauro, Capacity-limited decentralized actor-critic for multi-agent games, in: 2021 IEEE Conference on Games (CoG), 2021, pp. 1-8. doi:10.1109/CoG52621.2021.9619081. +[138] J. Jiang, Z. Lu, Learning attentional communication for multi-agent cooperation, in: Advances in Neural Information Processing Systems, Vol. 31, 2018, pp. 7254-7264. URL https://proceedings.neurips.cc/paper/2018/file/6a8018b3a00b69c008601b8becae392b-Paper.pdf +[139] C. Sun, Z. Zang, J. Li, J. Li, X. Xu, R. Wang, C. Zheng, T2mac: targeted and trusted multi-agent communication through selective engagement and evidence-driven integration, in: Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence and Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence and Fourteenth Symposium on Educational Advances in Artificial Intelligence, AAAI'24/IAAI'24/EAAI'24, AAAI Press, 2025. doi:10.1609/aaai.v38i13.29438. URL https://doi.org/10.1609/aaai.v38i13.29438 +[140] OpenAI, Gpt-4 technical report, arXiv preprint arXiv:2303.08774 (2023). URL https://arxiv.org/abs/2303.08774 +[141] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. 
Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. Christiano, J. Leike, R. Lowe, Training language models to follow instructions with human feedback, in: Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Curran Associates Inc., Red Hook, NY, USA, 2024. +[142] R. Mao, G. Chen, X. Zhang, F. Guerin, E. Cambria, GPTEval: A survey on assessments of ChatGPT and GPT-4, in: N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, N. Xue (Eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), ELRA and ICCL, Torino, Italia, 2024, pp. 7844-7866. URL https://aclanthology.org/2024.lrec-main.693 +[143] A. Grattafiori, A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Vaughan, A. Yang, A. Fan, A. Goyal, A. Hartshorn, A. Yang, et al., The llama 3 herd of models (2024). arXiv:2407.21783. URL https://arxiv.org/abs/2407.21783 +[144] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Roziere, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, G. Lample, Llama: Open and efficient foundation language models (2023). arXiv:2302.13971. URL https://arxiv.org/abs/2302.13971 + +[145] G. Team, Gemini: A family of highly capable multimodal models (2024). arXiv:2312.11805. +URL https://arxiv.org/abs/2312.11805 +[146] V. Dibia, J. Chen, G. Bansal, S. Syed, A. Fourney, E. Zhu, C. Wang, S. Amershi, Autogen studio: A no-code developer tool for building and debugging multi-agent systems (2024). arXiv:2408.15247. +URL https://arxiv.org/abs/2408.15247 +[147] C. Gan, S. Zhou, J. Schwartz, S. Alter, A. Bhandwaldar, D. Gutfreund, D. L. Yamins, J. J. DiCarlo, J. McDermott, A. Torralba, J. B. Tenenbaum, The ThreeDWorld transport challenge: A visually guided task-and-motion planning benchmark towards physically realistic embodied ai, in: 2022 International Conference on Robotics and Automation (ICRA), 2022, pp. 8847-8854. doi:10.1109/ICRA46639.2022.9812329. +[148] C. Gan, J. Schwartz, S. Alter, D. Mrowca, M. Schrimpf, J. Traer, J. De Freitas, J. Kubilius, A. Bhandwaldar, N. Haber, M. Sano, K. Kim, E. Wang, M. Lingelbach, A. Curtis, K. Feigelis, D. Bear, D. Gutfreund, D. Cox, A. Torralba, J. J. DiCarlo, J. Tenenbaum, J. McDermott, D. Yamins, ThreeDWorld: A platform for interactive multi-modal physical simulation, in: J. Vanschoren, S. Yeung (Eds.), Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, Vol. 1, 2021. +URL https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/735b90b4568125ed6c3f678819b6e058-Paper-round1.pdf +[149] R. Gong, Q. Huang, X. Ma, Y. Noda, Z. Durante, Z. Zheng, D. Terzopoulos, L. Fei-Fei, J. Gao, H. Vo, MindAgent: Emergent gaming interaction, in: K. Duh, H. Gomez, S. Bethard (Eds.), Findings of the Association for Computational Linguistics: NAACL 2024, Association for Computational Linguistics, Mexico City, Mexico, 2024, pp. 3154-3183. doi:10.18653/v1/2024.findings-naacl.200. +URL https://aclanthology.org/2024.findings-naacl.200 +[150] K. Shao, Y. Zhu, Z. Tang, D. Zhao, Cooperative multiagent deep reinforcement learning with counterfactual reward, in: 2020 International Joint Conference on Neural Networks (IJCNN), 2020, pp. 1-8. doi:10.1109/IJCNN48605.2020.9207169. +[151] K. R. Chandra, S.
Borugadda, Multi-agent deep reinforcement learning with deep q-network based energy efficiency and resource allocation in NOMA wireless systems, in: 2023 Second International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT), 2023, pp. 1-8. doi:10.1109/ICEEICT56924.2023.10157052. +[152] A. M. Hafiz, G. M. Bhat, Deep q-network based multi-agent reinforcement learning with binary action agents (2020). arXiv: 2008.04109. +URL https://arxiv.org/abs/2008.04109 +[153] C. S. de Witt, T. Gupta, D. Makoviichuk, V. Makoviychuk, P. H. S. Torr, M. Sun, S. Whiteson, Is independent learning all you need in the starcraft multi-agent challenge? (2020). arXiv:2011.09533. +URL https://arxiv.org/abs/2011.09533 +[154] O. Vinyals, T. Ewalds, S. Bartunov, P. Georgiev, A. S. Vezhnevets, M. Yeo, A. Makhzani, H. Kuttler, J. Agapiou, J. Schrittwieser, J. Quan, S. Gaffney, S. Petersen, K. Simonyan, T. Schaul, H. van Hasselt, D. Silver, T. Lillicrap, K. Calderone, P. Keet, A. Brunasso, D. Lawrence, A. Ekermo, J. Repp, R. Tsing, Starcraft ii: A new challenge for reinforcement learning (2017). arXiv:1708.04782. +URL https://arxiv.org/abs/1708.04782 + +[155] H. Shen, K. Zhang, M. Hong, T. Chen, Towards understanding asynchronous advantage actor-critic: Convergence and linear speedup, IEEE Transactions on Signal Processing 71 (2023) 2579-2594. doi:10.1109/tsp.2023.3268475. URL http://dx.doi.org/10.1109/TSP.2023.3268475 +[156] S. Li, Y. Wu, X. Cui, H. Dong, F. Fang, S. Russell, Robust multi-agent reinforcement learning via minimax deep deterministic policy gradient, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, AAAI Press, 2019, pp. 4213-4220. doi:10.1609/aaai.v33i01.33014213. +[157] J. N. Foerster, G. Farquhar, T. Afouras, N. Nardelli, S. Whiteson, Counterfactual multi-agent policy gradients, in: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18, AAAI Press, 2018. +[158] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, O. Klimov, Proximal policy optimization algorithms (2017). arXiv: 1707.06347. URL https://arxiv.org/abs/1707.06347 +[159] J. Schulman, S. Levine, P. Moritz, M. Jordan, P. Abbeel, Trust region policy optimization, in: Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, JMLR.org, 2015, p. 1889-1897. +[160] J. G. Kuba, R. Chen, M. Wen, Y. Wen, F. Sun, J. Wang, Y. Yang, Trust region policy optimisation in multi-agent reinforcement learning, in: International Conference on Learning Representations, 2022. URL https://arxiv.org/abs/2109.11251 +[161] S. Sukhbaatar, A. Szlam, R. Fergus, Learning multiagent communication with backpropagation, in: Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, Curran Associates Inc., Red Hook, NY, USA, 2016, pp. 2252-2260. +[162] L. Matignon, G. J. Laurent, N. Le Fort-Piat, Hysteretic q-learning: an algorithm for decentralized reinforcement learning in cooperative multi-agent teams, in: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007, pp. 64-69. doi:10.1109/IROS.2007.4399095. +[163] E. Wei, S. Luke, Lenient learning in independent-learner stochastic cooperative games, J. Mach. Learn. Res. 17 (1) (2016) 2914–2955. +[164] M. Hausknecht, P.
Stone, Deep recurrent q-learning for partially observable mdps (2017). arXiv:1507.06527. URL https://arxiv.org/abs/1507.06527 +[165] A. Tampuu, T. Matiisen, D. Kodelja, I. Kuzovkin, K. Korjus, J. Aru, J. Aru, R. Vicente, Multiagent cooperation and competition with deep reinforcement learning, PLOS ONE 12 (4) (2017) 1-15. doi:10.1371/journal.pone.0172395. URL https://doi.org/10.1371/journal.pone.0172395 +[166] S. Omidshafiei, J. Pazis, C. Amato, J. P. How, J. Vian, Deep decentralized multi-task multi-agent reinforcement learning under partial observability, in: Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, JMLR.org, 2017, p. 2681-2690. +[167] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, W. Zaremba, Openai gym (2016). arXiv: 1606.01540. URL https://arxiv.org/abs/1606.01540 +[168] J. N. Foerster, Y. M. Assael, N. de Freitas, S. Whiteson, Learning to communicate with deep multi-agent reinforcement learning (2016). arXiv:1605.06676. + +URL https://arxiv.org/abs/1605.06676 +[169] J. N. Foerster, Y. M. Assael, N. de Freitas, S. Whiteson, Learning to communicate to solve riddles with deep distributed recurrent q-networks (2016). arXiv:1602.02672. URL https://arxiv.org/abs/1602.02672 +[170] L. Fan, Y.-y. Liu, S. Zhang, Partially observable multi-agent rl with enhanced deep distributed recurrent q-network, in: 2018 5th International Conference on Information Science and Control Engineering (ICISCE), 2018, pp. 375-379. doi:10.1109/ICISCE.2018.00085. +[171] S. Sukhbaatar, A. Szlam, R. Fergus, Learning multiagent communication with backpropagation, in: Advances in Neural Information Processing Systems, Vol. 29, 2016, pp. 2244-2252. URL https://proceedings.neurips.cc/paper/2016/file/55b1927fdafe39c48e5b73b5d61ea60-Paper.pdf +[172] P. Peng, Y. Wen, Y. Yang, Q. Yuan, Z. Tang, H. Long, J. Wang, Multiagent bidirectionally-coordinated nets: Emergence of human-level coordination in learning to play starcraft combat games (2017). arXiv:1703.10069. URL https://arxiv.org/abs/1703.10069 +[173] G. Wang, D. Shi, C. Xue, H. Jiang, Y. Wang, Bic-ddpg: Bidirectionally-coordinated nets for deep multi-agent reinforcement learning, in: H. Gao, X. Wang, M. Iqbal, Y. Yin, J. Yin, N. Gu (Eds.), Collaborative Computing: Networking, Applications and Worksharing, Springer International Publishing, Cham, 2021, pp. 337-354. +[174] D. Fan, H. Shen, L. Dong, Multi-agent distributed deep deterministic policy gradient for partially observable tracking, Actuators 10 (10) (2021). doi:10.3390/act10100268. URL https://www.mdpi.com/2076-0825/10/10/268 +[175] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, K. Kavukcuoglu, Asynchronous methods for deep reinforcement learning, in: Proceedings of The 33rd International Conference on Machine Learning, PMLR, 2016, pp. 1928-1937. URL https://proceedings.mlr.press/v48/mniha16.html +[176] L. Yuan, C. Wang, J. Wang, F. Zhang, F. Chen, C. Guan, Z. Zhang, C. Zhang, Y. Yu, Multi-agent concentrative coordination with decentralized task representation, in: L. D. Raedt (Ed.), Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, International Joint Conferences on Artificial Intelligence Organization, 2022, pp. 599–605, main Track. doi:10.24963/ijcai.2022/85. URL https://doi.org/10.24963/ijcai.2022/85 +[177] S. Vanneste, A. Vanneste, K. Mets, T. D. Schepper, A. Anwar, S. Mercelis, P. 
Hellinckx, Learning to communicate using a communication critic and counterfactual reasoning, Neural Computing and Applications N/A (N/A) (2025) N/A. doi: 10.1007/s00521-024-10598-0. URL https://doi.org/10.1007/s00521-024-10598-0 +[178] A. Singh, T. Jain, S. Sukhbaatar, Learning when to communicate at scale in multiagent cooperative and competitive tasks, in: International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bk1Wt24tvH +[179] D. Das, R. L. Rath, T. Singh, S. Mishra, V. Malik, R. Sobti, B. Brahma, LLM-based custom chatbot using langchain, in: A. E. Hassanien, S. Anand, A. Jaiswal, P. Kumar (Eds.), Innovative Computing and Communications, Springer Nature Singapore, Singapore, 2024, pp. 257-267. +[180] Q. Sun, Z. Yin, X. Li, Z. Wu, X. Qiu, L. Kong, Corex: Pushing the boundaries of complex reasoning through multi-model + +collaboration (2023). arXiv:2310.00280. +[181] J. S. Park, L. Popowski, C. Cai, M. R. Morris, P. Liang, M. S. Bernstein, Social simulacra: Creating populated prototypes for social computing systems, in: Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, UIST '22, Association for Computing Machinery, New York, NY, USA, 2022. doi:10.1145/3526113.3545616. URL https://doi.org/10.1145/3526113.3545616 +[182] J. Terry, B. Black, N. Grammel, M. Jayakumar, A. Hari, R. Sullivan, L. Santos, R. Perez, C. Horsch, C. Dieffendahl, et al., Pettingzoo: A standard api for multi-agent reinforcement learning, in: Proceedings of the 35th Conference on Neural Information Processing Systems, in: Advances in Neural Information Processing Systems, Vol. 34, 2021, pp. 15032-15043. +[183] S. Huang, S. Ontañón, C. Bamford, L. Grela, Gym-µrts: Toward affordable full game real-time strategy games research with deep reinforcement learning, in: 2021 IEEE Conference on Games (CoG), Copenhagen, Denmark, August 17-20, 2021, IEEE, 2021, pp. 1-8. doi:10.1109/CoG52621.2021.9619076. URL https://doi.org/10.1109/CoG52621.2021.9619076 +[184] L. Zheng, J. Yang, H. Cai, M. Zhou, W. Zhang, J. Wang, Y. Yu, Magent: A many-agent reinforcement learning platform for artificial collective intelligence, Proceedings of the AAAI Conference on Artificial Intelligence 32 (1) (2018). doi:10.1609/aaai.v32i1.11371. URL https://ojs.aaai.org/index.php/AAAI/article/view/11371 +[185] C. Yu, P. Wang, Dexterous manipulation for multifingered robotic hands with reinforcement learning: A review, Frontiers in Neurorobotics 16 (2022). doi:10.3389/fnbot.2022.861825. URL https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2022.861825 +[186] O. M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, J. Schneider, S. Sidor, J. Tobin, P. Welinder, L. Weng, W. Zaremba, Learning dexterous in-hand manipulation, The International Journal of Robotics Research 39 (1) (2020) 3-20. arXiv:https://doi.org/10.1177/0278364919887447. doi:10.1177/0278364919887447. URL https://doi.org/10.1177/0278364919887447 +[187] M. Chevalier-Boisvert, B. Dai, M. Towers, R. de Lazcano, L. Willems, S. Lahlou, S. Pal, P. S. Castro, J. Terry, Minigrid & miniworld: Modular & customizable reinforcement learning environments for goal-oriented tasks (2023). arXiv: 2306.13831. URL https://arxiv.org/abs/2306.13831 +[188] J. Z. Leibo, E. Duñez-Guzmán, A. S. Vezhnevets, J. P. Agapiou, P. Sunehag, R. Koster, J. Matyas, C. Beattie, I. Mordatch, T.
Graepel, Scalable evaluation of multi-agent reinforcement learning with melting pot (2021). arXiv:2107.06857. URL https://arxiv.org/abs/2107.06857 +[189] C. Mehlman, J. Abramov, G. Falco, Cat-and-mouse satellite dynamics: Divergent adversarial reinforcement learning for contested multi-agent space operations (2024). arXiv: 2409.17443. URL https://arxiv.org/abs/2409.17443 +[190] Y. Dong, X. Zhu, Z. Pan, L. Zhu, Y. Yang, VillagerAgent: A graph-based multi-agent framework for coordinating complex task dependencies in Minecraft, in: L.-W. Ku, A. Martins, V. Srikumar (Eds.), Findings of the Association for Computational Linguistics: ACL 2024, Association for Computational + +Linguistics, Bangkok, Thailand, 2024, pp. 16290-16314. doi:10.18653/v1/2024.findings-acl.964. URL https://aclanthology.org/2024.findings-acl.964/ +[191] H. Li, X. Yang, Z. Wang, X. Zhu, J. Zhou, Y. Qiao, X. Wang, H. Li, L. Lu, J. Dai, Auto mc-reward: Automated dense reward design with large language models for minecraft (2024). arXiv:2312.09238. URL https://arxiv.org/abs/2312.09238 +[192] X. Zhu, Y. Chen, H. Tian, C. Tao, W. Su, C. Yang, G. Huang, B. Li, L. Lu, X. Wang, Y. Qiao, Z. Zhang, J. Dai, Ghost in the minecraft: Generally capable agents for open-world environments via large language models with text-based knowledge and memory (2023). arXiv:2305.17144. URL https://arxiv.org/abs/2305.17144 +[193] S. C. Duncan, Minecraft, beyond construction and survival, Well Played 1 (1) (2011) 1-22. +[194] A. Juliani, V.-P. Berges, E. Teng, A. Cohen, J. Harper, C. Elion, C. Goy, Y. Gao, H. Henry, M. Mattar, D. Lange, Unity: A general platform for intelligent agents (2020). arXiv:1809.02627. URL https://arxiv.org/abs/1809.02627 +[195] L. Hu, J. Tang, G. Zou, Z. Li, J. Zeng, M. Li, Simulation optimization of highway hard shoulder running based on multi-agent deep deterministic policy gradient algorithm, Alexandria Engineering Journal 117 (2025) 99-115. doi:https://doi.org/10.1016/j.aej.2024.12.110. URL https://www.sciencedirect.com/science/article/pii/S1110016824017095 +[196] Anonymous, A generalist hanabi agent, in: Submitted to The Thirteenth International Conference on Learning Representations, 2024, under review. URL https://openreview.net/forum?id=pCj2sLNoJq +[197] F. Bredell, H. A. Engelbrecht, J. C. Schoeman, Augmenting the action space with conventions to improve multi-agent cooperation in hanabi (2024). arXiv:2412.06333. URL https://arxiv.org/abs/2412.06333 +[198] M. Kölle, Y. Erpelding, F. Ritz, T. Phan, S. Illium, C. Linnhoff-Popien, Aquarium: A comprehensive framework for exploring predator-prey dynamics through multi-agent reinforcement learning algorithms (2024). arXiv:2401.07056. URL https://arxiv.org/abs/2401.07056 +[199] A. Chatterjee, M. A. Abbasi, E. Venturino, J. Zhen, M. Haque, A predator-prey model with prey refuge: under a stochastic and deterministic environment, Nonlinear Dynamics 112 (15) (2024) 13667-13693. doi:10.1007/s11071-024-09756-9. URL https://doi.org/10.1007/s11071-024-09756-9 +[200] Z. Mandi, S. Jain, S. Song, Roco: Dialectic multi-robot collaboration with large language models, in: 2024 IEEE International Conference on Robotics and Automation (ICRA), 2024, pp. 286-299. doi:10.1109/ICRA57147.2024.10610855. +[201] J. S. Park, J. O'Brien, C. J. Cai, M. R. Morris, P. Liang, M. S. Bernstein, Generative agents: Interactive simulacra of human behavior, in: Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, UIST '23, Association for Computing Machinery, New York, NY, USA, 2023. 
doi:10.1145/3586183.3606763. +[202] G. Kovac, R. Portelas, P. F. Dominey, P.-Y. Oudeyer, The social AI school: Insights from developmental psychology towards artificial socio-cultural agents, in: First Workshop on Theory of Mind in Communicating Agents, 2023. URL https://openreview.net/forum?id=Y5r8Wa670b +[203] G. Kovač, R. Portelas, P. F. Dominey, P.-Y. Oudeyer, The socialai school: a framework leveraging developmental psychol- + +ogy toward artificial socio-cultural agents, Frontiers in Neurorobotics 18, part of the Research Topic: Theory of Mind in Robots and Intelligent Systems (2024). doi:10.3389/fnbot.2024.1396359. +[204] G. Mukobi, H. Erlebach, N. Lauffer, L. Hammond, A. Chan, J. Clifton, Welfare diplomacy: Benchmarking language model cooperation (2024). URL https://openreview.net/forum?id=AKJLnDgzkm +[205] J. Boubin, C. Burley, P. Han, B. Li, B. Porter, C. Stewart, Programming and deployment of autonomous swarms using multi-agent reinforcement learning (2021). arXiv:2105.10605. URL https://arxiv.org/abs/2105.10605 +[206] T. Li, F. Xie, Y. Xiong, Q. Feng, Multi-arm robot task planning for fruit harvesting using multi-agent reinforcement learning (2023). arXiv:2303.00460. URL https://arxiv.org/abs/2303.00460 +[207] A. Mahajan, S. Hegde, E. Shay, D. Wu, A. Prins, Comparative analysis of multi-agent reinforcement learning policies for crop planning decision support (2024). arXiv:2412.02057. URL https://arxiv.org/abs/2412.02057 +[208] L. Benke, M. Papasimeon, T. Miller, Modelling strategic deceptive planning in adversarial multi-agent systems, in: S. Sarkadi, B. Wright, P. Masters, P. McBurney (Eds.), Deceptive AI, Springer International Publishing, Cham, 2021, pp. 76-83. +[209] A. V. Sutagundar, S. Manvi, Context aware multisensor image fusion for military sensor networks using multi-agent system, International Journal of Ad hoc, Sensor Ubiquitous Computing 2 (1) (2011) 147-167. doi:10.5121/ijasuc.2011.2113. URL http://dx.doi.org/10.5121/ijasuc.2011.2113 +[210] S. Vangaru, D. Rosen, D. Green, R. Rodriguez, M. Wiecek, A. Johnson, A. M. Jones, W. C. Headley, A multi-agent reinforcement learning testbed for cognitive radio applications (2024). arXiv:2410.21521. URL https://arxiv.org/abs/2410.21521 +[211] Z. Wang, L. Wang, Q. Yi, Y. Liu, A marl based multi-target tracking algorithm under jamming against radar (2024). arXiv:2412.12547. URL https://arxiv.org/abs/2412.12547 +[212] A. Alexopoulos, T. Schmidt, E. Badreddin, Cooperative pursue in pursuit-evasion games with unmanned aerial vehicles, in: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015, pp. 4538-4543. doi:10.1109/IROS.2015.7354022. +[213] D. Luo, Z. Fan, Z. Yang, Y. Xu, Multi-uav cooperative maneuver decision-making for pursuit-evasion using improved madrl, Defence Technology 35 (2024) 187-197. doi:https://doi.org/10.1016/j.dt.2023.11.013. URL https://www.sciencedirect.com/science/article/pii/S221491472300301X +[214] D. Wang, Multi-agent reinforcement learning for safe driving in on-ramp merging of autonomous vehicles, in: 2024 14th International Conference on Cloud Computing, Data Science & Engineering (Confluence), 2024, pp. 644-651. doi:10.1109/Confluence60223.2024.10463500. +[215] N. Aboueleneen, Y. Bello, A. Albaseer, A. R. Hussein, M. Abdallah, E. Hossain, Distributed traffic control in complex dynamic roadblocks: A multi-agent deep rl approach (2024). arXiv:2501.00211. URL https://arxiv.org/abs/2501.00211 +[216] J. Yu, L. Liang, C. Guo, Z. Guo, S. Jin, G. Y. 
Li, Heterogeneous multi-agent reinforcement learning for distributed channel access in wlans (2024). arXiv:2412.14218. URL https://arxiv.org/abs/2412.14218 + +[217] K. Sun, H. Yu, Reinforcement learning for freeway lane-change regulation via connected vehicles (2024). arXiv: 2412.04341. URL https://arxiv.org/abs/2412.04341 +[218] T. Azfar, R. Ke, Traffic co-simulation framework empowered by infrastructure camera sensing and reinforcement learning (2024). arXiv:2412.03925. URL https://arxiv.org/abs/2412.03925 +[219] R. Bokade, X. Jin, Offlight: An offline multi-agent reinforcement learning framework for traffic signal control (2024). arXiv:2411.06601. URL https://arxiv.org/abs/2411.06601 +[220] D. K. Kwesiga, S. C. Vishnoi, A. Guin, M. Hunter, Integrating transit signal priority into multi-agent reinforcement learning based traffic signal control (2024). arXiv:2411.19359. URL https://arxiv.org/abs/2411.19359 +[221] Y. Zhang, G. Zheng, Z. Liu, Q. Li, H. Zeng, Marlens: Understanding multi-agent reinforcement learning for traffic signal control via visual analytics, IEEE Transactions on Visualization and Computer Graphics (2024) 1-16doi:10.1109/tvcg.2024.3392587. URL http://dx.doi.org/10.1109/TVCG.2024.3392587 +[222] J. Xue, D. Zhang, R. Xiong, Y. Wang, E. Liu, A two-stage based social preference recognition in multi-agent autonomous driving system (2023). arXiv:2310.03303. URL https://arxiv.org/abs/2310.03303 +[223] J. Liu, P. Hang, X. Na, C. Huang, J. Sun, Cooperative decision-making for cavs at unsignalized intersections: A marl approach with attention and hierarchical game priors (2024). arXiv: 2409.05712. URL https://arxiv.org/abs/2409.05712 +[224] C. Formanek, L. Beyers, C. R. Tilbury, J. P. Shock, A. Pretorius, Putting data at the centre of offline multi-agent reinforcement learning (2024). arXiv:2409.12001. URL https://arxiv.org/abs/2409.12001 +[225] R. Zhang, J. Hou, F. Walter, S. Gu, J. Guan, F. Röhrbein, Y. Du, P. Cai, G. Chen, A. Knoll, Multi-agent reinforcement learning for autonomous driving: A survey (2024). arXiv: 2408.09675. URL https://arxiv.org/abs/2408.09675 +[226] S. Kotoku, T. Mihana, A. Röhm, R. Horisaki, Decentralized multi-agent reinforcement learning algorithm using a cluster-synchronized laser network (2024). arXiv:2407.09124. URL https://arxiv.org/abs/2407.09124 +[227] M. Hua, D. Chen, K. Jiang, F. Zhang, J. Wang, B. Wang, Q. Zhou, H. Xu, Communication-efficient marl for platoon stability and energy-efficiency co-optimization in cooperative adaptive cruise control of cavs (2024). arXiv:2406.11653. URL https://arxiv.org/abs/2406.11653 +[228] K. DeMarco, E. Squires, M. Day, C. Pippin, Simulating collaborative robots in a massive multi-agent game environment (scrimmage), in: N. Correll, M. Schwager, M. Otte (Eds.), Distributed Autonomous Robotic Systems, Springer International Publishing, Cham, 2019, pp. 283-297. +[229] Y. Xie, Y. Liu, Z. Ma, L. Shi, X. Wang, W. Yuan, M. O. Jackson, Q. Mei, How different ai chatbots behave? benchmarking large language models in behavioral economics games (2024). arXiv:2412.12362. URL https://arxiv.org/abs/2412.12362 +[230] W. Lin, J. Roberts, Y. Yang, S. Albanie, Z. Lu, K. Han, Beyond outcomes: Transparent assessment of llm reasoning in games (2024). arXiv:2412.13602. URL https://arxiv.org/abs/2412.13602 +[231] F. Jia, J. Zheng, F. Li, Decentralized intelligence in game: + +Embodied ai agents and the convergence of defi and virtual ecosystems (2024). arXiv:2412.18601. +URL https://arxiv.org/abs/2412.18601 +[232] C. Yin, G. Zhouhong, D. 
Zhaohan, Y. Zheyu, C. Shaosheng, X. Yiqian, F. Hongwei, C. Ping, Mirage: Exploring how large language models perform in complex social interactive environments (2025). arXiv:2501.01652. URL https://arxiv.org/abs/2501.01652 +[233] Z. Zhang, Y. Lan, Y. Chen, L. Wang, X. Wang, H. Wang, Dvm: Towards controllable llm agents in social deduction games (2025). arXiv:2501.06695. URL https://arxiv.org/abs/2501.06695 +[234] A. Gonzalez-Bonorino, M. Capra, E. Pantoja, Llms model nonweird populations: Experiments with synthetic cultural agents (2025). arXiv:2501.06834. URL https://arxiv.org/abs/2501.06834 +[235] X. Luo, F. Ding, F. Yang, Y. Zhou, J. Loo, H. H. Tew, C. Liu, Sensorag: Constructing environmental knowledge bases with proactive querying for llm-based autonomous driving (2025). arXiv:2501.03535. URL https://arxiv.org/abs/2501.03535 +[236] D. Mahmud, H. Hajmohamed, S. Almentheri, S. Alqaydi, L. Aldaheri, R. A. Khalil, N. Saeed, Integrating llms with its: Recent advances, potentials, challenges, and future directions (2025). arXiv:2501.04437. URL https://arxiv.org/abs/2501.04437 +[237] Z. Peng, Y. Wang, X. Han, L. Zheng, J. Ma, Learningflow: Automated policy learning workflow for urban driving with large language models (2025). arXiv:2501.05057. URL https://arxiv.org/abs/2501.05057 +[238] A. Karagounis, Leveraging large language models for enhancing autonomous vehicle perception (2024). arXiv:2412.20230. URL https://arxiv.org/abs/2412.20230 +[239] X. Luo, F. Ding, R. Chen, R. Panda, J. Loo, S. Zhang, "what's happening"- a human-centered multimodal interpreter explaining the actions of autonomous vehicles (2025). arXiv:2501.05322. URL https://arxiv.org/abs/2501.05322 +[240] H. Gao, Y. Zhao, Application of vision-language model to pedestrians behavior and scene understanding in autonomous driving (2025). arXiv:2501.06680. URL https://arxiv.org/abs/2501.06680 +[241] D. Hegde, R. Yasarla, H. Cai, S. Han, A. Bhattacharyya, S. Mahajan, L. Liu, R. Garrepalli, V. M. Patel, F. Porkli, Distilling multi-modal large language models for autonomous driving (2025). arXiv:2501.09757. URL https://arxiv.org/abs/2501.09757 +[242] R. Wang, X. He, R. Yu, W. Qiu, B. An, Z. Rabinovich, Learning efficient multi-agent communication: An information bottleneck approach, in: H. D. III, A. Singh (Eds.), Proceedings of the 37th International Conference on Machine Learning, Vol. 119 of Proceedings of Machine Learning Research, PMLR, 2020, pp. 9908-9918. URL https://proceedings.mlr.press/v119/wang20i.html +[243] A. van den Oord, O. Vinyals, K. Kavukcuoglu, Neural discrete representation learning, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, Curran Associates Inc., Red Hook, NY, USA, 2017, p. 6309-6318. +[244] F. Ye, J. Chen, Y. Tian, T. Jiang, Cooperative task assignment of a heterogeneous multi-uav system using an adaptive genetic algorithm, Electronics 9 (4) (2020). doi:10.3390/electronics9040687. + +URL https://www.mdpi.com/2079-9292/9/4/687 +[245] J. Wang, D. Ye, Z. Lu, Mutual-information regularized multiagent policy iteration, in: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine (Eds.), Advances in Neural Information Processing Systems, Vol. 36, Curran Associates, Inc., 2023, pp. 2617-2635. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/0799492e7be38b66d10ead5e8809616d-Paper-Conference.pdf +[246] P. Li, H. Tang, T. Yang, X. Hao, T. Sang, Y. Zheng, J. Hao, M. E. Taylor, Z. 
Wang, Pmic: Improving multi-agent reinforcement learning with progressive mutual information collaboration, ICML 2022 (2022). +[247] S. Kang, Y. Lee, S.-Y. Yun, DPM: Dual preferences-based multi-agent reinforcement learning, in: ICML 2024 Workshop on Models of Human Feedback for AI Alignment, 2024. URL https://openreview.net/forum?id=TW3DIP2h5p +[248] Q. Fu, T. Qiu, J. Yi, Z. Pu, S. Wu, Concentration network for reinforcement learning of large-scale multi-agent systems, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36, AAAI Press, Beijing, China, 2022, p. 21165. doi:10.1609/aaai.v36i9.21165. +[249] B. Peng, T. Rashid, C. A. S. de Witt, P.-A. Kamienny, P. H. S. Torr, W. Böhmer, S. Whiteson, Facmac: Factored multi-agent centralised policy gradients (2021). arXiv:2003.06709. URL https://arxiv.org/abs/2003.06709 +[250] K. Pandya, M. Holia, Automating customer service using langchain: Building custom open-source gpt chatbot for organizations (2023). arXiv:2310.05421. URL https://arxiv.org/abs/2310.05421 +[251] H. Du, S. Thudumu, R. Vasa, K. Mouzakis, A survey on context-aware multi-agent systems: Techniques, challenges and future directions (2024). arXiv:2402.01968. URL https://arxiv.org/abs/2402.01968 +[252] K. Hu, M. Li, Z. Song, K. Xu, Q. Xia, N. Sun, P. Zhou, M. Xia, A review of research on reinforcement learning algorithms for multi-agents, Neurocomputing 599 (2024) 128068. doi: https://doi.org/10.1016/j.neucom.2024.128068. URL https://www.sciencedirect.com/science/article/pii/S0925231224008397 +[253] K.-T. Tran, D. Dao, M.-D. Nguyen, Q.-V. Pham, B. O'Sullivan, H. D. Nguyen, Multi-agent collaboration mechanisms: A survey of llms (2025). arXiv:2501.06322. URL https://arxiv.org/abs/2501.06322 +[254] Y. Talebirad, A. Nadiri, Multi-agent collaboration: Harnessing the power of intelligent llm agents (2023). arXiv:2306.03314. URL https://arxiv.org/abs/2306.03314 +[255] F. L. D. Silva, M. E. Taylor, A. H. R. Costa, Autonomously reusing knowledge in multiagent reinforcement learning, in: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, 2018, pp. 5487-5493. +[256] F. L. Da Silva, A. H. R. Costa, A survey on transfer learning for multiagent reinforcement learning systems, J. Artif. Int. Res. 64 (1) (2019) 645-703. doi:10.1613/jair.1.11396. URL https://doi.org/10.1613/jair.1.11396 +[257] Y. Bengio, J. Louradour, R. Collobert, J. Weston, Curriculum learning, in: Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, Association for Computing Machinery, New York, NY, USA, 2009, p. 41-48. doi:10.1145/1553374.1553380. URL https://doi.org/10.1145/1553374.1553380 +[258] X. Wang, Y. Chen, W. Zhu, A survey on curriculum learn + +ing, IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (9) (2022) 4555-4576. doi:10.1109/TPAMI.2021.3069908. +[259] L. Pinto, J. Davidson, R. Sukthankar, A. Gupta, Robust adversarial reinforcement learning, in: Proceedings of the 34th International Conference on Machine Learning, PMLR, 2017, pp. 2817-2826. +[260] W. Jin, N. Wang, T. Tao, B. Shi, H. Bi, B. Zhao, H. Wu, H. Duan, G. Yang, A veracity dissemination consistency-based few-shot fake news detection framework by synergizing adversarial and contrastive self-supervised learning, Scientific Reports 14 (1) (2024) 19470. doi:10.1038/s41598-024-70039-9. URL https://doi.org/10.1038/s41598-024-70039-9 +[261] J. Foerster, N. Nardelli, G. 
Farquhar, T. Afouras, P. H. S. Torr, P. Kohli, S. Whiteson, Stabilising experience replay for deep multi-agent reinforcement learning, in: Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, JMLR.org, 2017, p. 1146-1155. +[262] T. Guo, X. Chen, Y. Wang, R. Chang, S. Pei, N. V. Chawla, O. Wiest, X. Zhang, Large language model based multi-agents: A survey of progress and challenges, in: Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI '24), IJCAI, 2024, pp. 8048-8057. doi: 10.24963/ijcai.2024/890. URL https://doi.org/10.24963/ijcai.2024/890 +[263] L. Huang, W. Yu, W. Ma, W. Zhong, Z. Feng, H. Wang, Q. Chen, W. Peng, X. Feng, B. Qin, T. Liu, A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions, ACM Transactions on Information Systems Accepted on 24 September 2024 (2024). doi: 10.1145/3703155. URL https://doi.org/10.1145/3703155 +[264] E. Alonso, D. Kudenko, D. Kazakov (Eds.), Adaptive agents and multi-agent systems: adaptation and multi-agent learning, Springer-Verlag, Berlin, Heidelberg, 2003. +[265] K. Tuyls, P. J. Hoen, K. Verbeeck, S. Sen (Eds.), Learning and Adaption in Multi-Agent Systems, Vol. 3898 of Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, 2006. URL https://link.springer.com/book/10.1007/978-3-540-33316-2 +[266] S. Chen, Y. Liu, W. Han, W. Zhang, T. Liu, A survey on llmbased multi-agent system: Recent advances and new frontiers in application (2025). arXiv:2412.17481. URL https://arxiv.org/abs/2412.17481 +[267] V. Dibia, Multi-agent llm applications — a review of current research, tools, and challenges, accessed from Victor Dibia's newsletter (2023). URL https://newsletter.victordibia.com/p/multi-agent-llm-applications-a-review +[268] A. Chan, R. Salganik, A. Markelius, C. Pang, N. Rajkumar, D. Krasheninnikov, L. Langosco, Z. He, Y. Duan, M. Carroll, M. Lin, A. Mayhew, K. Collins, M. Molamohammadi, J. Burden, W. Zhao, S. Rismani, K. Voudouris, U. Bhatt, A. Weller, D. Krueger, T. Maharaj, Harms from increasingly agentic algorithmic systems, in: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT '23, Association for Computing Machinery, New York, NY, USA, 2023, p. 651-666. doi:10.1145/3593013.3594033. URL https://doi.org/10.1145/3593013.3594033 +[269] A. Amirkhani, A. H. Barshooi, Consensus in multi-agent systems: A review, Artificial Intelligence Review 55 (5) (2022) 3897-3935, accessed: 2022-06-01. doi:10.1007/s10462-021-10097-x. URL https://doi.org/10.1007/s10462-021-10097-x + +[270] G. Beydoun, G. Low, H. Mouratidis, B. Henderson-Sellers, A security-aware metamodel for multi-agent systems (mas), Information and Software Technology 51 (5) (2009) 832-845, sPECIAL ISSUE: Model-Driven Development for Secure Information Systems. doi:https://doi.org/10.1016/j.infsof.2008.05.003. URL https://www.sciencedirect.com/science/article/pii/S0950584908000724 +[271] X. Xie, C. Zhang, Y. Zhu, Y. N. Wu, S.-C. Zhu, Congestion-aware multi-agent trajectory prediction for collision avoidance, in: 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 13693-13700. doi:10.1109/ICRA48506.2021.9560994. +[272] A. Stooke, K. Lee, P. Abbeel, M. Laskin, Decoupling representation learning from reinforcement learning, in: Proceedings of the 38th International Conference on Machine Learning, PMLR, 2021, pp. 9870-9879. URL https://proceedings.mlr.press/v139/stooke21a.html +[273] J. 
Luketina, N. Nardelli, G. Farquhar, J. Foerster, J. Andreas, E. Grefenstette, S. Whiteson, T. Rocktäschel, A survey of reinforcement learning informed by natural language, in: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, 2019, pp. 6309-6317. doi:10.24963/ijcai.2019/880. URL https://doi.org/10.24963/ijcai.2019/880 +[274] R. P. Poudel, H. Pandya, S. Liwicki, R. Cipolla, Recore: Regularized contrastive representation learning of world model, in: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 22904-22913. doi: 10.1109/CVPR52733.2024.02161. +[275] W. Choi, W. K. Kim, S. Kim, H. Woo, Efficient policy adaptation with contrastive prompt ensemble for embodied agents, in: Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Curran Associates Inc., Red Hook, NY, USA, 2024. +[276] F. Paischer, T. Adler, V. Patil, A. Bitto-Nemling, M. Holzleitner, S. Lehner, H. Eghbal-Zadeh, S. Hochreiter, History compression via language models in reinforcement learning, in: K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, S. Sabato (Eds.), Proceedings of the 39th International Conference on Machine Learning, Vol. 162 of Proceedings of Machine Learning Research, PMLR, 2022, pp. 17156-17185. URL https://proceedings.mlr.press/v162/paischer22a.html +[277] F. Paischer, T. Adler, M. Hofmacher, S. Hochreiter, Semantic HELM: A human-readable memory for reinforcement learning, in: Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=ebMPmx5mr7 +[278] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, I. Sutskever, Learning transferable visual models from natural language supervision, in: M. Meila, T. Zhang (Eds.), Proceedings of the 38th International Conference on Machine Learning, Vol. 139 of Proceedings of Machine Learning Research, PMLR, 2021, pp. 8748-8763. URL https://proceedings.mlr.press/v139/radford21a.html +[279] A. van den Oord, Y. Li, O. Vinyals, Representation learning with contrastive predictive coding (2019). arXiv:1807.03748. URL https://arxiv.org/abs/1807.03748 +[280] M. Laskin, A. Srinivas, P. Abbeel, Curl: contrastive unsupervised representations for reinforcement learning, in: Proceed- + +ings of the 37th International Conference on Machine Learning, ICML'20, JMLR.org, 2020. +[281] M. Schwarzer, A. Anand, R. Goel, R. D. Hjelm, A. Courville, P. Bachman, Data-efficient reinforcement learning with self-predictive representations, in: International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=uCQfPZwRaUu +[282] M. Kwon, S. M. Xie, K. Bullard, D. Sadigh, Reward design with language models, in: The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=10uNUgI5K1 +[283] J. Song, Z. Zhou, J. Liu, C. Fang, Z. Shu, L. Ma, Self-refined large language model as automated reward function designer for deep reinforcement learning in robotics (2023). arXiv: 2309.06687. URL https://arxiv.org/abs/2309.06687 +[284] Y. Wu, Y. Fan, P. P. Liang, A. Azaria, Y. Li, T. M. Mitchell, Read and reap the rewards: Learning to play atari with the help of instruction manuals, in: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine (Eds.), Advances in Neural Information Processing Systems, Vol. 36, Curran Associates, Inc., 2023, pp. 1009-1023. 
URL https://proceedings.neurips.cc/paper_files/paper/2023/file/034d7bfeace2a9a258648b16fc626298-Paper-Conference.pdf +[285] T. Carta, C. Romac, T. Wolf, S. Lamprier, O. Sigaud, P.-Y. Oudeyer, Grounding large language models in interactive environments with online reinforcement learning, in: Proceedings of the 40th International Conference on Machine Learning, ICML'23, JMLR.org, 2023. +[286] K. Chu, X. Zhao, C. Weber, M. Li, S. Wermter, Accelerating reinforcement learning of robotic manipulations via feedback from large language models, in: Bridging the Gap between Cognitive Science and Robot Learning in the Real World: Progresses and New Directions, 2024. URL https://openreview.net/forum?id=MBeeqmD8Zk +[287] C. Kim, Y. Seo, H. Liu, L. Lee, J. Shin, H. Lee, K. Lee, Guide your agent with adaptive multimodal rewards, in: Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Curran Associates Inc., Red Hook, NY, USA, 2024. +[288] W. Yu, N. Gileadi, C. Fu, S. Kirmani, K.-H. Lee, M. G. Arenas, H.-T. L. Chiang, T. Erez, L. Hasenclever, J. Humplik, B. Ichter, T. Xiao, P. Xu, A. Zeng, T. Zhang, N. Heess, D. Sadigh, J. Tan, Y. Tassa, F. Xia, Language to rewards for robotic skill synthesis (2023). arXiv:2306.08647. URL https://arxiv.org/abs/2306.08647 +[289] A. Adeniji, A. Xie, C. Sferrazza, Y. Seo, S. James, P. Abbeel, Language reward modulation for pretraining reinforcement learning (2024). URL https://openreview.net/forum?id $\equiv$ SWRFC2Eup0 +[290] A. Madaan, N. Tandon, P. Gupta, S. Hallinan, L. Gao, S. Wegreffe, U. Alon, N. Dziri, S. Prabhumoye, Y. Yang, S. Gupta, B. P. Majumder, K. Hermann, S. Welleck, A. Yazdanbakhsh, P. Clark, Self-refine: Iterative refinement with self-feedback (2023). arXiv:2303.17651. URL https://arxiv.org/abs/2303.17651 +[291] Y. J. Ma, W. Liang, G. Wang, D.-A. Huang, O. Bastani, D. Jayaraman, Y. Zhu, L. Fan, A. Anandkumar, Eureka: Human-level reward design via coding large language models, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=IEduRU055F + +[292] T. Xie, S. Zhao, C. H. Wu, Y. Liu, Q. Luo, V. Zhong, Y. Yang, T. Yu, Text2reward: Reward shaping with language models for reinforcement learning, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=tUM39YTRxH +[293] M. Janner, Q. Li, S. Levine, Offline reinforcement learning as one big sequence modeling problem, in: Proceedings of the 35th International Conference on Neural Information Processing Systems, NIPS '21, Curran Associates Inc., Red Hook, NY, USA, 2024. +[294] R. Shi, Y. Liu, Y. Ze, S. S. Du, H. Xu, Unleashing the power of pre-trained language models for offline reinforcement learning, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=AY6aM13gGF +[295] S. Li, X. Puig, C. Paxton, Y. Du, C. Wang, L. Fan, T. Chen, D.-A. Huang, E. Akyurek, A. Anandkumar, J. Andreas, I. Mordatch, A. Torralba, Y. Zhu, Pre-trained language models for interactive decision-making, in: Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Curran Associates Inc., Red Hook, NY, USA, 2024. +[296] L. Mezghani, P. Bojanowski, K. Alahari, S. Sukhbaatar, Think before you act: Unified policy for interleaving language reasoning with actions, in: Workshop on Reincarnating Reinforcement Learning at ICLR 2023, 2023. 
URL https://openreview.net/forum?id=HQmhQIi6mN +[297] J. Grigsby, L. Fan, Y. Zhu, Amago: Scalable in-context reinforcement learning for adaptive agents (2024). arXiv:2310.09971. URL https://arxiv.org/abs/2310.09971 +[298] B. Zitkovich, T. Yu, S. Xu, P. Xu, T. Xiao, F. Xia, J. Wu, P. Wohlhart, S. Welker, A. Wahid, Q. Vuong, Vanhoucke, Rt-2: Vision-language-action models transfer web knowledge to robotic control, in: J. Tan, M. Toussaint, K. Darvish (Eds.), Proceedings of The 7th Conference on Robot Learning, Vol. 229 of Proceedings of Machine Learning Research, PMLR, 2023, pp. 2165–2183. +[299] S. Yao, R. Rao, M. Hausknecht, K. Narasimhan, Keep CALM and explore: Language models for action generation in text-based games, in: B. Webber, T. Cohn, Y. He, Y. Liu (Eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Online, 2020, pp. 8736-8754. doi:10.18653/v1/2020.emnlp-main.704. URL https://aclanthology.org/2020.emnlp-main.704/ +[300] H. Hu, D. Sadigh, Language instructed reinforcement learning for human-ai coordination, in: Proceedings of the 40th International Conference on Machine Learning, ICML'23, JMLR.org, 2023. +[301] Z. Zhou, B. Hu, C. Zhao, P. Zhang, B. Liu, Large language model as a policy teacher for training reinforcement learning agents, in: Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI '24, 2025. doi: 10.24963/ijcai.2024/627. URL https://doi.org/10.24963/ijcai.2024/627 +[302] C. Chen, J. Yoon, Y.-F. Wu, S. Ahn, Transdreamer: Reinforcement learning with transformer world models, in: Deep RL Workshop NeurIPS 2021, 2021. URL https://openreview.net/forum?id=sVrzVAL90sA +[303] D. Das, S. Chernova, B. Kim, State2explanation: Concept-based explanations to benefit agent learning and user understanding, in: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine (Eds.), Advances in Neural + +Information Processing Systems, Vol. 36, Curran Associates, Inc., 2023, pp. 67156-67182. +URL https://proceedings.neurips.cc/paper_files/paper/2023/file/d4387c37b3b06e55f86eccdb8cd1f829-Paper-Conference.pdf +[304] J. Lin, Y. Du, O. Watkins, D. Hafner, P. Abbeel, D. Klein, A. Dragan, Learning to model the world with language (2024). arXiv:2308.01399. URL https://arxiv.org/abs/2308.01399 +[305] J. Robine, M. Höftmann, T. Uelwer, S. Harmeling, Transformer-based world models are happy with 100k interactions, in: ICLR, 2023. URL https://openreview.net/forum?id=TdBaDGCpjly +[306] R. P. K. Poudel, H. Pandya, C. Zhang, R. Cipolla, Langwm: Language grounded world model (2023). arXiv:2311.17593. URL https://arxiv.org/abs/2311.17593 +[307] J. Lin, Y. Du, O. Watkins, D. Hafner, P. Abbeel, D. Klein, A. Dragan, Learning to model the world with language (2024). URL https://openreview.net/forum?id=eWL0oaShEH +[308] Z. Yang, K. Ren, X. Luo, M. Liu, W. Liu, J. Bian, W. Zhang, D. Li, Towards applicable reinforcement learning: Improving the generalization and sample efficiency with policy ensemble, in: L. D. Raedt (Ed.), Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, International Joint Conferences on Artificial Intelligence Organization, 2022, pp. 3659-3665, main Track. doi: 10.24963/ijcai.2022/508. 
URL https://doi.org/10.24963/ijcai.2022/508 \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13415/images.zip b/data/2025/2503_13xxx/2503.13415/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..21bd20cd5ea95326035b8b1f95ceb623ca5f6dff --- /dev/null +++ b/data/2025/2503_13xxx/2503.13415/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a68dbd1f052bc70ece874719173b58a55de03a208f6d10e4966d985040b6ca04 +size 1641056 diff --git a/data/2025/2503_13xxx/2503.13415/layout.json b/data/2025/2503_13xxx/2503.13415/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..0e6a2f96ef55846b1a8f197c029c6e6e7310f11b --- /dev/null +++ b/data/2025/2503_13xxx/2503.13415/layout.json @@ -0,0 +1,31831 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 62, + 108, + 126, + 125 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 108, + 126, + 125 + ], + "spans": [ + { + "bbox": [ + 62, + 108, + 126, + 125 + ], + "type": "text", + "content": "Highlights" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 149, + 531, + 174 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 149, + 531, + 174 + ], + "spans": [ + { + "bbox": [ + 61, + 149, + 531, + 174 + ], + "type": "text", + "content": "A Comprehensive Survey on Multi-Agent Cooperative Decision-Making: Scenarios, Approaches, Challenges and Perspectives" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 194, + 389, + 208 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 194, + 389, + 208 + ], + "spans": [ + { + "bbox": [ + 61, + 194, + 389, + 208 + ], + "type": "text", + "content": "Weiqiang Jin, Hongyang Du, Biao Zhao, Xingwu Tian, Bohang Shi, Guang Yang" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 71, + 308, + 287, + 693 + ], + "type": "list", + "angle": 0, + "index": 8, + "blocks": [ + { + "bbox": [ + 71, + 308, + 287, + 333 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 308, + 287, + 333 + ], + "spans": [ + { + "bbox": [ + 71, + 308, + 287, + 333 + ], + "type": "text", + "content": "- Provides a comprehensive survey of multi-agent decision-making methods." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 72, + 398, + 287, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 398, + 287, + 423 + ], + "spans": [ + { + "bbox": [ + 72, + 398, + 287, + 423 + ], + "type": "text", + "content": "- Analyzes key simulation environments for multi-agent reinforcement learning." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 72, + 488, + 287, + 513 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 488, + 287, + 513 + ], + "spans": [ + { + "bbox": [ + 72, + 488, + 287, + 513 + ], + "type": "text", + "content": "- Investigate decision-making approaches, including MARL and large language models." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 72, + 578, + 287, + 602 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 578, + 287, + 602 + ], + "spans": [ + { + "bbox": [ + 72, + 578, + 287, + 602 + ], + "type": "text", + "content": "- Identifies challenges and future research directions in multi-agent collaboration." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 71, + 669, + 286, + 693 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 669, + 286, + 693 + ], + "spans": [ + { + "bbox": [ + 71, + 669, + 286, + 693 + ], + "type": "text", + "content": "- Reviews real-world applications in transportation, aerial systems, and automation." + } + ] + } + ], + "index": 7 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 13, + 251, + 36, + 609 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 13, + 251, + 36, + 609 + ], + "spans": [ + { + "bbox": [ + 13, + 251, + 36, + 609 + ], + "type": "text", + "content": "arXiv:2503.13415v1 [cs.MA] 17 Mar 2025" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 85, + 117, + 507, + 152 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 117, + 507, + 152 + ], + "spans": [ + { + "bbox": [ + 85, + 117, + 507, + 152 + ], + "type": "text", + "content": "A Comprehensive Survey on Multi-Agent Cooperative Decision-Making: Scenarios, Approaches, Challenges and Perspectives" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 110, + 168, + 483, + 181 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 168, + 483, + 181 + ], + "spans": [ + { + "bbox": [ + 110, + 168, + 483, + 181 + ], + "type": "text", + "content": "Weiqiang Jina, Hongyang Du, Biao Zhaoa,\\*, Xingwu Tiana, Bohang Shi a, Guang Yang c,d,e,f,*" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 74, + 189, + 518, + 247 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 74, + 189, + 518, + 247 + ], + "spans": [ + { + "bbox": [ + 74, + 189, + 518, + 247 + ], + "type": "inline_equation", + "content": "^{a}" + }, + { + "bbox": [ + 74, + 189, + 518, + 247 + ], + "type": "text", + "content": "School of Information and Communications Engineering, Xi'an Jiaotong University, Innovation Harbour, Xi'an, 710049, Shaanxi, China \n" + }, + { + "bbox": [ + 74, + 189, + 518, + 247 + ], + "type": "inline_equation", + "content": "^{b}" + }, + { + "bbox": [ + 74, + 189, + 518, + 247 + ], + "type": "text", + "content": "Department of Electrical and Electronic Engineering, The University of Hong Kong (HKU), Hong Kong, Hong Kong, China \n" + }, + { + "bbox": [ + 74, + 189, + 518, + 247 + ], + "type": "inline_equation", + "content": "^{c}" + }, + { + "bbox": [ + 74, + 189, + 518, + 247 + ], + "type": "text", + "content": "Bioengineering Department and Imperial-X, Imperial College London, London, W12 7SL, UK \n" + }, + { + "bbox": [ + 74, + 189, + 518, + 247 + ], + "type": "inline_equation", + "content": "^{d}" + }, + { + "bbox": [ + 74, + 189, + 518, + 247 + ], + "type": "text", + "content": "National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK \n" + }, + { + "bbox": [ + 74, + 189, + 518, + 247 + ], + "type": "inline_equation", + "content": "^{e}" + }, + { + "bbox": [ + 74, + 189, + 518, + 247 + ], + "type": "text", + "content": "Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK \n" + }, + { + "bbox": [ + 74, + 189, + 518, + 247 + ], + "type": "inline_equation", + "content": "^{f}" + }, + { + "bbox": [ + 74, + 189, + 518, + 247 + ], + "type": "text", + "content": "School of Biomedical Engineering & Imaging Sciences, King's College London, London, WC2R 2LS, UK" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 292, + 102, + 302 + ], + "type": "title", 
+ "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 292, + 102, + 302 + ], + "spans": [ + { + "bbox": [ + 62, + 292, + 102, + 302 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 60, + 309, + 531, + 501 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 309, + 531, + 501 + ], + "spans": [ + { + "bbox": [ + 60, + 309, + 531, + 501 + ], + "type": "text", + "content": "With the rapid development of artificial intelligence, intelligent decision-making techniques have gradually surpassed human levels in various human-machine competitions, especially in complex multi-agent cooperative task scenarios. Multi-agent cooperative decision-making involves multiple agents working together to complete established tasks and achieve specific objectives. These techniques are widely applicable in real-world scenarios such as autonomous driving, drone navigation, disaster rescue, and simulated military confrontations. This paper begins with a comprehensive survey of the leading simulation environments and platforms used for multi-agent cooperative decision-making. Specifically, we provide an in-depth analysis for these simulation environments from various perspectives, including task formats, reward allocation, and the underlying technologies employed. Subsequently, we provide a comprehensive overview of the mainstream intelligent decision-making approaches, algorithms and models for multi-agent systems (MAS). These approaches can be broadly categorized into five types: rule-based (primarily fuzzy logic), game theory-based, evolutionary algorithms-based, deep multi-agent reinforcement learning (MARL)-based, and large language models (LLMs) reasoning-based. Given the significant advantages of MARL and LLMs-based decision-making methods over the traditional rule, game theory, and evolutionary algorithms, this paper focuses on these multi-agent methods utilizing MARL and LLMs-based techniques. We provide an in-depth discussion of these approaches, highlighting their methodology taxonomies, advantages, and drawbacks. Further, several prominent research directions in the future and potential challenges of multi-agent cooperative decision-making are also detailed." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 508, + 106, + 519 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 508, + 106, + 519 + ], + "spans": [ + { + "bbox": [ + 61, + 508, + 106, + 519 + ], + "type": "text", + "content": "Keywords:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 61, + 520, + 528, + 544 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 520, + 528, + 544 + ], + "spans": [ + { + "bbox": [ + 61, + 520, + 528, + 544 + ], + "type": "text", + "content": "Intelligent decision-making, Multi-agent systems, Multi-agent cooperative environments, Multi-agent reinforcement learning, Large language models." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 61, + 566, + 131, + 577 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 566, + 131, + 577 + ], + "spans": [ + { + "bbox": [ + 61, + 566, + 131, + 577 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 61, + 593, + 286, + 617 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 593, + 286, + 617 + ], + "spans": [ + { + "bbox": [ + 61, + 593, + 286, + 617 + ], + "type": "text", + "content": "1.1. 
Research Backgrounds of Multi-Agent Decision-Making" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 61, + 627, + 286, + 650 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 627, + 286, + 650 + ], + "spans": [ + { + "bbox": [ + 61, + 627, + 286, + 650 + ], + "type": "text", + "content": "With the continuous advancement of science and technology, intelligent decision-making technology has" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 305, + 566, + 530, + 685 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 566, + 530, + 685 + ], + "spans": [ + { + "bbox": [ + 305, + 566, + 530, + 685 + ], + "type": "text", + "content": "made rapid progress. These technologies have gradually surpassed human capabilities in various human-machine game competitions, even exceeding the top human levels. Over the past few decades, especially following the successful application of Deep Q-Networks (DQN) [1, 2] in the Arita game and the victories of AlphaGo and AlphaZero [3, 4] over top human opponents, these landmark achievements have significantly propelled the advancement of intelligent decision-making research." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 305, + 697, + 530, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 697, + 530, + 732 + ], + "spans": [ + { + "bbox": [ + 305, + 697, + 530, + 732 + ], + "type": "text", + "content": "To meet the growing complexity of real-world applications and the increasing demand for more sophisticated, reliable, and efficient intelligent systems, multi" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 61, + 674, + 285, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 674, + 285, + 732 + ], + "spans": [ + { + "bbox": [ + 61, + 674, + 285, + 732 + ], + "type": "text", + "content": "*Corresponding authors: Biao Zhao and Guang Yang. 
Email addresses: weiqiangjin@stu.xjtu.edu.cn (Weiqiang Jin), duhy@hku.hk (Hongyang Du), biaozhao@xjtu.edu.cn (Biao Zhao), txw_xjtu@163.com (Xingwu Tian), Bh_567@stu.xjtu.edu.cn (Bohang Shi), g.yang@imperial.ac.uk (Guang Yang)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 61, + 740, + 194, + 750 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 740, + 194, + 750 + ], + "spans": [ + { + "bbox": [ + 61, + 740, + 194, + 750 + ], + "type": "text", + "content": "Preprint submitted to Information fusion" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 477, + 740, + 530, + 749 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 477, + 740, + 530, + 749 + ], + "spans": [ + { + "bbox": [ + 477, + 740, + 530, + 749 + ], + "type": "text", + "content": "March 18, 2025" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 63, + 107, + 533, + 333 + ], + "blocks": [ + { + "bbox": [ + 63, + 107, + 533, + 333 + ], + "lines": [ + { + "bbox": [ + 63, + 107, + 533, + 333 + ], + "spans": [ + { + "bbox": [ + 63, + 107, + 533, + 333 + ], + "type": "image", + "image_path": "06418d1620b5a8ff60e8838d831282725f05ecef98c24bf5bc1a890f42541a06.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 91, + 343, + 500, + 354 + ], + "lines": [ + { + "bbox": [ + 91, + 343, + 500, + 354 + ], + "spans": [ + { + "bbox": [ + 91, + 343, + 500, + 354 + ], + "type": "text", + "content": "Figure 1: An overview of the evolution of scenarios and methods in decision-making from single-agent to multi-agent systems." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 60, + 373, + 287, + 468 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 373, + 287, + 468 + ], + "spans": [ + { + "bbox": [ + 60, + 373, + 287, + 468 + ], + "type": "text", + "content": "agent cooperative decision-making has rapidly evolved from simple single-agent scenarios [5, 6, 7, 8]. Multiagent cooperative decision-making is a crucial subfield within machine learning (ML) [9] and artificial intelligence (AI) [10]. It involves multiple interacting agents working together to complete established tasks across diverse well-designed dynamic simulated environments and various complex real-world systems." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 60, + 469, + 287, + 684 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 469, + 287, + 684 + ], + "spans": [ + { + "bbox": [ + 60, + 469, + 287, + 684 + ], + "type": "text", + "content": "As depicted in Figure 1, the evolution research progress from single-agent to multi-agent decision-making systems, along with methodological comparisons, highlights that this rapidly advancing field is a crucial step toward achieving human-level AI and the Artificial General Intelligence (AGI) age. Multiagent cooperative decision-making has a wide range of practical applications and many fundamental theoretical works across various domains. The service scenarios are extensive, encompassing smart agriculture management [11, 12], intelligent collaborative robots [13, 14, 15, 16], self-driving collaborative obstacle avoidance [17, 18, 19], autonomous navigation [20, 21, 22] as well as joint rescue tasks [12, 23]. 
Correspondingly, considering the rapid pace of technological advancement and the multifaceted needs of the real world, in this work, we focus on the comprehensive study of multi-agent cooperative decision-making." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 694, + 255, + 707 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 694, + 255, + 707 + ], + "spans": [ + { + "bbox": [ + 61, + 694, + 255, + 707 + ], + "type": "text", + "content": "1.2. Overview of Previous Multi-Agent Surveys" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 708, + 287, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 708, + 287, + 732 + ], + "spans": [ + { + "bbox": [ + 61, + 708, + 287, + 732 + ], + "type": "text", + "content": "Concurrent with the fast-paced advancements in multi-agent cooperative decision-making, there has" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 305, + 373, + 531, + 432 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 373, + 531, + 432 + ], + "spans": [ + { + "bbox": [ + 305, + 373, + 531, + 432 + ], + "type": "text", + "content": "been a marked increase in systematic literature reviews in this domain [24, 6, 8, 25]. These reviews have covered a wide range of topics, from theoretical innovations to practical applications, providing a comprehensive overview of the state-of-the-art." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 433, + 532, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 433, + 532, + 733 + ], + "spans": [ + { + "bbox": [ + 305, + 433, + 532, + 733 + ], + "type": "text", + "content": "Ning et al. [25] provided a comprehensive overview of the evolution, challenges, and applications of multi-agent reinforcement learning (MARL)-based intelligent agents, including its practical implementation aspects. Gronauer et al. [6] provided an overview of recent developments in multi-agent deep reinforcement learning, focusing on training schemes, emergent agent behaviors, and the unique challenges of the multi-agent domain, while also discussing future research directions. Yang et al. [26] explored the utility theory application in AI robotics, focusing on how utility AI models can guide decision-making and cooperation in multi-agent/robot systems. Orr et al. [8] reviewed recent advancements in MARL, particularly its applications in multi-robot systems, while discussing current challenges and potential future applications. Du et al. [24] provided a systematic overview of multiagent deep reinforcement learning for MAS, focusing on its challenges, methodologies, and applications. Pamul et al. [7] provided a comprehensive analysis of the application of MARL in connected and automated vehicles (CAVs), identifying current developments, existing research directions, and challenges. Hernandez-Leal et al. 
[27] provided a comprehensive overview of approaches to addressing opponent-induced non" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 739, + 301, + 749 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 739, + 301, + 749 + ], + "spans": [ + { + "bbox": [ + 293, + 739, + 301, + 749 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 60, + 111, + 286, + 220 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 111, + 286, + 220 + ], + "spans": [ + { + "bbox": [ + 60, + 111, + 286, + 220 + ], + "type": "text", + "content": "stationarity in multi-agent learning, categorizing algorithms into a new framework and offering insights into their effectiveness across different environments. The survey by Zhu et al. [28] provided a systematic classification and analysis of MARL systems that incorporate communication, encompassing recent advanced CommMARL research and identifying key dimensions that influence the design and development of these multi-agent systems." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 62, + 231, + 221, + 243 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 231, + 221, + 243 + ], + "spans": [ + { + "bbox": [ + 62, + 231, + 221, + 243 + ], + "type": "text", + "content": "1.3. Motivations of the Current Survey" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 246, + 286, + 317 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 246, + 286, + 317 + ], + "spans": [ + { + "bbox": [ + 61, + 246, + 286, + 317 + ], + "type": "text", + "content": "However, despite the growing body of work in this field, existing surveys often have noticeable limitations [24, 6, 25, 28]. Specifically, our thorough investigation reveals that most current reviews and surveys share several common and significant significant drawbacks and limitations:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 71, + 326, + 286, + 545 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 71, + 326, + 286, + 386 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 326, + 286, + 386 + ], + "spans": [ + { + "bbox": [ + 71, + 326, + 286, + 386 + ], + "type": "text", + "content": "- Limited Research Scope: Previous literature reviews [27, 28] predominantly remain within the primary framework of reinforcement learning and have not broken through theoretical limitations, resulting in a lack of comprehensive coverage." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 72, + 394, + 286, + 465 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 394, + 286, + 465 + ], + "spans": [ + { + "bbox": [ + 72, + 394, + 286, + 465 + ], + "type": "text", + "content": "- Neglect of Environments: Previous literature reviews [29, 6, 30] have largely concentrated on methodological and algorithmic advancements, frequently overlooking the essential role of simulation environments and platforms in multi-agent intelligent decision-making." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 72, + 473, + 286, + 545 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 473, + 286, + 545 + ], + "spans": [ + { + "bbox": [ + 72, + 473, + 286, + 545 + ], + "type": "text", + "content": "- Under-emphasis of Project Implementation: Prior surveys [25, 28, 30] often focus on theoretical models and overlook detailed implementation aspects, including code-bases and project architectures. This gap limits readers' ability to fully understand and apply the findings." + } + ] + } + ], + "index": 5 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 61, + 553, + 286, + 601 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 553, + 286, + 601 + ], + "spans": [ + { + "bbox": [ + 61, + 553, + 286, + 601 + ], + "type": "text", + "content": "To address the aforementioned limitations and challenges, we recognize the need for more systematic and comprehensive reviews in the multi-agent intelligent decision-making field." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 61, + 601, + 286, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 601, + 286, + 732 + ], + "spans": [ + { + "bbox": [ + 61, + 601, + 286, + 732 + ], + "type": "text", + "content": "Firstly, current reviews overly emphasize deep reinforcement learning and fail to adequately consider other potentially effective intelligent decision-making methods [24, 5, 7, 28]. Secondly, with the rapid development of large language models (LLMs), their potential in natural language processing, knowledge representation, and complex decision-making has become increasingly apparent. However, current surveys have largely overlooked their integration. Additionally, existing reviews often neglect the critical role of simulation environments in the development of multi-agent systems." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 305, + 111, + 531, + 290 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 111, + 531, + 290 + ], + "spans": [ + { + "bbox": [ + 305, + 111, + 531, + 290 + ], + "type": "text", + "content": "However, simulation environments are not merely auxiliary tools but are an integral part of the MAS development and evaluation process. The agents' learning and decision-making processes are influenced and constrained by these environments, making it equally important to understand and develop these environments as it is to focus on the algorithms themselves. Finally, the lack of attention to practical implementation details in current reviews has resulted in a disconnect between theory and practice. This survey will delve into the specifics of project implementation, including code structures, system architecture, and the challenges encountered during development, to enhance research reproducibility and facilitate the effective translation of theoretical research into practical applications." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 305, + 291, + 531, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 291, + 531, + 456 + ], + "spans": [ + { + "bbox": [ + 305, + 291, + 531, + 456 + ], + "type": "text", + "content": "Building on the motivations outlined earlier, this survey extends beyond the scope of previous reviews, which were often limited to specific areas of discussion. 
We treat multi-agent environments as equally important components, alongside the methods and techniques, and provide a thorough introduction to the most advanced algorithms and simulation environments. Moreover, we categorize various multi-agent cooperative decision-making methods from a more fundamental implementation perspective. In summary, this survey seeks to provide a more comprehensive and practical framework for research in multi-agent cooperative decision-making, thereby advancing the continuous development of this critical field." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 306, + 469, + 512, + 480 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 469, + 512, + 480 + ], + "spans": [ + { + "bbox": [ + 306, + 469, + 512, + 480 + ], + "type": "text", + "content": "1.4. The Survey Overview/Contents Organization" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 305, + 481, + 532, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 481, + 532, + 733 + ], + "spans": [ + { + "bbox": [ + 305, + 481, + 532, + 733 + ], + "type": "text", + "content": "As depicted in Figure 2, we have structured the survey to reflect our research approach, with each main and sub-branch corresponding to a specific part: First, in Section 1, we introduce the research background of multi-agent cooperative decision-making, discuss the drawbacks of previous surveys, and outline the organizational structure of this survey. Given that MARL and LLMs-based intelligent decision-making methods demonstrate significant advantages and future potential, our primary attentions are placed on Deep MARL-based and LLMs-based methods due to their superior ability to manage dynamic and uncertain environments. In Section 2, we then delve into mainstream intelligent decision-making approaches, algorithms, and models. We categorize these approaches, with a continued focus on MARL-based and LLMs-based methods, discussing their methodologies, advantages, and limitations. Following this, in Section 3, we provide an in-depth analysis of the leading simulation environments and platforms for multi-agent cooperative decision-making, again focusing on Deep MARL-based and LLMs-based" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "spans": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 63, + 110, + 533, + 368 + ], + "blocks": [ + { + "bbox": [ + 63, + 110, + 533, + 368 + ], + "lines": [ + { + "bbox": [ + 63, + 110, + 533, + 368 + ], + "spans": [ + { + "bbox": [ + 63, + 110, + 533, + 368 + ], + "type": "image", + "image_path": "6fd6b397e7b96d468eedec8a8c9751022174d5ea5d13ed48ce08682d7cec5141.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 61, + 384, + 531, + 405 + ], + "lines": [ + { + "bbox": [ + 61, + 384, + 531, + 405 + ], + "spans": [ + { + "bbox": [ + 61, + 384, + 531, + 405 + ], + "type": "text", + "content": "Figure 2: Illustration of our systematic review of multi-agent intelligent decision-making research. Compared to previous reviews, we have incorporated comprehensive introduction and analysis, with each segment corresponding to a specific chapter in the survey." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 423, + 286, + 497 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 423, + 286, + 497 + ], + "spans": [ + { + "bbox": [ + 61, + 423, + 286, + 497 + ], + "type": "text", + "content": "methods. Furthermore, in Section 4, we discuss the practical applications of multi-agent decision-making systems, such as autonomous driving, UAV navigation, and collaborative robotics. Finally, in Sections 5 and 6, we explore the potential challenges and future research directions of multi-agent cooperative decision-making." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 508, + 185, + 520 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 508, + 185, + 520 + ], + "spans": [ + { + "bbox": [ + 62, + 508, + 185, + 520 + ], + "type": "text", + "content": "1.5. How to read this survey?" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 523, + 286, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 523, + 286, + 594 + ], + "spans": [ + { + "bbox": [ + 61, + 523, + 286, + 594 + ], + "type": "text", + "content": "This survey caters to a diverse readership, each with varying levels of expertise and interest in different aspects of multi-agent decision-making systems. To help readers efficiently find the content that interests them, we offer the following guide, providing direction based on different topics:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 71, + 605, + 286, + 732 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 71, + 605, + 286, + 700 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 605, + 286, + 700 + ], + "spans": [ + { + "bbox": [ + 71, + 605, + 286, + 700 + ], + "type": "text", + "content": "- For those interested in rule (fuzzy logic)-based, game theory-based, and evolutionary algorithm-based decision-making research, please refer to Section 2.2.1, 2.2.2, and 2.2.3. This section provides a comprehensive analysis of the rule and game-based methods in multi-agent systems, detailing their corresponding technological taxonomies, features, and limitations." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 71, + 708, + 286, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 708, + 286, + 732 + ], + "spans": [ + { + "bbox": [ + 71, + 708, + 286, + 732 + ], + "type": "text", + "content": "- For those interested in MARL-based decision-making research, please refer to Section 2.3. This" + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 325, + 423, + 531, + 471 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 423, + 531, + 471 + ], + "spans": [ + { + "bbox": [ + 325, + 423, + 531, + 471 + ], + "type": "text", + "content": "section provides a comprehensive analysis of the deep MARL-based methods in multi-agent systems, detailing their corresponding technological taxonomies, advantages, and limitations." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 317, + 479, + 531, + 731 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 317, + 479, + 531, + 563 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 479, + 531, + 563 + ], + "spans": [ + { + "bbox": [ + 317, + 479, + 531, + 563 + ], + "type": "text", + "content": "- If you are focused on decision-making based on LLMs, Section 2.4 will offer you an in-depth exploration, with the corresponding technological taxonomies, advantages, and limitations. This part discusses the unique capabilities of LLMs in multiagent environments and their potential applications, especially in reasoning and decision-making." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 317, + 570, + 531, + 641 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 570, + 531, + 641 + ], + "spans": [ + { + "bbox": [ + 317, + 570, + 531, + 641 + ], + "type": "text", + "content": "- For readers focused on the well-known simulation environments of MAS, we suggest reading Section 3, which primarily covers an introduction to MARL-based Simulation Environments (Section 3.1) and LLMs Reasoning-based Simulation Environments (Section 3.2)." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 317, + 649, + 531, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 649, + 531, + 731 + ], + "spans": [ + { + "bbox": [ + 317, + 649, + 531, + 731 + ], + "type": "text", + "content": "- If your interest lies in the practical applications of multi-agent decision-making systems, Section 4 will be of particular relevance. This section offers a detailed discussion of how these systems are applied across various fields, such as autonomous driving, UAV navigation, and collaborative robotics." + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "spans": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 71, + 111, + 286, + 183 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 111, + 286, + 183 + ], + "spans": [ + { + "bbox": [ + 71, + 111, + 286, + 183 + ], + "type": "text", + "content": "- If you are interested in the challenges and problems faced by existing multi-agent decision-making methods, Section 5 provides an in-depth discussion, exploring the limitations of current approaches and unresolved issues in the field, offering insights into these challenges." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 72, + 191, + 286, + 274 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 191, + 286, + 274 + ], + "spans": [ + { + "bbox": [ + 72, + 191, + 286, + 274 + ], + "type": "text", + "content": "- Finally, if you wish to learn about future research directions and the prospects for multi-agent decision-making technique, we recommend reading Section 6. This section looks ahead to future research trends and potential breakthroughs, exploring key directions that could drive the field forward." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 293, + 258, + 306 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 293, + 258, + 306 + ], + "spans": [ + { + "bbox": [ + 61, + 293, + 258, + 306 + ], + "type": "text", + "content": "2. Multi-Agent Decision-Making Taxonomies" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 61, + 314, + 286, + 576 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 314, + 286, + 576 + ], + "spans": [ + { + "bbox": [ + 61, + 314, + 286, + 576 + ], + "type": "text", + "content": "This section discusses the taxonomies of decision-making in multi-agent systems and their related techniques. The multi-agent cooperative decision-making methods can be broadly classified into five categories: rule-based (primarily fuzzy logic), game theory-based, evolutionary algorithms-based methods, MARL-based approaches, and LLMs-based methods [31]. Although these rule-based, game theory-based, and evolutionary algorithms-based solutions demonstrate a degree of effectiveness, they typically rely heavily on pre-designed strategies and assumptions. This dependence limits their adaptability to changing and complex environments and ill-suited for handling highly dynamic and uncertain scenarios. In contrast, DRL-based and LLMs reasoning-based solutions offer more dynamic and flexible approaches, capable of learning and adapting to new strategies on the fly. Therefore, these methods have significant advantages in dealing with dynamic and uncertain environments. Thus, special research attentions are placed on DRL-based and LLMs-based methods due to their significant advantages in handling dynamic and uncertain environments." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 577, + 286, + 672 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 577, + 286, + 672 + ], + "spans": [ + { + "bbox": [ + 61, + 577, + 286, + 672 + ], + "type": "text", + "content": "The analysis is conducted from multiple perspectives, including agent interaction dynamics, mainstream paradigms of cooperative decision-making, MARL (multi-agent reinforcement learning), and LLM (large language model)-driven multi-agent systems, aiming to provide a systematic framework and technical foundation for the design and optimization of multi-agent decision-making." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 672, + 286, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 672, + 286, + 732 + ], + "spans": [ + { + "bbox": [ + 61, + 672, + 286, + 732 + ], + "type": "text", + "content": "Specifically, Section 2.1 analyzes agent interaction dynamics in MAS, categorizing them into four typical types: fully cooperative, fully competitive, mixed cooperative-competitive, and self-interested, while discussing their impact on overall system behavior. Subse" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 305, + 111, + 531, + 159 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 111, + 531, + 159 + ], + "spans": [ + { + "bbox": [ + 305, + 111, + 531, + 159 + ], + "type": "text", + "content": "sequently, Sections 2.2, 2.3, and 2.4 introduce mainstream paradigms of cooperative decision-making, MARL-based decision-making methods, and LLMs-based multi-agent systems, respectively." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 306, + 170, + 531, + 194 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 170, + 531, + 194 + ], + "spans": [ + { + "bbox": [ + 306, + 170, + 531, + 194 + ], + "type": "text", + "content": "2.1. Agent Interaction Dynamics for Multi-Agent Systems" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 197, + 531, + 293 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 197, + 531, + 293 + ], + "spans": [ + { + "bbox": [ + 305, + 197, + 531, + 293 + ], + "type": "text", + "content": "In multi-agent systems, the nature of interaction among agents can be categorized into distinct relational dynamics, i.e., agent interaction dynamics, each influencing the overall system behavior and outcomes. These dynamics are critical to understanding and designing intelligent systems where agents operate within shared environments. Below, we detail the primary types of agent relationships:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 314, + 299, + 531, + 407 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 299, + 531, + 407 + ], + "spans": [ + { + "bbox": [ + 314, + 299, + 531, + 407 + ], + "type": "text", + "content": "1. Fully Cooperative: In this scenario setting, all agents have aligned objectives, meaning they share identical reward structures and work towards a common goal. The agents operate with complete cooperation, aiming to maximize collective benefits. This relationship is typical in systems where synergy is essential, and the success of one agent directly contributes to the success of others, ensuring mutual reinforcement of strategies and actions." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 314, + 408, + 531, + 526 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 408, + 531, + 526 + ], + "spans": [ + { + "bbox": [ + 314, + 408, + 531, + 526 + ], + "type": "text", + "content": "2. Fully Competitive: This relationship is characterized by a zero-sum game dynamic, where the gain of one agent directly translates into the loss of another. Agents are in direct opposition, with their objectives fundamentally misaligned. This is commonly observed in competitive environments such as robotic competitions, where agents are designed to outperform each other, and success is measured relative to the failure or underperformance of others." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 314, + 528, + 531, + 671 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 528, + 531, + 671 + ], + "spans": [ + { + "bbox": [ + 314, + 528, + 531, + 671 + ], + "type": "text", + "content": "3. Mixed Cooperative and Competitive: In most real-world scenarios, agents may engage in both cooperation and competition simultaneously. This type of relationship is evident in team-based environments like robotic soccer, where agents within the same team cooperate to achieve a shared objective (e.g., scoring goals), but simultaneously compete against agents from opposing teams. The complexity of such systems lies in balancing internal cooperation with external competition, often requiring sophisticated strategies to optimize outcomes at both individual and collective levels." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 314, + 672, + 531, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 672, + 531, + 732 + ], + "spans": [ + { + "bbox": [ + 314, + 672, + 531, + 732 + ], + "type": "text", + "content": "4. Self-Interested: In self-interested dynamics, each agent acts primarily to maximize its own utility, with little regard for the impact on others. An agent's actions might incidentally benefit or harm other agents, but these effects are not a concern for" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "spans": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "bbox": [ + 81, + 111, + 286, + 195 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 111, + 286, + 195 + ], + "spans": [ + { + "bbox": [ + 81, + 111, + 286, + 195 + ], + "type": "text", + "content": "the self-interested agent. This relationship is pertinent in scenarios where agents are designed to prioritize personal gain over collective welfare, often leading to outcomes where the overall system efficiency is not necessarily optimized, as individual agents do not account for the potential externalities of their actions on the environment or other agents." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 203, + 286, + 297 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 203, + 286, + 297 + ], + "spans": [ + { + "bbox": [ + 61, + 203, + 286, + 297 + ], + "type": "text", + "content": "Overall, these agent interaction dynamics is crucial for the design and analysis of multi-agent systems, as they directly impact the strategies employed by agents and the overall system performance. The complexity of agent interactions in mixed or self-interested relationships often requires advanced coordination mechanisms and incentive structures to manage potential conflicts and ensure desired outcomes." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 299, + 286, + 393 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 299, + 286, + 393 + ], + "spans": [ + { + "bbox": [ + 61, + 299, + 286, + 393 + ], + "type": "text", + "content": "While the overarching concept of agent interaction dynamics holds some value, this survey focuses more on analyzing the characteristics of these methods from a technical and scientific perspective. Therefore, distinguishing relationships is not the primary emphasis of this study. Instead, we will proceed with a more comprehensive taxonomy of Multi-Agent Decision-making Systems." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 61, + 406, + 286, + 430 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 406, + 286, + 430 + ], + "spans": [ + { + "bbox": [ + 61, + 406, + 286, + 430 + ], + "type": "text", + "content": "2.2. 
Mainstream Paradigms of Multi-Agent Cooperative Decision-Making" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 433, + 287, + 624 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 433, + 287, + 624 + ], + "spans": [ + { + "bbox": [ + 61, + 433, + 287, + 624 + ], + "type": "text", + "content": "In multi-agent cooperative decision-making, several mainstream paradigms exist, each leveraging different techniques to tackle challenges associated with coordination, learning, adaptability, and optimization among autonomous agents. These paradigms utilize diverse approaches, including rule-based (primarily fuzzy logic) systems [32, 33, 34, 35, 36, 37], game theory-based [38, 39, 40, 41, 42, 43], evolutionary algorithms-based [44, 45, 46, 47, 48, 49], MARL-based [50, 51, 52, 53, 54, 55, 56], and LLMs-based [57, 30, 58, 59] multiagent decision-making systems. Each of these methods has distinct strengths and applications, depending on the problem context and the complexity of interactions between agents. For a comprehensive overview, please refer to Table 1, which provides a detailed classification of these paradigms." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 636, + 241, + 647 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 636, + 241, + 647 + ], + "spans": [ + { + "bbox": [ + 61, + 636, + 241, + 647 + ], + "type": "text", + "content": "2.2.1. Rule-Based (Primarily Fuzzy Logic)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 61, + 649, + 286, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 649, + 286, + 731 + ], + "spans": [ + { + "bbox": [ + 61, + 649, + 286, + 731 + ], + "type": "text", + "content": "Rule-based decision-making, particularly fuzzy logic, has been widely adopted in multi-agent systems (MAS) due to its ability to handle uncertainty, imprecise data, and dynamic environments [117, 118, 36, 119]. Fuzzy logic enables agents to make adaptive, human-like decisions by mapping inputs to linguistic rules rather than strict mathematical models." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 306, + 111, + 531, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 111, + 531, + 733 + ], + "spans": [ + { + "bbox": [ + 306, + 111, + 531, + 733 + ], + "type": "text", + "content": "Miki et al. [32] presented a rule-based multi-agent control algorithm that utilizes local information instead of absolute coordinates, making it more practical for real-world applications. Charaf et al. [120] introduced a rule-based multi-agent system to address coordination challenges, such as controllability and observability, in distributed testing environments. Yarahmadi et al. [33] reviewed the applications of multi-agent systems in Cyber-Physical Systems (CPS) and the Internet of Things (IoT), proposing a combination of learning and rule-based reasoning to improve decision-making in MAS. Marti et al. [117] presented an expert rule-based system using multi-agent technology to support traffic management during weather-related issues. Daeichian et al. [121] used fuzzy logic in combination with Q-learning and game theory to control traffic lights autonomously. Wu et al. [34] introduced a fuzzy-theoretic game framework that integrates fuzzy logic with game theory to handle uncertainty in utility values during multi-agent decision making. Nekhai et al. 
[118] devised a cybersecurity management model for agricultural enterprises using a multi-agent system (MAS) based on fuzzy logical reasoning. Ramezani et al. [119] applied fuzzy logic to multi-agent decision-making in soccer robot teams, combining cooperative and non-cooperative game strategies. Zhang et al. [35] introduced a new online method for optimal coordination control in multi-agent differential games, combining fuzzy logic, and adaptive dynamic programming. Ren et al. [36] presented a fuzzy logic-based approach for partner selection in multi-agent systems, emphasizing flexibility and adaptability in dynamic environments. Gu et al. [37] introduced a cooperative reinforcement learning algorithm for multi-agent systems using a leader-follower framework, modeled as a Stackelberg game. Schwartz et al. [60] introduced a multi-agent fuzzy actor-critic learning algorithm for differential games. Harmati et al. [61] proposed a game-theoretic model for coordinating multiple robots in target tracking, using a semi-cooperative Stackelberg equilibrium and a fuzzy inference system for high-level cost tuning. Khuen et al. [62] introduced an Adaptive Fuzzy Logic (AFL) approach for multi-agent systems with negotiation capabilities, focusing on resource allocation. Yan et al. [63] proposed a graphical game-based adaptive fuzzy optimal bipartite containment control scheme for high-order nonlinear multi-agent systems (MASs). Vicerra et al. [64] proposed a multi-agent robot system using a pure fuzzy logic-based artificial intelligence model. Gu et al. [65] presented a fuzzy logic-based policy gradient multi-agent reinforcement learning algorithm for leader-follower systems, where fuzzy logic" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "spans": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 79, + 128, + 514, + 386 + ], + "blocks": [ + { + "bbox": [ + 124, + 118, + 467, + 127 + ], + "lines": [ + { + "bbox": [ + 124, + 118, + 467, + 127 + ], + "spans": [ + { + "bbox": [ + 124, + 118, + 467, + 127 + ], + "type": "text", + "content": "Table 1: Representative Methods in Mainstream Paradigms of Multi-Agent Cooperative Decision-Making." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 79, + 128, + 514, + 386 + ], + "lines": [ + { + "bbox": [ + 79, + 128, + 514, + 386 + ], + "spans": [ + { + "bbox": [ + 79, + 128, + 514, + 386 + ], + "type": "table", + "html": "
ParadigmRepresentative Methods and Key References
Rule-Based (Primarily Fuzzy Logic)Miki et al. [32], Yarahmadi et al. [33], Wu et al. [34], Zhang et al. [35], Ren et al. [36], Gu et al. [37], Schwartz et al. [60], Harmati et al. [61], Khuen et al. [62], Yan et al. [63], Vicerra et al. [64], Gu et al. [65], Maruyama et al. [66], Peng et al. [67], Yang et al. [68]
Game Theory-basedWang et al. [38], Guo et al. [39], Schwung et al. [40], Wang et al. [41], Lin et al. [42], Wang et al. [43], Wang et al. [69], Lanctot et al. [70], Guo et al. [71], Zhang et al. [72], Kong et al. [73], Wang et al. [74], Dong et al. [75], Nguyen et al. [76], Schwung et al. [40], Khan et al. [77]
Evolutionary Algorithms-basedLiu et al. [45], Xu et al. [78], Daan et al. [46], Franciszek et al. [79], Larry et al. [44], Daan et al. [47], Liu et al. [80], Yuan et al. [48], Dong et al. [81], Chen et al. [82], Zhang et al. [49]
MARL-basedWai et al. [83], Hu et al. [84], Son et al. [53], Yu et al. [52], Rashid et al. [50], Rashid et al. [54], Sunehag et al. [85], Huang et al. [15], Xu et al. [58], Yun et al. [86], Mao et al. [87], Kraemer et al. [51], Kouzeghar et al. [88], Gao et al. [89], Liu et al. [19], Qi et al. [90], Vinyals et al. [91], Lu et al. [18], Chu et al. [92], et al. [20], Kurach et al. [55], Lv et al. [93], Radac et al. [94], Wang et al. [56], Liu et al. [95]
LLMs-basedMordatch et al. [59], Zhang et al. [96], Xu et al. [58], Li et al. [57], Wang et al. [30], Zhao et al. [10], Hou et al. [97], Puig et al. [98, 99], Gao et al. [100], Xiao et al. [101], Wang et al. [102], Wu et al. [103], Wen et al. [104], Chen et al. [105], Liu et al. [106], Chen et al. [107], Hong et al. [108, 109], XAgent Team [110], Wang et al. [111, 112], Zheng et al. [113], Zhang et al. [114, 115], Cao et al. [116]
", + "image_path": "ca05f64b084c96aff84be8b6b59c5b56fc41a776606092b74e4f630b51e1bd03.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 60, + 406, + 286, + 535 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 406, + 286, + 535 + ], + "spans": [ + { + "bbox": [ + 60, + 406, + 286, + 535 + ], + "type": "text", + "content": "controllers act as policies. Maruyama et al. [66] extended the classical framework for reasoning about distributed knowledge, incorporating fuzzy logic to handle uncertainty and degrees of certainty within multi-agent systems. Peng et al. [67] proposed a two-layer coordination model for multi-agent systems using fuzzy reinforcement learning. Yang et al. [68] presented a multiagent reinforcement learning algorithm with fuzzy policy to address control challenges in cooperative multi-agent systems, particularly for autonomous robotic formations." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 61, + 538, + 286, + 620 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 538, + 286, + 620 + ], + "spans": [ + { + "bbox": [ + 61, + 538, + 286, + 620 + ], + "type": "text", + "content": "Overall, fuzzy logic remains a foundational approach for rule-based decision-making in MAS, offering interpretability and robustness in uncertain environments. In the future, fuzzy logic will be further integrated with LLMs, hierarchical decision architectures, and multiagent planning, enabling more precise and adaptive decision-making in complex real-world scenarios." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 635, + 172, + 647 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 635, + 172, + 647 + ], + "spans": [ + { + "bbox": [ + 61, + 635, + 172, + 647 + ], + "type": "text", + "content": "2.2.2. Game theory-based" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 60, + 649, + 286, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 649, + 286, + 732 + ], + "spans": [ + { + "bbox": [ + 60, + 649, + 286, + 732 + ], + "type": "text", + "content": "Game theory provides a structured framework for analyzing strategic interactions in multi-agent systems. It enables agents to make rational decisions in cooperative, competitive, or mixed scenarios through equilibrium-based optimization [5, 29]. Traditional methods such as Nash equilibrium and Stackelberg games form the foundation, while modern approaches" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 305, + 406, + 530, + 429 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 406, + 530, + 429 + ], + "spans": [ + { + "bbox": [ + 305, + 406, + 530, + 429 + ], + "type": "text", + "content": "integrate reinforcement learning and Bayesian inference to enhance adaptability in dynamic environments." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 431, + 531, + 527 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 431, + 531, + 527 + ], + "spans": [ + { + "bbox": [ + 305, + 431, + 531, + 527 + ], + "type": "text", + "content": "Wang et al. [38] provided a broad discussion on game-theoretic approaches in multi-agent systems, covering cooperative and non-cooperative scenarios. Guo et al. [39] applied game theory to multi-agent path planning, leveraging Nash equilibrium to optimize navigation and obstacle avoidance. Zhang et al. 
[72] developed a distributed control algorithm that ensures optimal coverage while maintaining network connectivity." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 528, + 531, + 671 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 528, + 531, + 671 + ], + "spans": [ + { + "bbox": [ + 305, + 528, + 531, + 671 + ], + "type": "text", + "content": "Beyond fundamental decision-making, game theory has been applied in communication networks and energy systems. Wang et al. [41] utilized game-theoretic learning to enhance resource allocation in wireless networks while countering adversarial actions like jamming. Lin et al. [42] introduced potential game theory to optimize distributed energy management in microgrids, where agents autonomously coordinate power distribution. Dong et al. [75] further extended this approach using a hierarchical Stackelberg model for energy trading, balancing incentives between microgrids and individual agents." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 305, + 672, + 531, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 672, + 531, + 732 + ], + "spans": [ + { + "bbox": [ + 305, + 672, + 531, + 732 + ], + "type": "text", + "content": "Incorporating machine learning with game theory has also led to advances in multi-agent optimization. Schwung et al. [40] combined potential game theory with reinforcement learning for adaptive production scheduling, while Wang et al. [74] designed a" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "spans": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 61, + 111, + 286, + 183 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 111, + 286, + 183 + ], + "spans": [ + { + "bbox": [ + 61, + 111, + 286, + 183 + ], + "type": "text", + "content": "Nash equilibrium-based fault-tolerant control strategy for multi-agent systems. Additionally, game-theoretic methods have been explored for distributed computing, as shown by Khan et al. [77], who developed a replica placement strategy to minimize data access delays in distributed systems." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 184, + 286, + 267 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 184, + 286, + 267 + ], + "spans": [ + { + "bbox": [ + 61, + 184, + 286, + 267 + ], + "type": "text", + "content": "Overall, game theory remains a cornerstone of multiagent decision-making, offering well-defined theoretical guarantees while enabling dynamic adaptation through hybrid approaches. Future research will likely focus on integrating game theory with deep learning and large language models to enhance strategic reasoning in high-dimensional, uncertain environments." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 285, + 221, + 298 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 285, + 221, + 298 + ], + "spans": [ + { + "bbox": [ + 61, + 285, + 221, + 298 + ], + "type": "text", + "content": "2.2.3. 
Evolutionary Algorithms-based" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 61, + 300, + 286, + 394 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 300, + 286, + 394 + ], + "spans": [ + { + "bbox": [ + 61, + 300, + 286, + 394 + ], + "type": "text", + "content": "Evolutionary algorithms (EAs) provide a bio-inspired approach to optimization in multi-agent systems by leveraging principles such as natural selection, mutation, and recombination [79, 47, 81]. By allowing agents to evolve their strategies iteratively, EAs are particularly effective for problems requiring continuous learning, large-scale coordination, and self-organized behavior." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 396, + 286, + 514 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 396, + 286, + 514 + ], + "spans": [ + { + "bbox": [ + 61, + 396, + 286, + 514 + ], + "type": "text", + "content": "Liu et al. [45] introduced the Multi-Agent Genetic Algorithm (MAGA), where agents interact through competition and cooperation to optimize global solutions. Xu et al. [78] extended this idea to hardware-based multi-agent systems, using nanoclusters as physical agents to achieve large-scale parallel computation. Daan et al. [46] explored the role of evolutionary strategies in dynamic environments such as financial markets, smart grids, and robotics, demonstrating how adaptive algorithms can handle real-world uncertainties." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 516, + 286, + 647 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 516, + 286, + 647 + ], + "spans": [ + { + "bbox": [ + 61, + 516, + 286, + 647 + ], + "type": "text", + "content": "Franciszek et al. [79] proposed a self-optimization model integrating cellular automata and game theory, simulating competitive evolutionary interactions among agents. Larry et al. [44] analyzed the trade-offs between mutation and recombination, showing that mutation can sometimes outperform traditional recombination strategies in evolutionary computing. To further enhance adaptability, Daan et al. [47] introduced Deep Neuroevolution (DNE), applying coevolutionary techniques to complex multi-agent scenarios, including Atari games." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 61, + 649, + 286, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 649, + 286, + 732 + ], + "spans": [ + { + "bbox": [ + 61, + 649, + 286, + 732 + ], + "type": "text", + "content": "Recent studies have focused on scaling evolutionary learning to larger agent populations. Liu et al. [80] developed Evolutionary Reinforcement Learning (ERL), a scalable approach that partitions learning into multiple stages, ensuring better adaptability in multiagent environments. Yuan et al. [48] introduced EvoAgent, a framework that extends LLMs-based au" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 111, + 530, + 134 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 111, + 530, + 134 + ], + "spans": [ + { + "bbox": [ + 305, + 111, + 530, + 134 + ], + "type": "text", + "content": "tonomous agents into multi-agent systems using evolutionary techniques like mutation and selection." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 136, + 531, + 278 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 136, + 531, + 278 + ], + "spans": [ + { + "bbox": [ + 305, + 136, + 531, + 278 + ], + "type": "text", + "content": "Evolutionary game theory has also been explored to improve cooperative behavior. Dong et al. [81] designed a three-strategy decision model, where agents adopt conservative or adaptive strategies based on their interactions with neighbors, fostering long-term cooperation. Chen et al. [82] proposed a kinetic decision-making model grounded in rarefied gas dynamics, offering a new perspective on agent evolution using the Boltzmann equation. Zhang et al. [49] applied evolutionary game theory to policy optimization, analyzing cooperation strategies among governments, enterprises, and farmers in agricultural water conservation projects." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 305, + 279, + 530, + 362 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 279, + 530, + 362 + ], + "spans": [ + { + "bbox": [ + 305, + 279, + 530, + 362 + ], + "type": "text", + "content": "Overall, evolutionary algorithms provide a robust framework for decentralized decision-making, allowing agents to self-improve and adapt in uncertain environments. In the future, evolutionary algorithms will be further integrated with deep learning, hierarchical evolution, and large-scale multi-agent coordination, enabling more adaptive, autonomous MAS." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 306, + 373, + 480, + 385 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 373, + 480, + 385 + ], + "spans": [ + { + "bbox": [ + 306, + 373, + 480, + 385 + ], + "type": "text", + "content": "2.2.4. MARL-based Multi-Agent Systems" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 305, + 386, + 530, + 470 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 386, + 530, + 470 + ], + "spans": [ + { + "bbox": [ + 305, + 386, + 530, + 470 + ], + "type": "text", + "content": "Before introducing the MARL-based multi-agent systems (MAS), we provide a detailed discussion in Appendix A on the key technological comparisons and methodological principles of both DRL-based single-agent systems and MARL-based MAS. This helps readers build the necessary background knowledge for better understanding the following discussions." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 305, + 470, + 531, + 624 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 470, + 531, + 624 + ], + "spans": [ + { + "bbox": [ + 305, + 470, + 531, + 624 + ], + "type": "text", + "content": "Multi-Agent Reinforcement Learning offers a structured framework to tackle decision-making in MAS, where autonomous agents interact with each other and a shared environment. The MAS research in MARL is broadly divided into three paradigms: Centralized Training with Centralized Execution (CTCE) [122, 123], Decentralized Training with Decentralized Execution (DTDE) [122], and Centralized Training with Decentralized Execution (CTDE) [51, 124]. Each paradigm is designed to address specific challenges such as coordination, scalability, and policy optimization, providing tailored solutions for diverse scenarios." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 305, + 625, + 531, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 625, + 531, + 732 + ], + "spans": [ + { + "bbox": [ + 305, + 625, + 531, + 732 + ], + "type": "text", + "content": "Centralized Training with Centralized Execution The CTCE paradigm [125, 123] relies on a central controller that governs all agents by aggregating their observations, actions, and rewards to make joint decisions. While this paradigm enables high levels of coordination, its scalability is limited in large-scale systems. Multi-Agent DQN (MADQN) [126, 1, 2] is a representative method, employing parameter-sharing mechanisms to handle cooperative tasks effectively. However," + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "spans": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 61, + 111, + 286, + 136 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 111, + 286, + 136 + ], + "spans": [ + { + "bbox": [ + 61, + 111, + 286, + 136 + ], + "type": "text", + "content": "its reliance on centralized control restricts its applicability in dynamic environments with numerous agents." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 141, + 287, + 309 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 141, + 287, + 309 + ], + "spans": [ + { + "bbox": [ + 61, + 141, + 287, + 309 + ], + "type": "text", + "content": "Decentralized Training with Decentralized Execution The DTDE paradigm [122] emphasizes independent learning and execution, where each agent interacts with the environment individually and updates its policy based solely on local observations and rewards. This paradigm excels in scalability and robustness, especially in scenarios with limited communication. Notable methods include Independent Q-Learning (IQL) [127, 128] and Decentralized REINFORCE [129], which allow agents to learn autonomously. Despite its advantages, DTDE faces challenges such as learning nonstationarity, where the environment changes as other agents adapt, and difficulty in addressing the credit assignment problem in cooperative settings." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 314, + 287, + 529 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 314, + 287, + 529 + ], + "spans": [ + { + "bbox": [ + 61, + 314, + 287, + 529 + ], + "type": "text", + "content": "Centralized Training with Decentralized Execution The CTDE paradigm [51, 130, 131] combines the strengths of centralized training and decentralized execution, making it the most prominent paradigm in MARL research. During training, a central controller aggregates information from all agents to optimize their policies, but during execution, each agent operates independently based on its own observations. CTDE addresses key challenges like non-stationarity and scalability, with methods such as Value Decomposition Networks (VDN) [85] and QMIX [50, 52] for value-based learning, Multi-Agent Deep Deterministic Policy Gradient (MADDPG) [132] for actor-critic frameworks, and Multi-Agent Proximal Policy Optimization (MAPPO) [133] for policy gradient optimization. 
These approaches are widely applied in complex environments like StarCraft II [134, 135] and the Multi-Agent Particle Environment (MPE) [136, 137]." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 61, + 535, + 287, + 643 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 535, + 287, + 643 + ], + "spans": [ + { + "bbox": [ + 61, + 535, + 287, + 643 + ], + "type": "text", + "content": "Communication-based MARL Algorithms Additionally, communication-based MARL algorithms have emerged to enhance coordination by enabling agents to share critical information during training and execution. Examples include Attentional Communication (ATOC) [138] and Targeted Multi-Agent Communication (TarMAC) [139], which use advanced mechanisms to improve the efficiency and effectiveness of inter-agent communication in cooperative tasks." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 649, + 287, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 649, + 287, + 732 + ], + "spans": [ + { + "bbox": [ + 61, + 649, + 287, + 732 + ], + "type": "text", + "content": "By structuring MARL methods within these paradigms, researchers provide a clear framework for addressing the diverse challenges of multi-agent decision-making. From autonomous driving fleets to resource allocation systems, MARL continues to push the boundaries of what distributed intelligent systems can achieve [124]." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 306, + 111, + 477, + 123 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 111, + 477, + 123 + ], + "spans": [ + { + "bbox": [ + 306, + 111, + 477, + 123 + ], + "type": "text", + "content": "2.2.5. LLMs-based Multi-Agent Systems" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 305, + 126, + 531, + 354 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 126, + 531, + 354 + ], + "spans": [ + { + "bbox": [ + 305, + 126, + 531, + 354 + ], + "type": "text", + "content": "Although LLMs like GPT [140, 141, 142], Llama [143, 144], and Gemini [145] support very long input contexts, their ability to understand complex inputs still varies. In this context, multi-agent collaboration optimizes task execution through role assignment, enabling better performance through collaboration among multiple agents compared to a single agent. Each agent has an independent workflow, memory, and can seek help from other agents when necessary. LLMs-based Multi-Agent Systems represent a relatively new multi-agent decision-making model that leverages the powerful capabilities of language models, to enhance communication and collaboration between autonomous agents. In an LLMs-based multi-agent system, agents communicate via natural language or symbolic representations, breaking down complex tasks into smaller, more manageable subtasks. One important feature of LLMs-based systems is the hierarchical organization of agents, typically consisting of two levels [30, 116]:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 355, + 531, + 391 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 355, + 531, + 391 + ], + "spans": [ + { + "bbox": [ + 305, + 355, + 531, + 391 + ], + "type": "text", + "content": "1) Global planning agents, responsible for high-level decisions such as task decomposition, resource allocation, and overall strategy management." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 393, + 531, + 501 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 393, + 531, + 501 + ], + "spans": [ + { + "bbox": [ + 305, + 393, + 531, + 501 + ], + "type": "text", + "content": "2) Local execution agents, which are responsible for executing specific subtasks and providing feedback to the global planning agent. These agents are generally more focused on local tasks but communicate progress and challenges with the global level for adjustments. This decomposition makes distributed problem solving possible, with agents sharing information, strategies, and goals through language, thus advancing task execution together." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 305, + 503, + 531, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 503, + 531, + 659 + ], + "spans": [ + { + "bbox": [ + 305, + 503, + 531, + 659 + ], + "type": "text", + "content": "For example, frameworks like AutoGen [103, 146, 107], Crew AI [114, 115], and LangGraph [111, 112] provide rich tools for building multi-agent solutions, supporting efficient cooperation and interaction between agents. Through these frameworks, developers can build virtual teams that leverage the strengths of different agents in distributed tasks. Additionally, LLMs-based multi-agent systems possess adaptive re-planning capabilities, enabling them to adjust in dynamic environments. When agents encounter changes or new information, they can quickly update strategies or re-plan tasks using language models, ensuring the system remains aligned with changing goals." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 305, + 661, + 531, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 661, + 531, + 732 + ], + "spans": [ + { + "bbox": [ + 305, + 661, + 531, + 732 + ], + "type": "text", + "content": "Firstly, LLMs-based multi-agent environments have emerged as pivotal platforms for advancing research in multi-agent collaboration, reasoning, and task execution. For instance, environments such as Three-DWorld Multi-Agent Transport (TDW-MAT) [147, 148], Communicative Watch-And-Help (C-WAH) [99]," + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "spans": [ + { + "bbox": [ + 293, + 740, + 300, + 748 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 60, + 111, + 286, + 290 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 111, + 286, + 290 + ], + "spans": [ + { + "bbox": [ + 60, + 111, + 286, + 290 + ], + "type": "text", + "content": "Cuisineworld [149], and AgentScope [100] offer diverse settings for evaluating and enhancing multi-agent systems in various contexts, from household chores to gaming interactions and beyond. For instance, MindAgent [149] is a novel infrastructure for evaluating planning and coordination capabilities in gaming interactions, leveraging large foundation models (LFMs) to coordinate multi-agent systems, collaborate with human players. Communicative Watch-And-Help (C-WAH) [98, 99] is a realistic multi-agent simulation environment and an extension of the Watch-And-Help Challenge platform, VirtualHome-Social. 
AgentScope [100] is a robust and flexible multi-agent platform designed to empower developers in building advanced multi-agent systems by leveraging the potential of LLMs." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 60, + 290, + 287, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 290, + 287, + 732 + ], + "spans": [ + { + "bbox": [ + 60, + 290, + 287, + 732 + ], + "type": "text", + "content": "Meanwhile, LLMs-based multi-agent systems have broad applications and great prospects [30, 116]. They can collaborate in robotic teams to perform complex tasks, such as product assembly or joint research, ensuring seamless interaction and cooperation [58, 59, 57]. In autonomous driving, LLMs help vehicles communicate, sharing real-time data and navigation strategies to achieve coordinated actions. Moreover, LLMs can support agents (such as drones) in disaster response, transmitting critical information to help systems efficiently respond to crises. Wu et al. [103] proposed AutoGen, an open-source framework for developing next-generation LLM applications through multi-agent conversations, allowing customizable agent interactions and integration of LLMs, human inputs, and tools. Xiao et al. [101] introduced Chain-of-Experts (CoE), a multi-agent framework that enhances reasoning in complex operations research (OR) problems using LLMs, with domain-specific roles and a conductor for guidance. Chen et al. [105] presented AgentVerse, a multi-agent framework inspired by human group dynamics, dynamically adjusting agent roles and composition to enhance complex task-solving across various domains. Chen et al. [107] developed AutoAgents, a framework that adaptively generates and coordinates multiple specialized agents for efficient task completion. Liu et al. [106] proposed Dynamic LLM-Agent Network (DyLAN), a framework that enhances LLM-agent collaboration through dynamic interactions based on task requirements. Zhang et al. [96] introduced CoELA, a Cooperative Embodied Language Agent framework that leverages LLMs to enhance multi-agent cooperation in complex, decentralized environments. Hong et al. [108, 109] proposed MetaGPT, a meta-programming framework that enhances LLMs-based multi-agent system collaboration using Standard Operating Procedures (SOPs). XAgent Team [110] developed XAgent, an" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 305, + 111, + 532, + 350 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 111, + 532, + 350 + ], + "spans": [ + { + "bbox": [ + 305, + 111, + 532, + 350 + ], + "type": "text", + "content": "open-source, LLM-driven autonomous agent framework for solving complex tasks using a dual-loop architecture for task planning and execution. Zheng et al. [113] introduced PlanAgent, a closed-loop motion planning framework for autonomous driving using multi-modal LLMs to generate hierarchical driving commands. Wang et al. [111, 112] developed LangGraph, a library for building stateful, multi-actor applications with LLMs, offering fine-grained control over workflows and state management. Zhang et al. [114, 115] introduced CrewAI, an open-source framework for coordinating AI agents in role-playing and autonomous operations, with a modular design for efficient collaboration. Hou et al. [97] proposed CoAct, a hierarchical multi-agent system leveraging LLMs for collaborative task execution. 
It features a global planning agent for task decomposition and strategy management, and a local execution agent for subtask implementation, feedback collection, and adaptive replanning, ensuring alignment with overarching goals." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 305, + 350, + 532, + 505 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 350, + 532, + 505 + ], + "spans": [ + { + "bbox": [ + 305, + 350, + 532, + 505 + ], + "type": "text", + "content": "Despite the strong capabilities of LLMs in small to medium-sized multi-agent systems, scalability remains an open issue, particularly in maintaining coherent communication between large numbers of agents in large environments. As the number of agents increases, the complexity of coordinating their behaviors through language also intensifies. Finding a balance between agent autonomy and effective collaboration is a significant challenge. Additionally, LLMs are often seen as black-box models, meaning understanding the reasoning process behind an agent's decision-making can be difficult. The lack of transparency poses challenges for trust and debugging." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 305, + 505, + 531, + 602 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 505, + 531, + 602 + ], + "spans": [ + { + "bbox": [ + 305, + 505, + 531, + 602 + ], + "type": "text", + "content": "In summary, LLMs-based multi-agent systems hold great potential in a variety of applications, offering an advanced way to model and solve complex decision-making problems that require high levels of coordination, adaptability, and communication between agents. By optimizing task decomposition, collaboration, and feedback mechanisms, LLMs bring unprecedented efficiency and flexibility to multi-agent systems." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 306, + 611, + 530, + 634 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 611, + 530, + 634 + ], + "spans": [ + { + "bbox": [ + 306, + 611, + 530, + 634 + ], + "type": "text", + "content": "2.3. MARL-based Multi-Agent Decision-Making Tax-onomies" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 305, + 637, + 532, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 637, + 532, + 732 + ], + "spans": [ + { + "bbox": [ + 305, + 637, + 532, + 732 + ], + "type": "text", + "content": "In multi-agent systems, where multiple autonomous agents interact with a shared environment and often with each other, the complexity of decision-making increases significantly. To achieve optimal performance, agents need to learn not only how to act individually but also how to coordinate with others. 
One of the central challenges in MARL-based multi-agent systems is determining how much information should be" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 292, + 739, + 303, + 749 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 292, + 739, + 303, + 749 + ], + "spans": [ + { + "bbox": [ + 292, + 739, + 303, + 749 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 61, + 111, + 286, + 208 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 111, + 286, + 208 + ], + "spans": [ + { + "bbox": [ + 61, + 111, + 286, + 208 + ], + "type": "text", + "content": "shared among agents during different phases of learning and deployment. The MARL research is typically structured into three primary paradigms: CTCE [150, 151, 152, 126, 123], DTDE [122, 153], and CTDE [124, 130, 51]. As illustrated in Figure 3, we will next explain the principles and differences of the three methods in conjunction with this conceptual framework diagram." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 226, + 286, + 249 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 226, + 286, + 249 + ], + "spans": [ + { + "bbox": [ + 61, + 226, + 286, + 249 + ], + "type": "text", + "content": "2.3.1. Centralized Training with Decentralized Execution (CTDE)" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 251, + 286, + 336 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 251, + 286, + 336 + ], + "spans": [ + { + "bbox": [ + 61, + 251, + 286, + 336 + ], + "type": "text", + "content": "As shown in the left of Figure 3, CTDE is a hybrid MARL approach that combines the strengths of both centralized and decentralized systems [124]. In CTDE, each agent possesses its own policy network, which is trained under the guidance of a central controller. This approach is characterized by a two-phase process: centralized training followed by decentralized execution." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 69, + 348, + 286, + 506 + ], + "type": "list", + "angle": 0, + "index": 5, + "blocks": [ + { + "bbox": [ + 69, + 348, + 286, + 430 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 348, + 286, + 430 + ], + "spans": [ + { + "bbox": [ + 69, + 348, + 286, + 430 + ], + "type": "text", + "content": "1. Centralized Training (Phase 1): During the training phase, the central controller collects data from all agents, including their observations, actions, and rewards. This centralized data aggregation allows the controller to oversee the learning process and facilitate the training of each agent's policy network." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 69, + 433, + 286, + 506 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 433, + 286, + 506 + ], + "spans": [ + { + "bbox": [ + 69, + 433, + 286, + 506 + ], + "type": "text", + "content": "2. Decentralized Execution (Phase 2): Once the training is complete, the central controller's involvement ceases, and each agent operates independently. At execution, agents make decisions based on their own observations using their trained policy networks." 
+ } + ] + } + ], + "index": 4 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 61, + 518, + 287, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 518, + 287, + 732 + ], + "spans": [ + { + "bbox": [ + 61, + 518, + 287, + 732 + ], + "type": "text", + "content": "In some communication-constrained scenarios, agents often cannot share or fully share their observations of the environment. Instead, they must make decisions independently based on their own local observations and policies, which limits the applicability of fully centralized methods. To overcome this challenge, Kraemer et al. [124, 51] proposed the CTDE learning paradigm. The CTDE agents have access to global environmental state information and the observations of other agents during the training phase, allowing them to learn a joint policy together. However, during the execution phase, each agent relies solely on its own observations and the trained policy to make independent decisions. It combines the advantages of fully decentralized and fully centralized methods, effectively mitigating issues such as learning non-stationarity and the curse of dimensionality, making it the dominant paradigm in current MARL solutions." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 111, + 531, + 183 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 111, + 531, + 183 + ], + "spans": [ + { + "bbox": [ + 305, + 111, + 531, + 183 + ], + "type": "text", + "content": "Under CTDE, MARL algorithms can primarily be categorized into three types based on their technical implementations: value function decomposition-based algorithms, actor-critic-based algorithms, and algorithms based on policy gradient methods, such as proximal policy optimization (PPO)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 183, + 531, + 410 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 183, + 531, + 410 + ], + "spans": [ + { + "bbox": [ + 305, + 183, + 531, + 410 + ], + "type": "text", + "content": "1. Value Decomposition-based Algorithms Value decomposition-based algorithms mainly address the challenge of estimating the joint state-action value function (Q-value) in multi-agent systems, which is difficult due to the high dimensionality of the joint action space. Instead of directly estimating this joint value function, these algorithms decompose it into more manageable individual state-action value functions (Q-value) for each agent. During execution, each agent selects its action based on its own value function. In training, the joint value function is computed from individual value functions, and the temporal difference error of the joint value guides the learning of the individual functions. A key principle these algorithms must satisfy is the Individual-Global-Max (IGM) principle, ensuring that the actions maximizing the joint value are consistent with those maximizing individual values. Different algorithms use various methods to approximate or satisfy the IGM principle." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 305, + 410, + 531, + 588 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 410, + 531, + 588 + ], + "spans": [ + { + "bbox": [ + 305, + 410, + 531, + 588 + ], + "type": "text", + "content": "Value Decomposition Networks (VDN) [85] is one of the earliest value decomposition-based algorithms in CTDE-based MARL models. 
VDN simplifies the estimation of the joint state-action value function by assuming that it can be represented as the sum of the individual state-action value functions of all agents. It means that the joint value function is obtained by simply adding up the individual value functions, which does not take into account the varying contributions of each agent's Q-value. However, the assumption made by VDN is a sufficient but not necessary condition for satisfying the IGM principle, which can limit its applicability. Additionally, VDN does not utilize global state information during training, further restricting its effectiveness in more complex environments." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 305, + 588, + 531, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 588, + 531, + 732 + ], + "spans": [ + { + "bbox": [ + 305, + 588, + 531, + 732 + ], + "type": "text", + "content": "To address this issue, Rashid et al. [50] proposed the QMIX algorithm within the CTDE paradigm. QMIX assumes a monotonic nonlinear relationship between the joint state-action value function and the individual state-action value functions of agents. To implement this, QMIX introduces a mixing network that computes the joint state-action value function based on the individual Q-values of all agents. This mixing network is designed with non-negative parameters to ensure that the monotonicity assumption is met. QMIX has been successfully applied in various scenarios and is considered one of the most successful value decomposition al" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 64, + 117, + 214, + 247 + ], + "blocks": [ + { + "bbox": [ + 64, + 117, + 214, + 247 + ], + "lines": [ + { + "bbox": [ + 64, + 117, + 214, + 247 + ], + "spans": [ + { + "bbox": [ + 64, + 117, + 214, + 247 + ], + "type": "image", + "image_path": "a7c702553a9d5fa72fd71c9d66c7618d6f75fff815db9d11a1cdaba21c0183ca.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 61, + 259, + 531, + 280 + ], + "lines": [ + { + "bbox": [ + 61, + 259, + 531, + 280 + ], + "spans": [ + { + "bbox": [ + 61, + 259, + 531, + 280 + ], + "type": "text", + "content": "Figure 3: The paradigms visualization of CTDE (left), DTDE (centre), and CTCE (right), consisting of three crucial elements: agent (i.e., algorithm or model), environment, central controller (Optional)." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 220, + 117, + 366, + 246 + ], + "blocks": [ + { + "bbox": [ + 220, + 117, + 366, + 246 + ], + "lines": [ + { + "bbox": [ + 220, + 117, + 366, + 246 + ], + "spans": [ + { + "bbox": [ + 220, + 117, + 366, + 246 + ], + "type": "image", + "image_path": "0bca6a2597b2b0575317349d7a3b6f6c565e49cb2666036c98971bd5711bbe0f.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 368, + 117, + 530, + 246 + ], + "blocks": [ + { + "bbox": [ + 368, + 117, + 530, + 246 + ], + "lines": [ + { + "bbox": [ + 368, + 117, + 530, + 246 + ], + "spans": [ + { + "bbox": [ + 368, + 117, + 530, + 246 + ], + "type": "image", + "image_path": "4524d0bfb9ac0487743344e13bb71efc4a8b1faea35f3507db44ab1a20d4252d.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 60, + 300, + 286, + 407 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 300, + 286, + 407 + ], + "spans": [ + { + "bbox": [ + 60, + 300, + 286, + 407 + ], + "type": "text", + "content": "gorithms to date. By enforcing a monotonic relationship between the joint action Q-values and individual Q-values, QMIX simplifies the policy decomposition process, facilitating decentralized decision-making. However, the monotonicity assumption, while sufficient for ensuring the Individual-Global-Max (IGM) principle, is not a necessary condition. This limitation restricts the algorithm's applicability in situations where an agent's optimal action depends on the actions of other agents." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 60, + 408, + 287, + 660 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 408, + 287, + 660 + ], + "spans": [ + { + "bbox": [ + 60, + 408, + 287, + 660 + ], + "type": "text", + "content": "Weighted QMIX [54] builds upon QMIX and addresses this limitation by introducing a novel weighting mechanism during the projection of Q-values, which is widely used for cooperative MARL scenarios. In QMIX, the projection of Q-learning targets into the representable space is done with equal weighting across all joint actions, which can lead to suboptimal policy representations, even if the true optimal Q-values " + }, + { + "bbox": [ + 60, + 408, + 287, + 660 + ], + "type": "inline_equation", + "content": "(\\mathbf{Q}^{*})" + }, + { + "bbox": [ + 60, + 408, + 287, + 660 + ], + "type": "text", + "content": " are known. To overcome this, Weighted QMIX introduces two weighting schemes-Centrally-Weighted (CW) QMIX and Optimistically-Weighted (OW) QMIX-that place greater emphasis on the better joint actions during this projection process. The weighting schemes ensure that the correct maximal action is recovered for any set of joint action Q-values, effectively improving the algorithm's ability to learn optimal policies. These schemes in Weighted QMIX enhances the representational capacity of QMIX, demonstrating improved results on both predator-prey scenarios of Multi-Agent Particle Environment (MPE) [137] and the challenging StarCraft II benchmarks [154, 91, 134]." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 61, + 661, + 287, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 661, + 287, + 732 + ], + "spans": [ + { + "bbox": [ + 61, + 661, + 287, + 732 + ], + "type": "text", + "content": "Since then, numerous methods building on value function decomposition have been developed. QPLEX [56] introduces a novel duplex dueling network architecture for multi-agent Q-learning, designed to nonlinearly decompose the joint state-action value function while embedding the IGM principle directly into the" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 300, + 531, + 479 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 300, + 531, + 479 + ], + "spans": [ + { + "bbox": [ + 305, + 300, + 531, + 479 + ], + "type": "text", + "content": "network structure. Fast-QMIX [52] enhances the original Q-MIX by dynamically assigning virtual weighted Q-values with an additional network, improving convergence speed, stability, and overall performance in cooperative multi-agent scenarios. QTRAN [53] introduces a more flexible factorization method that overcomes the structural limitations of Q-MIX, where the joint Q-value is constrained to be a monotonic function of the individual Q-values, thereby imposing a specific structural form on the factorization. Specifically, QTRAN introduces a necessary and sufficient condition for the IGM principle and incorporates two additional loss terms into the loss function to constrain the training of individual Q-value functions, ensuring they satisfy this IGM principle." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 481, + 532, + 671 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 481, + 532, + 671 + ], + "spans": [ + { + "bbox": [ + 305, + 481, + 532, + 671 + ], + "type": "text", + "content": "2. Actor-Critic-based Algorithms: Actor-Critic-based algorithms [132, 60, 155] represent a foundational class of methods within the CTDE paradigm, offering a flexible and effective approach for tackling the challenges of multi-agent environments. These algorithms combine the strengths of policy optimization (actor) with value estimation (critic), allowing agents to learn robust and adaptive strategies in both cooperative and competitive settings. By leveraging a centralized critic during training, Actor-Critic-based methods [136, 137, 92] address key issues such as environmental non-stationarity and credit assignment, enabling effective policy optimization in dynamic and complex multiagent scenarios. Below, we discuss several prominent Actor-Critic-based approaches and their contributions to advancing MARL." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 305, + 672, + 531, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 672, + 531, + 732 + ], + "spans": [ + { + "bbox": [ + 305, + 672, + 531, + 732 + ], + "type": "text", + "content": "MADDPG [132] is a typical Actor-Critic-based CTDE approach specifically designed to address the challenges of multi-agent environments, where agents engage in both cooperative and competitive interactions. 
Traditional reinforcement learning algorithms, such as" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 63, + 111, + 285, + 444 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 111, + 285, + 444 + ], + "spans": [ + { + "bbox": [ + 63, + 111, + 285, + 444 + ], + "type": "text", + "content": "Q-learning and policy gradient methods, struggle in multi-agent settings due to issues like non-stationarity where the environment constantly changes as other agents learn-and increased variance with the growing number of agents. MADDPG adapts the actor-critic framework by incorporating a centralized critic during training, which has access to the actions and observations of all agents. This centralized critic helps mitigate the non-stationarity problem by learning a more stable value function that considers the joint action space. During execution, however, each agent independently follows its policy (actor) based on local observations, enabling decentralized decision-making. It allows each agent to successfully learn and execute complex coordination strategies, outperforming existing methods in both cooperative and competitive multiagent environments. To address the computational challenges of continuous action spaces, Li et al. [156] extend the MADDPG algorithm to Multi-Agent Mutual Information Maximization Deep Deterministic Policy Gradient (M3DDPG) by incorporating a minimax approach to enhance robustness in multi-agent environments. M3DDPG introduce Multi-Agent Adversarial Learning (MAAL), which efficiently solves the minimax formulation, ensuring agents can generalize even when opponents' policies change and leading to significant improvements over existing baselines in mixed cooperative-competitive scenarios." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 63, + 446, + 285, + 624 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 446, + 285, + 624 + ], + "spans": [ + { + "bbox": [ + 63, + 446, + 285, + 624 + ], + "type": "text", + "content": "Counterfactual Multi-Agent Policy Gradient (COMA) [157] is a cooperative algorithm based on the Actor-Critic framework that uses centralized learning to address the credit assignment problem in multi-agent settings. COMA employs a centralized critic to compute advantage functions for each agent, using counterfactual baselines to reduce policy dependencies among agents and improve learning efficiency. Each agent has its own policy network, but the shared centralized critic evaluates joint Q-values by considering the collective state and action information of all agents. This approach minimizes the negative impacts of policy dependencies and allows for a more comprehensive assessment of each agent's behavior, enhancing overall policy optimization." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 63, + 625, + 285, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 625, + 285, + 731 + ], + "spans": [ + { + "bbox": [ + 63, + 625, + 285, + 731 + ], + "type": "text", + "content": "3. 
Proximal Policy Optimization-based Algorithms: Proximal Policy Optimization (PPO) [158] is a widely used CTDE reinforcement learning algorithm that has been adapted and extended to address challenges in MARL. Within the CTDE paradigm, PPO and its multi-agent variants have shown remarkable effectiveness in balancing policy optimization efficiency and stability. PPO was introduced by Schulman et al. [158] as an efficient policy gradient algorithm designed to im" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 308, + 111, + 530, + 217 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 111, + 530, + 217 + ], + "spans": [ + { + "bbox": [ + 308, + 111, + 530, + 217 + ], + "type": "text", + "content": "prove upon the trust region policy optimization (TRPO) framework [159]. PPO employs a clipped surrogate objective function that simplifies the trust region constraint in TRPO, allowing for stable updates without overly restrictive computational overhead. The key innovation of PPO lies in its ability to control the magnitude of policy updates through the clipping mechanism, which ensures that policies do not deviate excessively from their previous versions." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 308, + 219, + 530, + 409 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 219, + 530, + 409 + ], + "spans": [ + { + "bbox": [ + 308, + 219, + 530, + 409 + ], + "type": "text", + "content": "In MARL, Multi-Agent PPO (MAPPO) [133] extends PPO to the centralized critic paradigm. MAPPO uses a centralized value function (critic) that evaluates joint states and actions during training, while agents execute independently using their decentralized policies. MAPPO has demonstrated superior performance in various cooperative and competitive multi-agent environments, such as the StarCraft II [134, 135] and Multi-Agent Particle Environment (MPE) [59, 136, 137] benchmarks. The centralized critic allows for improved credit assignment and non-stationarity handling during training, while the decentralized execution ensures scalability. While MAPPO leverages parameter sharing among agents, this assumption may not hold in heterogeneous-agent systems where agents differ in capabilities, objectives, or action spaces." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 308, + 410, + 530, + 624 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 410, + 530, + 624 + ], + "spans": [ + { + "bbox": [ + 308, + 410, + 530, + 624 + ], + "type": "text", + "content": "To address this, Kuba et al. [160] proposed Heterogeneous-Agent Trust Region Policy Optimization (HATRPO) and Heterogeneous-Agent Proximal Policy Optimization (HAPPO). These algorithms remove the parameter-sharing assumption, allowing for individualized policy networks for each agent. HATRPO builds upon TRPO by introducing a sequential update scheme, where only one agent updates its policy at a time while the policies of other agents remain fixed. This approach ensures monotonic improvement in joint policies, as it approximates the Nash equilibrium under certain conditions, such as full observability and deterministic environments. HAPPO extends PPO in a similar vein, replacing parameter sharing with individualized policies. Like HATRPO, HAPPO employs a sequential update mechanism, but it retains the computational efficiency and practical simplicity of PPO's clipped objective function." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 308, + 624, + 530, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 624, + 530, + 731 + ], + "spans": [ + { + "bbox": [ + 308, + 624, + 530, + 731 + ], + "type": "text", + "content": "Both HATRPO and HAPPO utilize a sequential update process where one agent updates its policy while others remain fixed. This prevents conflicts during policy optimization and ensures theoretical convergence to a stable joint policy. Moreover, HATRPO and HAPPO provide monotonic improvement guarantees under specific conditions. By removing the parameter-sharing constraint, these algorithms enable agents to learn tailored policies that account for their unique roles and" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 64, + 111, + 287, + 158 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 64, + 111, + 287, + 158 + ], + "spans": [ + { + "bbox": [ + 64, + 111, + 287, + 158 + ], + "type": "text", + "content": "capabilities. Both algorithms perform competitively in benchmark tasks, demonstrating their ability to scale to high-dimensional state-action spaces while maintaining robust coordination among agents." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 64, + 159, + 287, + 254 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 64, + 159, + 287, + 254 + ], + "spans": [ + { + "bbox": [ + 64, + 159, + 287, + 254 + ], + "type": "text", + "content": "PPO-based algorithms, including MAPPO [133], HATRPO [160], and HAPPO [160], have revolutionized multi-agent reinforcement learning by combining the stability of PPO with the coordination benefits of centralized critics. These algorithms have proven effective across a wide array of cooperative and competitive MARL tasks, offering strong performance and scalability." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 64, + 255, + 287, + 397 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 64, + 255, + 287, + 397 + ], + "spans": [ + { + "bbox": [ + 64, + 255, + 287, + 397 + ], + "type": "text", + "content": "3. Other Categories of Algorithms within the CTDE Paradigm: In addition to the well-established categories of Value Decomposition-based, Actor-Critic-based, and Proximal Policy Optimization (PPO)-based algorithms, the MARL research has seen significant advancements through innovative optimizations and enhancements within CTDE paradigm that are not confined to these traditional classifications. These approaches aim to address the inherent challenges of multi-agent environments, such as non-stationarity and limited communication, to improve overall cooperation and policy learning efficiency." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 64, + 399, + 287, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 64, + 399, + 287, + 731 + ], + "spans": [ + { + "bbox": [ + 64, + 399, + 287, + 731 + ], + "type": "text", + "content": "For example, Centralized Advising and Decentralized Pruning (CADP) is a novel framework introduced by Zhou et al. [131] to address limitations in the CTDE paradigm. 
CADP enhances the training process by allowing agents to explicitly communicate and exchange advice during centralized training, thus improving joint-policy exploration. To maintain decentralized execution, CADP incorporates a smooth model pruning mechanism that gradually restricts agent communication without compromising their cooperative capabilities, demonstrating its superior performance on multiagent StarCraft II SMAC and Google Research Football benchmarks. CommNet [161] introduces a neural model where multiple agents learn to communicate continuously and collaboratively through a shared communication channel, optimizing their performance on fully cooperative tasks. The method allows agents to develop their own communication protocols during training, leading to improved coordination and task-solving capabilities. Mao et al. [87] introduced a novel Meta-MARL framework by integrating game-theoretical meta-learning with MARL algorithms using the CTDE's framework, such as the Actor-Critic-based COMA [157]. This framework offers initialization-dependent convergence guarantees and significantly improves convergence rates by addressing related tasks collectively. Yun et al. [86] proposed a novel approach called Quantum Meta Multi-Agent Reinforce" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 308, + 111, + 531, + 278 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 111, + 531, + 278 + ], + "spans": [ + { + "bbox": [ + 308, + 111, + 531, + 278 + ], + "type": "text", + "content": "ment Learning (QM2ARL), achieving high rewards, fast convergence, and quick adaptation in dynamic environments. QM2ARL leverages the unique dual-dimensional trainability of Quantum Neural Networks (QNNs) to enhance MARL. Liu et al. [95] proposed the Learning before Interaction (LBI) framework, which integrates a language-guided simulator into the multiagent reinforcement learning pipeline to address complex decision-making problems. By leveraging a generative world model with dynamics and reward components, LBI generates trial-and-error experiences to improve policy learning, demonstrating superior performance and generalization on the StarCraft Multi-Agent Challenge benchmark [134, 135]." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 308, + 290, + 531, + 312 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 290, + 531, + 312 + ], + "spans": [ + { + "bbox": [ + 308, + 290, + 531, + 312 + ], + "type": "text", + "content": "2.3.2. Decentralized Training with Decentralized Execution (DTDE)" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 308, + 314, + 531, + 433 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 314, + 531, + 433 + ], + "spans": [ + { + "bbox": [ + 308, + 314, + 531, + 433 + ], + "type": "text", + "content": "As shown in the centre of Figure 3, DTDE represents a fully decentralized mechanism where each agent interacts independently with the environment and updates its own policy based on its own observations and rewards [122]. In this framework, each agent trains and operates completely independently, relying only on its own observations and rewards to update its strategy. DTDE is particularly suited for environments with limited communication or no global coordination, offering strong scalability and robustness [123]." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 308, + 434, + 531, + 624 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 434, + 531, + 624 + ], + "spans": [ + { + "bbox": [ + 308, + 434, + 531, + 624 + ], + "type": "text", + "content": "The core idea behind DTDE is the independence of agents [122]. Each agent interacts with its environment and learns without requiring information from others. This makes DTDE scalable, but it also introduces challenges such as non-stationarity, where the environment appears to change as other agents adapt their strategies. This characteristic makes DTDE a valuable and challenging area of research. The theoretical foundation of DTDE is often based on Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs). As described by Amato et al. [122, 124], a Dec-POMDP models a decentralized decision-making environment where agents operate independently with limited observations while aiming to maximize a shared reward. The decentralized nature of DTDE requires each agent to learn optimal strategies based on local information only." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 308, + 625, + 531, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 625, + 531, + 731 + ], + "spans": [ + { + "bbox": [ + 308, + 625, + 531, + 731 + ], + "type": "text", + "content": "First and foremost, one of the earliest DTDE approaches is Independent Q-Learning (IQL) by et al. [127]. Here, each agent applies Q-learning independently, maintaining its own Q-function and updating it based on local observations and rewards. However, IQL faces several challenges, such as the non-stationary nature of the environment caused by other agents learning simultaneously. It also struggles with credit assignment, where it is hard to determine how an individual agent" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 292, + 740, + 303, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 292, + 740, + 303, + 748 + ], + "spans": [ + { + "bbox": [ + 292, + 740, + 303, + 748 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 61, + 111, + 286, + 136 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 111, + 286, + 136 + ], + "spans": [ + { + "bbox": [ + 61, + 111, + 286, + 136 + ], + "type": "text", + "content": "contributes to the team's success. To address these issues, several extensions of IQL have been proposed:" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 71, + 143, + 286, + 214 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 143, + 286, + 214 + ], + "spans": [ + { + "bbox": [ + 71, + 143, + 286, + 214 + ], + "type": "text", + "content": "- Distributed Q-Learning [128] optimistically assumes other agents always take optimal actions, focusing on learning from high-reward interactions. While effective in deterministic settings, it can be overly optimistic in environments with randomness." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 72, + 222, + 286, + 282 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 222, + 286, + 282 + ], + "spans": [ + { + "bbox": [ + 72, + 222, + 286, + 282 + ], + "type": "text", + "content": "- Hysteresic Q-Learning [162] By introducing two learning rates—one for positive updates and another, smaller rate for negative updates—hysteretic Q-learning balances optimism with robustness in stochastic environments." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 72, + 290, + 286, + 362 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 290, + 286, + 362 + ], + "spans": [ + { + "bbox": [ + 72, + 290, + 286, + 362 + ], + "type": "text", + "content": "- Lenient Q-Learning [163] dynamically adjusts how lenient the agent is in updating its values, depending on how frequently specific state-action pairs are encountered. It allows for more exploration in the early stages of learning while focusing on optimization later." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 370, + 286, + 572 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 370, + 286, + 572 + ], + "spans": [ + { + "bbox": [ + 61, + 370, + 286, + 572 + ], + "type": "text", + "content": "As MARL problems become more complex, DTDE methods have been extended to deep learning. Deep Q-Networks [150, 151, 152, 126] and Deep Recurrent Q-Networks [164] have been adapted for decentralized settings, enabling agents to handle high-dimensional state and action spaces. Independent DRQN (IDRQN) [165], for example, combines DRQN with independent learning, but its asynchronous experience replay can cause instability. To solve this, Concurrent Experience Replay Trajectories (CERTs) [166] synchronize experience replay among agents, reducing non-stationarity and improving learning efficiency. Other advancements include Deep Hysteretic DRQN (Dec-HDRQN) [166], which combines hysteretic updates with deep neural networks and uses concurrent buffers to handle decentralized data. These methods have shown robust performance in partially observable environments." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 573, + 286, + 634 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 573, + 286, + 634 + ], + "spans": [ + { + "bbox": [ + 61, + 573, + 286, + 634 + ], + "type": "text", + "content": "In the DTDE paradigm, policy gradient methods offer an alternative to value-based approaches, particularly for scenarios involving continuous action spaces [122]. Several policy gradient DTDE methods have been proposed:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 71, + 641, + 286, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 641, + 286, + 689 + ], + "spans": [ + { + "bbox": [ + 71, + 641, + 286, + 689 + ], + "type": "text", + "content": "- Decentralized REINFORCE [129] independently optimizes each agent's policy using gradient ascent based on rewards observed during episodes. While simple, it is less sample-efficient." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 72, + 697, + 286, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 697, + 286, + 732 + ], + "spans": [ + { + "bbox": [ + 72, + 697, + 286, + 732 + ], + "type": "text", + "content": "- Independent Actor-Critic (IAC) [157] Combining value estimation (critic) and policy optimization (actor), IAC enables agents to learn faster" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 326, + 111, + 530, + 135 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 326, + 111, + 530, + 135 + ], + "spans": [ + { + "bbox": [ + 326, + 111, + 530, + 135 + ], + "type": "text", + "content": "and update more frequently, improving sample efficiency." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 317, + 144, + 530, + 204 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 144, + 530, + 204 + ], + "spans": [ + { + "bbox": [ + 317, + 144, + 530, + 204 + ], + "type": "text", + "content": "- Independent Proximal Policy Optimization (IPPO) [158, 153, 133] Extending Proximal Policy Optimization (PPO) to decentralized settings, IPPO improves policy stability by limiting how much policies can change between updates." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 305, + 215, + 531, + 370 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 215, + 531, + 370 + ], + "spans": [ + { + "bbox": [ + 305, + 215, + 531, + 370 + ], + "type": "text", + "content": "Despite its advantages, DTDE still faces significant challenges [122, 123]: 1. Non-Stationarity: As other agents learn and adapt, the environment appears dynamic and unstable to each agent, making convergence difficult; 2. Credit Assignment: It is hard to determine how each agent's actions contribute to the team's overall reward in cooperative tasks; 3. Trade-Offs Between Scalability and Performance: While DTDE scales well, its performance may be limited in tasks requiring high levels of coordination. To overcome these challenges, future research could focus on improving communication strategies during training and more robust strategies for dynamic environments." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 305, + 370, + 531, + 478 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 370, + 531, + 478 + ], + "spans": [ + { + "bbox": [ + 305, + 370, + 531, + 478 + ], + "type": "text", + "content": "In conclusion, the DTDE paradigm provides a powerful framework for solving distributed decision-making problems, balancing scalability, independence, and efficiency. It has been successfully applied in areas such as autonomous driving, distributed energy management, and swarm robotics. As research continues, DTDE is expected to play a larger role in real-world multi-agent systems, especially in scenarios requiring robust, independent learning." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 306, + 491, + 530, + 515 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 491, + 530, + 515 + ], + "spans": [ + { + "bbox": [ + 306, + 491, + 530, + 515 + ], + "type": "text", + "content": "2.3.3. 
Centralized Training with Centralized Execution (CTCE)" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 305, + 517, + 530, + 636 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 517, + 530, + 636 + ], + "spans": [ + { + "bbox": [ + 305, + 517, + 530, + 636 + ], + "type": "text", + "content": "As shown in the right of Figure 3, Centralized Training with Centralized Execution (CTCE) stands out as a fully centralized mechanism to MARL decision-making, where all agents transmit their information to a central controller [150, 151, 152, 126]. This central controller has access to the observations, actions, and rewards of all agents. The agents themselves do not possess policy networks and are not responsible for making decisions. Instead, they simply execute the directives issued by the central controller [125, 123]." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 305, + 637, + 531, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 637, + 531, + 732 + ], + "spans": [ + { + "bbox": [ + 305, + 637, + 531, + 732 + ], + "type": "text", + "content": "Multi-Agent DQN [126] is a classic example of the CTCE paradigm, where DQN is combined with a parameter-sharing mechanism to address tasks in multi-agent environments. Gupta et al. [126] firstly extends three single-agent DRL algorithms (DQN [1, 2], TRPO, and A3C) to multi-agent systems, resulting in Multi-Agent-DQN, Multi-Agent-TRPO, and Multi-Agent-A3C. These approaches were designed to learn" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 63, + 108, + 288, + 552 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 108, + 288, + 552 + ], + "spans": [ + { + "bbox": [ + 63, + 108, + 288, + 552 + ], + "type": "text", + "content": "cooperative policies in complex, partially observable environments without requiring explicit communication between agents. The DQN algorithm based on multi-agent settings, also known as PS-DQN (Parameter Sharing DQN), effectively utilizes curriculum learning to handle increasing task complexity. By starting with fewer agents and gradually increasing the number, the model scales well to more complex scenarios. Further, this foundational work has led to numerous enhancements and variants based on Multi-Agent DQN, each designed to address specific challenges in multi-agent systems, such as CoRe [150], MARL-DQN [151], and [152]. CoRe [150] introduces a counterfactual reward mechanism into MARL to address the credit assignment problem in cooperative settings. By computing the difference in global rewards when an agent hypothetically changes its action while others keep theirs fixed, CoRe enhances the standard DQN framework, significantly improving learning efficiency and performance in cooperative tasks. MARL-DQN [151] optimizes energy efficiency and resource allocation in NOMA wireless systems by using MARL framework combined with Deep Q-Networks. 
By combining MARL with DQN, it dynamically adjusts power and time allocation to minimize energy consumption while ensuring quality of service, outperforming traditional methods in terms of efficiency and performance. Hafiz et al. [152] proposed a simplified and efficient multi-agent DQN-based multi-agent system (MAS) that addresses the challenges of complexity, resource demands, and training difficulties inherent in more advanced MARL frameworks. The work introduced a shared state and reward system while maintaining agent-specific actions, which streamlines the experience replay process. The significant improvements in tasks such as Cartpole-v1" + }, + { + "bbox": [ + 63, + 108, + 288, + 552 + ], + "type": "inline_equation", + "content": "^1" + }, + { + "bbox": [ + 63, + 108, + 288, + 552 + ], + "type": "text", + "content": ", LunarLander-v2" + }, + { + "bbox": [ + 63, + 108, + 288, + 552 + ], + "type": "inline_equation", + "content": "^2" + }, + { + "bbox": [ + 63, + 108, + 288, + 552 + ], + "type": "text", + "content": ", and Maze Traversal" + }, + { + "bbox": [ + 63, + 108, + 288, + 552 + ], + "type": "inline_equation", + "content": "^3" + }, + { + "bbox": [ + 63, + 108, + 288, + 552 + ], + "type": "text", + "content": " from OpenAI Gym [167] demonstrates the model's effectiveness and superiority." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 568, + 287, + 592 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 568, + 287, + 592 + ], + "spans": [ + { + "bbox": [ + 61, + 568, + 287, + 592 + ], + "type": "text", + "content": "2.3.4. Addition Taxonomies and Efforts of Communication-based MARL Algorithms" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 593, + 286, + 641 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 593, + 286, + 641 + ], + "spans": [ + { + "bbox": [ + 61, + 593, + 286, + 641 + ], + "type": "text", + "content": "As outlined above, three primary paradigms—CTCE, DTDE, and CTDE—have emerged in the MARL domain to tackle the challenges associated with training and execution in multi-agent systems. Each of these" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 305, + 111, + 531, + 158 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 111, + 531, + 158 + ], + "spans": [ + { + "bbox": [ + 305, + 111, + 531, + 158 + ], + "type": "text", + "content": "paradigms has its strengths and limitations, yet all face inherent difficulties in handling communication among agents, which is critical for effective collaboration and decision-making." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 158, + 531, + 337 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 158, + 531, + 337 + ], + "spans": [ + { + "bbox": [ + 305, + 158, + 531, + 337 + ], + "type": "text", + "content": "Specifically, the CTCE paradigm, while providing a fully integrated framework for learning and execution, struggles with scalability as the system size grows. The DTDE paradigm, on the other hand, allows for independent agent training and execution, but often lacks the necessary coordination required for global task optimization. The CTDE paradigm has emerged as a widely adopted approach due to its ability to leverage centralized information during training to learn effective policies, while enabling decentralized execution to operate efficiently in distributed environments. 
However, even in CTDE, the communication between agents during execution is a bottleneck, prompting researchers to focus on improving communication strategies to enhance system performance." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 305, + 338, + 531, + 565 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 338, + 531, + 565 + ], + "spans": [ + { + "bbox": [ + 305, + 338, + 531, + 565 + ], + "type": "text", + "content": "Communication-based MARL algorithms have made significant progress in overcoming these challenges. From the perspective of communication protocols and languages, communication-based MARL methods can be categorized into three types: broadcasting communication, targeted communication, and networked communication, as shown in Figure 4. From the technical angle, we provide an overview of these communication-based MARL advancements, categorizing the algorithms into three main groups based on their focus: (1) value function-based Communication-based MARL, (2) policy search-based Communication-based MARL, and (3) Communication-based MARL algorithms designed to improve communication efficiency. These approaches represent the forefront of research in enabling agents to effectively share information, coordinate actions, and optimize performance in complex environments. Here, we provide a detailed introduction to these approaches." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 305, + 565, + 531, + 719 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 565, + 531, + 719 + ], + "spans": [ + { + "bbox": [ + 305, + 565, + 531, + 719 + ], + "type": "text", + "content": "Value Function-Based Communication-based MARL: For Value Function-Based Communication-based MARL Algorithms, several notable works include Differentiable Inter-Agent Learning (DIAL) [168] and Deep Distributed Recurrent Q-Networks (DDRQN) [169, 170]. Among them, DIAL facilitates effective collaboration and optimization of joint action policies by enabling the exchange of gradients of Q-functions between agents. On the other hand, DDRQN leverages recurrent neural networks to address partially observable environments, allowing agents to share critical Q-values or hidden states and make adaptive decisions in dynamic settings." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 315, + 720, + 531, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 720, + 531, + 732 + ], + "spans": [ + { + "bbox": [ + 315, + 720, + 531, + 732 + ], + "type": "text", + "content": "Policy Search-Based Communication-based" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 61, + 663, + 286, + 683 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 663, + 286, + 683 + ], + "spans": [ + { + "bbox": [ + 61, + 663, + 286, + 683 + ], + "type": "text", + "content": "1Cartpole-v1 game: https://www.gymlibrary.dev/environments/classic_control/cart_pole/." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 683, + 285, + 712 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 683, + 285, + 712 + ], + "spans": [ + { + "bbox": [ + 61, + 683, + 285, + 712 + ], + "type": "text", + "content": "2LunarLander-v2 game: https://www.gymlibrary.dev/environments/box2d/lunar_lander/ and https://github.com/topics/lunarlander-v2." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 712, + 285, + 731 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 712, + 285, + 731 + ], + "spans": [ + { + "bbox": [ + 61, + 712, + 285, + 731 + ], + "type": "text", + "content": "3Maze Traversal game: https://github.com/vision-mini/MazeSolverLLM." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 69, + 112, + 202, + 238 + ], + "blocks": [ + { + "bbox": [ + 69, + 112, + 202, + 238 + ], + "lines": [ + { + "bbox": [ + 69, + 112, + 202, + 238 + ], + "spans": [ + { + "bbox": [ + 69, + 112, + 202, + 238 + ], + "type": "image", + "image_path": "9250e939563f539e78b13d5c0bf791e94784531aabd5105d5e00674c0b29e491.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 77, + 254, + 199, + 265 + ], + "lines": [ + { + "bbox": [ + 77, + 254, + 199, + 265 + ], + "spans": [ + { + "bbox": [ + 77, + 254, + 199, + 265 + ], + "type": "text", + "content": "(1) Broadcasting communication" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 227, + 112, + 355, + 251 + ], + "blocks": [ + { + "bbox": [ + 227, + 112, + 355, + 251 + ], + "lines": [ + { + "bbox": [ + 227, + 112, + 355, + 251 + ], + "spans": [ + { + "bbox": [ + 227, + 112, + 355, + 251 + ], + "type": "image", + "image_path": "f382e9eeb15ff5c88549bd6c0b57199c910ae4030f046b9be51bb2700944e725.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 237, + 254, + 343, + 265 + ], + "lines": [ + { + "bbox": [ + 237, + 254, + 343, + 265 + ], + "spans": [ + { + "bbox": [ + 237, + 254, + 343, + 265 + ], + "type": "text", + "content": "(2) Targeted communication" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 383, + 113, + 520, + 251 + ], + "blocks": [ + { + "bbox": [ + 383, + 113, + 520, + 251 + ], + "lines": [ + { + "bbox": [ + 383, + 113, + 520, + 251 + ], + "spans": [ + { + "bbox": [ + 383, + 113, + 520, + 251 + ], + "type": "image", + "image_path": "494d99f68b7befe7e5aeb7bcab2ce12458baeb66a4548b45686ea81abbdb200e.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 396, + 254, + 511, + 265 + ], + "lines": [ + { + "bbox": [ + 396, + 254, + 511, + 265 + ], + "spans": [ + { + "bbox": [ + 396, + 254, + 511, + 265 + ], + "type": "text", + "content": "(3) Networked communication" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 61, + 288, + 531, + 336 + ], + "lines": [ + { + "bbox": [ + 61, + 288, + 531, + 336 + ], + "spans": [ + { + "bbox": [ + 61, + 288, + 531, + 336 + ], + "type": "text", + "content": "Figure 4: A schematic representation of three distinct communication methods among agents, with arrows indicating the direction of message transmission. (a) Broadcasting communication: The activated agent transmits messages to all other agents within the communication network. 
(b) Targeted communication: Agents selectively communicate with specific target agents based on a supervisory mechanism that regulates the timing, content, and recipients of the messages. (c) Networked communication: Agents engage in localized interactions with their neighboring agents within the network." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 354, + 286, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 354, + 286, + 665 + ], + "spans": [ + { + "bbox": [ + 61, + 354, + 286, + 665 + ], + "type": "text", + "content": "MARL: For Policy Search-Based Communication-based MARL Algorithms, significant progress has been made with approaches such as Communication Networks (CommNet) [171], Bidirectional Coordinated Network (BiCNet) [172, 173], Multi-Agent Distributed MADDPG (MD-MADDPG) [174, 132], Intrinsic A3C [175, 155], and Multi-Agent Communication and Coordination (MACC) [176, 177]. Among them, CommNet [171] proposes a centralized but differentiable communication framework where agents share encoded signals to form a global context, improving policy decisions. BiCNet [172, 173] enhances coordination among agents by employing bidirectional recurrent layers, making it suitable for complex tasks. MD-MADDPG [174, 132] combines centralized training and decentralized execution, enabling agents to exchange critical state-action information during training for robust policy learning. Intrinsic A3C [175, 155] introduces intrinsic motivation to encourage effective exploration in sparse-reward scenarios, with agents sharing intrinsic rewards through communication to boost performance. Finally, Multi-Agent Communication and Coordination (MACC) [176, 177] focuses on adaptive communication mechanisms, providing stable and secure coordination to enhance training and execution in dynamic multi-agent environments." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 61, + 672, + 286, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 672, + 286, + 732 + ], + "spans": [ + { + "bbox": [ + 61, + 672, + 286, + 732 + ], + "type": "text", + "content": "Communication-based MARL Enhancing Communication Efficiency: For algorithms aimed at enhancing communication efficiency, several outstanding approaches include Attentional Communication (ATOC) [138], Targeted Multi-Agent Communication" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 305, + 354, + 531, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 354, + 531, + 509 + ], + "spans": [ + { + "bbox": [ + 305, + 354, + 531, + 509 + ], + "type": "text", + "content": "(TarMAC) [139], Inter-Agent Centralized Communication (IC3Net) [178]. Attentional Communication (ATOC) [138] employs an attention mechanism to dynamically determine when communication is necessary, achieving a balance between efficiency and coordination. Targeted Multi-Agent Communication (TarMAC) [139] introduces targeted attention mechanisms to direct messages to relevant teammates, minimizing redundant communication, and improving overall performance. Inter-Agent Centralized Communication (IC3Net) [178] incorporates a gating mechanism that allows agents to learn when and how to communicate, optimizing both the frequency and quality of interactions." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 305, + 511, + 530, + 593 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 511, + 530, + 593 + ], + "spans": [ + { + "bbox": [ + 305, + 511, + 530, + 593 + ], + "type": "text", + "content": "These research advances in Communication-based MARL methods demonstrate significant strides in enabling agents to share information and achieve coordinated decision-making in MAS. These advancements will pave the way for deploying MARL in real-world scenarios where efficient and effective communication is essential." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 306, + 614, + 509, + 625 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 614, + 509, + 625 + ], + "spans": [ + { + "bbox": [ + 306, + 614, + 509, + 625 + ], + "type": "text", + "content": "2.4. LLMs-based Multi-Agent System Taxonomies" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 305, + 637, + 530, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 637, + 530, + 731 + ], + "spans": [ + { + "bbox": [ + 305, + 637, + 530, + 731 + ], + "type": "text", + "content": "The field of LLMs-based multi-agent systems has seen significant advancements, with researchers exploring various aspects of these systems to enhance their capabilities and applications [116, 30]. A comprehensive taxonomy can help categorize and understand the different dimensions of LLMs-based multi-agent systems, including architectural design, application domains, evaluation methods, and future research directions." + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "bbox": [ + 61, + 111, + 175, + 122 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 111, + 175, + 122 + ], + "spans": [ + { + "bbox": [ + 61, + 111, + 175, + 122 + ], + "type": "text", + "content": "2.4.1. Architectural Design" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 124, + 286, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 124, + 286, + 232 + ], + "spans": [ + { + "bbox": [ + 61, + 124, + 286, + 232 + ], + "type": "text", + "content": "The design of architectures for LLMs-based multiagent systems is a critical component in harnessing the full potential of LLMs to enhance the capabilities of autonomous agents. Architectural design encompasses the framework and mechanisms that enable agents to interact, adapt, and make decisions in complex and dynamic environments. This section explores two primary levels of autonomy within these systems: Adaptive Autonomy and Self-Organizing Autonomy." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 71, + 240, + 286, + 431 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 240, + 286, + 431 + ], + "spans": [ + { + "bbox": [ + 71, + 240, + 286, + 431 + ], + "type": "text", + "content": "- Adaptive Autonomy: [103, 146, 113, 110] Adaptive autonomy refers to systems where agents can adjust their behavior within a predefined framework. 
These agents are designed to operate within the constraints set by the system architects but can adapt their actions based on the specific requirements of the task at hand. For example, in a task-specific adaptation scenario, an agent might adjust its search strategy in an information retrieval task based on the relevance of the results. In a context-aware adaptation scenario, an agent might change its communication style based on the social context of the interaction. This level of autonomy is crucial for agents that need to operate in dynamic environments where the task requirements can change over time." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 72, + 439, + 286, + 618 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 72, + 439, + 286, + 618 + ], + "spans": [ + { + "bbox": [ + 72, + 439, + 286, + 618 + ], + "type": "text", + "content": "- Self-Organizing Autonomy: [101, 103, 105, 102, 104, 154] Self-organizing autonomy represents a higher level of autonomy where agents can dynamically adapt their behavior without predefined structures. This allows for more flexible and context-aware interactions among agents. For instance, in dynamic task allocation, agents can assign tasks to each other based on the current state of the environment and their individual skills. Emergent behavior is another key feature at this level, where agents can form coalitions or develop new strategies to solve complex problems. This level of autonomy is essential for multi-agent systems that need to operate in highly dynamic and unpredictable environments." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 629, + 142, + 640 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 629, + 142, + 640 + ], + "spans": [ + { + "bbox": [ + 61, + 629, + 142, + 640 + ], + "type": "text", + "content": "2.4.2. Applications" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 641, + 286, + 690 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 641, + 286, + 690 + ], + "spans": [ + { + "bbox": [ + 61, + 641, + 286, + 690 + ], + "type": "text", + "content": "In the social sciences [148, 99, 112], LLMs-based agents have been used to simulate various social phenomena, providing insights into human behavior and social dynamics." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 71, + 697, + 286, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 697, + 286, + 732 + ], + "spans": [ + { + "bbox": [ + 71, + 697, + 286, + 732 + ], + "type": "text", + "content": "1) Economic Agents: [179, 101] LLMs can be used to model economic agents, similar to how economists use the concept of homo economicus." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 325, + 111, + 530, + 230 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 111, + 530, + 230 + ], + "spans": [ + { + "bbox": [ + 325, + 111, + 530, + 230 + ], + "type": "text", + "content": "Experiments have shown that LLMs can produce results qualitatively similar to those of traditional economic models, making them a promising tool for exploring new social science insights. For example, in market simulation, LLMs can predict market trends and the impact of economic policies. In behavioral economics, LLMs can model individual and group decision-making processes, providing a more nuanced understanding of economic behavior." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 317, + 238, + 530, + 380 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 238, + 530, + 380 + ], + "spans": [ + { + "bbox": [ + 317, + 238, + 530, + 380 + ], + "type": "text", + "content": "- 2) Social Network Simulation: [147, 148, 99, 180] The Social-network Simulation System (S3) uses LLMs-based agents to simulate social networks, accurately replicating individual attitudes, emotions, and behaviors. This system can model the propagation of information, attitudes, and emotions at the population level, providing valuable insights into social dynamics. For example, it can simulate how information spreads through social networks and identify influential nodes, or model the evolution of social norms and behaviors over time." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 317, + 389, + 530, + 508 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 389, + 530, + 508 + ], + "spans": [ + { + "bbox": [ + 317, + 389, + 530, + 508 + ], + "type": "text", + "content": "- 3) User Behavior Analysis: [97, 111, 112] LLMs are employed for user simulation in recommender systems, demonstrating superiority over baseline simulation systems. They can generate reliable user behaviors, improving the accuracy of recommendations. For example, in personalized recommendations, LLMs can generate user profiles and behaviors to optimize recommendation algorithms. In user engagement, LLMs can simulate user interactions to optimize user retention and engagement." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 306, + 515, + 530, + 562 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 515, + 530, + 562 + ], + "spans": [ + { + "bbox": [ + 306, + 515, + 530, + 562 + ], + "type": "text", + "content": "In the natural sciences [100, 149, 181], LLMs-based agents have been used to simulate complex systems and processes, providing insights into natural phenomena and scientific theories." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 317, + 570, + 530, + 700 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 570, + 530, + 700 + ], + "spans": [ + { + "bbox": [ + 317, + 570, + 530, + 700 + ], + "type": "text", + "content": "1) Macroeconomic Simulation: LLMs-based agents are used for macroeconomic simulation, making realistic decisions and reproducing classic macroeconomic phenomena. These agents can simulate the impact of economic policies on the macroeconomy, providing a more accurate and dynamic model of economic behavior. For example, they can simulate the interactions between different economic sectors and their impact on the overall economy, helping policymakers make more informed decisions." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 317, + 708, + 530, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 708, + 530, + 732 + ], + "spans": [ + { + "bbox": [ + 317, + 708, + 530, + 732 + ], + "type": "text", + "content": "2) Generative Agent-Based Modeling: This approach couples mechanistic models with genera" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 292, + 740, + 302, + 748 + ], + "type": "text", + "content": "18" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "bbox": [ + 80, + 111, + 286, + 218 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 111, + 286, + 218 + ], + "spans": [ + { + "bbox": [ + 80, + 111, + 286, + 218 + ], + "type": "text", + "content": "tive artificial intelligence to unveil social system dynamics, such as norm diffusion and social dynamics. By combining the strengths of both approaches, researchers can model complex social systems and predict their behavior over time. For example, they can model the spread of diseases in a population, the impact of environmental changes on ecosystems, or the evolution of social norms in a community." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 226, + 286, + 262 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 226, + 286, + 262 + ], + "spans": [ + { + "bbox": [ + 61, + 226, + 286, + 262 + ], + "type": "text", + "content": "In engineering [103, 146, 114, 115], LLMs-based agents have been used to develop and optimize complex systems, improving efficiency and performance." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 71, + 269, + 286, + 389 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 269, + 286, + 389 + ], + "spans": [ + { + "bbox": [ + 71, + 269, + 286, + 389 + ], + "type": "text", + "content": "- 1) Software Development: LLMs-based agents are used for software development, facilitating sophisticated interactions and decision-making in a wide range of contexts. These agents can assist in code generation, bug detection, and system optimization, improving the productivity and quality of software development. For example, they can generate code snippets based on natural language descriptions, detect bugs in code, and suggest optimizations to improve performance." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 71, + 396, + 287, + 515 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 396, + 287, + 515 + ], + "spans": [ + { + "bbox": [ + 71, + 396, + 287, + 515 + ], + "type": "text", + "content": "- 2) Multi-Robot Systems: LLMs-based multi-agent systems are used to simulate complex real-world environments effectively, enabling interactions among diverse agents to solve various tasks. These systems can coordinate the actions of multiple robots, optimizing their behavior to achieve common goals. For example, they can be used in search and rescue operations, where multiple robots need to coordinate their actions to locate and rescue victims." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 533, + 286, + 557 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 533, + 286, + 557 + ], + "spans": [ + { + "bbox": [ + 61, + 533, + 286, + 557 + ], + "type": "text", + "content": "3. Simulation Environments of Multi-Agent Decision-Making" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 565, + 287, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 565, + 287, + 732 + ], + "spans": [ + { + "bbox": [ + 61, + 565, + 287, + 732 + ], + "type": "text", + "content": "First and foremost, the designs and implementations of multi-agent cooperative simulation environments are crucial in the historical research of multi-agent decision-making, which are widely utilized in practical applications and production. These simulation environments form the foundation for conducting efficient and effective studies in multi-agent cooperative decision-making. Specifically, a dynamic multi-agent cooperative decision-making environment refers to predetermined scenarios and platforms where multiple agents collaborate to solve problems, complete tasks, and achieve goals. Such environments provide not only a platform for testing and validating various intelligent decision-making algorithms but also help" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 305, + 111, + 532, + 327 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 111, + 532, + 327 + ], + "spans": [ + { + "bbox": [ + 305, + 111, + 532, + 327 + ], + "type": "text", + "content": "us better understand the behaviors and interactions of agents in dynamic settings. By simulating these interactions, researchers can gain insights into how agents coordinate and adapt to changing conditions, thereby improving the robustness and efficiency of multi-agent systems in real-world applications. Consequently, the importance of these simulation environments cannot be overstated. They serve as a testing ground for theoretical models, allowing researchers to observe the practical implications of their intelligent algorithms. Additionally, these platforms help in identifying potential issues and refining strategies before deployment in actual scenarios, ensuring that the agents are well-prepared to handle the complexities of real-world environments. In Table 2, a wide range of simulated environments is listed. Next, we will delve into these environments one by one, emphasizing their significance and features for future development." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 306, + 338, + 485, + 349 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 338, + 485, + 349 + ], + "spans": [ + { + "bbox": [ + 306, + 338, + 485, + 349 + ], + "type": "text", + "content": "3.1. MARL-based Simulation Environments" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 354, + 531, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 354, + 531, + 437 + ], + "spans": [ + { + "bbox": [ + 305, + 354, + 531, + 437 + ], + "type": "text", + "content": "This section provides an overview of several widely-used simulation environments designed for MARL. These platforms, such as Multi-Agent Particle Environment [59, 136, 137], and PettingZoo [182], offer diverse scenarios and functionalities for exploring cooperative and competitive agent interactions in both simple and complex tasks." 
+ } + ] + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 311, + 448, + 383, + 524 + ], + "blocks": [ + { + "bbox": [ + 311, + 448, + 383, + 524 + ], + "lines": [ + { + "bbox": [ + 311, + 448, + 383, + 524 + ], + "spans": [ + { + "bbox": [ + 311, + 448, + 383, + 524 + ], + "type": "image", + "image_path": "dcd89d2a1cccb373c66e14cf63b7c26091146804c5ab4fa00049a238a73e0215.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 321, + 524, + 370, + 532 + ], + "lines": [ + { + "bbox": [ + 321, + 524, + 370, + 532 + ], + "spans": [ + { + "bbox": [ + 321, + 524, + 370, + 532 + ], + "type": "text", + "content": "Communication" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 306, + 544, + 531, + 564 + ], + "lines": [ + { + "bbox": [ + 306, + 544, + 531, + 564 + ], + "spans": [ + { + "bbox": [ + 306, + 544, + 531, + 564 + ], + "type": "text", + "content": "Figure 5: Typical Scenarios in Multi-Agent Particle Environment (MPE)." + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 383, + 448, + 454, + 523 + ], + "blocks": [ + { + "bbox": [ + 383, + 448, + 454, + 523 + ], + "lines": [ + { + "bbox": [ + 383, + 448, + 454, + 523 + ], + "spans": [ + { + "bbox": [ + 383, + 448, + 454, + 523 + ], + "type": "image", + "image_path": "aaea582c151a81fb3696d62ff4d12a2da361daeb71969d0cef534808282b728a.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 407, + 524, + 429, + 532 + ], + "lines": [ + { + "bbox": [ + 407, + 524, + 429, + 532 + ], + "spans": [ + { + "bbox": [ + 407, + 524, + 429, + 532 + ], + "type": "text", + "content": "Spread" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 456, + 448, + 526, + 523 + ], + "blocks": [ + { + "bbox": [ + 456, + 448, + 526, + 523 + ], + "lines": [ + { + "bbox": [ + 456, + 448, + 526, + 523 + ], + "spans": [ + { + "bbox": [ + 456, + 448, + 526, + 523 + ], + "type": "image", + "image_path": "c5a5536b752b9767e384cd17a453f522e02db73bbd898ee3b1d32025f3d71938.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 475, + 524, + 505, + 532 + ], + "lines": [ + { + "bbox": [ + 475, + 524, + 505, + 532 + ], + "spans": [ + { + "bbox": [ + 475, + 524, + 505, + 532 + ], + "type": "text", + "content": "Reference" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_caption" + } + ], + "index": 13 + }, + { + "bbox": [ + 306, + 587, + 516, + 599 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 587, + 516, + 599 + ], + "spans": [ + { + "bbox": [ + 306, + 587, + 516, + 599 + ], + "type": "text", + "content": "3.1.1. Several Widely-used Environments on MARL" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 305, + 601, + 531, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 601, + 531, + 696 + ], + "spans": [ + { + "bbox": [ + 305, + 601, + 531, + 696 + ], + "type": "text", + "content": "Multi-Agent Particle Environment (MPE) [59, 136, 137] is a versatile and widely-used MARL platform designed for research in both cooperative and competitive settings. Developed by OpenAI, it is primarily known for being the testing environment of the MAD-DPG algorithm [136]. MPE is a time-discrete, space-continuous 2D platform designed for evaluating MARL algorithms." 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 306, + 697, + 531, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 697, + 531, + 732 + ], + "spans": [ + { + "bbox": [ + 306, + 697, + 531, + 732 + ], + "type": "text", + "content": "Figure 5, initially derived from Malloy et al. [137], illustrates various scenarios within the Multi-Agent Particle Environment (MPE), including tasks" + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 292, + 739, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 292, + 739, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 292, + 739, + 302, + 748 + ], + "type": "text", + "content": "19" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 127, + 128, + 467, + 263 + ], + "blocks": [ + { + "bbox": [ + 135, + 117, + 458, + 127 + ], + "lines": [ + { + "bbox": [ + 135, + 117, + 458, + 127 + ], + "spans": [ + { + "bbox": [ + 135, + 117, + 458, + 127 + ], + "type": "text", + "content": "Table 2: Diverse MARL-based and LLMs-based Simulated Environments for Multi-Agent Systems." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 127, + 128, + 467, + 263 + ], + "lines": [ + { + "bbox": [ + 127, + 128, + 467, + 263 + ], + "spans": [ + { + "bbox": [ + 127, + 128, + 467, + 263 + ], + "type": "table", + "html": "
CategoriesMulti-Agent System Environments
MARL-basedMulti-Agent Particle Environment (MPE) [59, 136, 137], PettingZoo [182], SMAC [134], SMAC-v2 [135], GFootball [55], Gym-Microrts [183], MAgent [184], Dexterous Hands [185, 186], OpenAI Gym [167], Gym-MiniGrid [187]4, Melting Pot [188]5, Capture The Flag6 [189], VillagerAgent [190], Minecraft [191, 192, 193], Unity ML-Agents [194], SUMO7 [195], Hanabi Learning [196, 197], Predator-Prey [198, 199]
LLMs-basedTDW-MAT [147, 148], C-WAH [99], Cuisineworld [149], AgentScope [100], Ro-CoBench [200], Generative Agents [181, 201], SocialAI school [202, 203], Welfare Diplomacy [204]
", + "image_path": "8e1f91109e3b966e14a78a4c15a7cd1df02119c369d997aa57d9ae3164b5be3d.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 60, + 283, + 286, + 473 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 283, + 286, + 473 + ], + "spans": [ + { + "bbox": [ + 60, + 283, + 286, + 473 + ], + "type": "text", + "content": "such as adversarial interactions, cooperative crypto, object pushing, and team-based world navigation. Compatible with the widely-used Gym interface, it supports a variety of tasks ranging from fully cooperative to mixed cooperative-competitive scenarios, such as simple_adversary, simple_crypto, simple_spread, simple-speaker Listener, and simple_world_comm8. Each scenario highlights distinct cooperative and competitive dynamics among agents. MPE allows agents to interact and strategize within a visually simplistic UI where particles represent different entities. MPE is a open-source platform that widely adopted in the multi-agent system research, enabling extensive customization and contributing to its role as a standard tool for studying complex multi-agent dynamics." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 61, + 474, + 286, + 544 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 474, + 286, + 544 + ], + "spans": [ + { + "bbox": [ + 61, + 474, + 286, + 544 + ], + "type": "text", + "content": "Overall, MPE is a pivotal resource in the MARL community, offering a well-rounded platform for experimentation and algorithm comparison. Its design and functionality have made it an indispensable tool for researchers seeking to push the boundaries of what is possible in multi-agent systems." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 544, + 287, + 653 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 544, + 287, + 653 + ], + "spans": [ + { + "bbox": [ + 61, + 544, + 287, + 653 + ], + "type": "text", + "content": "StarCraft Multi-Agent Challenge (SMAC)[134] is a widely-used benchmark for MARL that focuses on decentralized micromanagement tasks in the popular real-time strategy game StarCraft II[10]. In SMAC, multiple agents control individual units and must learn to cooperate and coordinate actions based on local, partial observations. The agents face complex challenges, including coordinating combat techniques like focus fire, kiting, and positioning, while the opponent" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 305, + 283, + 532, + 475 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 283, + 532, + 475 + ], + "spans": [ + { + "bbox": [ + 305, + 283, + 532, + 475 + ], + "type": "text", + "content": "is controlled by the built-in StarCraft II AI. SMAC emphasizes problems such as partial observability, decentralized decision-making, and multi-agent credit assignment. The environment is structured to simulate real-world scenarios where agents must learn to collaborate without full knowledge of the global state. Agents' observations are restricted to a limited field of view, forcing them to rely on local information for decision-making. As shown in Figure 6, these multi-agent cooperative decision-making environments are respectively 2s vs 3z, 5m vs 6m, 6h vs 8z, MM2, where the inside numbers represent the number of units and the letters represent the unit types in general. 
In recent years, SMAC has become a standard benchmark for evaluating MARL algorithms, offering a rigorous and challenging environment for advancing the field." + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 310, + 482, + 418, + 550 + ], + "blocks": [ + { + "bbox": [ + 310, + 482, + 418, + 550 + ], + "lines": [ + { + "bbox": [ + 310, + 482, + 418, + 550 + ], + "spans": [ + { + "bbox": [ + 310, + 482, + 418, + 550 + ], + "type": "image", + "image_path": "8ce921f3f1029feaf7e4f9920d087711055b3f3b730fa94019ee7dfcc46a6eb1.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 351, + 553, + 381, + 561 + ], + "lines": [ + { + "bbox": [ + 351, + 553, + 381, + 561 + ], + "spans": [ + { + "bbox": [ + 351, + 553, + 381, + 561 + ], + "type": "text", + "content": "2s_vs_3z" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 421, + 482, + 528, + 550 + ], + "blocks": [ + { + "bbox": [ + 421, + 482, + 528, + 550 + ], + "lines": [ + { + "bbox": [ + 421, + 482, + 528, + 550 + ], + "spans": [ + { + "bbox": [ + 421, + 482, + 528, + 550 + ], + "type": "image", + "image_path": "9ffcd120820e280580ba8b4135920ff1bb425175bb4ae45fd0b6b96459a05cdf.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 456, + 553, + 493, + 562 + ], + "lines": [ + { + "bbox": [ + 456, + 553, + 493, + 562 + ], + "spans": [ + { + "bbox": [ + 456, + 553, + 493, + 562 + ], + "type": "text", + "content": "5m_vs_6m" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 310, + 567, + 417, + 637 + ], + "blocks": [ + { + "bbox": [ + 310, + 567, + 417, + 637 + ], + "lines": [ + { + "bbox": [ + 310, + 567, + 417, + 637 + ], + "spans": [ + { + "bbox": [ + 310, + 567, + 417, + 637 + ], + "type": "image", + "image_path": "c441759cf5f1f64bf32be940a7213d4e62acd6483c83f9d24fc127762b5ee44f.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 350, + 639, + 379, + 646 + ], + "lines": [ + { + "bbox": [ + 350, + 639, + 379, + 646 + ], + "spans": [ + { + "bbox": [ + 350, + 639, + 379, + 646 + ], + "type": "text", + "content": "6h_vs_8z" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 420, + 567, + 527, + 637 + ], + "blocks": [ + { + "bbox": [ + 420, + 567, + 527, + 637 + ], + "lines": [ + { + "bbox": [ + 420, + 567, + 527, + 637 + ], + "spans": [ + { + "bbox": [ + 420, + 567, + 527, + 637 + ], + "type": "image", + "image_path": "062a4fad536eb0ec8c96ad5812fb9f6ca97a26bdff81ad9a24d287f63775e317.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 468, + 639, + 490, + 646 + ], + "lines": [ + { + "bbox": [ + 468, + 639, + 490, + 646 + ], + "spans": [ + { + "bbox": [ + 468, + 639, + 490, + 646 + ], + "type": "text", + "content": "MMM2" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 306, + 658, + 530, + 678 + ], + "lines": [ + { + "bbox": [ + 306, + 658, + 530, + 678 + ], + "spans": [ + { + "bbox": [ + 306, + 658, + 530, + 678 + ], + "type": "text", + "content": "Figure 6: Several Typical Scenarios in StarCraft Multi-Agent Challenge (SMAC)." 
+ } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "bbox": [ + 306, + 687, + 530, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 687, + 530, + 712 + ], + "spans": [ + { + "bbox": [ + 306, + 687, + 530, + 712 + ], + "type": "text", + "content": "StarCraft Multi-Agent Challenge 2 (SMACv2)[1134, 154, 91] However, SMAC [134] has limitations," + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 314, + 721, + 530, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 721, + 530, + 732 + ], + "spans": [ + { + "bbox": [ + 314, + 721, + 530, + 732 + ], + "type": "text", + "content": "11StarCraft Multi-Agent Challenge 2 (SMACv2): https://" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 61, + 672, + 285, + 703 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 672, + 285, + 703 + ], + "spans": [ + { + "bbox": [ + 61, + 672, + 285, + 703 + ], + "type": "inline_equation", + "content": "^{8}" + }, + { + "bbox": [ + 61, + 672, + 285, + 703 + ], + "type": "text", + "content": "Multi-Agent Particle Environment: https://github.com/openai/multiagent-particle-envs/tree/master/multiagent/scenarios." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 61, + 703, + 284, + 721 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 703, + 284, + 721 + ], + "spans": [ + { + "bbox": [ + 61, + 703, + 284, + 721 + ], + "type": "inline_equation", + "content": "^{9}" + }, + { + "bbox": [ + 61, + 703, + 284, + 721 + ], + "type": "text", + "content": "StarCraft Multi-Agent Challenge (SMAC): https://github.com/oxwhirl/smac." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 73, + 721, + 257, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 73, + 721, + 257, + 732 + ], + "spans": [ + { + "bbox": [ + 73, + 721, + 257, + 732 + ], + "type": "text", + "content": "10StarCraft II: https://starcraft2.blizzard.com/." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 291, + 739, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 739, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 291, + 739, + 302, + 748 + ], + "type": "text", + "content": "20" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 20 + }, + { + "para_blocks": [ + { + "bbox": [ + 60, + 111, + 286, + 386 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 111, + 286, + 386 + ], + "spans": [ + { + "bbox": [ + 60, + 111, + 286, + 386 + ], + "type": "text", + "content": "including insufficient stochasticity and partial observability, which allows agents to perform well with simple open-loop policies. To address these shortcomings, SMACv2 introduces procedural content generation (PCG), randomizing team compositions and agent positions, ensuring agents face novel, diverse scenarios. Several multi-agent decision-making scenarios are depicted in Figure 7, which are from Benjamin et al. [135]. This requires more sophisticated, closed-loop policies that condition on both ally and enemy information. Additionally, SMACv2 includes the Extended Partial Observability Challenge (EPO), where enemy observations are masked stochastically, forcing agents to adapt to incomplete information and communicate more effectively. 
SMACv2 thus represents a major evolution of the original benchmark, addressing key gaps such as the lack of stochasticity and meaningful partial observability. These changes make SMACv2 a more challenging environment, requiring agents to generalize across varied settings and improve coordination, communication, and decentralized decision-making. Overall, SMACv2 provides a more rigorous testbed for advancing the field of cooperative MARL." + } + ] + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 66, + 394, + 174, + 479 + ], + "blocks": [ + { + "bbox": [ + 66, + 394, + 174, + 479 + ], + "lines": [ + { + "bbox": [ + 66, + 394, + 174, + 479 + ], + "spans": [ + { + "bbox": [ + 66, + 394, + 174, + 479 + ], + "type": "image", + "image_path": "5977daaa3a65dbecf1ae4ebd5d4273f9270a5994fc32afaecaa06a046981ff38.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 482, + 135, + 491 + ], + "lines": [ + { + "bbox": [ + 104, + 482, + 135, + 491 + ], + "spans": [ + { + "bbox": [ + 104, + 482, + 135, + 491 + ], + "type": "text", + "content": "(a) Terran" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 61, + 503, + 286, + 522 + ], + "lines": [ + { + "bbox": [ + 61, + 503, + 286, + 522 + ], + "spans": [ + { + "bbox": [ + 61, + 503, + 286, + 522 + ], + "type": "text", + "content": "Figure 7: Several scenarios from SMACv2 showing agents battling the built-in AI." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 177, + 394, + 282, + 479 + ], + "blocks": [ + { + "bbox": [ + 177, + 394, + 282, + 479 + ], + "lines": [ + { + "bbox": [ + 177, + 394, + 282, + 479 + ], + "spans": [ + { + "bbox": [ + 177, + 394, + 282, + 479 + ], + "type": "image", + "image_path": "cc1555bf214608175b7ecc0036ea0248cb323f3aa8b093c0775f6f0247e1a6cb.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 210, + 482, + 246, + 491 + ], + "lines": [ + { + "bbox": [ + 210, + 482, + 246, + 491 + ], + "spans": [ + { + "bbox": [ + 210, + 482, + 246, + 491 + ], + "type": "text", + "content": "(b) Protoss" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 60, + 531, + 287, + 711 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 531, + 287, + 711 + ], + "spans": [ + { + "bbox": [ + 60, + 531, + 287, + 711 + ], + "type": "text", + "content": "Google Research Football Environment (GFootball) [55] is a state-of-the-art multi-agent simulation environment developed by the Google Research Brain Team. It is specifically designed for reinforcement learning research and is built on top of the open-source football game, GamePlay Football. GFootball is compatible with the OpenAI Gym API, making it a versatile tool not only for training intelligent agents but also for allowing players to interact with the built-in AI or trained agents using a keyboard or game controller. GFootball features an advanced, physics-based 3D football simulator where agents can be trained to play football, offering a challenging yet highly customizable platform for testing novel reinforcement learning algorithms and ideas. 
GFootball is tailored for multi-agent" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 111, + 531, + 184 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 111, + 531, + 184 + ], + "spans": [ + { + "bbox": [ + 305, + 111, + 531, + 184 + ], + "type": "text", + "content": "experiments and multiplayer scenarios, enabling the exploration of more complex interactions and strategies. GFootball supports various scenarios, including fullgame simulations with varying difficulty levels, as well as simpler tasks in the Football Academy that focus on specific skills like passing or scoring." + } + ] + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 313, + 195, + 420, + 260 + ], + "blocks": [ + { + "bbox": [ + 313, + 195, + 420, + 260 + ], + "lines": [ + { + "bbox": [ + 313, + 195, + 420, + 260 + ], + "spans": [ + { + "bbox": [ + 313, + 195, + 420, + 260 + ], + "type": "image", + "image_path": "fe125f7ad7146c8d20a68dd683d2c487a0c2aa784c5c3f04bb080f05675d86ff.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 421, + 195, + 524, + 260 + ], + "blocks": [ + { + "bbox": [ + 421, + 195, + 524, + 260 + ], + "lines": [ + { + "bbox": [ + 421, + 195, + 524, + 260 + ], + "spans": [ + { + "bbox": [ + 421, + 195, + 524, + 260 + ], + "type": "image", + "image_path": "e5ddbb87534db84e6d03407faae41692168b868eafe9607c3e2c250685d3d32e.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 312, + 261, + 420, + 327 + ], + "blocks": [ + { + "bbox": [ + 312, + 261, + 420, + 327 + ], + "lines": [ + { + "bbox": [ + 312, + 261, + 420, + 327 + ], + "spans": [ + { + "bbox": [ + 312, + 261, + 420, + 327 + ], + "type": "image", + "image_path": "92ae51be2b8df48d17be35b8d6b2ad59fdf7b1204444fa6c3a8abf9fe3652419.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 306, + 339, + 531, + 359 + ], + "lines": [ + { + "bbox": [ + 306, + 339, + 531, + 359 + ], + "spans": [ + { + "bbox": [ + 306, + 339, + 531, + 359 + ], + "type": "text", + "content": "Figure 8: Typical examples of Training Scenarios in Football Academy." + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 421, + 261, + 524, + 327 + ], + "blocks": [ + { + "bbox": [ + 421, + 261, + 524, + 327 + ], + "lines": [ + { + "bbox": [ + 421, + 261, + 524, + 327 + ], + "spans": [ + { + "bbox": [ + 421, + 261, + 524, + 327 + ], + "type": "image", + "image_path": "cd543be592269ea50a5b717ea12435ce30b9687643df4f91e24b491f70ec8d60.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + } + ], + "index": 11 + }, + { + "bbox": [ + 305, + 371, + 531, + 609 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 371, + 531, + 609 + ], + "spans": [ + { + "bbox": [ + 305, + 371, + 531, + 609 + ], + "type": "text", + "content": "Moreover, training agents for the \"Football Benchmark\" can be quite challenging. To help researchers efficiently test and iterate on new ideas, researchers provide a toolset called \"Football Academy\", as illustrated in Figure 8, which includes a series of scenarios with varying levels of difficulty. 
These scenarios range from simple setups, such as a single player scoring against an open goal (e.g., approaching an open goal, scoring in an open goal, or scoring while running), to more complex team-based setups in which a controlled team must break through specific defensive formations (e.g., scoring while running against a goalkeeper, passing and shooting against a goalkeeper, and 3v1 against a goalkeeper). The toolset also covers common match situations such as corner kicks, simple counterattacks, and complex counterattacks. Finally, as a well-known open-source GitHub project, GFootball gives researchers a reproducible and scalable platform for pushing the boundaries of AI research.
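Since GFootball follows the OpenAI Gym API, a Football Academy drill can be created and stepped like any other Gym environment. The sketch below assumes the `gfootball` pip package; the scenario name and representation are examples, and argument names follow its documented `create_environment` interface but may differ slightly between releases (multi-agent control is enabled through an additional argument such as `number_of_left_players_agent_controls` in recent versions).

```python
# Minimal Gym-style loop over a Football Academy scenario with random actions.
# Assumes the `gfootball` package is installed.
import gfootball.env as football_env

env = football_env.create_environment(
    env_name="academy_empty_goal_close",   # one of the simple Academy drills
    representation="simple115v2",          # compact float-vector observation
    render=False,
)

obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()     # replace with a trained policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode reward:", total_reward)
```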
+ } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 290, + 739, + 301, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 739, + 301, + 748 + ], + "spans": [ + { + "bbox": [ + 290, + 739, + 301, + 748 + ], + "type": "text", + "content": "21" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 21 + }, + { + "para_blocks": [ + { + "bbox": [ + 61, + 111, + 286, + 183 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 111, + 286, + 183 + ], + "spans": [ + { + "bbox": [ + 61, + 111, + 286, + 183 + ], + "type": "text", + "content": "ing methods, including reinforcement learning, imitation learning, and neuroevolution, through an intuitive Python API. The platform includes state-of-the-art algorithm implementations (based on PyTorch), allowing researchers and developers to train agents for 2D, 3D, and VR/AR applications." + } + ] + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 66, + 195, + 283, + 400 + ], + "blocks": [ + { + "bbox": [ + 66, + 195, + 283, + 400 + ], + "lines": [ + { + "bbox": [ + 66, + 195, + 283, + 400 + ], + "spans": [ + { + "bbox": [ + 66, + 195, + 283, + 400 + ], + "type": "image", + "image_path": "53875bfadf5937371eb9290dac80d5ffbbc6ef1faa61797072bc5b31078641a6.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 61, + 410, + 287, + 468 + ], + "lines": [ + { + "bbox": [ + 61, + 410, + 287, + 468 + ], + "spans": [ + { + "bbox": [ + 61, + 410, + 287, + 468 + ], + "type": "text", + "content": "Figure 9: Typical Training Scenarios in Unity Machine Learning-Agents Toolkit (released version: v0.11). From Left-to-right, up-to-down: (a) Basic, (b) 3DBall, (c) Crawler, (d) Push Block, (e) Tennis, (f) Worm, (g) Bouncer, (h) Grid World, (i) Walker, (j) Reacher, (k) Food Collector, (l) Pyramids, (m) Wall Jump, (n) Hallway, (o) Soccer Twos [194]." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 481, + 287, + 599 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 481, + 287, + 599 + ], + "spans": [ + { + "bbox": [ + 61, + 481, + 287, + 599 + ], + "type": "text", + "content": "ML-Agents is particularly useful for training NPC behaviors in diverse scenarios, automated testing of game builds, and evaluating game design decisions. It features a highly flexible simulation environment with realistic visuals, physics-driven interactions, and rich task complexity. By integrating tools for creating custom environments and supporting multi-agent and adversarial settings, the toolkit bridges the gap between AI research and practical applications in game development." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 601, + 287, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 601, + 287, + 732 + ], + "spans": [ + { + "bbox": [ + 61, + 601, + 287, + 732 + ], + "type": "text", + "content": "As seen from Figure 9, it depicts several typical multi-agent environments from the previous work of Juliani et al. [194]. The platform also provides key components such as a Python API, Unity SDK, and pre-built environments, enabling users to customize and evaluate their algorithms in Unity's interactive and visually rich settings. 
With its versatility and accessibility, Unity ML-Agents Toolkit has become an indispensable resource for both AI researchers and game developers, driving innovation in artificial intelligence and simulation-based learning." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 305, + 111, + 532, + 279 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 111, + 532, + 279 + ], + "spans": [ + { + "bbox": [ + 305, + 111, + 532, + 279 + ], + "type": "text", + "content": "Gym-Microrts" + }, + { + "bbox": [ + 305, + 111, + 532, + 279 + ], + "type": "inline_equation", + "content": "^{14}" + }, + { + "bbox": [ + 305, + 111, + 532, + 279 + ], + "type": "text", + "content": " [183] (pronounced \"Gym-micro-RTS\") is a fast and affordable reinforcement learning (RL) platform designed to facilitate research in fullgame Real-Time Strategy (RTS) games. Unlike traditional RTS research that demands extensive computational resources, Gym- " + }, + { + "bbox": [ + 305, + 111, + 532, + 279 + ], + "type": "inline_equation", + "content": "\\mu" + }, + { + "bbox": [ + 305, + 111, + 532, + 279 + ], + "type": "text", + "content": "RTS allows training advanced agents using limited hardware, such as a single GPU and CPU setup, within reasonable timeframes. Figure 10 showcases a match between our best-trained agent (top-left) and CoacAI (bottom-right), the " + }, + { + "bbox": [ + 305, + 111, + 532, + 279 + ], + "type": "inline_equation", + "content": "2020\\mu" + }, + { + "bbox": [ + 305, + 111, + 532, + 279 + ], + "type": "text", + "content": "RTS AI competition champion. The agent employs an efficient strategy, starting with resource harvesting and a worker rush to damage the enemy base, transitioning into midgame combat unit production to secure victory." 
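The toolkit's Python API (the `mlagents_envs` package) lets an external script drive a compiled Unity build directly. The snippet below sketches the low-level loop of querying behavior specs, reading decision steps, and sending random actions; the build path is a placeholder, and the class and method names follow recent ML-Agents releases, so the exact signatures should be treated as assumptions to verify against the installed version.

```python
# Low-level interaction sketch with a Unity build via mlagents_envs
# (names per recent ML-Agents releases; verify against your version).
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="path/to/YourUnityBuild")   # placeholder build path
env.reset()

behavior_name = list(env.behavior_specs)[0]                  # e.g. "3DBall?team=0"
spec = env.behavior_specs[behavior_name]

for _ in range(100):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    # One random action per agent currently requesting a decision.
    action = spec.action_spec.random_action(len(decision_steps))
    env.set_actions(behavior_name, action)
    env.step()

env.close()
```

For training rather than raw interaction, the same build is more commonly driven through the `mlagents-learn` trainer, but the low-level loop above is what custom algorithms plug into.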
+ } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 320, + 286, + 413, + 375 + ], + "blocks": [ + { + "bbox": [ + 320, + 286, + 413, + 375 + ], + "lines": [ + { + "bbox": [ + 320, + 286, + 413, + 375 + ], + "spans": [ + { + "bbox": [ + 320, + 286, + 413, + 375 + ], + "type": "image", + "image_path": "f72b8927d7ad0f20dd420e3c0537c50cb2a61fc6a171923ecd883bbc96ce2f69.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 415, + 288, + 518, + 372 + ], + "blocks": [ + { + "bbox": [ + 415, + 288, + 518, + 372 + ], + "lines": [ + { + "bbox": [ + 415, + 288, + 518, + 372 + ], + "spans": [ + { + "bbox": [ + 415, + 288, + 518, + 372 + ], + "type": "image", + "image_path": "7ed8fe37724e932a2a3ab825bea994f6e768276e6ff3027446672d60c35542b5.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 320, + 377, + 418, + 467 + ], + "blocks": [ + { + "bbox": [ + 320, + 377, + 418, + 467 + ], + "lines": [ + { + "bbox": [ + 320, + 377, + 418, + 467 + ], + "spans": [ + { + "bbox": [ + 320, + 377, + 418, + 467 + ], + "type": "image", + "image_path": "b462ee5279fc07101d5d0eb3e7f398608f911c11288edf318519293f4d90f54c.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 305, + 481, + 531, + 510 + ], + "lines": [ + { + "bbox": [ + 305, + 481, + 531, + 510 + ], + "spans": [ + { + "bbox": [ + 305, + 481, + 531, + 510 + ], + "type": "text", + "content": "Figure 10: Screenshot of our best-trained agent (top-left) playing against CoacAI (bottom-right), the " + }, + { + "bbox": [ + 305, + 481, + 531, + 510 + ], + "type": "inline_equation", + "content": "2020\\mu \\mathrm{RTS}" + }, + { + "bbox": [ + 305, + 481, + 531, + 510 + ], + "type": "text", + "content": " AI competition champion [183]." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 421, + 373, + 518, + 467 + ], + "blocks": [ + { + "bbox": [ + 421, + 373, + 518, + 467 + ], + "lines": [ + { + "bbox": [ + 421, + 373, + 518, + 467 + ], + "spans": [ + { + "bbox": [ + 421, + 373, + 518, + 467 + ], + "type": "image", + "image_path": "bf1aad07e6b6dfae82304cdea60c98c659bb3ab718316201042409f300a4afad.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 305, + 519, + 532, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 519, + 532, + 651 + ], + "spans": [ + { + "bbox": [ + 305, + 519, + 532, + 651 + ], + "type": "text", + "content": "The platform offers a simplified RTS environment that captures the core challenges of RTS games, including combinatorial action spaces, real-time decision-making, and partial observability. Gym- " + }, + { + "bbox": [ + 305, + 519, + 532, + 651 + ], + "type": "inline_equation", + "content": "\\mu" + }, + { + "bbox": [ + 305, + 519, + 532, + 651 + ], + "type": "text", + "content": "RTS employs a low-level action space, enabling fine-grained control over individual units without AI assistance, which poses unique challenges and opportunities for RL algorithms. It supports Proximal Policy Optimization (PPO) and incorporates techniques like invalid action masking, action composition, and diverse training opponents to enhance training efficiency and agent performance." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 305, + 651, + 531, + 700 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 651, + 531, + 700 + ], + "spans": [ + { + "bbox": [ + 305, + 651, + 531, + 700 + ], + "type": "text", + "content": "Gym- " + }, + { + "bbox": [ + 305, + 651, + 531, + 700 + ], + "type": "inline_equation", + "content": "\\mu" + }, + { + "bbox": [ + 305, + 651, + 531, + 700 + ], + "type": "text", + "content": "RTS has demonstrated its effectiveness by producing state-of-the-art DRL agents capable of defeating top competition bots, such as CoacAI. The platform is open-source and provides all necessary tools for" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 307, + 712, + 371, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 712, + 371, + 732 + ], + "spans": [ + { + "bbox": [ + 307, + 712, + 371, + 732 + ], + "type": "inline_equation", + "content": "^{14}" + }, + { + "bbox": [ + 307, + 712, + 371, + 732 + ], + "type": "text", + "content": "Gym-Microrts: gym-microrts." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 418, + 712, + 530, + 722 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 418, + 712, + 530, + 722 + ], + "spans": [ + { + "bbox": [ + 418, + 712, + 530, + 722 + ], + "type": "text", + "content": "https://github.com/kered9/" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 290, + 739, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 739, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 290, + 739, + 302, + 748 + ], + "type": "text", + "content": "22" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 22 + }, + { + "para_blocks": [ + { + "bbox": [ + 61, + 111, + 286, + 147 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 111, + 286, + 147 + ], + "spans": [ + { + "bbox": [ + 61, + 111, + 286, + 147 + ], + "type": "text", + "content": "researchers to explore and advance RL techniques in RTS games, making it a valuable resource for both AI researchers and gaming enthusiasts." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 147, + 286, + 242 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 147, + 286, + 242 + ], + "spans": [ + { + "bbox": [ + 61, + 147, + 286, + 242 + ], + "type": "inline_equation", + "content": "\\mathbf{MAgent}^{15}" + }, + { + "bbox": [ + 61, + 147, + 286, + 242 + ], + "type": "text", + "content": " [184] is an open-source platform specifically designed to support large-scale MARL research, with a focus on exploring Artificial Collective Intelligence (ACI). Unlike traditional MARL platforms, MAgent excels in handling scenarios involving hundreds to millions of agents, making it ideal for studying complex interactions and emergent behaviors in large populations." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 243, + 286, + 363 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 243, + 286, + 363 + ], + "spans": [ + { + "bbox": [ + 61, + 243, + 286, + 363 + ], + "type": "text", + "content": "For instance, as shown in Figure 11, the \"Pursuit\" scenario is a classic example designed to showcase the emergent cooperative behaviors of agents in a predator-prey environment. In this setup, predators work together to capture preys while the preys attempt to evade capture. 
Each predator receives a reward for successfully attacking a prey agent, while prey are penalized if caught. Over time, predators learn collaborative strategies, such as surrounding and trapping prey, highlighting the emergence of local cooperation.
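To make the incentive structure concrete, the sketch below implements this kind of pursuit reward in plain Python: predators gain reward for tagging prey within attack range, and the tagged prey is penalized. All names and constants here are ours for illustration, not MAgent's actual API or reward values.

```python
# Illustrative (hypothetical) reward shaping for a MAgent-style pursuit scenario.
from dataclasses import dataclass

TAG_RANGE = 1.5            # grid distance within which an attack lands (assumption)
PREDATOR_TAG_REWARD = 1.0  # reward for a successful attack (assumption)
PREY_CAUGHT_PENALTY = -1.0 # penalty applied to the caught prey (assumption)

@dataclass
class Agent:
    x: float
    y: float
    is_predator: bool

def dist(a: Agent, b: Agent) -> float:
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

def pursuit_rewards(agents: list[Agent]) -> list[float]:
    """Return one scalar reward per agent for the current step."""
    rewards = [0.0] * len(agents)
    predators = [i for i, a in enumerate(agents) if a.is_predator]
    prey = [i for i, a in enumerate(agents) if not a.is_predator]
    for p in predators:
        for q in prey:
            if dist(agents[p], agents[q]) <= TAG_RANGE:
                rewards[p] += PREDATOR_TAG_REWARD    # predator lands an attack
                rewards[q] += PREY_CAUGHT_PENALTY    # prey is caught
    return rewards
```

Because only nearby attacks pay off, individually greedy predators are implicitly pushed toward encircling behavior, which is the emergent cooperation the scenario is designed to surface.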
+ } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 176, + 498, + 279, + 606 + ], + "blocks": [ + { + "bbox": [ + 176, + 498, + 279, + 606 + ], + "lines": [ + { + "bbox": [ + 176, + 498, + 279, + 606 + ], + "spans": [ + { + "bbox": [ + 176, + 498, + 279, + 606 + ], + "type": "image", + "image_path": "fecce6d79d7fc54c61690b86b34ca2b5b7e04c3550cec69330b8de4f7f92f10e.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 206, + 608, + 242, + 617 + ], + "lines": [ + { + "bbox": [ + 206, + 608, + 242, + 617 + ], + "spans": [ + { + "bbox": [ + 206, + 608, + 242, + 617 + ], + "type": "text", + "content": "Tiger Deer" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "bbox": [ + 61, + 662, + 286, + 698 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 662, + 286, + 698 + ], + "spans": [ + { + "bbox": [ + 61, + 662, + 286, + 698 + ], + "type": "text", + "content": "The platform is based on a grid-world model where agents can perform actions such as moving, turning, or attacking, while perceiving both local and global" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 305, + 111, + 530, + 218 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 111, + 530, + 218 + ], + "spans": [ + { + "bbox": [ + 305, + 111, + 530, + 218 + ], + "type": "text", + "content": "information. Through a flexible Python interface, researchers can easily customize the state space, action space, and reward mechanisms, enabling the rapid creation of complex cooperative or competitive environments. MAgent comes with several built-in scenarios, such as pursuit, resource gathering, and team-based battles, which highlight emergent social behaviors like cooperative strategies, competitive dynamics, and resource monopolization." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 305, + 219, + 531, + 314 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 219, + 531, + 314 + ], + "spans": [ + { + "bbox": [ + 305, + 219, + 531, + 314 + ], + "type": "text", + "content": "MAgent is highly scalable, leveraging GPU-based parallelism to simulate large-scale interactions efficiently. It also provides intuitive visualization tools for real-time observation of agent behaviors, facilitating analysis and debugging. Its flexibility and scalability make MAgent a powerful tool for MARL research, enabling the study of large-scale agent interactions, emergent behaviors, and the dynamics of artificial societies." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 306, + 327, + 527, + 338 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 327, + 527, + 338 + ], + "spans": [ + { + "bbox": [ + 306, + 327, + 527, + 338 + ], + "type": "text", + "content": "3.2. LLMs Reasoning-based Simulation Environments" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 305, + 343, + 530, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 343, + 530, + 486 + ], + "spans": [ + { + "bbox": [ + 305, + 343, + 530, + 486 + ], + "type": "text", + "content": "LLMs-based multi-agent systems have become an essential tool for enhancing the collaboration, reasoning, and decision-making capabilities of autonomous agents [116]. 
By integrating LLMs with simulation platforms, researchers can build complex test environments for studying how multi-agent systems interact across a variety of tasks and scenarios. These environments not only provide rich, dynamic testing scenarios but also promote the broader application of LLMs to task planning, coordination, and execution. The following paragraphs introduce several widely used simulation platforms for LLM-based multi-agent systems.
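At a high level, most of the platforms below couple a simulator loop with an LLM call that turns textual observations into the next action or plan. The sketch below is purely illustrative of that pattern: the environment interface and `query_llm` are hypothetical stand-ins, not the API of any specific platform discussed here.

```python
# Hypothetical observe -> prompt -> act loop for LLM-driven agents in a simulator.
from typing import Protocol

class TextEnv(Protocol):
    def observe(self, agent_id: str) -> str: ...
    def act(self, agent_id: str, action: str) -> None: ...
    def done(self) -> bool: ...

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a hosted or local LLM."""
    raise NotImplementedError

def run_episode(env: TextEnv, agent_ids: list[str], task: str, max_steps: int = 50) -> None:
    for _ in range(max_steps):
        if env.done():
            break
        for agent_id in agent_ids:
            obs = env.observe(agent_id)
            prompt = (
                f"You are agent {agent_id} cooperating on: {task}\n"
                f"Observation: {obs}\n"
                "Reply with a single action command."
            )
            env.act(agent_id, query_llm(prompt))
```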
+ } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "type": "text", + "content": "23" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 23 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 66, + 109, + 283, + 232 + ], + "blocks": [ + { + "bbox": [ + 66, + 109, + 283, + 232 + ], + "lines": [ + { + "bbox": [ + 66, + 109, + 283, + 232 + ], + "spans": [ + { + "bbox": [ + 66, + 109, + 283, + 232 + ], + "type": "image", + "image_path": "a1e4180f2a0493ad23fc08481d7cb72ffbfc367e8264189dd05626a0c1ddb953.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 61, + 243, + 286, + 263 + ], + "lines": [ + { + "bbox": [ + 61, + 243, + 286, + 263 + ], + "spans": [ + { + "bbox": [ + 61, + 243, + 286, + 263 + ], + "type": "text", + "content": "Figure 12: An overview of one of the example task in ThreeDWorld Transport Challenge [147, 148]." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 282, + 286, + 450 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 282, + 286, + 450 + ], + "spans": [ + { + "bbox": [ + 61, + 282, + 286, + 450 + ], + "type": "text", + "content": "Communicative Watch-And-Help (C-WAH)" + }, + { + "bbox": [ + 61, + 282, + 286, + 450 + ], + "type": "inline_equation", + "content": "^{17}" + }, + { + "bbox": [ + 61, + 282, + 286, + 450 + ], + "type": "text", + "content": " [99] is a realistic multi-agent simulation environment and an extension of the Watch-And-Help Challenge platform, VirtualHome-Social [98]. C-WAH places a greater emphasis on cooperation and enhances communication between agents compared to VirtualHome-Social. Built on the VirtualHome-Social, C-WAH simulates common household activities where agents must collaborate to complete tasks such as preparing meals, washing dishes, and setting up a dinner table. As shown in Figure 13, C-WAH supports both symbolic and visual observation modes, allowing agents to perceive their surroundings either through detailed object information or egocentric RGB and depth images." + } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 66, + 460, + 283, + 577 + ], + "blocks": [ + { + "bbox": [ + 66, + 460, + 283, + 577 + ], + "lines": [ + { + "bbox": [ + 66, + 460, + 283, + 577 + ], + "spans": [ + { + "bbox": [ + 66, + 460, + 283, + 577 + ], + "type": "image", + "image_path": "3ccc87a05c26f57ebdd77411aba1dbe0626dbca6891e3519086570eadf130ab7.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 61, + 587, + 286, + 608 + ], + "lines": [ + { + "bbox": [ + 61, + 587, + 286, + 608 + ], + "spans": [ + { + "bbox": [ + 61, + 587, + 286, + 608 + ], + "type": "text", + "content": "Figure 13: An typical object-moving task leveraging LLMs-based embodied agents within the Communicative Watch-And-Help [99]." 
+ } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 619, + 287, + 681 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 619, + 287, + 681 + ], + "spans": [ + { + "bbox": [ + 61, + 619, + 287, + 681 + ], + "type": "text", + "content": "Cuisineworld[18] [149] is a virtual kitchen environment designed to evaluate and enhance multi-agent collaboration and coordination (i.e., the working efficiency) in a gaming context. As shown in Figure 14, in this scenario, multiple agents work together to prepare" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 305, + 111, + 531, + 267 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 111, + 531, + 267 + ], + "spans": [ + { + "bbox": [ + 305, + 111, + 531, + 267 + ], + "type": "text", + "content": "and complete dish orders within a limited time frame. The tasks range from simple preparations, like chopping ingredients, to complex cooking processes that involve multiple tools and steps. CuisineWorld is equipped with a textual interface, and it supports various levels of difficulty, making it a flexible and robust testbed for assessing the planning and scheduling capabilities of Large Foundation Models (LFMs). The environment also introduces a \"Collaboration Score\" (CoS) metric to measure the efficiency of agent coordination as task demands increase, providing a comprehensive benchmark for multi-agent system performance in dynamic and cooperative settings." + } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 311, + 279, + 527, + 399 + ], + "blocks": [ + { + "bbox": [ + 311, + 279, + 527, + 399 + ], + "lines": [ + { + "bbox": [ + 311, + 279, + 527, + 399 + ], + "spans": [ + { + "bbox": [ + 311, + 279, + 527, + 399 + ], + "type": "image", + "image_path": "45d8ef4c1e22b8e2f0fce42e73adc7e8d124e191aac3818de1146886c84efdad.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 306, + 411, + 531, + 431 + ], + "lines": [ + { + "bbox": [ + 306, + 411, + 531, + 431 + ], + "spans": [ + { + "bbox": [ + 306, + 411, + 531, + 431 + ], + "type": "text", + "content": "Figure 14: An typical multi-agent cooperative scenario in the CuisineWorld platform [149]." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 442, + 531, + 693 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 442, + 531, + 693 + ], + "spans": [ + { + "bbox": [ + 305, + 442, + 531, + 693 + ], + "type": "text", + "content": "AgentScope[100] is a innovative, robust and flexible multi-agent platform designed to empower developers in building advanced multi-agent systems by leveraging the potential of LLMs. At its core, the platform employs a process-based message exchange mechanism, simplifying the complexities of agent communication and collaboration. This approach ensures smooth and efficient agent interaction, enabling developers to focus on designing workflows rather than low-level details. The platform stands out for its comprehensive fault-tolerance infrastructure, which includes retry mechanisms, rule-based corrections, and customizable error-handling configurations. AgentScope also excels in multi-modal support, seamlessly integrating text, images, audio, and video into its workflows. 
By decoupling data storage and transfer, it optimizes memory usage and enhances scalability, making it ideal for applications requiring rich multi-modal interactions. Additionally, its actor-based distributed framework enables efficient parallel execution and supports hybrid deployments, bridging the gap between local and distributed" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 61, + 702, + 286, + 722 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 702, + 286, + 722 + ], + "spans": [ + { + "bbox": [ + 61, + 702, + 286, + 722 + ], + "type": "text", + "content": "17Communicative Watch-And-Help: https://github.com/xavierpuigf/watch_and_help." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 73, + 722, + 247, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 73, + 722, + 247, + 732 + ], + "spans": [ + { + "bbox": [ + 73, + 722, + 247, + 732 + ], + "type": "text", + "content": "18Cuisineworld: https://mindagent.github.io/." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 307, + 711, + 531, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 711, + 531, + 732 + ], + "spans": [ + { + "bbox": [ + 307, + 711, + 531, + 732 + ], + "type": "text", + "content": "19 AgentScope: https://github.com/modelscope/ agentscope." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 291, + 739, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 739, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 291, + 739, + 302, + 748 + ], + "type": "text", + "content": "24" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 24 + }, + { + "para_blocks": [ + { + "bbox": [ + 62, + 111, + 161, + 121 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 111, + 161, + 121 + ], + "spans": [ + { + "bbox": [ + 62, + 111, + 161, + 121 + ], + "type": "text", + "content": "environments with ease." + } + ] + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 65, + 131, + 283, + 250 + ], + "blocks": [ + { + "bbox": [ + 65, + 131, + 283, + 250 + ], + "lines": [ + { + "bbox": [ + 65, + 131, + 283, + 250 + ], + "spans": [ + { + "bbox": [ + 65, + 131, + 283, + 250 + ], + "type": "image", + "image_path": "d6c01b2de5b8288035011a26518bccdfc88b0dd05ecce397647c732cb8a340e5.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 61, + 260, + 286, + 280 + ], + "lines": [ + { + "bbox": [ + 61, + 260, + 286, + 280 + ], + "spans": [ + { + "bbox": [ + 61, + 260, + 286, + 280 + ], + "type": "text", + "content": "Figure 15: The official multi-modal interaction Web UI page between agents in the AgentScope platform [100]." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 295, + 286, + 461 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 295, + 286, + 461 + ], + "spans": [ + { + "bbox": [ + 61, + 295, + 286, + 461 + ], + "type": "text", + "content": "Moreover, to improve user interaction with multiple agents, AgentScope assigns distinct colors and icons to each agent, as shown in Figure 15, providing clear visual differentiation in both the terminal and web interface. Designed with user accessibility in mind, AgentScope provides intuitive programming tools, including pipelines and message centers, which streamline the development process. 
Its interactive user interfaces, both terminal- and web-based, allow developers to monitor performance, track costs, and engage with agents effectively. These features position AgentScope as a state-of-the-art platform for advancing multi-agent systems, combining ease of use with cutting-edge technology." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 462, + 287, + 690 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 462, + 287, + 690 + ], + "spans": [ + { + "bbox": [ + 61, + 462, + 287, + 690 + ], + "type": "text", + "content": "RoCoBench" + }, + { + "bbox": [ + 61, + 462, + 287, + 690 + ], + "type": "inline_equation", + "content": "^{20}" + }, + { + "bbox": [ + 61, + 462, + 287, + 690 + ], + "type": "text", + "content": " RoCoBench is a benchmark platform, proposed by Mandi et al. [200], designed to evaluate and enhance the collaborative capabilities of multirobot systems powered by LLMs. Built as an extension to the RoCo project" + }, + { + "bbox": [ + 61, + 462, + 287, + 690 + ], + "type": "inline_equation", + "content": "^{21}" + }, + { + "bbox": [ + 61, + 462, + 287, + 690 + ], + "type": "text", + "content": ", RoCoBench provides a realistic simulation environment where robotic agents interact and collaborate to complete complex tasks, as shown in Figure 16, such as sorting packages, assembling components, or preparing a workspace. RoCoBench places a strong emphasis on communication-driven collaboration, integrating both symbolic and visual interaction modes to enable robots to perceive and respond to their environment effectively. Each robot is equipped with LLMs-powered reasoning, facilitating real-time dialogue and coordination. Correspondingly, the benchmark introduces a \"Collaboration Efficiency Metric\" (CEM) to evaluate the effectiveness of multirobot teamwork, taking into account factors like task completion time, resource allocation, and the quality" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 305, + 111, + 532, + 170 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 111, + 532, + 170 + ], + "spans": [ + { + "bbox": [ + 305, + 111, + 532, + 170 + ], + "type": "text", + "content": "of inter-robot communication. RoCoBench serves as a comprehensive platform for evaluating the potential of LLMs in driving dialectic multi-robot collaboration, offering a scalable and flexible environment for developers and researchers alike" + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 311, + 180, + 530, + 283 + ], + "blocks": [ + { + "bbox": [ + 311, + 180, + 530, + 283 + ], + "lines": [ + { + "bbox": [ + 311, + 180, + 530, + 283 + ], + "spans": [ + { + "bbox": [ + 311, + 180, + 530, + 283 + ], + "type": "image", + "image_path": "3dd879c3d4d448f314c8aa26b5c93b0fa8235c3e4bcc25f245fe5d866f8dd71e.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 305, + 292, + 531, + 349 + ], + "lines": [ + { + "bbox": [ + 305, + 292, + 531, + 349 + ], + "spans": [ + { + "bbox": [ + 305, + 292, + 531, + 349 + ], + "type": "text", + "content": "Figure 16: An overview of RoCoBench, a collection of six multirobot collaboration tasks set in a tabletop manipulation environment. The scenarios encompass a diverse range of collaborative challenges, each demanding distinct communication and coordination strategies between robots, incorporating familiar, intuitive objects designed to align with the semantic understanding capabilities of LLMs [200]." 
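The message-exchange style described above can be sketched in a few lines; the example below follows AgentScope's documented pattern of initializing model configurations and passing `Msg` objects between callable agents. The module paths, class names, and configuration keys are assumptions based on its documentation and may differ across AgentScope versions.

```python
# Minimal two-agent exchange in the style of AgentScope's message-passing API.
# Names and config keys are assumptions; check them against the installed version.
import agentscope
from agentscope.agents import DialogAgent
from agentscope.message import Msg

agentscope.init(
    model_configs=[{
        "config_name": "my_chat_model",   # placeholder model configuration
        "model_type": "openai_chat",
        "model_name": "gpt-4o-mini",
    }]
)

planner = DialogAgent(name="planner", model_config_name="my_chat_model",
                      sys_prompt="You decompose tasks into steps.")
worker = DialogAgent(name="worker", model_config_name="my_chat_model",
                     sys_prompt="You execute the steps you are given.")

msg = Msg(name="user", content="Tidy the virtual kitchen.", role="user")
for _ in range(3):          # a short planner <-> worker exchange
    msg = planner(msg)      # agents are callable: Msg in, Msg out
    msg = worker(msg)
```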
+ } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 365, + 532, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 365, + 532, + 689 + ], + "spans": [ + { + "bbox": [ + 305, + 365, + 532, + 689 + ], + "type": "text", + "content": "Generative Agents" + }, + { + "bbox": [ + 305, + 365, + 532, + 689 + ], + "type": "inline_equation", + "content": "^{22}" + }, + { + "bbox": [ + 305, + 365, + 532, + 689 + ], + "type": "text", + "content": " Park et al. [181, 201] introduces Generative Agents, a groundbreaking framework for simulating human behavior in interactive virtual worlds. These agents exhibit realistic individual and collective behaviors by incorporating dynamic memory, self-reflection, and action planning capabilities. The system leverages LLMs to store, retrieve, and synthesize memories into higher-level reasoning, enabling agents to adapt their actions based on personal experiences and evolving environmental changes. As illustrated in Figure 16, they present an interactive sandbox environment called Smallville, akin to \"The Sims,\" where 25 distinct virtual agents live, interact, and carry out daily activities. Each agent has a detailed initial profile, including personal traits, relationships, and goals, stored as \"seed memories.\" Agents engage in natural language-based dialogues and demonstrate social behaviors such as hosting events, making new acquaintances, and responding to user interventions. Generative Agents enable interactive applications in fields such as simulating realistic social dynamics for games and training simulations; designing dynamic, non-scripted virtual worlds for interactive systems; and exploring theories and behaviors in a controlled yet realistic virtual setting. The evaluations revealed the critical role of memory retrieval, self-reflection, and action planning in achieving coherent agent behaviors. Common issues," + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 69, + 711, + 259, + 721 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 711, + 259, + 721 + ], + "spans": [ + { + "bbox": [ + 69, + 711, + 259, + 721 + ], + "type": "inline_equation", + "content": "^{20}" + }, + { + "bbox": [ + 69, + 711, + 259, + 721 + ], + "type": "text", + "content": "RoCoBench: https://project-roco.github.io/." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 70, + 721, + 260, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 70, + 721, + 260, + 732 + ], + "spans": [ + { + "bbox": [ + 70, + 721, + 260, + 732 + ], + "type": "inline_equation", + "content": "^{21}" + }, + { + "bbox": [ + 70, + 721, + 260, + 732 + ], + "type": "text", + "content": "RoCo Project: https://project-roco.github.io/." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 307, + 711, + 531, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 711, + 531, + 732 + ], + "spans": [ + { + "bbox": [ + 307, + 711, + 531, + 732 + ], + "type": "text", + "content": "22Generative Agents: https://youmingyeh.github.io/cs-book/papers/generative-agents." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 291, + 739, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 739, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 291, + 739, + 302, + 748 + ], + "type": "text", + "content": "25" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 25 + }, + { + "para_blocks": [ + { + "bbox": [ + 61, + 111, + 286, + 171 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 111, + 286, + 171 + ], + "spans": [ + { + "bbox": [ + 61, + 111, + 286, + 171 + ], + "type": "text", + "content": "such as exaggerated responses and overly formal communication, were identified as areas for improvement. Generative Agents push the boundaries of human behavior simulation, offering a robust framework for creating autonomous, memory-driven virtual agents." + } + ] + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 67, + 181, + 283, + 297 + ], + "blocks": [ + { + "bbox": [ + 67, + 181, + 283, + 297 + ], + "lines": [ + { + "bbox": [ + 67, + 181, + 283, + 297 + ], + "spans": [ + { + "bbox": [ + 67, + 181, + 283, + 297 + ], + "type": "image", + "image_path": "36660833d9b669e811ee508cad96ba8f3a4987062eebdfae9f1f1f18a635a33d.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 61, + 308, + 287, + 367 + ], + "lines": [ + { + "bbox": [ + 61, + 308, + 287, + 367 + ], + "spans": [ + { + "bbox": [ + 61, + 308, + 287, + 367 + ], + "type": "text", + "content": "Figure 17: Generative agents serve as realistic simulations of human behavior, designed for interactive applications. In a sandbox environment inspired by The Sims, twenty-five agents engage in activities such as planning their routines, sharing updates, building relationships, and collaborating on group events, while allowing users to observe and interact with them. [181, 201]." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 376, + 287, + 602 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 376, + 287, + 602 + ], + "spans": [ + { + "bbox": [ + 61, + 376, + 287, + 602 + ], + "type": "text", + "content": "SocialAI school" + }, + { + "bbox": [ + 61, + 376, + 287, + 602 + ], + "type": "inline_equation", + "content": "^{23}" + }, + { + "bbox": [ + 61, + 376, + 287, + 602 + ], + "type": "text", + "content": " Kovac et al. [202, 203] introduces The SocialAI School, a novel framework designed to explore and develop socio-cognitive abilities in artificial agents. The study emphasizes the importance of sociocognitive skills as foundational to human intelligence and cultural evolution. As shown in Figure 18, the SocialAI School provides a customizable suite of procedurally generated environments that enable systematic research into the socio-cognitive abilities required for artificial agents to interact with and contribute to complex cultures. Built on MiniGrid, it provides procedural environments for RL and LLMs-based agents to study social skills like joint attention, imitation, and scaffolding. Open-source and versatile, it enables diverse research, including generalizing social inferences, role reversal studies, and scaffolded learning. The SocialAI School represents a significant step toward enriching AI systems with socio-cognitive abilities inspired by human development." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 603, + 286, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 603, + 286, + 677 + ], + "spans": [ + { + "bbox": [ + 61, + 603, + 286, + 677 + ], + "type": "text", + "content": "Welfare Diplomacy[24] [204] is an innovative variant of the zero-sum game Diplomacy, designed to evaluate the cooperative capabilities of multi-agent systems. Unlike the original game, which focuses on a single winner, Welfare Diplomacy introduces a general-sum framework where players balance military con" + } + ] + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 313, + 111, + 524, + 253 + ], + "blocks": [ + { + "bbox": [ + 313, + 111, + 524, + 253 + ], + "lines": [ + { + "bbox": [ + 313, + 111, + 524, + 253 + ], + "spans": [ + { + "bbox": [ + 313, + 111, + 524, + 253 + ], + "type": "image", + "image_path": "7d31292bade85ac419cca1f29f6b55c388eca7f69e9ee3bdf8fdeaa392730e54.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 306, + 264, + 531, + 285 + ], + "lines": [ + { + "bbox": [ + 306, + 264, + 531, + 285 + ], + "spans": [ + { + "bbox": [ + 306, + 264, + 531, + 285 + ], + "type": "text", + "content": "Figure 18: A clear workflow of an agent acting in the SocialAI school environment [202, 203]." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "bbox": [ + 305, + 326, + 532, + 576 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 326, + 532, + 576 + ], + "spans": [ + { + "bbox": [ + 305, + 326, + 532, + 576 + ], + "type": "text", + "content": "quest with investments in domestic welfare. Players accumulate Welfare Points (WPs) throughout the game by prioritizing welfare over military expansion, and their total utility at the end of the game is determined by these points, removing the notion of a single \"winner\". Welfare Diplomacy enables clearer assessments of cooperation and provides stronger incentives for training cooperative AI agents. Players take on the roles of European powers, negotiating, forming alliances, and strategizing to compete for key supply centers. Orders are submitted and resolved simultaneously, with the goal of controlling a specified number of SCs to achieve victory, emphasizing a balance between cooperation and betrayal. Based on these rules, Welfare Diplomacy implements themselves via an open-source platform, and develops zero-shot baseline agents using advanced language models like GPT-4 [141, 140]. Experiments reveal that while these agents achieve high social welfare through mutual demilitarization, they remain vulnerable to exploitation, highlighting room for future improvement." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 589, + 532, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 589, + 532, + 731 + ], + "spans": [ + { + "bbox": [ + 305, + 589, + 532, + 731 + ], + "type": "text", + "content": "In summary, these cutting-edge LLMs-powered simulation environments—ranging from task-specific platforms like TDW-MAT [148, 147] and CuisineWorld [149] to socially-driven frameworks such as Generative Agents [201] and the SocialAI School [203]—highlight the transformative potential of integrating advanced AI reasoning and multi-agent systems. 
By fostering research on collaboration, social cognition, and cooperative decision-making, these tools not only advance our understanding of AI's capabilities but also pave the way for practical applications in dynamic, real-world scenarios." + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 61, + 692, + 287, + 734 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 692, + 287, + 734 + ], + "spans": [ + { + "bbox": [ + 61, + 692, + 287, + 734 + ], + "type": "text", + "content": "23SocialAI school project: https://sites.google.com/view/socialai-school. 24Welfare Diplomacy: https://github.com/mukobi/welfare-diplomacy." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 291, + 740, + 302, + 749 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 740, + 302, + 749 + ], + "spans": [ + { + "bbox": [ + 291, + 740, + 302, + 749 + ], + "type": "text", + "content": "26" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 26 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 75, + 109, + 273, + 257 + ], + "blocks": [ + { + "bbox": [ + 75, + 109, + 273, + 257 + ], + "lines": [ + { + "bbox": [ + 75, + 109, + 273, + 257 + ], + "spans": [ + { + "bbox": [ + 75, + 109, + 273, + 257 + ], + "type": "image", + "image_path": "027c51c8ae31617b6e731a7c9c023fae96a39cb5a7aa76015ca260b4832f553b.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 61, + 269, + 286, + 290 + ], + "lines": [ + { + "bbox": [ + 61, + 269, + 286, + 290 + ], + "spans": [ + { + "bbox": [ + 61, + 269, + 286, + 290 + ], + "type": "text", + "content": "Figure 19: The Balkans in the Diplomacy map in Welfare Diplomacy [204]." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 61, + 308, + 286, + 334 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 308, + 286, + 334 + ], + "spans": [ + { + "bbox": [ + 61, + 308, + 286, + 334 + ], + "type": "text", + "content": "4. Practice Applications of Multi-Agent Decision-Making" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 61, + 343, + 286, + 522 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 343, + 286, + 522 + ], + "spans": [ + { + "bbox": [ + 61, + 343, + 286, + 522 + ], + "type": "text", + "content": "Multi-agent cooperative decision-making has a wide range of practical applications across various domains. In this section, we delve into the practical applications of multi-agent decision-making, focusing on how advanced methods, particularly multi-agent MARL, are employed to address complex challenges in dynamic and evolving environments. We explore the contributions of advanced multi-agent systems across domains such as agriculture, disaster rescue, military simulations, traffic management, autonomous driving, and multi-robot collaboration. A broad array of applications applications is presented in Table 3. In the following, we will provide a detailed introduction to these applications, highlighting their impact and potential for future advancements." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 536, + 234, + 549 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 536, + 234, + 549 + ], + "spans": [ + { + "bbox": [ + 61, + 536, + 234, + 549 + ], + "type": "text", + "content": "4.1. 
MARL-based Intelligent Applications" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 553, + 286, + 635 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 553, + 286, + 635 + ], + "spans": [ + { + "bbox": [ + 61, + 553, + 286, + 635 + ], + "type": "text", + "content": "Recently, a variety of MARL methods have been developed, fostering efficient collaboration, strategic learning, and adaptive problem-solving in multi-agent systems [83, 24, 25, 28]. Below, we highlight notable contributions that demonstrate the application of MARL in enhancing multi-agent collaboration and performance" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 61, + 637, + 287, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 637, + 287, + 732 + ], + "spans": [ + { + "bbox": [ + 61, + 637, + 287, + 732 + ], + "type": "text", + "content": "In smart precious agriculture and continuous pest disease detection, Seewald et al. [11] addressed the challenge of continuous exploration for multi-agent systems with battery constraints by integrating ergodic search methods with energy-aware coverage. In disaster rescue, Qazzaz et al. [12] proposed a novel technique using a reinforcement learning multi Q-learning algorithm to optimize UAV connectivity operations in" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 111, + 530, + 182 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 111, + 530, + 182 + ], + "spans": [ + { + "bbox": [ + 305, + 111, + 530, + 182 + ], + "type": "text", + "content": "challenging terrain. Samad et al. [23] presents a cloud-based multi-agent framework for efficiently managing aerial robots in disaster response scenarios, aiming to optimize rescue efforts by autonomously processing real-time sensory data to locate and assist injured individuals." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 183, + 531, + 362 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 183, + 531, + 362 + ], + "spans": [ + { + "bbox": [ + 305, + 183, + 531, + 362 + ], + "type": "text", + "content": "In military confrontations, Qi et al. [90] designed a distributed MARL framework based on the actor-work-learner architecture, addressing the issues of slow sample collection and low training efficiency in MARL within the MaCA [89] and SMAC 3D real-time gaming [134, 135] military simulation environments. Benke et al. [208] proposed a computational model for agent decision-making that incorporates strategic deception, enhancing the representation of deceptive behaviors in multi-agent simulations for military operations research. Sutagundar et al. [209] proposed a Context Aware Agent based Military Sensor Network (CAMSN) to enhance multi-sensor image fusion, using node and sink-driven contexts, forming an improved infrastructure for multi-sensor image fusion." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 305, + 362, + 531, + 421 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 362, + 531, + 421 + ], + "spans": [ + { + "bbox": [ + 305, + 362, + 531, + 421 + ], + "type": "text", + "content": "In efficient limited-bandwidth communication field, Wang et al. [242] proposed a method called IMAC (Informative Multi-Agent Communication) to address the problem of limited-bandwidth communication in MARL." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 305, + 422, + 530, + 505 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 422, + 530, + 505 + ], + "spans": [ + { + "bbox": [ + 305, + 422, + 530, + 505 + ], + "type": "text", + "content": "In the research of UAV swarm communications against jamming, Lv et al. [93] proposed a MARL-based scheme to optimize relay selection and power allocation. This strategy leverages network topology, channel states, and shared experiences to improve policy exploration and stability, ultimately enhancing anti-jamming performance." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 305, + 505, + 531, + 684 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 505, + 531, + 684 + ], + "spans": [ + { + "bbox": [ + 305, + 505, + 531, + 684 + ], + "type": "text", + "content": "In UAV pursuit-evasion [20, 21, 22], Kouzeghar [88] proposed a decentralized heterogeneous UAV swarm approach for multi-target pursuit using MARL technique and introduced a variant of the MADDPG [132] to address pursuit-evasion scenarios in non-stationary environments with random obstacles. Alexopoulos et al. [212] addressed the challenge of pursuit-evasion games involving two pursuing and one evading unmanned aerial vehicle (UAV) by introducing a hierarchical decomposition of the game. Luo et al. [213] proposed a cooperative maneuver decision-making method for multi-UAV pursuit-evasion scenarios using an improved MARL approach, which incorporates an enhanced CommNet network with a communication mechanism to address multi-agent coordination." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 305, + 684, + 531, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 684, + 531, + 732 + ], + "spans": [ + { + "bbox": [ + 305, + 684, + 531, + 732 + ], + "type": "text", + "content": "In large-scale traffic signal/flow control, Wang [214] proposed a curiosity-inspired algorithm to optimize safe and smooth traffic flow in autonomous vehicle on-ramp merging; Chu et al. [92] proposed a fully scalable" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "text", + "content": "27" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 27 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 80, + 128, + 512, + 503 + ], + "blocks": [ + { + "bbox": [ + 173, + 118, + 420, + 127 + ], + "lines": [ + { + "bbox": [ + 173, + 118, + 420, + 127 + ], + "spans": [ + { + "bbox": [ + 173, + 118, + 420, + 127 + ], + "type": "text", + "content": "Table 3: Categorized Applications of MARL and LLMsin Diverse Domains." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 80, + 128, + 512, + 503 + ], + "lines": [ + { + "bbox": [ + 80, + 128, + 512, + 503 + ], + "spans": [ + { + "bbox": [ + 80, + 128, + 512, + 503 + ], + "type": "table", + "html": "
<tr><td>Category</td><td>Application Area</td><td>Works / References</td></tr>
<tr><td rowspan="6">MARL-based MAS</td><td>Smart Precision Agriculture & Disaster Rescue</td><td>Seewald et al. [11], Qazzaz et al. [12], Samad et al. [23], Boubin et al. [205], Li et al. [206], Mahajan et al. [207]</td></tr>
<tr><td>Military Confrontations</td><td>Qi et al. [90], Benke et al. [208], Sutagundar et al. [209], Vangaru et al. [210], Wang et al. [211], MaCA [89], SMAC [134], SMAC-v2 [135]</td></tr>
<tr><td>UAV Pursuit-Evasion & Swarm Communications & Navigation</td><td>Kouzeghar [88], Alexopoulos et al. [212], Luo et al. [213], Lv et al. [93], Xue et al. [20], Rezwan et al. [21], Baroomi et al. [22]</td></tr>
<tr><td>Traffic Signal/Flow Control</td><td>Wang [214], Chu et al. [92], Aboueleneen et al. [215], Yu et al. [216], Sun et al. [217], Azfar et al. [218], Bokade et al. [219], Kwesiga et al. [220], Zhang et al. [221]</td></tr>
<tr><td>Autonomous Driving</td><td>Xue et al. [222], Liu et al. [19], Wen et al. [18], Jayawardana et al. [17], Liu et al. [223], Formanek et al. [224], Zhang et al. [225], Kotoku et al. [226], Hua et al. [227]</td></tr>
<tr><td>Multiple Robots Collaborative</td><td>Georgios et al. [13], Silva et al. [14], Huang et al. [15], Cena et al. (SMART) [16], Kevin (SCRIMMAGE) [228], Liu et al. [95]</td></tr>
<tr><td rowspan="4">LLMs-based MAS</td><td>Multi-Agent Collaboration</td><td>Wu et al. (AutoGen) [103], Xiao et al. (CoE) [101], Chen et al. (AgentVerse) [105], Liu et al. (DyLAN) [106], Zhang et al. (CoELA) [96]</td></tr>
<tr><td>Gaming Interaction</td><td>Xu et al. (LLM-Werewolf) [58], Gong et al. (MindAgent) [149], Xie et al. [229], Lin et al. [230], Jia et al. (GameFi) [231], Yin et al. (MIRAGE) [232], Zhang et al. (DVM) [233], Bonorino et al. [234]</td></tr>
<tr><td>Autonomous Driving</td><td>Zheng et al. (PlanAgent) [113], Luo et al. (SenseRAG) [235], Mahmud et al. [236], Peng et al. (LearningFlow) [237], Karagounis et al. [238], Luo et al. [239], Gao et al. [240], Hegde et al. [241]</td></tr>
<tr><td>Multi-Modal Application</td><td>Wang et al. (LangGraph) [111, 112], Zhang et al. (CrewAI) [114, 115], Zheng et al. (PlanAgent) [113], Wang et al. (MLLM-Tool) [102]</td></tr>
", + "image_path": "48f547c67fc780cf871fda26165072e845badaaa642603d3a11c7bf140ae0ce4.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 523, + 286, + 559 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 523, + 286, + 559 + ], + "spans": [ + { + "bbox": [ + 61, + 523, + 286, + 559 + ], + "type": "text", + "content": "and decentralized multi-agent deep reinforcement learning algorithm based on the advantage actor-critic (A2C) method." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 61, + 565, + 287, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 565, + 287, + 732 + ], + "spans": [ + { + "bbox": [ + 61, + 565, + 287, + 732 + ], + "type": "text", + "content": "In autonomous driving area, a large number of superior multi-agent decision-making algorithms and models are continuously being explored and devised. Xue et al. [222] developed a two-stage system framework for improving Multi-Agent Autonomous Driving Systems (MADS) by enabling agents to recognize and understand the Social Value Orientations (SVOs) of other agents. Liu et al. [19] proposed the Personality Modeling Network (PeMN), which includes a cooperation value function and personality parameters to model diverse interactions in highly interactive scenarios, addressing the issue of diverse driving styles in autonomous driving. Wen et al. [18] proposed a safe reinforcement learning algorithm called Parallel" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 305, + 523, + 530, + 619 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 523, + 530, + 619 + ], + "spans": [ + { + "bbox": [ + 305, + 523, + 530, + 619 + ], + "type": "text", + "content": "Constrained Policy Optimization (PCPO) based on the actor-critic architecture to address the issues of unexplainable behaviors and lack of safety guarantees in autonomous driving. Jayawardana et al. [17] proposed enabling socially compatible driving by leveraging human driving data to learn a social preference model, integrating it with reinforcement learning-based AV policy synthesis using Social Value Orientation theory." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 305, + 624, + 531, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 624, + 531, + 732 + ], + "spans": [ + { + "bbox": [ + 305, + 624, + 531, + 732 + ], + "type": "text", + "content": "In multiple robots collaborative fields, Georgios et al. [13] introduces a novel cognitive architecture for large-scale multi-agent Learning from Demonstration (LfD), leveraging Federated Learning (FL) to enable scalable, collaborative, and AI-driven robotic systems in complex environments. Silva et al. [14] address the challenges and limitations in evaluating intelligent collaborative robots for Industry 4.0. 
The review emphasizes the urgent need for improved evaluation methods and" + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 740, + 302, + 749 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 740, + 302, + 749 + ], + "spans": [ + { + "bbox": [ + 291, + 740, + 302, + 749 + ], + "type": "text", + "content": "28" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 28 + }, + { + "para_blocks": [ + { + "bbox": [ + 60, + 111, + 286, + 337 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 111, + 286, + 337 + ], + "spans": [ + { + "bbox": [ + 60, + 111, + 286, + 337 + ], + "type": "text", + "content": "standards to account for the complexities posed by human variability, AI integration, and advanced control systems in industrial environments. Huang et al. [15] presents a multi-agent reinforcement learning approach using the MADDPG algorithm, enhanced with an experience sample optimizer, to train swarm robots for autonomous, collaborative exploration on Mars. This approach outperforms traditional DRL algorithms in efficiency as the number of robots and targets increases. The SMART multi-agent robotic system [16] is a comprehensive and advanced platform designed for executing coordinated robotic tasks. It integrates both hardware components, such as robots and IP-C Cameras, and software agents responsible for image processing, path planning, communication, and decision-making. By utilizing Work-Flow Petri Nets for modeling and control, the system effectively ensures coordination and successful task execution even in unstructured environments." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 60, + 338, + 286, + 601 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 338, + 286, + 601 + ], + "spans": [ + { + "bbox": [ + 60, + 338, + 286, + 601 + ], + "type": "text", + "content": "Furthermore, the well-known project, Simulating Collaborative Robots in a Massive Multi-agent Game Environment (SCRIMMAGE)[25] [228], tackles the high costs of field testing robotic systems by offering a flexible and efficient simulation environment specifically designed for mobile robotics research. Unlike many existing simulators that are primarily designed for ground-based systems with high-fidelity multi-body physics models, SCRIMMAGE focuses on simulating large numbers of aerial vehicles, where precise collision detection and complex physics are often unnecessary. SCRIMMAGE is designed to be highly adaptable, with a plugin-based architecture that supports various levels of sensor fidelity, motion models, and network configurations. This flexibility allows the simulation of hundreds of aircraft with low-fidelity models or a smaller number with high-fidelity models on standard consumer hardware. Overall, SCRIMMAGE[26] provides a robust and scalable solution for testing and refining robotic algorithms in a controlled virtual environment, significantly reducing the risks and costs associated with physical testing." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 60, + 601, + 286, + 684 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 601, + 286, + 684 + ], + "spans": [ + { + "bbox": [ + 60, + 601, + 286, + 684 + ], + "type": "text", + "content": "Liu et al. [95] proposed the Learning before Interaction (LBI) framework, a novel approach designed to enhance multi-agent decision-making through generative world models. 
Traditional generative models struggle with trial-and-error reasoning, often failing to produce reliable solutions for complex multi-agent tasks. To address this limitation, LBI integrates a language-" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 305, + 111, + 532, + 326 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 111, + 532, + 326 + ], + "spans": [ + { + "bbox": [ + 305, + 111, + 532, + 326 + ], + "type": "text", + "content": "guided simulator into the MARL pipeline, enabling agents to learn grounded reasoning through simulated experiences. LBI consists of a world model composed of a dynamics model and a reward model. The dynamics model incorporates a vector quantized variational autoencoder (VQ-VAE) [243] for discrete image representation and a causal transformer to autoregressively generate interaction transitions. Meanwhile, the reward model employs a bidirectional transformer trained on expert demonstrations to provide task-specific rewards based on natural language descriptions. LBI further distinguishes itself by generating explainable interaction sequences and reward functions, providing interpretable solutions for multi-agent decision-making problems. By addressing challenges such as the compositional complexity of MARL environments and the scarcity of paired text-image datasets, LBI represents a significant advancement in the field." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 305, + 326, + 532, + 576 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 326, + 532, + 576 + ], + "spans": [ + { + "bbox": [ + 305, + 326, + 532, + 576 + ], + "type": "text", + "content": "Ye et al. [244] proposed an adaptive genetic algorithm (AGA) that dynamically adjusts crossover and mutation populations, leveraging the Dubins car model and state-transition strategies to optimize the allocation of heterogeneous UAVs in suppression of enemy air defense missions. Radac et al. combine two modelfree controller tuning techniques linear virtual reference feedback tuning (VRFT) and nonlinear state-feedback Q-learning as a novel mixed VRFT-Q learning method [94]. VRFT is initially employed to determine a stabilizing feedback controller using input-output experimental data within a model reference tracking framework. Subsequently, reinforcement Q-learning is applied in the same framework, utilizing input-state experimental data gathered under perturbed VRFT to ensure effective exploration. Extensive simulations on position control of a two-degrees-of-motion open-loop stable multi input-multi output (MIMO) aerodynamic system (AS) demonstrates the mixed VRFT-Q's significant performance improvement over the Q-learning controllers and the VRFT controllers." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 305, + 577, + 533, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 577, + 533, + 733 + ], + "spans": [ + { + "bbox": [ + 305, + 577, + 533, + 733 + ], + "type": "text", + "content": "To address the lack of a general metric for quantifying policy differences in MARL problems, Hu et al. [84] proposed the Multi-Agent Policy Distance (MAPD), a tool designed to measure policy differences among agents. Additionally, they developed a Multi-Agent Dynamic Parameter Sharing (MADPS) algorithm based on MAPD, demonstrating its effectiveness in enhancing policy diversity and overall performance through extensive experiments. 
To addresses the challenge of cooperative MARL in scenarios with dynamic team compositions, Wang et al. [245] propose using mutual information as an augmented reward to encourage robustness in agent policies across different team configurations." + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 61, + 702, + 286, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 702, + 286, + 732 + ], + "spans": [ + { + "bbox": [ + 61, + 702, + 286, + 732 + ], + "type": "text", + "content": "25 SCRIMMAGE Web: http://www.scrimmagesim.org/. 26 SCRIMMAGE project: https://github.com/gtri/ scrimmage." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 291, + 739, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 739, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 291, + 739, + 302, + 748 + ], + "type": "text", + "content": "29" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 29 + }, + { + "para_blocks": [ + { + "bbox": [ + 63, + 108, + 286, + 515 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 108, + 286, + 515 + ], + "spans": [ + { + "bbox": [ + 63, + 108, + 286, + 515 + ], + "type": "text", + "content": "They develop a multi-agent policy iteration algorithm with a fixed marginal distribution and demonstrate its strong zero-shot generalization to dynamic team compositions in complex cooperative tasks. Progressive Mutual Information Collaboration (PMIC)" + }, + { + "bbox": [ + 63, + 108, + 286, + 515 + ], + "type": "inline_equation", + "content": "^{27}" + }, + { + "bbox": [ + 63, + 108, + 286, + 515 + ], + "type": "text", + "content": " is a novel framework that leverages mutual information (MI) to guide collaboration among agents, thereby enhancing performance in multi-agent cooperative tasks [246]. The key innovation of is its dual MI objectives: maximizing MI associated with superior collaborative behaviors and minimizing MI linked to inferior ones, ensuring more effective learning and avoiding sub-optimal collaborations. Wai et al. [83] proposes a novel double averaging primal-dual optimization algorithm for MARL, specifically targeting decentralized applications like sensor networks and swarm robotics. The MARL algorithm enables agents to collaboratively evaluate policies by incorporating neighboring gradient and local reward information, achieving fast finite-time convergence to the optimal solution in decentralized convex-concave saddle-point problems. To address the challenge of sparse rewards in MARL, Kang et al. [247] introduce the Dual Preferences-based Multi-Agent Reinforcement Learning (DPM) framework. DPM extends preference-based reinforcement learning (PbRL) by incorporating dual preference types-comparing both trajectories and individual agent contributions-thereby optimizing individual reward functions more effectively. DPM also leverages LLMs to gather preferences, mitigating issues associated with human-based preference collection. Experimental results in the StarCraft Multi-Agent Challenge (SMAC) [135] demonstrate that DPM significantly outperforms existing baselines, particularly in sparse reward settings." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 63, + 518, + 286, + 697 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 518, + 286, + 697 + ], + "spans": [ + { + "bbox": [ + 63, + 518, + 286, + 697 + ], + "type": "text", + "content": "Traditional methods like soft attention struggle with scalability and efficiency in LMAS due to the overwhelming number of agent interactions and large observation spaces. To address these challenges of largescale multi-agent systems (LMAS) involving hundreds of agents, University of Chinese Academy of Sciences [248] introduces the Concentration Network (ConcNet), a novel reinforcement learning framework. ConcNet mimics human cognitive processes of concentration by prioritizing and aggregating observations based on motivational indices, such as expected survival time and state value. It allows the system to focus on the most relevant entities, enhancing decision-making efficiency in complex environments. In ConcNet, a novel concentration policy gradient architecture was further proposed," + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 308, + 111, + 530, + 182 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 111, + 530, + 182 + ], + "spans": [ + { + "bbox": [ + 308, + 111, + 530, + 182 + ], + "type": "text", + "content": "demonstrating its superior performance and scalability in large-scale multi-agent scenarios, such as decentralized collective assault simulations. This research represents a significant advancement in the field, providing a scalable solution for effective decision-making in large-scale multi-agent environments." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 308, + 184, + 530, + 338 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 184, + 530, + 338 + ], + "spans": [ + { + "bbox": [ + 308, + 184, + 530, + 338 + ], + "type": "text", + "content": "In conclusion, MARL-based intelligent applications have shown exceptional adaptability across diverse domains such as autonomous driving, UAV systems, disaster response, and collaborative robotics [242, 88, 20, 19, 13, 16]. Key innovations, including communication-enhanced learning [168, 161, 174, 139], adaptive policy optimization, and mutual information [246] frameworks, have significantly advanced the field. While challenges like sparse rewards and scalability remain, these advancements highlight MARL's potential to address dynamic and complex multi-agent environments effectively, paving the way for further impactful developments." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 308, + 354, + 517, + 366 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 354, + 517, + 366 + ], + "spans": [ + { + "bbox": [ + 308, + 354, + 517, + 366 + ], + "type": "text", + "content": "4.2. LLMs reasoning-based Intelligent Applications" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 308, + 374, + 530, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 374, + 530, + 456 + ], + "spans": [ + { + "bbox": [ + 308, + 374, + 530, + 456 + ], + "type": "text", + "content": "To address diverse and complex challenges, a variety of frameworks leveraging LLMs have been developed, enabling advanced reasoning, collaboration, and task execution in multi-agent systems [116, 30, 57]. Below, we highlight notable contributions that demonstrate the application of LLMs in enhancing multi-agent decision-making and coordination." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 308, + 458, + 530, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 458, + 530, + 731 + ], + "spans": [ + { + "bbox": [ + 308, + 458, + 530, + 731 + ], + "type": "text", + "content": "Wu et al. [103] introduced AutoGen, an open-source framework designed to enable the development of next-generation LLM applications through multi-agent conversations. AutoGen allows for customizable agent interactions and the integration of LLMs, human inputs, and tools to collaboratively solve complex tasks. Xiao et al. [101] proposed Chain-of-Experts (CoE), a novel multi-agent framework designed to enhance reasoning in complex operations research (OR) problems using LLMs. Chen et al. [105] presented AgentVerse, a multiagent framework designed to facilitate collaboration among autonomous agents, inspired by human group dynamics. AgentVerse dynamically adjusts the composition and roles of agents throughout the problem-solving process, enhancing their ability to tackle complex tasks across various domains, including text understanding, reasoning, coding, and embodied AI. The framework consists of four stages: Expert Recruitment, Collaborative Decision-Making, Action Execution, and Evaluation. Chen et al. [107] introduced AutoAgents, a framework capable of adaptively generating and coordinating multiple specialized agents based on different tasks, thereby constructing efficient multi-agent teams" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 71, + 722, + 257, + 731 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 722, + 257, + 731 + ], + "spans": [ + { + "bbox": [ + 71, + 722, + 257, + 731 + ], + "type": "text", + "content": "27PMIC code: https://github.com/yeshenpy/PMIC." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "type": "text", + "content": "30" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 30 + }, + { + "para_blocks": [ + { + "bbox": [ + 61, + 109, + 286, + 673 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 109, + 286, + 673 + ], + "spans": [ + { + "bbox": [ + 61, + 109, + 286, + 673 + ], + "type": "text", + "content": "to accomplish complex tasks. Liu et al. [106] proposed the Dynamic LLM-Agent Network (DyLAN), a framework designed to enhance LLM-agent collaboration by enabling agents to interact dynamically based on task requirements, rather than within a static architecture. Xu et al. [58] proposed a novel multi-agent framework that combines LLMs with reinforcement learning to enhance strategic decision-making and communication in the Werewolf game" + }, + { + "bbox": [ + 61, + 109, + 286, + 673 + ], + "type": "inline_equation", + "content": "^{28}" + }, + { + "bbox": [ + 61, + 109, + 286, + 673 + ], + "type": "text", + "content": ", effectively overcoming intrinsic biases and achieving human-level performance. Wen et al. [104] introduce the Multi-Agent Transformer (MAT), a novel architecture that frames cooperative MARL as a sequence modeling problem. 
Experiments on StarCraftII [154, 91, 134], Multi-Agent MuJoCo (MAMuJoCo) [249], Dexterous Hands Manipulation [185, 186], and Google Research Football [55] benchmarks demonstrate that it achieves superior performance and data efficiency by leveraging modern sequence models in an on-policy learning framework. Wang et al. [102] introduced MLLM-Tool" + }, + { + "bbox": [ + 61, + 109, + 286, + 673 + ], + "type": "inline_equation", + "content": "^{29}" + }, + { + "bbox": [ + 61, + 109, + 286, + 673 + ], + "type": "text", + "content": ", a multimodal tool agent system that integrates open-source LLMs with multimodal encoders, enabling it to process visual and auditory inputs for selecting appropriate tools based on ambiguous multimodal instructions. Moreover, they introduced ToolMMBench, a novel benchmark with multi-modal inputs and multioption solutions, demonstrating its effectiveness in addressing real-world multimodal multi-agent scenarios. Zhang et al. [96] introduce CoELA, a Cooperative Embodied Language Agent framework that leverages LLMs to enhance multi-agent cooperation in complex, decentralized environments. CoELA integrates LLMs with cognitive-inspired modules for perception, memory and execution, allowing agents to plan, communicate, and collaborate effectively on long-horizon tasks, outperforming traditional planning-based methods such as Multi-Agent Transformer(MAT) [104], and showing promising results in human-agent interaction simulation environments, Communicative Watch-And-Help (C-WAH) [99] and ThreeDWorld Multi-Agent Transport (TDW-MAT) [147, 148]. Gong et al. [149] from Team of Li.FeiFei. introduce MindAgent, a novel infrastructure for evaluating planning and coordination capabilities in gaming interactions, leveraging large foundation models (LFMs) to coordinate multi-agent system (MAS), collaborate with human players, and enable in-context learning. Their team also present" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 305, + 111, + 532, + 350 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 111, + 532, + 350 + ], + "spans": [ + { + "bbox": [ + 305, + 111, + 532, + 350 + ], + "type": "text", + "content": "\"Cuisineworld\", a new gaming scenario and benchmark for assessing multi-agent collaboration efficiency. Despite LLMs' success in various collaborative tasks, they struggle with spatial and decentralized decision-making required for flocking. Li et al. [57] explored the challenges faced by LLMs in solving multi-agent flocking tasks, where agents strive to stay close, avoid collisions, and maintain a formation. Sun et al. [180] proposed Corex, a novel framework that enhances complex reasoning by leveraging multi-model collaboration. Inspired by human cognitive processes, Corex employs three collaborative paradigms-Discuss, Review, and Retrieve-where different LLMs act as autonomous agents to collectively solve complex tasks. Corex empowers LLM agents to \"think outside the box\" by facilitating collaborative group discussions, effectively mitigating the cognitive biases inherent in individual LLMs. This approach not only enhances performance but also improves cost-effectiveness and annotation efficiency, offering a significant advantage in complex tasks." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 305, + 350, + 532, + 386 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 350, + 532, + 386 + ], + "spans": [ + { + "bbox": [ + 305, + 350, + 532, + 386 + ], + "type": "text", + "content": "Next, we will provide a detailed introduction to some outstanding achievements in the application of LLMs for multi-agent collaborative task execution." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 305, + 386, + 533, + 660 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 386, + 533, + 660 + ], + "spans": [ + { + "bbox": [ + 305, + 386, + 533, + 660 + ], + "type": "text", + "content": "MetaGPT: Existing LLMs-based multi-agent systems often struggle with complex tasks due to logical inconsistencies and cumulative hallucinations, leading to biased results. Hong et al. [108, 109] from DeepWisdom" + }, + { + "bbox": [ + 305, + 386, + 533, + 660 + ], + "type": "inline_equation", + "content": "^{31}" + }, + { + "bbox": [ + 305, + 386, + 533, + 660 + ], + "type": "text", + "content": " proposed MetaGPT" + }, + { + "bbox": [ + 305, + 386, + 533, + 660 + ], + "type": "inline_equation", + "content": "^{32}" + }, + { + "bbox": [ + 305, + 386, + 533, + 660 + ], + "type": "text", + "content": ", an innovative metaprogramming framework designed to enhance the collaboration capabilities of LLMs-based multi-agent systems. MetaGPT integrates Standard Operating Procedures (SOPs) commonly used in human workflows, thereby constructing a more efficient and coherent multi-agent collaboration system. MetaGPT employs an assembly-line approach, breaking down complex tasks into multiple subtasks and assigning them to agents with specific domain expertise. These agents collaborate during task execution through clearly defined roles and structured communication interfaces, reducing the risk of information distortion and misunderstanding. In summary, MetaGPT offers a flexible and powerful platform for developing LLMs-based multi-agent systems. Its unique meta-programming framework and rigorous workflow design enable it to excel in handling complex tasks, greatly advancing the field of multi-agent collaboration research." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 316, + 661, + 531, + 673 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 661, + 531, + 673 + ], + "spans": [ + { + "bbox": [ + 316, + 661, + 531, + 673 + ], + "type": "text", + "content": "CoAct: Hou et al. [97] proposed CoAct33, a multi-" + } + ] + } + ], + "index": 4 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 61, + 692, + 286, + 711 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 692, + 286, + 711 + ], + "spans": [ + { + "bbox": [ + 61, + 692, + 286, + 711 + ], + "type": "text", + "content": "28 Werewolf game: https://sites.google.com/view/strategic-language-agents/." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 61, + 712, + 286, + 731 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 712, + 286, + 731 + ], + "spans": [ + { + "bbox": [ + 61, + 712, + 286, + 731 + ], + "type": "text", + "content": "29MLLM-Tool: https://github.com/MLLM-Tool/MLLM-Tool." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 314, + 692, + 492, + 703 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 692, + 492, + 703 + ], + "spans": [ + { + "bbox": [ + 314, + 692, + 492, + 703 + ], + "type": "text", + "content": "30Cuisineworld: https://mindagent.github.io/." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 315, + 703, + 484, + 712 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 703, + 484, + 712 + ], + "spans": [ + { + "bbox": [ + 315, + 703, + 484, + 712 + ], + "type": "text", + "content": "31DeepWisdom: https://www deepwisdom.ai/." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 315, + 712, + 500, + 721 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 712, + 500, + 721 + ], + "spans": [ + { + "bbox": [ + 315, + 712, + 500, + 721 + ], + "type": "text", + "content": "32MetaGPT: https://github.com/geekan/MetaGPT." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 315, + 721, + 478, + 731 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 721, + 478, + 731 + ], + "spans": [ + { + "bbox": [ + 315, + 721, + 478, + 731 + ], + "type": "text", + "content": "33CoAct: https://github.com/dxhou/CoAct." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 290, + 739, + 302, + 749 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 739, + 302, + 749 + ], + "spans": [ + { + "bbox": [ + 290, + 739, + 302, + 749 + ], + "type": "text", + "content": "31" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 31 + }, + { + "para_blocks": [ + { + "bbox": [ + 60, + 111, + 286, + 253 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 111, + 286, + 253 + ], + "spans": [ + { + "bbox": [ + 60, + 111, + 286, + 253 + ], + "type": "text", + "content": "agent system based on LLMs designed for hierarchical collaboration tasks. The framework consists of six stages: task decomposition, subtask assignment and communication, subtask analysis and execution, feedback collection, progress evaluation, and replanning when necessary. The global planning agent plays a critical role in managing complex tasks. The local execution agent is responsible for executing specific subtasks. This hierarchical framework demonstrates strong adaptability and performance, particularly in complex real-world tasks requiring dynamic replanning and collaborative execution." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 60, + 254, + 286, + 481 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 254, + 286, + 481 + ], + "spans": [ + { + "bbox": [ + 60, + 254, + 286, + 481 + ], + "type": "text", + "content": "AutoGen: Microsoft [103, 146] introduced AutoGen" + }, + { + "bbox": [ + 60, + 254, + 286, + 481 + ], + "type": "inline_equation", + "content": "^{34}" + }, + { + "bbox": [ + 60, + 254, + 286, + 481 + ], + "type": "text", + "content": ", a flexible framework for creating and managing multiple autonomous agents to collaboratively complete complex tasks, particularly in programming, planning, and creative writing domains. AutoGen allows users to define distinct agent roles, including specialists, general assistants, and decision-makers, ensuring clear task division and effective coordination. Agents interact in a structured conversational environment, exchanging messages to resolve tasks iteratively. 
AutoGen introduces feedback loops where agents analyze outputs, refine strategies, and optimize task completion autonomously. Notably, it supports integration with various LLMs, offering developers the flexibility to replace APIs without altering code significantly. In summary, AutoGen facilitates scalable, efficient, and robust multi-agent collaboration, demonstrating potential for applications ranging from enhanced ChatGPT systems to real-world industrial workflows." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 60, + 481, + 287, + 686 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 481, + 287, + 686 + ], + "spans": [ + { + "bbox": [ + 60, + 481, + 287, + 686 + ], + "type": "text", + "content": "XAgent: XAgent Team" + }, + { + "bbox": [ + 60, + 481, + 287, + 686 + ], + "type": "inline_equation", + "content": "^{35}" + }, + { + "bbox": [ + 60, + 481, + 287, + 686 + ], + "type": "text", + "content": " [110] developed XAgent" + }, + { + "bbox": [ + 60, + 481, + 287, + 686 + ], + "type": "inline_equation", + "content": "^{36}" + }, + { + "bbox": [ + 60, + 481, + 287, + 686 + ], + "type": "text", + "content": " is an open-source, LLMs-driven autonomous agent framework designed for solving complex tasks automatically and efficiently. As shown in Figure 20, it employs a dual-loop architecture: the outer loop for high-level task planning and coordination, and the inner loop for executing subtasks. The PlanAgent in the outer loop generates an initial plan by breaking a complex task into manageable subtasks, organizing them into a task queue. It iteratively monitors progress, optimizes plans based on feedback from the inner loop, and continues until all subtasks are completed. The inner loop utilizes ToolAgents, which employ various tools like file editors, Python notebooks, web browsers, and shell interfaces within a secure docker environment to execute subtasks. XAgent emphasizes autonomy, safety, and extensibility, allowing users to add new agents or tools" + } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 311, + 109, + 528, + 268 + ], + "blocks": [ + { + "bbox": [ + 311, + 109, + 528, + 268 + ], + "lines": [ + { + "bbox": [ + 311, + 109, + 528, + 268 + ], + "spans": [ + { + "bbox": [ + 311, + 109, + 528, + 268 + ], + "type": "image", + "image_path": "3b2c1190be538a01a6e3349b9ec31c50208a6352acfb17257690f6b17ec0e736.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 305, + 279, + 531, + 319 + ], + "lines": [ + { + "bbox": [ + 305, + 279, + 531, + 319 + ], + "spans": [ + { + "bbox": [ + 305, + 279, + 531, + 319 + ], + "type": "text", + "content": "Figure 20: An structure overview of the XAgents framework, highlighting the Task Node as the starting point, the sequence of Sub-Task Nodes forming the Task Execution Graph (TEG), and the Fusion Node integrating outputs for the final result [110]." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 305, + 338, + 531, + 373 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 338, + 531, + 373 + ], + "spans": [ + { + "bbox": [ + 305, + 338, + 531, + 373 + ], + "type": "text", + "content": "to enhance functionality. Its GUI facilitates user interaction while supporting human collaboration, enabling real-time guidance and assistance for challenging tasks." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 305, + 373, + 531, + 550 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 373, + 531, + 550 + ], + "spans": [ + { + "bbox": [ + 305, + 373, + 531, + 550 + ], + "type": "text", + "content": "PlanAgent: PlanAgent" + }, + { + "bbox": [ + 305, + 373, + 531, + 550 + ], + "type": "inline_equation", + "content": "^{37}" + }, + { + "bbox": [ + 305, + 373, + 531, + 550 + ], + "type": "text", + "content": ", developed by Chinese Academy of Sciences and Li Auto [113], introduces a closed-loop motion planning framework for autonomous driving by leveraging multi-modal large language models (MLLMs). The system utilizes MLLM's multi-modal reasoning and commonsense understanding capabilities to generate hierarchical driving commands based on scene information. PlanAgent addresses key limitations of traditional rule-based and learning-based methods, including overfitting in long-tail scenarios and inefficiencies in scene representation. Its novel integration of MLLM-driven reasoning into autonomous driving planning establishes a new benchmark for safety and robustness in real-world applications." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 552, + 531, + 683 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 552, + 531, + 683 + ], + "spans": [ + { + "bbox": [ + 305, + 552, + 531, + 683 + ], + "type": "text", + "content": "LangGraph: LangChain Inc" + }, + { + "bbox": [ + 305, + 552, + 531, + 683 + ], + "type": "inline_equation", + "content": "^{38}" + }, + { + "bbox": [ + 305, + 552, + 531, + 683 + ], + "type": "text", + "content": " [111, 112] introduced LangGraph" + }, + { + "bbox": [ + 305, + 552, + 531, + 683 + ], + "type": "inline_equation", + "content": "^{39}" + }, + { + "bbox": [ + 305, + 552, + 531, + 683 + ], + "type": "text", + "content": ", a library designed for building stateful, multi-actor applications with LLMs, enabling the creation of complex agent and multi-agent workflows. Inspired by frameworks like Pregel and Apache Beam, LangGraph provides fine-grained control over workflows and state management while offering advanced features like persistence and human-in-the-loop capabilities. LangGraph stands out for its support of iterative workflows with cycles and branching, which differentiates it from DAG-based frameworks. Each" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 69, + 701, + 267, + 712 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 701, + 267, + 712 + ], + "spans": [ + { + "bbox": [ + 69, + 701, + 267, + 712 + ], + "type": "text", + "content": "34 AutoGen: https://github.com/microsoft/autogen." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 69, + 712, + 237, + 722 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 712, + 237, + 722 + ], + "spans": [ + { + "bbox": [ + 69, + 712, + 237, + 722 + ], + "type": "text", + "content": "35XAgent Team: https://blog.x-agent.net/." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 69, + 722, + 251, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 722, + 251, + 732 + ], + "spans": [ + { + "bbox": [ + 69, + 722, + 251, + 732 + ], + "type": "text", + "content": "36XAgent: https://github.com/OpenBMB/XAgent." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 314, + 701, + 499, + 712 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 701, + 499, + 712 + ], + "spans": [ + { + "bbox": [ + 314, + 701, + 499, + 712 + ], + "type": "text", + "content": "37PlanAgent: http://www.chinasem.cn/planagent." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 315, + 712, + 518, + 722 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 712, + 518, + 722 + ], + "spans": [ + { + "bbox": [ + 315, + 712, + 518, + 722 + ], + "type": "text", + "content": "38LangChain Inc: https://langchain.ac.cn/langgraph." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 315, + 722, + 513, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 722, + 513, + 732 + ], + "spans": [ + { + "bbox": [ + 315, + 722, + 513, + 732 + ], + "type": "text", + "content": "39LangGraph: https://www.langchain.com/langgraph" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 290, + 739, + 302, + 749 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 739, + 302, + 749 + ], + "spans": [ + { + "bbox": [ + 290, + 739, + 302, + 749 + ], + "type": "text", + "content": "32" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 32 + }, + { + "para_blocks": [ + { + "bbox": [ + 60, + 111, + 286, + 302 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 111, + 286, + 302 + ], + "spans": [ + { + "bbox": [ + 60, + 111, + 286, + 302 + ], + "type": "text", + "content": "graph execution generates a state, dynamically updated by node outputs, enabling reliable and adaptive agent behavior. Its built-in persistence allows workflows to pause and resume at any point, facilitating error recovery and advanced human-agent interactions, including \"time travel\" to modify past actions. LangGraph integrates seamlessly with LangChain [250, 179] but functions independently, offering flexibility for various applications, from dialogue agents and recommendation systems to natural language processing and game development. With streaming support, it processes outputs in real-time, making it suitable for tasks requiring immediate feedback. Its low-level architecture and customizable workflows make LangGraph a powerful tool for creating robust, scalable, and interactive LLMs-based systems." + } + ] + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 67, + 313, + 282, + 445 + ], + "blocks": [ + { + "bbox": [ + 67, + 313, + 282, + 445 + ], + "lines": [ + { + "bbox": [ + 67, + 313, + 282, + 445 + ], + "spans": [ + { + "bbox": [ + 67, + 313, + 282, + 445 + ], + "type": "image", + "image_path": "1a5057af905a5275003bf194bbea8a1a1f3a4c961a1885581d7f2c3fed924e71.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 61, + 458, + 287, + 497 + ], + "lines": [ + { + "bbox": [ + 61, + 458, + 287, + 497 + ], + "spans": [ + { + "bbox": [ + 61, + 458, + 287, + 497 + ], + "type": "text", + "content": "Figure 21: A LangGraph workflow representation demonstrating conditional branching and iterative loops for document retrieval, grading, query transformation, and web search before generating a final output [111, 112]." 
+ } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 507, + 287, + 700 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 507, + 287, + 700 + ], + "spans": [ + { + "bbox": [ + 61, + 507, + 287, + 700 + ], + "type": "text", + "content": "CrewAI: " + }, + { + "bbox": [ + 61, + 507, + 287, + 700 + ], + "type": "inline_equation", + "content": "\\mathrm{CrewAI^{40}}" + }, + { + "bbox": [ + 61, + 507, + 287, + 700 + ], + "type": "text", + "content": " [114, 115] is an open-source framework designed to coordinate AI agents that specialize in role-playing and autonomous operations, enabling efficient collaboration to achieve complex goals. The framework's modular design allows users to create AI teams that operate like real-world teams, with agents assigned specific roles and tasks to ensure clear division of labor and shared objectives. As seen from 22, this framework operates in three primary stages: Agent Creation, where developers define roles with specific goals and tools; Task Management, enabling flexible task assignment and multi-view knowledge enhancement; and Execution and Collaboration, where agents interact in workflows to resolve tasks, with outputs parsed into reusable formats. CrewAI integrates seamlessly with the LangChain ecosystem, leveraging its tools and LLM" + } + ] + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 313, + 115, + 523, + 248 + ], + "blocks": [ + { + "bbox": [ + 313, + 115, + 523, + 248 + ], + "lines": [ + { + "bbox": [ + 313, + 115, + 523, + 248 + ], + "spans": [ + { + "bbox": [ + 313, + 115, + 523, + 248 + ], + "type": "image", + "image_path": "1915e555db021c43ce2d26f3d54bee668e602b1321975e13f09e6963a269360d.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 306, + 264, + 531, + 285 + ], + "lines": [ + { + "bbox": [ + 306, + 264, + 531, + 285 + ], + "spans": [ + { + "bbox": [ + 306, + 264, + 531, + 285 + ], + "type": "text", + "content": "Figure 22: An overview of the processing workflow for the roleplaying multi-agent framework, CrewAI [114, 115]." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 305, + 304, + 531, + 470 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 304, + 531, + 470 + ], + "spans": [ + { + "bbox": [ + 305, + 304, + 531, + 470 + ], + "type": "text", + "content": "capabilities, such as OpenAI and Google Gemini. The framework supports real-time decision-making and task adaptation, with future versions planned to include more advanced collaboration processes, such as consensus-driven workflows and autonomous decision-making. Its innovative features, such as role-based design, dynamic rule generation, and modular task workflows, position CrewAI as a robust and scalable framework for multi-agent collaboration across creative and industrial domains. Overall, CrewAI " + }, + { + "bbox": [ + 305, + 304, + 531, + 470 + ], + "type": "inline_equation", + "content": "^{41}" + }, + { + "bbox": [ + 305, + 304, + 531, + 470 + ], + "type": "text", + "content": " offers a cutting-edge approach to multi-agent systems by integrating role-specific autonomy, flexible workflows, and advanced AI toolsets, making it a versatile framework for collaborative AI applications." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 472, + 531, + 591 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 472, + 531, + 591 + ], + "spans": [ + { + "bbox": [ + 305, + 472, + 531, + 591 + ], + "type": "text", + "content": "In summary, these frameworks and applications [108, 97, 103, 110] highlight the rapid advancements in leveraging LLMs for multi-agent collaboration, reasoning, and task execution. Each system introduces unique innovations—ranging from dynamic agent coordination to enhanced reasoning and human-in-the-loop workflows—demonstrating their potential to tackle complex, real-world challenges across various domains [113, 111, 114, 115]. These developments pave the way for more flexible, scalable, and efficient AI-driven solutions." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 306, + 610, + 530, + 635 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 610, + 530, + 635 + ], + "spans": [ + { + "bbox": [ + 306, + 610, + 530, + 635 + ], + "type": "text", + "content": "5. Challenges in MARL-based and LLMs-based approaches" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 305, + 644, + 531, + 681 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 644, + 531, + 681 + ], + "spans": [ + { + "bbox": [ + 305, + 644, + 531, + 681 + ], + "type": "text", + "content": "The extension of single-agent decision-making into multi-agent cooperative contexts introduces several challenges, including developing effective training" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 314, + 703, + 530, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 703, + 530, + 732 + ], + "spans": [ + { + "bbox": [ + 314, + 703, + 530, + 732 + ], + "type": "inline_equation", + "content": "^{41}" + }, + { + "bbox": [ + 314, + 703, + 530, + 732 + ], + "type": "text", + "content": "CrewAI Multi-Agent System platform: https://www.deeplearning.ai/short-courses/ulti-ai-agent-systems-with-crewai/." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 69, + 720, + 259, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 720, + 259, + 732 + ], + "spans": [ + { + "bbox": [ + 69, + 720, + 259, + 732 + ], + "type": "text", + "content": "40CrewAI: https://github.com/crewAIInc/crewAI." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 291, + 739, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 739, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 291, + 739, + 302, + 748 + ], + "type": "text", + "content": "33" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 33 + }, + { + "para_blocks": [ + { + "bbox": [ + 60, + 111, + 286, + 253 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 111, + 286, + 253 + ], + "spans": [ + { + "bbox": [ + 60, + 111, + 286, + 253 + ], + "type": "text", + "content": "schemes for multiple agents learning and adapting simultaneously, managing increased computational complexity due to the more sophisticated and stochastic environments compared to single-agent settings, and addressing the foundational role of strategic interaction among agents. 
Additionally, ensuring the scalability of algorithms to handle larger observation and action spaces, facilitating coordination and cooperation among agents to achieve consistent goals, and dealing with non-stationary environments where agents' behaviors and strategies continuously evolve are also inevitable and critical challenges." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 60, + 254, + 286, + 398 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 254, + 286, + 398 + ], + "spans": [ + { + "bbox": [ + 60, + 254, + 286, + 398 + ], + "type": "text", + "content": "Applying multi-agent decision-making techniques to real-world problems, which often involve complex and dynamic interactions, further underscores the need for sophisticated and advanced approaches to effectively adapt these ever-increasing complexities. Multi-agent cooperative decision making significantly surpasses single-agent decision-making in terms of environmental stochasticity, complexity, the difficulty of strategy optimization, and so on. As shown in Figure 23, we present a tree diagram summarizing the existing challenges in MARL-based and LLMs-based multi-agent decision-making approaches." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 411, + 276, + 423 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 411, + 276, + 423 + ], + "spans": [ + { + "bbox": [ + 61, + 411, + 276, + 423 + ], + "type": "text", + "content": "5.1. Challenges in MARL-based multi-agent systems" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 60, + 425, + 286, + 570 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 425, + 286, + 570 + ], + "spans": [ + { + "bbox": [ + 60, + 425, + 286, + 570 + ], + "type": "text", + "content": "The advancement of MARL remains in its formative stages, with its potential for enabling effective multiagent coordination and achieving scalability in dynamic environments yet to be fully unlocked [7, 6, 8, 25, 28]. Challenges such as environmental stochasticity, strategy learning difficulty, non-stationarity, scalability, and reward complexity have emerged as major bottlenecks. This section provides an in-depth analysis of these challenges, exploring the current state, technical limitations, and potential solutions in MARL-based multiagent systems to enable more robust, efficient, and scalable decision-making frameworks." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 70, + 577, + 286, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 70, + 577, + 286, + 732 + ], + "spans": [ + { + "bbox": [ + 70, + 577, + 286, + 732 + ], + "type": "text", + "content": "1. Environmental Stochasticity and Complexity: [251, 25] In MARL-based decision-making systems, environmental dynamics are influenced not only by external factors but also by the behaviors and decisions of individual agents. This complex interaction results in high levels of stochasticity and complexity in the environment, making prediction and modeling significantly more difficult. For example, in autonomous driving scenarios [19, 214, 18, 222], the behavior of each vehicle affects the decisions of surrounding vehicles, thereby increasing the overall complexity of the system;" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 314, + 111, + 532, + 290 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 111, + 532, + 290 + ], + "spans": [ + { + "bbox": [ + 314, + 111, + 532, + 290 + ], + "type": "text", + "content": "2. 
Strategy Learning Difficulty: [252, 86, 116] Strategy learning in MARL-based decision-making systems involves multidimensional challenges. Firstly, agents must consider the behaviors of other agents, and this interdependence increases the difficulty of strategy learning. Each agent not only has to optimize its own strategy but also predict and adapt to the strategy changes of others. Additionally, the vast joint action space of multiple agents makes it challenging for any single agent to learn effective joint strategies. The vast joint action space means that each agent needs to explore and learn within a larger decision space, which significantly increases the demands on computational resources and time;" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 314, + 290, + 532, + 504 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 290, + 532, + 504 + ], + "spans": [ + { + "bbox": [ + 314, + 290, + 532, + 504 + ], + "type": "text", + "content": "3. Complexity of Reward Functions: [251] In MARL-based decision-making systems, reward functions become more complex [85, 150]. The rewards received from the environment in multiagent cooperative techniques are influenced not only by an individual agent's actions and the environment but also by the actions of other agents, which makes the stable policy learning and modeling process more difficult. In other words, an agent's reward depends not only on its own actions but also on the actions of other agents, making it challenging for the reinforcement learning-based multi-agent decision-making policies to converge. This intricate reward mechanism complicates the design and optimization of reward functions. Agents need to evaluate their behaviors' impact on the overall system through complex interactions to learn effective strategies;" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 314, + 505, + 531, + 636 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 505, + 531, + 636 + ], + "spans": [ + { + "bbox": [ + 314, + 505, + 531, + 636 + ], + "type": "text", + "content": "4. Coordination and Cooperation: [253, 90, 254, 251] Furthermore, in MARL-based decision-making systems, agents need to coordinate and cooperate to achieve common goals. This requires establishing effective communication and coordination mechanisms among agents to ensure that their actions are globally consistent and complementary [49]. For example, in disaster rescue scenarios [57, 12], multiple drones need to coordinate their actions to cover the maximum area and utilize resources most efficiently;" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 314, + 637, + 532, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 637, + 532, + 732 + ], + "spans": [ + { + "bbox": [ + 314, + 637, + 532, + 732 + ], + "type": "text", + "content": "5. Non-Stationary: [27, 25] The environment of MARL-based decision-making systems is nonstationary because each agent's behavior dynamically changes the state of the environment. This non-stationarity increases the difficulty of strategy learning, as agents must continually adapt to changes in the environment and the behaviors of other agents." 
+ } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "type": "text", + "content": "34" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 34 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 86, + 107, + 509, + 309 + ], + "blocks": [ + { + "bbox": [ + 86, + 107, + 509, + 309 + ], + "lines": [ + { + "bbox": [ + 86, + 107, + 509, + 309 + ], + "spans": [ + { + "bbox": [ + 86, + 107, + 509, + 309 + ], + "type": "image", + "image_path": "3e7322f2e7c56ceef6795b3c9c45da60a4c75359ab1c5eeb3db45b67704bb031.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 113, + 319, + 480, + 331 + ], + "lines": [ + { + "bbox": [ + 113, + 319, + 480, + 331 + ], + "spans": [ + { + "bbox": [ + 113, + 319, + 480, + 331 + ], + "type": "text", + "content": "Figure 23: A tree diagram of the challenges in MARL and LLMs-based multi-agent decision-making approaches." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 349, + 288, + 709 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 349, + 288, + 709 + ], + "spans": [ + { + "bbox": [ + 69, + 349, + 288, + 709 + ], + "type": "text", + "content": "6. Scalability: [87, 252, 25] Addressing scalability in MARL demands innovative approaches to tackle the exponential growth in complexity as the number of agents increases. Techniques that leverage knowledge reuse [172, 138, 168], such as parameter sharing and transfer learning [255, 256], have proven indispensable. When agents share structural similarities, information sharing can streamline the training process, enabling systems to scale more effectively. Transfer learning, in particular, allows agents to adapt knowledge from previous tasks to new, related ones, significantly accelerating learning in dynamic environments. Moreover, curriculum learning [257, 258] plays a pivotal role in tackling the increased complexity of training multiple agents. It enables gradual learning by exposing agents to progressively more challenging tasks, thereby improving policy generalization and accelerating convergence. Robustness remains critical for scalability, as learned policies must withstand environmental disturbances. Techniques like policy ensembles and adversarial training [259, 260] enhance resilience by fostering diversity and adaptability in policies. The DTDE paradigm addresses these issues but introduces new complexities [124, 131], such as environmental instability. One promising solution is employing Independent Deep Q-networks (IDQNs) [126, 261, 127], which adapt traditional single-agent approaches to multi-agent contexts." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 71, + 720, + 287, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 720, + 287, + 733 + ], + "spans": [ + { + "bbox": [ + 71, + 720, + 287, + 733 + ], + "type": "text", + "content": "Overall, the interplay between robustness and scal" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 305, + 349, + 533, + 458 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 349, + 533, + 458 + ], + "spans": [ + { + "bbox": [ + 305, + 349, + 533, + 458 + ], + "type": "text", + "content": "ability in MARL is a key area for future exploration. While existing techniques provide strong foundations, integrating methods like meta-learning could offer a way for agents to rapidly adapt to new tasks and environments. Additionally, leveraging recent advances in graph neural networks might enhance the scalability of MARL by modeling agent interactions more efficiently. These directions hold promise for tackling the dynamic and large-scale nature of multi-agent environments." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 306, + 470, + 532, + 493 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 470, + 532, + 493 + ], + "spans": [ + { + "bbox": [ + 306, + 470, + 532, + 493 + ], + "type": "text", + "content": "5.2. Challenges in LLMs reasoning-based multi-agent systems" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 305, + 497, + 532, + 641 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 497, + 532, + 641 + ], + "spans": [ + { + "bbox": [ + 305, + 497, + 532, + 641 + ], + "type": "text", + "content": "The development of LLMs-based multi-agent systems is still in its early stages, and its advantages in multi-agent collaboration in cross-domain applications have not been fully realized. In this process, technical bottlenecks, design limitations, and imperfect evaluation methods have revealed numerous challenges. This section provides a comprehensive analysis of these challenges, exploring the current status, bottlenecks, and potential breakthrough directions of LLMs-based multiagent systems in key areas such as multi-modal interaction, system scalability, hallucination control, evaluation, and privacy protection." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 314, + 649, + 533, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 649, + 533, + 733 + ], + "spans": [ + { + "bbox": [ + 314, + 649, + 533, + 733 + ], + "type": "text", + "content": "1. Expansion of Multi-Modal Environments: [102, 113, 148] Current LLMs-based multi-agent systems primarily focus on text processing and generation, particularly excelling in language-based interactions. However, applications in multi-modal environments remain insufficient. 
Multi-modal environments require agents to handle various in" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 739, + 302, + 749 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 739, + 302, + 749 + ], + "spans": [ + { + "bbox": [ + 290, + 739, + 302, + 749 + ], + "type": "text", + "content": "35" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 35 + }, + { + "para_blocks": [ + { + "bbox": [ + 82, + 111, + 285, + 290 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 111, + 285, + 290 + ], + "spans": [ + { + "bbox": [ + 82, + 111, + 285, + 290 + ], + "type": "text", + "content": "puts from images, audio, video, and physical sensors, while also generating multi-modal outputs, such as descriptions of visual scenes or simulations of physical actions. This cross-modal interaction not only demands stronger model processing capabilities but also requires efficient information fusion between agents. For example, in practical applications, one agent may need to extract visual features from an image and collaborate with other agents through language to accomplish complex tasks, posing new technical challenges. Future research should focus on building unified multi-modal frameworks that enable agents to efficiently understand and collaboratively process various types of data." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 71, + 291, + 286, + 540 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 291, + 286, + 540 + ], + "spans": [ + { + "bbox": [ + 71, + 291, + 286, + 540 + ], + "type": "text", + "content": "2. Hallucination Problem: [262, 30, 108, 263] The hallucination in LLMs, which involves generating false or inaccurate information, becomes more complex in multi-agent environments. This issue may be triggered within a single agent and further propagated through multi-agent interactions, ultimately negatively impacting the decision-making of the entire system. Because the information flow in multi-agent systems is interconnected, any misjudgment at one node can trigger a chain reaction. This characteristic makes the hallucination problem not only confined to the behavior of individual agents but also poses challenges to the stability of the entire system. Therefore, addressing this issue requires a dual approach: on one hand, improving model training methods to reduce the probability of hallucinations in individual agents; on the other hand, designing information verification mechanisms and propagation management strategies to minimize the spread of hallucinated information within the agent network." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 71, + 542, + 286, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 542, + 286, + 731 + ], + "spans": [ + { + "bbox": [ + 71, + 542, + 286, + 731 + ], + "type": "text", + "content": "3. Acquisition of Collective Intelligence: [34, 176, 253] Current LLMs-based multi-agent systems rely more on real-time feedback for learning rather than offline data, unlike traditional multi-agent systems [264, 265]. This real-time learning approach imposes higher requirements on the interactive environment [262, 266]. Since designing and maintaining a reliable real-time interactive environment is not easy, it limits the scalability of the system. 
Additionally, existing research mostly focuses on optimizing individual agents, neglecting the potential overall efficiency improvements that could arise from agent collaboration. For example, knowledge sharing and behavioral coordination among agents may create advantages of collective intelligence in certain complex tasks. Fu" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 328, + 111, + 530, + 158 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 328, + 111, + 530, + 158 + ], + "spans": [ + { + "bbox": [ + 328, + 111, + 530, + 158 + ], + "type": "text", + "content": "ture research needs to explore how to fully leverage the potential of collective intelligence by optimizing multi-agent interaction strategies and collaboration mechanisms." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 315, + 158, + 530, + 396 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 158, + 530, + 396 + ], + "spans": [ + { + "bbox": [ + 315, + 158, + 530, + 396 + ], + "type": "text", + "content": "4. System Scalability: [262, 267, 253] As the number of agents in LLMs-based multi-agent systems increases, the demand for computational resources grows exponentially, posing challenges in resource-constrained environments. A single LLM agent already requires substantial computational power, and when the system scales to hundreds or thousands of agents, existing hardware and software architectures may not be able to support it. Furthermore, scaling the system introduces new complexities, such as how to efficiently allocate tasks, coordinate, and communicate among numerous agents. Studies have shown that the more agents there are, the more difficult it becomes to coordinate their operations, especially in reducing redundancy and conflicts. Therefore, future work needs to optimize resource utilization through the development of lightweight models and efficient communication protocols, while also exploring the scaling laws for agent expansion." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 315, + 398, + 530, + 624 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 398, + 530, + 624 + ], + "spans": [ + { + "bbox": [ + 315, + 398, + 530, + 624 + ], + "type": "text", + "content": "5. Evaluation and Benchmarking: [262, 253] Current evaluation methods and benchmark tests for LLMs-based multi-agent systems are still incomplete. Most research focuses solely on the performance of individual agents in specific tasks, neglecting the overall system performance in complex scenarios. Evaluating group behavior is more challenging because the dynamics and complexity of multi-agent systems are difficult to measure with a single metric. Additionally, the lack of a unified testing framework and benchmark data is a major obstacle when comparing the capabilities of different LLMs-based multi-agent systems across domains. Future research needs to develop comprehensive evaluation standards and universal benchmark tests, especially in key fields such as scientific experiments, economic analysis, and urban planning, to provide a basis for system performance comparison and improvement." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 315, + 625, + 530, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 625, + 530, + 731 + ], + "spans": [ + { + "bbox": [ + 315, + 625, + 530, + 731 + ], + "type": "text", + "content": "6. 
Interaction Efficiency and Cumulative Effects: [262, 268, 105, 254] The complexity of multiagent systems leads to prominent issues of low interaction efficiency and cumulative effects. Low interaction efficiency is mainly reflected in the need for generative agents to frequently query models, making the system inefficient in large-scale applications. On the other hand, because the system state highly depends on the results of the" + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "type": "text", + "content": "36" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 36 + }, + { + "para_blocks": [ + { + "bbox": [ + 80, + 111, + 286, + 206 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 80, + 111, + 286, + 206 + ], + "spans": [ + { + "bbox": [ + 80, + 111, + 286, + 206 + ], + "type": "text", + "content": "previous round, when an error occurs in one round, it may accumulate and propagate to subsequent operations, ultimately degrading the system's overall performance. Future efforts should focus on designing more efficient communication protocols and intermediate result correction mechanisms to reduce interaction costs and the impact of cumulative errors." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 69, + 208, + 286, + 375 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 208, + 286, + 375 + ], + "spans": [ + { + "bbox": [ + 69, + 208, + 286, + 375 + ], + "type": "text", + "content": "7. Security and Privacy Issues: [5, 118, 269, 270, 271] Context sharing within multi-agent systems poses risks of introducing noise and privacy leaks. For example, sensitive information shared between agents (such as identities or locations) may be misused by untrusted nodes, thereby threatening the system's security. Addressing this issue requires a two-pronged approach: first, establishing clear organizational structures to restrict information access permissions; second, introducing more advanced trust management mechanisms, such as distributed trust systems based on consensus algorithms, to enhance the system's security and reliability." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 382, + 286, + 502 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 382, + 286, + 502 + ], + "spans": [ + { + "bbox": [ + 61, + 382, + 286, + 502 + ], + "type": "text", + "content": "In summary, LLMs-based multi-agent systems face challenges across multiple domains, including multimodal adaptation, scalability, evaluation methods, collective intelligence development, and privacy protection. These challenges not only reveal the current technological limitations but also provide ample space for future research. With advancements in technology and the deepening of interdisciplinary studies, LLMs-based multi-agent systems are expected to achieve significant breakthroughs both theoretically and in applications." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 61, + 518, + 191, + 543 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 518, + 191, + 543 + ], + "spans": [ + { + "bbox": [ + 61, + 518, + 191, + 543 + ], + "type": "text", + "content": "6. 
Future Research Prospects and Emerging Trends" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 551, + 286, + 671 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 551, + 286, + 671 + ], + "spans": [ + { + "bbox": [ + 61, + 551, + 286, + 671 + ], + "type": "text", + "content": "Multi-Agent Decision-Making Systems are entering a new era where LLMs are combined with MARL [116]. This combination can improve learning efficiency in complex dynamic environments. It also enables better multi-modal information processing, multi-task collaboration, and long-term planning [7, 25, 30, 28, 266]. In this section, we discuss future prospects and challenges of multi-agent decision-making system (MAS) research from theoretical, technical, application, and ethical perspectives." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 682, + 286, + 705 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 682, + 286, + 705 + ], + "spans": [ + { + "bbox": [ + 61, + 682, + 286, + 705 + ], + "type": "text", + "content": "6.1. Theoretical Development: From Traditional RL to LLMs-Enhanced MARL Framework" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 61, + 708, + 286, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 708, + 286, + 732 + ], + "spans": [ + { + "bbox": [ + 61, + 708, + 286, + 732 + ], + "type": "text", + "content": "LLMs-enhanced MARL redefines collaboration in multi-agent systems by introducing natural language" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 111, + 530, + 278 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 111, + 530, + 278 + ], + "spans": [ + { + "bbox": [ + 305, + 111, + 530, + 278 + ], + "type": "text", + "content": "understanding and reasoning [14, 253]. Traditional MARL requires agents to learn control strategies in dynamic environments with limited data [8, 5, 27, 24]. However, this approach often faces challenges like low sample efficiency, difficult reward design, and poor generalization. LLMs, with their strong reasoning and knowledge representation capabilities, offer solutions [272, 30]. For example, they can process multi-modal information such as natural language and vision [113, 273, 102, 260], helping agents understand tasks and environments more effectively. This improves learning speed and generalization. Furthermore, LLMs can act as reasoning tools, providing additional context and knowledge to optimize long-term planning." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 279, + 531, + 507 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 279, + 531, + 507 + ], + "spans": [ + { + "bbox": [ + 305, + 279, + 531, + 507 + ], + "type": "text", + "content": "The LLMs-enhanced MARL framework is a groundbreaking integration of LLMs and MARL techniques, which includes roles such as information processor, reward designer, decision-maker, and generator [116]. Figure 24 presents a flowchart illustrating the structure of the LLMs-enhanced MARL framework, highlighting its four key roles. These roles work together to streamline task complexity and improve learning. For instance, LLMs can translate unstructured task descriptions into formal task semantics, reducing learning difficulty. They can also design advanced reward functions to accelerate learning in sparse-reward environments. 
These roles collectively address the challenges of task complexity, data efficiency, and generalization in MARL [29, 24, 251], while streamlining processes like reward design and policy generation. As shown in Table 4, we summarize recent advancements in LLMs-enhanced MARL methods across these four roles into a comprehensive table for clarity and comparison." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 306, + 522, + 530, + 546 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 522, + 530, + 546 + ], + "spans": [ + { + "bbox": [ + 306, + 522, + 530, + 546 + ], + "type": "text", + "content": "6.2. Technical Integration: From Multi-Modal to Multi-Task Optimization" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 305, + 553, + 531, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 553, + 531, + 732 + ], + "spans": [ + { + "bbox": [ + 305, + 553, + 531, + 732 + ], + "type": "text", + "content": "Combining LLMs and MARL significantly improves the ability to handle multi-modal information, multitask learning, and long-term task planning [273, 102, 272, 30]. Traditional MARL often requires separate modules to process visual, textual, or other forms of data. In contrast, LLMs can unify this processing, enabling comprehensive environment understanding. For example, in a robot task involving voice commands and visual inputs, LLMs can process both types of data simultaneously and generate actions directly. Additionally, LLMs provide a distinct advantage in multi-task learning due to their pre-trained knowledge [30, 57]. Through knowledge transfer, they help agents share experiences across different tasks, improving adaptability [255, 66]. For long-term planning, LLMs can break" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "text", + "content": "37" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 37 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 64, + 108, + 177, + 186 + ], + "blocks": [ + { + "bbox": [ + 64, + 108, + 177, + 186 + ], + "lines": [ + { + "bbox": [ + 64, + 108, + 177, + 186 + ], + "spans": [ + { + "bbox": [ + 64, + 108, + 177, + 186 + ], + "type": "image", + "image_path": "ca0f8bf101575fe8e2080e3589b9515f531ad898f95ecfd4f8b80ccff013637c.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 180, + 108, + 294, + 186 + ], + "blocks": [ + { + "bbox": [ + 180, + 108, + 294, + 186 + ], + "lines": [ + { + "bbox": [ + 180, + 108, + 294, + 186 + ], + "spans": [ + { + "bbox": [ + 180, + 108, + 294, + 186 + ], + "type": "image", + "image_path": "27395decb4a73fcf8ca53d914c366bd56d6e466f64923b049c2683ba7aa9ea47.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 298, + 108, + 411, + 186 + ], + "blocks": [ + { + "bbox": [ + 298, + 108, + 411, + 186 + ], + "lines": [ + { + "bbox": [ + 298, + 108, + 411, + 186 + ], + "spans": [ + { + "bbox": [ + 298, + 108, + 411, + 186 + ], + "type": "image", + "image_path": "001ede390b93b08a552ac07a9cab97f26b1666902dd4fca9440365ed87072e18.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + 
"type": "image", + "bbox": [ + 416, + 108, + 526, + 186 + ], + "blocks": [ + { + "bbox": [ + 416, + 108, + 526, + 186 + ], + "lines": [ + { + "bbox": [ + 416, + 108, + 526, + 186 + ], + "spans": [ + { + "bbox": [ + 416, + 108, + 526, + 186 + ], + "type": "image", + "image_path": "345c901536a9f3d5c821af422611f5d0a0f5220610e7455f10907432ce192167.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 63, + 188, + 177, + 265 + ], + "blocks": [ + { + "bbox": [ + 63, + 188, + 177, + 265 + ], + "lines": [ + { + "bbox": [ + 63, + 188, + 177, + 265 + ], + "spans": [ + { + "bbox": [ + 63, + 188, + 177, + 265 + ], + "type": "image", + "image_path": "48d61eca733bb6936526aa1e4ce4e6d1571acab57c3ce7861fae6cf215f23163.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 74, + 267, + 167, + 275 + ], + "lines": [ + { + "bbox": [ + 74, + 267, + 167, + 275 + ], + "spans": [ + { + "bbox": [ + 74, + 267, + 167, + 275 + ], + "type": "text", + "content": "a) LLM as information processor." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 180, + 188, + 294, + 265 + ], + "blocks": [ + { + "bbox": [ + 180, + 188, + 294, + 265 + ], + "lines": [ + { + "bbox": [ + 180, + 188, + 294, + 265 + ], + "spans": [ + { + "bbox": [ + 180, + 188, + 294, + 265 + ], + "type": "image", + "image_path": "f239702c9eb5ab20c1490b9c5fc64499a1d8aee5b19d5946cbdc5157061ff5da.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 193, + 267, + 271, + 275 + ], + "lines": [ + { + "bbox": [ + 193, + 267, + 271, + 275 + ], + "spans": [ + { + "bbox": [ + 193, + 267, + 271, + 275 + ], + "type": "text", + "content": "b) LLM as reward designer." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 299, + 188, + 411, + 265 + ], + "blocks": [ + { + "bbox": [ + 299, + 188, + 411, + 265 + ], + "lines": [ + { + "bbox": [ + 299, + 188, + 411, + 265 + ], + "spans": [ + { + "bbox": [ + 299, + 188, + 411, + 265 + ], + "type": "image", + "image_path": "11d07db349371dc5a08bc551dd94c462b655cd1302d907bb1c69d7a03ac850ac.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 314, + 267, + 388, + 275 + ], + "lines": [ + { + "bbox": [ + 314, + 267, + 388, + 275 + ], + "spans": [ + { + "bbox": [ + 314, + 267, + 388, + 275 + ], + "type": "text", + "content": "c) LLM as decision-maker" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 60, + 288, + 531, + 308 + ], + "lines": [ + { + "bbox": [ + 60, + 288, + 531, + 308 + ], + "spans": [ + { + "bbox": [ + 60, + 288, + 531, + 308 + ], + "type": "text", + "content": "Figure 24: Schematic diagram of the LLMs-enhanced MARL framework based on Cao et al. [116], showcasing its core roles: information processor (a), reward designer (b), decision-maker (c), and generator (d)." 
+ } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 416, + 188, + 526, + 265 + ], + "blocks": [ + { + "bbox": [ + 416, + 188, + 526, + 265 + ], + "lines": [ + { + "bbox": [ + 416, + 188, + 526, + 265 + ], + "spans": [ + { + "bbox": [ + 416, + 188, + 526, + 265 + ], + "type": "image", + "image_path": "f0e13440a3152f156a53f69aa98085530dd0b8b475bfa74d08afe44a4dc001b4.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 445, + 266, + 503, + 275 + ], + "lines": [ + { + "bbox": [ + 445, + 266, + 503, + 275 + ], + "spans": [ + { + "bbox": [ + 445, + 266, + 503, + 275 + ], + "type": "text", + "content": "d) LLM as generator" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "bbox": [ + 60, + 327, + 286, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 327, + 286, + 483 + ], + "spans": [ + { + "bbox": [ + 60, + 327, + 286, + 483 + ], + "type": "text", + "content": "down complex tasks into subtasks, addressing challenges like the credit assignment problem. This capability is particularly useful in tasks requiring extended reasoning, such as construction tasks in Minecraft. In optimizing reinforcement learning's sample efficiency [308, 281], the generative capabilities of LLMs can provide agents with additional virtual samples through high-fidelity environment simulations [116, 308]. This not only reduces the cost of real-world learning but also offers high-quality trajectories that serve as valuable references for policy optimization. Furthermore, in sparse reward environments, LLMs can accelerate policy learning by automatically designing reward signals." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 61, + 498, + 286, + 522 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 498, + 286, + 522 + ], + "spans": [ + { + "bbox": [ + 61, + 498, + 286, + 522 + ], + "type": "text", + "content": "6.3. Application Expansion: Driving Intelligent Collaboration in Complex Scenarios" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 60, + 529, + 287, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 60, + 529, + 287, + 732 + ], + "spans": [ + { + "bbox": [ + 60, + 529, + 287, + 732 + ], + "type": "text", + "content": "The potential of LLMs-enhanced MARL in practical applications is enormous, especially in scenarios that require complex collaboration and real-time decision-making [116, 300, 183, 171]. For example, in the field of autonomous driving [19, 18, 17], the integration of LLMs with MARL can simultaneously process sensor data and natural language information (such as traffic regulations, passenger instructions, etc. [92]), thereby enhancing the safety and accuracy of decision-making [214, 74]. In the field of collaborative robots, LLMs can help multiple robots form a more intuitive communication mechanism, achieving highly complex task division and dynamic adjustment. 
In addition, in multi-objective optimization tasks such as smart grids and intelligent healthcare, LLMs can provide domain knowledge and optimization suggestions to assist reinforcement learning, design more practical reward functions, and thus" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 305, + 327, + 531, + 423 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 327, + 531, + 423 + ], + "spans": [ + { + "bbox": [ + 305, + 327, + 531, + 423 + ], + "type": "text", + "content": "improve the overall efficiency of the system. In dynamic and complex environments such as disaster relief [12], LLMs can dynamically allocate roles and responsibilities according to task requirements, helping multiagent systems quickly adapt to changing environments and highly complex task divisions [296, 295, 297]. This capability provides a solid technical support for a wide range of applications." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 306, + 433, + 530, + 457 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 433, + 530, + 457 + ], + "spans": [ + { + "bbox": [ + 306, + 433, + 530, + 457 + ], + "type": "text", + "content": "6.4. Human Society Coordination: Balancing Technology and Ethics" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 305, + 458, + 531, + 613 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 458, + 531, + 613 + ], + "spans": [ + { + "bbox": [ + 305, + 458, + 531, + 613 + ], + "type": "text", + "content": "The integration of LLMs into MARL opens new avenues for advancing multi-agent systems, while also highlighting exciting research directions in improving technical efficiency and addressing ethical considerations. For instance, enhancing the robustness of LLMs in unfamiliar environments offers the opportunity to develop strategies for minimizing biases and hallucinations, thereby improving decision accuracy. Furthermore, the computational complexity and resource demands of LLMs present a chance to innovate in optimizing inference efficiency and scalability. This is especially relevant in dynamic multi-agent environments where real-time responsiveness is critical." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 305, + 613, + 531, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 613, + 531, + 732 + ], + "spans": [ + { + "bbox": [ + 305, + 613, + 531, + 732 + ], + "type": "text", + "content": "From an ethical perspective, the incorporation of LLMs calls for advancements in ensuring data privacy, safeguarding against adversarial attacks, and establishing clear accountability frameworks for AI-driven decisions. Sensitive domains such as healthcare and disaster response could particularly benefit from focused research on protecting sensitive information and enhancing system resilience. 
Additionally, improving the transparency and explainability of LLMs-driven decisions is another promising area for exploration, as it" + } + ] + } + ], + "index": 19 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "text", + "content": "38" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 38 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 64, + 157, + 298, + 428 + ], + "blocks": [ + { + "bbox": [ + 61, + 118, + 287, + 156 + ], + "lines": [ + { + "bbox": [ + 61, + 118, + 287, + 156 + ], + "spans": [ + { + "bbox": [ + 61, + 118, + 287, + 156 + ], + "type": "text", + "content": "Table 4: Summary of recent studies categorized by the four key roles of LLMs in MARL: Information Processor, Reward Designer, Decision-Maker, and Generator, highlighting their respective contributions and applications." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 64, + 157, + 298, + 428 + ], + "lines": [ + { + "bbox": [ + 64, + 157, + 298, + 428 + ], + "spans": [ + { + "bbox": [ + 64, + 157, + 298, + 428 + ], + "type": "table", + "html": "
Method Types\nLLM as ...Researchers. / Works. / Refs.
InformationPoudel et al. (ReCoRe) [274], Choi et al. (ConPE) [275],
Paischer et al. (HELM) [276] and (Semantic HELM) [277],
ProcessorRadford et al. (CLIP) [278], Oord et al. (CPC) [279],
Michael et al. (CURL) [280], Schwarzer et al. (SPR) [281]
RewardKwon et al. (LLMrewardRL) [282], Song et al. (Self-Refined LLM) [283], Wu et al. (Read & Reward) [284],
Carta et al. (GLAM) [285], Chu et al. (Lafite-RL) [286],
DesignerKim et al. (ARP) [287], Yu et al. [288], Adeniji et al. (LAMP) [289], Madaan et al. (Self-Refine) [290], Ma et al. (Eureka) [291], Xie et al. (Text2Reward) [292]
Decision-MakerJanner et al. (TT-Offline RL) [293], Shi et al. (LaMo) [294], Li et al. (LLM scaffold) [295], Mezghani et al. (text BabyAI) [296], Grigsby et al. (AMAGO) [297], Zitkovich et al. (RT-2) [298], Yao et al. (CALM) [299], Hu et al. (instructRL) [300], Zhou et al. (LLM4Teach) [301]
GeneratorChen et al. (TransDreamer) [302], Das et al. (S2E) [303], Lin et al. (Dynalang) [304], Robine et al. (TWM) [305], Poudel et al. (LanGWM) [306], Lin et al. (HomeGrid) [307]
", + "image_path": "e501f1a056b6434b72169c110125e7e9ecb0925eeb6b72e7b0f5a9a2a151b4e4.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 470, + 287, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 470, + 287, + 495 + ], + "spans": [ + { + "bbox": [ + 61, + 470, + 287, + 495 + ], + "type": "text", + "content": "would increase trust and user confidence in multi-agent systems." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 61, + 506, + 287, + 555 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 506, + 287, + 555 + ], + "spans": [ + { + "bbox": [ + 61, + 506, + 287, + 555 + ], + "type": "text", + "content": "By addressing these areas, future research can maximize the potential of LLMs-enhanced MARL systems, ensuring they are both technically effective and ethically sound in diverse, real-world applications." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 565, + 287, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 565, + 287, + 732 + ], + "spans": [ + { + "bbox": [ + 61, + 565, + 287, + 732 + ], + "type": "text", + "content": "Overall, the combination of LLMs and MARL brings new momentum to research and applications in multiagent systems. By enhancing collaboration through natural language understanding and leveraging large-scale knowledge, these systems can achieve greater efficiency and robustness in complex scenarios. However, fully unlocking their potential requires further exploration in theoretical methods, technological development, and ethical practices. With systematic advancements in these areas, LLMs-enhanced MARL can become the foundation for next-generation intelligent decision-making systems, transforming fields like autonomous driving, collaborative robotics, and healthcare, while shaping the future of AI research." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 306, + 111, + 370, + 122 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 111, + 370, + 122 + ], + "spans": [ + { + "bbox": [ + 306, + 111, + 370, + 122 + ], + "type": "text", + "content": "7. Conclusion" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 305, + 129, + 532, + 452 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 129, + 532, + 452 + ], + "spans": [ + { + "bbox": [ + 305, + 129, + 532, + 452 + ], + "type": "text", + "content": "Multi-agent cooperative decision-making has demonstrated remarkable potential in addressing complex tasks through intelligent collaboration and adaptability. In this survey, we systematically review the evolution of multi-agent systems, highlighting the shift from traditional methods, such as rule-based and game-theory approaches, to advanced paradigms like MARL and LLMs. We differentiate these methods by examining their unique capabilities, challenges, and applications in diverse environments, paying particular attention to dynamic and uncertain settings. In addition, we explore the critical role of simulation environments as a bridge between theoretical advancements and real-world implementation, emphasizing their influence on agent interaction, learning, and decision-making. Practical applications of multi-agent systems in domains such as autonomous driving, disaster response, and robotics further underscore their transformative potential. 
By summarizing advanced multi-agent decision-making methodologies, datasets, benchmarks, and future research directions, this survey aims to provide a comprehensive resource for researchers and practitioners. We hope it inspires future studies to address existing challenges, such as improving inter-agent communication and adaptability, while leveraging the innovative potential of DRL and LLMs-based approaches to advance multi-agent cooperative decision-making." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 306, + 466, + 386, + 479 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 466, + 386, + 479 + ], + "spans": [ + { + "bbox": [ + 306, + 466, + 386, + 479 + ], + "type": "text", + "content": "Acknowledgment" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 484, + 532, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 484, + 532, + 665 + ], + "spans": [ + { + "bbox": [ + 305, + 484, + 532, + 665 + ], + "type": "text", + "content": "The corresponding authors of this survey are B. Zhao and G.Yang from Xi'an Jiaotong University and Imperial College London. Guang Yang was supported in part by the ERC IMI (101005122), the H2020 (952172), the MRC (MC/PC/21013), the Royal Society (IEC\\NSFC\\211235), the NVIDIA Academic Hardware Grant Program, the SABER project supported by Boehringer Ingelheim Ltd, NIHR Imperial Biomedical Research Centre (RDA01), Wellcome Leap Dynamic Resilience, UKRI guarantee funding for Horizon Europe MSCA Postdoctoral Fellowships (EP/Z002206/1), and the UKRI Future Leaders Fellowship (MR/V023799/1). The authors would like to thank the editors and anonymous reviewers, who significantly enhanced the quality of the survey." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 305, + 678, + 531, + 703 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 678, + 531, + 703 + ], + "spans": [ + { + "bbox": [ + 305, + 678, + 531, + 703 + ], + "type": "text", + "content": "Declaration of Generative AI and AI-assisted Technologies in the Writing Process" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 305, + 708, + 532, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 708, + 532, + 733 + ], + "spans": [ + { + "bbox": [ + 305, + 708, + 532, + 733 + ], + "type": "text", + "content": "During the preparation of this work, the authors utilized generative AI and AI-assisted technologies for" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 739, + 303, + 749 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 739, + 303, + 749 + ], + "spans": [ + { + "bbox": [ + 290, + 739, + 303, + 749 + ], + "type": "text", + "content": "39" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 39 + }, + { + "para_blocks": [ + { + "bbox": [ + 61, + 111, + 286, + 183 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 111, + 286, + 183 + ], + "spans": [ + { + "bbox": [ + 61, + 111, + 286, + 183 + ], + "type": "text", + "content": "proofreading and enhancing readability and language clarity in certain sections. The authors have carefully reviewed these contents to ensure accuracy and completeness, acknowledging that AI can generate authoritative-sounding output that may be incorrect, incomplete, or biased." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 62, + 200, + 286, + 236 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 200, + 286, + 236 + ], + "spans": [ + { + "bbox": [ + 62, + 200, + 286, + 236 + ], + "type": "text", + "content": "Appendix A. Technological Comparisons between Single-Agent and Multi-Agent (Under Reinforcement Learning)" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 61, + 243, + 286, + 278 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 243, + 286, + 278 + ], + "spans": [ + { + "bbox": [ + 61, + 243, + 286, + 278 + ], + "type": "text", + "content": "Here, we discuss a series of technological comparisons of both DRL-based single-agent and MARL-based multi-agent research." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 61, + 279, + 286, + 373 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 279, + 286, + 373 + ], + "spans": [ + { + "bbox": [ + 61, + 279, + 286, + 373 + ], + "type": "text", + "content": "In solving these single-agent sequential decision-making problems, Markov Decision Processes (MDP) is a powerful mathematical modeling framework, especially in uncertain environments. Since the decision-making process of an agent can inherently be modeled as a sequence of decisions, the single-agent decision-making process can be formulated as an typical MDP, similar to a Markov chain." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 61, + 375, + 286, + 516 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 375, + 286, + 516 + ], + "spans": [ + { + "bbox": [ + 61, + 375, + 286, + 516 + ], + "type": "text", + "content": "In contrast to single-agent DRL systems, multi-agent systems under the MARL techniques involve multiple agents interacting within a shared environment. POMDP is a powerful mathematical modeling framework. It is an extension of the MDP framework that is particularly well-suited for modeling decision-making in environments where the agent does not have full visibility of the entire state space. POMDPs extend MDPs to environments where the agent cannot fully observe the underlying state. Instead, the agent maintains a belief state, which is a probability distribution over the possible states." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 61, + 518, + 286, + 577 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 518, + 286, + 577 + ], + "spans": [ + { + "bbox": [ + 61, + 518, + 286, + 577 + ], + "type": "text", + "content": "Figure A.25 provides a comparative illustration of Markov Decision Processes (MDP) and Partially Observable Markov Decision Processes (POMDP), which correspond to single-agent and multi-agent reinforcement learning paradigms, respectively." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "spans": [ + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "text", + "content": "The left side of Figure A.25 depicts an MDP, which models single-agent decision-making in a fully observable environment. 
The agent selects an action " + }, + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "inline_equation", + "content": "a" + }, + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "text", + "content": " from the action space " + }, + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "text", + "content": " based on the current state " + }, + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "text", + "content": " from the state space " + }, + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "text", + "content": ". The environment transitions to a new state " + }, + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "inline_equation", + "content": "s'" + }, + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "text", + "content": " following the transition probability function " + }, + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "inline_equation", + "content": "P(s' \\mid s, a)" + }, + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "text", + "content": ", and the agent receives a reward " + }, + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "text", + "content": ". The objective is to optimize a policy " + }, + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "inline_equation", + "content": "\\pi^*" + }, + { + "bbox": [ + 61, + 577, + 286, + 696 + ], + "type": "text", + "content": " that maximizes the cumulative reward. Since the entire state is observable, the decision-making process is relatively straightforward." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 61, + 697, + 286, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 697, + 286, + 732 + ], + "spans": [ + { + "bbox": [ + 61, + 697, + 286, + 732 + ], + "type": "text", + "content": "On the right side, the POMDP framework extends MDPs to multi-agent settings where agents operate under partial observability. Each agent " + }, + { + "bbox": [ + 61, + 697, + 286, + 732 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 61, + 697, + 286, + 732 + ], + "type": "text", + "content": " receives only a" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 111, + 531, + 195 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 111, + 531, + 195 + ], + "spans": [ + { + "bbox": [ + 305, + 111, + 531, + 195 + ], + "type": "text", + "content": "partial observation " + }, + { + "bbox": [ + 305, + 111, + 531, + 195 + ], + "type": "inline_equation", + "content": "o^i" + }, + { + "bbox": [ + 305, + 111, + 531, + 195 + ], + "type": "text", + "content": " rather than the full state " + }, + { + "bbox": [ + 305, + 111, + 531, + 195 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 305, + 111, + 531, + 195 + ], + "type": "text", + "content": ". 
The agents take individual actions " + }, + { + "bbox": [ + 305, + 111, + 531, + 195 + ], + "type": "inline_equation", + "content": "a^i" + }, + { + "bbox": [ + 305, + 111, + 531, + 195 + ], + "type": "text", + "content": ", forming a joint action " + }, + { + "bbox": [ + 305, + 111, + 531, + 195 + ], + "type": "inline_equation", + "content": "a_t" + }, + { + "bbox": [ + 305, + 111, + 531, + 195 + ], + "type": "text", + "content": ", which influences state transitions and results in individual rewards " + }, + { + "bbox": [ + 305, + 111, + 531, + 195 + ], + "type": "inline_equation", + "content": "r^i" + }, + { + "bbox": [ + 305, + 111, + 531, + 195 + ], + "type": "text", + "content": ". The observations are generated according to the observation function " + }, + { + "bbox": [ + 305, + 111, + 531, + 195 + ], + "type": "inline_equation", + "content": "Z(o \\mid s', a)" + }, + { + "bbox": [ + 305, + 111, + 531, + 195 + ], + "type": "text", + "content": ", requiring each agent to infer the missing state information and maintain a belief state for effective decision-making." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 305, + 195, + 531, + 326 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 195, + 531, + 326 + ], + "spans": [ + { + "bbox": [ + 305, + 195, + 531, + 326 + ], + "type": "text", + "content": "In summary, MDPs are well-suited for single-agent systems, where the environment is static and fully observable, allowing the agent to make optimal decisions based on complete knowledge of the state. On the other hand, POMDPs are crucial for multi-agent reinforcement learning scenarios, where multiple agents interact dynamically in an uncertain environment with limited information. This setting introduces challenges such as coordination, competition, and reward interdependencies, making decision-making significantly more complex." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 307, + 343, + 356, + 354 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 343, + 356, + 354 + ], + "spans": [ + { + "bbox": [ + 307, + 343, + 356, + 354 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 315, + 362, + 530, + 731 + ], + "type": "list", + "angle": 0, + "index": 18, + "blocks": [ + { + "bbox": [ + 315, + 362, + 530, + 401 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 315, + 362, + 530, + 401 + ], + "spans": [ + { + "bbox": [ + 315, + 362, + 530, + 401 + ], + "type": "text", + "content": "[1] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, M. Riedmiller, Playing atari with deep reinforcement learning (2013). arXiv:1312.5602. URL https://arxiv.org/abs/1312.5602" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 316, + 401, + 530, + 476 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 401, + 530, + 476 + ], + "spans": [ + { + "bbox": [ + 316, + 401, + 530, + 476 + ], + "type": "text", + "content": "[2] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, D. Hassabis, Human-level control through deep reinforcement learning, Nature 518 (7540) (2015) 529-533. doi:10.1038/nature14236. 
URL https://doi.org/10.1038/nature14236" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 316, + 476, + 530, + 551 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 476, + 530, + 551 + ], + "spans": [ + { + "bbox": [ + 316, + 476, + 530, + 551 + ], + "type": "text", + "content": "[3] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, D. Hassabis, Mastering the game of go with deep neural networks and tree search, Nature 529 (7587) (2016) 484-489. doi:10.1038/nature16961. URL https://doi.org/10.1038/nature16961" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 316, + 552, + 530, + 618 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 552, + 530, + 618 + ], + "spans": [ + { + "bbox": [ + 316, + 552, + 530, + 618 + ], + "type": "text", + "content": "[4] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, D. Hassabis, Mastering the game of go without human knowledge, Nature 550 (7676) (2017) 354-359. doi:10.1038/nature24270. URL https://doi.org/10.1038/nature24270" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 618, + 530, + 665 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 618, + 530, + 665 + ], + "spans": [ + { + "bbox": [ + 316, + 618, + 530, + 665 + ], + "type": "text", + "content": "[5] T. Li, K. Zhu, N. C. Luong, D. Niyato, Q. Wu, Y. Zhang, B. Chen, Applications of multi-agent reinforcement learning in future internet: A comprehensive survey, IEEE Communications Surveys & Tutorials 24 (2) (2022) 1240-1279. doi: 10.1109/COMST.2022.3160697." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 666, + 530, + 702 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 666, + 530, + 702 + ], + "spans": [ + { + "bbox": [ + 316, + 666, + 530, + 702 + ], + "type": "text", + "content": "[6] S. Gronauer, K. Diepold, Multi-agent deep reinforcement learning: a survey, Artificial Intelligence Review 55 (2) (2022) 895-943. doi:10.1007/s10462-021-09996-w. URL https://doi.org/10.1007/s10462-021-09996-w" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 703, + 530, + 731 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 703, + 530, + 731 + ], + "spans": [ + { + "bbox": [ + 316, + 703, + 530, + 731 + ], + "type": "text", + "content": "[7] P. Yadav, A. Mishra, S. Kim, A comprehensive survey on multi-agent reinforcement learning for connected and automated vehicles, Sensors 23 (10) (2023). 
doi:10.3390/" + } + ] + } + ], + "index": 17 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 739, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 739, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 290, + 739, + 302, + 748 + ], + "type": "text", + "content": "40" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 40 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 81, + 112, + 286, + 253 + ], + "blocks": [ + { + "bbox": [ + 81, + 112, + 286, + 253 + ], + "lines": [ + { + "bbox": [ + 81, + 112, + 286, + 253 + ], + "spans": [ + { + "bbox": [ + 81, + 112, + 286, + 253 + ], + "type": "image", + "image_path": "424dcd0683a6366cd83793e431932c1cd0e1322fbc5f7fa3309a42502a8009c0.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 82, + 256, + 266, + 269 + ], + "lines": [ + { + "bbox": [ + 82, + 256, + 266, + 269 + ], + "spans": [ + { + "bbox": [ + 82, + 256, + 266, + 269 + ], + "type": "text", + "content": "Markov Decision Processes (MDP) " + }, + { + "bbox": [ + 82, + 256, + 266, + 269 + ], + "type": "inline_equation", + "content": "\\langle S, A, P, R, \\gamma \\rangle" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 286, + 112, + 512, + 250 + ], + "blocks": [ + { + "bbox": [ + 286, + 112, + 512, + 250 + ], + "lines": [ + { + "bbox": [ + 286, + 112, + 512, + 250 + ], + "spans": [ + { + "bbox": [ + 286, + 112, + 512, + 250 + ], + "type": "image", + "image_path": "82ea2fe70ae861282e612beb352235a6bdbd2b72f4eb15bd6ddd82ee786cb44d.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 317, + 255, + 464, + 280 + ], + "lines": [ + { + "bbox": [ + 317, + 255, + 464, + 280 + ], + "spans": [ + { + "bbox": [ + 317, + 255, + 464, + 280 + ], + "type": "text", + "content": "Partially Observable Markov Decision Processes (POMDP) " + }, + { + "bbox": [ + 317, + 255, + 464, + 280 + ], + "type": "inline_equation", + "content": "\\langle S, A, P, R, \\gamma, O, Z \\rangle" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 61, + 295, + 531, + 316 + ], + "lines": [ + { + "bbox": [ + 61, + 295, + 531, + 316 + ], + "spans": [ + { + "bbox": [ + 61, + 295, + 531, + 316 + ], + "type": "text", + "content": "Figure A.25: The Markov Decision Process modeling for the single-agent reinforcement learning paradigm (left) and the Partially Observable Markov Decision Process modeling for the multi-agent reinforcement learning paradigm (right)." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 337, + 286, + 724 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 85, + 337, + 127, + 346 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 337, + 127, + 346 + ], + "spans": [ + { + "bbox": [ + 85, + 337, + 127, + 346 + ], + "type": "text", + "content": "s23104710." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 85, + 346, + 279, + 355 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 346, + 279, + 355 + ], + "spans": [ + { + "bbox": [ + 85, + 346, + 279, + 355 + ], + "type": "text", + "content": "URL https://www.mdpi.com/1424-8220/23/10/4710" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 71, + 355, + 286, + 392 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 355, + 286, + 392 + ], + "spans": [ + { + "bbox": [ + 71, + 355, + 286, + 392 + ], + "type": "text", + "content": "[8] J. Orr, A. Dutta, Multi-agent deep reinforcement learning for multi-robot applications: A survey, Sensors 23 (7) (2023). doi:10.3390/s23073625. URL https://www.mdpi.com/1424-8220/23/7/3625" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 71, + 393, + 286, + 459 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 393, + 286, + 459 + ], + "spans": [ + { + "bbox": [ + 71, + 393, + 286, + 459 + ], + "type": "text", + "content": "[9] W. Jin, B. Zhao, Y. Zhang, J. Huang, H. Yu, Wordtrans-absa: Enhancing aspect-based sentiment analysis with masked language modeling for affective token prediction, Expert Systems with Applications 238 (2024) 122289. doi:https://doi.org/10.1016/j.eswa.2023.122289. URL https://www.sciencedirect.com/science/article/pii/S0957417423027914" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 460, + 286, + 525 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 460, + 286, + 525 + ], + "spans": [ + { + "bbox": [ + 67, + 460, + 286, + 525 + ], + "type": "text", + "content": "[10] B. Zhao, W. Jin, Y. Zhang, S. Huang, G. Yang, Prompt learning for metonymy resolution: Enhancing performance with internal prior knowledge of pre-trained language models, Knowledge-Based Systems 279 (2023) 110928. doi:https://doi.org/10.1016/j.knosys.2023.110928. URL https://www.sciencedirect.com/science/article/pii/S0950705123006780" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 526, + 286, + 573 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 526, + 286, + 573 + ], + "spans": [ + { + "bbox": [ + 67, + 526, + 286, + 573 + ], + "type": "text", + "content": "[11] A. Seewald, C. J. Lerch, M. Chancan, A. M. Dollar, I. Abraham, Energy-aware ergodic search: Continuous exploration for multi-agent systems with battery constraints (2024). arXiv: 2310.09470. URL https://arxiv.org/abs/2310.09470" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 67, + 573, + 286, + 620 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 573, + 286, + 620 + ], + "spans": [ + { + "bbox": [ + 67, + 573, + 286, + 620 + ], + "type": "text", + "content": "[12] M. M. H. Qazzaz, S. A. R. Zaidi, D. C. McLernon, A. Salama, A. A. Al-Hameed, Optimizing search and rescue UAV connectivity in challenging terrain through multi q-learning (2024). arXiv:2405.10042. URL https://arxiv.org/abs/2405.10042" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 67, + 620, + 286, + 666 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 620, + 286, + 666 + ], + "spans": [ + { + "bbox": [ + 67, + 620, + 286, + 666 + ], + "type": "text", + "content": "[13] G. T. Papadopoulos, M. Antona, C. Stephanidis, Towards open and expandable cognitive ai architectures for large-scale multi-agent human-robot collaborative learning, IEEE Access 9 (2021) 73890-73909. doi:10.1109/ACCESS.2021.3080517." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 67, + 667, + 286, + 714 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 667, + 286, + 714 + ], + "spans": [ + { + "bbox": [ + 67, + 667, + 286, + 714 + ], + "type": "text", + "content": "[14] M. D. Silva, R. Regnier, M. Makarov, G. Avrin, D. Dumur, Evaluation of intelligent collaborative robots: a review, in: 2023 IEEE/SICE International Symposium on System Integration (SII), 2023, pp. 1-7. doi:10.1109/SII55687.2023.10039365." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 67, + 714, + 286, + 724 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 714, + 286, + 724 + ], + "spans": [ + { + "bbox": [ + 67, + 714, + 286, + 724 + ], + "type": "text", + "content": "[15] Y. Huang, S. Wu, Z. Mu, X. Long, S. Chu, G. Zhao, A multi" + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 312, + 337, + 530, + 724 + ], + "type": "list", + "angle": 0, + "index": 25, + "blocks": [ + { + "bbox": [ + 329, + 337, + 530, + 374 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 337, + 530, + 374 + ], + "spans": [ + { + "bbox": [ + 329, + 337, + 530, + 374 + ], + "type": "text", + "content": "agent reinforcement learning method for swarm robots in space collaborative exploration, in: 2020 6th International Conference on Control, Automation and Robotics (ICCAR), 2020, pp. 139-144. doi:10.1109/ICCAR49639.2020.9107997." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 312, + 375, + 530, + 440 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 375, + 530, + 440 + ], + "spans": [ + { + "bbox": [ + 312, + 375, + 530, + 440 + ], + "type": "text", + "content": "[16] C. G. Cena, P. F. Cardenas, R. S. Pazmino, L. Puglisi, R. A. Santonja, A cooperative multi-agent robotics system: Design and modelling, Expert Systems with Applications 40 (12) (2013) 4737-4748. doi:https://doi.org/10.1016/j.eswa.2013.01.048. URL https://www.sciencedirect.com/science/article/pii/S0957417413000791" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 312, + 441, + 530, + 487 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 441, + 530, + 487 + ], + "spans": [ + { + "bbox": [ + 312, + 441, + 530, + 487 + ], + "type": "text", + "content": "[17] S. Jayawardana, V. Jayawardana, K. Vidanage, C. Wu, Multi-behavior learning for socially compatible autonomous driving, in: 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), 2023, pp. 4422-4427. doi:10.1109/ITSC57777.2023.10422120." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 312, + 487, + 530, + 534 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 487, + 530, + 534 + ], + "spans": [ + { + "bbox": [ + 312, + 487, + 530, + 534 + ], + "type": "text", + "content": "[18] L. Wen, J. Duan, S. E. Li, S. Xu, H. Peng, Safe reinforcement learning for autonomous vehicles through parallel constrained policy optimization, in: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), 2020, pp. 1-7. doi:10.1109/ITSC45102.2020.9294262." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 312, + 535, + 530, + 582 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 535, + 530, + 582 + ], + "spans": [ + { + "bbox": [ + 312, + 535, + 530, + 582 + ], + "type": "text", + "content": "[19] L. Weiwei, H. Wenxuan, J. Wei, L. Lanxin, G. Lingping, L. 
Yong, Learning to model diverse driving behaviors in highly interactive autonomous driving scenarios with multi-agent reinforcement learning (2024). arXiv:2402.13481. URL https://arxiv.org/abs/2402.13481" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 312, + 582, + 530, + 620 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 582, + 530, + 620 + ], + "spans": [ + { + "bbox": [ + 312, + 582, + 530, + 620 + ], + "type": "text", + "content": "[20] Y. Xue, W. Chen, Multi-agent deep reinforcement learning for uavs navigation in unknown complex environment, IEEE Transactions on Intelligent Vehicles 9 (1) (2024) 2290-2303. doi:10.1109/TIV.2023.3298292." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 312, + 620, + 530, + 657 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 620, + 530, + 657 + ], + "spans": [ + { + "bbox": [ + 312, + 620, + 530, + 657 + ], + "type": "text", + "content": "[21] S. Rezwan, W. Choi, Artificial intelligence approaches for uav navigation: Recent advances and future challenges, IEEE Access 10 (2022) 26320-26339. doi:10.1109/ACCESS.2022.3157626." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 312, + 658, + 530, + 714 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 658, + 530, + 714 + ], + "spans": [ + { + "bbox": [ + 312, + 658, + 530, + 714 + ], + "type": "text", + "content": "[22] B. Al Baroomi, T. Myo, M. R. Ahmed, A. Al Shibli, M. H. Marhaban, M. S. Kaiser, Ant colony optimization-based path planning for uav navigation in dynamic environments, in: 2023 7th International Conference on Automation, Control and Robots (ICACR), 2023, pp. 168-173. doi:10.1109/ICACR59381.2023.10314603." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 312, + 714, + 530, + 724 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 714, + 530, + 724 + ], + "spans": [ + { + "bbox": [ + 312, + 714, + 530, + 724 + ], + "type": "text", + "content": "[23] T. Samad, S. Iqbal, A. W. Malik, O. Arif, P. Bloodsworth, A" + } + ] + } + ], + "index": 24 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "text", + "content": "41" + } + ] + } + ], + "index": 26 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 41 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 113, + 286, + 728 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 84, + 113, + 286, + 141 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 113, + 286, + 141 + ], + "spans": [ + { + "bbox": [ + 84, + 113, + 286, + 141 + ], + "type": "text", + "content": "multi-agent framework for cloud-based management of collaborative robots, International Journal of Advanced Robotic Systems 15 (4) (2018). doi:10.1177/1729881418785073." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 142, + 286, + 188 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 142, + 286, + 188 + ], + "spans": [ + { + "bbox": [ + 67, + 142, + 286, + 188 + ], + "type": "text", + "content": "[24] W. Du, S. Ding, A survey on multi-agent deep reinforcement learning: from the perspective of challenges and applications, Artificial Intelligence Review 54 (5) (2021) 3215-3238. doi: 10.1007/s10462-020-09938-y. 
URL https://doi.org/10.1007/s10462-020-09938-y" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 189, + 286, + 245 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 189, + 286, + 245 + ], + "spans": [ + { + "bbox": [ + 67, + 189, + 286, + 245 + ], + "type": "text", + "content": "[25] Z. Ning, L. Xie, A survey on multi-agent reinforcement learning and its application, Journal of Automation and Intelligence 3 (2) (2024) 73-91. doi:https://doi.org/10.1016/j.jai.2024.02.003. URL https://www.sciencedirect.com/science/article/pii/S2949855424000042" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 246, + 286, + 283 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 246, + 286, + 283 + ], + "spans": [ + { + "bbox": [ + 67, + 246, + 286, + 283 + ], + "type": "text", + "content": "[26] Q. Yang, R. Liu, Understanding the application of utility theory in robotics and artificial intelligence: A survey (2023). arXiv: 2306.09445. URL https://arxiv.org/abs/2306.09445" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 284, + 286, + 321 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 284, + 286, + 321 + ], + "spans": [ + { + "bbox": [ + 67, + 284, + 286, + 321 + ], + "type": "text", + "content": "[27] P. Hernandez-Leal, M. Kaisers, T. Baarslag, E. M. de Cote, A survey of learning in multiagent environments: Dealing with non-stationarity (2019). arXiv:1707.09183. URL https://arxiv.org/abs/1707.09183" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 322, + 286, + 368 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 322, + 286, + 368 + ], + "spans": [ + { + "bbox": [ + 67, + 322, + 286, + 368 + ], + "type": "text", + "content": "[28] C. Zhu, M. Dastani, S. Wang, A survey of multi-agent deep reinforcement learning with communication, Autonomous Agents and Multi-Agent Systems 38 (1) (2024) 4. doi: 10.1007/s10458-023-09633-6. URL https://doi.org/10.1007/s10458-023-09633-6" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 369, + 286, + 414 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 369, + 286, + 414 + ], + "spans": [ + { + "bbox": [ + 67, + 369, + 286, + 414 + ], + "type": "text", + "content": "[29] T. T. Nguyen, N. D. Nguyen, S. Nahavandi, Deep reinforcement learning for multiagent systems: A review of challenges, solutions, and applications, IEEE Transactions on Cybernetics 50 (9) (2020) 3826-3839. doi:10.1109/TCYB.2020.2977374." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 416, + 286, + 472 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 416, + 286, + 472 + ], + "spans": [ + { + "bbox": [ + 67, + 416, + 286, + 472 + ], + "type": "text", + "content": "[30] L. Wang, C. Ma, X. Feng, Z. Zhang, H. Yang, J. Zhang, Z. Chen, J. Tang, X. Chen, Y. Lin, W. X. Zhao, Z. Wei, J. Wen, A survey on large language model based autonomous agents, Frontiers of Computer Science 18 (6) (2024) 186345. doi:10.1007/s11704-024-40231-1. URL https://doi.org/10.1007/s11704-024-40231-1" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 473, + 286, + 528 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 473, + 286, + 528 + ], + "spans": [ + { + "bbox": [ + 67, + 473, + 286, + 528 + ], + "type": "text", + "content": "[31] B. Zhao, W. Jin, J. Del Ser, G. 
Yang, Chatagri: Exploring potentials of chatgpt on cross-linguistic agricultural text classification, Neurocomputing 557 (2023) 126708. doi: https://doi.org/10.1016/j.neucom.2023.126708. URL https://www.sciencedirect.com/science/article/pii/S0925231223008317" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 529, + 286, + 566 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 529, + 286, + 566 + ], + "spans": [ + { + "bbox": [ + 67, + 529, + 286, + 566 + ], + "type": "text", + "content": "[32] T. Miki, M. Nagao, H. Kobayashi, T. Nakamura, A simple rule based multi-agent control algorithm and its implementation using autonomous mobile robots, in: 2010 World Automation Congress, 2010, pp. 1-6." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 567, + 286, + 623 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 567, + 286, + 623 + ], + "spans": [ + { + "bbox": [ + 67, + 567, + 286, + 623 + ], + "type": "text", + "content": "[33] H. Yarahmadi, H. Navidi, M. Challenger, Improving the resource allocation in IoT systems based on the integration of reinforcement learning and rule-based approaches in multi-agent systems, in: 2024 8th International Conference on Smart Cities, Internet of Things and Applications (SCIIoT), 2024, pp. 135-141. doi:10.1109/SCIIoT62588.2024.10570102." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 67, + 624, + 286, + 661 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 624, + 286, + 661 + ], + "spans": [ + { + "bbox": [ + 67, + 624, + 286, + 661 + ], + "type": "text", + "content": "[34] S.-H. Wu, V.-W. Soo, A fuzzy game theoretic approach to multi-agent coordination, in: T. Ishida (Ed.), Multiagent Platforms, Springer Berlin Heidelberg, Berlin, Heidelberg, 1999, pp. 76-87. doi:10.1007/3-540-48826-X\\_6." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 67, + 662, + 286, + 708 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 662, + 286, + 708 + ], + "spans": [ + { + "bbox": [ + 67, + 662, + 286, + 708 + ], + "type": "text", + "content": "[35] H. Zhang, J. Zhang, G.-H. Yang, Y. Luo, Leader-based optimal coordination control for the consensus problem of multiagent differential games via fuzzy adaptive dynamic programming, IEEE Transactions on Fuzzy Systems 23 (1) (2015) 152-163. doi:10.1109/TFUZZ.2014.2310238." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 67, + 708, + 286, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 708, + 286, + 728 + ], + "spans": [ + { + "bbox": [ + 67, + 708, + 286, + 728 + ], + "type": "text", + "content": "[36] F. Ren, M. Zhang, Q. Bai, A fuzzy-based approach for partner selection in multi-agent systems, in: 6th IEEE/ACIS Interna" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 312, + 113, + 530, + 727 + ], + "type": "list", + "angle": 0, + "index": 30, + "blocks": [ + { + "bbox": [ + 329, + 113, + 530, + 132 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 113, + 530, + 132 + ], + "spans": [ + { + "bbox": [ + 329, + 113, + 530, + 132 + ], + "type": "text", + "content": "tional Conference on Computer and Information Science (ICIS 2007), 2007, pp. 457-462. doi:10.1109/ICIS.2007.21." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 312, + 133, + 530, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 133, + 530, + 160 + ], + "spans": [ + { + "bbox": [ + 312, + 133, + 530, + 160 + ], + "type": "text", + "content": "[37] D. Gu, H. Hu, Fuzzy multi-agent cooperative q-learning, in: 2005 IEEE International Conference on Information Acquisition, 2005, p. 5 pp. doi:10.1109/ICIA.2005.1635080." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 312, + 161, + 530, + 206 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 161, + 530, + 206 + ], + "spans": [ + { + "bbox": [ + 312, + 161, + 530, + 206 + ], + "type": "text", + "content": "[38] J. Wang, Y. Hong, J. Wang, J. Xu, Y. Tang, Q.-L. Han, J. Kurths, Cooperative and competitive multi-agent systems: From optimization to games, IEEE/CAA Journal of Automatica Sinica 9 (5) (2022) 763-783. doi:10.1109/JAS.2022.105506." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 312, + 208, + 530, + 254 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 208, + 530, + 254 + ], + "spans": [ + { + "bbox": [ + 312, + 208, + 530, + 254 + ], + "type": "text", + "content": "[39] Y. Guo, Q. Pan, Q. Sun, C. Zhao, D. Wang, M. Feng, Cooperative game-based multi-agent path planning with obstacle avoidance, in: 2019 IEEE 28th International Symposium on Industrial Electronics (ISIE), 2019, pp. 1385-1390. doi: 10.1109/ISIE.2019.8781205." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 312, + 255, + 530, + 292 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 255, + 530, + 292 + ], + "spans": [ + { + "bbox": [ + 312, + 255, + 530, + 292 + ], + "type": "text", + "content": "[40] D. Schwung, A. Schwung, S. X. Ding, Distributed self-optimization of modular production units: A state-based potential game approach, IEEE Transactions on Cybernetics 52 (4) (2022) 2174–2185. doi:10.1109/TCYB.2020.3006620." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 312, + 293, + 530, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 293, + 530, + 330 + ], + "spans": [ + { + "bbox": [ + 312, + 293, + 530, + 330 + ], + "type": "text", + "content": "[41] X. Wang, J. Wang, J. Chen, Y. Yang, L. Kong, X. Liu, L. Jia, Y. Xu, A game-theoretic learning framework for multi-agent intelligent wireless networks (2019). arXiv:1812.01267. URL https://arxiv.org/abs/1812.01267" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 312, + 331, + 530, + 386 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 331, + 530, + 386 + ], + "spans": [ + { + "bbox": [ + 312, + 331, + 530, + 386 + ], + "type": "text", + "content": "[42] W. Lin, Y. Chen, Q. Q. Wang, J. Zeng, J. Liu, Multi-agents based distributed-energy-resource management for intelligent microgrid with potential game algorithm, in: IECON 2017 - 43rd Annual Conference of the IEEE Industrial Electronics Society, 2017, pp. 7795-7800. doi:10.1109/IECON.2017.8217366." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 312, + 387, + 530, + 433 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 387, + 530, + 433 + ], + "spans": [ + { + "bbox": [ + 312, + 387, + 530, + 433 + ], + "type": "text", + "content": "[43] H. Wang, Z. Ning, H. Luo, Y. Jiang, M. Huo, Game-based adaptive optimization approach for multi-agent systems, in: 2023 IEEE International Conference on Industrial Technology (ICIT), 2023, pp. 1-5. 
doi:10.1109/ICIT58465.2023.10143172." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 312, + 435, + 530, + 472 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 435, + 530, + 472 + ], + "spans": [ + { + "bbox": [ + 312, + 435, + 530, + 472 + ], + "type": "text", + "content": "[44] L. Bull, Evolutionary computing in multi-agent environments: Operators, in: V. W. Porto, N. Saravanan, D. Waagen, A. E. Eiben (Eds.), Evolutionary Programming VII, Springer Berlin Heidelberg, Berlin, Heidelberg, 1998, pp. 43-52." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 312, + 473, + 530, + 528 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 473, + 530, + 528 + ], + "spans": [ + { + "bbox": [ + 312, + 473, + 530, + 528 + ], + "type": "text", + "content": "[45] J. Liu, W. Zhong, L. Jiao, Multi-Agent Evolutionary Model for Global Numerical Optimization, Springer Berlin Heidelberg, Berlin, Heidelberg, 2010, pp. 13-48. doi:10.1007/978-3-642-13425-8\\2. URL https://doi.org/10.1007/978-3-642-13425-8_2" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 312, + 529, + 530, + 556 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 529, + 530, + 556 + ], + "spans": [ + { + "bbox": [ + 312, + 529, + 530, + 556 + ], + "type": "text", + "content": "[46] D. Bloembergen, K. Tuyls, D. Hennes, M. Kaisers, Evolutionary dynamics of multi-agent learning: a survey, J. Artif. Int. Res. 53 (1) (2015) 659-697." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 312, + 557, + 530, + 623 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 557, + 530, + 623 + ], + "spans": [ + { + "bbox": [ + 312, + 557, + 530, + 623 + ], + "type": "text", + "content": "[47] D. Klijn, A. E. Eiben, A coevolutionary approach to deep multi-agent reinforcement learning, in: Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '21, Association for Computing Machinery, New York, NY, USA, 2021, p. 283-284. doi:10.1145/3449726.3459576. URL https://doi.org/10.1145/3449726.3459576" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 312, + 624, + 530, + 661 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 624, + 530, + 661 + ], + "spans": [ + { + "bbox": [ + 312, + 624, + 530, + 661 + ], + "type": "text", + "content": "[48] S. Yuan, K. Song, J. Chen, X. Tan, D. Li, D. Yang, Evoagent: Towards automatic multi-agent generation via evolutionary algorithms (2024). arXiv:2406.14228. URL https://arxiv.org/abs/2406.14228" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 312, + 662, + 530, + 708 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 662, + 530, + 708 + ], + "spans": [ + { + "bbox": [ + 312, + 662, + 530, + 708 + ], + "type": "text", + "content": "[49] W. Zhang, H. Liu, Evolutionary game analysis of multi-agent cooperation strategy analysis in agricultural water conservancy ppp project under digitization background, Scientific Reports 14 (1) (2024) 22915. doi:10.1038/s41598-024-74065-5. URL https://doi.org/10.1038/s41598-024-74065-5" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 312, + 709, + 530, + 727 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 709, + 530, + 727 + ], + "spans": [ + { + "bbox": [ + 312, + 709, + 530, + 727 + ], + "type": "text", + "content": "[50] T. Rashid, M. Samvelyan, C. Schroeder, G. Farquhar, J. Foerster, S. 
Whiteson, QMIX: Monotonic value function" + } + ] + } + ], + "index": 29 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "text", + "content": "42" + } + ] + } + ], + "index": 31 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 42 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 113, + 286, + 728 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 84, + 113, + 286, + 169 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 113, + 286, + 169 + ], + "spans": [ + { + "bbox": [ + 84, + 113, + 286, + 169 + ], + "type": "text", + "content": "factorisation for deep multi-agent reinforcement learning, in: J. Dy, A. Krause (Eds.), Proceedings of the 35th International Conference on Machine Learning, Vol. 80 of Proceedings of Machine Learning Research, PMLR, 2018, pp. 4295-4304. URL https://proceedings.mlr.press/v80/ rashid18a.html" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 170, + 286, + 227 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 170, + 286, + 227 + ], + "spans": [ + { + "bbox": [ + 67, + 170, + 286, + 227 + ], + "type": "text", + "content": "[51] L. Kraemer, B. Banerjee, Multi-agent reinforcement learning as a rehearsal for decentralized planning, Neurocomputing 190 (2016) 82-94. doi:https://doi.org/10.1016/j.neucom.2016.01.031. URL https://www.sciencedirect.com/science/article/pii/S0925231216000783" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 227, + 286, + 273 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 227, + 286, + 273 + ], + "spans": [ + { + "bbox": [ + 67, + 227, + 286, + 273 + ], + "type": "text", + "content": "[52] B. Yu, Z. Cai, J. He, Fast-qmix: Accelerating deep multiagent reinforcement learning with virtual weighted q-values, in: 2021 2nd International Conference on Electronics, Communications and Information Technology (CECIT), 2021, pp. 594-599. doi:10.1109/CECIT53797.2021.00110." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 275, + 286, + 349 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 275, + 286, + 349 + ], + "spans": [ + { + "bbox": [ + 67, + 275, + 286, + 349 + ], + "type": "text", + "content": "[53] K. Son, D. Kim, W. J. Kang, D. E. Hostallero, Y. Yi, QTRAN: Learning to factorize with transformation for cooperative multi-agent reinforcement learning, in: K. Chaudhuri, R. Salakhutdinov (Eds.), Proceedings of the 36th International Conference on Machine Learning, Vol. 97 of Proceedings of Machine Learning Research, PMLR, 2019, pp. 5887-5896. URL https://proceedings.mlr.press/v97/son19a.htm1" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 349, + 286, + 406 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 349, + 286, + 406 + ], + "spans": [ + { + "bbox": [ + 67, + 349, + 286, + 406 + ], + "type": "text", + "content": "[54] T. Rashid, G. Farquhar, B. Peng, S. Whiteson, Weighted qmix: expanding monotonic value function factorisation for deep multi-agent reinforcement learning, in: Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS '20, Curran Associates Inc., Red Hook, NY, USA, 2020." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 407, + 286, + 472 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 407, + 286, + 472 + ], + "spans": [ + { + "bbox": [ + 67, + 407, + 286, + 472 + ], + "type": "text", + "content": "[55] K. Kurach, A. Raichuk, P. Stanczyk, M. Zajac, O. Bachem, L. Espeholt, C. Riquelme, D. Vincent, M. Michalski, O. Bousquet, S. Gelly, Google research football: A novel reinforcement learning environment, Proceedings of the AAAI Conference on Artificial Intelligence 34 (04) (2020) 4501-4510. doi:10.1609/aaai.v34i04.5878. URL https://doi.org/10.1609/aaai.v34i04.5878" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 473, + 286, + 520 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 473, + 286, + 520 + ], + "spans": [ + { + "bbox": [ + 67, + 473, + 286, + 520 + ], + "type": "text", + "content": "[56] J. Wang, Z. Ren, T. Liu, Y. Yu, C. Zhang, QPLEX: duplex dueling multi-agent q-learning, in: 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021, OpenReview.net, 2021. URL https://openreview.net/forum?id=RcmkOxxIQV" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 521, + 286, + 557 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 521, + 286, + 557 + ], + "spans": [ + { + "bbox": [ + 67, + 521, + 286, + 557 + ], + "type": "text", + "content": "[57] P. Li, V. Menon, B. Gudiguntla, D. Ting, L. Zhou, Challenges faced by large language models in solving multi-agent flocking (2024). arXiv:2404.04752. URL https://arxiv.org/abs/2404.04752" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 558, + 286, + 642 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 558, + 286, + 642 + ], + "spans": [ + { + "bbox": [ + 67, + 558, + 286, + 642 + ], + "type": "text", + "content": "[58] Z. Xu, C. Yu, F. Fang, Y. Wang, Y. Wu, Language agents with reinforcement learning for strategic play in the werewolf game, in: R. Salakhutdinov, Z. Kolter, K. Heller, A. Weller, N. Oliver, J. Scarlett, F. Berkenkamp (Eds.), Proceedings of the 41st International Conference on Machine Learning, Vol. 235 of Proceedings of Machine Learning Research, PMLR, 2024, pp. 55434-55464. URL https://proceedings.mlr.press/v235/xu24ad.htm1" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 643, + 286, + 671 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 643, + 286, + 671 + ], + "spans": [ + { + "bbox": [ + 67, + 643, + 286, + 671 + ], + "type": "text", + "content": "[59] I. Mordatch, P. Abbeel, Emergence of grounded compositional language in multi-agent populations, arXiv preprint arXiv:1703.04908 (2017)." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 671, + 286, + 717 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 671, + 286, + 717 + ], + "spans": [ + { + "bbox": [ + 67, + 671, + 286, + 717 + ], + "type": "text", + "content": "[60] H. Schwartz, An object oriented approach to fuzzy actor-critic learning for multi-agent differential games, in: 2019 IEEE Symposium Series on Computational Intelligence (SSCI), 2019, pp. 183-190. doi:10.1109/SSCI44817.2019.9002707." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 67, + 718, + 286, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 718, + 286, + 728 + ], + "spans": [ + { + "bbox": [ + 67, + 718, + 286, + 728 + ], + "type": "text", + "content": "[61] I. 
Harmati, Multi-agent coordination for target tracking using" + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 312, + 113, + 530, + 728 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 329, + 113, + 530, + 169 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 113, + 530, + 169 + ], + "spans": [ + { + "bbox": [ + 329, + 113, + 530, + 169 + ], + "type": "text", + "content": "fuzzy inference system in game theoretic framework, in: 2006 IEEE Conference on Computer Aided Control System Design, 2006 IEEE International Conference on Control Applications, 2006 IEEE International Symposium on Intelligent Control, 2006, pp. 2390-2395. doi:10.1109/CACSD-CCA-ISIC.2006.4777014." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 312, + 170, + 530, + 216 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 170, + 530, + 216 + ], + "spans": [ + { + "bbox": [ + 312, + 170, + 530, + 216 + ], + "type": "text", + "content": "[62] C. W. Khuen, C. H. Yong, F. Haron, Multi-agent negotiation system using adaptive fuzzy logic in resource allocation, in: The 2nd International Conference on Distributed Frameworks for Multimedia Applications, 2006, pp. 1-7. doi:10.1109/DFMA.2006.296888." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 312, + 217, + 530, + 263 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 217, + 530, + 263 + ], + "spans": [ + { + "bbox": [ + 312, + 217, + 530, + 263 + ], + "type": "text", + "content": "[63] L. Yan, J. Liu, C. P. Chen, Y. Zhang, Z. Wu, Z. Liu, Game-based adaptive fuzzy optimal bipartite containment of nonlinear multiagent systems, IEEE Transactions on Fuzzy Systems 32 (3) (2024) 1455-1465. doi:10.1109/TFUZZ.2023.3327699." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 312, + 264, + 530, + 320 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 264, + 530, + 320 + ], + "spans": [ + { + "bbox": [ + 312, + 264, + 530, + 320 + ], + "type": "text", + "content": "[64] R. R. P. Vicerra, K. K. A. David, A. R. dela Cruz, E. A. Roxas, K. B. C. Simbulan, A. A. Bandala, E. P. Dadios, A multiple level mimo fuzzy logic based intelligence for multiple agent cooperative robot system, in: TENCON 2015 - 2015 IEEE Region 10 Conference, 2015, pp. 1-7. doi:10.1109/TENCON.2015.7372985." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 312, + 321, + 530, + 359 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 321, + 530, + 359 + ], + "spans": [ + { + "bbox": [ + 312, + 321, + 530, + 359 + ], + "type": "text", + "content": "[65] D. Gu, E. Yang, Fuzzy policy gradient reinforcement learning for leader-follower systems, in: IEEE International Conference Mechatronics and Automation, 2005, Vol. 3, 2005, pp. 1557-1561 Vol. 3. doi:10.1109/ICMA.2005.1626787." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 312, + 359, + 530, + 396 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 359, + 530, + 396 + ], + "spans": [ + { + "bbox": [ + 312, + 359, + 530, + 396 + ], + "type": "text", + "content": "[66] Y. Maruyama, A reasoning system for fuzzy distributed knowledge representation in multi-agent systems, in: 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2021, pp. 1-6. doi:10.1109/FUZZ45933.2021.9494454." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 312, + 396, + 530, + 443 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 396, + 530, + 443 + ], + "spans": [ + { + "bbox": [ + 312, + 396, + 530, + 443 + ], + "type": "text", + "content": "[67] J. Peng, M. Liu, J. Liu, K.-C. Lin, M. Wu, A coordination model using fuzzy reinforcement learning for multi-agent system, in: 2007 International Symposium on Collaborative Technologies and Systems, 2007, pp. 129-136. doi:10.1109/CTS.2007.4621748." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 312, + 444, + 530, + 490 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 444, + 530, + 490 + ], + "spans": [ + { + "bbox": [ + 312, + 444, + 530, + 490 + ], + "type": "text", + "content": "[68] E. Yang, D. Gu, A multiagent fuzzy policy reinforcement learning algorithm with application to leader-follower robotic systems, in: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006, pp. 3197-3202. doi: 10.1109/IROS.2006.282421." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 312, + 491, + 530, + 538 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 491, + 530, + 538 + ], + "spans": [ + { + "bbox": [ + 312, + 491, + 530, + 538 + ], + "type": "text", + "content": "[69] H. Wang, H. Luo, Y. Jiang, A game-based distributed fault-tolerant control method for multi-agent systems, in: 2022 IEEE 1st Industrial Electronics Society Annual On-Line Conference (ONCON), 2022, pp. 1-5. doi:10.1109/ONCON56984.2022.10126545." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 312, + 539, + 530, + 594 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 539, + 530, + 594 + ], + "spans": [ + { + "bbox": [ + 312, + 539, + 530, + 594 + ], + "type": "text", + "content": "[70] M. Lanctot, V. Zambaldi, A. Gruslys, A. Lazaridou, K. Tuyls, A unified game-theoretic approach to multiagent reinforcement learning, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, Curran Associates Inc., Red Hook, NY, USA, 2017, p. 4193-4206." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 312, + 596, + 530, + 641 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 596, + 530, + 641 + ], + "spans": [ + { + "bbox": [ + 312, + 596, + 530, + 641 + ], + "type": "text", + "content": "[71] C. Guo, L. Zhong, T. Liu, A study on game theory in decision interaction for multi intelligent agents based on information fusion, in: Y. Wu (Ed.), Computing and Intelligent Systems, Springer Berlin Heidelberg, Berlin, Heidelberg, 2011, pp. 442-452." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 312, + 643, + 530, + 671 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 643, + 530, + 671 + ], + "spans": [ + { + "bbox": [ + 312, + 643, + 530, + 671 + ], + "type": "text", + "content": "[72] J. Zhang, J. Zhang, The coverage control solutions based on evolutionary game theory in the multi-agent systems, in: 2019 12th Asian Control Conference (ASCC), 2019, pp. 1-6." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 312, + 671, + 530, + 717 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 671, + 530, + 717 + ], + "spans": [ + { + "bbox": [ + 312, + 671, + 530, + 717 + ], + "type": "text", + "content": "[73] R. Kong, N. Zhang, X. Bao, X. 
Lu, Multi-agent distributed optimization based on the game theory and its application in automobile chassis, in: 2018 37th Chinese Control Conference (CCC), 2018, pp. 1107-1111. doi:10.23919/ChiCC.2018.8484212." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 312, + 718, + 530, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 718, + 530, + 728 + ], + "spans": [ + { + "bbox": [ + 312, + 718, + 530, + 728 + ], + "type": "text", + "content": "[74] H. Wang, X. Zhang, H. Luo, X. Qiao, M. Huo, Y. Jiang, Dis" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "text", + "content": "43" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 43 + }, + { + "para_blocks": [ + { + "bbox": [ + 67, + 113, + 286, + 728 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 84, + 113, + 286, + 159 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 113, + 286, + 159 + ], + "spans": [ + { + "bbox": [ + 84, + 113, + 286, + 159 + ], + "type": "text", + "content": "tributed fault tolerant control for multi-agent systems with sensor faults in non-cooperative games, in: 2023 CAA Symposium on Fault Detection, Supervision and Safety for Technical Processes (SAFEPPROCESS), 2023, pp. 1-6. doi:10.1109/ SAFEPPROCESS58597.2023.10295775." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 160, + 286, + 198 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 160, + 286, + 198 + ], + "spans": [ + { + "bbox": [ + 67, + 160, + 286, + 198 + ], + "type": "text", + "content": "[75] X. Dong, X. Li, S. Cheng, Energy management optimization of microgrid cluster based on multi-agent-system and hierarchical stackelberg game theory, IEEE Access 8 (2020) 206183-206197. doi:10.1109/ACCESS.2020.3037676." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 199, + 286, + 235 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 199, + 286, + 235 + ], + "spans": [ + { + "bbox": [ + 67, + 199, + 286, + 235 + ], + "type": "text", + "content": "[76] P. H. Nguyen, W. L. Kling, P. F. Ribeiro, A game theory strategy to integrate distributed agent-based functions in smart grids, IEEE Transactions on Smart Grid 4 (1) (2013) 568-576. doi:10.1109/TSG.2012.2236657." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 67, + 236, + 286, + 282 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 236, + 286, + 282 + ], + "spans": [ + { + "bbox": [ + 67, + 236, + 286, + 282 + ], + "type": "text", + "content": "[77] S. U. Khan, I. Ahmad, A semi-distributed axiomatic game theoretical mechanism for replicating data objects in large distributed computing systems, in: 2007 IEEE International Parallel and Distributed Processing Symposium, 2007, pp. 1-10. doi:10.1109/IPDPS.2007.370279." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 283, + 286, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 283, + 286, + 330 + ], + "spans": [ + { + "bbox": [ + 67, + 283, + 286, + 330 + ], + "type": "text", + "content": "[78] L. Xu, J. Zhu, B. Chen, Z. Yang, K. Liu, B. Dang, T. Zhang, Y. Yang, R. 
Huang, A distributed nanocluster based multiagent evolutionary network, Nature Communications 13 (1) (2022) 4698. doi:10.1038/s41467-022-32497-5. URL https://doi.org/10.1038/s41467-022-32497-5" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 67, + 331, + 286, + 396 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 331, + 286, + 396 + ], + "spans": [ + { + "bbox": [ + 67, + 331, + 286, + 396 + ], + "type": "text", + "content": "[79] F. Seredynski, T. Kulpa, R. Hoffmann, Towards evolutionary self-optimization of large multi-agent systems, in: Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '22, Association for Computing Machinery, New York, NY, USA, 2022, p. 200-203. doi: 10.1145/3520304.3529042. URL https://doi.org/10.1145/3520304.3529042" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 67, + 397, + 286, + 454 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 397, + 286, + 454 + ], + "spans": [ + { + "bbox": [ + 67, + 397, + 286, + 454 + ], + "type": "text", + "content": "[80] H. Liu, Z. Li, K. Huang, R. Wang, G. Cheng, T. Li, Evolutionary reinforcement learning algorithm for large-scale multi-agent cooperation and confrontation applications, The Journal of Supercomputing 80 (2) (2024) 2319-2346. doi:10.1007/s11227-023-05551-2. URL https://doi.org/10.1007/s11227-023-05551-2" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 67, + 454, + 286, + 501 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 454, + 286, + 501 + ], + "spans": [ + { + "bbox": [ + 67, + 454, + 286, + 501 + ], + "type": "text", + "content": "[81] Y. Dong, X. Liu, T. Li, C. L. P. Chen, Evolutionary game dynamics of multi-agent systems using local information considering hide right, Complex & Intelligent Systems 10 (1) (2024) 917-925. doi:10.1007/s40747-023-01172-7. URL https://doi.org/10.1007/s40747-023-01172-7" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 67, + 501, + 286, + 539 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 501, + 286, + 539 + ], + "spans": [ + { + "bbox": [ + 67, + 501, + 286, + 539 + ], + "type": "text", + "content": "[82] H. Chen, C. Hu, Z. Huang, Optimal control of multiagent decision-making based on competence evolution, Discrete Dynamics in Nature and Society 2023 (1) (2023) 2179376. doi: https://doi.org/10.1155/2023/2179376." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 67, + 539, + 286, + 586 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 539, + 286, + 586 + ], + "spans": [ + { + "bbox": [ + 67, + 539, + 286, + 586 + ], + "type": "text", + "content": "[83] H.-T. Wai, Z. Yang, Z. Wang, M. Hong, Multi-agent reinforcement learning via double averaging primal-dual optimization, in: Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18, Curran Associates Inc., Red Hook, NY, USA, 2018, pp. 9672-9683." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 67, + 587, + 286, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 587, + 286, + 624 + ], + "spans": [ + { + "bbox": [ + 67, + 587, + 286, + 624 + ], + "type": "text", + "content": "[84] T. Hu, Z. Pu, X. Ai, T. Qiu, J. Yi, Measuring policy distance for multi-agent reinforcement learning (2024). arXiv:2401.11257. 
URL https://arxiv.org/abs/2401.11257" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 67, + 624, + 286, + 698 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 624, + 286, + 698 + ], + "spans": [ + { + "bbox": [ + 67, + 624, + 286, + 698 + ], + "type": "text", + "content": "[85] P. Sunehag, G. Lever, A. Gruslys, W. M. Czarnecki, V. Zambaldi, M. Jaderberg, M. Lanctot, N. Sonnerat, J. Z. Leibo, K. Tuyls, T. Graepel, Value-decomposition networks for cooperative multi-agent learning based on team reward, in: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '18, International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 2018, pp. 2085-2087." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 67, + 699, + 286, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 699, + 286, + 728 + ], + "spans": [ + { + "bbox": [ + 67, + 699, + 286, + 728 + ], + "type": "text", + "content": "[86] W. J. Yun, J. Park, J. Kim, Quantum multi-agent meta reinforcement learning, in: Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 312, + 113, + 530, + 728 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 329, + 113, + 530, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 113, + 530, + 160 + ], + "spans": [ + { + "bbox": [ + 329, + 113, + 530, + 160 + ], + "type": "text", + "content": "Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence, AAAI'23/IAAI'23/EAAI'23, AAAI Press, 2023. doi:10.1609/aaai.v37i9.26313. URL https://doi.org/10.1609/aaai.v37i9.26313" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 312, + 161, + 530, + 198 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 161, + 530, + 198 + ], + "spans": [ + { + "bbox": [ + 312, + 161, + 530, + 198 + ], + "type": "text", + "content": "[87] W. Mao, H. Qiu, C. Wang, H. Franke, Z. Kalbarczyk, R. K. Iyer, T. Basar, Multi-agent meta-reinforcement learning: sharper convergence rates with task similarity, Curran Associates Inc., Red Hook, NY, USA, 2024." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 312, + 199, + 530, + 245 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 199, + 530, + 245 + ], + "spans": [ + { + "bbox": [ + 312, + 199, + 530, + 245 + ], + "type": "text", + "content": "[88] M. Kouzeghar, Y. Song, M. Meghjani, R. Bouffanais, Multi-target pursuit by a decentralized heterogeneous uav swarm using deep multi-agent reinforcement learning (2023). arXiv: 2303.01799. URL https://arxiv.org/abs/2303.01799" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 312, + 246, + 530, + 292 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 246, + 530, + 292 + ], + "spans": [ + { + "bbox": [ + 312, + 246, + 530, + 292 + ], + "type": "text", + "content": "[89] F. Gao, S. Chen, M. Li, B. Huang, Maca: a multi-agent reinforcement learning platform for collective intelligence, in: 2019 IEEE 10th International Conference on Software Engineering and Service Science (ICSESS), 2019, pp. 108-111. doi:10.1109/ICSESS47205.2019.9040781." 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 312, + 293, + 530, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 293, + 530, + 330 + ], + "spans": [ + { + "bbox": [ + 312, + 293, + 530, + 330 + ], + "type": "text", + "content": "[90] S. Qi, S. Zhang, X. Hou, J. Zhang, X. Wang, J. Xiao, Efficient distributed framework for collaborative multi-agent reinforcement learning (2022). arXiv:2205.05248. URL https://arxiv.org/abs/2205.05248" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 312, + 331, + 530, + 444 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 331, + 530, + 444 + ], + "spans": [ + { + "bbox": [ + 312, + 331, + 530, + 444 + ], + "type": "text", + "content": "[91] O. Vinyals, I. Babuschkin, W. M. Czarnecki, M. Mathieu, A. Dudzik, J. Chung, D. H. Choi, R. Powell, T. Ewalds, P. Georgiev, J. Oh, D. Horgan, M. Kroiss, I. Danihelka, A. Huang, L. Sifre, T. Cai, J. P. Agapiou, M. Jaderberg, A. S. Vezhnevets, R. Leblond, T. Pohlen, V. Dalibard, D. Budden, Y. Sulsky, J. Molloy, T. L. Paine, C. Gulcehre, Z. Wang, T. Pfaff, Y. Wu, R. Ring, D. Yogatama, D. Wünsch, K. McKinney, O. Smith, T. Schaul, T. Lillicrap, K. Kavukcuoglu, D. Hassabis, C. Apps, D. Silver, Grandmaster level in starcraft ii using multi-agent reinforcement learning, Nature 575 (7782) (2019) 350-354. doi:10.1038/s41586-019-1724-z. URL https://doi.org/10.1038/s41586-019-1724-z" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 312, + 444, + 530, + 481 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 444, + 530, + 481 + ], + "spans": [ + { + "bbox": [ + 312, + 444, + 530, + 481 + ], + "type": "text", + "content": "[92] T. Chu, J. Wang, L. Codeca, Z. Li, Multi-agent deep reinforcement learning for large-scale traffic signal control, IEEE Transactions on Intelligent Transportation Systems 21 (3) (2020) 1086-1095. doi:10.1109/TITS.2019.2901791." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 312, + 482, + 530, + 528 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 482, + 530, + 528 + ], + "spans": [ + { + "bbox": [ + 312, + 482, + 530, + 528 + ], + "type": "text", + "content": "[93] Z. Lv, L. Xiao, Y. Du, G. Niu, C. Xing, W. Xu, Multiagent reinforcement learning based uav swarm communications against jamming, Trans. Wireless. Comm. 22 (12) (2023) 9063-9075. doi:10.1109/TWC.2023.3268082. URL https://doi.org/10.1109/TWC.2023.3268082" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 312, + 529, + 530, + 576 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 529, + 530, + 576 + ], + "spans": [ + { + "bbox": [ + 312, + 529, + 530, + 576 + ], + "type": "text", + "content": "[94] R.-E. P. Mircea-Bogdan Radac, R.-C. Roman, Model-free control performance improvement using virtual reference feedback tuning and reinforcement q-learning, International Journal of Systems Science 48 (5) (2017) 1071-1083. doi:10.1080/00207721.2016.1236423." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 312, + 576, + 530, + 614 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 576, + 530, + 614 + ], + "spans": [ + { + "bbox": [ + 312, + 576, + 530, + 614 + ], + "type": "text", + "content": "[95] Z. Liu, X. Yang, S. Sun, L. Qian, L. Wan, X. Chen, X. Lan, Grounded answers for multi-agent decision-making problem through generative world model, in: Advances in Neural Information Processing Systems (NeurIPS), 2024." 
+ } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 312, + 614, + 530, + 661 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 614, + 530, + 661 + ], + "spans": [ + { + "bbox": [ + 312, + 614, + 530, + 661 + ], + "type": "text", + "content": "[96] H. Zhang, W. Du, J. Shan, Q. Zhou, Y. Du, J. B. Tenenbaum, T. Shu, C. Gan, Building cooperative embodied agents modularly with large language models, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=EnXJfQqyOK" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 312, + 662, + 530, + 699 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 662, + 530, + 699 + ], + "spans": [ + { + "bbox": [ + 312, + 662, + 530, + 699 + ], + "type": "text", + "content": "[97] X. Hou, M. Yang, W. Jiao, X. Wang, Z. Tu, W. X. Zhao, Coact: A global-local hierarchy for autonomous agent collaboration (2024). arXiv:2406.13381. URL https://arxiv.org/abs/2406.13381" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 312, + 700, + 530, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 700, + 530, + 728 + ], + "spans": [ + { + "bbox": [ + 312, + 700, + 530, + 728 + ], + "type": "text", + "content": "[98] X. Puig, K. K. Ra, M. Boben, J. Li, T. Wang, S. Fidler, A. Torralba, Virtualhome: Simulating household activities via programs, 2018 IEEE/CVF Conference on Computer Vision" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 740, + 303, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 740, + 303, + 748 + ], + "spans": [ + { + "bbox": [ + 291, + 740, + 303, + 748 + ], + "type": "text", + "content": "44" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 44 + }, + { + "para_blocks": [ + { + "bbox": [ + 63, + 113, + 298, + 727 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 85, + 113, + 227, + 122 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 113, + 227, + 122 + ], + "spans": [ + { + "bbox": [ + 85, + 113, + 227, + 122 + ], + "type": "text", + "content": "and Pattern Recognition (2018) 8494-8502." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 85, + 123, + 284, + 141 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 123, + 284, + 141 + ], + "spans": [ + { + "bbox": [ + 85, + 123, + 284, + 141 + ], + "type": "text", + "content": "URL https://api_semanticscholar.org/CorpusID: 49317780" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 67, + 142, + 285, + 198 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 142, + 285, + 198 + ], + "spans": [ + { + "bbox": [ + 67, + 142, + 285, + 198 + ], + "type": "text", + "content": "[99] X. Puig, T. Shu, S. Li, Z. Wang, Y.-H. Liao, J. B. Tenenbaum, S. Fidler, A. Torralba, Watch-and-help: A challenge for social perception and human-{ai} collaboration, in: International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=w_7JMpGZRhO" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 63, + 199, + 285, + 254 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 199, + 285, + 254 + ], + "spans": [ + { + "bbox": [ + 63, + 199, + 285, + 254 + ], + "type": "text", + "content": "[100] D. Gao, Z. Li, X. Pan, W. Kuang, Z. Ma, B. Qian, F. Wei, W. Zhang, Y. Xie, D. 
Chen, L. Yao, H. Peng, Z. Zhang, L. Zhu, C. Cheng, H. Shi, Y. Li, B. Ding, J. Zhou, Agentscope: A flexible yet robust multi-agent platform (2024). arXiv:2402.14034. URL https://arxiv.org/abs/2402.14034" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 63, + 255, + 285, + 312 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 255, + 285, + 312 + ], + "spans": [ + { + "bbox": [ + 63, + 255, + 285, + 312 + ], + "type": "text", + "content": "[101] Z. Xiao, D. Zhang, Y. Wu, L. Xu, Y. J. Wang, X. Han, X. Fu, T. Zhong, J. Zeng, M. Song, G. Chen, Chain-of-experts: When LLMs meet complex operations research problems, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=HobyL1B9CZ" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 63, + 312, + 285, + 349 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 312, + 285, + 349 + ], + "spans": [ + { + "bbox": [ + 63, + 312, + 285, + 349 + ], + "type": "text", + "content": "[102] C. Wang, W. Luo, Q. Chen, H. Mai, J. Guo, S. Dong, X. M. Xuan, Z. Li, L. Ma, S. Gao, Mllm-tool: A multimodal large language model for tool agent learning, arXiv preprint arXiv:2401.10727 (2024)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 63, + 349, + 285, + 387 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 349, + 285, + 387 + ], + "spans": [ + { + "bbox": [ + 63, + 349, + 285, + 387 + ], + "type": "text", + "content": "[103] Q. Wu, G. Bansal, J. Zhang, Y. Wu, S. Zhang, E. E. Zhu, B. Li, L. Jiang, X. Zhang, C. Wang, Autogen: Enabling next-gen llm applications via multi-agent conversation, Tech. Rep. MSR-TR-2023-33, Microsoft (August 2023)." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 63, + 387, + 298, + 482 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 387, + 298, + 482 + ], + "spans": [ + { + "bbox": [ + 63, + 387, + 298, + 482 + ], + "type": "text", + "content": "[104] M. Wen, J. Kuba, R. Lin, W. Zhang, Y. Wen, J. Wang, Y. Yang, Multi-agent reinforcement learning is a sequence modeling problem, in: S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh (Eds.), Advances in Neural Information Processing Systems, Vol. 35, Curran Associates, Inc., 2022, pp. 16509-16521. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/69413f87e5a34897cd010ca698097d0a-Paper-Conference.pdf" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 63, + 482, + 285, + 539 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 482, + 285, + 539 + ], + "spans": [ + { + "bbox": [ + 63, + 482, + 285, + 539 + ], + "type": "text", + "content": "[105] W. Chen, Y. Su, J. Zuo, C. Yang, C. Yuan, C.-M. Chan, H. Yu, Y. Lu, Y.-H. Hung, C. Qian, Y. Qin, X. Cong, R. Xie, Z. Liu, M. Sun, J. Zhou, Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=EHg5GDnyq1" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 63, + 539, + 285, + 576 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 539, + 285, + 576 + ], + "spans": [ + { + "bbox": [ + 63, + 539, + 285, + 576 + ], + "type": "text", + "content": "[106] Z. Liu, Y. Zhang, P. Li, Y. Liu, D. Yang, Dynamic LLM-agent network: An LLM-agent collaboration framework with agent team optimization (2024). 
URL https://openreview.net/forum?id=i43XCU54Br" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 63, + 576, + 285, + 642 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 576, + 285, + 642 + ], + "spans": [ + { + "bbox": [ + 63, + 576, + 285, + 642 + ], + "type": "text", + "content": "[107] G. Chen, S. Dong, Y. Shu, G. Zhang, J. Sesay, B. Karlsson, J. Fu, Y. Shi, Autoagents: A framework for automatic agent generation, in: K. Larson (Ed.), Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24, International Joint Conferences on Artificial Intelligence Organization, 2024, pp. 22–30, main Track. doi:10.24963/ijcai.2024/3." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 63, + 643, + 285, + 717 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 643, + 285, + 717 + ], + "spans": [ + { + "bbox": [ + 63, + 643, + 285, + 717 + ], + "type": "text", + "content": "URL https://doi.org/10.24963/ijcai.2024/3 \n[108] S. Hong, M. Zhuge, J. Chen, X. Zheng, Y. Cheng, J. Wang, C. Zhang, Z. Wang, S. K. S. Yau, Z. Lin, L. Zhou, C. Ran, L. Xiao, C. Wu, J. Schmidhuber, MetaGPT: Meta programming for a multi-agent collaborative framework, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=VtmBAGCN7o" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 63, + 718, + 285, + 727 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 718, + 285, + 727 + ], + "spans": [ + { + "bbox": [ + 63, + 718, + 285, + 727 + ], + "type": "text", + "content": "[109] S. Hong, Y. Lin, B. Liu, B. Liu, B. Wu, D. Li, J. Chen," + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 308, + 113, + 530, + 727 + ], + "type": "list", + "angle": 0, + "index": 29, + "blocks": [ + { + "bbox": [ + 329, + 113, + 530, + 151 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 113, + 530, + 151 + ], + "spans": [ + { + "bbox": [ + 329, + 113, + 530, + 151 + ], + "type": "text", + "content": "J. Zhang, J. Wang, L. Zhang, L. Zhang, M. Yang, M. Zhuge, T. Guo, T. Zhou, W. Tao, W. Wang, X. Tang, X. Lu, X. Zheng, X. Liang, Y. Fei, Y. Cheng, Z. Xu, C. Wu, Data interpreter: An llm agent for data science (2024). arXiv:2402.18679." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 308, + 152, + 530, + 169 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 152, + 530, + 169 + ], + "spans": [ + { + "bbox": [ + 308, + 152, + 530, + 169 + ], + "type": "text", + "content": "[110] X. Team, Xagent: An autonomous agent for complex task solving (2023)." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 308, + 170, + 530, + 216 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 170, + 530, + 216 + ], + "spans": [ + { + "bbox": [ + 308, + 170, + 530, + 216 + ], + "type": "text", + "content": "[111] J. Wang, Z. Duan, Intelligent spark agents: A modular lang-graph framework for scalable, visualized, and enhanced big data machine learning workflows (2024). arXiv:2412.01490. URL https://arxiv.org/abs/2412.01490" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 308, + 217, + 530, + 254 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 217, + 530, + 254 + ], + "spans": [ + { + "bbox": [ + 308, + 217, + 530, + 254 + ], + "type": "text", + "content": "[112] J. Wang, Z. 
Duan, Agent ai with langgraph: A modular framework for enhancing machine translation using large language models (2024). arXiv:2412.03801. URL https://arxiv.org/abs/2412.03801" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 308, + 255, + 530, + 301 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 255, + 530, + 301 + ], + "spans": [ + { + "bbox": [ + 308, + 255, + 530, + 301 + ], + "type": "text", + "content": "[113] Y. Zheng, Z. Xing, Q. Zhang, B. Jin, P. Li, Y. Zheng, Z. Xia, K. Zhan, X. Lang, Y. Chen, D. Zhao, Planagent: A multimodal large language agent for closed-loop vehicle motion planning (2024). arXiv:2406.01587. URL https://arxiv.org/abs/2406.01587" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 308, + 302, + 530, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 302, + 530, + 330 + ], + "spans": [ + { + "bbox": [ + 308, + 302, + 530, + 330 + ], + "type": "text", + "content": "[114] L. Zhang, Z. Ji, B. Chen, Crew: Facilitating human-ai teaming research (2025). arXiv:2408.00170. URL https://arxiv.org/abs/2408.00170" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 308, + 331, + 530, + 368 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 331, + 530, + 368 + ], + "spans": [ + { + "bbox": [ + 308, + 331, + 530, + 368 + ], + "type": "text", + "content": "[115] Z. Duan, J. Wang, Exploration of llm multi-agent application implementation based on langgraph+crewai (2024). arXiv: 2411.18241. URL https://arxiv.org/abs/2411.18241" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 308, + 369, + 530, + 416 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 369, + 530, + 416 + ], + "spans": [ + { + "bbox": [ + 308, + 369, + 530, + 416 + ], + "type": "text", + "content": "[116] Y. Cao, H. Zhao, Y. Cheng, T. Shu, Y. Chen, G. Liu, G. Liang, J. Zhao, J. Yan, Y. Li, Survey on large language model-enhanced reinforcement learning: Concept, taxonomy, and methods, IEEE Transactions on Neural Networks and Learning Systems (2024) 1-21doi:10.1109/TNNLS.2024.3497992." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 308, + 416, + 530, + 462 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 416, + 530, + 462 + ], + "spans": [ + { + "bbox": [ + 308, + 416, + 530, + 462 + ], + "type": "text", + "content": "[117] I. Marti, V. R. Tomas, A. Saez, J. J. Martinez, A rule-based multi-agent system for road traffic management, in: 2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology, Vol. 3, 2009, pp. 595-598. doi:10.1109/WI-IAT.2009.358." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 308, + 463, + 530, + 528 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 463, + 530, + 528 + ], + "spans": [ + { + "bbox": [ + 308, + 463, + 530, + 528 + ], + "type": "text", + "content": "[118] V. V. Nekhai, E. Trunova, I. Bilous, I. Bohdan, M. Voitsekhovska, Fuzzy game-theoretic modeling of a multi-agent cybersecurity management system for an agricultural enterprise, in: S. Shkarlet, A. Morozov, A. Palagin, D. Vinnikov, N. Stoianov, M. Zhelezniak, V. Kazymyr (Eds.), Mathematical Modeling and Simulation of Systems, Springer International Publishing, Cham, 2022, pp. 423-434." 
+ } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 308, + 529, + 530, + 576 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 529, + 530, + 576 + ], + "spans": [ + { + "bbox": [ + 308, + 529, + 530, + 576 + ], + "type": "text", + "content": "[119] A. Ramezani, M. R. Andalibizadeh, S. Bahrampour, H. Ramezani, B. Moshiri, Select reliable strategy in multiagent systems using fuzzy logic-based fusion, in: 2008 Second Asia International Conference on Modelling & Simulation (AMS), 2008, pp. 13-17. doi:10.1109/AMS.2008.63." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 308, + 576, + 530, + 623 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 576, + 530, + 623 + ], + "spans": [ + { + "bbox": [ + 308, + 576, + 530, + 623 + ], + "type": "text", + "content": "[120] M. E. H. Charaf, M. Benattou, S. Azzouzi, A rule-based multi-agent system for testing distributed applications, in: 2012 International Conference on Multimedia Computing and Systems, 2012, pp. 967-972. doi:10.1109/ICMCS.2012.6320205." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 308, + 624, + 530, + 671 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 624, + 530, + 671 + ], + "spans": [ + { + "bbox": [ + 308, + 624, + 530, + 671 + ], + "type": "text", + "content": "[121] A. Daeichian, A. Haghani, Fuzzy q-learning-based multi-agent system for intelligent traffic control by a game theory approach, Arabian Journal for Science and Engineering 43 (6) (2018) 3241-3247. doi:10.1007/s13369-017-3018-9. URL https://doi.org/10.1007/s13369-017-3018-9" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 308, + 671, + 530, + 698 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 671, + 530, + 698 + ], + "spans": [ + { + "bbox": [ + 308, + 671, + 530, + 698 + ], + "type": "text", + "content": "[122] C. Amato, A first introduction to cooperative multi-agent reinforcement learning (2024). arXiv:2405.06161. URL https://arxiv.org/abs/2405.06161" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 308, + 699, + 530, + 727 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 699, + 530, + 727 + ], + "spans": [ + { + "bbox": [ + 308, + 699, + 530, + 727 + ], + "type": "text", + "content": "[123] Y. Zhou, S. Liu, Y. Qing, K. Chen, T. Zheng, Y. Huang, J. Song, M. Song, Is centralized training with decentralized execution framework centralized enough for marl? (2023)." + } + ] + } + ], + "index": 28 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "text", + "content": "45" + } + ] + } + ], + "index": 30 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 45 + }, + { + "para_blocks": [ + { + "bbox": [ + 63, + 113, + 286, + 728 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 85, + 113, + 157, + 121 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 113, + 157, + 121 + ], + "spans": [ + { + "bbox": [ + 85, + 113, + 157, + 121 + ], + "type": "text", + "content": "arXiv:2305.17352." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 85, + 123, + 240, + 132 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 123, + 240, + 132 + ], + "spans": [ + { + "bbox": [ + 85, + 123, + 240, + 132 + ], + "type": "text", + "content": "URL https://arxiv.org/abs/2305.17352" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 63, + 133, + 286, + 169 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 133, + 286, + 169 + ], + "spans": [ + { + "bbox": [ + 63, + 133, + 286, + 169 + ], + "type": "text", + "content": "[124] C. Amato, An introduction to centralized training for decentralized execution in cooperative multi-agent reinforcement learning (2024). arXiv:2409.03052. URL https://arxiv.org/abs/2409.03052" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 63, + 170, + 286, + 236 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 170, + 286, + 236 + ], + "spans": [ + { + "bbox": [ + 63, + 170, + 286, + 236 + ], + "type": "text", + "content": "[125] P. K. Sharma, E. G. Zaroukian, R. Fernandez, A. Basak, D. E. Asher, Survey of recent multi-agent reinforcement learning algorithms utilizing centralized training, in: T. Pham, L. Solomon, M. E. Hohil (Eds.), Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications III, SPIE, 2021, p. 84. doi:10.1117/12.2585808. URL http://dx.doi.org/10.1117/12.2585808" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 63, + 236, + 286, + 283 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 236, + 286, + 283 + ], + "spans": [ + { + "bbox": [ + 63, + 236, + 286, + 283 + ], + "type": "text", + "content": "[126] J. K. Gupta, M. Egorov, M. Kochenderfer, Cooperative multiagent control using deep reinforcement learning, in: G. Sukthankar, J. A. Rodriguez-Aguilar (Eds.), Autonomous Agents and Multiagent Systems, Springer International Publishing, Cham, 2017, pp. 66-83." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 63, + 284, + 286, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 284, + 286, + 330 + ], + "spans": [ + { + "bbox": [ + 63, + 284, + 286, + 330 + ], + "type": "text", + "content": "[127] C. Claus, C. Boutilier, The dynamics of reinforcement learning in cooperative multiagent systems, in: Proceedings of the fifteenth national/tenth conference on Artificial intelligence/Innovative applications of artificial intelligence, AAAI Press, 1998, pp. 746-752." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 63, + 331, + 286, + 378 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 331, + 286, + 378 + ], + "spans": [ + { + "bbox": [ + 63, + 331, + 286, + 378 + ], + "type": "text", + "content": "[128] M. Lauer, M. A. Riedmiller, An algorithm for distributed reinforcement learning in cooperative multi-agent systems, in: Proceedings of the Seventeenth International Conference on Machine Learning, ICML '00, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2000, p. 535-542." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 63, + 379, + 286, + 434 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 379, + 286, + 434 + ], + "spans": [ + { + "bbox": [ + 63, + 379, + 286, + 434 + ], + "type": "text", + "content": "[129] M. Bowling, M. Veloso, Multiagent learning using a variable learning rate, Artificial Intelligence 136 (2) (2002) 215-250. doi:https://doi.org/10.1016/S0004-3702(02)00121-2. 
URL https://www.sciencedirect.com/science/article/pii/S0004370202001212" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 63, + 435, + 286, + 490 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 435, + 286, + 490 + ], + "spans": [ + { + "bbox": [ + 63, + 435, + 286, + 490 + ], + "type": "text", + "content": "[130] T. Ikeda, T. Shibuya, Centralized training with decentralized execution reinforcement learning for cooperative multi-agent systems with communication delay, in: 2022 61st Annual Conference of the Society of Instrument and Control Engineers (SICE), 2022, pp. 135-140. doi:10.23919/SICE56594.2022.9905866." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 63, + 491, + 286, + 539 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 491, + 286, + 539 + ], + "spans": [ + { + "bbox": [ + 63, + 491, + 286, + 539 + ], + "type": "text", + "content": "[131] Y. Zhou, S. Liu, Y. Qing, K. Chen, T. Zheng, Y. Huang, J. Song, M. Song, Is centralized training with decentralized execution framework centralized enough for marl? (2023). arXiv:2305.17352. URL https://arxiv.org/abs/2305.17352" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 63, + 539, + 286, + 594 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 539, + 286, + 594 + ], + "spans": [ + { + "bbox": [ + 63, + 539, + 286, + 594 + ], + "type": "text", + "content": "[132] R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, I. Mordatch, Multi-agent actor-critic for mixed cooperative-competitive environments, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, Curran Associates Inc., Red Hook, NY, USA, 2017, pp. 6382-6393." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 63, + 595, + 286, + 671 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 595, + 286, + 671 + ], + "spans": [ + { + "bbox": [ + 63, + 595, + 286, + 671 + ], + "type": "text", + "content": "[133] C. Yu, A. Velu, E. Vinitsky, J. Gao, Y. Wang, A. Bayen, Y. Wu, The surprising effectiveness of ppo in cooperative multi-agent games, in: Advances in Neural Information Processing Systems, Vol. 35, 2022, pp. 29914-29928. URL https://proceedings.neurips.cc/paper_files/paper/2022/bitize/9c1535a02f0ce079433344e14d910597-Paper-Datasets and_Benchmarks.pdf" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 63, + 671, + 286, + 708 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 671, + 286, + 708 + ], + "spans": [ + { + "bbox": [ + 63, + 671, + 286, + 708 + ], + "type": "text", + "content": "[134] M. Samvelyan, T. Rashid, C. S. de Witt, G. Farquhar, N. Nardelli, T. G. J. Rudner, C.-M. Hung, P. H. S. Torr, J. Foerster, S. Whiteson, The StarCraft Multi-Agent Challenge, CoRR abs/1902.04043 (2019)." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 63, + 708, + 286, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 708, + 286, + 728 + ], + "spans": [ + { + "bbox": [ + 63, + 708, + 286, + 728 + ], + "type": "text", + "content": "[135] B. Ellis, J. Cook, S. Moalla, M. Samvelyan, M. Sun, A. Mahajan, J. Foerster, S. 
Whiteson, Smacv2: An improved" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 308, + 113, + 531, + 728 + ], + "type": "list", + "angle": 0, + "index": 25, + "blocks": [ + { + "bbox": [ + 329, + 113, + 531, + 198 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 113, + 531, + 198 + ], + "spans": [ + { + "bbox": [ + 329, + 113, + 531, + 198 + ], + "type": "text", + "content": "benchmark for cooperative multi-agent reinforcement learning, in: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine (Eds.), Advances in Neural Information Processing Systems, Vol. 36, Curran Associates, Inc., 2023, pp. 37567-37593. \nURL https://proceedings.neurips.cc/paper_files/paper/2023/file/764c18ad230f9e7bf6a77ffc2312c55e-Paper-Datasets and_Benchmarks.pdf" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 308, + 199, + 531, + 235 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 199, + 531, + 235 + ], + "spans": [ + { + "bbox": [ + 308, + 199, + 531, + 235 + ], + "type": "text", + "content": "[136] R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, I. Mordatch, Multi-agent actor-critic for mixed cooperative-competitive environments, Neural Information Processing Systems (NIPS) (2017)." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 308, + 236, + 531, + 282 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 236, + 531, + 282 + ], + "spans": [ + { + "bbox": [ + 308, + 236, + 531, + 282 + ], + "type": "text", + "content": "[137] T. Malloy, C. R. Sims, T. Klinger, M. Liu, M. Riemer, G. Tesauro, Capacity-limited decentralized actor-critic for multi-agent games, in: 2021 IEEE Conference on Games (CoG), 2021, pp. 1-8. doi:10.1109/CoG52621.2021.9619081." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 308, + 283, + 531, + 340 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 283, + 531, + 340 + ], + "spans": [ + { + "bbox": [ + 308, + 283, + 531, + 340 + ], + "type": "text", + "content": "[138] J. Jiang, Z. Lu, Learning attentional communication for multi-agent cooperation, in: Advances in Neural Information Processing Systems, Vol. 31, 2018, pp. 7254-7264. URL https://proceedings.neurips.cc/paper/2018/file/6a8018b3a00b69c008601b8becae392b-Paper.pdf" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 308, + 340, + 531, + 435 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 340, + 531, + 435 + ], + "spans": [ + { + "bbox": [ + 308, + 340, + 531, + 435 + ], + "type": "text", + "content": "[139] C. Sun, Z. Zang, J. Li, J. Li, X. Xu, R. Wang, C. Zheng, T2mac: targeted and trusted multi-agent communication through selective engagement and evidence-driven integration, in: Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence and Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence and Fourteenth Symposium on Educational Advances in Artificial Intelligence, AAAI'24/IAAI'24/EAAI'24, AAAI Press, 2025. doi:10.1609/aaai.v38i13.29438. URL https://doi.org/10.1609/aaai.v38i13.29438" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 308, + 435, + 531, + 463 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 435, + 531, + 463 + ], + "spans": [ + { + "bbox": [ + 308, + 435, + 531, + 463 + ], + "type": "text", + "content": "[140] OpenAI, Gpt-4 technical report, arXiv preprint arXiv:2303.08774 (2023). 
URL https://arxiv.org/abs/2303.08774" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 308, + 464, + 531, + 538 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 464, + 531, + 538 + ], + "spans": [ + { + "bbox": [ + 308, + 464, + 531, + 538 + ], + "type": "text", + "content": "[141] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. Christiano, J. Leike, R. Lowe, Training language models to follow instructions with human feedback, in: Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Curran Associates Inc., Red Hook, NY, USA, 2024." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 308, + 539, + 531, + 623 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 539, + 531, + 623 + ], + "spans": [ + { + "bbox": [ + 308, + 539, + 531, + 623 + ], + "type": "text", + "content": "[142] R. Mao, G. Chen, X. Zhang, F. Guerin, E. Cambria, GPTEval: A survey on assessments of ChatGPT and GPT-4, in: N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, N. Xue (Eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), ELRA and ICCL, Torino, Italia, 2024, pp. 7844-7866. URL https://aclanthology.org/2024.lrec-main.693" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 308, + 624, + 531, + 671 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 624, + 531, + 671 + ], + "spans": [ + { + "bbox": [ + 308, + 624, + 531, + 671 + ], + "type": "text", + "content": "[143] A. Grattafori, A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Vaughan, A. Yang, A. Fan, A. Goyal, A. Hartshorn, A. Yang, et al., The llama 3 herd of models (2024). arXiv:2407.21783. URL https://arxiv.org/abs/2407.21783" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 308, + 671, + 531, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 671, + 531, + 728 + ], + "spans": [ + { + "bbox": [ + 308, + 671, + 531, + 728 + ], + "type": "text", + "content": "[144] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Roziere, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, G. Lample, Llama: Open and efficient foundation language models (2023). arXiv:2302.13971. URL https://arxiv.org/abs/2302.13971" + } + ] + } + ], + "index": 24 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "text", + "content": "46" + } + ] + } + ], + "index": 26 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 46 + }, + { + "para_blocks": [ + { + "bbox": [ + 63, + 112, + 286, + 728 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 63, + 112, + 286, + 132 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 112, + 286, + 132 + ], + "spans": [ + { + "bbox": [ + 63, + 112, + 286, + 132 + ], + "type": "text", + "content": "[145] G. Team, Gemini: A family of highly capable multimodal models (2024). arXiv:2312.11805." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 85, + 132, + 240, + 142 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 132, + 240, + 142 + ], + "spans": [ + { + "bbox": [ + 85, + 132, + 240, + 142 + ], + "type": "text", + "content": "URL https://arxiv.org/abs/2312.11805" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 63, + 142, + 285, + 179 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 142, + 285, + 179 + ], + "spans": [ + { + "bbox": [ + 63, + 142, + 285, + 179 + ], + "type": "text", + "content": "[146] V. Dibia, J. Chen, G. Bansal, S. Syed, A. Fourney, E. Zhu, C. Wang, S. Amershi, Autogen studio: A no-code developer tool for building and debugging multi-agent systems (2024). arXiv:2408.15247." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 85, + 179, + 240, + 188 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 179, + 240, + 188 + ], + "spans": [ + { + "bbox": [ + 85, + 179, + 240, + 188 + ], + "type": "text", + "content": "URL https://arxiv.org/abs/2408.15247" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 63, + 189, + 286, + 254 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 189, + 286, + 254 + ], + "spans": [ + { + "bbox": [ + 63, + 189, + 286, + 254 + ], + "type": "text", + "content": "[147] C. Gan, S. Zhou, J. Schwartz, S. Alter, A. Bhandwaldar, D. Gutfreund, D. L. Yamins, J. J. DiCarlo, J. McDermott, A. Torralba, J. B. Tenenbaum, The threadworld transport challenge: A visually guided task-and-motion planning benchmark towards physically realistic embodied ai, in: 2022 International Conference on Robotics and Automation (ICRA), 2022, pp. 8847-8854. doi:10.1109/ICRA46639.2022.9812329." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 63, + 255, + 286, + 339 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 255, + 286, + 339 + ], + "spans": [ + { + "bbox": [ + 63, + 255, + 286, + 339 + ], + "type": "text", + "content": "[148] C. Gan, J. Schwartz, S. Alter, D. Mrowca, M. Schrimpf, J. Traer, J. De Freitas, J. Kubilius, A. Bhandwaldar, N. Haber, M. Sano, K. Kim, E. Wang, M. Lingelbach, A. Curtis, K. Feigelis, D. Bear, D. Gutfreund, D. Cox, A. Torralba, J. J. DiCarlo, J. Tenenbaum, J. McDermott, D. Yamins, Threadworld: A platform for interactive multi-modal physical simulation, in: J. Vanschoeren, S. Yeung (Eds.), Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, Vol. 1, 2021." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 85, + 340, + 284, + 377 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 340, + 284, + 377 + ], + "spans": [ + { + "bbox": [ + 85, + 340, + 284, + 377 + ], + "type": "text", + "content": "URL https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/735b90b4568125ed6c3f678819b6e058-Paper-round1.pdf" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 63, + 378, + 286, + 444 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 378, + 286, + 444 + ], + "spans": [ + { + "bbox": [ + 63, + 378, + 286, + 444 + ], + "type": "text", + "content": "[149] R. Gong, Q. Huang, X. Ma, Y. Noda, Z. Durante, Z. Zheng, D. Terzopoulos, L. Fei-Fei, J. Gao, H. Vo, MindAgent: Emergent gaming interaction, in: K. Duh, H. Gomez, S. Bethard (Eds.), Findings of the Association for Computational Linguistics: NAACL 2024, Association for Computational Linguistics, Mexico City, Mexico, 2024, pp. 
3154-3183. doi:10.18653/v1/2024-findings-naacl.200." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 85, + 444, + 284, + 462 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 444, + 284, + 462 + ], + "spans": [ + { + "bbox": [ + 85, + 444, + 284, + 462 + ], + "type": "text", + "content": "URL https://aclanthology.org/2024.findings-naacl.200" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 63, + 463, + 286, + 509 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 463, + 286, + 509 + ], + "spans": [ + { + "bbox": [ + 63, + 463, + 286, + 509 + ], + "type": "text", + "content": "[150] K. Shao, Y. Zhu, Z. Tang, D. Zhao, Cooperative multiagent deep reinforcement learning with counterfactual reward, in: 2020 International Joint Conference on Neural Networks (IJCNN), 2020, pp. 1-8. doi:10.1109/IJCNN48605.2020.9207169." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 63, + 511, + 286, + 566 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 511, + 286, + 566 + ], + "spans": [ + { + "bbox": [ + 63, + 511, + 286, + 566 + ], + "type": "text", + "content": "[151] K. R. Chandra, S. Borugadda, Multi agent deep reinforcement learning with deep q-network based energy efficiency and resource allocation in nomawireless systems, in: 2023 Second International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT), 2023, pp. 1-8. doi:10.1109/ICEEICT56924.2023.10157052." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 63, + 567, + 285, + 594 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 567, + 285, + 594 + ], + "spans": [ + { + "bbox": [ + 63, + 567, + 285, + 594 + ], + "type": "text", + "content": "[152] A. M. Hafiz, G. M. Bhat, Deep q-network based multi-agent reinforcement learning with binary action agents (2020). arXiv: 2008.04109." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 85, + 596, + 240, + 604 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 596, + 240, + 604 + ], + "spans": [ + { + "bbox": [ + 85, + 596, + 240, + 604 + ], + "type": "text", + "content": "URL https://arxiv.org/abs/2008.04109" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 63, + 605, + 286, + 642 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 605, + 286, + 642 + ], + "spans": [ + { + "bbox": [ + 63, + 605, + 286, + 642 + ], + "type": "text", + "content": "[153] C. S. de Witt, T. Gupta, D. Makoviichuk, V. Makoviychuk, P. H. S. Torr, M. Sun, S. Whiteson, Is independent learning all you need in the starcraft multi-agent challenge? (2020). arXiv:2011.09533." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 85, + 644, + 240, + 652 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 644, + 240, + 652 + ], + "spans": [ + { + "bbox": [ + 85, + 644, + 240, + 652 + ], + "type": "text", + "content": "URL https://arxiv.org/abs/2011.09533" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 63, + 653, + 286, + 717 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 653, + 286, + 717 + ], + "spans": [ + { + "bbox": [ + 63, + 653, + 286, + 717 + ], + "type": "text", + "content": "[154] O. Vinyals, T. Ewalds, S. Bartunov, P. Georgiev, A. S. Vezhnevets, M. Yeo, A. Makhzani, H. Kuttler, J. Agapiou, J. Schrittwieser, J. Quan, S. Gaffney, S. Petersen, K. Simonyan, T. Schaul, H. van Hasselt, D. Silver, T. Lillicrap, K. Calderone, P. Keet, A. Brunasso, D. 
Lawrence, A. Ekermo, J. Repp, R. Tsing, Starcraft ii: A new challenge for reinforcement learning (2017). arXiv:1708.04782." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 85, + 719, + 240, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 719, + 240, + 728 + ], + "spans": [ + { + "bbox": [ + 85, + 719, + 240, + 728 + ], + "type": "text", + "content": "URL https://arxiv.org/abs/1708.04782" + } + ] + } + ], + "index": 16 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 308, + 112, + 530, + 728 + ], + "type": "list", + "angle": 0, + "index": 32, + "blocks": [ + { + "bbox": [ + 308, + 112, + 530, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 112, + 530, + 160 + ], + "spans": [ + { + "bbox": [ + 308, + 112, + 530, + 160 + ], + "type": "text", + "content": "[155] H. Shen, K. Zhang, M. Hong, T. Chen, Towards understanding asynchronous advantage actor-critic: Convergence and linear speedup, IEEE Transactions on Signal Processing 71 (2023) 2579-2594. doi:10.1109/tsp.2023.3268475. URL http://dx.doi.org/10.1109/TSP.2023.3268475" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 308, + 161, + 530, + 207 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 161, + 530, + 207 + ], + "spans": [ + { + "bbox": [ + 308, + 161, + 530, + 207 + ], + "type": "text", + "content": "[156] S. Li, Y. Wu, X. Cui, H. Dong, F. Fang, S. Russell, Robust multi-agent reinforcement learning via minimax deep deterministic policy gradient, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, AAAI Press, 2019, pp. 4213-4220. doi:10.1609/aaai.v33i01.33014213." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 308, + 208, + 530, + 273 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 208, + 530, + 273 + ], + "spans": [ + { + "bbox": [ + 308, + 208, + 530, + 273 + ], + "type": "text", + "content": "[157] J. N. Foerster, G. Farquhar, T. Afouras, N. Nardelli, S. Whiteson, Counterfactual multi-agent policy gradients, in: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18, AAAI Press, 2018." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 308, + 275, + 530, + 312 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 275, + 530, + 312 + ], + "spans": [ + { + "bbox": [ + 308, + 275, + 530, + 312 + ], + "type": "text", + "content": "[158] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, O. Klimov, Proximal policy optimization algorithms (2017). arXiv: 1707.06347. URL https://arxiv.org/abs/1707.06347" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 308, + 312, + 530, + 359 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 312, + 530, + 359 + ], + "spans": [ + { + "bbox": [ + 308, + 312, + 530, + 359 + ], + "type": "text", + "content": "[159] J. Schulman, S. Levine, P. Moritz, M. Jordan, P. Abbeel, Trust region policy optimization, in: Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, JMLR.org, 2015, p. 1889-1897." 
+ } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 308, + 359, + 530, + 406 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 359, + 530, + 406 + ], + "spans": [ + { + "bbox": [ + 308, + 359, + 530, + 406 + ], + "type": "text", + "content": "[160] J. G. Kuba, R. Chen, M. Wen, Y. Wen, F. Sun, J. Wang, Y. Yang, Trust region policy optimisation in multi-agent reinforcement learning, in: International Conference on Learning Representations, 2022. URL https://arxiv.org/abs/2109.11251" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 308, + 407, + 530, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 407, + 530, + 453 + ], + "spans": [ + { + "bbox": [ + 308, + 407, + 530, + 453 + ], + "type": "text", + "content": "[161] S. Sukhbaatar, A. Szlam, R. Fergus, Learning multiagent communication with backpropagation, in: Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, Curran Associates Inc., Red Hook, NY, USA, 2016, pp. 2252-2260." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 308, + 454, + 530, + 501 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 454, + 530, + 501 + ], + "spans": [ + { + "bbox": [ + 308, + 454, + 530, + 501 + ], + "type": "text", + "content": "[162] L. Matignon, G. J. Laurent, N. Le Fort-Piat, Hysteresic q-learning: an algorithm for decentralized reinforcement learning in cooperative multi-agent teams, in: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007, pp. 64-69. doi:10.1109/IROS.2007.4399095." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 308, + 502, + 530, + 528 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 502, + 530, + 528 + ], + "spans": [ + { + "bbox": [ + 308, + 502, + 530, + 528 + ], + "type": "text", + "content": "[163] E. Wei, S. Luke, Lenient learning in independent-learner stochastic cooperative games, J. Mach. Learn. Res. 17 (1) (2016) 2914–2955." + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 308, + 529, + 530, + 557 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 529, + 530, + 557 + ], + "spans": [ + { + "bbox": [ + 308, + 529, + 530, + 557 + ], + "type": "text", + "content": "[164] M. Hausknecht, P. Stone, Deep recurrent q-learning for partially observable mdps (2017). arXiv:1507.06527. URL https://arxiv.org/abs/1507.06527" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 308, + 558, + 530, + 614 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 558, + 530, + 614 + ], + "spans": [ + { + "bbox": [ + 308, + 558, + 530, + 614 + ], + "type": "text", + "content": "[165] A. Tampuu, T. Matiisen, D. Kodelja, I. Kuzovkin, K. Korjus, J. Aru, J. Aru, R. Vicente, Multiagent cooperation and competition with deep reinforcement learning, PLOS ONE 12 (4) (2017) 1-15. doi:10.1371/journal.pone.0172395. URL https://doi.org/10.1371/journal.pone.0172395" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 308, + 614, + 530, + 661 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 614, + 530, + 661 + ], + "spans": [ + { + "bbox": [ + 308, + 614, + 530, + 661 + ], + "type": "text", + "content": "[166] S. Omidshafiei, J. Pazis, C. Amato, J. P. How, J. 
Vian, Deep decentralized multi-task multi-agent reinforcement learning under partial observability, in: Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, JMLR.org, 2017, p. 2681-2690." + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 308, + 662, + 530, + 699 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 662, + 530, + 699 + ], + "spans": [ + { + "bbox": [ + 308, + 662, + 530, + 699 + ], + "type": "text", + "content": "[167] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, W. Zaremba, Openai gym (2016). arXiv: 1606.01540. URL https://arxiv.org/abs/1606.01540" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 308, + 700, + 530, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 700, + 530, + 728 + ], + "spans": [ + { + "bbox": [ + 308, + 700, + 530, + 728 + ], + "type": "text", + "content": "[168] J. N. Foerster, Y. M. Assael, N. de Freitas, S. Whiteson, Learning to communicate with deep multi-agent reinforcement learning (2016). arXiv:1605.06676." + } + ] + } + ], + "index": 31 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "text", + "content": "47" + } + ] + } + ], + "index": 33 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 47 + }, + { + "para_blocks": [ + { + "bbox": [ + 63, + 113, + 286, + 728 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 85, + 113, + 240, + 122 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 113, + 240, + 122 + ], + "spans": [ + { + "bbox": [ + 85, + 113, + 240, + 122 + ], + "type": "text", + "content": "URL https://arxiv.org/abs/1605.06676" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 63, + 123, + 286, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 123, + 286, + 160 + ], + "spans": [ + { + "bbox": [ + 63, + 123, + 286, + 160 + ], + "type": "text", + "content": "[169] J. N. Foerster, Y. M. Assael, N. de Freitas, S. Whiteson, Learning to communicate to solve riddles with deep distributed recurrent q-networks (2016). arXiv:1602.02672. URL https://arxiv.org/abs/1602.02672" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 63, + 161, + 286, + 207 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 161, + 286, + 207 + ], + "spans": [ + { + "bbox": [ + 63, + 161, + 286, + 207 + ], + "type": "text", + "content": "[170] L. Fan, Y.-y. Liu, S. Zhang, Partially observable multi-agent rl with enhanced deep distributed recurrent q-network, in: 2018 5th International Conference on Information Science and Control Engineering (ICISCE), 2018, pp. 375-379. doi:10.1109/ICISCE.2018.00085." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 63, + 208, + 286, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 208, + 286, + 274 + ], + "spans": [ + { + "bbox": [ + 63, + 208, + 286, + 274 + ], + "type": "text", + "content": "[171] S. Sukhbaatar, A. Szlam, R. Fergus, Learning multiagent communication with backpropagation, in: Advances in Neural Information Processing Systems, Vol. 29, 2016, pp. 2244-2252. 
URL https://proceedings.neurips.cc/paper/2016/file/55b1927fdafe39c48e5b73b5d61ea60-Paper.pdf" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 63, + 275, + 286, + 322 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 275, + 286, + 322 + ], + "spans": [ + { + "bbox": [ + 63, + 275, + 286, + 322 + ], + "type": "text", + "content": "[172] P. Peng, Y. Wen, Y. Yang, Q. Yuan, Z. Tang, H. Long, J. Wang, Multiagent bidirectionally-coordinated nets: Emergence of human-level coordination in learning to play starcraft combat games (2017). arXiv:1703.10069. URL https://arxiv.org/abs/1703.10069" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 63, + 322, + 286, + 378 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 322, + 286, + 378 + ], + "spans": [ + { + "bbox": [ + 63, + 322, + 286, + 378 + ], + "type": "text", + "content": "[173] G. Wang, D. Shi, C. Xue, H. Jiang, Y. Wang, Bic-ddpg: Bidirectionally-coordinated nets for deep multi-agent reinforcement learning, in: H. Gao, X. Wang, M. Iqbal, Y. Yin, J. Yin, N. Gu (Eds.), Collaborative Computing: Networking, Applications and Worksharing, Springer International Publishing, Cham, 2021, pp. 337-354." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 63, + 378, + 286, + 416 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 378, + 286, + 416 + ], + "spans": [ + { + "bbox": [ + 63, + 378, + 286, + 416 + ], + "type": "text", + "content": "[174] D. Fan, H. Shen, L. Dong, Multi-agent distributed deep deterministic policy gradient for partially observable tracking, Actuators 10 (10) (2021). doi:10.3390/act10100268. URL https://www.mdpi.com/2076-0825/10/10/268" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 63, + 417, + 286, + 481 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 417, + 286, + 481 + ], + "spans": [ + { + "bbox": [ + 63, + 417, + 286, + 481 + ], + "type": "text", + "content": "[175] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, K. Kavukcuoglu, Asynchronous methods for deep reinforcement learning, in: Proceedings of The 33rd International Conference on Machine Learning, PMLR, 2016, pp. 1928-1937. URL https://proceedings.mlr.press/v48/mniha16.html" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 63, + 482, + 286, + 557 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 482, + 286, + 557 + ], + "spans": [ + { + "bbox": [ + 63, + 482, + 286, + 557 + ], + "type": "text", + "content": "[176] L. Yuan, C. Wang, J. Wang, F. Zhang, F. Chen, C. Guan, Z. Zhang, C. Zhang, Y. Yu, Multi-agent concentrative coordination with decentralized task representation, in: L. D. Raedt (Ed.), Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, International Joint Conferences on Artificial Intelligence Organization, 2022, pp. 599–605, main Track. doi:10.24963/ijcai.2022/85. URL https://doi.org/10.24963/ijcai.2022/85" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 63, + 558, + 286, + 614 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 558, + 286, + 614 + ], + "spans": [ + { + "bbox": [ + 63, + 558, + 286, + 614 + ], + "type": "text", + "content": "[177] S. Vanneste, A. Vanneste, K. Mets, T. D. Schepper, A. Anwar, S. Mercelis, P. Hellinckx, Learning to communicate using a communication critic and counterfactual reasoning, Neural Computing and Applications N/A (N/A) (2025) N/A. 
doi: 10.1007/s00521-024-10598-0. URL https://doi.org/10.1007/s00521-024-10598-0" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 63, + 614, + 286, + 661 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 614, + 286, + 661 + ], + "spans": [ + { + "bbox": [ + 63, + 614, + 286, + 661 + ], + "type": "text", + "content": "[178] A. Singh, T. Jain, S. Sukhbaatar, Learning when to communicate at scale in multiagent cooperative and competitive tasks, in: International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bk1Wt24tvH" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 63, + 662, + 286, + 709 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 662, + 286, + 709 + ], + "spans": [ + { + "bbox": [ + 63, + 662, + 286, + 709 + ], + "type": "text", + "content": "[179] D. Das, R. L. Rath, T. Singh, S. Mishra, V. Malik, R. Sobti, B. Brahma, LIm-based custom chatbot using langchain, in: A. E. Hassanien, S. Anand, A. Jaiswal, P. Kumar (Eds.), Innovative Computing and Communications, Springer Nature Singapore, Singapore, 2024, pp. 257-267." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 63, + 710, + 286, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 710, + 286, + 728 + ], + "spans": [ + { + "bbox": [ + 63, + 710, + 286, + 728 + ], + "type": "text", + "content": "[180] Q. Sun, Z. Yin, X. Li, Z. Wu, X. Qiu, L. Kong, Corex: Pushing the boundaries of complex reasoning through multi-model" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 308, + 113, + 531, + 728 + ], + "type": "list", + "angle": 0, + "index": 25, + "blocks": [ + { + "bbox": [ + 329, + 113, + 471, + 122 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 113, + 471, + 122 + ], + "spans": [ + { + "bbox": [ + 329, + 113, + 471, + 122 + ], + "type": "text", + "content": "collaboration (2023). arXiv:2310.00280." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 308, + 123, + 531, + 189 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 123, + 531, + 189 + ], + "spans": [ + { + "bbox": [ + 308, + 123, + 531, + 189 + ], + "type": "text", + "content": "[181] J. S. Park, L. Popowski, C. Cai, M. R. Morris, P. Liang, M. S. Bernstein, Social simulacra: Creating populated prototypes for social computing systems, in: Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, UIST '22, Association for Computing Machinery, New York, NY, USA, 2022. doi:10.1145/3526113.3545616. URL https://doi.org/10.1145/3526113.3545616" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 308, + 190, + 531, + 245 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 190, + 531, + 245 + ], + "spans": [ + { + "bbox": [ + 308, + 190, + 531, + 245 + ], + "type": "text", + "content": "[182] J. Terry, B. Black, N. Grammel, M. Jayakumar, A. Hari, R. Sullivan, L. Santos, R. Perez, C. Horsch, C. Dieffendahl, et al., Pettingzoo: A standard api for multi-agent reinforcement learning, in: Proceedings of the 35th Conference on Neural Information Processing Systems, in: Advances in Neural Information Processing Systems, Vol. 34, 2021, pp. 15032-15043." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 308, + 246, + 531, + 321 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 246, + 531, + 321 + ], + "spans": [ + { + "bbox": [ + 308, + 246, + 531, + 321 + ], + "type": "text", + "content": "[183] S. Huang, S. Ontañón, C. Bamford, L. Grela, Gym-µrts: Toward affordable full game real-time strategy games research with deep reinforcement learning, in: 2021 IEEE Conference on Games (CoG), Copenhagen, Denmark, August 17-20, 2021, IEEE, 2021, pp. 1-8. doi:10.1109/CoG52621.2021.9619076. URL https://doi.org/10.1109/CoG52621.2021.9619076" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 308, + 322, + 531, + 387 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 322, + 531, + 387 + ], + "spans": [ + { + "bbox": [ + 308, + 322, + 531, + 387 + ], + "type": "text", + "content": "[184] L. Zheng, J. Yang, H. Cai, M. Zhou, W. Zhang, J. Wang, Y. Yu, Magent: A many-agent reinforcement learning platform for artificial collective intelligence, Proceedings of the AAAI Conference on Artificial Intelligence 32 (1) (2018). doi:10.1609/aaai.v32i1.11371. URL https://ojs.aaii.org/index.php/AAAI/article/view/11371" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 308, + 388, + 531, + 452 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 388, + 531, + 452 + ], + "spans": [ + { + "bbox": [ + 308, + 388, + 531, + 452 + ], + "type": "text", + "content": "[185] C. Yu, P. Wang, Dexterous manipulation for multifingered robotic hands with reinforcement learning: A review, Frontiers in Neurorobotics 16 (2022). doi:10.3389/fnbot.2022.861825. URL https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2022.861825" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 308, + 454, + 531, + 528 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 454, + 531, + 528 + ], + "spans": [ + { + "bbox": [ + 308, + 454, + 531, + 528 + ], + "type": "text", + "content": "[186] O. M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, J. Schneider, S. Sidor, J. Tobin, P. Welinder, L. Weng, W. Zaremba, Learning dexterous in-hand manipulation, The International Journal of Robotics Research 39 (1) (2020) 3-20. arXiv:https://doi.org/10.1177/0278364919887447. doi:10.1177/0278364919887447. URL https://doi.org/10.1177/0278364919887447" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 308, + 529, + 531, + 585 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 529, + 531, + 585 + ], + "spans": [ + { + "bbox": [ + 308, + 529, + 531, + 585 + ], + "type": "text", + "content": "[187] M. Chevalier-Boisvert, B. Dai, M. Towers, R. de Lazcano, L. Willems, S. Lahlou, S. Pal, P. S. Castro, J. Terry, Minigrid & miniworld: Modular & customizable reinforcement learning environments for goal-oriented tasks (2023). arXiv: 2306.13831. URL https://arxiv.org/abs/2306.13831" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 308, + 587, + 531, + 633 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 587, + 531, + 633 + ], + "spans": [ + { + "bbox": [ + 308, + 587, + 531, + 633 + ], + "type": "text", + "content": "[188] J. Z. Leibo, E. Duñez-Guzmán, A. S. Vezhnevets, J. P. Agapiou, P. Sunehag, R. Koster, J. Matyas, C. Beattie, I. Mordatch, T. Graepel, Scalable evaluation of multi-agent reinforcement learning with melting pot (2021). arXiv:2107.06857. 
URL https://arxiv.org/abs/2107.06857" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 308, + 634, + 531, + 680 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 634, + 531, + 680 + ], + "spans": [ + { + "bbox": [ + 308, + 634, + 531, + 680 + ], + "type": "text", + "content": "[189] C. Mehlman, J. Abramov, G. Falco, Cat-and-mouse satellite dynamics: Divergent adversarial reinforcement learning for contested multi-agent space operations (2024). arXiv: 2409.17443. URL https://arxiv.org/abs/2409.17443" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 308, + 681, + 531, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 681, + 531, + 728 + ], + "spans": [ + { + "bbox": [ + 308, + 681, + 531, + 728 + ], + "type": "text", + "content": "[190] Y. Dong, X. Zhu, Z. Pan, L. Zhu, Y. Yang, VillagerAgent: A graph-based multi-agent framework for coordinating complex task dependencies in Minecraft, in: L.-W. Ku, A. Martins, V. Srikumar (Eds.), Findings of the Association for Computational Linguistics: ACL 2024, Association for Computational" + } + ] + } + ], + "index": 24 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 290, + 740, + 302, + 748 + ], + "type": "text", + "content": "48" + } + ] + } + ], + "index": 26 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 48 + }, + { + "para_blocks": [ + { + "bbox": [ + 63, + 112, + 286, + 728 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 84, + 112, + 285, + 151 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 84, + 112, + 285, + 151 + ], + "spans": [ + { + "bbox": [ + 84, + 112, + 285, + 151 + ], + "type": "text", + "content": "Linguistics, Bangkok, Thailand, 2024, pp. 16290-16314. doi:10.18653/v1/2024-findings-acl.964. URL https://aclanthology.org/2024.findings-acl.964/" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 63, + 151, + 286, + 198 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 151, + 286, + 198 + ], + "spans": [ + { + "bbox": [ + 63, + 151, + 286, + 198 + ], + "type": "text", + "content": "[191] H. Li, X. Yang, Z. Wang, X. Zhu, J. Zhou, Y. Qiao, X. Wang, H. Li, L. Lu, J. Dai, Auto mc-reward: Automated dense reward design with large language models for mycraft (2024). arXiv:2312.09238. URL https://arxiv.org/abs/2312.09238" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 63, + 198, + 286, + 255 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 198, + 286, + 255 + ], + "spans": [ + { + "bbox": [ + 63, + 198, + 286, + 255 + ], + "type": "text", + "content": "[192] X. Zhu, Y. Chen, H. Tian, C. Tao, W. Su, C. Yang, G. Huang, B. Li, L. Lu, X. Wang, Y. Qiao, Z. Zhang, J. Dai, Ghost in the minecraft: Generally capable agents for open-world environments via large language models with text-based knowledge and memory (2023). arXiv:2305.17144. URL https://arxiv.org/abs/2305.17144" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 63, + 255, + 286, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 255, + 286, + 274 + ], + "spans": [ + { + "bbox": [ + 63, + 255, + 286, + 274 + ], + "type": "text", + "content": "[193] S. C. Duncan, Minecraft, beyond construction and survival, Well Played 1 (1) (2011) 1-22." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 63, + 274, + 286, + 322 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 274, + 286, + 322 + ], + "spans": [ + { + "bbox": [ + 63, + 274, + 286, + 322 + ], + "type": "text", + "content": "[194] A. Juliani, V.-P. Berges, E. Teng, A. Cohen, J. Harper, C. Elion, C. Goy, Y. Gao, H. Henry, M. Mattar, D. Lange, Unity: A general platform for intelligent agents (2020). arXiv:1809.02627. URL https://arxiv.org/abs/1809.02627" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 63, + 322, + 286, + 387 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 322, + 286, + 387 + ], + "spans": [ + { + "bbox": [ + 63, + 322, + 286, + 387 + ], + "type": "text", + "content": "[195] L. Hu, J. Tang, G. Zou, Z. Li, J. Zeng, M. Li, Simulation optimization of highway hard shoulder running based on multi-agent deep deterministic policy gradient algorithm, Alexandria Engineering Journal 117 (2025) 99-115. doi:https://doi.org/10.1016/j.aej.2024.12.110. URL https://www.sciencedirect.com/science/article/pii/S1110016824017095" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 63, + 387, + 286, + 426 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 387, + 286, + 426 + ], + "spans": [ + { + "bbox": [ + 63, + 387, + 286, + 426 + ], + "type": "text", + "content": "[196] Anonymous, A generalist hanabi agent, in: Submitted to The Thirteenth International Conference on Learning Representations, 2024, under review. URL https://openreview.net/forum?id=pCj2sLNoJq" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 63, + 426, + 286, + 463 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 426, + 286, + 463 + ], + "spans": [ + { + "bbox": [ + 63, + 426, + 286, + 463 + ], + "type": "text", + "content": "[197] F. Bredell, H. A. Engelbrecht, J. C. Schoeman, Augmenting the action space with conventions to improve multi-agent cooperation in hanabi (2024). arXiv:2412.06333. URL https://arxiv.org/abs/2412.06333" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 63, + 463, + 286, + 510 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 463, + 286, + 510 + ], + "spans": [ + { + "bbox": [ + 63, + 463, + 286, + 510 + ], + "type": "text", + "content": "[198] M. Kölle, Y. Erpelding, F. Ritz, T. Phan, S. Illium, C. Linnhoff-Popien, Aquarium: A comprehensive framework for exploring predator-prey dynamics through multi-agent reinforcement learning algorithms (2024). arXiv:2401.07056. URL https://arxiv.org/abs/2401.07056" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 63, + 511, + 286, + 567 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 511, + 286, + 567 + ], + "spans": [ + { + "bbox": [ + 63, + 511, + 286, + 567 + ], + "type": "text", + "content": "[199] A. Chatterjee, M. A. Abbasi, E. Venturino, J. Zhen, M. Haque, A predator-prey model with prey refuge: under a stochastic and deterministic environment, Nonlinear Dynamics 112 (15) (2024) 13667-13693. doi:10.1007/s11071-024-09756-9. URL https://doi.org/10.1007/s11071-024-09756-9" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 63, + 567, + 286, + 604 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 567, + 286, + 604 + ], + "spans": [ + { + "bbox": [ + 63, + 567, + 286, + 604 + ], + "type": "text", + "content": "[200] Z. Mandi, S. Jain, S. 
Song, Roco: Dialectic multi-robot collaboration with large language models, in: 2024 IEEE International Conference on Robotics and Automation (ICRA), 2024, pp. 286-299. doi:10.1109/ICRA57147.2024.10610855." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 63, + 605, + 286, + 661 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 605, + 286, + 661 + ], + "spans": [ + { + "bbox": [ + 63, + 605, + 286, + 661 + ], + "type": "text", + "content": "[201] J. S. Park, J. O'Brien, C. J. Cai, M. R. Morris, P. Liang, M. S. Bernstein, Generative agents: Interactive simulacra of human behavior, in: Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST '23), ACM, ACM, 2023, p. October 2023. doi:10.1145/3586183.3606763." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 63, + 661, + 286, + 709 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 661, + 286, + 709 + ], + "spans": [ + { + "bbox": [ + 63, + 661, + 286, + 709 + ], + "type": "text", + "content": "[202] G. Kovac, R. Portelas, P. F. Dominey, P.-Y. Oudeyer, The social AI school: Insights from developmental psychology towards artificial socio-cultural agents, in: First Workshop on Theory of Mind in Communicating Agents, 2023. URL https://openreview.net/forum?id=Y5r8Wa670b" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 63, + 709, + 286, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 709, + 286, + 728 + ], + "spans": [ + { + "bbox": [ + 63, + 709, + 286, + 728 + ], + "type": "text", + "content": "[203] G. Kovač, R. Portelas, P. F. Dominey, P.-Y. Oudeyer, The socialai school: a framework leveraging developmental psychol-" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 308, + 113, + 531, + 728 + ], + "type": "list", + "angle": 0, + "index": 29, + "blocks": [ + { + "bbox": [ + 329, + 113, + 531, + 150 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 113, + 531, + 150 + ], + "spans": [ + { + "bbox": [ + 329, + 113, + 531, + 150 + ], + "type": "text", + "content": "ogy toward artificial socio-cultural agents, Frontiers in Neurorobotics 18, part of the Research Topic: Theory of Mind in Robots and Intelligent Systems (2024). doi:10.3389/fnbot.2024.1396359." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 308, + 151, + 531, + 189 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 151, + 531, + 189 + ], + "spans": [ + { + "bbox": [ + 308, + 151, + 531, + 189 + ], + "type": "text", + "content": "[204] G. Mukobi, H. Erlebach, N. Lauffer, L. Hammond, A. Chan, J. Clifton, Welfare diplomacy: Benchmarking language model cooperation (2024). URL https://openreview.net/forum?id=AKJLnDgzkm" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 308, + 189, + 531, + 236 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 189, + 531, + 236 + ], + "spans": [ + { + "bbox": [ + 308, + 189, + 531, + 236 + ], + "type": "text", + "content": "[205] J. Boubin, C. Burley, P. Han, B. Li, B. Porter, C. Stewart, Programming and deployment of autonomous swarms using multi-agent reinforcement learning (2021). arXiv:2105.10605. 
URL https://arxiv.org/abs/2105.10605" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 308, + 236, + 531, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 236, + 531, + 274 + ], + "spans": [ + { + "bbox": [ + 308, + 236, + 531, + 274 + ], + "type": "text", + "content": "[206] T. Li, F. Xie, Y. Xiong, Q. Feng, Multi-arm robot task planning for fruit harvesting using multi-agent reinforcement learning (2023). arXiv:2303.00460. URL https://arxiv.org/abs/2303.00460" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 308, + 274, + 531, + 312 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 274, + 531, + 312 + ], + "spans": [ + { + "bbox": [ + 308, + 274, + 531, + 312 + ], + "type": "text", + "content": "[207] A. Mahajan, S. Hegde, E. Shay, D. Wu, A. Prins, Comparative analysis of multi-agent reinforcement learning policies for crop planning decision support (2024). arXiv:2412.02057. URL https://arxiv.org/abs/2412.02057" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 308, + 312, + 531, + 359 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 312, + 531, + 359 + ], + "spans": [ + { + "bbox": [ + 308, + 312, + 531, + 359 + ], + "type": "text", + "content": "[208] L. Benke, M. Papasimeon, T. Miller, Modelling strategic deceptive planning in adversarial multi-agent systems, in: S. Sarkadi, B. Wright, P. Masters, P. McBurney (Eds.), Deceptive AI, Springer International Publishing, Cham, 2021, pp. 76-83." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 308, + 359, + 531, + 406 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 359, + 531, + 406 + ], + "spans": [ + { + "bbox": [ + 308, + 359, + 531, + 406 + ], + "type": "text", + "content": "[209] A. V. Sutagundar, S. Manvi, Context aware multisensor image fusion for military sensor networks using multi-agent system, International Journal of Ad hoc, Sensor Ubiquitous Computing 2 (1) (2011) 147-167. doi:10.5121/ijasuc.2011.2113. URL http://dx.doi.org/10.5121/ijasuc.2011.2113" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 308, + 407, + 531, + 454 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 407, + 531, + 454 + ], + "spans": [ + { + "bbox": [ + 308, + 407, + 531, + 454 + ], + "type": "text", + "content": "[210] S. Vangaru, D. Rosen, D. Green, R. Rodriguez, M. Wiecek, A. Johnson, A. M. Jones, W. C. Headley, A multi-agent reinforcement learning testbed for cognitive radio applications (2024). arXiv:2410.21521. URL https://arxiv.org/abs/2410.21521" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 308, + 454, + 531, + 491 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 454, + 531, + 491 + ], + "spans": [ + { + "bbox": [ + 308, + 454, + 531, + 491 + ], + "type": "text", + "content": "[211] Z. Wang, L. Wang, Q. Yi, Y. Liu, A marl based multi-target tracking algorithm under jamming against radar (2024). arXiv:2412.12547. URL https://arxiv.org/abs/2412.12547" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 308, + 491, + 531, + 538 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 491, + 531, + 538 + ], + "spans": [ + { + "bbox": [ + 308, + 491, + 531, + 538 + ], + "type": "text", + "content": "[212] A. Alexopoulos, T. Schmidt, E. Badreddin, Cooperative pursue in pursuit-evasion games with unmanned aerial vehicles, in: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015, pp. 
4538-4543. doi:10.1109/IROS.2015.7354022." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 308, + 539, + 531, + 595 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 539, + 531, + 595 + ], + "spans": [ + { + "bbox": [ + 308, + 539, + 531, + 595 + ], + "type": "text", + "content": "[213] D. Luo, Z. Fan, Z. Yang, Y. Xu, Multi-uav cooperative maneuver decision-making for pursuit-evasion using improved madrl, Defence Technology 35 (2024) 187-197. doi:https://doi.org/10.1016/j.dt.2023.11.013. URL https://www.sciencedirect.com/science/article/pii/S221491472300301X" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 308, + 595, + 531, + 642 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 595, + 531, + 642 + ], + "spans": [ + { + "bbox": [ + 308, + 595, + 531, + 642 + ], + "type": "text", + "content": "[214] D. Wang, Multi-agent reinforcement learning for safe driving in on-ramp merging of autonomous vehicles, in: 2024 14th International Conference on Cloud Computing, Data Science & Engineering (Confluence), 2024, pp. 644-651. doi:10.1109/Confluence60223.2024.10463500." + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 308, + 642, + 531, + 690 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 642, + 531, + 690 + ], + "spans": [ + { + "bbox": [ + 308, + 642, + 531, + 690 + ], + "type": "text", + "content": "[215] N. Aboueleneen, Y. Bello, A. Albaseer, A. R. Hussein, M. Abdallah, E. Hossain, Distributed traffic control in complex dynamic roadblocks: A multi-agent deep rl approach (2024). arXiv:2501.00211. URL https://arxiv.org/abs/2501.00211" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 308, + 690, + 531, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 690, + 531, + 728 + ], + "spans": [ + { + "bbox": [ + 308, + 690, + 531, + 728 + ], + "type": "text", + "content": "[216] J. Yu, L. Liang, C. Guo, Z. Guo, S. Jin, G. Y. Li, Heterogeneous multi-agent reinforcement learning for distributed channel access in wlans (2024). arXiv:2412.14218. URL https://arxiv.org/abs/2412.14218" + } + ] + } + ], + "index": 28 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 290, + 739, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 290, + 739, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 290, + 739, + 302, + 748 + ], + "type": "text", + "content": "49" + } + ] + } + ], + "index": 30 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 49 + }, + { + "para_blocks": [ + { + "bbox": [ + 63, + 113, + 286, + 728 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 63, + 113, + 286, + 151 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 113, + 286, + 151 + ], + "spans": [ + { + "bbox": [ + 63, + 113, + 286, + 151 + ], + "type": "text", + "content": "[217] K. Sun, H. Yu, Reinforcement learning for freeway lane-change regulation via connected vehicles (2024). arXiv: 2412.04341. URL https://arxiv.org/abs/2412.04341" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 63, + 152, + 286, + 188 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 152, + 286, + 188 + ], + "spans": [ + { + "bbox": [ + 63, + 152, + 286, + 188 + ], + "type": "text", + "content": "[218] T. Azfar, R. Ke, Traffic co-simulation framework empowered by infrastructure camera sensing and reinforcement learning (2024). arXiv:2412.03925. 
URL https://arxiv.org/abs/2412.03925" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 63, + 189, + 286, + 227 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 189, + 286, + 227 + ], + "spans": [ + { + "bbox": [ + 63, + 189, + 286, + 227 + ], + "type": "text", + "content": "[219] R. Bokade, X. Jin, Offlight: An offline multi-agent reinforcement learning framework for traffic signal control (2024). arXiv:2411.06601. URL https://arxiv.org/abs/2411.06601" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 63, + 227, + 286, + 264 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 227, + 286, + 264 + ], + "spans": [ + { + "bbox": [ + 63, + 227, + 286, + 264 + ], + "type": "text", + "content": "[220] D. K. Kwesiga, S. C. Vishnoi, A. Guin, M. Hunter, Integrating transit signal priority into multi-agent reinforcement learning based traffic signal control (2024). arXiv:2411.19359. URL https://arxiv.org/abs/2411.19359" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 63, + 265, + 286, + 329 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 265, + 286, + 329 + ], + "spans": [ + { + "bbox": [ + 63, + 265, + 286, + 329 + ], + "type": "text", + "content": "[221] Y. Zhang, G. Zheng, Z. Liu, Q. Li, H. Zeng, Marlens: Understanding multi-agent reinforcement learning for traffic signal control via visual analytics, IEEE Transactions on Visualization and Computer Graphics (2024) 1-16doi:10.1109/tvcg.2024.3392587. URL http://dx.doi.org/10.1109/TVCG.2024.3392587" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 63, + 330, + 286, + 368 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 330, + 286, + 368 + ], + "spans": [ + { + "bbox": [ + 63, + 330, + 286, + 368 + ], + "type": "text", + "content": "[222] J. Xue, D. Zhang, R. Xiong, Y. Wang, E. Liu, A two-stage based social preference recognition in multi-agent autonomous driving system (2023). arXiv:2310.03303. URL https://arxiv.org/abs/2310.03303" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 63, + 369, + 286, + 416 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 369, + 286, + 416 + ], + "spans": [ + { + "bbox": [ + 63, + 369, + 286, + 416 + ], + "type": "text", + "content": "[223] J. Liu, P. Hang, X. Na, C. Huang, J. Sun, Cooperative decision-making for cavs at unsignalized intersections: A marl approach with attention and hierarchical game priors (2024). arXiv: 2409.05712. URL https://arxiv.org/abs/2409.05712" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 63, + 417, + 286, + 454 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 417, + 286, + 454 + ], + "spans": [ + { + "bbox": [ + 63, + 417, + 286, + 454 + ], + "type": "text", + "content": "[224] C. Formanek, L. Beyers, C. R. Tilbury, J. P. Shock, A. Pretorius, Putting data at the centre of offline multi-agent reinforcement learning (2024). arXiv:2409.12001. URL https://arxiv.org/abs/2409.12001" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 63, + 454, + 286, + 501 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 454, + 286, + 501 + ], + "spans": [ + { + "bbox": [ + 63, + 454, + 286, + 501 + ], + "type": "text", + "content": "[225] R. Zhang, J. Hou, F. Walter, S. Gu, J. Guan, F. Röhrbein, Y. Du, P. Cai, G. Chen, A. Knoll, Multi-agent reinforcement learning for autonomous driving: A survey (2024). arXiv: 2408.09675. 
URL https://arxiv.org/abs/2408.09675" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 63, + 502, + 286, + 539 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 502, + 286, + 539 + ], + "spans": [ + { + "bbox": [ + 63, + 502, + 286, + 539 + ], + "type": "text", + "content": "[226] S. Kotoku, T. Mihana, A. Röhm, R. Horisaki, Decentralized multi-agent reinforcement learning algorithm using a cluster-synchronized laser network (2024). arXiv:2407.09124. URL https://arxiv.org/abs/2407.09124" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 63, + 539, + 286, + 586 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 539, + 286, + 586 + ], + "spans": [ + { + "bbox": [ + 63, + 539, + 286, + 586 + ], + "type": "text", + "content": "[227] M. Hua, D. Chen, K. Jiang, F. Zhang, J. Wang, B. Wang, Q. Zhou, H. Xu, Communication-efficient marl for platoon stability and energy-efficiency co-optimization in cooperative adaptive cruise control of cavs (2024). arXiv:2406.11653. URL https://arxiv.org/abs/2406.11653" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 63, + 587, + 286, + 633 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 587, + 286, + 633 + ], + "spans": [ + { + "bbox": [ + 63, + 587, + 286, + 633 + ], + "type": "text", + "content": "[228] K. DeMarco, E. Squires, M. Day, C. Pippin, Simulating collaborative robots in a massive multi-agent game environment (scrimmage), in: N. Correll, M. Schwager, M. Otte (Eds.), Distributed Autonomous Robotic Systems, Springer International Publishing, Cham, 2019, pp. 283-297." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 63, + 634, + 286, + 680 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 634, + 286, + 680 + ], + "spans": [ + { + "bbox": [ + 63, + 634, + 286, + 680 + ], + "type": "text", + "content": "[229] Y. Xie, Y. Liu, Z. Ma, L. Shi, X. Wang, W. Yuan, M. O. Jackson, Q. Mei, How different ai chatbots behave? benchmarking large language models in behavioral economics games (2024). arXiv:2412.12362. URL https://arxiv.org/abs/2412.12362" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 63, + 681, + 286, + 719 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 681, + 286, + 719 + ], + "spans": [ + { + "bbox": [ + 63, + 681, + 286, + 719 + ], + "type": "text", + "content": "[230] W. Lin, J. Roberts, Y. Yang, S. Albanie, Z. Lu, K. Han, Beyond outcomes: Transparent assessment of llm reasoning in games (2024). arXiv:2412.13602. URL https://arxiv.org/abs/2412.13602" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 63, + 719, + 286, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 719, + 286, + 728 + ], + "spans": [ + { + "bbox": [ + 63, + 719, + 286, + 728 + ], + "type": "text", + "content": "[231] F. Jia, J. Zheng, F. Li, Decentralized intelligence in game:" + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 308, + 113, + 530, + 726 + ], + "type": "list", + "angle": 0, + "index": 30, + "blocks": [ + { + "bbox": [ + 329, + 113, + 530, + 142 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 113, + 530, + 142 + ], + "spans": [ + { + "bbox": [ + 329, + 113, + 530, + 142 + ], + "type": "text", + "content": "Embodied ai agents and the convergence of defi and virtual ecosystems (2024). arXiv:2412.18601. 
\nURL https://arxiv.org/abs/2412.18601" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 308, + 142, + 530, + 188 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 142, + 530, + 188 + ], + "spans": [ + { + "bbox": [ + 308, + 142, + 530, + 188 + ], + "type": "text", + "content": "[232] C. Yin, G. Zhouhong, D. Zhaohan, Y. Zheyu, C. Shaosheng, X. Yiqian, F. Hongwei, C. Ping, Mirage: Exploring how large language models perform in complex social interactive environments (2025). arXiv:2501.01652. URL https://arxiv.org/abs/2501.01652" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 308, + 189, + 530, + 227 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 189, + 530, + 227 + ], + "spans": [ + { + "bbox": [ + 308, + 189, + 530, + 227 + ], + "type": "text", + "content": "[233] Z. Zhang, Y. Lan, Y. Chen, L. Wang, X. Wang, H. Wang, Dvm: Towards controllable llm agents in social deduction games (2025). arXiv:2501.06695. URL https://arxiv.org/abs/2501.06695" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 308, + 227, + 530, + 264 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 227, + 530, + 264 + ], + "spans": [ + { + "bbox": [ + 308, + 227, + 530, + 264 + ], + "type": "text", + "content": "[234] A. Gonzalez-Bonorino, M. Capra, E. Pantoja, Llms model nonweird populations: Experiments with synthetic cultural agents (2025). arXiv:2501.06834. URL https://arxiv.org/abs/2501.06834" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 308, + 265, + 530, + 312 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 265, + 530, + 312 + ], + "spans": [ + { + "bbox": [ + 308, + 265, + 530, + 312 + ], + "type": "text", + "content": "[235] X. Luo, F. Ding, F. Yang, Y. Zhou, J. Loo, H. H. Tew, C. Liu, Sensorag: Constructing environmental knowledge bases with proactive querying for llm-based autonomous driving (2025). arXiv:2501.03535. URL https://arxiv.org/abs/2501.03535" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 308, + 312, + 530, + 359 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 312, + 530, + 359 + ], + "spans": [ + { + "bbox": [ + 308, + 312, + 530, + 359 + ], + "type": "text", + "content": "[236] D. Mahmud, H. Hajmohamed, S. Almentheri, S. Alqaydi, L. Aldaheri, R. A. Khalil, N. Saeed, Integrating llms with its: Recent advances, potentials, challenges, and future directions (2025). arXiv:2501.04437. URL https://arxiv.org/abs/2501.04437" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 308, + 359, + 530, + 396 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 359, + 530, + 396 + ], + "spans": [ + { + "bbox": [ + 308, + 359, + 530, + 396 + ], + "type": "text", + "content": "[237] Z. Peng, Y. Wang, X. Han, L. Zheng, J. Ma, Learningflow: Automated policy learning workflow for urban driving with large language models (2025). arXiv:2501.05057. URL https://arxiv.org/abs/2501.05057" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 308, + 397, + 530, + 434 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 397, + 530, + 434 + ], + "spans": [ + { + "bbox": [ + 308, + 397, + 530, + 434 + ], + "type": "text", + "content": "[238] A. Karagounis, Leveraging large language models for enhancing autonomous vehicle perception (2024). arXiv:2412.20230. 
URL https://arxiv.org/abs/2412.20230" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 308, + 435, + 530, + 481 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 435, + 530, + 481 + ], + "spans": [ + { + "bbox": [ + 308, + 435, + 530, + 481 + ], + "type": "text", + "content": "[239] X. Luo, F. Ding, R. Chen, R. Panda, J. Loo, S. Zhang, \"what's happening\"- a human-centered multimodal interpreter explaining the actions of autonomous vehicles (2025). arXiv:2501.05322. URL https://arxiv.org/abs/2501.05322" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 308, + 482, + 530, + 519 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 482, + 530, + 519 + ], + "spans": [ + { + "bbox": [ + 308, + 482, + 530, + 519 + ], + "type": "text", + "content": "[240] H. Gao, Y. Zhao, Application of vision-language model to pedestrians behavior and scene understanding in autonomous driving (2025). arXiv:2501.06680. URL https://arxiv.org/abs/2501.06680" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 308, + 520, + 530, + 566 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 520, + 530, + 566 + ], + "spans": [ + { + "bbox": [ + 308, + 520, + 530, + 566 + ], + "type": "text", + "content": "[241] D. Hegde, R. Yasarla, H. Cai, S. Han, A. Bhattacharyya, S. Mahajan, L. Liu, R. Garrepalli, V. M. Patel, F. Porkli, Distilling multi-modal large language models for autonomous driving (2025). arXiv:2501.09757. URL https://arxiv.org/abs/2501.09757" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 308, + 567, + 530, + 642 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 567, + 530, + 642 + ], + "spans": [ + { + "bbox": [ + 308, + 567, + 530, + 642 + ], + "type": "text", + "content": "[242] R. Wang, X. He, R. Yu, W. Qiu, B. An, Z. Rabinovich, Learning efficient multi-agent communication: An information bottleneck approach, in: H. D. III, A. Singh (Eds.), Proceedings of the 37th International Conference on Machine Learning, Vol. 119 of Proceedings of Machine Learning Research, PMLR, 2020, pp. 9908-9918. URL https://proceedings.mlr.press/v119/wang20i.html" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 308, + 643, + 530, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 643, + 530, + 689 + ], + "spans": [ + { + "bbox": [ + 308, + 643, + 530, + 689 + ], + "type": "text", + "content": "[243] A. van den Oord, O. Vinyals, K. Kavukcuoglu, Neural discrete representation learning, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, Curran Associates Inc., Red Hook, NY, USA, 2017, p. 6309-6318." + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 308, + 690, + 530, + 726 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 690, + 530, + 726 + ], + "spans": [ + { + "bbox": [ + 308, + 690, + 530, + 726 + ], + "type": "text", + "content": "[244] F. Ye, J. Chen, Y. Tian, T. Jiang, Cooperative task assignment of a heterogeneous multi-uav system using an adaptive genetic algorithm, Electronics 9 (4) (2020). doi:10.3390/electronics9040687." 
+ } + ] + } + ], + "index": 29 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 739, + 303, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 739, + 303, + 748 + ], + "spans": [ + { + "bbox": [ + 291, + 739, + 303, + 748 + ], + "type": "text", + "content": "50" + } + ] + } + ], + "index": 31 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 50 + }, + { + "para_blocks": [ + { + "bbox": [ + 63, + 112, + 298, + 728 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 85, + 112, + 266, + 122 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 112, + 266, + 122 + ], + "spans": [ + { + "bbox": [ + 85, + 112, + 266, + 122 + ], + "type": "text", + "content": "URL https://www.mdpi.com/2079-9292/9/4/687" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 63, + 123, + 298, + 206 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 123, + 298, + 206 + ], + "spans": [ + { + "bbox": [ + 63, + 123, + 298, + 206 + ], + "type": "text", + "content": "[245] J. Wang, D. Ye, Z. Lu, Mutual-information regularized multiagent policy iteration, in: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine (Eds.), Advances in Neural Information Processing Systems, Vol. 36, Curran Associates, Inc., 2023, pp. 2617-2635. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/0799492e7be38b66d10ead5e8809616d-Paper-Conference.pdf" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 63, + 207, + 286, + 245 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 207, + 286, + 245 + ], + "spans": [ + { + "bbox": [ + 63, + 207, + 286, + 245 + ], + "type": "text", + "content": "[246] P. Li, H. Tang, T. Yang, X. Hao, T. Sang, Y. Zheng, J. Hao, M. E. Taylor, Z. Wang, Pmic: Improving multi-agent reinforcement learning with progressive mutual information collaboration, ICML 2022 (2022)." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 63, + 246, + 286, + 283 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 246, + 286, + 283 + ], + "spans": [ + { + "bbox": [ + 63, + 246, + 286, + 283 + ], + "type": "text", + "content": "[247] S. Kang, Y. Lee, S.-Y. Yun, DPM: Dual preferences-based multi-agent reinforcement learning, in: ICML 2024 Workshop on Models of Human Feedback for AI Alignment, 2024. URL https://openreview.net/forum?id=TW3DIP2h5p" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 63, + 284, + 286, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 284, + 286, + 330 + ], + "spans": [ + { + "bbox": [ + 63, + 284, + 286, + 330 + ], + "type": "text", + "content": "[248] Q. Fu, T. Qiu, J. Yi, Z. Pu, S. Wu, Concentration network for reinforcement learning of large-scale multi-agent systems, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36, AAAI Press, Beijing, China, 2022, p. 21165. doi:10.1609/aaai.v36i9.21165." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 63, + 331, + 286, + 368 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 331, + 286, + 368 + ], + "spans": [ + { + "bbox": [ + 63, + 331, + 286, + 368 + ], + "type": "text", + "content": "[249] B. Peng, T. Rashid, C. A. S. de Witt, P.-A. Kamienny, P. H. S. Torr, W. Böhmer, S. Whiteson, Facmac: Factored multi-agent centralised policy gradients (2021). arXiv:2003.06709. 
URL https://arxiv.org/abs/2003.06709" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 63, + 369, + 286, + 406 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 369, + 286, + 406 + ], + "spans": [ + { + "bbox": [ + 63, + 369, + 286, + 406 + ], + "type": "text", + "content": "[250] K. Pandya, M. Holia, Automating customer service using langchain: Building custom open-source gpt chatbot for organizations (2023). arXiv:2310.05421. URL https://arxiv.org/abs/2310.05421" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 63, + 407, + 286, + 444 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 407, + 286, + 444 + ], + "spans": [ + { + "bbox": [ + 63, + 407, + 286, + 444 + ], + "type": "text", + "content": "[251] H. Du, S. Thudumu, R. Vasa, K. Mouzakis, A survey on context-aware multi-agent systems: Techniques, challenges and future directions (2024). arXiv:2402.01968. URL https://arxiv.org/abs/2402.01968" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 63, + 444, + 286, + 500 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 444, + 286, + 500 + ], + "spans": [ + { + "bbox": [ + 63, + 444, + 286, + 500 + ], + "type": "text", + "content": "[252] K. Hu, M. Li, Z. Song, K. Xu, Q. Xia, N. Sun, P. Zhou, M. Xia, A review of research on reinforcement learning algorithms for multi-agents, Neurocomputing 599 (2024) 128068. doi: https://doi.org/10.1016/j.neucom.2024.128068. URL https://www.sciencedirect.com/science/article/pii/S0925231224008397" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 63, + 502, + 286, + 539 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 502, + 286, + 539 + ], + "spans": [ + { + "bbox": [ + 63, + 502, + 286, + 539 + ], + "type": "text", + "content": "[253] K.-T. Tran, D. Dao, M.-D. Nguyen, Q.-V. Pham, B. O'Sullivan, H. D. Nguyen, Multi-agent collaboration mechanisms: A survey of llms (2025). arXiv:2501.06322. URL https://arxiv.org/abs/2501.06322" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 63, + 539, + 286, + 576 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 539, + 286, + 576 + ], + "spans": [ + { + "bbox": [ + 63, + 539, + 286, + 576 + ], + "type": "text", + "content": "[254] Y. Talebirad, A. Nadiri, Multi-agent collaboration: Harnessing the power of intelligent llm agents (2023). arXiv:2306.03314. URL https://arxiv.org/abs/2306.03314" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 63, + 577, + 286, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 577, + 286, + 624 + ], + "spans": [ + { + "bbox": [ + 63, + 577, + 286, + 624 + ], + "type": "text", + "content": "[255] F. L. D. Silva, M. E. Taylor, A. H. R. Costa, Autonomously reusing knowledge in multiagent reinforcement learning, in: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, 2018, pp. 5487-5493." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 63, + 624, + 286, + 661 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 624, + 286, + 661 + ], + "spans": [ + { + "bbox": [ + 63, + 624, + 286, + 661 + ], + "type": "text", + "content": "[256] F. L. Da Silva, A. H. R. Costa, A survey on transfer learning for multiagent reinforcement learning systems, J. Artif. Int. Res. 64 (1) (2019) 645-703. doi:10.1613/jair.1.11396. 
URL https://doi.org/10.1613/jair.1.11396" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 63, + 662, + 286, + 718 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 662, + 286, + 718 + ], + "spans": [ + { + "bbox": [ + 63, + 662, + 286, + 718 + ], + "type": "text", + "content": "[257] Y. Bengio, J. Louradour, R. Collobert, J. Weston, Curriculum learning, in: Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, Association for Computing Machinery, New York, NY, USA, 2009, p. 41-48. doi:10.1145/1553374.1553380. URL https://doi.org/10.1145/1553374.1553380" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 63, + 719, + 286, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 719, + 286, + 728 + ], + "spans": [ + { + "bbox": [ + 63, + 719, + 286, + 728 + ], + "type": "text", + "content": "[258] X. Wang, Y. Chen, W. Zhu, A survey on curriculum learn" + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 308, + 113, + 530, + 728 + ], + "type": "list", + "angle": 0, + "index": 28, + "blocks": [ + { + "bbox": [ + 329, + 113, + 530, + 141 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 113, + 530, + 141 + ], + "spans": [ + { + "bbox": [ + 329, + 113, + 530, + 141 + ], + "type": "text", + "content": "ing, IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (9) (2022) 4555-4576. doi:10.1109/TPAMI.2021.3069908." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 308, + 142, + 530, + 179 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 142, + 530, + 179 + ], + "spans": [ + { + "bbox": [ + 308, + 142, + 530, + 179 + ], + "type": "text", + "content": "[259] L. Pinto, J. Davidson, R. Sukthankar, A. Gupta, Robust adversarial reinforcement learning, in: Proceedings of the 34th International Conference on Machine Learning, PMLR, 2017, pp. 2817-2826." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 308, + 179, + 530, + 245 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 179, + 530, + 245 + ], + "spans": [ + { + "bbox": [ + 308, + 179, + 530, + 245 + ], + "type": "text", + "content": "[260] W. Jin, N. Wang, T. Tao, B. Shi, H. Bi, B. Zhao, H. Wu, H. Duan, G. Yang, A veracity dissemination consistency-based few-shot fake news detection framework by synergizing adversarial and contrastive self-supervised learning, Scientific Reports 14 (1) (2024) 19470. doi:10.1038/s41598-024-70039-9. URL https://doi.org/10.1038/s41598-024-70039-9" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 308, + 246, + 530, + 293 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 246, + 530, + 293 + ], + "spans": [ + { + "bbox": [ + 308, + 246, + 530, + 293 + ], + "type": "text", + "content": "[261] J. Foerster, N. Nardelli, G. Farquhar, T. Afouras, P. H. S. Torr, P. Kohli, S. Whiteson, Stabilising experience replay for deep multi-agent reinforcement learning, in: Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, JMLR.org, 2017, p. 1146-1155." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 308, + 293, + 530, + 359 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 293, + 530, + 359 + ], + "spans": [ + { + "bbox": [ + 308, + 293, + 530, + 359 + ], + "type": "text", + "content": "[262] T. Guo, X. Chen, Y. Wang, R. Chang, S. Pei, N. V. Chawla, O. Wiest, X. 
Zhang, Large language model based multi-agents: A survey of progress and challenges, in: Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI '24), IJCAI, 2024, pp. 8048-8057. doi: 10.24963/ijcai.2024/890. URL https://doi.org/10.24963/ijcai.2024/890" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 308, + 359, + 530, + 424 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 359, + 530, + 424 + ], + "spans": [ + { + "bbox": [ + 308, + 359, + 530, + 424 + ], + "type": "text", + "content": "[263] L. Huang, W. Yu, W. Ma, W. Zhong, Z. Feng, H. Wang, Q. Chen, W. Peng, X. Feng, B. Qin, T. Liu, A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions, ACM Transactions on Information Systems Accepted on 24 September 2024 (2024). doi: 10.1145/3703155. URL https://doi.org/10.1145/3703155" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 308, + 426, + 530, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 426, + 530, + 453 + ], + "spans": [ + { + "bbox": [ + 308, + 426, + 530, + 453 + ], + "type": "text", + "content": "[264] E. Alonso, D. Kudenko, D. Kazakov (Eds.), Adaptive agents and multi-agent systems: adaptation and multi-agent learning, Springer-Verlag, Berlin, Heidelberg, 2003." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 308, + 454, + 530, + 500 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 454, + 530, + 500 + ], + "spans": [ + { + "bbox": [ + 308, + 454, + 530, + 500 + ], + "type": "text", + "content": "[265] K. Tuyls, P. J. Hoen, K. Verbeeck, S. Sen (Eds.), Learning and Adaption in Multi-Agent Systems, Vol. 3898 of Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, 2006. URL https://link.springer.com/book/10.1007/978-3-540-33316-2" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 308, + 502, + 530, + 539 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 502, + 530, + 539 + ], + "spans": [ + { + "bbox": [ + 308, + 502, + 530, + 539 + ], + "type": "text", + "content": "[266] S. Chen, Y. Liu, W. Han, W. Zhang, T. Liu, A survey on llmbased multi-agent system: Recent advances and new frontiers in application (2025). arXiv:2412.17481. URL https://arxiv.org/abs/2412.17481" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 308, + 539, + 530, + 585 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 539, + 530, + 585 + ], + "spans": [ + { + "bbox": [ + 308, + 539, + 530, + 585 + ], + "type": "text", + "content": "[267] V. Dibia, Multi-agent llm applications — a review of current research, tools, and challenges, accessed from Victor Dibia's newsletter (2023). URL https://newsletter.victordibia.com/p/multi-agent-llm-applications-a-review" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 308, + 586, + 530, + 680 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 586, + 530, + 680 + ], + "spans": [ + { + "bbox": [ + 308, + 586, + 530, + 680 + ], + "type": "text", + "content": "[268] A. Chan, R. Salganik, A. Markelius, C. Pang, N. Rajkumar, D. Krasheninnikov, L. Langosco, Z. He, Y. Duan, M. Carroll, M. Lin, A. Mayhew, K. Collins, M. Molamohammadi, J. Burden, W. Zhao, S. Rismani, K. Voudouris, U. Bhatt, A. Weller, D. Krueger, T. 
Maharaj, Harms from increasingly agentic algorithmic systems, in: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, FAccT '23, Association for Computing Machinery, New York, NY, USA, 2023, p. 651-666. doi:10.1145/3593013.3594033. URL https://doi.org/10.1145/3593013.3594033" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 308, + 681, + 530, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 681, + 530, + 728 + ], + "spans": [ + { + "bbox": [ + 308, + 681, + 530, + 728 + ], + "type": "text", + "content": "[269] A. Amirkhani, A. H. Barshooi, Consensus in multi-agent systems: A review, Artificial Intelligence Review 55 (5) (2022) 3897-3935, accessed: 2022-06-01. doi:10.1007/s10462-021-10097-x. URL https://doi.org/10.1007/s10462-021-10097-x" + } + ] + } + ], + "index": 27 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "spans": [ + { + "bbox": [ + 291, + 740, + 302, + 748 + ], + "type": "text", + "content": "51" + } + ] + } + ], + "index": 29 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 51 + }, + { + "para_blocks": [ + { + "bbox": [ + 63, + 112, + 286, + 728 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 63, + 112, + 286, + 188 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 112, + 286, + 188 + ], + "spans": [ + { + "bbox": [ + 63, + 112, + 286, + 188 + ], + "type": "text", + "content": "[270] G. Beydoun, G. Low, H. Mouratidis, B. Henderson-Sellers, A security-aware metamodel for multi-agent systems (mas), Information and Software Technology 51 (5) (2009) 832-845, sPECIAL ISSUE: Model-Driven Development for Secure Information Systems. doi:https://doi.org/10.1016/j.infsof.2008.05.003. URL https://www.sciencedirect.com/science/article/pii/S0950584908000724" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 63, + 189, + 286, + 235 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 189, + 286, + 235 + ], + "spans": [ + { + "bbox": [ + 63, + 189, + 286, + 235 + ], + "type": "text", + "content": "[271] X. Xie, C. Zhang, Y. Zhu, Y. N. Wu, S.-C. Zhu, Congestion-aware multi-agent trajectory prediction for collision avoidance, in: 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 13693-13700. doi:10.1109/ICRA48506.2021.9560994." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 63, + 236, + 286, + 292 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 236, + 286, + 292 + ], + "spans": [ + { + "bbox": [ + 63, + 236, + 286, + 292 + ], + "type": "text", + "content": "[272] A. Stooke, K. Lee, P. Abbeel, M. Laskin, Decoupling representation learning from reinforcement learning, in: Proceedings of the 38th International Conference on Machine Learning, PMLR, 2021, pp. 9870-9879. URL https://proceedings.mlr.press/v139/stooke21a.html" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 63, + 293, + 286, + 359 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 293, + 286, + 359 + ], + "spans": [ + { + "bbox": [ + 63, + 293, + 286, + 359 + ], + "type": "text", + "content": "[273] J. Luketina, N. Nardelli, G. Farquhar, J. Foerster, J. Andreas, E. Grefenstette, S. Whiteson, T. 
Rocktäschel, A survey of reinforcement learning informed by natural language, in: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, 2019, pp. 6309-6317. doi:10.24963/ijcai.2019/880. URL https://doi.org/10.24963/ijcai.2019/880" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 63, + 359, + 286, + 406 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 359, + 286, + 406 + ], + "spans": [ + { + "bbox": [ + 63, + 359, + 286, + 406 + ], + "type": "text", + "content": "[274] R. P. Poudel, H. Pandya, S. Liwicki, R. Cipolla, Recore: Regularized contrastive representation learning of world model, in: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 22904-22913. doi: 10.1109/CVPR52733.2024.02161." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 63, + 407, + 286, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 407, + 286, + 453 + ], + "spans": [ + { + "bbox": [ + 63, + 407, + 286, + 453 + ], + "type": "text", + "content": "[275] W. Choi, W. K. Kim, S. Kim, H. Woo, Efficient policy adaptation with contrastive prompt ensemble for embodied agents, in: Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Curran Associates Inc., Red Hook, NY, USA, 2024." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 63, + 454, + 286, + 539 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 454, + 286, + 539 + ], + "spans": [ + { + "bbox": [ + 63, + 454, + 286, + 539 + ], + "type": "text", + "content": "[276] F. Paischer, T. Adler, V. Patil, A. Bitto-Nemling, M. Holzleitner, S. Lehner, H. Eghbal-Zadeh, S. Hochreiter, History compression via language models in reinforcement learning, in: K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, S. Sabato (Eds.), Proceedings of the 39th International Conference on Machine Learning, Vol. 162 of Proceedings of Machine Learning Research, PMLR, 2022, pp. 17156-17185. URL https://proceedings.mlr.press/v162/paischer22a.html" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 63, + 539, + 286, + 587 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 539, + 286, + 587 + ], + "spans": [ + { + "bbox": [ + 63, + 539, + 286, + 587 + ], + "type": "text", + "content": "[277] F. Paischer, T. Adler, M. Hofmacher, S. Hochreiter, Semantic HELM: A human-readable memory for reinforcement learning, in: Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=ebMPmx5mr7" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 63, + 587, + 286, + 670 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 587, + 286, + 670 + ], + "spans": [ + { + "bbox": [ + 63, + 587, + 286, + 670 + ], + "type": "text", + "content": "[278] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, I. Sutskever, Learning transferable visual models from natural language supervision, in: M. Meila, T. Zhang (Eds.), Proceedings of the 38th International Conference on Machine Learning, Vol. 139 of Proceedings of Machine Learning Research, PMLR, 2021, pp. 8748-8763. 
URL https://proceedings.mlr.press/v139/radford21a.html" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 63, + 671, + 286, + 709 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 671, + 286, + 709 + ], + "spans": [ + { + "bbox": [ + 63, + 671, + 286, + 709 + ], + "type": "text", + "content": "[279] A. van den Oord, Y. Li, O. Vinyals, Representation learning with contrastive predictive coding (2019). arXiv:1807.03748. URL https://arxiv.org/abs/1807.03748" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 63, + 709, + 286, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 709, + 286, + 728 + ], + "spans": [ + { + "bbox": [ + 63, + 709, + 286, + 728 + ], + "type": "text", + "content": "[280] M. Laskin, A. Srinivas, P. Abbeel, Curl: contrastive unsupervised representations for reinforcement learning, in: Proceed-" + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 308, + 113, + 542, + 728 + ], + "type": "list", + "angle": 0, + "index": 24, + "blocks": [ + { + "bbox": [ + 329, + 113, + 530, + 132 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 113, + 530, + 132 + ], + "spans": [ + { + "bbox": [ + 329, + 113, + 530, + 132 + ], + "type": "text", + "content": "ings of the 37th International Conference on Machine Learning, ICML'20, JMLR.org, 2020." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 308, + 132, + 530, + 188 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 132, + 530, + 188 + ], + "spans": [ + { + "bbox": [ + 308, + 132, + 530, + 188 + ], + "type": "text", + "content": "[281] M. Schwarzer, A. Anand, R. Goel, R. D. Hjelm, A. Courville, P. Bachman, Data-efficient reinforcement learning with self-predictive representations, in: International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=uCQfPZwRaUu" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 308, + 189, + 530, + 227 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 189, + 530, + 227 + ], + "spans": [ + { + "bbox": [ + 308, + 189, + 530, + 227 + ], + "type": "text", + "content": "[282] M. Kwon, S. M. Xie, K. Bullard, D. Sadigh, Reward design with language models, in: The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=10uNUgI5K1" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 308, + 227, + 530, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 227, + 530, + 274 + ], + "spans": [ + { + "bbox": [ + 308, + 227, + 530, + 274 + ], + "type": "text", + "content": "[283] J. Song, Z. Zhou, J. Liu, C. Fang, Z. Shu, L. Ma, Self-refined large language model as automated reward function designer for deep reinforcement learning in robotics (2023). arXiv: 2309.06687. URL https://arxiv.org/abs/2309.06687" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 308, + 274, + 542, + 368 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 274, + 542, + 368 + ], + "spans": [ + { + "bbox": [ + 308, + 274, + 542, + 368 + ], + "type": "text", + "content": "[284] Y. Wu, Y. Fan, P. P. Liang, A. Azaria, Y. Li, T. M. Mitchell, Read and reap the rewards: Learning to play atari with the help of instruction manuals, in: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine (Eds.), Advances in Neural Information Processing Systems, Vol. 36, Curran Associates, Inc., 2023, pp. 1009-1023. 
URL https://proceedings.neurips.cc/paper_files/paper/2023/file/034d7bfeace2a9a258648b16fc626298-Paper-Conference.pdf" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 308, + 369, + 530, + 416 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 369, + 530, + 416 + ], + "spans": [ + { + "bbox": [ + 308, + 369, + 530, + 416 + ], + "type": "text", + "content": "[285] T. Carta, C. Romac, T. Wolf, S. Lamprier, O. Sigaud, P.-Y. Oudeyer, Grounding large language models in interactive environments with online reinforcement learning, in: Proceedings of the 40th International Conference on Machine Learning, ICML'23, JMLR.org, 2023." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 308, + 416, + 530, + 472 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 416, + 530, + 472 + ], + "spans": [ + { + "bbox": [ + 308, + 416, + 530, + 472 + ], + "type": "text", + "content": "[286] K. Chu, X. Zhao, C. Weber, M. Li, S. Wermter, Accelerating reinforcement learning of robotic manipulations via feedback from large language models, in: Bridging the Gap between Cognitive Science and Robot Learning in the Real World: Progresses and New Directions, 2024. URL https://openreview.net/forum?id=MBeeqmD8Zk" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 308, + 472, + 530, + 519 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 472, + 530, + 519 + ], + "spans": [ + { + "bbox": [ + 308, + 472, + 530, + 519 + ], + "type": "text", + "content": "[287] C. Kim, Y. Seo, H. Liu, L. Lee, J. Shin, H. Lee, K. Lee, Guide your agent with adaptive multimodal rewards, in: Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Curran Associates Inc., Red Hook, NY, USA, 2024." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 308, + 520, + 530, + 576 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 520, + 530, + 576 + ], + "spans": [ + { + "bbox": [ + 308, + 520, + 530, + 576 + ], + "type": "text", + "content": "[288] W. Yu, N. Gileadi, C. Fu, S. Kirmani, K.-H. Lee, M. G. Arenas, H.-T. L. Chiang, T. Erez, L. Hasenclever, J. Humplik, B. Ichter, T. Xiao, P. Xu, A. Zeng, T. Zhang, N. Heess, D. Sadigh, J. Tan, Y. Tassa, F. Xia, Language to rewards for robotic skill synthesis (2023). arXiv:2306.08647. URL https://arxiv.org/abs/2306.08647" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 308, + 576, + 530, + 615 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 576, + 530, + 615 + ], + "spans": [ + { + "bbox": [ + 308, + 576, + 530, + 615 + ], + "type": "text", + "content": "[289] A. Adeniji, A. Xie, C. Sferrazza, Y. Seo, S. James, P. Abbeel, Language reward modulation for pretraining reinforcement learning (2024). URL https://openreview.net/forum?id " + }, + { + "bbox": [ + 308, + 576, + 530, + 615 + ], + "type": "inline_equation", + "content": "\\equiv" + }, + { + "bbox": [ + 308, + 576, + 530, + 615 + ], + "type": "text", + "content": " SWRFC2Eup0" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 308, + 615, + 530, + 671 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 615, + 530, + 671 + ], + "spans": [ + { + "bbox": [ + 308, + 615, + 530, + 671 + ], + "type": "text", + "content": "[290] A. Madaan, N. Tandon, P. Gupta, S. Hallinan, L. Gao, S. Wegreffe, U. Alon, N. Dziri, S. Prabhumoye, Y. Yang, S. Gupta, B. P. Majumder, K. Hermann, S. Welleck, A. Yazdanbakhsh, P. 
Clark, Self-refine: Iterative refinement with self-feedback (2023). arXiv:2303.17651. URL https://arxiv.org/abs/2303.17651" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 308, + 671, + 530, + 728 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 671, + 530, + 728 + ], + "spans": [ + { + "bbox": [ + 308, + 671, + 530, + 728 + ], + "type": "text", + "content": "[291] Y. J. Ma, W. Liang, G. Wang, D.-A. Huang, O. Bastani, D. Jayaraman, Y. Zhu, L. Fan, A. Anandkumar, Eureka: Human-level reward design via coding large language models, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=IEduRU055F" + } + ] + } + ], + "index": 23 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 740, + 303, + 748 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 740, + 303, + 748 + ], + "spans": [ + { + "bbox": [ + 291, + 740, + 303, + 748 + ], + "type": "text", + "content": "52" + } + ] + } + ], + "index": 25 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 52 + }, + { + "para_blocks": [ + { + "bbox": [ + 63, + 112, + 286, + 727 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 63, + 112, + 286, + 159 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 112, + 286, + 159 + ], + "spans": [ + { + "bbox": [ + 63, + 112, + 286, + 159 + ], + "type": "text", + "content": "[292] T. Xie, S. Zhao, C. H. Wu, Y. Liu, Q. Luo, V. Zhong, Y. Yang, T. Yu, Text2reward: Reward shaping with language models for reinforcement learning, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=tUM39YTRxH" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 63, + 160, + 286, + 206 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 160, + 286, + 206 + ], + "spans": [ + { + "bbox": [ + 63, + 160, + 286, + 206 + ], + "type": "text", + "content": "[293] M. Janner, Q. Li, S. Levine, Offline reinforcement learning as one big sequence modeling problem, in: Proceedings of the 35th International Conference on Neural Information Processing Systems, NIPS '21, Curran Associates Inc., Red Hook, NY, USA, 2024." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 63, + 207, + 286, + 255 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 207, + 286, + 255 + ], + "spans": [ + { + "bbox": [ + 63, + 207, + 286, + 255 + ], + "type": "text", + "content": "[294] R. Shi, Y. Liu, Y. Ze, S. S. Du, H. Xu, Unleashing the power of pre-trained language models for offline reinforcement learning, in: The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=AY6aM13gGF" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 63, + 256, + 286, + 312 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 256, + 286, + 312 + ], + "spans": [ + { + "bbox": [ + 63, + 256, + 286, + 312 + ], + "type": "text", + "content": "[295] S. Li, X. Puig, C. Paxton, Y. Du, C. Wang, L. Fan, T. Chen, D.-A. Huang, E. Akyurek, A. Anandkumar, J. Andreas, I. Mordatch, A. Torralba, Y. Zhu, Pre-trained language models for interactive decision-making, in: Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Curran Associates Inc., Red Hook, NY, USA, 2024." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 63, + 312, + 286, + 359 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 312, + 286, + 359 + ], + "spans": [ + { + "bbox": [ + 63, + 312, + 286, + 359 + ], + "type": "text", + "content": "[296] L. Mezghani, P. Bojanowski, K. Alahari, S. Sukhbaatar, Think before you act: Unified policy for interleaving language reasoning with actions, in: Workshop on Reincarnating Reinforcement Learning at ICLR 2023, 2023. URL https://openreview.net/forum?id=HQmhQIi6mN" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 63, + 359, + 286, + 396 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 359, + 286, + 396 + ], + "spans": [ + { + "bbox": [ + 63, + 359, + 286, + 396 + ], + "type": "text", + "content": "[297] J. Grigsby, L. Fan, Y. Zhu, Amago: Scalable in-context reinforcement learning for adaptive agents (2024). arXiv:2310.09971. URL https://arxiv.org/abs/2310.09971" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 63, + 396, + 286, + 462 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 396, + 286, + 462 + ], + "spans": [ + { + "bbox": [ + 63, + 396, + 286, + 462 + ], + "type": "text", + "content": "[298] B. Zitkovich, T. Yu, S. Xu, P. Xu, T. Xiao, F. Xia, J. Wu, P. Wohlhart, S. Welker, A. Wahid, Q. Vuong, Vanhoucke, Rt-2: Vision-language-action models transfer web knowledge to robotic control, in: J. Tan, M. Toussaint, K. Darvish (Eds.), Proceedings of The 7th Conference on Robot Learning, Vol. 229 of Proceedings of Machine Learning Research, PMLR, 2023, pp. 2165–2183." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 63, + 463, + 286, + 548 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 463, + 286, + 548 + ], + "spans": [ + { + "bbox": [ + 63, + 463, + 286, + 548 + ], + "type": "text", + "content": "[299] S. Yao, R. Rao, M. Hausknecht, K. Narasimhan, Keep CALM and explore: Language models for action generation in text-based games, in: B. Webber, T. Cohn, Y. He, Y. Liu (Eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Online, 2020, pp. 8736-8754. doi:10.18653/v1/2020.emnlp-main.704. URL https://aclanthology.org/2020.emnlp-main.704/" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 63, + 549, + 286, + 585 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 549, + 286, + 585 + ], + "spans": [ + { + "bbox": [ + 63, + 549, + 286, + 585 + ], + "type": "text", + "content": "[300] H. Hu, D. Sadigh, Language instructed reinforcement learning for human-ai coordination, in: Proceedings of the 40th International Conference on Machine Learning, ICML'23, JMLR.org, 2023." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 63, + 586, + 286, + 642 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 586, + 286, + 642 + ], + "spans": [ + { + "bbox": [ + 63, + 586, + 286, + 642 + ], + "type": "text", + "content": "[301] Z. Zhou, B. Hu, C. Zhao, P. Zhang, B. Liu, Large language model as a policy teacher for training reinforcement learning agents, in: Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI '24, 2025. doi: 10.24963/ijcai.2024/627. 
URL https://doi.org/10.24963/ijcai.2024/627" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 63, + 643, + 286, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 643, + 286, + 689 + ], + "spans": [ + { + "bbox": [ + 63, + 643, + 286, + 689 + ], + "type": "text", + "content": "[302] C. Chen, J. Yoon, Y.-F. Wu, S. Ahn, Transdreamer: Reinforcement learning with transformer world models, in: Deep RL Workshop NeurIPS 2021, 2021. URL https://openreview.net/forum?id=sVrzVAL90sA" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 63, + 690, + 286, + 727 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 690, + 286, + 727 + ], + "spans": [ + { + "bbox": [ + 63, + 690, + 286, + 727 + ], + "type": "text", + "content": "[303] D. Das, S. Chernova, B. Kim, State2explanation: Concept-based explanations to benefit agent learning and user understanding, in: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine (Eds.), Advances in Neural" + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 308, + 113, + 543, + 407 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 329, + 113, + 543, + 170 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 329, + 113, + 543, + 170 + ], + "spans": [ + { + "bbox": [ + 329, + 113, + 543, + 170 + ], + "type": "text", + "content": "Information Processing Systems, Vol. 36, Curran Associates, Inc., 2023, pp. 67156-67182. \nURL https://proceedings.neurips.cc/paper_files/paper/2023/file/d4387c37b3b06e55f86eccdb8cd1f829-Paper-Conference.pdf" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 308, + 170, + 531, + 207 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 170, + 531, + 207 + ], + "spans": [ + { + "bbox": [ + 308, + 170, + 531, + 207 + ], + "type": "text", + "content": "[304] J. Lin, Y. Du, O. Watkins, D. Hafner, P. Abbeel, D. Klein, A. Dragan, Learning to model the world with language (2024). arXiv:2308.01399. URL https://arxiv.org/abs/2308.01399" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 308, + 208, + 531, + 255 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 208, + 531, + 255 + ], + "spans": [ + { + "bbox": [ + 308, + 208, + 531, + 255 + ], + "type": "text", + "content": "[305] J. Robine, M. Höftmann, T. Uelwer, S. Harmeling, Transformer-based world models are happy with 100k interactions, in: ICLR, 2023. URL https://openreview.net/forum?id=TdBaDGCpjly" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 308, + 256, + 531, + 292 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 256, + 531, + 292 + ], + "spans": [ + { + "bbox": [ + 308, + 256, + 531, + 292 + ], + "type": "text", + "content": "[306] R. P. K. Poudel, H. Pandya, C. Zhang, R. Cipolla, Langwm: Language grounded world model (2023). arXiv:2311.17593. URL https://arxiv.org/abs/2311.17593" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 308, + 293, + 531, + 322 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 293, + 531, + 322 + ], + "spans": [ + { + "bbox": [ + 308, + 293, + 531, + 322 + ], + "type": "text", + "content": "[307] J. Lin, Y. Du, O. Watkins, D. Hafner, P. Abbeel, D. Klein, A. Dragan, Learning to model the world with language (2024). 
URL https://openreview.net/forum?id=eWL0oaShEH" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 308, + 322, + 531, + 407 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 322, + 531, + 407 + ], + "spans": [ + { + "bbox": [ + 308, + 322, + 531, + 407 + ], + "type": "text", + "content": "[308] Z. Yang, K. Ren, X. Luo, M. Liu, W. Liu, J. Bian, W. Zhang, D. Li, Towards applicable reinforcement learning: Improving the generalization and sample efficiency with policy ensemble, in: L. D. Raedt (Ed.), Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, International Joint Conferences on Artificial Intelligence Organization, 2022, pp. 3659-3665, main Track. doi: 10.24963/ijcai.2022/508. URL https://doi.org/10.24963/ijcai.2022/508" + } + ] + } + ], + "index": 18 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 291, + 740, + 302, + 749 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 291, + 740, + 302, + 749 + ], + "spans": [ + { + "bbox": [ + 291, + 740, + 302, + 749 + ], + "type": "text", + "content": "53" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 595, + 841 + ], + "page_idx": 53 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13423/ee02b8fd-a1b4-4f31-b152-93f5d36bdbe3_content_list.json b/data/2025/2503_13xxx/2503.13423/ee02b8fd-a1b4-4f31-b152-93f5d36bdbe3_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..fc9de0eadce870a34490853b33173c229fab5103 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13423/ee02b8fd-a1b4-4f31-b152-93f5d36bdbe3_content_list.json @@ -0,0 +1,3259 @@ +[ + { + "type": "text", + "text": "SuperBPE: Space Travel for Language Models", + "text_level": 1, + "bbox": [ + 171, + 98, + 666, + 122 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "*Alisa Liu♥♠ *Jonathan Hayase♥", + "bbox": [ + 181, + 143, + 442, + 162 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Valentin Hofmann $^{\\diamond \\diamond}$ Sewoong Oh $^{\\diamond}$ Noah A. Smith $^{\\diamond \\diamond}$ Yejin Choi $^{\\spadesuit}$", + "bbox": [ + 181, + 162, + 720, + 181 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$\\diamond$ University of Washington $\\spadesuit$ NVIDIA $\\diamond$ Allen Institute for AI", + "bbox": [ + 183, + 181, + 647, + 199 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 457, + 233, + 540, + 250 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The assumption across nearly all language model (LM) tokenization schemes is that tokens should be subwords, i.e., contained within word boundaries. Despite providing a seemingly reasonable inductive bias, we question whether this common practice limits the potential of modern LMs. Whitespace is not a reliable delimiter of meaning, as evidenced by multi-word expressions (e.g., by the way), cross-lingual variation in the number of words needed to express a concept (e.g., spacesuit helmet in German is raumanzugehm), and languages that do not use whitespace at all (e.g., Chinese). To explore the potential of tokenization beyond subwords, we introduce a \"superword\" tokenizer, SuperBPE, that incorporates a simple pretokensization curriculum into the byte-pair encoding (BPE) algorithm to first learn subwords and then superwords that bridge whitespace. 
This modification dramatically improves encoding efficiency: when limiting vocabulary size to 200k, SuperBPE encodes a fixed piece of text with up to $33\\%$ fewer tokens on average than BPE. In experiments, we pretrain 8B transformer LMs from scratch while fixing model size, vocabulary size, and train compute, varying only the algorithm for learning the vocabulary. Our model trained with SuperBPE achieves an average $+4.0\\%$ absolute improvement over the BPE baseline across 30 downstream tasks (including $+8.2\\%$ on MMLU), while simultaneously requiring $27\\%$ less compute at inference time. In analysis, we find that SuperBPE produces segmentations of text that are more uniform in per-token difficulty, perhaps because SuperBPE tokens often capture common multi-word expressions that function semantically as a single unit. In sum, SuperBPE offers a straightforward and local modification to tokenization that improves both encoding efficiency and downstream performance, yielding better LMs overall. $^1$", + "bbox": [ + 228, + 263, + 769, + 628 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 171, + 652, + 320, + 669 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Tokenizer are the lens through which language models (LMs) view data: they segment a stream of bytes into a sequence of tokens in the LM vocabulary. In the era of transformer LMs, tokenization is done at the level of subwords, meaning that tokens consist of parts of words (including complete words), but they cannot bridge whitespace. Intuitively, subword tokens capture meaningful and composable semantic units.", + "bbox": [ + 169, + 683, + 826, + 756 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Although seemingly reasonable, is this common practice a good one? Whitespace is an unreliable delimiter of meaning (Martin, 2017); many groups of words (e.g., a lot of or search engine) function semantically as single units, and English speakers store thousands of such multi-word expressions in their mental lexicon (Church, 2011; Contreras Kallens & Christiansen, 2022). Cross-lingually, there is considerable variation in whether a given meaning is conveyed by a single word or several words. 
At the extreme, languages such as Chinese and Japanese do not use whitespace at all, and tokens in these languages can span multiple words or even entire sentences (e.g., the tokenizers of GPT-4O [OpenAI, 2024] or DEEPSEEKV3 [DeepSeek-AI, 2025]), but this has seemingly not hindered LMs from", + "bbox": [ + 169, + 760, + 826, + 887 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 31, + 517, + 47 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "*Equal contribution.", + "bbox": [ + 192, + 895, + 328, + 909 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "$^{1}$ Code and artifacts are available at https://superbpe.github.io/.", + "bbox": [ + 192, + 909, + 627, + 922 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2503.13423v3 [cs.CL] 26 Aug 2025", + "bbox": [ + 22, + 275, + 60, + 724 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "1", + "bbox": [ + 493, + 948, + 503, + 959 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/c3aefc783508cbdf35c8b351a764278b60a5a93edab00efc917dd0942af8d010.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 174, + 94, + 727, + 146 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/9dd8cd205ee4d7a5d68b14276daffb2574f1374c4f02bcd5be1bb88161cc4bab.jpg", + "image_caption": [ + "Figure 1: SuperBPE tokenizers encode text much more efficiently than BPE, and this advantage grows with larger vocabulary size. Encoding efficiency ( $y$ -axis) is measured in bytes-per-token, the number of bytes encoded per token over a large corpus. In the 40 bytes of text shown on the top of this figure, SuperBPE uses 7 tokens while BPE uses 13, so the methods' efficiencies are $40/7 = 5.7$ and $40/13 = 3.1$ bytes-per-token, respectively. In the graph, the encoding efficiency of BPE plateaus early because it exhausts the valuable whitespace-delimited words in the training data. In fact, it is bounded above by the gray dotted line, which shows the maximum achievable encoding efficiency with BPE if every whitespace-delimited word were in the vocabulary. In contrast, SuperBPE has dramatically better encoding efficiency that continues to improve with increased vocabulary size, as it can continue to add common word sequences to treat as tokens in the vocabulary. The different gradient lines show different transition points from learning subword to superword tokens, which always yields an immediate improvement. SuperBPE also encodes text more efficiently than a naive variant of BPE that does not use whitespace pretokensization at all." + ], + "image_footnote": [], + "bbox": [ + 300, + 157, + 694, + 393 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "performing well on these languages. In fact, including multi-word tokens promises to be beneficial in many ways: it may shorten token sequences, lowering the costs of LM training and inference, and offer representational advantages by segmenting text into more semantically cohesive units (Salehi et al., 2015; Otani et al., 2020; Hofmann et al., 2021).", + "bbox": [ + 169, + 617, + 823, + 674 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this work, we introduce a superword tokenization algorithm that produces a vocabulary of both subword and \"superword\" tokens, which we use to describe tokens bridging more than one word. 
Our method, SuperBPE, introduces a pretokenization curriculum to the popular byte-pair encoding (BPE) algorithm (Sennrich et al., 2016): whitespace pretokenization is initially used to enforce learning of subword tokens only (as done in conventional BPE), but it is disabled in a second stage, where the tokenizer transitions to learning superword tokens. Notably, SuperBPE tokenizers scale much better with vocabulary size: BPE quickly hits a point of diminishing returns and begins adding increasingly rare subwords to the vocabulary, while SuperBPE continues to discover common word sequences to treat as single tokens and improve encoding efficiency (see Figure 1).", + "bbox": [ + 169, + 679, + 826, + 821 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In our experiments, we pretrain English LMs at 8B scale from scratch. When fixing the model size, vocabulary size, and training compute—varying only the algorithm for learning the vocabulary—we find that models trained with SuperBPE tokenizers consistently and significantly improve over counterparts trained with a BPE tokenizer while also being $27\\%$ to $33\\%$ more efficient at inference time. Our best SuperBPE model achieves an average improvement of $+4.0\\%$ over 30 downstream tasks, including $+8.2\\%$ on MMLU, and wins on 25 of the 30 individual tasks (Table 1).", + "bbox": [ + 169, + 825, + 826, + 925 + ], + "page_idx": 1 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 31, + 517, + 47 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 491, + 946, + 504, + 959 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In analysis, we find that SuperBPE tokenizers produce segmentations that are more evenly distributed in difficulty. This makes sense from a qualitative linguistic analysis: SuperBPE tokens often correspond to multi-word expressions in English, i.e., word sequences that function as a single semantic unit (see Table 3 for examples). For instance, many prepositional phrases (e.g., by accident or in the long run) are essentially fixed and require memorization. The individual words in these expressions have very little possible variation in context, leading to very low-loss predictions under BPE models.", + "bbox": [ + 169, + 103, + 826, + 202 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "SuperBPE is a straightforward and local modification to tokenization, requiring no changes to the model architecture, training framework, or decoding strategy. Under the same training setup, SuperBPE provides a remarkable boost in both encoding efficiency and performance, yielding better language models overall.", + "bbox": [ + 169, + 207, + 823, + 263 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2 SuperBPE", + "text_level": 1, + "bbox": [ + 171, + 284, + 295, + 301 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We first explain the standard byte-pair encoding (BPE; Sennrich et al., 2016) tokenization algorithm (§2.1), and then introduce SuperBPE, which extends BPE to superwords (§2.2).", + "bbox": [ + 169, + 315, + 823, + 347 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.1 Background on BPE", + "text_level": 1, + "bbox": [ + 171, + 361, + 362, + 376 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "BPE is a tokenization algorithm that greedily learns a subword vocabulary given training data. 
The algorithm takes a sample of text and a target vocabulary size $T$ as input.", + "bbox": [ + 169, + 387, + 823, + 420 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The first step of BPE is pretokenization, which splits the text into chunks that limit the extent of tokenization; merges cannot bridge these chunks, so the final learned tokens are parts of these chunks. Canonically, pretokenization in BPE consists of splitting on whitespace so that common word sequences do not become a single token. This made sense given the historical context of Sennrich et al. (2016), which aimed to improve word-level tokenization by segmenting words into morphologically meaningful subwords.", + "bbox": [ + 169, + 424, + 825, + 508 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "After pretokenization, the iterative learning algorithm begins. Training text is first split into bytes; the starting vocabulary is the set of all bytes. Then, the frequencies of all pairs of neighboring tokens are recorded, and the most frequent pair is merged into a single, new token at every position in the text where it occurs. The newly merged token is added to the vocabulary. For instance, if the merge is (t, he), then all instances of the token sequence [t, he] will be replaced with the, which is added to the vocabulary. The token pair frequencies are then updated, and the next most frequent pair is again merged into a new token. This continues until the vocabulary reaches the target size $T$ .", + "bbox": [ + 169, + 515, + 826, + 628 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.2 SuperBPE tokenization", + "text_level": 1, + "bbox": [ + 171, + 643, + 385, + 659 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "SuperBPE introduces a simple intervention in the pretokenization step, separating tokenizer training into two discrete phases, wherein the tokenizer (1) first learns subwords (by using pretokenization to prevent merges across whitespace) and then (2) learns superwords (by lifting this restriction). Stage 1 is equivalent to regular BPE training and continues up to a certain vocabulary size $t$ , which we call the transition point $(t < T)$ . In stage 2, tokenizer training resumes from the vocabulary learned thus far, but this time whitespace pretokenization is skipped. As a result, token pairs that bridge whitespace are considered, enabling superwords to be added to the vocabulary. Intuitively, we intend for our tokenizer to first learn base units of semantic meaning, then combine these units into common sequences for a much more efficient vocabulary. Note that $t = T$ corresponds to BPE, and $t = 0$ corresponds to a naive revision of BPE that foregoes whitespace pretokenization at any point in training.", + "bbox": [ + 169, + 670, + 826, + 839 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We note that training tokenizers requires more system memory and CPU time when done without whitespace pretokenization (as in stage 2 of SuperBPE). 
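To make the two-stage pretokenization curriculum described above concrete, here is a minimal toy sketch in Python. It is illustrative only: the helper names (train_superbpe, get_pair_counts, apply_merge), the character-level starting vocabulary, and the greedy merge loop are simplifying assumptions, not the tokenizer-training code used for the paper's experiments.

```python
from collections import Counter

def get_pair_counts(chunks):
    """Count frequencies of neighboring token pairs across all chunks."""
    counts = Counter()
    for chunk in chunks:
        for a, b in zip(chunk, chunk[1:]):
            counts[(a, b)] += 1
    return counts

def apply_merge(chunks, pair, new_token):
    """Replace every occurrence of `pair` with `new_token` inside each chunk."""
    merged = []
    for chunk in chunks:
        out, i = [], 0
        while i < len(chunk):
            if i + 1 < len(chunk) and (chunk[i], chunk[i + 1]) == pair:
                out.append(new_token)
                i += 2
            else:
                out.append(chunk[i])
                i += 1
        merged.append(out)
    return merged

def train_superbpe(text, vocab_size, transition_point):
    """Toy SuperBPE: stage 1 keeps whitespace pretokenization, stage 2 lifts it."""
    # Stage 1: one chunk per whitespace-delimited word, with the preceding space
    # attached, so merges cannot bridge words (this is ordinary BPE).
    chunks = [list(" " + word) for word in text.split()]
    vocab = sorted({ch for chunk in chunks for ch in chunk})
    merges, in_stage2 = [], False
    while len(vocab) < vocab_size:
        if not in_stage2 and len(vocab) >= transition_point:
            # Stage 2: concatenate the chunks so that token pairs bridging
            # whitespace become eligible, allowing superword tokens to be learned.
            chunks = [[tok for chunk in chunks for tok in chunk]]
            in_stage2 = True
        counts = get_pair_counts(chunks)
        if not counts:
            break
        pair = max(counts, key=counts.get)
        new_token = pair[0] + pair[1]
        chunks = apply_merge(chunks, pair, new_token)
        merges.append(pair)
        vocab.append(new_token)
    return vocab, merges

# Toy usage: with a small transition point, early merges stay inside words and
# later merges may bridge spaces, yielding superword tokens such as " the cat".
vocab, merges = train_superbpe("the cat sat on the mat " * 200, vocab_size=60, transition_point=20)
```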
This is because the training", + "bbox": [ + 169, + 844, + 823, + 875 + ], + "page_idx": 2 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 2 + }, + { + "type": "page_footnote", + "text": "2The algorithm originated in 1994 in the field of data compression (Gage, 1994).", + "bbox": [ + 189, + 883, + 712, + 897 + ], + "page_idx": 2 + }, + { + "type": "page_footnote", + "text": "Note that although the creation of a tokenizer is referred to as \"learning,\" there are no parameters involved in the case of BPE, and the algorithm is completely deterministic given the data.", + "bbox": [ + 169, + 896, + 823, + 925 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 493, + 948, + 504, + 959 + ], + "page_idx": 2 + }, + { + "type": "table", + "img_path": "images/5269389453ca1c2c61479f7ee22b95b41121fbe5b9825fc94ea68f8f9118b9be.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Category | Task | BPE | SuperBPE | Δ
Knowledge | ARC-Easy (MC) | 46.6 | 67.1 | +20.5**
| ARC-Challenge (MC) | 35.1 | 50.6 | +15.5**
| Jeopardy (EM) | 42.1 | 41.8 | -0.3
| MMLU (MC) | 36.5 | 44.7 | +8.2**
| OpenbookQA (MC) | 33.2 | 54.4 | +21.2**
| TriviaQA (EM) | 60.6 | 61.3 | +0.7
| WikidataQA (EM) | 69.7 | 70.9 | +1.2*
Math & Reasoning | Arithmetic (EM) | 54.8 | 59.3 | +4.5**
| GSM8K (EM) | 6.4 | 6.7 | +0.3
| LSAT-AR (MC) | 21.3 | 23.0 | +1.7
| Operators (EM) | 35.5 | 33.6 | -1.9
| Repeat-Copy-Logic (EM) | 3.1 | 6.2 | +3.1
Coding | HumanEval (pass@10) | 15.9 | 13.4 | -2.5
| MBPP (pass@10) | 27.5 | 28.3 | +0.8
Reading Comprehension | BoolQ (MC) | 59.7 | 64.6 | +4.9**
| CoQA (EM) | 12.6 | 13.2 | +0.6
| DROP (EM) | 31.3 | 31.4 | +0.1
| HotpotQA (EM) | 53.5 | 55.2 | +1.7*
| SQuAD (EM) | 75.1 | 75.8 | +0.7
Commonsense | CommonsenseQA (MC) | 33.5 | 53.8 | +20.3**
| COPA (MC) | 77.0 | 85.8 | +8.8**
| PIQA (MC) | 55.2 | 59.8 | +4.6*
| Winograd (MC) | 50.4 | 53.1 | +2.7
| Winogrande (MC) | 47.3 | 52.6 | +5.3*
Language Understanding | HellaSwag (MC) | 29.7 | 33.7 | +4.0**
| LAMBADA (EM) | 77.0 | 70.6 | -6.4**
| Language Identification (EM) | 8.8 | 9.0 | +0.2
String Manipulation | CS Algorithms (EM) | 46.1 | 48.6 | +2.5
| CUTE (EM) | 31.3 | 32.6 | +1.3
| Dyck-Languages (EM) | 15.9 | 14.2 | -1.7
Average | | 39.8 | 43.8 | +4.0
", + "bbox": [ + 233, + 99, + 763, + 563 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Table 1: Performance of BPE and SuperBPE models (with transition point $t = 180\\mathrm{k}$ ) on 30 downstream tasks. The two models are fixed in model parameters (8B), vocabulary size (200k), and training FLOPs (corresponding to ~330B tokens), differing only in their algorithm for learning the vocabulary. The SuperBPE model outperforms the baseline on 25 of 30 tasks and requires $27\\%$ less compute at inference time. See Figure 3 for the moving task average during pretraining and §A.4 for further evaluation details. * $p < 0.05$ , ** $p < 0.005$ under a McNemar test.", + "bbox": [ + 169, + 573, + 826, + 672 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "data is typically represented by a dictionary of \"words\" along with their counts. With whitespace pretokenization, the \"words\" are whitespace-separated chunks (e.g., common words) stored once along with a large count, conferring substantial savings in memory. Without whitespace pretokenization, the \"words\" are extremely long (e.g., entire training documents), leading to minimal dedduplication of the text and excessively large dictionaries. Fortunately, tokenizer training must be done only once; in our experiments, SuperBPE tokenizers train in a few hours on 100 CPUs, a negligible cost compared to LLM pretraining.", + "bbox": [ + 169, + 709, + 826, + 809 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "2.3 Encoding efficiency", + "text_level": 1, + "bbox": [ + 171, + 835, + 361, + 853 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "A tokenizer's encoding efficiency can be measured in bytes-per-token, i.e., how many UTF-8 bytes are encoded, on average, in each token over a large corpus of text (see calculation in Figure 1). We train a series of tokenizers on a 10GB subset of data from OLMO 2's pretraining corpus and evaluate encoding efficiency on a held-out subset.", + "bbox": [ + 169, + 867, + 825, + 926 + ], + "page_idx": 3 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 493, + 948, + 504, + 959 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Shown in Figure 1, SuperBPE scales much better with vocabulary size than does BPE. BPE quickly plateaus around a vocabulary size of $\\sim 50\\mathrm{K}$ , achieving 4.45 bytes-per-token at a vocabulary size of $200\\mathrm{k}$ . In fact, even with infinite vocabulary size (namely, if every whitespace-delimited word were in the vocabulary), BPE cannot exceed 4.68 bytes-per-token, i.e., the average word length in the held-out subset. SuperBPE exceeds this upper bound with a mere $\\sim 12\\mathrm{k}$ vocabulary size and reaches 5.55 bytes-per-token at $50\\mathrm{K}$ and 6.63 at $200\\mathrm{k}$ .", + "bbox": [ + 169, + 102, + 826, + 200 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Surprisingly, SuperBPE is also more efficient than BPE with whitespace pretokenization completely disabled. Since BPE is a greedy algorithm, completely disabling whitespace pretokenization may cause it to make highly suboptimal choices early on. In particular, tokens in this setting often consist of the end of the previous word and start of the next word, as opposed to sequences of complete words. 
By keeping whitespace pretokenization on at the beginning, we can avoid suboptimal choices while still obtaining a tokenizer with superwords.", + "bbox": [ + 169, + 207, + 553, + 362 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Figure 2 shows how SuperBPE's encoding efficiency depends on the choice of transition point $t$ . The relationship is smooth, with $t = 80\\mathrm{k}$ achieving the best encoding efficiency. However, we will see in our experiments that the optimal tokenizer for LM pretraining is not necessarily the most encoding-efficient.", + "bbox": [ + 169, + 367, + 553, + 453 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/9c33cd3e1169158421e309ba6c1c3d751448572342e45693e32bbce74e5b0ca6.jpg", + "image_caption": [ + "Figure 2: Encoding efficiency varies smoothly with the choice of transition point $t$ in SuperBPE's pretokenization curriculum." + ], + "image_footnote": [], + "bbox": [ + 562, + 217, + 823, + 369 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3 Experiments", + "text_level": 1, + "bbox": [ + 171, + 479, + 318, + 497 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In our main experiments, we pretrain models from scratch while fixing the total training FLOPs and vocabulary size, changing only the algorithm for learning the vocabulary.", + "bbox": [ + 169, + 515, + 823, + 546 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.1 Setup", + "text_level": 1, + "bbox": [ + 171, + 568, + 256, + 585 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We first pretrain 8B models with BPE and SuperBPE tokenizers. We use the OLMO2 7B (OLMo et al., 2024) training configuration, including the model architecture, training hyperparameters, and pretraining corpus, but reduce the total number of training steps to correspond to $\\sim 330$ tokens (compared to 4T). Following prior work (Pagnoni et al., 2024), we also fix the effective context size (measured in bytes) for each model. This prevents SuperBPE models from gaining an advantage by seeing more textual context for the same next-token prediction (we provide analysis on this in §B.1). Since more efficient models have a shorter context length in tokens, the training steps are adjusted accordingly to match the total train FLOPs at the end of training. Note that in this setting, a same-sized SuperBPE model uses fewer inference FLOPs than the BPE model.", + "bbox": [ + 169, + 597, + 826, + 742 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We fix the vocabulary size of all tokenizers to 200,000 (in the same ballpark as, e.g., GEMMA at 250k [Google, 2024], GPT-4O at 200k, and Llama3 at 130k [Meta, 2024]).6 We consider three transition points for SuperBPE: $t = 80\\mathrm{k}$ , which has the best encoding efficiency, and two later transitions, $t = 160\\mathrm{k}$ and $t = 180\\mathrm{k}$ . All tokenizers are trained on the same 10GB subset of OLMO2's pretraining mix. 
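To illustrate how fixing the effective context in bytes determines each model's token-level context length, a small arithmetic check using the bytes-per-token and context-length figures reported in Table 2 below; the small mismatches come from rounding the reported bytes-per-token values.

```python
# Effective context (bytes) ≈ context length (tokens) × encoding efficiency (bytes/token).
settings = {
    "BPE 8B":              (4096, 4.46),
    "SuperBPE 8B, t=80k":  (2756, 6.63),
    "SuperBPE 8B, t=160k": (2884, 6.33),
    "SuperBPE 8B, t=180k": (3000, 6.09),
}
for name, (ctx_tokens, bytes_per_tok) in settings.items():
    # Each product lands near the ~18.3k-byte effective context listed in Table 2.
    print(name, round(ctx_tokens * bytes_per_tok))
```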
SA.1 provides further details about tokenizer training.", + "bbox": [ + 169, + 748, + 826, + 823 + ], + "page_idx": 4 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 31, + 517, + 47 + ], + "page_idx": 4 + }, + { + "type": "page_footnote", + "text": "$^{4}$ OLMO2 7B has 7.30B parameters, while our 8B BPE and SuperBPE models have 8.12B parameters due to their increased vocabulary size.", + "bbox": [ + 169, + 843, + 825, + 869 + ], + "page_idx": 4 + }, + { + "type": "page_footnote", + "text": "In practice, models using our more efficient tokenizers could shift some or all of the \"saved\" context FLOPs to longer effective contexts instead of more training steps.", + "bbox": [ + 169, + 869, + 826, + 897 + ], + "page_idx": 4 + }, + { + "type": "page_footnote", + "text": "For 8B models, a 200k vocabulary size is close to the recommendation of Tao et al. (2024) based on primarily English data. We fix the vocabulary size to simplify comparisons between models.", + "bbox": [ + 169, + 896, + 823, + 924 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 493, + 946, + 504, + 959 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/13b903da56b7375b8e6dc6f72255fc803e99efb2d92e9a3524c6cb69a93788f9.jpg", + "image_caption": [ + "Figure 3: Average task performance on 30 downstream tasks, evaluated at every 5000 steps in model pretraining. We see that SuperBPE models consistently outperform the baseline that uses a BPE tokenizer. All compared models share the same vocabulary size and train budget; $t$ denotes the transition point in SuperBPE's pretokenization curriculum." + ], + "image_footnote": [], + "bbox": [ + 303, + 93, + 694, + 320 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We also train a slightly larger 11B SuperBPE model with $t = 180\\mathrm{k}$ , which approximately matches the 8B BPE baseline in total bytes of training data seen as well as both train and inference compute. See Table 2 for exact specifications for all runs.", + "bbox": [ + 169, + 414, + 823, + 458 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "3.2 Results on downstream tasks", + "text_level": 1, + "bbox": [ + 171, + 474, + 429, + 488 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We evaluate SuperBPE on 30 benchmarks covering knowledge, math & reasoning, coding, reading comprehension, common sense, language understanding, and string manipulation. The full evaluation suite is shown in Table 1 and evaluation details are in SA.4.", + "bbox": [ + 169, + 500, + 826, + 544 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Figure 3 shows the task average during pretraining. All SuperBPE models substantially outperform the BPE baseline at the end of training. The strongest 8B SuperBPE model, which has transition point $t = 180\\mathrm{k}$ (the latest one we consider), outperforms the baseline by $4.0\\%$ on average and wins on 25 of 30 individual tasks. Table 1 shows the per-task performance for this model (see §A.4 for results for the other models). The largest gains are on multiple choice tasks; when considering these alone, the performance improvement grows to $+9.7\\%$ . 
The only task on which SuperBPE loses in a statistically significant way is LAMBADA; here, we observe that SuperBPE is actually ahead for the majority of training checkpoints, but accuracy dips at the end from $75.8\\%$ to $70.6\\%$ (see Figure 12).", + "bbox": [ + 169, + 549, + 826, + 676 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Notably, while the choice of transition point affects the performance of the resulting model, all reasonable choices are significantly stronger than the baseline. When using the most encoding-efficient transition point, i.e., $t = 80\\mathrm{k}$ , we see a $+3.1\\%$ task improvement over BPE and inference compute reduced by $35\\%$ . Later transition points empirically cede some gains in encoding efficiency in exchange for further improvements in performance.[7]", + "bbox": [ + 169, + 681, + 826, + 755 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4 Analysis", + "text_level": 1, + "bbox": [ + 171, + 773, + 284, + 791 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.1 Language modeling", + "text_level": 1, + "bbox": [ + 171, + 806, + 362, + 821 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Following prior work (Biderman et al., 2023; Xue et al., 2022; Yu et al., 2023; Wang et al., 2024), we evaluate language modeling performance using bits-per-byte (BPB), which normalizes the loss by the tokenizer's encoding efficiency to fairly compare models with different tokenizers.", + "bbox": [ + 169, + 830, + 826, + 875 + ], + "page_idx": 5 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 5 + }, + { + "type": "page_footnote", + "text": "7This finding adds to the ongoing debate about the relationship between tokenization compression and LM performance (Galle, 2019; Goldman et al., 2024; Schmidt et al., 2024), providing further evidence that higher compression does not necessarily improve performance.", + "bbox": [ + 169, + 883, + 823, + 925 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 493, + 948, + 504, + 959 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/35633b046a0c8fcfd9159f25a1d19b38b878f444e9cf5672d853601061d6c040.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
 | BPE 8B | SuperBPE 8B | SuperBPE 8B | SuperBPE 8B | SuperBPE 11B
SuperBPE transition point | n/a | t = 80k | t = 160k | t = 180k | t = 180k
Parameter count (billion) | 8.12 | 8.12 | 8.12 | 8.12 | 11.30
Train steps | 76,543 | 118,419 | 112,722 | 107,982 | 77,525
Average context length (bytes) | 18,262 | 18,272 | 18,263 | 18,268 | 18,268
Vocabulary size | 200k | 200k | 200k | 200k | 200k
Context length (tokens) | 4,096 | 2,756 | 2,884 | 3,000 | 3,000
Encoding efficiency (bytes/token) | 4.46 | 6.63 | 6.33 | 6.09 | 6.09
Train compute ($10^{21}$ FLOPs) | 17.2 | 17.2 | 17.2 | 17.2 | 17.2
Inference compute ($10^{9}$ FLOPs/byte) | 3.75 | 2.42 | 2.54 | 2.65 | 3.75
", + "bbox": [ + 174, + 99, + 823, + 258 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 2: Training setup for the models we compare. We fix the vocabulary size and effective context size (measured in bytes) for each model and adjust the total number of training steps accordingly so that each model has the same total train budget (in FLOPs). The 8B SuperBPE models match the 8B BPE model in train compute but use less inference compute; the 11B SuperBPE model matches the 8B baseline in both train and inference compute. Numbers fixed across model settings are highlighted in the same color.", + "bbox": [ + 169, + 268, + 826, + 363 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "This is necessary because longer tokens, on average, contain more information and therefore are more difficult to predict. Bits-per-byte is defined as $\\mathrm{BPB}(x) = \\mathcal{L}_{\\mathrm{CE}}(x) / (\\ln (2)\\cdot n_{\\mathrm{bytes}})$ , where $n_{\\mathrm{bytes}}$ is the length of the text in bytes and $\\mathcal{L}_{\\mathrm{CE}}(x)$ is the sum of the cross-entropy loss over the entire text. We find that BPE 8B, SuperBPE 8B ( $t = 180\\mathrm{k}$ ), and SuperBPE 11B attain 0.7465, 0.7482, and 0.7445 BPB, respectively, at the end of training. Although these numbers do not differ appreciably, the ranking of models according to BPB and downstream task performance are not consistent.", + "bbox": [ + 169, + 390, + 823, + 494 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.2 Loss distribution analysis", + "text_level": 1, + "bbox": [ + 171, + 516, + 405, + 532 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Why does the SuperBPE 8B model achieve slightly higher normalized language modeling loss (§4.1) than the baseline BPE model despite outperforming it on a wide variety of downstream tasks (§3.2)? To investigate this, we plot the distribution of per-token $\\mathrm{BPB}^9$ for both models on data sampled from the pretraining data mixture in Figure 4.", + "bbox": [ + 169, + 544, + 517, + 660 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Although the BPE and SuperBPE models have very similar BPB on average, we see that loss is distributed very differently over the training data. Compared to the baseline, the SuperBPE model makes fewer predictions with either very high or very low loss.", + "bbox": [ + 169, + 664, + 517, + 750 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/1edddf8bae5540af35ba41c602b1db1040be2fc9f710e1df93c948f4103a12f1.jpg", + "image_caption": [ + "Figure 4: Histogram of per-token losses for both models from Table 1, measured over a large corpus of text. We observe that the SuperBPE model is a more consistent performer, making fewer predictions with very high or very low loss." + ], + "image_footnote": [], + "bbox": [ + 527, + 497, + 823, + 662 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Low-loss tokens. We find that the reduction in low-loss tokens is attributable to a small set of extremely common words that the BPE model can easily predict, but are not available to SuperBPE as they are merged into larger superword tokens. For instance, the tokens _the, _of, and _to (the three most common words in the corpus) appear an order of magnitude more often under BPE than SuperBPE in the same corpus of text. 
When excluding these three token types alone, the BPB ranking reverses, with SuperBPE achieving 0.02 lower BPB than BPE.", + "bbox": [ + 169, + 768, + 826, + 853 + ], + "page_idx": 6 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 6 + }, + { + "type": "page_footnote", + "text": "8Bits-per-byte of different models are considered comparable because total cross-entropy loss is a universal quantity representing the number of additional bits required to reconstruct the text given the model. This quantity is normalized by the number of bytes for easier interpretation.", + "bbox": [ + 169, + 869, + 823, + 910 + ], + "page_idx": 6 + }, + { + "type": "page_footnote", + "text": "The per-token BPB is the per-token loss (in bits) divided by the average encoding efficiency.", + "bbox": [ + 192, + 910, + 792, + 925 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 491, + 946, + 504, + 959 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/a46ee469fccab5f11901d641b1007db9c5048f40856d774f9b6c27b558ca1c29.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
POS tag | # | Example Tokens
NN, IN | 906 | _case_of, _hint_of, _availability_of, _emphasis_on, _distinction_between
VB, DT | 566 | _reached_a, _discovered_the, _identify_the, _becomes_a, _issued_a
DT, NN | 498 | _this_month, _no_idea, _the_earth, _the_maximum, _this_stuff
IN, NN | 406 | _on_top, _by_accident, _in_effects, _for_lunch, _in_front
IN, DT | 379 | _on_the, _without_a, _alongside_the, _for_each
IN, DT, NN | 333 | _for_a_living, _by_the_way, _into_the_future, _in_the_midst
NN, IN, DT | 270 | _position_of_the, _component_of_the, _review_of_the, _example_of_this
IN, DT, JJ | 145 | _like_any_other, _with_each_other, _for_a_short, _of_the_entire
VB, IN, DT | 121 | _worked_as_a, _based_on_the, _combined_with_the, _turned_into_a
IN, DT, NN, IN | 33 | _at_the_time_of, _in_the_presence_of, _in_the_middle_of, _in_a_way_that
,, CC, PRP, VB | 20 | ,_and_it_was, ,_but_I_think, ,_but_I_have, ,_but_I_am
IN, DT, JJ, NN | 18 | _in_the_long_run, _on_the_other_hand, _for_the_first_time, _in_the_same_way
", + "bbox": [ + 173, + 101, + 834, + 292 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 3: The most common POS tags for tokens of 2, 3, and 4 words in SuperBPE, along with random example tokens for each tag. NN = noun, IN = preposition, VB = verb, DT = determiner, CC = conjunction, JJ = adjective, and PRP = pronoun.", + "bbox": [ + 169, + 303, + 826, + 347 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "The reduction in low-loss tokens also makes sense from a qualitative linguistic analysis of SuperBPE tokens. In Table 3, we show the most common POS tags among superword tokens in SuperBPE along with random examples for each tag. The tokens often capture common multi-word expressions (by accident, of course, for a living) that function as a single semantic unit (Schneider et al., 2014). As an example, prepositions (IN) figure prominently in superword tokens (e.g., depend on, distinction between) and require lexeme-specific memorization. The individual words in these fixed expressions are often semantically vacuous and have little possible variation in context, so they are easy to predict once memorized.", + "bbox": [ + 169, + 378, + 826, + 492 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "High-loss tokens. On the other hand, the much thinner tail of very high-loss tokens shows that, in the worst case, the SuperBPE model consistently puts more probability mass on the correct token. On average, we expect models to suffer high loss on tokens that are difficult to predict. This may explain why SuperBPE can outperform BPE on downstream tasks but have higher average BPB: the tokens scored in task evaluations tend to be among the hardest to predict. This is consistent with prior findings that models generally continue to improve in downstream tasks even as their overall loss plateaus due to improving on a narrow and difficult slice of the distribution (Liu et al., 2023).", + "bbox": [ + 169, + 515, + 826, + 627 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.3 Scaling", + "text_level": 1, + "bbox": [ + 171, + 652, + 269, + 667 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "To characterize the scaling behavior of SuperBPE, we also perform experiments at smaller scales. $^{10}$ We train baseline models at 680M and 1.9B and scale the base number of training tokens proportionately to the number of parameters. We also perform runs at $0.5 \\times$ , $2 \\times$ , and $4 \\times$ the base number of tokens to observe the trend with respect to training duration. Then, we train two SuperBPE models that match the training budget of each baseline BPE model, one that matches the baseline in parameter count (analogous to SuperBPE 8B) and a larger model that matches in both train and inference compute (analogous to SuperBPE 11B). We focus on the $t = 180k$ tokenizer to reduce complexity.", + "bbox": [ + 169, + 681, + 826, + 796 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We plot BPB at the end of training for each run in Figure 5. In the under-trained regime, both SuperBPE models achieve lower BPB than the baseline. In the over-trained regime, the ranking from worst to best is SuperBPE (matching parameter count), BPE, and SuperBPE (matching inference compute). Additionally, the separation between the models increases with further over-training. 
We provide additional results and comments on scaling in §B.4.", + "bbox": [ + 169, + 801, + 826, + 875 + ], + "page_idx": 7 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 7 + }, + { + "type": "page_footnote", + "text": "10For scaling, we focus on BPB since our downstream evaluations are too noisy for our small models to make meaningful comparisons.", + "bbox": [ + 169, + 896, + 823, + 925 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 493, + 948, + 504, + 959 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/715318fd5aa7972126c82d9ddbd291b8e70a43f4bd0a178f500c66ec097d1351.jpg", + "image_caption": [ + "(a) 680M model size" + ], + "image_footnote": [], + "bbox": [ + 181, + 102, + 488, + 261 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/aed3f91740cde20386c7e584c0bea96ad833070d2a9ec1843562d5fae5c712a1.jpg", + "image_caption": [ + "(b) 1.9B model size", + "Figure 5: Scaling results for 680M and 1.9B baseline model sizes. Compared to the BPE baseline, SuperBPE with matching parameter count achieves lower BPB in the under-trained regime, while SuperBPE with matching inference compute achieves lower BPB than the baseline at every model size and every training budget tested. Note that BPB comparisons between BPE and SuperBPE models do not track downstream task accuracy due to differences in how BPE and SuperBPE models distribute loss over tokens (§4.2)." + ], + "image_footnote": [], + "bbox": [ + 501, + 102, + 813, + 262 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "5 Related Work", + "text_level": 1, + "bbox": [ + 171, + 406, + 328, + 422 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Tokenization beyond subwords Prior work has explored processing text at multiple levels of granularity (Lai et al., 2021; Zhang et al., 2021) or creating multi-word tokens through frequency-based identification of $n$ -grams (Gee et al., 2023; Kumar & Thawani, 2022). However, these were explored in limited experimental contexts (mainly for machine translation) and had mixed effectiveness. Naively disabling pretokenization in BPE has been found to severely degrade model performance (Dagan et al., 2024; Schmidt et al., 2024; Kudo, 2018), although this approach may be more promising for unigram tokenization (Kudo & Richardson, 2018), as adopted by JURASSIC (Lieber et al., 2021) and BLOOMBERGGPT (Wu et al., 2023). In concurrent work, Huang et al. (2025) disentangle input and output vocabularies, expanding only the former to include $n$ -gram tokens. Their method requires significant modifications of the LM input component and considers fixed length of $n$ -grams.", + "bbox": [ + 169, + 438, + 826, + 592 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Multi-token prediction Multi-token prediction (MTP) equips LMs with some extra parameters to predict multiple tokens in a single time step (Qi et al., 2020; Gloeckle et al., 2024) and was recently adopted by DEEPSEEK-V3, though the MTP module is discarded at inference-time. MTP's effectiveness corroborates that LMs are capable of predicting more than one subword in a forward pass. However, these approaches fix the number of tokens predicted in each time step and require modifications to the architecture and training objective. 
We note that the benefits of MTP and superword tokens may be orthogonal.", + "bbox": [ + 169, + 609, + 826, + 708 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Tokenizer-free language modeling Some works have explored the possibility of completely removing tokenization from LMs and directly modeling text as a sequence of bytes (Clark et al., 2022; Xue et al., 2022; Wang et al., 2024). To overcome the increased compute requirement due to expanded sequence lengths, alternative architectures have been proposed that either segment bytes into fixed-length patches (Tay et al., 2022; Yu et al., 2023) or dynamically predict patch boundaries with sub-modules (Nawrot et al., 2023; Pagnoni et al., 2024; Ahia et al., 2024; Hwang et al., 2025); these dynamic patches have been qualitatively observed to span multiple words.", + "bbox": [ + 169, + 724, + 826, + 838 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "Tokenizer transfer Many methods have been proposed to adapt models after training to use a different tokenizer. These may rely on intervention during pretraining (Chen et al., 2023), continued training for a subset of layers (Marchisio et al., 2023), or leveraging self-distillation (Minixhofer et al., 2025), heuristic, (Minixhofer et al., 2022; Gee et al., 2022; Tran, 2020; Liu et al., 2024b; Dobler & De Melo, 2023), or hypernetwork-based (Minixhofer", + "bbox": [ + 169, + 853, + 826, + 925 + ], + "page_idx": 8 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 493, + 948, + 503, + 958 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "et al., 2024) initialization of a fresh embedding matrix, optionally followed by fine-tuning. These methods may be used to upgrade existing models to use SuperBPE tokenizers, with the goal of reducing inference cost while maintaining performance. We leave this direction to future work.", + "bbox": [ + 169, + 103, + 826, + 160 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "6 Conclusion", + "text_level": 1, + "bbox": [ + 171, + 179, + 308, + 195 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Although tokenization lies at the foundation of language modeling, acting as the lens through which models view text, the algorithms in use have remained largely unchanged over the past decade. SuperBPE builds on the observation that tokens need not be limited to subwords, extending the BPE algorithm to superword tokens. When replacing subword BPE tokenizers with SuperBPE tokenizers in pretraining, we find that language models perform better over a large suite of downstream tasks, while also being substantially more efficient at inference time. These benefits are achieved without modifying the underlying model architecture, making SuperBPE a compelling alternative to BPE that seamlessly integrates with modern language model ecosystems.", + "bbox": [ + 169, + 210, + 826, + 338 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Acknowledgments", + "text_level": 1, + "bbox": [ + 171, + 356, + 346, + 373 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "We would like to thank Alex Fang for pretraining advice, Vivek Ramanujan for helping debug our distributed training setup, Ian Magnusson for helpful comments on LM evaluation, and Zhaofeng Wu, Alexander Fang, and Xiaochuang Han for feedback on drafts. 
We are also grateful to Luca Soldaini, Goncalo Faria, Shrimai Prabhumoye, Matt Jordan, Artidoro Pagnoni, Mike Lewis, Doug Downey, Shannon Shen, and the UW NLP community for valuable conversations about this work. Both co-first authors, AL and JH, are supported by the NSF Graduate Research Fellowship Program. JH and SO are supported in part by the Microsoft Grant for Customer Experience Innovation. This work was partially funded by NSF DMS-2134012, NSF CCF-2019844, ONR N00014-24-1-2207, and NSF 2113530 as well as with NVIDIA resources provided through the National AI Research Resource Pilot (NAIRR).", + "bbox": [ + 169, + 387, + 828, + 541 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 171, + 560, + 276, + 575 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Orevaoghene Ahia, Sachin Kumar, Hila Gonen, Jungo Kasai, David Mortensen, Noah Smith, and Yulia Tsvetkov. Do all languages cost the same? tokenization in the era of commercial language models. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 9904-9923, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.614. URL https://aclanthology.org/2023.emnlp-main.614.", + "Orevaoghene Ahia, Sachin Kumar, Hila Gonen, Valentin Hofmann, Tomasz Limisiewicz, Yulia Tsvetkov, and Noah A. Smith. MAGNET: Improving the multilingual fairness of language models with adaptive gradient-based tokenization. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=1e3MOwHSIX.", + "Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program synthesis with large language models, 2021. URL https://arxiv.org/abs/2108.07732.", + "Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. Pythia: A suite for analyzing large language models across training and scaling, 2023. URL https://arxiv.org/abs/2304.01373.", + "BIG-bench. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=uyTL5Bvosj." + ], + "bbox": [ + 173, + 583, + 828, + 924 + ], + "page_idx": 9 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning about physical commonsense in natural language. In *Thirty-Fourth AAAI Conference on Artificial Intelligence*, 2020.", + "Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. Large language models in machine translation. In Jason Eisner (ed.), Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pp. 858-867, Prague, Czech Republic, June 2007. 
Association for Computational Linguistics. URL https://aclanthology.org/D07-1090/.", + "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. 2020.", + "Anthony Chen, Pallavi Gudipati, Shayne Longpre, Xiao Ling, and Sameer Singh. Evaluating entity disambiguation and the role of popularity in retrieval-based NLP. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4472-4485, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.345. URL https://aclanthology.org/2021.acl-long.345.", + "Y Chen, K Marchisio, R Raileanu, DI Adelani, P Stenetorp, S Riedel, and M Artetxe. Improving language plasticity via pretraining with active forgetting. In Advances in Neural Information Processing Systems. NeurIPS, 2023.", + "Kenneth Church. How many multiword expressions do people know? In Proceedings of the Workshop on Multiword Expressions: From Parsing and Generation to the Real World, pp. 137-144, Portland, Oregon, USA, 2011. Association for Computational Linguistics. URL https://aclanthology.org/W11-0823/.", + "Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2924–2936, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1300. URL https://aclanthology.org/N19-1300.", + "Jonathan H. Clark, Dan Garrette, Iulia Turc, and John Wieting. Canine: Pre-training an efficient tokenization-free encoder for language representation. Transactions of the Association for Computational Linguistics, 10:73-91, 2022. doi: 10.1162/tacl_a_00448. URL https://aclanthology.org/2022.tacl-1.5.", + "Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge, 2018. URL https://arxiv.org/abs/1803.05457.", + "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems, 2021. URL https://arxiv.org/abs/2110.14168.", + "Pablo Contreras Kallens and Morten H. Christiansen. Models of language and multiword expressions. Frontiers in Artificial Intelligence, 5, 2022. doi: 10.3389/frai.2022.781962. URL https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2022.781962." 
+ ], + "bbox": [ + 171, + 102, + 826, + 924 + ], + "page_idx": 10 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 488, + 946, + 506, + 959 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Gautier Dagan, Gabriel Synnaeve, and Baptiste Rozière. Getting the most out of your tokenizer for pre-training and domain adaptation. In Proceedings of the 41st International Conference on Machine Learning, ICML'24. JMLR.org, 2024. URL https://dl.acm.org/doi/10.5555/3692070.3692457.", + "DeepSeek-AI. Deepseek-v3 technical report, 2025. URL https://arxiv.org/abs/2412.19437.", + "Konstantin Dobler and Gerard De Melo. Focus: Effective embedding initialization for monolingual specialization of multilingual models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 13440-13454, 2023.", + "Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2368-2378, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1246. URL https://aclanthology.org/N19-1246.", + "Lukas Edman, Helmut Schmid, and Alexander Fraser. CUTE: Measuring LLMs' understanding of their tokens. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 3017-3026, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.177. URL https://aclanthology.org/2024.emnlp-main.177.", + "Philip Gage. A new algorithm for data compression. The C Users Journal archive, 12:23-38, 1994. URL https://api-semanticscholar.org/CorpusID:59804030.", + "Matthias Galle. Investigating the effectiveness of BPE: The power of shorter sequences. In Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun Wan (eds.), Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 1375-1381, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1141. URL https://aclanthology.org/D19-1141.", + "Leonidas Gee, Andrea Zugarini, Leonardo Rigutini, Paolo Torroni, et al. Fast vocabulary transfer for language model compression. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track, pp. 409-416. Association for Computational Linguistics (ACL), 2022.", + "Leonidas Gee, Leonardo Rigutini, Marco Ernandes, and Andrea Zugarini. Multi-word tokenization for sequence compression. In Mingxuan Wang and Imed Zitouni (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track, pp. 612-621, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-industry.58. URL https://aclanthology.org/2023.emnlp-industry.58.", + "Fabian Gloeckle, Badr Youbi Idrissi, Baptiste Roziere, David Lopez-Paz, and Gabriel Synnaeve. 
Better & faster large language models via multi-token prediction. In *Forty-first International Conference on Machine Learning*, 2024. URL https://openreview.net/forum?id=pEWAcejiU2.", + "Omer Goldman, Avi Caciularu, Matan Eyal, Kris Cao, Idan Szpektor, and Reut Tsarfaty. Unpacking tokenization: Evaluating text compression and its correlation with model performance. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Findings of the Association for Computational Linguistics: ACL 2024, pp. 2274-2286, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024-findings-acl.134. URL https://aclanthology.org/2024-findings-acl.134.", + "Google. Gemma: Open models based on gemini research and technology, 2024." + ], + "bbox": [ + 171, + 102, + 826, + 925 + ], + "page_idx": 11 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Jonathan Hayase, Alisa Liu, Yejin Choi, Sewoong Oh, and Noah A. Smith. Data mixture inference: What do BPE tokenizers reveal about their training data? In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=EHXyeImux0.", + "Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=d7KBjmI3GmQ.", + "Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. In Proceedings of the 36th International Conference on Neural Information Processing Systems, pp. 30016-30030, 2022.", + "Valentin Hofmann, Janet Pierrehumbert, and Hinrich Schütze. Superbizarre is not superb: Derivational morphology improves BERT's interpretation of complex words. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 3594-3608, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.279. URL https://aclanthology.org/2021.acl-long.279.", + "Hongzhi Huang, Defa Zhu, Banggu Wu, Yutao Zeng, Ya Wang, Qiyang Min, and Xun Zhou. Over-tokenized transformer: Vocabulary is generally worth scaling, 2025. URL https://arxiv.org/abs/2501.16975.", + "Sukjun Hwang, Brandon Wang, and Albert Gu. Dynamic chunking for end-to-end hierarchical sequence modeling, 2025. URL https://arxiv.org/abs/2507.07955.", + "Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Regina Barzilay and Min-Yen Kan (eds.), Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1601-1611, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1147. URL https://aclanthology.org/P17-1147.", + "Guy Kaplan, Matanel Oren, Yuval Reif, and Roy Schwartz. 
From tokens to words: On the inner lexicon of LLMs. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=328vch6tRs.", + "Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.", + "Pride Kavumba, Naoya Inoue, Benjamin Heinzerling, Keshav Singh, Paul Reisert, and Kentaro Inui. When choosing plausible alternatives, clever hans can be clever. In Simon Ostermann, Sheng Zhang, Michael Roth, and Peter Clark (eds.), Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing, pp. 33-42, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-6004. URL https://aclanthology.org/D19-6004.", + "Taku Kudo. Sentencepiece experiments. https://github.com/google/sentencepiece/blob/master/doc/experiments.md, 2018.", + "Taku Kudo and John Richardson. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Eduardo Blanco and Wei Lu (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 66-71, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-2012. URL https://aclanthology.org/D18-2012." + ], + "bbox": [ + 171, + 102, + 828, + 924 + ], + "page_idx": 12 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 488, + 946, + 506, + 959 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Dipesh Kumar and Avijit Thawani. BPE beyond word boundary: How NOT to use multi word expressions in neural machine translation. In Shabnam Tafreshi, João Sedoc, Anna Rogers, Aleksandr Drozd, Anna Rumshisky, and Arjun Akula (eds.), Proceedings of the Third Workshop on Insights from Negative Results in NLP, pp. 172-179, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.insights-1.24. URL https://aclanthology.org/2022.insights-1.24.", + "Vedang Lad, Wes Gurnee, and Max Tegmark. The remarkable robustness of llms: Stages of inference?, 2024. URL https://arxiv.org/abs/2406.19384.", + "Yuxuan Lai, Yijia Liu, Yansong Feng, Songfang Huang, and Dongyan Zhao. Lattice-BERT: Leveraging multi-granularity representations in Chinese pre-trained language models. In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou (eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1716-1731, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.137. URL https://aclanthology.org/2021.nacl-main.137.", + "Sander Land. A short introduction to pre-tokenization weirdness, 2024. URL https://tokencontributions.substack.com/p/a-short-introduction-to-pre-tokenization.", + "Sander Land and Max Bartolo. Fishing for magikarp: Automatically detecting under-trained tokens in large language models. 
In Yaser Al-Onaizan, Mohit Bansal, and YunNung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 11631-11646, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.649. URL https://aclanthology.org/2024.emnlp-main.649.", + "Hector J. Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge. In Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning, pp. 552-561. AAAI Press, 2012.", + "Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Gadre, Hritik Bansal, Etash Guha, Sedrick Keh, Kushal Arora, Saurabh Garg, Rui Xin, Niklas Muennighoff, Reinhard Heckel, Jean Mercat, Mayee Chen, Suchin Gururangan, Mitchell Wortsman, Alon Albalak, Yonatan Bitton, Marianna Nezhurina, Amro Abbas, Cheng-Yu Hsieh, Dhruba Ghosh, Josh Gardner, Maciej Kilian, Hanlin Zhang, Rulin Shao, Sarah Pratt, Sunny Sanyal, Gabriel Ilharco, Giannis Daras, Kalyani Marathe, Aaron Gokaslan, Jieyu Zhang, Khyathi Chandu, Thao Nguyen, Igor Vasiljevic, Sham Kakade, Shuran Song, Sujay Sanghavi, Fartash Faghri, Sewoong Oh, Luke Zettlemoyer, Kyle Lo, Alaaeldin El-Nouby, Hadi Pouransari, Alexander Toshev, Stephanie Wang, Dirk Groeneveld, Luca Soldaini, Pang Wei Koh, Jenia Jitsev, Thomas Kollar, Alexandros G. Dimakis, Yair Carmon, Achal Dave, Ludwig Schmidt, and Vaishaal Shankar. Datacomp-lm: In search of the next generation of training sets for language models, 2024. URL https://arxiv.org/abs/2406.11794.", + "Opher Lieber, Or Sharir, Barak Lenz, and Yoav Shoham. Jurassic-1: Technical details and evaluation, 2021. URL https:// uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf.", + "Hong Liu, Sang Michael Xie, Zhiyuan Li, and Tengyu Ma. Same pre-training loss, better downstream: Implicit bias matters for language models. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 22188-22214. PMLR, 23-29 Jul 2023. URL https://proceedings.mlr.press/v202/liu23ao.html.", + "Jiacheng Liu, Sewon Min, Luke Zettlemoyer, Yejin Choi, and Hannaneh Hajishirzi. Infini-gram: Scaling unbounded n-gram language models to a trillion tokens. In First Conference on Language Modeling, 2024a. URL https://openreview.net/forum?id=u2vAyMeLm." + ], + "bbox": [ + 171, + 102, + 826, + 924 + ], + "page_idx": 13 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Yihong Liu, Peiqin Lin, Mingyang Wang, and Hinrich Schütze. Ofa: A framework of initializing unseen subword embeddings for efficient large-scale multilingual continued pretraining. In Findings of the Association for Computational Linguistics: NAACL 2024, pp. 1067-1097, 2024b.", + "Scott Lundberg. The art of prompt design: Prompt boundaries and token healing, 2023. URL https://medium.com/towards-data-science/the-art-of-prompt-design-prompt-boundaries-and-token-healing-3b2448b0be38.", + "Kelly Marchisio, Patrick Lewis, Yihong Chen, and Mikel Artetxe. 
Mini-model adaptation: Efficiently extending pretrained models to new languages via aligned shallow training. In The 61st Annual Meeting Of The Association For Computational Linguistics, 2023.", + "Haspelmath Martin. The indeterminacy of word segmentation and the nature of morphology and syntax. Folia Linguistica, 51(s1000):31-80, 2017. doi: doi:10.1515/flin-2017-1005. URL https://doi.org/10.1515/flin-2017-1005.", + "Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in gpt. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 17359-17372. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/6f1d43d5a82a37e89b0665b33bf3a182-Paper-Conference.pdf.", + "Meta. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783.", + "Sabrina J. Mielke, Zaid Alyafeai, Elizabeth Salesky, Colin Raffel, Manan Dey, Matthias Galle, Arun Raja, Chenglei Si, Wilson Y. Lee, Benoit Sagot, and Samson Tan. Between words and characters: A brief history of open-vocabulary modeling and tokenization in nlp, 2021. URL https://arxiv.org/abs/2112.10508.", + "Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2381-2391, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1260. URL https://aclanthology.org/D18-1260.", + "Benjamin Minixhofer, Fabian Paischer, and Navid Rekabsaz. Wechsel: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 3992–4006, 2022.", + "Benjamin Minixhofer, Edoardo Ponti, and Ivan Vulic. Zero-shot tokenizer transfer. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.", + "Benjamin Minixhofer, Ivan Vulic, and Edoardo Maria Ponti. Universal cross-tokenizer distillation via approximate likelihood matching. arXiv preprint arXiv:2503.20083, 2025.", + "Piotr Nawrot, Jan Chorowski, Adrian Lancucki, and Edoardo Maria Ponti. Efficient transformers with dynamic token pooling. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 6403-6417, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.353. URL https://aclanthology.org/2023.acl-long.353.", + "Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. Investigating the limitations of transformers with simple arithmetic tasks, 2021. URL https://arxiv.org/abs/2102.13019.", + "Team OLMo, Pete Walsh, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Shane Arora, Akshita Bhagia, Yuling Gu, Shengyi Huang, Matt Jordan, Nathan Lambert, Dustin Schwenk, Oyvind Tafjord, Taira Anderson, David Atkinson, Faeze Brahman, Christopher Clark, Pradeep Dasigi, Nouha Dziri, Michal Guerquin, Hamish Ivison, Pang Wei Koh, Jiacheng Liu, Saumya Malik, William Merrill, Lester James V. 
Miranda, Jacob Morrison," + ], + "bbox": [ + 171, + 102, + 826, + 924 + ], + "page_idx": 14 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 488, + 946, + 506, + 959 + ], + "page_idx": 14 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Tyler Murray, Crystal Nam, Valentina Pyatkin, Aman Rangapur, Michael Schmitz, Sam Skjonsberg, David Wadden, Christopher Wilhelm, Michael Wilson, Luke Zettlemoyer, Ali Farhadi, Noah A. Smith, and Hannaneh Hajishirzi. 2 olmo 2 furious, 2024. URL https://arxiv.org/abs/2501.00656.", + "OpenAI. Hello GPT-4o, 2024. URL https://openai.com/index/hello-gpt-4o/.", + "Naoki Otani, Satoru Ozaki, Xingyuan Zhao, Yucen Li, Micaelah St Johns, and Lori Levin. Pre-tokenization of multi-word expressions in cross-lingual word embeddings. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4451-4464, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.360. URL https://aclanthology.org/2020.emnlp-main.360.", + "Artidoro Pagnoni, Ram Pasunuru, Pedro Rodriguez, John Nguyen, Benjamin Muller, Margaret Li, Chunting Zhou, Lili Yu, Jason Weston, Luke Zettlemoyer, Gargi Ghosh, Mike Lewis, Ari Holtzman, and Srinivasan Iyer. Byte latent transformer: Patches scale better than tokens, 2024. URL https://arxiv.org/abs/2412.09871.", + "Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Katrin Erk and Noah A. Smith (eds.), Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1525-1534, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1144. URL https://aclanthology.org/P16-1144.", + "Jackson Petty, Sjoerd van Steenkiste, Fei Sha, Ishita Dasgupta, Dan Garrette, and Tal Linzen. The impact of depth and width on transformer language model generalization. 2023.", + "Buu Phan, Marton Havasi, Matthew Muckley, and Karen Ullrich. Understanding and mitigating tokenization bias in language models, 2024. URL https://arxiv.org/abs/2406.16829.", + "Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. BPE-dropout: Simple and effective subword regularization. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (eds.), Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 1882-1892, Online, July 2020. Association for Computational Linguistics. doi: 10. 18653/v1/2020.acl-main.170. URL https://aclanthology.org/2020.acl-main.170.", + "Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. ProphetNet: Predicting future n-gram for sequence-to-SequencePretraining. In Trevor Cohn, Yulan He, and Yang Liu (eds.), Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 2401-2410, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.217. URL https://aclanthology.org/2020-findings-emnlp.217.", + "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. 
URL https://cdn.openai.com/better-language-models/language Models are unsupervised multitask_learners.pdf.", + "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Jian Su, Kevin Duh, and Xavier Carreras (eds.), Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383-2392, Austin, Texas, November 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1264. URL https://aclanthology.org/D16-1264.", + "Siva Reddy, Danqi Chen, and Christopher D. Manning. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266, 2019. doi: 10.1162/tacl_a_00266. URL https://aclanthology.org/Q19-1016.", + "Marco Tulio Ribeiro. A guidance language for controlling large language models, 2023. URL https://github.com/guidance-ai/guidance?tab=README-ov-file#text-not-tokens." + ], + "bbox": [ + 171, + 102, + 828, + 924 + ], + "page_idx": 15 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "16", + "bbox": [ + 488, + 948, + 508, + 959 + ], + "page_idx": 15 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI) Spring Symposium, 2011.", + "Jessica Rumbelow and Matthew Watkins. Solidgoldmagikarp (plus, prompt generation), 2023. URL https://www.lesswrong.com/posts/aPeJE8bSo6rAfOaLqg/solidgoldmagikarp-plus-prompt-generation.", + "Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: an adversarial winograd schema challenge at scale. Commun. ACM, 64(9):99-106, August 2021. ISSN 0001-0782. URL https://doi.org/10.1145/3474381.", + "Bahar Salehi, Paul Cook, and Timothy Baldwin. A word embedding approach to predicting the compositionality of multiword expressions. In Rada Mihalcea, Joyce Chai, and Anoop Sarkar (eds.), Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 977-983, Denver, Colorado, 2015. Association for Computational Linguistics. doi: 10.3115/v1/N15-1099. URL https://aclanthology.org/N15-1099/.", + "Craig W Schmidt, Varshini Reddy, Haoran Zhang, Alec Alameddine, Omri Uzan, Yuval Pinter, and Chris Tanner. Tokenization is more than compression. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 678-702, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.40. URL https://aclanthology.org/2024.emnlp-main.40.", + "Nathan Schneider, Spencer Onuffer, Nora Kazour, Emily Danchik, Michael T. Mordowanec, Henrietta Conrad, and Noah A. Smith. Comprehensive annotation of multiword expressions in a social web corpus. In Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis (eds.), Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pp. 455-461, Reykjavik, Iceland, May 2014. European Language Resources Association (ELRA). 
URL https://aclanthology.org/L14-1433/.", + "Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Katrin Erk and Noah A. Smith (eds.), Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715-1725, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1162. URL https://aclanthology.org/P16-1162.", + "Anya Sims, Cong Lu, Klara Kaleb, Jakob Nicolaus Foerster, and Yee Whye Teh. Stochastok: Improving fine-grained subword understanding in LLMs. In ICLR 2025 Workshop on Building Trust in Language Models and Applications, 2025. URL https://openreview.net/forum?id=PZnDZdkGsE.", + "Aaditya K. Singh and DJ Strouse. Tokenization counts: the impact of tokenization on arithmetic in frontier llms, 2024. URL https://arxiv.org/abs/2402.14903.", + "Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024.", + "Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4149-4158, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1421. URL https://aclanthology.org/N19-1421.", + "Chaofan Tao, Qian Liu, Longxu Dou, Niklas Muennighoff, Zhongwei Wan, Ping Luo, Min Lin, and Ngai Wong. Scaling laws with vocabulary: Larger models deserve larger" + ], + "bbox": [ + 171, + 102, + 826, + 925 + ], + "page_idx": 16 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 488, + 946, + 506, + 959 + ], + "page_idx": 16 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "vocabies. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.", + "Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Metzler. Scale efficiently: Insights from pre-training and fine-tuning transformers. arXiv preprint arXiv:2109.10686, 2021.", + "Yi Tay, Vinh Q. Tran, Sebastian Ruder, Jai Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, and Donald Metzler. Charformer: Fast character transformers via gradient-based subword tokenization. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=JtBRnr10EFN.", + "Avijit Thawani, Jay Pujara, Filip Ilievski, and Pedro Szekely. Representing numbers in NLP: a survey and a vision. In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou (eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 644-656, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.53. URL https://aclanthology.org/2021.naacl-main.53.", + "Ke Tran. 
From english to foreign languages: Transferring pre-trained language models. arXiv preprint arXiv:2002.07306, 2020.", + "Menan Velayuthan and Kengatharaiyer Sarveswaran. Egalitarian language representation in language models: It all begins with tokenizers. In Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, and Steven Schockaert (eds.), Proceedings of the 31st International Conference on Computational Linguistics, pp. 5987-5996, Abu Dhabi, UAE, January 2025. Association for Computational Linguistics. URL https://aclanthology.org/2025.coling-main.400/.", + "Bandhav Veluri, Justin Chan, Malek Itani, Tuochao Chen, Takuya Yoshioka, and Shyamnath Gollakota. Real-time target sound extraction. In ICASSP, pp. 1-5, 2023. URL https://doi.org/10.1109/ICASSP49357.2023.10094573.", + "Tim Vieira, Ben LeBrun, Mario Giulianielli, Juan Luis Gastaldi, Brian DuSell, John Terilla, Timothy J O'Donnell, and Ryan Cotterell. From language models over tokens to language models over characters. arXiv preprint arXiv:2412.03719, 2024.", + "Junxiong Wang, Tushaar Gangavarapu, Jing Nathan Yan, and Alexander M Rush. Mambabye: Token-free selective state space model. In First Conference on Language Modeling, 2024. URL https://openreview.net/forum?id=X1xNsuKssb.", + "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. Transformers: State-of-the-art natural language processing. In Qun Liu and David Schlangen (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38-45, Online, October 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-demos.6. URL https://aclanthology.org/2020.emnlp-demos.6.", + "Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. Bloomberggpt: A large language model for finance, 2023. URL https://arxiv.org/abs/2303.17564.", + "Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. ByT5: Towards a token-free future with pre-trained byte-to-byte models. Transactions of the Association for Computational Linguistics, 10:291-306, 2022. doi: 10.1162/tacl_a_00461. URL https://aclanthology.org/2022.tacl-1.17." + ], + "bbox": [ + 171, + 102, + 828, + 924 + ], + "page_idx": 17 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "18", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 17 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2369-2380, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1259. 
URL https://aclanthology.org/D18-1259.", + "Lili Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, and Mike Lewis. MEGABYTE: Predicting million-byte sequences with multiscale transformers. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=JTmO2V9Xpz.", + "Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? In Anna Korhonen, David Traum, and Lluis Márquez (eds.), Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4791-4800, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1472. URL https://aclanthology.org/P19-1472.", + "Xinsong Zhang, Pengshuai Li, and Hang Li. AMBERT: A pre-trained language model with multi-grained tokenization. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pp. 421-435, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021-findings-acl.37. URL https://aclanthology.org/2021-findings-acl.37.", + "Wanjun Zhong, Siyuan Wang, Duyu Tang, Zenan Xu, Daya Guo, Yining Chen, Jiahai Wang, Jian Yin, Ming Zhou, and Nan Duan. Analytical reasoning of text. In Marine Carpuat, Marie-Catherine de Marneffé, and Ivan Vladimir Meza Ruiz (eds.), Findings of the Association for Computational Linguistics: NAACL 2022, pp. 2306-2319, Seattle, United States, July 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-naacl.177. URL https://aclanthology.org/2022-findings-naacl.177.", + "Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. AGIEval: A human-centric benchmark for evaluating foundation models. In Kevin Duh, Helena Gomez, and Steven Bethard (eds.), Findings of the Association for Computational Linguistics: NAACL 2024, pp. 2299-2314, Mexico City, Mexico, June 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-naacl.149. URL https://aclanthology.org/2024.findings-naacl.149." + ], + "bbox": [ + 171, + 102, + 828, + 602 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "A Experimental setup details", + "text_level": 1, + "bbox": [ + 171, + 625, + 450, + 643 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "A.1 Tokenizer training", + "text_level": 1, + "bbox": [ + 171, + 655, + 357, + 672 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "We use the HuggingFace tokenizers (Wolf et al., 2020) library for tokenizer training.", + "bbox": [ + 171, + 681, + 782, + 698 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "A.1.1 Tokenizer training data", + "text_level": 1, + "bbox": [ + 171, + 710, + 401, + 727 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "We produce the tokenizer training data by sampling documents uniformly at random from the OLMO2 stage 2 pretraining data, referred to as olmo-mix. We use a 10 GB subset because early experiments showed that data beyond even $\\sim 10$ MB does not make a difference in the resulting tokenizer's encoding efficiency.", + "bbox": [ + 171, + 734, + 823, + 792 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "We found that olmo-mix had several extremely long documents, with the longest $1\\%$ of documents making up $15\\%$ of the data. 
In particular, a full academic paper (specifically Veluri et al., 2023) is duplicated 2,224 times back-to-back inside one document (as delimited by special EOS tokens). Because our tokenizers are trained on small sets of data, these extremely long documents can take up a large proportion of the data, resulting in unusual tokens like chunk-based processing. To circumvent possible data duplication issues, we truncate the longest $1\\%$ of documents in the tokenizer training data to the 99th percentile of document lengths. As future practitioners train SuperBPE tokenizers, we encourage special attention to deduplication, which may have an outsized impact on SuperBPE tokenizers.",
"bbox": [
169,
797,
826,
925
],
"page_idx": 18
},
{
"type": "header",
"text": "Published as a conference paper at COLM 2025",
"bbox": [
171,
32,
517,
47
],
"page_idx": 18
},
{
"type": "page_number",
"text": "19",
"bbox": [
488,
946,
508,
959
],
"page_idx": 18
},
{
"type": "text",
"text": "A.1.2 Limit on the size of superword tokens",
"text_level": 1,
"bbox": [
171,
103,
503,
119
],
"page_idx": 19
},
{
"type": "text",
"text": "Even after truncating the longest $1\\%$ of documents, we found that SuperBPE tokenizers can still have extremely long tokens consisting of highly duplicated boilerplate text such as the Project Gutenberg license or common internet phrases such as You’re commenting using your. This issue is already present in BPE tokenizers trained on Chinese, which contain sentence-long tokens clearly taken from pornographic content. For instance, tokens in GPT-4O’s tokenizer include 最新高清无码 = latest HD uncensored and 娱乐网址 = entertainment website. To prevent concerns about the tokenizer directly revealing parts of the training data (Hayase et al., 2024), we enforce an upper bound of 4 words in our tokens. Empirically, we found that this had no measurable impact on the encoding efficiency of the tokenizers or the resulting trained LMs.",
"bbox": [
169,
128,
826,
270
],
"page_idx": 19
},
{
"type": "text",
"text": "A.1.3 Pretokenization rules",
"text_level": 1,
"bbox": [
171,
285,
385,
299
],
"page_idx": 19
},
{
"type": "text",
"text": "We implement whitespace pretokenization with the default regex string from tokenizers, which was adopted by the GPT-2 tokenizer.",
"bbox": [
169,
309,
823,
338
],
"page_idx": 19
},
{
"type": "equation",
"text": "\n$$\n ?\\p{L}+| ?[^\\s\\p{L}\\p{N}]+|\\s+(?!\\S)|\\s+\n$$\n",
"text_format": "latex",
"bbox": [
179,
351,
511,
366
],
"page_idx": 19
},
{
"type": "text",
"text": "Note that the original GPT-2 pretokenization regex string also splits on contractions, e.g., splitting I'm into [I, 'm]. Since this choice is not universal among commercial tokenizers and is not related to whitespace pretokenization (and furthermore creates plenty of undesirable edge cases [Land, 2024]), we do not include this rule.",
"bbox": [
169,
378,
823,
436
],
"page_idx": 19
},
{
"type": "text",
"text": "Independently of whitespace pretokenization (i.e., for both BPE and SuperBPE tokenizers), we follow recent convention (as introduced by GPT-3.5 and borrowed by LLAMA3, OLMo2) and pretokenize digits into blocks of 3. We make one modification by grouping digits into 3 from the right rather than from the left, so that, e.g., 1000 would be pretokenized as [1, 000] instead of [100, 0]. This choice was recently found to yield improved performance on math benchmarks, even when applied solely at inference time (Singh & Strouse, 2024). Digit pretokenization is enforced with the following regex.",
"bbox": [
169,
441,
826,
541
],
"page_idx": 19
},
{
"type": "equation",
"text": "\n$$\n(?=(\\d{3})+(?!\\d))\n$$\n",
"text_format": "latex",
"bbox": [
181,
551,
330,
568
],
"page_idx": 19
},
{
"type": "text",
"text": "A.1.4 Special casing of colon",
"text_level": 1,
"bbox": [
171,
583,
395,
599
],
"page_idx": 19
},
{
"type": "text",
"text": "In order to make our tokenizer compatible with the common question-answering format where the prompt ends with a colon and the continuation is expected to start with a space, we \"special-case\" the colon by preventing the algorithm from learning any tokens that contain \": \" (a colon followed by a space) as a substring. Without this fix, common question/answer prompts might produce distorted distributions over completions. Please see $\\S C.3$ for further discussion. This affects the resulting tokenizer minimally in terms of the learned vocabulary.",
"bbox": [
169,
607,
826,
694
],
"page_idx": 19
},
{
"type": "text",
"text": "A.2 Scaling model configurations",
"text_level": 1,
"bbox": [
171,
709,
434,
724
],
"page_idx": 19
},
{
"type": "text",
"text": "When matching inference compute, the goal is to match the average flops per byte of generated text between two models with different tokenizers. To do so, we scale the model up to cancel the effect of longer tokens, which requires precise control over the model's size. To produce a model config with an arbitrary inference compute cost, we first represent the inference flops per token as a polynomial in terms of the model dimension, MLP hidden dimension, and number of layers. Conveniently, once the model dimension and number of layers are chosen, the flops are affine in the MLP hidden dimension, so we can easily solve for the MLP hidden dimension that gets us closest to the desired budget. We fix the head dimension to that of the base model.",
"bbox": [
169,
734,
826,
861
],
"page_idx": 19
},
{
"type": "text",
"text": "To find the best config overall, we grid search over the hidden dimension (which must remain a multiple of the head dimension) and number of layers, solving for the MLP hidden dimension at each step. We choose the config which expands the transformer by the most uniform factors. This is measured by taking the ratios of the current parameters with the",
"bbox": [
169,
867,
823,
925
],
"page_idx": 19
},
{
"type": "header",
"text": "Published as a conference paper at COLM 2025",
"bbox": [
171,
32,
517,
47
],
"page_idx": 19
},
{
"type": "page_number",
"text": "20",
"bbox": [
488,
946,
508,
959
],
"page_idx": 19
},
{
"type": "text",
"text": "base config's parameters, applying the logarithm, and taking the standard deviation. 
While prior work has explored the best way to scale transformer models (Tay et al., 2021; Petty et al., 2023), we believe that scaling all parameters uniformly is reasonable since we are only increasing the model size by a small amount.", + "bbox": [ + 169, + 103, + 823, + 160 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "We present the exact model hyperparameters for all model sizes used in our experiments in Table 4.", + "bbox": [ + 169, + 166, + 823, + 194 + ], + "page_idx": 20 + }, + { + "type": "table", + "img_path": "images/f06d843bf9b139148351b3eacedc5b01c1008f2f1ee0395ed7672c979ff3e3ba.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
| 680M | 910M | 1.9B | 2.5B | 8B | 11B
Parameter count | 678.2M | 912.5M | 1.893B | 2.536B | 8.115B | 11.30B
Model dimension | 1024 | 1,216 | 2,048 | 2,304 | 4,096 | 4,608
MLP hidden dimension | 8,192 | 9,728 | 16,384 | 18,432 | 22,016 | 24,704
Head dimension | 64 | 64 | 128 | 128 | 128 | 128
Number of heads | 16 | 19 | 16 | 18 | 32 | 36
Number of layers | 16 | 18 | 16 | 19 | 32 | 37
Vocabulary size | 200,005 | 200,005 | 200,005 | 200,005 | 200,005 | 200,005
", + "bbox": [ + 179, + 203, + 818, + 321 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Table 4: Model parameters for all model sizes. Model sizes 910M, 2.5B, and 11B are scaled versions of 680M, 1.9B, and 8B respectively. All other parameters match those of OLMO 300M (from the OLMO model ladder) for sizes 680M and 910M, OLMO 1B for sizes 1.9B and 2.5B, or OLMO2 7B for sizes 8B and 11B, respectively. Maximum sequence length values for various tokenizers are listed in Table 2.", + "bbox": [ + 169, + 330, + 823, + 401 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "A.3 Compute used for model training", + "text_level": 1, + "bbox": [ + 171, + 434, + 467, + 450 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "All models were pretrained on $328\\times \\mathrm{H}100$ nodes.", + "bbox": [ + 169, + 460, + 529, + 474 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "A.4 Evaluation Suite", + "text_level": 1, + "bbox": [ + 171, + 492, + 341, + 506 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Our evaluation suite builds on DataComp-LM's core evaluation of 22 tasks (Li et al., 2024), which was found to provide low-variance signal of learning. We add 8 more popular tasks (e.g., MMLU, GSM8K) while also covering string manipulation tasks (e.g., CUTE), which are known to be challenging for LMs due to their tokenizers.", + "bbox": [ + 169, + 517, + 826, + 575 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "All evaluations are based on decoding from the model and scoring the generation by either comparing it to the ground truth or evaluating its functional correctness (in the case of coding tasks). For multiple choice (MC) tasks, we check whether the predicted answer choice is an exact match (EM) to the target (we observe that effectively $100\\%$ of model generations are valid answer choices, especially for later checkpoints). For open-ended tasks, we check whether the generated output contains the ground truth answer exactly, and for coding tasks, we report pass@10.", + "bbox": [ + 169, + 580, + 825, + 679 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "We provide 5 in-context examples for all tasks, except for CoQA, which naturally contains in-context examples in the conversational context, and the coding tasks (HumanEval and MBPP), which are evaluated zero-shot following prior work. We use a maximum of 5,000 examples from each dataset, though some datasets contain much fewer examples. BB below stands for BIG-Bench.", + "bbox": [ + 169, + 685, + 823, + 755 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "ARC consists of 4-way MC questions from grades 3-9 science exams. 
It contains two splits, ARC-Easy, which require knowledge of basic science, and ARC-Challenge, which require some procedural reasoning (Clark et al., 2018).", + "bbox": [ + 169, + 770, + 823, + 813 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "Arithmetic contains simple arithmetic problems (Brown et al., 2020).11 We use the 2da, 2dm, and 2ds splits for addition, multiplication, and division of (up to) 2-digit numbers.", + "bbox": [ + 169, + 828, + 823, + 859 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "BoolQ contains naturally occurring yes/no questions paired with passages that provide an answer (Clark et al., 2019).", + "bbox": [ + 169, + 872, + 823, + 902 + ], + "page_idx": 20 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 20 + }, + { + "type": "page_footnote", + "text": "11https://huggingface.co/datasets/EleutherAI/arithmetic", + "bbox": [ + 184, + 907, + 591, + 922 + ], + "page_idx": 20 + }, + { + "type": "page_number", + "text": "21", + "bbox": [ + 488, + 946, + 506, + 959 + ], + "page_idx": 20 + }, + { + "type": "text", + "text": "CommonsenseQA contains 5-way MC questions that require commonsense knowledge to answer (Talmor et al., 2019).", + "bbox": [ + 169, + 103, + 823, + 132 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "COPA contains two-way MC questions about cause and effect (Roemmle et al., 2011; Kavumba et al., 2019).", + "bbox": [ + 169, + 148, + 826, + 176 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "CoQA consists of passages with a series of conversational questions about the passage Reddy et al. (2019). Each question requires the prior conversational context, due to possible coreference across questions. Because these contextual questions naturally serve as in-context examples, we do not provide additional in-context examples.", + "bbox": [ + 169, + 193, + 826, + 251 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "BB CS Algorithms consists of two subtasks, determining whether a given series of parentheses is balanced and identifying the longest common subsequence in two letter strings (BIG-bench, 2023).", + "bbox": [ + 169, + 266, + 826, + 309 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "CUTE contains questions that require the model to understand and manipulate spelling, such as replacing all instances of a particular letter in a word with another letter (Edman et al., 2024).", + "bbox": [ + 169, + 324, + 826, + 367 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "DROP contains questions about passages, potentially requiring reasoning over multiple pieces of information in the passage (Dua et al., 2019).", + "bbox": [ + 169, + 383, + 823, + 412 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "BB Dyck Languages consists of a sequence of parentheses and requires the model to predict the correct sequence of closing parentheses so that the entire sequence is well-balanced.", + "bbox": [ + 169, + 428, + 826, + 470 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "GSM8K contains grade school math word problems that require between 2 and 8 steps to solve. In the in-context examples, we provide the answer passage that contains intermediate steps with calculator annotations removed. 
The model is expected to provide the final numerical answer after four hashtags (# ###) that delimit the reasoning and final answer (Cobbe et al., 2021).", + "bbox": [ + 169, + 486, + 823, + 556 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "HellaSwag contains 4-way MC questions which ask for the most natural continuation given the context (Zellers et al., 2019).", + "bbox": [ + 169, + 573, + 823, + 602 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "HotpotQA contains questions along with a corresponding passage from Wikipedia containing the answer (Yang et al., 2018).", + "bbox": [ + 169, + 618, + 826, + 647 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "HumanEval contains programming problems where the model is tasked with completing a Python function given its docstring (Chen et al., 2021). We use \"\\nclass,\" \"\\ndef,\" \"\\n#,\"\\nif,\" as stop tokens. Following the original paper, we sample 20 continuations with top $p = 0.95$ and temperature $= 0.8$ . Models are allowed to generate for a maximum of 128 new tokens. The functional correctness of generations is automatically evaluated using test cases. We use the 20 generation to make an unbiased estimate of the pass@10 rate, i.e., how likely at least one of 10 sampled solutions for a problem is correct.", + "bbox": [ + 169, + 662, + 826, + 762 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Jeopardy contains open-ended questions from the \"Jeopardy!\" quiz show.[12]", + "bbox": [ + 169, + 777, + 733, + 795 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "Lambada contains narratives without the last word, which is inferrable given the context (Paperno et al., 2016). This task requires models to attend to the full narrative instead of only the local context.", + "bbox": [ + 169, + 810, + 826, + 853 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "BB Language Identification contains sentences in different languages, and the task is to choose the language of the sentence from a long list of options.", + "bbox": [ + 169, + 869, + 823, + 898 + ], + "page_idx": 21 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 21 + }, + { + "type": "page_footnote", + "text": "12https://www.kaggle.com/datasets/tunguz/200000-jeopardy-questions", + "bbox": [ + 184, + 907, + 673, + 924 + ], + "page_idx": 21 + }, + { + "type": "page_number", + "text": "22", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 21 + }, + { + "type": "text", + "text": "LSAT-AR contains MC questions that evaluate the analytical reasoning (AR) ability of LMs (Zhong et al., 2022; 2024). Test questions are drawn from the Law School Admission Test (LSAT) from 1991 to 2016.", + "bbox": [ + 169, + 103, + 823, + 147 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "MBPP contains Python programming problems where the model is given a description of the desired function and a series of unit tests. We use the same evaluation setup as HumanEval.", + "bbox": [ + 169, + 161, + 823, + 203 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "MMLU contains 4-way MC questions covering 57 different domains, covering both world knowledge and problem-solving abilities (Hendrycks et al., 2021). 
Note that we report a straight average over the 5000-example sample, rather than a macro-average over subjects.", + "bbox": [ + 169, + 219, + 825, + 263 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "OpenbookQA contains 4-way MC questions that require multi-step reasoning and commonsense knowledge (Mihaylov et al., 2018).", + "bbox": [ + 169, + 277, + 826, + 308 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "BB Operators contains questions where the model is given a function definition and asked to compute the output of that function given a particular input.", + "bbox": [ + 171, + 321, + 823, + 353 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "PIQA contains MC questions that require physical commonsense reasoning (Bisk et al., 2020).", + "bbox": [ + 169, + 366, + 825, + 395 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "BB Repeat-Copy-Logic contains instructions that ask the model to produce a particular string (Austin et al., 2021).", + "bbox": [ + 169, + 410, + 823, + 440 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "SQuAD contains passages paired with questions about the passage (Rajpurkar et al., 2016). The answer is always a span from the passage.", + "bbox": [ + 169, + 455, + 826, + 484 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "TriviaQA contains open-ended questions about world knowledge (Joshi et al., 2017).", + "bbox": [ + 171, + 500, + 795, + 515 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "BB WikidataQA require models to complete factual statements with the correct continuation.", + "bbox": [ + 169, + 529, + 826, + 558 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Winograd contains binary MC questions where the model is given a context and asked to determine which entity a pronoun refers to, between two options (Levesque et al., 2012). Correctly answer the question requires commonsense knowledge and contextual reasoning.", + "bbox": [ + 169, + 574, + 826, + 617 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "Winogrande contain questions with the same schema as Winograd, but increases both the scale and difficulty of the dataset (Sakaguchi et al., 2021).", + "bbox": [ + 169, + 632, + 823, + 662 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "B Additional Results", + "text_level": 1, + "bbox": [ + 171, + 681, + 377, + 698 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "B.1 How BPB varies with context length", + "text_level": 1, + "bbox": [ + 171, + 713, + 483, + 729 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "In our main experiments (§3), we adjust the context size of SuperBPE models to match the effective context size of the BPE model in raw text. To justify this design choice, we show that the next token becomes easier to predict as a function of the preceding context in bytes (not tokens). 
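Here bits-per-byte is the usual byte-normalized loss: assuming an average per-token cross-entropy $\\mathcal{L}$ in nats over $N_{\\mathrm{tokens}}$ tokens spanning $N_{\\mathrm{bytes}}$ bytes of raw text, $\\mathrm{BPB} = \\frac{N_{\\mathrm{tokens}} \\cdot \\mathcal{L}}{N_{\\mathrm{bytes}} \\cdot \\ln 2}$, so models with different tokenizers (and hence different token counts for the same text) remain directly comparable.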
Figure 6 shows the average BPB at every token index (left) vs byte index (right) — when measured at fixed token indices, SuperBPE has an advantage from seeing more context (achieving lower loss on average at the same token index), whereas at fixed byte indices, this advantage goes away.", + "bbox": [ + 168, + 739, + 823, + 839 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "B.2 Task evaluation", + "text_level": 1, + "bbox": [ + 171, + 854, + 333, + 869 + ], + "page_idx": 22 + }, + { + "type": "text", + "text": "We report the individual task performance of BPE and all SuperBPE models in Table 5 (this an expansion of Table 1). We also show a subset of task-specific performance curves during pretraining in Figure 12.", + "bbox": [ + 169, + 881, + 823, + 925 + ], + "page_idx": 22 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 22 + }, + { + "type": "page_number", + "text": "23", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 22 + }, + { + "type": "image", + "img_path": "images/84f2007d154f77661b1483fc6a7996163488e1e8ff828b06402921ad2557ab4c.jpg", + "image_caption": [ + "(a) BPB vs tokens of context" + ], + "image_footnote": [], + "bbox": [ + 199, + 111, + 477, + 287 + ], + "page_idx": 23 + }, + { + "type": "image", + "img_path": "images/c421e5a30f24ec4e8a6d113c372482b96b32219f88c00b1e3e277b791450fae3.jpg", + "image_caption": [ + "(b) BPB vs bytes of context", + "Figure 6: When comparing the normalized loss of the next token, controlling for preceding tokens of context gives SuperBPE an advantage, while controlling for bytes of context gives a close match between BPE and SuperBPE." + ], + "image_footnote": [], + "bbox": [ + 513, + 111, + 790, + 287 + ], + "page_idx": 23 + }, + { + "type": "table", + "img_path": "images/37a1775229b0c6dc7311ec0893d78fc5a542f3c4c1c2fade67db25d48e8e2c30.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Category | Task | BPE 8B | SuperBPE 8B | SuperBPE 8B | SuperBPE 8B | SuperBPE 11B
 |  |  | t = 80k | t = 160k | t = 180k | 
Knowledge | ARC-Easy (MC) | 46.6 | 60.8 | 63.6 | 67.1 | 60.6
 | ARC-Challenge (MC) | 35.1 | 46.4 | 43.9 | 50.6 | 43.9
 | Jeopardy (EM) | 42.1 | 40.2 | 41.8 | 41.8 | 42.2
 | MMLU (MC) | 36.5 | 41.9 | 42.6 | 44.7 | 41.0
 | OpenbookQA (MC) | 33.2 | 49.8 | 49.4 | 54.4 | 46.4
 | TriviaQA (EM) | 60.6 | 59.7 | 61.9 | 61.3 | 62.3
 | WikidataQA (EM) | 69.7 | 68.2 | 69.5 | 70.9 | 70.9
Math & Reasoning | Arithmetic (EM) | 54.8 | 63.2 | 58.6 | 59.3 | 56.9
 | GSM8K (EM) | 6.4 | 6.9 | 6.7 | 6.7 | 7.4
 | LSAT-AR (MC) | 21.3 | 23.9 | 24.3 | 23.0 | 20.9
 | Operators (EM) | 35.5 | 32.2 | 35.5 | 33.6 | 37.9
 | Repeat-Copy-Logic (EM) | 3.1 | 6.2 | 6.2 | 6.2 | 3.1
Coding | HumanEval (pass@10) | 15.9 | 15.0 | 14.4 | 13.4 | 15.9
 | MBPP (pass@10) | 27.5 | 25.3 | 28.4 | 28.3 | 29.4
Reading Comprehension | BoolQ (MC) | 59.7 | 65.2 | 62.3 | 64.6 | 64.7
 | CoQA (EM) | 12.6 | 12.8 | 12.5 | 13.2 | 13.1
 | DROP (EM) | 31.3 | 28.6 | 32.8 | 31.4 | 33.1
 | HotpotQA (EM) | 53.5 | 52.5 | 54.7 | 55.2 | 54.6
 | SQuAD (EM) | 75.1 | 74.3 | 76.2 | 75.8 | 77.2
Commonsense | CommonsenseQA (MC) | 33.5 | 50.0 | 52.3 | 53.8 | 50.5
 | COPA (MC) | 77.0 | 86.6 | 87.6 | 85.8 | 97.0
 | PIQA (MC) | 55.2 | 57.7 | 61.8 | 59.8 | 59.2
 | Winograd (MC) | 50.4 | 52.5 | 55.2 | 53.1 | 52.3
 | Winogrande (MC) | 47.3 | 51.2 | 51.6 | 52.6 | 50.2
Language Understanding | HellaSwag (MC) | 29.7 | 31.2 | 30.3 | 33.7 | 36.6
 | LAMBADA (EM) | 77.0 | 72.8 | 75.1 | 70.6 | 75.8
 | Language Identification (EM) | 8.8 | 10.2 | 9.7 | 9.0 | 10.1
String Manipulation | CS Algorithms (EM) | 46.1 | 47.3 | 42.6 | 48.6 | 49.1
 | CUTE (EM) | 31.3 | 32.2 | 32.8 | 32.6 | 35.7
 | Dyck-Languages (EM) | 15.9 | 23.2 | 18.8 | 14.2 | 16.7
Average |  | 39.8 | 42.9 | 43.4 | 43.8 | 43.8
", + "bbox": [ + 194, + 402, + 802, + 795 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "Table 5: Performance of BPE and SuperBPE models on 30 downstream tasks. This is an expansion of Table 1 with more models.", + "bbox": [ + 169, + 805, + 823, + 835 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "B.3 BPB evaluation", + "text_level": 1, + "bbox": [ + 171, + 876, + 330, + 888 + ], + "page_idx": 23 + }, + { + "type": "text", + "text": "See Figure 7 for the bits-per-byte during pretraining of all models we compare.", + "bbox": [ + 169, + 907, + 740, + 925 + ], + "page_idx": 23 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 23 + }, + { + "type": "page_number", + "text": "24", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 23 + }, + { + "type": "image", + "img_path": "images/ad02fa6005023c7a5ffcddcd0f09007a757f860bb2f9c15ab4baf49e6e43bf72.jpg", + "image_caption": [ + "Figure 7: Bits-per-byte of BPE and SuperBPE models during pretraining. The BPE 8B, SuperBPE 8B ( $t = 180\\mathrm{k}$ ), and SuperBPE 11B attain 0.7465, 0.7482, and 0.7445 BPB respectively at the end of training." + ], + "image_footnote": [], + "bbox": [ + 269, + 99, + 728, + 364 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "B.4 Additional scaling experiments", + "text_level": 1, + "bbox": [ + 171, + 446, + 452, + 463 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Our tokenizer has several interesting interactions with LM scaling, purely due to its increased efficiency. For the purpose of this section, let $\\alpha$ denote the ratio of our tokenizer's efficiency to the efficiency of a normal BPE tokenizer. (For example, we have $\\alpha \\approx 1.49$ for our most efficient tokenizer.)", + "bbox": [ + 169, + 473, + 826, + 531 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "The primary advantage of a more efficient tokenizer is a reduction of the context length (in tokens) for the same effective context length (in bytes). All other things being equal, this gives:", + "bbox": [ + 169, + 536, + 825, + 582 + ], + "page_idx": 24 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. A $1 / \\alpha^2$ reduction in attention compute.", + "2. A $1 / \\alpha$ reduction in non-attention compute.", + "3. A $1 / \\alpha$ reduction in activation memory during training and KV-cache size during inference." + ], + "bbox": [ + 207, + 592, + 823, + 662 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Thus, if the context length is short, the total compute savings will be close to $1 / \\alpha$ . For longer contexts, the compute savings may approach $1 / \\alpha^2$ . Given a fixed training budget, there are two natural ways to convert these savings into improved performance.", + "bbox": [ + 169, + 676, + 823, + 722 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "B.4.1 Matching model parameter count", + "text_level": 1, + "bbox": [ + 171, + 738, + 470, + 753 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "In many applications of language models, such as deployment to consumer or edge devices, it is crucial to keep the model's size under control. In this regime, we will assume the model size fixed. 
This directly grants the aforementioned benefits, and we will turn to increasing the number of training steps to match the training budget.", + "bbox": [ + 169, + 763, + 826, + 821 + ], + "page_idx": 24 + }, + { + "type": "text", + "text": "Since the amount of text seen per step is remains the same due to the fixed effective context length, a more efficient tokenizer allows the model to see more text during training for the same budget. This may lead to improved performance on downstream tasks since the model is more likely to have seen relevant training examples during training. Additionally, although the model is the same size, it requires less compute and memory at inference time to perform the same tasks. In some settings, these gains can be used to amplify inference-time scaling (Snell et al., 2024), leading to further potential gains.", + "bbox": [ + 169, + 825, + 826, + 926 + ], + "page_idx": 24 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 24 + }, + { + "type": "page_number", + "text": "25", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 24 + }, + { + "type": "image", + "img_path": "images/d2a85f429d465bfed06712481ee84da313cf21f2e0393cdc138c69019561edbb.jpg", + "image_caption": [ + "Figure 8: Results for scaling both model parameters and train tokens proportionally. Compared to the BPE baseline, we consider a SuperBPE model that matches parameter count and a SuperBPE model that matches inference compute. Here we see the spread between the three settings decreases with scale." + ], + "image_footnote": [], + "bbox": [ + 279, + 130, + 684, + 363 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "B.4.2 Matching inference compute", + "text_level": 1, + "bbox": [ + 171, + 469, + 433, + 486 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "In other applications of language models, model size is less critical compared to inference compute. In these situations, it may be more desirable to scale the model size up to absorb the extra compute.", + "bbox": [ + 169, + 498, + 823, + 542 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "Changing the model size has a strong impact on scaling. Depending on the context length, we may scale the model by a factor of anywhere between $\\alpha$ and $\\alpha^2$ in order to match inference compute. Since each training step involves $1 / \\alpha$ as many tokens, the ratio of tokens to model parameters per step may be reduced by as much as $1 / \\alpha^3$ . Prior work on LM scaling (Hoffmann et al., 2022; Kaplan et al., 2020) reports diminishing gains once the ratio of the numbers of train tokens and model parameters becomes too large. An $\\alpha$ times more efficient tokenizer allows us to train for up to $\\alpha^3$ times longer while maintaining the same token/parameter ratio and without increasing inference compute, delaying the regime of diminishing gains.", + "bbox": [ + 169, + 547, + 826, + 681 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "B.4.3 Experiments", + "text_level": 1, + "bbox": [ + 171, + 705, + 318, + 722 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "We train 680M and 1.9B sized BPE models on various numbers of tokens—ranging from $\\approx 20$ to $\\approx 80$ tokens per parameter—to establish a baseline scaling trend. 
We then train two models with SuperBPE tokenizers for each baseline model: one with matching parameter count and one with matching inference compute cost.", + "bbox": [ + 169, + 734, + 823, + 792 + ], + "page_idx": 25 + }, + { + "type": "text", + "text": "There are a couple interesting ways to visualize these results: in Figure 5, we hold the model size fixed and increase the number of training tokens, and in Figure 8, we hold the ratio of train tokens to model parameters fixed (inference compute matched will be fixed 0.7 times lower) and vary both the model size and the number of training tokens. The general trends observed from these results are that matching inference compute is almost universally the best, while matching parameter count tends to be worse than the baseline except in the undertrained regime, where it is better than the baseline. The differences between the different settings increase with overtraining, but decreases when scaling both model size and training tokens at the same time.", + "bbox": [ + 169, + 797, + 826, + 925 + ], + "page_idx": 25 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 32, + 517, + 47 + ], + "page_idx": 25 + }, + { + "type": "page_number", + "text": "26", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 25 + }, + { + "type": "image", + "img_path": "images/cc59d2530b2c5247d53eff05c8060a576208af2d40335dd18a2dadc84eec54fc.jpg", + "image_caption": [ + "(a) Superword density" + ], + "image_footnote": [], + "bbox": [ + 184, + 99, + 491, + 299 + ], + "page_idx": 26 + }, + { + "type": "image", + "img_path": "images/e5c63215364f50b2bc6e27f939f21909279826c96d46946a07ccb15c4a36f755.jpg", + "image_caption": [ + "(b) Superword length distribution", + "Figure 9: (Left) The number of superword tokens in a SuperBPE tokenizer, as a function of the transition point. A superword token is any token that violates the whitespace pretokensization rule from Stage 1. With an early transition point of $t = 60\\mathrm{K}$ , about $85\\%$ of the tokens learned in Stage 2 are superword tokens. For $t > 100\\mathrm{k}$ , close to $100\\%$ of Stage 2 tokens are superwords. (Right) The distribution of superword token lengths in terms of number of words, for $t = 180\\mathrm{k}$ ." + ], + "image_footnote": [], + "bbox": [ + 516, + 99, + 810, + 299 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "C Analysis of SuperBPE Tokenizers", + "text_level": 1, + "bbox": [ + 169, + 441, + 509, + 460 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "C.1 Superword token analysis", + "text_level": 1, + "bbox": [ + 169, + 474, + 413, + 491 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "How many superword tokens are in SuperBPE tokenizers? While the second stage of the pretokenization curriculum allows learning of superword tokens, subword tokens can still be learned. Shown in Figure 9a, for transition points $t < 80\\mathrm{k}$ , the number of superword tokens is relatively steady around 120k. Past $t > 100\\mathrm{k}$ , almost all tokens learned in Stage 2 are superword tokens. 
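The superword counts in Figure 9a come down to a simple vocabulary scan: a token counts as a superword if it contains whitespace anywhere other than a single leading space. A minimal sketch follows; the toy vocabulary and the exact leading-space convention are assumptions for illustration only.

```python
# Count superword tokens: any token that would violate whitespace
# pretokenization, i.e. contains whitespace beyond an optional leading space.
def is_superword(token: str) -> bool:
    body = token[1:] if token.startswith(" ") else token
    return any(ch.isspace() for ch in body)

vocab = [" the", " by the way", "ization", " in the long run", " of"]  # made-up sample
superwords = [t for t in vocab if is_superword(t)]
print(f"{len(superwords)}/{len(vocab)} superword tokens: {superwords}")
```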
Figure 9b shows the number of whitespace-delimited words in the superword tokens of SuperBPE with $t = 180\\mathrm{k}$ .", + "bbox": [ + 169, + 502, + 823, + 587 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "C.2 Analysis of token frequencies in encoding", + "text_level": 1, + "bbox": [ + 169, + 606, + 532, + 622 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "We also analyze token frequency statistics under BPE versus SuperBPE tokenizers. Figure 10a shows the relation between token rank (in frequency) and frequency. While tokens in BPE demonstrate a standard Zipfian relation, the slope of SuperBPE curves have a more shallow slope, meaning that the rate of decay in token frequency is smaller. The smaller proportion of tokens with very low counts may reduce prevalence and severity of glitch tokens (Rumbelow & Watkins, 2023; Land & Bartolo, 2024).", + "bbox": [ + 169, + 632, + 826, + 717 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "Figure 10b shows the minimum number of tokens from the vocabulary needed to cover any given proportion of data. For BPE, the relation is striking—only $57\\%$ of tokens are needed to encode $99\\%$ of the data! The remaining tokens make up a long tail of infrequent tokens. In contrast, SuperBPE tokenizers make better use of the vocabulary. For $t = 80k$ and $t = 180k$ , this statistic is $90\\%$ and $70\\%$ of tokens, respectively.", + "bbox": [ + 169, + 723, + 826, + 795 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "C.3 Distributional Distortion at the Prompt Boundary", + "text_level": 1, + "bbox": [ + 169, + 813, + 586, + 829 + ], + "page_idx": 26 + }, + { + "type": "text", + "text": "Prior work (Lundberg, 2023; Phan et al., 2024) has shown that LMs using BPE tokenizers may produce distorted generations due to the forced partition in tokenization between a prompt and its completion. This issue stems from the fact that users typically desire completions conditioned on a text prompt. The natural approach to obtaining such completions is to take the prompt, tokenize it with the proper tokenizer, and then sample a completion of the resulting token sequence from the LM.", + "bbox": [ + 169, + 839, + 823, + 925 + ], + "page_idx": 26 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 26 + }, + { + "type": "page_number", + "text": "27", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 26 + }, + { + "type": "image", + "img_path": "images/8aeab79b4aa06d677e7e186c851179525d53cd6bf84089fe9de1423c2d97e1a4.jpg", + "image_caption": [ + "(a) Token frequency distribution" + ], + "image_footnote": [], + "bbox": [ + 183, + 99, + 493, + 297 + ], + "page_idx": 27 + }, + { + "type": "image", + "img_path": "images/8dd245757e3a67c7d585b6c9e039884acbbdc7b9e840504fdd32355ec36bea42.jpg", + "image_caption": [ + "(b) Data covering", + "Figure 10: (Left) Token counts when ordered by frequency. The rate of decay in token frequency is smaller. (Right) The minimum number of tokens needed to cover a given proportion of the data. SuperBPE tokenizers make better use of the vocabulary, while BPE tokenizers have a long tail of infrequent tokens." + ], + "image_footnote": [], + "bbox": [ + 500, + 99, + 810, + 299 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "For a simple example of how this can go wrong, consider a tokenizer with base vocabulary of A and B and a single merge forming the token AB. 
Let's suppose we trained a model using this tokenizer on the strings \"AA\", \"AB\", and \"BB\" with equal proportions. If we condition on the text prefix \"A\", there are two equally probable continuations: \"A\" and \"B\". However, A is the only valid completion of the token prefix A, since the token B never follows the token A during training. In other words, the prompt-completion pair (A,B) is canonically tokenized using a token that crosses the boundary between the prompt and the completion.", + "bbox": [ + 169, + 414, + 826, + 513 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "While this problem is shared by all BPE tokenizers, it can be partially mitigated by pre-tokensization: if the prompt and the completion are separated during the pretokenization step, then it is impossible for a token to cross the boundary. This fix tends to work well for English, where the completion is typically expected to begin with whitespace, so whitespace pretokenization would apply. However, there are many settings where whitespace pretokenization cannot fix the underlying issue, including natural languages that do not use whitespace to separate words (like Chinese and Japanese), programming languages, and constrained generation (Lundberg, 2023; Ribeiro, 2023).", + "bbox": [ + 169, + 518, + 826, + 632 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "Several fixes for this issue have been proposed: at training time, token merges can be randomly dropped (Provilkov et al., 2020; Sims et al., 2025; DeepSeek-AI, 2025) to expose LMs to the internal makeup of tokens; at inference time, options include token healing (Lundberg, 2023), algorithmic correction (Phan et al., 2024), and enumeration of all relevant segmentations of the prompt (Vieira et al., 2024). We leave a detailed comparison of these techniques to future work.", + "bbox": [ + 169, + 637, + 823, + 722 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "Additionally, the issue does not apply at all to models that separate the user's input from the model's response using special tokens, as is typical for chat models.", + "bbox": [ + 169, + 727, + 823, + 758 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "D Other Related Work", + "text_level": 1, + "bbox": [ + 171, + 777, + 392, + 792 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "Please see Mielke et al. (2021) for a survey of subword tokenization.", + "bbox": [ + 171, + 809, + 661, + 824 + ], + "page_idx": 27 + }, + { + "type": "text", + "text": "Pretokenization Pretokenization defines how the text is split in order to prevent certain pairs of tokens from being merged. GPT-2 (Radford et al., 2019) introduced a regular expression (regex) which defines the pretokenization pattern. These regex strings have gained complexity over time; GPT-3.5 limits the number of digits in numerical tokens to 3, and allows single punctuation to be merged with the start of words (presumably to accommodate code, as it allows .get to be a single token). 
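The toy A/B example above can be reproduced in a few lines. The greedy single-merge encoder below is a sketch, not any particular library's implementation; it shows that conditioning on the token sequence for the prompt "A" is not the same as continuing the canonical tokenization of the full string "AB".

```python
# Toy reproduction of the prompt-boundary issue with vocabulary {A, B, AB}.
def encode(text, merges=(("A", "B"),)):
    toks = list(text)
    for a, b in merges:                 # apply each merge greedily, left to right
        out, i = [], 0
        while i < len(toks):
            if i + 1 < len(toks) and (toks[i], toks[i + 1]) == (a, b):
                out.append(a + b); i += 2
            else:
                out.append(toks[i]); i += 1
        toks = out
    return toks

print(encode("A") + encode("B"))  # ['A', 'B']  -- what naive prompting conditions on
print(encode("AB"))               # ['AB']      -- the canonical tokenization seen in training
```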
Prior work has shown that, for", + "bbox": [ + 169, + 839, + 826, + 925 + ], + "page_idx": 27 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 171, + 31, + 517, + 47 + ], + "page_idx": 27 + }, + { + "type": "page_number", + "text": "28", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 27 + }, + { + "type": "image", + "img_path": "images/af554d01661668c30fdd6aecc729da3932b6d036243cdf6b02f2c9f548475d99.jpg", + "image_caption": [ + "Figure 11: Token counts when ordered by token ID, which reflects the order in which tokens were learned in tokenizer training." + ], + "image_footnote": [], + "bbox": [ + 173, + 99, + 823, + 324 + ], + "page_idx": 28 + }, + { + "type": "text", + "text": "instance, digit pretokenization choices (Nogueira et al., 2021; Thawani et al., 2021; Singh & Strouse, 2024) can significantly impact arithmetic performance. It is also likely that pretokenization affects different languages differently (Velayuthan & Sarveswaran, 2025; Ahia et al., 2023), due to natural statistics of the average word length, which acts as an upper bound on encoding efficiency in that language under subword tokenization. Nonetheless, the effectiveness of many pretokenization choices have not been thoroughly studied.", + "bbox": [ + 169, + 393, + 826, + 479 + ], + "page_idx": 28 + }, + { + "type": "text", + "text": "$n$ -gram language models Our work is loosely related to $n$ -gram LMs, which incorporate $n$ -gram statistics into the next-word prediction (Brants et al., 2007; Liu et al., 2024a).", + "bbox": [ + 169, + 493, + 823, + 523 + ], + "page_idx": 28 + }, + { + "type": "text", + "text": "Internal representation of semantic units Previous work has showed that the early layers of the LM may \"aggregate\" information over multi-token entities (e.g., [New, York]) into the last token's (e.g., York) hidden representation (Meng et al., 2022; Kaplan et al., 2025; Lad et al., 2024). This suggests that LMs naturally learn multi-word representations, and segmentating text into more semantically cohesive units at the input level (e.g., having New York as a single token) may simplify this process.", + "bbox": [ + 169, + 537, + 826, + 625 + ], + "page_idx": 28 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 28 + }, + { + "type": "page_number", + "text": "29", + "bbox": [ + 488, + 946, + 508, + 959 + ], + "page_idx": 28 + }, + { + "type": "image", + "img_path": "images/3c79a7e71a91e682ec753425b6d4ace409c89e385949a63f471c66eb47df09c4.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 171, + 308, + 488, + 489 + ], + "page_idx": 29 + }, + { + "type": "image", + "img_path": "images/83c7ac0a33955fc2008194ff3470f2802a9403c0140208f940882c81d5503629.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 504, + 308, + 823, + 489 + ], + "page_idx": 29 + }, + { + "type": "image", + "img_path": "images/1d69a62d6c61c9405fb5435c05854a3230f8599f4d8c605b56af5368930749bf.jpg", + "image_caption": [ + "Figure 12: Performance during pretraining for a subset of tasks in our evaluation suite." 
+ ], + "image_footnote": [], + "bbox": [ + 171, + 502, + 488, + 685 + ], + "page_idx": 29 + }, + { + "type": "image", + "img_path": "images/fb634c50fcf9f347aa3bb35ced7dfb51e0f1b27248ceb8eaf70bf7a1af7fee11.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 509, + 503, + 825, + 685 + ], + "page_idx": 29 + }, + { + "type": "header", + "text": "Published as a conference paper at COLM 2025", + "bbox": [ + 173, + 32, + 517, + 47 + ], + "page_idx": 29 + }, + { + "type": "page_number", + "text": "30", + "bbox": [ + 488, + 946, + 509, + 959 + ], + "page_idx": 29 + } +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13423/ee02b8fd-a1b4-4f31-b152-93f5d36bdbe3_model.json b/data/2025/2503_13xxx/2503.13423/ee02b8fd-a1b4-4f31-b152-93f5d36bdbe3_model.json new file mode 100644 index 0000000000000000000000000000000000000000..942e4d889a7e4c71881fd42226c043be9b59f96b --- /dev/null +++ b/data/2025/2503_13xxx/2503.13423/ee02b8fd-a1b4-4f31-b152-93f5d36bdbe3_model.json @@ -0,0 +1,4528 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.032, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.099, + 0.668, + 0.123 + ], + "angle": 0, + "content": "SuperBPE: Space Travel for Language Models" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.145, + 0.444, + 0.164 + ], + "angle": 0, + "content": "*Alisa Liu♥♠ *Jonathan Hayase♥" + }, + { + "type": "text", + "bbox": [ + 0.182, + 0.164, + 0.721, + 0.182 + ], + "angle": 0, + "content": "Valentin Hofmann\\(^{\\diamond \\diamond}\\) Sewoong Oh\\(^{\\diamond}\\) Noah A. Smith\\(^{\\diamond \\diamond}\\) Yejin Choi\\(^{\\spadesuit}\\)" + }, + { + "type": "text", + "bbox": [ + 0.184, + 0.182, + 0.648, + 0.2 + ], + "angle": 0, + "content": "\\(\\diamond\\)University of Washington \\(\\spadesuit\\)NVIDIA \\(\\diamond\\)Allen Institute for AI" + }, + { + "type": "title", + "bbox": [ + 0.459, + 0.234, + 0.542, + 0.251 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.23, + 0.265, + 0.77, + 0.63 + ], + "angle": 0, + "content": "The assumption across nearly all language model (LM) tokenization schemes is that tokens should be subwords, i.e., contained within word boundaries. Despite providing a seemingly reasonable inductive bias, we question whether this common practice limits the potential of modern LMs. Whitespace is not a reliable delimiter of meaning, as evidenced by multi-word expressions (e.g., by the way), cross-lingual variation in the number of words needed to express a concept (e.g., spacesuit helmet in German is raumanzugehm), and languages that do not use whitespace at all (e.g., Chinese). To explore the potential of tokenization beyond subwords, we introduce a \"superword\" tokenizer, SuperBPE, that incorporates a simple pretokensization curriculum into the byte-pair encoding (BPE) algorithm to first learn subwords and then superwords that bridge whitespace. This modification dramatically improves encoding efficiency: when limiting vocabulary size to 200k, SuperBPE encodes a fixed piece of text with up to \\(33\\%\\) fewer tokens on average than BPE. In experiments, we pretrain 8B transformer LMs from scratch while fixing model size, vocabulary size, and train compute, varying only the algorithm for learning the vocabulary. 
Our model trained with SuperBPE achieves an average \\(+4.0\\%\\) absolute improvement over the BPE baseline across 30 downstream tasks (including \\(+8.2\\%\\) on MMLU), while simultaneously requiring \\(27\\%\\) less compute at inference time. In analysis, we find that SuperBPE produces segmentations of text that are more uniform in per-token difficulty, perhaps because SuperBPE tokens often capture common multi-word expressions that function semantically as a single unit. In sum, SuperBPE offers a straightforward and local modification to tokenization that improves both encoding efficiency and downstream performance, yielding better LMs overall.\\(^1\\)" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.653, + 0.321, + 0.67 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.684, + 0.827, + 0.757 + ], + "angle": 0, + "content": "Tokenizer are the lens through which language models (LMs) view data: they segment a stream of bytes into a sequence of tokens in the LM vocabulary. In the era of transformer LMs, tokenization is done at the level of subwords, meaning that tokens consist of parts of words (including complete words), but they cannot bridge whitespace. Intuitively, subword tokens capture meaningful and composable semantic units." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.761, + 0.828, + 0.888 + ], + "angle": 0, + "content": "Although seemingly reasonable, is this common practice a good one? Whitespace is an unreliable delimiter of meaning (Martin, 2017); many groups of words (e.g., a lot of or search engine) function semantically as single units, and English speakers store thousands of such multi-word expressions in their mental lexicon (Church, 2011; Contreras Kallens & Christiansen, 2022). Cross-lingually, there is considerable variation in whether a given meaning is conveyed by a single word or several words. At the extreme, languages such as Chinese and Japanese do not use whitespace at all, and tokens in these languages can span multiple words or even entire sentences (e.g., the tokenizers of GPT-4O [OpenAI, 2024] or DEEPSEEKV3 [DeepSeek-AI, 2025]), but this has seemingly not hindered LMs from" + }, + { + "type": "page_footnote", + "bbox": [ + 0.193, + 0.896, + 0.329, + 0.91 + ], + "angle": 0, + "content": "*Equal contribution." + }, + { + "type": "page_footnote", + "bbox": [ + 0.193, + 0.91, + 0.629, + 0.924 + ], + "angle": 0, + "content": "\\(^{1}\\)Code and artifacts are available at https://superbpe.github.io/." + }, + { + "type": "list", + "bbox": [ + 0.193, + 0.896, + 0.629, + 0.924 + ], + "angle": 0, + "content": null + }, + { + "type": "aside_text", + "bbox": [ + 0.023, + 0.276, + 0.061, + 0.725 + ], + "angle": 270, + "content": "arXiv:2503.13423v3 [cs.CL] 26 Aug 2025" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.504, + 0.96 + ], + "angle": 0, + "content": "1" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.032, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.175, + 0.095, + 0.728, + 0.147 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.302, + 0.159, + 0.696, + 0.395 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.17, + 0.407, + 0.828, + 0.603 + ], + "angle": 0, + "content": "Figure 1: SuperBPE tokenizers encode text much more efficiently than BPE, and this advantage grows with larger vocabulary size. 
Encoding efficiency (\\(y\\)-axis) is measured in bytes-per-token, the number of bytes encoded per token over a large corpus. In the 40 bytes of text shown on the top of this figure, SuperBPE uses 7 tokens while BPE uses 13, so the methods' efficiencies are \\(40/7 = 5.7\\) and \\(40/13 = 3.1\\) bytes-per-token, respectively. In the graph, the encoding efficiency of BPE plateaus early because it exhausts the valuable whitespace-delimited words in the training data. In fact, it is bounded above by the gray dotted line, which shows the maximum achievable encoding efficiency with BPE if every whitespace-delimited word were in the vocabulary. In contrast, SuperBPE has dramatically better encoding efficiency that continues to improve with increased vocabulary size, as it can continue to add common word sequences to treat as tokens in the vocabulary. The different gradient lines show different transition points from learning subword to superword tokens, which always yields an immediate improvement. SuperBPE also encodes text more efficiently than a naive variant of BPE that does not use whitespace pretokensization at all." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.618, + 0.825, + 0.675 + ], + "angle": 0, + "content": "performing well on these languages. In fact, including multi-word tokens promises to be beneficial in many ways: it may shorten token sequences, lowering the costs of LM training and inference, and offer representational advantages by segmenting text into more semantically cohesive units (Salehi et al., 2015; Otani et al., 2020; Hofmann et al., 2021)." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.68, + 0.827, + 0.822 + ], + "angle": 0, + "content": "In this work, we introduce a superword tokenization algorithm that produces a vocabulary of both subword and \"superword\" tokens, which we use to describe tokens bridging more than one word. Our method, SuperBPE, introduces a pretokenization curriculum to the popular byte-pair encoding (BPE) algorithm (Sennrich et al., 2016): whitespace pretokenization is initially used to enforce learning of subword tokens only (as done in conventional BPE), but it is disabled in a second stage, where the tokenizer transitions to learning superword tokens. Notably, SuperBPE tokenizers scale much better with vocabulary size: BPE quickly hits a point of diminishing returns and begins adding increasingly rare subwords to the vocabulary, while SuperBPE continues to discover common word sequences to treat as single tokens and improve encoding efficiency (see Figure 1)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.827, + 0.828, + 0.926 + ], + "angle": 0, + "content": "In our experiments, we pretrain English LMs at 8B scale from scratch. When fixing the model size, vocabulary size, and training compute—varying only the algorithm for learning the vocabulary—we find that models trained with SuperBPE tokenizers consistently and significantly improve over counterparts trained with a BPE tokenizer while also being \\(27\\%\\) to \\(33\\%\\) more efficient at inference time. Our best SuperBPE model achieves an average improvement of \\(+4.0\\%\\) over 30 downstream tasks, including \\(+8.2\\%\\) on MMLU, and wins on 25 of the 30 individual tasks (Table 1)." 
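The bytes-per-token measurement from Figure 1 is straightforward to compute for any tokenizer. The sketch below uses a naive whitespace split as a stand-in encoder purely for illustration; a real measurement would call the BPE or SuperBPE tokenizer instead.

```python
# Bytes-per-token: UTF-8 bytes of a corpus divided by the number of tokens
# produced for it.
def bytes_per_token(texts, encode):
    n_bytes = sum(len(t.encode("utf-8")) for t in texts)
    n_tokens = sum(len(encode(t)) for t in texts)
    return n_bytes / n_tokens

corpus = ["By the way, I am going to the store in the morning."]  # made-up sample
print(f"{bytes_per_token(corpus, str.split):.2f} bytes per token")
```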
+ }, + { + "type": "page_number", + "bbox": [ + 0.493, + 0.948, + 0.506, + 0.96 + ], + "angle": 0, + "content": "2" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.104, + 0.827, + 0.203 + ], + "angle": 0, + "content": "In analysis, we find that SuperBPE tokenizers produce segmentations that are more evenly distributed in difficulty. This makes sense from a qualitative linguistic analysis: SuperBPE tokens often correspond to multi-word expressions in English, i.e., word sequences that function as a single semantic unit (see Table 3 for examples). For instance, many prepositional phrases (e.g., by accident or in the long run) are essentially fixed and require memorization. The individual words in these expressions have very little possible variation in context, leading to very low-loss predictions under BPE models." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.208, + 0.825, + 0.265 + ], + "angle": 0, + "content": "SuperBPE is a straightforward and local modification to tokenization, requiring no changes to the model architecture, training framework, or decoding strategy. Under the same training setup, SuperBPE provides a remarkable boost in both encoding efficiency and performance, yielding better language models overall." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.285, + 0.296, + 0.303 + ], + "angle": 0, + "content": "2 SuperBPE" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.316, + 0.825, + 0.348 + ], + "angle": 0, + "content": "We first explain the standard byte-pair encoding (BPE; Sennrich et al., 2016) tokenization algorithm (§2.1), and then introduce SuperBPE, which extends BPE to superwords (§2.2)." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.362, + 0.363, + 0.377 + ], + "angle": 0, + "content": "2.1 Background on BPE" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.388, + 0.825, + 0.421 + ], + "angle": 0, + "content": "BPE is a tokenization algorithm that greedily learns a subword vocabulary given training data. The algorithm takes a sample of text and a target vocabulary size \\( T \\) as input." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.425, + 0.826, + 0.51 + ], + "angle": 0, + "content": "The first step of BPE is pretokenization, which splits the text into chunks that limit the extent of tokenization; merges cannot bridge these chunks, so the final learned tokens are parts of these chunks. Canonically, pretokenization in BPE consists of splitting on whitespace so that common word sequences do not become a single token. This made sense given the historical context of Sennrich et al. (2016), which aimed to improve word-level tokenization by segmenting words into morphologically meaningful subwords." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.516, + 0.827, + 0.629 + ], + "angle": 0, + "content": "After pretokenization, the iterative learning algorithm begins. Training text is first split into bytes; the starting vocabulary is the set of all bytes. Then, the frequencies of all pairs of neighboring tokens are recorded, and the most frequent pair is merged into a single, new token at every position in the text where it occurs. The newly merged token is added to the vocabulary. For instance, if the merge is (t, he), then all instances of the token sequence [t, he] will be replaced with the, which is added to the vocabulary. 
The token pair frequencies are then updated, and the next most frequent pair is again merged into a new token. This continues until the vocabulary reaches the target size \\( T \\)." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.645, + 0.387, + 0.66 + ], + "angle": 0, + "content": "2.2 SuperBPE tokenization" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.671, + 0.827, + 0.84 + ], + "angle": 0, + "content": "SuperBPE introduces a simple intervention in the pretokenization step, separating tokenizer training into two discrete phases, wherein the tokenizer (1) first learns subwords (by using pretokenization to prevent merges across whitespace) and then (2) learns superwords (by lifting this restriction). Stage 1 is equivalent to regular BPE training and continues up to a certain vocabulary size \\( t \\), which we call the transition point \\( (t < T) \\). In stage 2, tokenizer training resumes from the vocabulary learned thus far, but this time whitespace pretokenization is skipped. As a result, token pairs that bridge whitespace are considered, enabling superwords to be added to the vocabulary. Intuitively, we intend for our tokenizer to first learn base units of semantic meaning, then combine these units into common sequences for a much more efficient vocabulary. Note that \\( t = T \\) corresponds to BPE, and \\( t = 0 \\) corresponds to a naive revision of BPE that foregoes whitespace pretokenization at any point in training." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.845, + 0.825, + 0.875 + ], + "angle": 0, + "content": "We note that training tokenizers requires more system memory and CPU time when done without whitespace pretokenization (as in stage 2 of SuperBPE). This is because the training" + }, + { + "type": "page_footnote", + "bbox": [ + 0.191, + 0.884, + 0.713, + 0.898 + ], + "angle": 0, + "content": "2The algorithm originated in 1994 in the field of data compression (Gage, 1994)." + }, + { + "type": "page_footnote", + "bbox": [ + 0.171, + 0.897, + 0.825, + 0.926 + ], + "angle": 0, + "content": "Note that although the creation of a tokenizer is referred to as \"learning,\" there are no parameters involved in the case of BPE, and the algorithm is completely deterministic given the data." + }, + { + "type": "list", + "bbox": [ + 0.171, + 0.884, + 0.825, + 0.926 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.505, + 0.96 + ], + "angle": 0, + "content": "3" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "table", + "bbox": [ + 0.235, + 0.101, + 0.764, + 0.564 + ], + "angle": 0, + "content": "
Category | Task | BPE | SuperBPE | Δ
Knowledge | ARC-Easy (MC) | 46.6 | 67.1 | +20.5**
| ARC-Challenge (MC) | 35.1 | 50.6 | +15.5**
| Jeopardy (EM) | 42.1 | 41.8 | -0.3
| MMLU (MC) | 36.5 | 44.7 | +8.2**
| OpenbookQA (MC) | 33.2 | 54.4 | +21.2**
| TriviaQA (EM) | 60.6 | 61.3 | +0.7
| WikidataQA (EM) | 69.7 | 70.9 | +1.2*
Math & Reasoning | Arithmetic (EM) | 54.8 | 59.3 | +4.5**
| GSM8K (EM) | 6.4 | 6.7 | +0.3
| LSAT-AR (MC) | 21.3 | 23.0 | +1.7
| Operators (EM) | 35.5 | 33.6 | -1.9
| Repeat-Copy-Logic (EM) | 3.1 | 6.2 | +3.1
Coding | HumanEval (pass@10) | 15.9 | 13.4 | -2.5
| MBPP (pass@10) | 27.5 | 28.3 | +0.8
Reading Comprehension | BoolQ (MC) | 59.7 | 64.6 | +4.9**
| CoQA (EM) | 12.6 | 13.2 | +0.6
| DROP (EM) | 31.3 | 31.4 | +0.1
| HotpotQA (EM) | 53.5 | 55.2 | +1.7*
| SQuAD (EM) | 75.1 | 75.8 | +0.7
Commonsense | CommonsenseQA (MC) | 33.5 | 53.8 | +20.3**
| COPA (MC) | 77.0 | 85.8 | +8.8**
| PIQA (MC) | 55.2 | 59.8 | +4.6*
| Winograd (MC) | 50.4 | 53.1 | +2.7
| Winogrande (MC) | 47.3 | 52.6 | +5.3*
Language Understanding | HellaSwag (MC) | 29.7 | 33.7 | +4.0**
| LAMBADA (EM) | 77.0 | 70.6 | -6.4**
| Language Identification (EM) | 8.8 | 9.0 | +0.2
String Manipulation | CS Algorithms (EM) | 46.1 | 48.6 | +2.5
| CUTE (EM) | 31.3 | 32.6 | +1.3
| Dyck-Languages (EM) | 15.9 | 14.2 | -1.7
Average | | 39.8 | 43.8 | +4.0
" + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.574, + 0.828, + 0.673 + ], + "angle": 0, + "content": "Table 1: Performance of BPE and SuperBPE models (with transition point \\( t = 180\\mathrm{k} \\)) on 30 downstream tasks. The two models are fixed in model parameters (8B), vocabulary size (200k), and training FLOPs (corresponding to ~330B tokens), differing only in their algorithm for learning the vocabulary. The SuperBPE model outperforms the baseline on 25 of 30 tasks and requires \\( 27\\% \\) less compute at inference time. See Figure 3 for the moving task average during pretraining and §A.4 for further evaluation details. *\\( p < 0.05 \\), **\\( p < 0.005 \\) under a McNemar test." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.71, + 0.828, + 0.81 + ], + "angle": 0, + "content": "data is typically represented by a dictionary of \"words\" along with their counts. With whitespace pretokenization, the \"words\" are whitespace-separated chunks (e.g., common words) stored once along with a large count, conferring substantial savings in memory. Without whitespace pretokenization, the \"words\" are extremely long (e.g., entire training documents), leading to minimal dedduplication of the text and excessively large dictionaries. Fortunately, tokenizer training must be done only once; in our experiments, SuperBPE tokenizers train in a few hours on 100 CPUs, a negligible cost compared to LLM pretraining." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.837, + 0.362, + 0.854 + ], + "angle": 0, + "content": "2.3 Encoding efficiency" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.868, + 0.826, + 0.927 + ], + "angle": 0, + "content": "A tokenizer's encoding efficiency can be measured in bytes-per-token, i.e., how many UTF-8 bytes are encoded, on average, in each token over a large corpus of text (see calculation in Figure 1). We train a series of tokenizers on a 10GB subset of data from OLMO 2's pretraining corpus and evaluate encoding efficiency on a held-out subset." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.506, + 0.96 + ], + "angle": 0, + "content": "4" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.032, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.103, + 0.828, + 0.201 + ], + "angle": 0, + "content": "Shown in Figure 1, SuperBPE scales much better with vocabulary size than does BPE. BPE quickly plateaus around a vocabulary size of \\(\\sim 50\\mathrm{K}\\), achieving 4.45 bytes-per-token at a vocabulary size of \\(200\\mathrm{k}\\). In fact, even with infinite vocabulary size (namely, if every whitespace-delimited word were in the vocabulary), BPE cannot exceed 4.68 bytes-per-token, i.e., the average word length in the held-out subset. SuperBPE exceeds this upper bound with a mere \\(\\sim 12\\mathrm{k}\\) vocabulary size and reaches 5.55 bytes-per-token at \\(50\\mathrm{K}\\) and 6.63 at \\(200\\mathrm{k}\\)." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.208, + 0.554, + 0.363 + ], + "angle": 0, + "content": "Surprisingly, SuperBPE is also more efficient than BPE with whitespace pretokenization completely disabled. Since BPE is a greedy algorithm, completely disabling whitespace pretokenization may cause it to make highly suboptimal choices early on. In particular, tokens in this setting often consist of the end of the previous word and start of the next word, as opposed to sequences of complete words. 
By keeping whitespace pretokenization on at the beginning, we can avoid suboptimal choices while still obtaining a tokenizer with superwords." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.368, + 0.554, + 0.454 + ], + "angle": 0, + "content": "Figure 2 shows how SuperBPE's encoding efficiency depends on the choice of transition point \\( t \\). The relationship is smooth, with \\( t = 80\\mathrm{k} \\) achieving the best encoding efficiency. However, we will see in our experiments that the optimal tokenizer for LM pretraining is not necessarily the most encoding-efficient." + }, + { + "type": "image", + "bbox": [ + 0.563, + 0.218, + 0.825, + 0.37 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.559, + 0.387, + 0.828, + 0.444 + ], + "angle": 0, + "content": "Figure 2: Encoding efficiency varies smoothly with the choice of transition point \\( t \\) in SuperBPE's pretokenization curriculum." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.48, + 0.32, + 0.498 + ], + "angle": 0, + "content": "3 Experiments" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.516, + 0.825, + 0.547 + ], + "angle": 0, + "content": "In our main experiments, we pretrain models from scratch while fixing the total training FLOPs and vocabulary size, changing only the algorithm for learning the vocabulary." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.569, + 0.258, + 0.586 + ], + "angle": 0, + "content": "3.1 Setup" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.598, + 0.827, + 0.743 + ], + "angle": 0, + "content": "We first pretrain 8B models with BPE and SuperBPE tokenizers. We use the OLMO2 7B (OLMo et al., 2024) training configuration, including the model architecture, training hyperparameters, and pretraining corpus, but reduce the total number of training steps to correspond to \\(\\sim 330\\) tokens (compared to 4T). Following prior work (Pagnoni et al., 2024), we also fix the effective context size (measured in bytes) for each model. This prevents SuperBPE models from gaining an advantage by seeing more textual context for the same next-token prediction (we provide analysis on this in §B.1). Since more efficient models have a shorter context length in tokens, the training steps are adjusted accordingly to match the total train FLOPs at the end of training. Note that in this setting, a same-sized SuperBPE model uses fewer inference FLOPs than the BPE model." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.749, + 0.828, + 0.824 + ], + "angle": 0, + "content": "We fix the vocabulary size of all tokenizers to 200,000 (in the same ballpark as, e.g., GEMMA at 250k [Google, 2024], GPT-4O at 200k, and Llama3 at 130k [Meta, 2024]).6 We consider three transition points for SuperBPE: \\( t = 80\\mathrm{k} \\), which has the best encoding efficiency, and two later transitions, \\( t = 160\\mathrm{k} \\) and \\( t = 180\\mathrm{k} \\). All tokenizers are trained on the same 10GB subset of OLMO2's pretraining mix. SA.1 provides further details about tokenizer training." + }, + { + "type": "page_footnote", + "bbox": [ + 0.171, + 0.844, + 0.826, + 0.871 + ], + "angle": 0, + "content": "\\(^{4}\\)OLMO2 7B has 7.30B parameters, while our 8B BPE and SuperBPE models have 8.12B parameters due to their increased vocabulary size." 
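The setup described above fixes the effective context in bytes and derives each model's token-level context length from its tokenizer's encoding efficiency (cf. Table 2). A small sketch of that conversion, using the reported bytes-per-token values and an assumed shared byte budget of roughly 18.3k bytes:

```python
# Convert a fixed byte-level context budget into per-tokenizer context lengths.
EFFECTIVE_CONTEXT_BYTES = 18_268  # approximate shared effective context (bytes)

for name, bytes_per_token in [("BPE", 4.46),
                              ("SuperBPE t=80k", 6.63),
                              ("SuperBPE t=180k", 6.09)]:
    ctx_tokens = round(EFFECTIVE_CONTEXT_BYTES / bytes_per_token)
    print(f"{name:>16}: ~{ctx_tokens} tokens of context")
```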
+ }, + { + "type": "page_footnote", + "bbox": [ + 0.171, + 0.871, + 0.827, + 0.898 + ], + "angle": 0, + "content": "In practice, models using our more efficient tokenizers could shift some or all of the \"saved\" context FLOPs to longer effective contexts instead of more training steps." + }, + { + "type": "page_footnote", + "bbox": [ + 0.171, + 0.897, + 0.825, + 0.925 + ], + "angle": 0, + "content": "For 8B models, a 200k vocabulary size is close to the recommendation of Tao et al. (2024) based on primarily English data. We fix the vocabulary size to simplify comparisons between models." + }, + { + "type": "list", + "bbox": [ + 0.171, + 0.844, + 0.827, + 0.925 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.948, + 0.505, + 0.96 + ], + "angle": 0, + "content": "5" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.304, + 0.094, + 0.696, + 0.321 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.333, + 0.825, + 0.391 + ], + "angle": 0, + "content": "Figure 3: Average task performance on 30 downstream tasks, evaluated at every 5000 steps in model pretraining. We see that SuperBPE models consistently outperform the baseline that uses a BPE tokenizer. All compared models share the same vocabulary size and train budget; \\( t \\) denotes the transition point in SuperBPE's pretokenization curriculum." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.415, + 0.825, + 0.459 + ], + "angle": 0, + "content": "We also train a slightly larger 11B SuperBPE model with \\( t = 180\\mathrm{k} \\), which approximately matches the 8B BPE baseline in total bytes of training data seen as well as both train and inference compute. See Table 2 for exact specifications for all runs." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.475, + 0.431, + 0.489 + ], + "angle": 0, + "content": "3.2 Results on downstream tasks" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.501, + 0.827, + 0.545 + ], + "angle": 0, + "content": "We evaluate SuperBPE on 30 benchmarks covering knowledge, math & reasoning, coding, reading comprehension, common sense, language understanding, and string manipulation. The full evaluation suite is shown in Table 1 and evaluation details are in SA.4." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.55, + 0.827, + 0.678 + ], + "angle": 0, + "content": "Figure 3 shows the task average during pretraining. All SuperBPE models substantially outperform the BPE baseline at the end of training. The strongest 8B SuperBPE model, which has transition point \\( t = 180\\mathrm{k} \\) (the latest one we consider), outperforms the baseline by \\( 4.0\\% \\) on average and wins on 25 of 30 individual tasks. Table 1 shows the per-task performance for this model (see §A.4 for results for the other models). The largest gains are on multiple choice tasks; when considering these alone, the performance improvement grows to \\( +9.7\\% \\). The only task on which SuperBPE loses in a statistically significant way is LAMBADA; here, we observe that SuperBPE is actually ahead for the majority of training checkpoints, but accuracy dips at the end from \\( 75.8\\% \\) to \\( 70.6\\% \\) (see Figure 12)." 
+ }, + { + "type": "text", + "bbox": [ + 0.171, + 0.682, + 0.827, + 0.756 + ], + "angle": 0, + "content": "Notably, while the choice of transition point affects the performance of the resulting model, all reasonable choices are significantly stronger than the baseline. When using the most encoding-efficient transition point, i.e., \\( t = 80\\mathrm{k} \\), we see a \\( +3.1\\% \\) task improvement over BPE and inference compute reduced by \\( 35\\% \\). Later transition points empirically cede some gains in encoding efficiency in exchange for further improvements in performance.[7]" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.774, + 0.285, + 0.792 + ], + "angle": 0, + "content": "4 Analysis" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.807, + 0.363, + 0.823 + ], + "angle": 0, + "content": "4.1 Language modeling" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.832, + 0.827, + 0.876 + ], + "angle": 0, + "content": "Following prior work (Biderman et al., 2023; Xue et al., 2022; Yu et al., 2023; Wang et al., 2024), we evaluate language modeling performance using bits-per-byte (BPB), which normalizes the loss by the tokenizer's encoding efficiency to fairly compare models with different tokenizers." + }, + { + "type": "page_footnote", + "bbox": [ + 0.171, + 0.885, + 0.825, + 0.926 + ], + "angle": 0, + "content": "7This finding adds to the ongoing debate about the relationship between tokenization compression and LM performance (Galle, 2019; Goldman et al., 2024; Schmidt et al., 2024), providing further evidence that higher compression does not necessarily improve performance." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.505, + 0.96 + ], + "angle": 0, + "content": "6" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "table", + "bbox": [ + 0.175, + 0.101, + 0.825, + 0.259 + ], + "angle": 0, + "content": "
| BPE 8B | SuperBPE 8B | SuperBPE 8B | SuperBPE 8B | SuperBPE 11B
SuperBPE transition point | | t = 80k | t = 160k | t = 180k | t = 180k
Parameter count (billion) | 8.12 | 8.12 | 8.12 | 8.12 | 11.30
Train steps | 76,543 | 118,419 | 112,722 | 107,982 | 77,525
Average context length (bytes) | 18,262 | 18,272 | 18,263 | 18,268 | 18,268
Vocabulary size | 200k | 200k | 200k | 200k | 200k
Context length (tokens) | 4,096 | 2,756 | 2,884 | 3,000 | 3,000
Encoding efficiency (bytes/token) | 4.46 | 6.63 | 6.33 | 6.09 | 6.09
Train compute (10^21 FLOPs) | 17.2 | 17.2 | 17.2 | 17.2 | 17.2
Inference compute (10^9 FLOPs/byte) | 3.75 | 2.42 | 2.54 | 2.65 | 3.75
" + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.269, + 0.827, + 0.364 + ], + "angle": 0, + "content": "Table 2: Training setup for the models we compare. We fix the vocabulary size and effective context size (measured in bytes) for each model and adjust the total number of training steps accordingly so that each model has the same total train budget (in FLOPs). The 8B SuperBPE models match the 8B BPE model in train compute but use less inference compute; the 11B SuperBPE model matches the 8B baseline in both train and inference compute. Numbers fixed across model settings are highlighted in the same color." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.391, + 0.825, + 0.496 + ], + "angle": 0, + "content": "This is necessary because longer tokens, on average, contain more information and therefore are more difficult to predict. Bits-per-byte is defined as \\(\\mathrm{BPB}(x) = \\mathcal{L}_{\\mathrm{CE}}(x) / (\\ln (2)\\cdot n_{\\mathrm{bytes}})\\), where \\(n_{\\mathrm{bytes}}\\) is the length of the text in bytes and \\(\\mathcal{L}_{\\mathrm{CE}}(x)\\) is the sum of the cross-entropy loss over the entire text. We find that BPE 8B, SuperBPE 8B (\\(t = 180\\mathrm{k}\\)), and SuperBPE 11B attain 0.7465, 0.7482, and 0.7445 BPB, respectively, at the end of training. Although these numbers do not differ appreciably, the ranking of models according to BPB and downstream task performance are not consistent." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.517, + 0.406, + 0.533 + ], + "angle": 0, + "content": "4.2 Loss distribution analysis" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.545, + 0.518, + 0.661 + ], + "angle": 0, + "content": "Why does the SuperBPE 8B model achieve slightly higher normalized language modeling loss (§4.1) than the baseline BPE model despite outperforming it on a wide variety of downstream tasks (§3.2)? To investigate this, we plot the distribution of per-token \\(\\mathrm{BPB}^9\\) for both models on data sampled from the pretraining data mixture in Figure 4." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.665, + 0.519, + 0.751 + ], + "angle": 0, + "content": "Although the BPE and SuperBPE models have very similar BPB on average, we see that loss is distributed very differently over the training data. Compared to the baseline, the SuperBPE model makes fewer predictions with either very high or very low loss." + }, + { + "type": "image", + "bbox": [ + 0.529, + 0.498, + 0.825, + 0.663 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.527, + 0.673, + 0.827, + 0.759 + ], + "angle": 0, + "content": "Figure 4: Histogram of per-token losses for both models from Table 1, measured over a large corpus of text. We observe that the SuperBPE model is a more consistent performer, making fewer predictions with very high or very low loss." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.769, + 0.827, + 0.854 + ], + "angle": 0, + "content": "Low-loss tokens. We find that the reduction in low-loss tokens is attributable to a small set of extremely common words that the BPE model can easily predict, but are not available to SuperBPE as they are merged into larger superword tokens. For instance, the tokens _the, _of, and _to (the three most common words in the corpus) appear an order of magnitude more often under BPE than SuperBPE in the same corpus of text. When excluding these three token types alone, the BPB ranking reverses, with SuperBPE achieving 0.02 lower BPB than BPE." 
+ }, + { + "type": "page_footnote", + "bbox": [ + 0.171, + 0.87, + 0.825, + 0.911 + ], + "angle": 0, + "content": "8Bits-per-byte of different models are considered comparable because total cross-entropy loss is a universal quantity representing the number of additional bits required to reconstruct the text given the model. This quantity is normalized by the number of bytes for easier interpretation." + }, + { + "type": "page_footnote", + "bbox": [ + 0.194, + 0.911, + 0.794, + 0.926 + ], + "angle": 0, + "content": "The per-token BPB is the per-token loss (in bits) divided by the average encoding efficiency." + }, + { + "type": "list", + "bbox": [ + 0.171, + 0.87, + 0.825, + 0.926 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.493, + 0.948, + 0.506, + 0.96 + ], + "angle": 0, + "content": "7" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "table", + "bbox": [ + 0.174, + 0.102, + 0.835, + 0.294 + ], + "angle": 0, + "content": "
POS tag | # | Example Tokens
NN, IN | 906 | _case_of, _hint_of, _availability_of, _emphasis_on, _distinction_between
VB, DT | 566 | _reached_a, _discovered_the, _identify_the, _becomes_a, _issued_a
DT, NN | 498 | _this_month, _no_idea, _the_earth, _the_maximum, _this_stuff
IN, NN | 406 | _on_top, _by_accident, _in_effects, _for_lunch, _in_front
IN, DT | 379 | _on_the, _without_a, _alongside_the, _for_each
IN, DT, NN | 333 | _for_a_living, _by_the_way, _into_the_future, _in_the_midst
NN, IN, DT | 270 | _position_of_the, _component_of_the, _review_of_the, _example_of_this
IN, DT, JJ | 145 | _like_any_other, _with_each_other, _for_a_short, _of_the_entire
VB, IN, DT | 121 | _worked_as_a, _based_on_the, _combined_with_the, _turned_into_a
IN, DT, NN, IN | 33 | _at_the_time_of, _in_the_presence_of, _in_the_middle_of, _in_a_way_that
,, CC, PRP, VB | 20 | ,_and_it_was, ,_but_I_think, ,_but_I_have, ,_but_I_am
IN, DT, JJ, NN | 18 | _in_the_long_run, _on_the_other_hand, _for_the_first_time, _in_the_same_way
" + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.304, + 0.828, + 0.348 + ], + "angle": 0, + "content": "Table 3: The most common POS tags for tokens of 2, 3, and 4 words in SuperBPE, along with random example tokens for each tag. NN = noun, IN = preposition, VB = verb, DT = determiner, CC = conjunction, JJ = adjective, and PRP = pronoun." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.38, + 0.828, + 0.493 + ], + "angle": 0, + "content": "The reduction in low-loss tokens also makes sense from a qualitative linguistic analysis of SuperBPE tokens. In Table 3, we show the most common POS tags among superword tokens in SuperBPE along with random examples for each tag. The tokens often capture common multi-word expressions (by accident, of course, for a living) that function as a single semantic unit (Schneider et al., 2014). As an example, prepositions (IN) figure prominently in superword tokens (e.g., depend on, distinction between) and require lexeme-specific memorization. The individual words in these fixed expressions are often semantically vacuous and have little possible variation in context, so they are easy to predict once memorized." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.516, + 0.827, + 0.628 + ], + "angle": 0, + "content": "High-loss tokens. On the other hand, the much thinner tail of very high-loss tokens shows that, in the worst case, the SuperBPE model consistently puts more probability mass on the correct token. On average, we expect models to suffer high loss on tokens that are difficult to predict. This may explain why SuperBPE can outperform BPE on downstream tasks but have higher average BPB: the tokens scored in task evaluations tend to be among the hardest to predict. This is consistent with prior findings that models generally continue to improve in downstream tasks even as their overall loss plateaus due to improving on a narrow and difficult slice of the distribution (Liu et al., 2023)." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.653, + 0.27, + 0.669 + ], + "angle": 0, + "content": "4.3 Scaling" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.682, + 0.827, + 0.797 + ], + "angle": 0, + "content": "To characterize the scaling behavior of SuperBPE, we also perform experiments at smaller scales.\\(^{10}\\) We train baseline models at 680M and 1.9B and scale the base number of training tokens proportionately to the number of parameters. We also perform runs at \\(0.5 \\times\\), \\(2 \\times\\), and \\(4 \\times\\) the base number of tokens to observe the trend with respect to training duration. Then, we train two SuperBPE models that match the training budget of each baseline BPE model, one that matches the baseline in parameter count (analogous to SuperBPE 8B) and a larger model that matches in both train and inference compute (analogous to SuperBPE 11B). We focus on the \\(t = 180k\\) tokenizer to reduce complexity." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.802, + 0.828, + 0.875 + ], + "angle": 0, + "content": "We plot BPB at the end of training for each run in Figure 5. In the under-trained regime, both SuperBPE models achieve lower BPB than the baseline. In the over-trained regime, the ranking from worst to best is SuperBPE (matching parameter count), BPE, and SuperBPE (matching inference compute). Additionally, the separation between the models increases with further over-training. We provide additional results and comments on scaling in §B.4." 
+ }, + { + "type": "page_footnote", + "bbox": [ + 0.171, + 0.897, + 0.825, + 0.926 + ], + "angle": 0, + "content": "10For scaling, we focus on BPB since our downstream evaluations are too noisy for our small models to make meaningful comparisons." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.505, + 0.96 + ], + "angle": 0, + "content": "8" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.182, + 0.103, + 0.49, + 0.262 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.272, + 0.271, + 0.405, + 0.285 + ], + "angle": 0, + "content": "(a) 680M model size" + }, + { + "type": "image", + "bbox": [ + 0.502, + 0.103, + 0.815, + 0.263 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.597, + 0.271, + 0.722, + 0.285 + ], + "angle": 0, + "content": "(b) 1.9B model size" + }, + { + "type": "image_caption", + "bbox": [ + 0.17, + 0.296, + 0.828, + 0.382 + ], + "angle": 0, + "content": "Figure 5: Scaling results for 680M and 1.9B baseline model sizes. Compared to the BPE baseline, SuperBPE with matching parameter count achieves lower BPB in the under-trained regime, while SuperBPE with matching inference compute achieves lower BPB than the baseline at every model size and every training budget tested. Note that BPB comparisons between BPE and SuperBPE models do not track downstream task accuracy due to differences in how BPE and SuperBPE models distribute loss over tokens (§4.2)." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.407, + 0.329, + 0.423 + ], + "angle": 0, + "content": "5 Related Work" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.439, + 0.828, + 0.593 + ], + "angle": 0, + "content": "Tokenization beyond subwords Prior work has explored processing text at multiple levels of granularity (Lai et al., 2021; Zhang et al., 2021) or creating multi-word tokens through frequency-based identification of \\( n \\)-grams (Gee et al., 2023; Kumar & Thawani, 2022). However, these were explored in limited experimental contexts (mainly for machine translation) and had mixed effectiveness. Naively disabling pretokenization in BPE has been found to severely degrade model performance (Dagan et al., 2024; Schmidt et al., 2024; Kudo, 2018), although this approach may be more promising for unigram tokenization (Kudo & Richardson, 2018), as adopted by JURASSIC (Lieber et al., 2021) and BLOOMBERGGPT (Wu et al., 2023). In concurrent work, Huang et al. (2025) disentangle input and output vocabularies, expanding only the former to include \\( n \\)-gram tokens. Their method requires significant modifications of the LM input component and considers fixed length of \\( n \\)-grams." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.61, + 0.828, + 0.709 + ], + "angle": 0, + "content": "Multi-token prediction Multi-token prediction (MTP) equips LMs with some extra parameters to predict multiple tokens in a single time step (Qi et al., 2020; Gloeckle et al., 2024) and was recently adopted by DEEPSEEK-V3, though the MTP module is discarded at inference-time. MTP's effectiveness corroborates that LMs are capable of predicting more than one subword in a forward pass. However, these approaches fix the number of tokens predicted in each time step and require modifications to the architecture and training objective. 
We note that the benefits of MTP and superword tokens may be orthogonal." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.725, + 0.828, + 0.839 + ], + "angle": 0, + "content": "Tokenizer-free language modeling Some works have explored the possibility of completely removing tokenization from LMs and directly modeling text as a sequence of bytes (Clark et al., 2022; Xue et al., 2022; Wang et al., 2024). To overcome the increased compute requirement due to expanded sequence lengths, alternative architectures have been proposed that either segment bytes into fixed-length patches (Tay et al., 2022; Yu et al., 2023) or dynamically predict patch boundaries with sub-modules (Nawrot et al., 2023; Pagnoni et al., 2024; Ahia et al., 2024; Hwang et al., 2025); these dynamic patches have been qualitatively observed to span multiple words." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.854, + 0.828, + 0.926 + ], + "angle": 0, + "content": "Tokenizer transfer Many methods have been proposed to adapt models after training to use a different tokenizer. These may rely on intervention during pretraining (Chen et al., 2023), continued training for a subset of layers (Marchisio et al., 2023), or leveraging self-distillation (Minixhofer et al., 2025), heuristic, (Minixhofer et al., 2022; Gee et al., 2022; Tran, 2020; Liu et al., 2024b; Dobler & De Melo, 2023), or hypernetwork-based (Minixhofer" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.949, + 0.504, + 0.959 + ], + "angle": 0, + "content": "9" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.104, + 0.827, + 0.161 + ], + "angle": 0, + "content": "et al., 2024) initialization of a fresh embedding matrix, optionally followed by fine-tuning. These methods may be used to upgrade existing models to use SuperBPE tokenizers, with the goal of reducing inference cost while maintaining performance. We leave this direction to future work." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.18, + 0.31, + 0.196 + ], + "angle": 0, + "content": "6 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.211, + 0.828, + 0.339 + ], + "angle": 0, + "content": "Although tokenization lies at the foundation of language modeling, acting as the lens through which models view text, the algorithms in use have remained largely unchanged over the past decade. SuperBPE builds on the observation that tokens need not be limited to subwords, extending the BPE algorithm to superword tokens. When replacing subword BPE tokenizers with SuperBPE tokenizers in pretraining, we find that language models perform better over a large suite of downstream tasks, while also being substantially more efficient at inference time. These benefits are achieved without modifying the underlying model architecture, making SuperBPE a compelling alternative to BPE that seamlessly integrates with modern language model ecosystems." + }, + { + "type": "title", + "bbox": [ + 0.173, + 0.357, + 0.347, + 0.374 + ], + "angle": 0, + "content": "Acknowledgments" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.388, + 0.829, + 0.542 + ], + "angle": 0, + "content": "We would like to thank Alex Fang for pretraining advice, Vivek Ramanujan for helping debug our distributed training setup, Ian Magnusson for helpful comments on LM evaluation, and Zhaofeng Wu, Alexander Fang, and Xiaochuang Han for feedback on drafts. 
We are also grateful to Luca Soldaini, Goncalo Faria, Shrimai Prabhumoye, Matt Jordan, Artidoro Pagnoni, Mike Lewis, Doug Downey, Shannon Shen, and the UW NLP community for valuable conversations about this work. Both co-first authors, AL and JH, are supported by the NSF Graduate Research Fellowship Program. JH and SO are supported in part by the Microsoft Grant for Customer Experience Innovation. This work was partially funded by NSF DMS-2134012, NSF CCF-2019844, ONR N00014-24-1-2207, and NSF 2113530 as well as with NVIDIA resources provided through the National AI Research Resource Pilot (NAIRR)." + }, + { + "type": "title", + "bbox": [ + 0.173, + 0.561, + 0.277, + 0.577 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.584, + 0.829, + 0.671 + ], + "angle": 0, + "content": "Orevaoghene Ahia, Sachin Kumar, Hila Gonen, Jungo Kasai, David Mortensen, Noah Smith, and Yulia Tsvetkov. Do all languages cost the same? tokenization in the era of commercial language models. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 9904-9923, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.614. URL https://aclanthology.org/2023.emnlp-main.614." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.676, + 0.829, + 0.747 + ], + "angle": 0, + "content": "Orevaoghene Ahia, Sachin Kumar, Hila Gonen, Valentin Hofmann, Tomasz Limisiewicz, Yulia Tsvetkov, and Noah A. Smith. MAGNET: Improving the multilingual fairness of language models with adaptive gradient-based tokenization. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=1e3MOwHSIX." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.754, + 0.826, + 0.798 + ], + "angle": 0, + "content": "Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program synthesis with large language models, 2021. URL https://arxiv.org/abs/2108.07732." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.804, + 0.829, + 0.874 + ], + "angle": 0, + "content": "Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. Pythia: A suite for analyzing large language models across training and scaling, 2023. URL https://arxiv.org/abs/2304.01373." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.882, + 0.829, + 0.925 + ], + "angle": 0, + "content": "BIG-bench. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=uyTL5Bvosj." + }, + { + "type": "list", + "bbox": [ + 0.174, + 0.584, + 0.829, + 0.925 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "10" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.103, + 0.826, + 0.148 + ], + "angle": 0, + "content": "Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 
Piqa: Reasoning about physical commonsense in natural language. In *Thirty-Fourth AAAI Conference on Artificial Intelligence*, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.155, + 0.828, + 0.229 + ], + "angle": 0, + "content": "Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. Large language models in machine translation. In Jason Eisner (ed.), Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pp. 858-867, Prague, Czech Republic, June 2007. Association for Computational Linguistics. URL https://aclanthology.org/D07-1090/." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.236, + 0.828, + 0.336 + ], + "angle": 0, + "content": "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.345, + 0.828, + 0.445 + ], + "angle": 0, + "content": "Anthony Chen, Pallavi Gudipati, Shayne Longpre, Xiao Ling, and Sameer Singh. Evaluating entity disambiguation and the role of popularity in retrieval-based NLP. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4472-4485, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.345. URL https://aclanthology.org/2021.acl-long.345." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.453, + 0.828, + 0.498 + ], + "angle": 0, + "content": "Y Chen, K Marchisio, R Raileanu, DI Adelani, P Stenetorp, S Riedel, and M Artetxe. Improving language plasticity via pretraining with active forgetting. In Advances in Neural Information Processing Systems. NeurIPS, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.506, + 0.828, + 0.564 + ], + "angle": 0, + "content": "Kenneth Church. How many multiword expressions do people know? In Proceedings of the Workshop on Multiword Expressions: From Parsing and Generation to the Real World, pp. 137-144, Portland, Oregon, USA, 2011. Association for Computational Linguistics. URL https://aclanthology.org/W11-0823/." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.572, + 0.828, + 0.673 + ], + "angle": 0, + "content": "Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2924–2936, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1300. URL https://aclanthology.org/N19-1300." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.681, + 0.828, + 0.74 + ], + "angle": 0, + "content": "Jonathan H. 
Clark, Dan Garrette, Iulia Turc, and John Wieting. Canine: Pre-training an efficient tokenization-free encoder for language representation. Transactions of the Association for Computational Linguistics, 10:73-91, 2022. doi: 10.1162/tacl_a_00448. URL https://aclanthology.org/2022.tacl-1.5." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.747, + 0.828, + 0.793 + ], + "angle": 0, + "content": "Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge, 2018. URL https://arxiv.org/abs/1803.05457." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.8, + 0.828, + 0.859 + ], + "angle": 0, + "content": "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems, 2021. URL https://arxiv.org/abs/2110.14168." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.867, + 0.828, + 0.925 + ], + "angle": 0, + "content": "Pablo Contreras Kallens and Morten H. Christiansen. Models of language and multiword expressions. Frontiers in Artificial Intelligence, 5, 2022. doi: 10.3389/frai.2022.781962. URL https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2022.781962." + }, + { + "type": "list", + "bbox": [ + 0.173, + 0.103, + 0.828, + 0.925 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.508, + 0.96 + ], + "angle": 0, + "content": "11" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.103, + 0.828, + 0.161 + ], + "angle": 0, + "content": "Gautier Dagan, Gabriel Synnaeve, and Baptiste Rozière. Getting the most out of your tokenizer for pre-training and domain adaptation. In Proceedings of the 41st International Conference on Machine Learning, ICML'24. JMLR.org, 2024. URL https://dl.acm.org/doi/10.5555/3692070.3692457." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.169, + 0.828, + 0.198 + ], + "angle": 0, + "content": "DeepSeek-AI. Deepseek-v3 technical report, 2025. URL https://arxiv.org/abs/2412.19437." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.207, + 0.827, + 0.252 + ], + "angle": 0, + "content": "Konstantin Dobler and Gerard De Melo. Focus: Effective embedding initialization for monolingual specialization of multilingual models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 13440-13454, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.259, + 0.828, + 0.358 + ], + "angle": 0, + "content": "Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2368-2378, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1246. URL https://aclanthology.org/N19-1246." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.366, + 0.828, + 0.451 + ], + "angle": 0, + "content": "Lukas Edman, Helmut Schmid, and Alexander Fraser. CUTE: Measuring LLMs' understanding of their tokens. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 3017-3026, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.177. URL https://aclanthology.org/2024.emnlp-main.177." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.46, + 0.827, + 0.489 + ], + "angle": 0, + "content": "Philip Gage. A new algorithm for data compression. The C Users Journal archive, 12:23-38, 1994. URL https://api-semanticscholar.org/CorpusID:59804030." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.497, + 0.828, + 0.582 + ], + "angle": 0, + "content": "Matthias Galle. Investigating the effectiveness of BPE: The power of shorter sequences. In Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun Wan (eds.), Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 1375-1381, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1141. URL https://aclanthology.org/D19-1141." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.591, + 0.828, + 0.649 + ], + "angle": 0, + "content": "Leonidas Gee, Andrea Zugarini, Leonardo Rigutini, Paolo Torroni, et al. Fast vocabulary transfer for language model compression. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track, pp. 409-416. Association for Computational Linguistics (ACL), 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.657, + 0.828, + 0.741 + ], + "angle": 0, + "content": "Leonidas Gee, Leonardo Rigutini, Marco Ernandes, and Andrea Zugarini. Multi-word tokenization for sequence compression. In Mingxuan Wang and Imed Zitouni (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track, pp. 612-621, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-industry.58. URL https://aclanthology.org/2023.emnlp-industry.58." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.75, + 0.828, + 0.807 + ], + "angle": 0, + "content": "Fabian Gloeckle, Badr Youbi Idrissi, Baptiste Roziere, David Lopez-Paz, and Gabriel Synnaeve. Better & faster large language models via multi-token prediction. In *Forty-first International Conference on Machine Learning*, 2024. URL https://openreview.net/forum?id=pEWAcejiU2." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.816, + 0.828, + 0.901 + ], + "angle": 0, + "content": "Omer Goldman, Avi Caciularu, Matan Eyal, Kris Cao, Idan Szpektor, and Reut Tsarfaty. Unpacking tokenization: Evaluating text compression and its correlation with model performance. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Findings of the Association for Computational Linguistics: ACL 2024, pp. 2274-2286, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024-findings-acl.134. URL https://aclanthology.org/2024-findings-acl.134." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.909, + 0.749, + 0.926 + ], + "angle": 0, + "content": "Google. Gemma: Open models based on gemini research and technology, 2024." 
+ }, + { + "type": "list", + "bbox": [ + 0.173, + 0.103, + 0.828, + 0.926 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "12" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.103, + 0.829, + 0.162 + ], + "angle": 0, + "content": "Jonathan Hayase, Alisa Liu, Yejin Choi, Sewoong Oh, and Noah A. Smith. Data mixture inference: What do BPE tokenizers reveal about their training data? In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=EHXyeImux0." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.17, + 0.826, + 0.229 + ], + "angle": 0, + "content": "Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=d7KBjmI3GmQ." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.238, + 0.826, + 0.297 + ], + "angle": 0, + "content": "Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. In Proceedings of the 36th International Conference on Neural Information Processing Systems, pp. 30016-30030, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.304, + 0.829, + 0.405 + ], + "angle": 0, + "content": "Valentin Hofmann, Janet Pierrehumbert, and Hinrich Schütze. Superbizarre is not superb: Derivational morphology improves BERT's interpretation of complex words. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 3594-3608, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.279. URL https://aclanthology.org/2021.acl-long.279." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.413, + 0.826, + 0.457 + ], + "angle": 0, + "content": "Hongzhi Huang, Defa Zhu, Banggu Wu, Yutao Zeng, Ya Wang, Qiyang Min, and Xun Zhou. Over-tokenized transformer: Vocabulary is generally worth scaling, 2025. URL https://arxiv.org/abs/2501.16975." + }, + { + "type": "ref_text", + "bbox": [ + 0.172, + 0.466, + 0.827, + 0.497 + ], + "angle": 0, + "content": "Sukjun Hwang, Brandon Wang, and Albert Gu. Dynamic chunking for end-to-end hierarchical sequence modeling, 2025. URL https://arxiv.org/abs/2507.07955." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.505, + 0.828, + 0.591 + ], + "angle": 0, + "content": "Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Regina Barzilay and Min-Yen Kan (eds.), Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1601-1611, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1147. URL https://aclanthology.org/P17-1147." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.6, + 0.828, + 0.644 + ], + "angle": 0, + "content": "Guy Kaplan, Matanel Oren, Yuval Reif, and Roy Schwartz. From tokens to words: On the inner lexicon of LLMs. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=328vch6tRs." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.653, + 0.826, + 0.697 + ], + "angle": 0, + "content": "Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.706, + 0.828, + 0.791 + ], + "angle": 0, + "content": "Pride Kavumba, Naoya Inoue, Benjamin Heinzerling, Keshav Singh, Paul Reisert, and Kentaro Inui. When choosing plausible alternatives, clever hans can be clever. In Simon Ostermann, Sheng Zhang, Michael Roth, and Peter Clark (eds.), Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing, pp. 33-42, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-6004. URL https://aclanthology.org/D19-6004." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.801, + 0.826, + 0.83 + ], + "angle": 0, + "content": "Taku Kudo. Sentencepiece experiments. https://github.com/google/sentencepiece/blob/master/doc/experiments.md, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.84, + 0.828, + 0.925 + ], + "angle": 0, + "content": "Taku Kudo and John Richardson. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Eduardo Blanco and Wei Lu (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 66-71, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-2012. URL https://aclanthology.org/D18-2012." + }, + { + "type": "list", + "bbox": [ + 0.172, + 0.103, + 0.829, + 0.925 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.508, + 0.96 + ], + "angle": 0, + "content": "13" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.103, + 0.828, + 0.189 + ], + "angle": 0, + "content": "Dipesh Kumar and Avijit Thawani. BPE beyond word boundary: How NOT to use multi word expressions in neural machine translation. In Shabnam Tafreshi, João Sedoc, Anna Rogers, Aleksandr Drozd, Anna Rumshisky, and Arjun Akula (eds.), Proceedings of the Third Workshop on Insights from Negative Results in NLP, pp. 172-179, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.insights-1.24. URL https://aclanthology.org/2022.insights-1.24." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.198, + 0.826, + 0.227 + ], + "angle": 0, + "content": "Vedang Lad, Wes Gurnee, and Max Tegmark. The remarkable robustness of llms: Stages of inference?, 2024. URL https://arxiv.org/abs/2406.19384." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.236, + 0.828, + 0.349 + ], + "angle": 0, + "content": "Yuxuan Lai, Yijia Liu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 
Lattice-BERT: Leveraging multi-granularity representations in Chinese pre-trained language models. In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou (eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1716-1731, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.137. URL https://aclanthology.org/2021.nacl-main.137." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.359, + 0.827, + 0.388 + ], + "angle": 0, + "content": "Sander Land. A short introduction to pre-tokenization weirdness, 2024. URL https://tokencontributions.substack.com/p/a-short-introduction-to-pre-tokenization." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.398, + 0.828, + 0.482 + ], + "angle": 0, + "content": "Sander Land and Max Bartolo. Fishing for magikarp: Automatically detecting under-trained tokens in large language models. In Yaser Al-Onaizan, Mohit Bansal, and YunNung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 11631-11646, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.649. URL https://aclanthology.org/2024.emnlp-main.649." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.492, + 0.828, + 0.535 + ], + "angle": 0, + "content": "Hector J. Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge. In Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning, pp. 552-561. AAAI Press, 2012." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.544, + 0.828, + 0.724 + ], + "angle": 0, + "content": "Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Gadre, Hritik Bansal, Etash Guha, Sedrick Keh, Kushal Arora, Saurabh Garg, Rui Xin, Niklas Muennighoff, Reinhard Heckel, Jean Mercat, Mayee Chen, Suchin Gururangan, Mitchell Wortsman, Alon Albalak, Yonatan Bitton, Marianna Nezhurina, Amro Abbas, Cheng-Yu Hsieh, Dhruba Ghosh, Josh Gardner, Maciej Kilian, Hanlin Zhang, Rulin Shao, Sarah Pratt, Sunny Sanyal, Gabriel Ilharco, Giannis Daras, Kalyani Marathe, Aaron Gokaslan, Jieyu Zhang, Khyathi Chandu, Thao Nguyen, Igor Vasiljevic, Sham Kakade, Shuran Song, Sujay Sanghavi, Fartash Faghri, Sewoong Oh, Luke Zettlemoyer, Kyle Lo, Alaaeldin El-Nouby, Hadi Pouransari, Alexander Toshev, Stephanie Wang, Dirk Groeneveld, Luca Soldaini, Pang Wei Koh, Jenia Jitsev, Thomas Kollar, Alexandros G. Dimakis, Yair Carmon, Achal Dave, Ludwig Schmidt, and Vaishaal Shankar. Datacomp-lm: In search of the next generation of training sets for language models, 2024. URL https://arxiv.org/abs/2406.11794." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.735, + 0.828, + 0.778 + ], + "angle": 0, + "content": "Opher Lieber, Or Sharir, Barak Lenz, and Yoav Shoham. Jurassic-1: Technical details and evaluation, 2021. URL https:// uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.788, + 0.828, + 0.872 + ], + "angle": 0, + "content": "Hong Liu, Sang Michael Xie, Zhiyuan Li, and Tengyu Ma. Same pre-training loss, better downstream: Implicit bias matters for language models. 
In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 22188-22214. PMLR, 23-29 Jul 2023. URL https://proceedings.mlr.press/v202/liu23ao.html." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.882, + 0.828, + 0.925 + ], + "angle": 0, + "content": "Jiacheng Liu, Sewon Min, Luke Zettlemoyer, Yejin Choi, and Hannaneh Hajishirzi. Infini-gram: Scaling unbounded n-gram language models to a trillion tokens. In First Conference on Language Modeling, 2024a. URL https://openreview.net/forum?id=u2vAyMeLm." + }, + { + "type": "list", + "bbox": [ + 0.173, + 0.103, + 0.828, + 0.925 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "14" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.103, + 0.828, + 0.161 + ], + "angle": 0, + "content": "Yihong Liu, Peiqin Lin, Mingyang Wang, and Hinrich Schütze. Ofa: A framework of initializing unseen subword embeddings for efficient large-scale multilingual continued pretraining. In Findings of the Association for Computational Linguistics: NAACL 2024, pp. 1067-1097, 2024b." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.169, + 0.827, + 0.212 + ], + "angle": 0, + "content": "Scott Lundberg. The art of prompt design: Prompt boundaries and token healing, 2023. URL https://medium.com/towards-data-science/the-art-of-prompt-design-prompt-boundaries-and-token-healing-3b2448b0be38." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.22, + 0.827, + 0.264 + ], + "angle": 0, + "content": "Kelly Marchisio, Patrick Lewis, Yihong Chen, and Mikel Artetxe. Mini-model adaptation: Efficiently extending pretrained models to new languages via aligned shallow training. In The 61st Annual Meeting Of The Association For Computational Linguistics, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.272, + 0.827, + 0.315 + ], + "angle": 0, + "content": "Haspelmath Martin. The indeterminacy of word segmentation and the nature of morphology and syntax. Folia Linguistica, 51(s1000):31-80, 2017. doi: doi:10.1515/flin-2017-1005. URL https://doi.org/10.1515/flin-2017-1005." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.323, + 0.828, + 0.393 + ], + "angle": 0, + "content": "Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in gpt. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 17359-17372. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/6f1d43d5a82a37e89b0665b33bf3a182-Paper-Conference.pdf." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.402, + 0.773, + 0.417 + ], + "angle": 0, + "content": "Meta. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.425, + 0.828, + 0.482 + ], + "angle": 0, + "content": "Sabrina J. Mielke, Zaid Alyafeai, Elizabeth Salesky, Colin Raffel, Manan Dey, Matthias Galle, Arun Raja, Chenglei Si, Wilson Y. Lee, Benoit Sagot, and Samson Tan. 
Between words and characters: A brief history of open-vocabulary modeling and tokenization in nlp, 2021. URL https://arxiv.org/abs/2112.10508." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.49, + 0.828, + 0.575 + ], + "angle": 0, + "content": "Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2381-2391, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1260. URL https://aclanthology.org/D18-1260." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.584, + 0.827, + 0.641 + ], + "angle": 0, + "content": "Benjamin Minixhofer, Fabian Paischer, and Navid Rekabsaz. Wechsel: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 3992–4006, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.649, + 0.825, + 0.679 + ], + "angle": 0, + "content": "Benjamin Minixhofer, Edoardo Ponti, and Ivan Vulic. Zero-shot tokenizer transfer. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.686, + 0.825, + 0.716 + ], + "angle": 0, + "content": "Benjamin Minixhofer, Ivan Vulic, and Edoardo Maria Ponti. Universal cross-tokenizer distillation via approximate likelihood matching. arXiv preprint arXiv:2503.20083, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.724, + 0.828, + 0.808 + ], + "angle": 0, + "content": "Piotr Nawrot, Jan Chorowski, Adrian Lancucki, and Edoardo Maria Ponti. Efficient transformers with dynamic token pooling. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 6403-6417, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.353. URL https://aclanthology.org/2023.acl-long.353." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.816, + 0.828, + 0.847 + ], + "angle": 0, + "content": "Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. Investigating the limitations of transformers with simple arithmetic tasks, 2021. URL https://arxiv.org/abs/2102.13019." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.854, + 0.828, + 0.925 + ], + "angle": 0, + "content": "Team OLMo, Pete Walsh, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Shane Arora, Akshita Bhagia, Yuling Gu, Shengyi Huang, Matt Jordan, Nathan Lambert, Dustin Schwenk, Oyvind Tafjord, Taira Anderson, David Atkinson, Faeze Brahman, Christopher Clark, Pradeep Dasigi, Nouha Dziri, Michal Guerquin, Hamish Ivison, Pang Wei Koh, Jiacheng Liu, Saumya Malik, William Merrill, Lester James V. 
Miranda, Jacob Morrison," + }, + { + "type": "list", + "bbox": [ + 0.173, + 0.103, + 0.828, + 0.925 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.508, + 0.96 + ], + "angle": 0, + "content": "15" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "ref_text", + "bbox": [ + 0.189, + 0.103, + 0.828, + 0.161 + ], + "angle": 0, + "content": "Tyler Murray, Crystal Nam, Valentina Pyatkin, Aman Rangapur, Michael Schmitz, Sam Skjonsberg, David Wadden, Christopher Wilhelm, Michael Wilson, Luke Zettlemoyer, Ali Farhadi, Noah A. Smith, and Hannaneh Hajishirzi. 2 olmo 2 furious, 2024. URL https://arxiv.org/abs/2501.00656." + }, + { + "type": "ref_text", + "bbox": [ + 0.172, + 0.169, + 0.747, + 0.185 + ], + "angle": 0, + "content": "OpenAI. Hello GPT-4o, 2024. URL https://openai.com/index/hello-gpt-4o/." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.192, + 0.829, + 0.277 + ], + "angle": 0, + "content": "Naoki Otani, Satoru Ozaki, Xingyuan Zhao, Yucen Li, Micaelah St Johns, and Lori Levin. Pre-tokenization of multi-word expressions in cross-lingual word embeddings. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4451-4464, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.360. URL https://aclanthology.org/2020.emnlp-main.360." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.284, + 0.828, + 0.341 + ], + "angle": 0, + "content": "Artidoro Pagnoni, Ram Pasunuru, Pedro Rodriguez, John Nguyen, Benjamin Muller, Margaret Li, Chunting Zhou, Lili Yu, Jason Weston, Luke Zettlemoyer, Gargi Ghosh, Mike Lewis, Ari Holtzman, and Srinivasan Iyer. Byte latent transformer: Patches scale better than tokens, 2024. URL https://arxiv.org/abs/2412.09871." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.349, + 0.828, + 0.448 + ], + "angle": 0, + "content": "Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Katrin Erk and Noah A. Smith (eds.), Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1525-1534, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1144. URL https://aclanthology.org/P16-1144." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.455, + 0.827, + 0.486 + ], + "angle": 0, + "content": "Jackson Petty, Sjoerd van Steenkiste, Fei Sha, Ishita Dasgupta, Dan Garrette, and Tal Linzen. The impact of depth and width on transformer language model generalization. 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.493, + 0.827, + 0.535 + ], + "angle": 0, + "content": "Buu Phan, Marton Havasi, Matthew Muckley, and Karen Ullrich. Understanding and mitigating tokenization bias in language models, 2024. URL https://arxiv.org/abs/2406.16829." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.543, + 0.828, + 0.615 + ], + "angle": 0, + "content": "Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. BPE-dropout: Simple and effective subword regularization. 
In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (eds.), Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 1882-1892, Online, July 2020. Association for Computational Linguistics. doi: 10. 18653/v1/2020.acl-main.170. URL https://aclanthology.org/2020.acl-main.170." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.622, + 0.828, + 0.707 + ], + "angle": 0, + "content": "Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. ProphetNet: Predicting future n-gram for sequence-to-SequencePretraining. In Trevor Cohn, Yulan He, and Yang Liu (eds.), Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 2401-2410, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.217. URL https://aclanthology.org/2020-findings-emnlp.217." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.715, + 0.828, + 0.758 + ], + "angle": 0, + "content": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. URL https://cdn.openai.com/better-language-models/language Models are unsupervised multitask_learners.pdf." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.766, + 0.828, + 0.837 + ], + "angle": 0, + "content": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Jian Su, Kevin Duh, and Xavier Carreras (eds.), Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383-2392, Austin, Texas, November 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1264. URL https://aclanthology.org/D16-1264." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.845, + 0.828, + 0.888 + ], + "angle": 0, + "content": "Siva Reddy, Danqi Chen, and Christopher D. Manning. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266, 2019. doi: 10.1162/tacl_a_00266. URL https://aclanthology.org/Q19-1016." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.896, + 0.827, + 0.925 + ], + "angle": 0, + "content": "Marco Tulio Ribeiro. A guidance language for controlling large language models, 2023. URL https://github.com/guidance-ai/guidance?tab=README-ov-file#text-not-tokens." + }, + { + "type": "list", + "bbox": [ + 0.172, + 0.103, + 0.829, + 0.925 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.949, + 0.509, + 0.96 + ], + "angle": 0, + "content": "16" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.103, + 0.826, + 0.148 + ], + "angle": 0, + "content": "Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI) Spring Symposium, 2011." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.155, + 0.826, + 0.199 + ], + "angle": 0, + "content": "Jessica Rumbelow and Matthew Watkins. Solidgoldmagikarp (plus, prompt generation), 2023. URL https://www.lesswrong.com/posts/aPeJE8bSo6rAfOaLqg/solidgoldmagikarp-plus-prompt-generation." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.207, + 0.826, + 0.251 + ], + "angle": 0, + "content": "Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: an adversarial winograd schema challenge at scale. Commun. ACM, 64(9):99-106, August 2021. ISSN 0001-0782. URL https://doi.org/10.1145/3474381." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.259, + 0.827, + 0.344 + ], + "angle": 0, + "content": "Bahar Salehi, Paul Cook, and Timothy Baldwin. A word embedding approach to predicting the compositionality of multiword expressions. In Rada Mihalcea, Joyce Chai, and Anoop Sarkar (eds.), Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 977-983, Denver, Colorado, 2015. Association for Computational Linguistics. doi: 10.3115/v1/N15-1099. URL https://aclanthology.org/N15-1099/." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.353, + 0.827, + 0.438 + ], + "angle": 0, + "content": "Craig W Schmidt, Varshini Reddy, Haoran Zhang, Alec Alameddine, Omri Uzan, Yuval Pinter, and Chris Tanner. Tokenization is more than compression. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 678-702, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.40. URL https://aclanthology.org/2024.emnlp-main.40." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.446, + 0.827, + 0.545 + ], + "angle": 0, + "content": "Nathan Schneider, Spencer Onuffer, Nora Kazour, Emily Danchik, Michael T. Mordowanec, Henrietta Conrad, and Noah A. Smith. Comprehensive annotation of multiword expressions in a social web corpus. In Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis (eds.), Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pp. 455-461, Reykjavik, Iceland, May 2014. European Language Resources Association (ELRA). URL https://aclanthology.org/L14-1433/." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.553, + 0.827, + 0.624 + ], + "angle": 0, + "content": "Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Katrin Erk and Noah A. Smith (eds.), Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715-1725, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1162. URL https://aclanthology.org/P16-1162." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.633, + 0.827, + 0.689 + ], + "angle": 0, + "content": "Anya Sims, Cong Lu, Klara Kaleb, Jakob Nicolaus Foerster, and Yee Whye Teh. Stochastok: Improving fine-grained subword understanding in LLMs. In ICLR 2025 Workshop on Building Trust in Language Models and Applications, 2025. URL https://openreview.net/forum?id=PZnDZdkGsE." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.698, + 0.825, + 0.728 + ], + "angle": 0, + "content": "Aaditya K. Singh and DJ Strouse. Tokenization counts: the impact of tokenization on arithmetic in frontier llms, 2024. URL https://arxiv.org/abs/2402.14903." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.736, + 0.827, + 0.779 + ], + "angle": 0, + "content": "Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.788, + 0.827, + 0.886 + ], + "angle": 0, + "content": "Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4149-4158, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1421. URL https://aclanthology.org/N19-1421." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.895, + 0.827, + 0.926 + ], + "angle": 0, + "content": "Chaofan Tao, Qian Liu, Longxu Dou, Niklas Muennighoff, Zhongwei Wan, Ping Luo, Min Lin, and Ngai Wong. Scaling laws with vocabulary: Larger models deserve larger" + }, + { + "type": "list", + "bbox": [ + 0.173, + 0.103, + 0.827, + 0.926 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.508, + 0.96 + ], + "angle": 0, + "content": "17" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "ref_text", + "bbox": [ + 0.189, + 0.103, + 0.826, + 0.133 + ], + "angle": 0, + "content": "vocabies. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.142, + 0.829, + 0.2 + ], + "angle": 0, + "content": "Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Metzler. Scale efficiently: Insights from pre-training and fine-tuning transformers. arXiv preprint arXiv:2109.10686, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.209, + 0.826, + 0.268 + ], + "angle": 0, + "content": "Yi Tay, Vinh Q. Tran, Sebastian Ruder, Jai Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, and Donald Metzler. Charformer: Fast character transformers via gradient-based subword tokenization. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=JtBRnr10EFN." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.277, + 0.827, + 0.375 + ], + "angle": 0, + "content": "Avijit Thawani, Jay Pujara, Filip Ilievski, and Pedro Szekely. Representing numbers in NLP: a survey and a vision. In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou (eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 644-656, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.53. URL https://aclanthology.org/2021.naacl-main.53." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.385, + 0.827, + 0.414 + ], + "angle": 0, + "content": "Ke Tran. From english to foreign languages: Transferring pre-trained language models. 
arXiv preprint arXiv:2002.07306, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.424, + 0.828, + 0.509 + ], + "angle": 0, + "content": "Menan Velayuthan and Kengatharaiyer Sarveswaran. Egalitarian language representation in language models: It all begins with tokenizers. In Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, and Steven Schockaert (eds.), Proceedings of the 31st International Conference on Computational Linguistics, pp. 5987-5996, Abu Dhabi, UAE, January 2025. Association for Computational Linguistics. URL https://aclanthology.org/2025.coling-main.400/." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.519, + 0.827, + 0.562 + ], + "angle": 0, + "content": "Bandhav Veluri, Justin Chan, Malek Itani, Tuochao Chen, Takuya Yoshioka, and Shyamnath Gollakota. Real-time target sound extraction. In ICASSP, pp. 1-5, 2023. URL https://doi.org/10.1109/ICASSP49357.2023.10094573." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.572, + 0.827, + 0.615 + ], + "angle": 0, + "content": "Tim Vieira, Ben LeBrun, Mario Giulianielli, Juan Luis Gastaldi, Brian DuSell, John Terilla, Timothy J O'Donnell, and Ryan Cotterell. From language models over tokens to language models over characters. arXiv preprint arXiv:2412.03719, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.625, + 0.828, + 0.668 + ], + "angle": 0, + "content": "Junxiong Wang, Tushaar Gangavarapu, Jing Nathan Yan, and Alexander M Rush. Mambabye: Token-free selective state space model. In First Conference on Language Modeling, 2024. URL https://openreview.net/forum?id=X1xNsuKssb." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.678, + 0.828, + 0.804 + ], + "angle": 0, + "content": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. Transformers: State-of-the-art natural language processing. In Qun Liu and David Schlangen (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38-45, Online, October 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-demos.6. URL https://aclanthology.org/2020.emnlp-demos.6." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.814, + 0.827, + 0.858 + ], + "angle": 0, + "content": "Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. Bloomberggpt: A large language model for finance, 2023. URL https://arxiv.org/abs/2303.17564." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.868, + 0.827, + 0.925 + ], + "angle": 0, + "content": "Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. ByT5: Towards a token-free future with pre-trained byte-to-byte models. Transactions of the Association for Computational Linguistics, 10:291-306, 2022. doi: 10.1162/tacl_a_00461. URL https://aclanthology.org/2022.tacl-1.17." 
+ }, + { + "type": "list", + "bbox": [ + 0.173, + 0.103, + 0.829, + 0.925 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "18" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.103, + 0.829, + 0.203 + ], + "angle": 0, + "content": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2369-2380, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1259. URL https://aclanthology.org/D18-1259." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.208, + 0.829, + 0.265 + ], + "angle": 0, + "content": "Lili Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, and Mike Lewis. MEGABYTE: Predicting million-byte sequences with multiscale transformers. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=JTmO2V9Xpz." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.272, + 0.829, + 0.343 + ], + "angle": 0, + "content": "Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? In Anna Korhonen, David Traum, and Lluis Márquez (eds.), Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4791-4800, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1472. URL https://aclanthology.org/P19-1472." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.349, + 0.829, + 0.42 + ], + "angle": 0, + "content": "Xinsong Zhang, Pengshuai Li, and Hang Li. AMBERT: A pre-trained language model with multi-grained tokenization. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pp. 421-435, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021-findings-acl.37. URL https://aclanthology.org/2021-findings-acl.37." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.426, + 0.829, + 0.512 + ], + "angle": 0, + "content": "Wanjun Zhong, Siyuan Wang, Duyu Tang, Zenan Xu, Daya Guo, Yining Chen, Jiahai Wang, Jian Yin, Ming Zhou, and Nan Duan. Analytical reasoning of text. In Marine Carpuat, Marie-Catherine de Marneffé, and Ivan Vladimir Meza Ruiz (eds.), Findings of the Association for Computational Linguistics: NAACL 2022, pp. 2306-2319, Seattle, United States, July 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-naacl.177. URL https://aclanthology.org/2022-findings-naacl.177." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.518, + 0.829, + 0.603 + ], + "angle": 0, + "content": "Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. AGIEval: A human-centric benchmark for evaluating foundation models. In Kevin Duh, Helena Gomez, and Steven Bethard (eds.), Findings of the Association for Computational Linguistics: NAACL 2024, pp. 
2299-2314, Mexico City, Mexico, June 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-naacl.149. URL https://aclanthology.org/2024.findings-naacl.149." + }, + { + "type": "list", + "bbox": [ + 0.173, + 0.103, + 0.829, + 0.603 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.173, + 0.625, + 0.452, + 0.644 + ], + "angle": 0, + "content": "A Experimental setup details" + }, + { + "type": "title", + "bbox": [ + 0.173, + 0.656, + 0.359, + 0.673 + ], + "angle": 0, + "content": "A.1 Tokenizer training" + }, + { + "type": "text", + "bbox": [ + 0.172, + 0.682, + 0.784, + 0.699 + ], + "angle": 0, + "content": "We use the HuggingFace tokenizers (Wolf et al., 2020) library for tokenizer training." + }, + { + "type": "title", + "bbox": [ + 0.173, + 0.712, + 0.403, + 0.728 + ], + "angle": 0, + "content": "A.1.1 Tokenizer training data" + }, + { + "type": "text", + "bbox": [ + 0.172, + 0.736, + 0.825, + 0.794 + ], + "angle": 0, + "content": "We produce the tokenizer training data by sampling documents uniformly at random from the OLMO2 stage 2 pretraining data, referred to as olmo-mix. We use a 10 GB subset because early experiments showed that data beyond even \\(\\sim 10\\) MB does not make a difference in the resulting tokenizer's encoding efficiency." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.799, + 0.827, + 0.926 + ], + "angle": 0, + "content": "We found that olmo-mix had several extremely long documents, with the longest \\(1\\%\\) of documents making up \\(15\\%\\) of the data. In particular, a full academic paper (specifically Veluri et al., 2023) is duplicated 2,224 times back-to-back inside one document (as delimited by special EOS tokens). Because our tokenizers are trained on small sets of data, these extremely long documents can take up a large proportion of the data, resulting in unusual tokens like chunk-based processing. To circumvent possible data duplication issues, we truncate the longest \\(1\\%\\) of documents in the tokenizer training data to the \\(99\\%\\) percentile of document lengths. As future practitioners train SuperBPE tokenizers, we encourage especial attention to dedduplication, which may have an outsized impact on SuperBPE tokenizers." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "19" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.104, + 0.504, + 0.12 + ], + "angle": 0, + "content": "A.1.2 Limit on the size of superword tokens" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.129, + 0.828, + 0.271 + ], + "angle": 0, + "content": "Even after truncating the longest \\(1\\%\\) of documents, we found that SuperBPE tokenizers can still have extremely long tokens consisting of highly duplicated boiler-plate text such as the Project Gutenberg license or common internet phrases such as You’re commenting using your. This issue is already present in BPE tokenizers trained on Chinese, which contain sentence-long tokens clearly taken from pornographic content. For instance, tokens in GPT-4O’s tokenizer include 最新高清无码 = latest HD uncensored and 娱乐网址 = entertainment website. To prevent concerns about the tokenizer directly revealing parts of the training data (Hayase et al., 2024), we enforce an upper bound of 4 words in our tokens. 
Empirically, we found that this had no measurable impact on the encoding efficiency of the tokenizers or the resulting trained LMs." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.286, + 0.387, + 0.3 + ], + "angle": 0, + "content": "A.1.3 Pretokenization rules" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.31, + 0.825, + 0.339 + ], + "angle": 0, + "content": "We implement whitespace pretokenization with the default regex string from tokenizers which was adopted by the GPT-2 tokenizer." + }, + { + "type": "equation", + "bbox": [ + 0.18, + 0.352, + 0.513, + 0.367 + ], + "angle": 0, + "content": "\\[\n? \\backslash p \\{L \\} + |? [ ^ {\\wedge} \\backslash s \\backslash p \\{L \\} \\backslash p \\{N \\} ] + | \\backslash s + (? ! \\backslash S) | \\backslash s +\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.38, + 0.825, + 0.437 + ], + "angle": 0, + "content": "Note that the original GPT-2 pretokenization regex string also splits on contractions, e.g., splitting I'm into [I, 'm]. Since this choice is not universal among commercial tokenizers and is not related to whitespace pretokenization (and furthermore creates plenty of undesirable edge cases [Land, 2024]), we do not include this rule." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.442, + 0.827, + 0.542 + ], + "angle": 0, + "content": "Independently of whitespace pretokenization (i.e., for both BPE and SuperBPE tokenizers), we follow recent convention (as introduced by GPT-3.5 and borrowed by LLAMA3, OLMo2) and pretokenize digits into blocks of 3. We make one modification, by grouping digits into 3 from the right rather than from the left, so that, e.g., 1000 would be pretokenized as [1, 000] instead of [100, 0]. This choice was recently found to yield improved performance on math benchmarks, even when applied solely at inference time (Singh & Strouse, 2024). Digit pretokenization is enforced with the following regex." + }, + { + "type": "equation", + "bbox": [ + 0.182, + 0.553, + 0.331, + 0.569 + ], + "angle": 0, + "content": "\\[\n(? = (\\backslash d \\{3 \\}) + (?! \\backslash d))\n\\]" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.584, + 0.396, + 0.6 + ], + "angle": 0, + "content": "A.1.4 Special casing of colon" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.608, + 0.827, + 0.695 + ], + "angle": 0, + "content": "In order to make our tokenizer compatible with the common question-answering format where the prompt ends with a colon and the continuation is expected to start with a space, we \"special-case\" colon by preventing the algorithm from learning any tokens that contain\": as a substring. Without this fix, common question/answer prompts might produce distorted distributions over completions. Please see \\(\\S C.3\\) for further discussion. This affects the resulting tokenizer minimally in terms of the learned vocabulary." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.71, + 0.436, + 0.726 + ], + "angle": 0, + "content": "A.2 Scaling model configurations" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.736, + 0.827, + 0.862 + ], + "angle": 0, + "content": "When matching inference compute, the goal is to match the average flops per byte of generated text between two models with different tokenizers. To do so, we scale the model up to cancel the effect of longer tokens, which requires precise control over the model's size. 
To produce a model config with an arbitrary inference compute cost, we first represent the inference flops per token as a polynomial in terms of the model dimension, MLP hidden dimension, and number of layers. Conveniently, once the model dimension and number of layers are chosen, the flops are affine in the MLP hidden dimension, so we can easily solve for the MLP hidden dimension that gets us closest to the desired budget. We fix the head dimension to that of the base model." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.868, + 0.825, + 0.926 + ], + "angle": 0, + "content": "To find the best config overall, we grid search over the hidden dimension (which must remain a multiple of the head dimension) and number of layers, solving for the MLP hidden dimension at each step. We choose the config which expands the transformer by the most uniform factors. This is measured by taking the ratios of the current parameters with the" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "20" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.104, + 0.825, + 0.161 + ], + "angle": 0, + "content": "base config's parameters, applying the logarithm, and taking the standard deviation. While prior work has explored the best way to scale transformer models (Tay et al., 2021; Petty et al., 2023), we believe that scaling all parameters uniformly is reasonable since we are only increasing the model size by a small amount." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.167, + 0.825, + 0.195 + ], + "angle": 0, + "content": "We present the exact model hyperparameters for all model sizes used in our experiments in Table 4." + }, + { + "type": "table", + "bbox": [ + 0.18, + 0.204, + 0.819, + 0.322 + ], + "angle": 0, + "content": "
680M910M1.9B2.5B8B11B
Parameter count678.2M912.5M1.893B2.536B8.115B11.30B
Model dimension10241,2162,0482,3044,0964,608
MLP hidden dimension8,1929,72816,38418,43222,01624,704
Head dimension6464128128128128
Number of heads161916183236
Number of layers161816193237
Vocabulary size20,000520,000520,000520,000520,000520,0005
" + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.331, + 0.825, + 0.402 + ], + "angle": 0, + "content": "Table 4: Model parameters for all model sizes. Model sizes 910M, 2.5B, and 11B are scaled versions of 680M, 1.9B, and 8B respectively. All other parameters match those of OLMO 300M (from the OLMO model ladder) for sizes 680M and 910M, OLMO 1B for sizes 1.9B and 2.5B, or OLMO2 7B for sizes 8B and 11B, respectively. Maximum sequence length values for various tokenizers are listed in Table 2." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.435, + 0.468, + 0.451 + ], + "angle": 0, + "content": "A.3 Compute used for model training" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.461, + 0.531, + 0.476 + ], + "angle": 0, + "content": "All models were pretrained on \\(328\\times \\mathrm{H}100\\) nodes." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.493, + 0.342, + 0.507 + ], + "angle": 0, + "content": "A.4 Evaluation Suite" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.518, + 0.827, + 0.576 + ], + "angle": 0, + "content": "Our evaluation suite builds on DataComp-LM's core evaluation of 22 tasks (Li et al., 2024), which was found to provide low-variance signal of learning. We add 8 more popular tasks (e.g., MMLU, GSM8K) while also covering string manipulation tasks (e.g., CUTE), which are known to be challenging for LMs due to their tokenizers." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.581, + 0.826, + 0.68 + ], + "angle": 0, + "content": "All evaluations are based on decoding from the model and scoring the generation by either comparing it to the ground truth or evaluating its functional correctness (in the case of coding tasks). For multiple choice (MC) tasks, we check whether the predicted answer choice is an exact match (EM) to the target (we observe that effectively \\(100\\%\\) of model generations are valid answer choices, especially for later checkpoints). For open-ended tasks, we check whether the generated output contains the ground truth answer exactly, and for coding tasks, we report pass@10." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.686, + 0.825, + 0.756 + ], + "angle": 0, + "content": "We provide 5 in-context examples for all tasks, except for CoQA, which naturally contains in-context examples in the conversational context, and the coding tasks (HumanEval and MBPP), which are evaluated zero-shot following prior work. We use a maximum of 5,000 examples from each dataset, though some datasets contain much fewer examples. BB below stands for BIG-Bench." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.771, + 0.825, + 0.814 + ], + "angle": 0, + "content": "ARC consists of 4-way MC questions from grades 3-9 science exams. It contains two splits, ARC-Easy, which require knowledge of basic science, and ARC-Challenge, which require some procedural reasoning (Clark et al., 2018)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.829, + 0.825, + 0.86 + ], + "angle": 0, + "content": "Arithmetic contains simple arithmetic problems (Brown et al., 2020).11 We use the 2da, 2dm, and 2ds splits for addition, multiplication, and division of (up to) 2-digit numbers." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.873, + 0.825, + 0.903 + ], + "angle": 0, + "content": "BoolQ contains naturally occurring yes/no questions paired with passages that provide an answer (Clark et al., 2019)." 
+ }, + { + "type": "page_footnote", + "bbox": [ + 0.186, + 0.909, + 0.592, + 0.924 + ], + "angle": 0, + "content": "11https://huggingface.co/datasets/EleutherAI/arithmetic" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.508, + 0.96 + ], + "angle": 0, + "content": "21" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.104, + 0.825, + 0.133 + ], + "angle": 0, + "content": "CommonsenseQA contains 5-way MC questions that require commonsense knowledge to answer (Talmor et al., 2019)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.149, + 0.828, + 0.178 + ], + "angle": 0, + "content": "COPA contains two-way MC questions about cause and effect (Roemmle et al., 2011; Kavumba et al., 2019)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.194, + 0.828, + 0.252 + ], + "angle": 0, + "content": "CoQA consists of passages with a series of conversational questions about the passage Reddy et al. (2019). Each question requires the prior conversational context, due to possible coreference across questions. Because these contextual questions naturally serve as in-context examples, we do not provide additional in-context examples." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.267, + 0.828, + 0.31 + ], + "angle": 0, + "content": "BB CS Algorithms consists of two subtasks, determining whether a given series of parentheses is balanced and identifying the longest common subsequence in two letter strings (BIG-bench, 2023)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.325, + 0.827, + 0.368 + ], + "angle": 0, + "content": "CUTE contains questions that require the model to understand and manipulate spelling, such as replacing all instances of a particular letter in a word with another letter (Edman et al., 2024)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.384, + 0.825, + 0.414 + ], + "angle": 0, + "content": "DROP contains questions about passages, potentially requiring reasoning over multiple pieces of information in the passage (Dua et al., 2019)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.429, + 0.827, + 0.471 + ], + "angle": 0, + "content": "BB Dyck Languages consists of a sequence of parentheses and requires the model to predict the correct sequence of closing parentheses so that the entire sequence is well-balanced." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.487, + 0.825, + 0.558 + ], + "angle": 0, + "content": "GSM8K contains grade school math word problems that require between 2 and 8 steps to solve. In the in-context examples, we provide the answer passage that contains intermediate steps with calculator annotations removed. The model is expected to provide the final numerical answer after four hashtags (# ###) that delimit the reasoning and final answer (Cobbe et al., 2021)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.574, + 0.825, + 0.603 + ], + "angle": 0, + "content": "HellaSwag contains 4-way MC questions which ask for the most natural continuation given the context (Zellers et al., 2019)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.619, + 0.827, + 0.648 + ], + "angle": 0, + "content": "HotpotQA contains questions along with a corresponding passage from Wikipedia containing the answer (Yang et al., 2018)." 
+ }, + { + "type": "text", + "bbox": [ + 0.17, + 0.664, + 0.828, + 0.763 + ], + "angle": 0, + "content": "HumanEval contains programming problems where the model is tasked with completing a Python function given its docstring (Chen et al., 2021). We use \"\\nclass,\" \"\\ndef,\" \"\\n#,\"\\nif,\" as stop tokens. Following the original paper, we sample 20 continuations with top \\( p = 0.95 \\) and temperature \\( = 0.8 \\). Models are allowed to generate for a maximum of 128 new tokens. The functional correctness of generations is automatically evaluated using test cases. We use the 20 generation to make an unbiased estimate of the pass@10 rate, i.e., how likely at least one of 10 sampled solutions for a problem is correct." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.779, + 0.735, + 0.796 + ], + "angle": 0, + "content": "Jeopardy contains open-ended questions from the \"Jeopardy!\" quiz show.[12]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.811, + 0.827, + 0.854 + ], + "angle": 0, + "content": "Lambada contains narratives without the last word, which is inferrable given the context (Paperno et al., 2016). This task requires models to attend to the full narrative instead of only the local context." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.87, + 0.825, + 0.899 + ], + "angle": 0, + "content": "BB Language Identification contains sentences in different languages, and the task is to choose the language of the sentence from a long list of options." + }, + { + "type": "page_footnote", + "bbox": [ + 0.186, + 0.909, + 0.674, + 0.925 + ], + "angle": 0, + "content": "12https://www.kaggle.com/datasets/tunguz/200000-jeopardy-questions" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "22" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.104, + 0.825, + 0.148 + ], + "angle": 0, + "content": "LSAT-AR contains MC questions that evaluate the analytical reasoning (AR) ability of LMs (Zhong et al., 2022; 2024). Test questions are drawn from the Law School Admission Test (LSAT) from 1991 to 2016." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.162, + 0.825, + 0.204 + ], + "angle": 0, + "content": "MBPP contains Python programming problems where the model is given a description of the desired function and a series of unit tests. We use the same evaluation setup as HumanEval." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.22, + 0.826, + 0.264 + ], + "angle": 0, + "content": "MMLU contains 4-way MC questions covering 57 different domains, covering both world knowledge and problem-solving abilities (Hendrycks et al., 2021). Note that we report a straight average over the 5000-example sample, rather than a macro-average over subjects." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.279, + 0.827, + 0.309 + ], + "angle": 0, + "content": "OpenbookQA contains 4-way MC questions that require multi-step reasoning and commonsense knowledge (Mihaylov et al., 2018)." + }, + { + "type": "text", + "bbox": [ + 0.172, + 0.323, + 0.825, + 0.354 + ], + "angle": 0, + "content": "BB Operators contains questions where the model is given a function definition and asked to compute the output of that function given a particular input." 
+ }, + { + "type": "text", + "bbox": [ + 0.171, + 0.367, + 0.826, + 0.396 + ], + "angle": 0, + "content": "PIQA contains MC questions that require physical commonsense reasoning (Bisk et al., 2020)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.411, + 0.825, + 0.441 + ], + "angle": 0, + "content": "BB Repeat-Copy-Logic contains instructions that ask the model to produce a particular string (Austin et al., 2021)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.456, + 0.827, + 0.486 + ], + "angle": 0, + "content": "SQuAD contains passages paired with questions about the passage (Rajpurkar et al., 2016). The answer is always a span from the passage." + }, + { + "type": "text", + "bbox": [ + 0.172, + 0.5, + 0.797, + 0.516 + ], + "angle": 0, + "content": "TriviaQA contains open-ended questions about world knowledge (Joshi et al., 2017)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.53, + 0.827, + 0.559 + ], + "angle": 0, + "content": "BB WikidataQA require models to complete factual statements with the correct continuation." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.575, + 0.827, + 0.618 + ], + "angle": 0, + "content": "Winograd contains binary MC questions where the model is given a context and asked to determine which entity a pronoun refers to, between two options (Levesque et al., 2012). Correctly answer the question requires commonsense knowledge and contextual reasoning." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.633, + 0.825, + 0.663 + ], + "angle": 0, + "content": "Winogrande contain questions with the same schema as Winograd, but increases both the scale and difficulty of the dataset (Sakaguchi et al., 2021)." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.682, + 0.379, + 0.699 + ], + "angle": 0, + "content": "B Additional Results" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.714, + 0.485, + 0.731 + ], + "angle": 0, + "content": "B.1 How BPB varies with context length" + }, + { + "type": "text", + "bbox": [ + 0.169, + 0.74, + 0.825, + 0.84 + ], + "angle": 0, + "content": "In our main experiments (§3), we adjust the context size of SuperBPE models to match the effective context size of the BPE model in raw text. To justify this design choice, we show that the next token becomes easier to predict as a function of the preceding context in bytes (not tokens). Figure 6 shows the average BPB at every token index (left) vs byte index (right) — when measured at fixed token indices, SuperBPE has an advantage from seeing more context (achieving lower loss on average at the same token index), whereas at fixed byte indices, this advantage goes away." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.856, + 0.334, + 0.87 + ], + "angle": 0, + "content": "B.2 Task evaluation" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.882, + 0.825, + 0.926 + ], + "angle": 0, + "content": "We report the individual task performance of BPE and all SuperBPE models in Table 5 (this an expansion of Table 1). We also show a subset of task-specific performance curves during pretraining in Figure 12." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "23" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.2, + 0.112, + 0.478, + 0.289 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.251, + 0.307, + 0.434, + 0.321 + ], + "angle": 0, + "content": "(a) BPB vs tokens of context" + }, + { + "type": "image", + "bbox": [ + 0.514, + 0.112, + 0.791, + 0.289 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.569, + 0.307, + 0.744, + 0.321 + ], + "angle": 0, + "content": "(b) BPB vs bytes of context" + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.332, + 0.825, + 0.377 + ], + "angle": 0, + "content": "Figure 6: When comparing the normalized loss of the next token, controlling for preceding tokens of context gives SuperBPE an advantage, while controlling for bytes of context gives a close match between BPE and SuperBPE." + }, + { + "type": "table", + "bbox": [ + 0.195, + 0.404, + 0.803, + 0.796 + ], + "angle": 0, + "content": "
CategoryTaskBPE 8BSuperBPE 8BSuperBPE 11B
t = 80kt = 160kt = 180k
KnowledgeARC-Easy (MC)46.660.863.667.160.6
ARC-Challenge (MC)35.146.443.950.643.9
Jeopardy (EM)42.140.241.841.842.2
MMLU (MC)36.541.942.644.741.0
OpenbookQA (MC)33.249.849.454.446.4
TriviaQA (EM)60.659.761.961.362.3
WikidataQA (EM)69.768.269.570.970.9
Math & ReasoningArithmetic (EM)54.863.258.659.356.9
GSM8K (EM)6.46.96.76.77.4
LSAT-AR (MC)21.323.924.323.020.9
Operators (EM)35.532.235.533.637.9
Repeat-Copy-Logic (EM)3.16.26.26.23.1
CodingHumanEval (pass@10)15.915.014.413.415.9
MBPP (pass@10)27.525.328.428.329.4
Reading ComprehensionBoolQ (MC)59.765.262.364.664.7
CoQA (EM)12.612.812.513.213.1
DROP (EM)31.328.632.831.433.1
HotpotQA (EM)53.552.554.755.254.6
SQuAD (EM)75.174.376.275.877.2
CommonsenseCommonsenseQA (MC)33.550.052.353.850.5
COPA (MC)77.086.687.685.897.0
PIQA (MC)55.257.761.859.859.2
Winograd (MC)50.452.555.253.152.3
Winogrande (MC)47.351.251.652.650.2
Language UnderstandingHellaSwag (MC)29.731.230.333.736.6
LAMBADA (EM)77.072.875.170.675.8
Language Identification (EM)8.810.29.79.010.1
String ManipulationCS Algorithms (EM)46.147.342.648.649.1
CUTE (EM)31.332.232.832.635.7
Dyck-Languages (EM)15.923.218.814.216.7
Average39.842.943.443.843.8
" + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.806, + 0.825, + 0.836 + ], + "angle": 0, + "content": "Table 5: Performance of BPE and SuperBPE models on 30 downstream tasks. This is an expansion of Table 1 with more models." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.877, + 0.331, + 0.89 + ], + "angle": 0, + "content": "B.3 BPB evaluation" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.909, + 0.741, + 0.926 + ], + "angle": 0, + "content": "See Figure 7 for the bits-per-byte during pretraining of all models we compare." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "24" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.27, + 0.101, + 0.73, + 0.366 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.17, + 0.376, + 0.828, + 0.421 + ], + "angle": 0, + "content": "Figure 7: Bits-per-byte of BPE and SuperBPE models during pretraining. The BPE 8B, SuperBPE 8B (\\(t = 180\\mathrm{k}\\)), and SuperBPE 11B attain 0.7465, 0.7482, and 0.7445 BPB respectively at the end of training." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.447, + 0.453, + 0.464 + ], + "angle": 0, + "content": "B.4 Additional scaling experiments" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.474, + 0.828, + 0.532 + ], + "angle": 0, + "content": "Our tokenizer has several interesting interactions with LM scaling, purely due to its increased efficiency. For the purpose of this section, let \\(\\alpha\\) denote the ratio of our tokenizer's efficiency to the efficiency of a normal BPE tokenizer. (For example, we have \\(\\alpha \\approx 1.49\\) for our most efficient tokenizer.)" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.537, + 0.826, + 0.583 + ], + "angle": 0, + "content": "The primary advantage of a more efficient tokenizer is a reduction of the context length (in tokens) for the same effective context length (in bytes). All other things being equal, this gives:" + }, + { + "type": "text", + "bbox": [ + 0.21, + 0.593, + 0.518, + 0.611 + ], + "angle": 0, + "content": "1. A \\( 1 / \\alpha^2 \\) reduction in attention compute." + }, + { + "type": "text", + "bbox": [ + 0.209, + 0.615, + 0.543, + 0.631 + ], + "angle": 0, + "content": "2. A \\(1 / \\alpha\\) reduction in non-attention compute." + }, + { + "type": "text", + "bbox": [ + 0.209, + 0.635, + 0.825, + 0.664 + ], + "angle": 0, + "content": "3. A \\(1 / \\alpha\\) reduction in activation memory during training and KV-cache size during inference." + }, + { + "type": "list", + "bbox": [ + 0.209, + 0.593, + 0.825, + 0.664 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.677, + 0.825, + 0.723 + ], + "angle": 0, + "content": "Thus, if the context length is short, the total compute savings will be close to \\(1 / \\alpha\\). For longer contexts, the compute savings may approach \\(1 / \\alpha^2\\). Given a fixed training budget, there are two natural ways to convert these savings into improved performance." 
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.739, + 0.472, + 0.755 + ], + "angle": 0, + "content": "B.4.1 Matching model parameter count" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.764, + 0.827, + 0.822 + ], + "angle": 0, + "content": "In many applications of language models, such as deployment to consumer or edge devices, it is crucial to keep the model's size under control. In this regime, we will assume the model size fixed. This directly grants the aforementioned benefits, and we will turn to increasing the number of training steps to match the training budget." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.827, + 0.827, + 0.927 + ], + "angle": 0, + "content": "Since the amount of text seen per step is remains the same due to the fixed effective context length, a more efficient tokenizer allows the model to see more text during training for the same budget. This may lead to improved performance on downstream tasks since the model is more likely to have seen relevant training examples during training. Additionally, although the model is the same size, it requires less compute and memory at inference time to perform the same tasks. In some settings, these gains can be used to amplify inference-time scaling (Snell et al., 2024), leading to further potential gains." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "25" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.28, + 0.131, + 0.685, + 0.364 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.376, + 0.828, + 0.435 + ], + "angle": 0, + "content": "Figure 8: Results for scaling both model parameters and train tokens proportionally. Compared to the BPE baseline, we consider a SuperBPE model that matches parameter count and a SuperBPE model that matches inference compute. Here we see the spread between the three settings decreases with scale." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.47, + 0.434, + 0.487 + ], + "angle": 0, + "content": "B.4.2 Matching inference compute" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.499, + 0.825, + 0.543 + ], + "angle": 0, + "content": "In other applications of language models, model size is less critical compared to inference compute. In these situations, it may be more desirable to scale the model size up to absorb the extra compute." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.548, + 0.827, + 0.683 + ], + "angle": 0, + "content": "Changing the model size has a strong impact on scaling. Depending on the context length, we may scale the model by a factor of anywhere between \\(\\alpha\\) and \\(\\alpha^2\\) in order to match inference compute. Since each training step involves \\(1 / \\alpha\\) as many tokens, the ratio of tokens to model parameters per step may be reduced by as much as \\(1 / \\alpha^3\\). Prior work on LM scaling (Hoffmann et al., 2022; Kaplan et al., 2020) reports diminishing gains once the ratio of the numbers of train tokens and model parameters becomes too large. An \\(\\alpha\\) times more efficient tokenizer allows us to train for up to \\(\\alpha^3\\) times longer while maintaining the same token/parameter ratio and without increasing inference compute, delaying the regime of diminishing gains." 
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.707, + 0.319, + 0.723 + ], + "angle": 0, + "content": "B.4.3 Experiments" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.736, + 0.825, + 0.794 + ], + "angle": 0, + "content": "We train 680M and 1.9B sized BPE models on various numbers of tokens—ranging from \\(\\approx 20\\) to \\(\\approx 80\\) tokens per parameter—to establish a baseline scaling trend. We then train two models with SuperBPE tokenizers for each baseline model: one with matching parameter count and one with matching inference compute cost." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.799, + 0.827, + 0.926 + ], + "angle": 0, + "content": "There are a couple interesting ways to visualize these results: in Figure 5, we hold the model size fixed and increase the number of training tokens, and in Figure 8, we hold the ratio of train tokens to model parameters fixed (inference compute matched will be fixed 0.7 times lower) and vary both the model size and the number of training tokens. The general trends observed from these results are that matching inference compute is almost universally the best, while matching parameter count tends to be worse than the baseline except in the undertrained regime, where it is better than the baseline. The differences between the different settings increase with overtraining, but decreases when scaling both model size and training tokens at the same time." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "26" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.185, + 0.101, + 0.493, + 0.3 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.267, + 0.306, + 0.416, + 0.322 + ], + "angle": 0, + "content": "(a) Superword density" + }, + { + "type": "image", + "bbox": [ + 0.517, + 0.1, + 0.811, + 0.3 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.545, + 0.306, + 0.768, + 0.322 + ], + "angle": 0, + "content": "(b) Superword length distribution" + }, + { + "type": "image_caption", + "bbox": [ + 0.17, + 0.332, + 0.828, + 0.417 + ], + "angle": 0, + "content": "Figure 9: (Left) The number of superword tokens in a SuperBPE tokenizer, as a function of the transition point. A superword token is any token that violates the whitespace pretokensization rule from Stage 1. With an early transition point of \\( t = 60\\mathrm{K} \\), about \\( 85\\% \\) of the tokens learned in Stage 2 are superword tokens. For \\( t > 100\\mathrm{k} \\), close to \\( 100\\% \\) of Stage 2 tokens are superwords. (Right) The distribution of superword token lengths in terms of number of words, for \\( t = 180\\mathrm{k} \\)." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.443, + 0.51, + 0.461 + ], + "angle": 0, + "content": "C Analysis of SuperBPE Tokenizers" + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.476, + 0.414, + 0.492 + ], + "angle": 0, + "content": "C.1 Superword token analysis" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.503, + 0.825, + 0.588 + ], + "angle": 0, + "content": "How many superword tokens are in SuperBPE tokenizers? While the second stage of the pretokenization curriculum allows learning of superword tokens, subword tokens can still be learned. 
Shown in Figure 9a, for transition points \\( t < 80\\mathrm{k} \\), the number of superword tokens is relatively steady around 120k. Past \\( t > 100\\mathrm{k} \\), almost all tokens learned in Stage 2 are superword tokens. Figure 9b shows the number of whitespace-delimited words in the superword tokens of SuperBPE with \\( t = 180\\mathrm{k} \\)." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.607, + 0.533, + 0.623 + ], + "angle": 0, + "content": "C.2 Analysis of token frequencies in encoding" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.633, + 0.827, + 0.718 + ], + "angle": 0, + "content": "We also analyze token frequency statistics under BPE versus SuperBPE tokenizers. Figure 10a shows the relation between token rank (in frequency) and frequency. While tokens in BPE demonstrate a standard Zipfian relation, the slope of SuperBPE curves have a more shallow slope, meaning that the rate of decay in token frequency is smaller. The smaller proportion of tokens with very low counts may reduce prevalence and severity of glitch tokens (Rumbelow & Watkins, 2023; Land & Bartolo, 2024)." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.724, + 0.828, + 0.796 + ], + "angle": 0, + "content": "Figure 10b shows the minimum number of tokens from the vocabulary needed to cover any given proportion of data. For BPE, the relation is striking—only \\(57\\%\\) of tokens are needed to encode \\(99\\%\\) of the data! The remaining tokens make up a long tail of infrequent tokens. In contrast, SuperBPE tokenizers make better use of the vocabulary. For \\(t = 80k\\) and \\(t = 180k\\), this statistic is \\(90\\%\\) and \\(70\\%\\) of tokens, respectively." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.814, + 0.588, + 0.83 + ], + "angle": 0, + "content": "C.3 Distributional Distortion at the Prompt Boundary" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.84, + 0.825, + 0.926 + ], + "angle": 0, + "content": "Prior work (Lundberg, 2023; Phan et al., 2024) has shown that LMs using BPE tokenizers may produce distorted generations due to the forced partition in tokenization between a prompt and its completion. This issue stems from the fact that users typically desire completions conditioned on a text prompt. The natural approach to obtaining such completions is to take the prompt, tokenize it with the proper tokenizer, and then sample a completion of the resulting token sequence from the LM." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "27" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.173, + 0.032, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.184, + 0.101, + 0.495, + 0.299 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.235, + 0.306, + 0.448, + 0.322 + ], + "angle": 0, + "content": "(a) Token frequency distribution" + }, + { + "type": "image", + "bbox": [ + 0.5, + 0.1, + 0.811, + 0.3 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.598, + 0.306, + 0.715, + 0.322 + ], + "angle": 0, + "content": "(b) Data covering" + }, + { + "type": "image_caption", + "bbox": [ + 0.17, + 0.331, + 0.825, + 0.39 + ], + "angle": 0, + "content": "Figure 10: (Left) Token counts when ordered by frequency. The rate of decay in token frequency is smaller. (Right) The minimum number of tokens needed to cover a given proportion of the data. 
SuperBPE tokenizers make better use of the vocabulary, while BPE tokenizers have a long tail of infrequent tokens." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.415, + 0.827, + 0.515 + ], + "angle": 0, + "content": "For a simple example of how this can go wrong, consider a tokenizer with base vocabulary of A and B and a single merge forming the token AB. Let's suppose we trained a model using this tokenizer on the strings \"AA\", \"AB\", and \"BB\" with equal proportions. If we condition on the text prefix \"A\", there are two equally probable continuations: \"A\" and \"B\". However, A is the only valid completion of the token prefix A, since the token B never follows the token A during training. In other words, the prompt-completion pair (A,B) is canonically tokenized using a token that crosses the boundary between the prompt and the completion." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.52, + 0.828, + 0.633 + ], + "angle": 0, + "content": "While this problem is shared by all BPE tokenizers, it can be partially mitigated by pre-tokensization: if the prompt and the completion are separated during the pretokenization step, then it is impossible for a token to cross the boundary. This fix tends to work well for English, where the completion is typically expected to begin with whitespace, so whitespace pretokenization would apply. However, there are many settings where whitespace pretokenization cannot fix the underlying issue, including natural languages that do not use whitespace to separate words (like Chinese and Japanese), programming languages, and constrained generation (Lundberg, 2023; Ribeiro, 2023)." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.638, + 0.825, + 0.723 + ], + "angle": 0, + "content": "Several fixes for this issue have been proposed: at training time, token merges can be randomly dropped (Provilkov et al., 2020; Sims et al., 2025; DeepSeek-AI, 2025) to expose LMs to the internal makeup of tokens; at inference time, options include token healing (Lundberg, 2023), algorithmic correction (Phan et al., 2024), and enumeration of all relevant segmentations of the prompt (Vieira et al., 2024). We leave a detailed comparison of these techniques to future work." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.728, + 0.825, + 0.76 + ], + "angle": 0, + "content": "Additionally, the issue does not apply at all to models that separate the user's input from the model's response using special tokens, as is typical for chat models." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.778, + 0.393, + 0.793 + ], + "angle": 0, + "content": "D Other Related Work" + }, + { + "type": "text", + "bbox": [ + 0.172, + 0.81, + 0.663, + 0.825 + ], + "angle": 0, + "content": "Please see Mielke et al. (2021) for a survey of subword tokenization." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.84, + 0.827, + 0.926 + ], + "angle": 0, + "content": "Pretokenization Pretokenization defines how the text is split in order to prevent certain pairs of tokens from being merged. GPT-2 (Radford et al., 2019) introduced a regular expression (regex) which defines the pretokenization pattern. These regex strings have gained complexity over time; GPT-3.5 limits the number of digits in numerical tokens to 3, and allows single punctuation to be merged with the start of words (presumably to accommodate code, as it allows .get to be a single token). 
Prior work has shown that, for" + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "28" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.174, + 0.1, + 0.825, + 0.325 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.338, + 0.825, + 0.369 + ], + "angle": 0, + "content": "Figure 11: Token counts when ordered by token ID, which reflects the order in which tokens were learned in tokenizer training." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.394, + 0.828, + 0.48 + ], + "angle": 0, + "content": "instance, digit pretokenization choices (Nogueira et al., 2021; Thawani et al., 2021; Singh & Strouse, 2024) can significantly impact arithmetic performance. It is also likely that pretokenization affects different languages differently (Velayuthan & Sarveswaran, 2025; Ahia et al., 2023), due to natural statistics of the average word length, which acts as an upper bound on encoding efficiency in that language under subword tokenization. Nonetheless, the effectiveness of many pretokenization choices have not been thoroughly studied." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.494, + 0.825, + 0.525 + ], + "angle": 0, + "content": "\\(n\\)-gram language models Our work is loosely related to \\(n\\)-gram LMs, which incorporate \\(n\\)-gram statistics into the next-word prediction (Brants et al., 2007; Liu et al., 2024a)." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.538, + 0.827, + 0.625 + ], + "angle": 0, + "content": "Internal representation of semantic units Previous work has showed that the early layers of the LM may \"aggregate\" information over multi-token entities (e.g., [New, York]) into the last token's (e.g., York) hidden representation (Meng et al., 2022; Kaplan et al., 2025; Lad et al., 2024). This suggests that LMs naturally learn multi-word representations, and segmentating text into more semantically cohesive units at the input level (e.g., having New York as a single token) may simplify this process." + }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.509, + 0.96 + ], + "angle": 0, + "content": "29" + } + ], + [ + { + "type": "header", + "bbox": [ + 0.174, + 0.033, + 0.518, + 0.049 + ], + "angle": 0, + "content": "Published as a conference paper at COLM 2025" + }, + { + "type": "image", + "bbox": [ + 0.173, + 0.309, + 0.49, + 0.491 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.506, + 0.309, + 0.825, + 0.491 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.172, + 0.503, + 0.49, + 0.686 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.51, + 0.504, + 0.826, + 0.686 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.184, + 0.697, + 0.814, + 0.714 + ], + "angle": 0, + "content": "Figure 12: Performance during pretraining for a subset of tasks in our evaluation suite." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.49, + 0.948, + 0.51, + 0.96 + ], + "angle": 0, + "content": "30" + } + ] +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13423/ee02b8fd-a1b4-4f31-b152-93f5d36bdbe3_origin.pdf b/data/2025/2503_13xxx/2503.13423/ee02b8fd-a1b4-4f31-b152-93f5d36bdbe3_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5e1f3ab1c0fe79058a83d732f007ab88dd72e8c4 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13423/ee02b8fd-a1b4-4f31-b152-93f5d36bdbe3_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93be3cf8411166f6183cfcad9112748ba68ba79433e4f0a5a89be39c59c7d6cf +size 1715365 diff --git a/data/2025/2503_13xxx/2503.13423/full.md b/data/2025/2503_13xxx/2503.13423/full.md new file mode 100644 index 0000000000000000000000000000000000000000..afd5646b266f4d37128ff1b72f9bee3b144a5b8b --- /dev/null +++ b/data/2025/2503_13xxx/2503.13423/full.md @@ -0,0 +1,518 @@ +# SuperBPE: Space Travel for Language Models + +*Alisa Liu♥♠ *Jonathan Hayase♥ + +Valentin Hofmann $^{\diamond \diamond}$ Sewoong Oh $^{\diamond}$ Noah A. Smith $^{\diamond \diamond}$ Yejin Choi $^{\spadesuit}$ + +$\diamond$ University of Washington $\spadesuit$ NVIDIA $\diamond$ Allen Institute for AI + +# Abstract + +The assumption across nearly all language model (LM) tokenization schemes is that tokens should be subwords, i.e., contained within word boundaries. Despite providing a seemingly reasonable inductive bias, we question whether this common practice limits the potential of modern LMs. Whitespace is not a reliable delimiter of meaning, as evidenced by multi-word expressions (e.g., by the way), cross-lingual variation in the number of words needed to express a concept (e.g., spacesuit helmet in German is raumanzugehm), and languages that do not use whitespace at all (e.g., Chinese). To explore the potential of tokenization beyond subwords, we introduce a "superword" tokenizer, SuperBPE, that incorporates a simple pretokensization curriculum into the byte-pair encoding (BPE) algorithm to first learn subwords and then superwords that bridge whitespace. This modification dramatically improves encoding efficiency: when limiting vocabulary size to 200k, SuperBPE encodes a fixed piece of text with up to $33\%$ fewer tokens on average than BPE. In experiments, we pretrain 8B transformer LMs from scratch while fixing model size, vocabulary size, and train compute, varying only the algorithm for learning the vocabulary. Our model trained with SuperBPE achieves an average $+4.0\%$ absolute improvement over the BPE baseline across 30 downstream tasks (including $+8.2\%$ on MMLU), while simultaneously requiring $27\%$ less compute at inference time. In analysis, we find that SuperBPE produces segmentations of text that are more uniform in per-token difficulty, perhaps because SuperBPE tokens often capture common multi-word expressions that function semantically as a single unit. In sum, SuperBPE offers a straightforward and local modification to tokenization that improves both encoding efficiency and downstream performance, yielding better LMs overall. $^1$ + +# 1 Introduction + +Tokenizer are the lens through which language models (LMs) view data: they segment a stream of bytes into a sequence of tokens in the LM vocabulary. In the era of transformer LMs, tokenization is done at the level of subwords, meaning that tokens consist of parts of words (including complete words), but they cannot bridge whitespace. 
Intuitively, subword tokens capture meaningful and composable semantic units. + +Although seemingly reasonable, is this common practice a good one? Whitespace is an unreliable delimiter of meaning (Martin, 2017); many groups of words (e.g., a lot of or search engine) function semantically as single units, and English speakers store thousands of such multi-word expressions in their mental lexicon (Church, 2011; Contreras Kallens & Christiansen, 2022). Cross-lingually, there is considerable variation in whether a given meaning is conveyed by a single word or several words. At the extreme, languages such as Chinese and Japanese do not use whitespace at all, and tokens in these languages can span multiple words or even entire sentences (e.g., the tokenizers of GPT-4O [OpenAI, 2024] or DEEPSEEKV3 [DeepSeek-AI, 2025]), but this has seemingly not hindered LMs from + +![](images/c3aefc783508cbdf35c8b351a764278b60a5a93edab00efc917dd0942af8d010.jpg) + +![](images/9dd8cd205ee4d7a5d68b14276daffb2574f1374c4f02bcd5be1bb88161cc4bab.jpg) +Figure 1: SuperBPE tokenizers encode text much more efficiently than BPE, and this advantage grows with larger vocabulary size. Encoding efficiency ( $y$ -axis) is measured in bytes-per-token, the number of bytes encoded per token over a large corpus. In the 40 bytes of text shown on the top of this figure, SuperBPE uses 7 tokens while BPE uses 13, so the methods' efficiencies are $40/7 = 5.7$ and $40/13 = 3.1$ bytes-per-token, respectively. In the graph, the encoding efficiency of BPE plateaus early because it exhausts the valuable whitespace-delimited words in the training data. In fact, it is bounded above by the gray dotted line, which shows the maximum achievable encoding efficiency with BPE if every whitespace-delimited word were in the vocabulary. In contrast, SuperBPE has dramatically better encoding efficiency that continues to improve with increased vocabulary size, as it can continue to add common word sequences to treat as tokens in the vocabulary. The different gradient lines show different transition points from learning subword to superword tokens, which always yields an immediate improvement. SuperBPE also encodes text more efficiently than a naive variant of BPE that does not use whitespace pretokensization at all. + +performing well on these languages. In fact, including multi-word tokens promises to be beneficial in many ways: it may shorten token sequences, lowering the costs of LM training and inference, and offer representational advantages by segmenting text into more semantically cohesive units (Salehi et al., 2015; Otani et al., 2020; Hofmann et al., 2021). + +In this work, we introduce a superword tokenization algorithm that produces a vocabulary of both subword and "superword" tokens, which we use to describe tokens bridging more than one word. Our method, SuperBPE, introduces a pretokenization curriculum to the popular byte-pair encoding (BPE) algorithm (Sennrich et al., 2016): whitespace pretokenization is initially used to enforce learning of subword tokens only (as done in conventional BPE), but it is disabled in a second stage, where the tokenizer transitions to learning superword tokens. Notably, SuperBPE tokenizers scale much better with vocabulary size: BPE quickly hits a point of diminishing returns and begins adding increasingly rare subwords to the vocabulary, while SuperBPE continues to discover common word sequences to treat as single tokens and improve encoding efficiency (see Figure 1). 
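For concreteness, encoding efficiency as used throughout the paper is the bytes-per-token ratio illustrated in the Figure 1 caption. The helper below is a minimal sketch of that calculation (illustrative only, not the paper's evaluation code; `encode` stands in for any tokenizer's encode function):

```python
# Illustrative sketch: encoding efficiency as bytes-per-token over a corpus,
# following the calculation in the Figure 1 caption. `encode` is assumed to be
# any tokenizer's encode function that returns a list of tokens or token ids.
def bytes_per_token(texts, encode):
    n_bytes = sum(len(t.encode("utf-8")) for t in texts)
    n_tokens = sum(len(encode(t)) for t in texts)
    return n_bytes / n_tokens

# Figure 1's 40-byte example: 40 / 13 = 3.1 bytes-per-token for BPE,
# versus 40 / 7 = 5.7 for SuperBPE.
```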
+ +In our experiments, we pretrain English LMs at 8B scale from scratch. When fixing the model size, vocabulary size, and training compute—varying only the algorithm for learning the vocabulary—we find that models trained with SuperBPE tokenizers consistently and significantly improve over counterparts trained with a BPE tokenizer while also being $27\%$ to $33\%$ more efficient at inference time. Our best SuperBPE model achieves an average improvement of $+4.0\%$ over 30 downstream tasks, including $+8.2\%$ on MMLU, and wins on 25 of the 30 individual tasks (Table 1). + +In analysis, we find that SuperBPE tokenizers produce segmentations that are more evenly distributed in difficulty. This makes sense from a qualitative linguistic analysis: SuperBPE tokens often correspond to multi-word expressions in English, i.e., word sequences that function as a single semantic unit (see Table 3 for examples). For instance, many prepositional phrases (e.g., by accident or in the long run) are essentially fixed and require memorization. The individual words in these expressions have very little possible variation in context, leading to very low-loss predictions under BPE models. + +SuperBPE is a straightforward and local modification to tokenization, requiring no changes to the model architecture, training framework, or decoding strategy. Under the same training setup, SuperBPE provides a remarkable boost in both encoding efficiency and performance, yielding better language models overall. + +# 2 SuperBPE + +We first explain the standard byte-pair encoding (BPE; Sennrich et al., 2016) tokenization algorithm (§2.1), and then introduce SuperBPE, which extends BPE to superwords (§2.2). + +# 2.1 Background on BPE + +BPE is a tokenization algorithm that greedily learns a subword vocabulary given training data. The algorithm takes a sample of text and a target vocabulary size $T$ as input. + +The first step of BPE is pretokenization, which splits the text into chunks that limit the extent of tokenization; merges cannot bridge these chunks, so the final learned tokens are parts of these chunks. Canonically, pretokenization in BPE consists of splitting on whitespace so that common word sequences do not become a single token. This made sense given the historical context of Sennrich et al. (2016), which aimed to improve word-level tokenization by segmenting words into morphologically meaningful subwords. + +After pretokenization, the iterative learning algorithm begins. Training text is first split into bytes; the starting vocabulary is the set of all bytes. Then, the frequencies of all pairs of neighboring tokens are recorded, and the most frequent pair is merged into a single, new token at every position in the text where it occurs. The newly merged token is added to the vocabulary. For instance, if the merge is (t, he), then all instances of the token sequence [t, he] will be replaced with the, which is added to the vocabulary. The token pair frequencies are then updated, and the next most frequent pair is again merged into a new token. This continues until the vocabulary reaches the target size $T$ . + +# 2.2 SuperBPE tokenization + +SuperBPE introduces a simple intervention in the pretokenization step, separating tokenizer training into two discrete phases, wherein the tokenizer (1) first learns subwords (by using pretokenization to prevent merges across whitespace) and then (2) learns superwords (by lifting this restriction). 
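The two-phase curriculum is sketched below as a toy, self-contained illustration (not the released implementation): the corpus and merge counts are arbitrary, the real algorithm defines the transition point as a vocabulary size rather than a merge count, and it operates on bytes rather than characters.

```python
# Toy sketch of the SuperBPE pretokenization curriculum (illustrative only).
# Stage 1 merges only within whitespace-delimited chunks; after `stage1_merges`
# steps the chunk boundaries are dropped, so later merges may bridge spaces.
from collections import Counter

def most_frequent_pair(seqs):
    counts = Counter((a, b) for seq in seqs for a, b in zip(seq, seq[1:]))
    return counts.most_common(1)[0][0] if counts else None

def apply_merge(seq, pair):
    merged, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            merged.append(seq[i] + seq[i + 1])
            i += 2
        else:
            merged.append(seq[i])
            i += 1
    return merged

def train_superbpe_toy(text, stage1_merges, total_merges):
    # Stage 1: whitespace pretokenization -- one chunk per word, leading space kept.
    words = text.split(" ")
    chunks = [words[0]] + [" " + w for w in words[1:]]
    seqs = [list(c) for c in chunks]      # characters here; bytes in practice
    merges = []
    for step in range(total_merges):
        if step == stage1_merges:
            seqs = [sum(seqs, [])]        # Stage 2: lift the whitespace restriction
        pair = most_frequent_pair(seqs)
        if pair is None:
            break
        merges.append(pair)
        seqs = [apply_merge(s, pair) for s in seqs]
    return merges, seqs

merges, seqs = train_superbpe_toy(
    "in the long run the cat sat on the mat in the long run",
    stage1_merges=10,
    total_merges=20,
)
print(seqs[0])  # after the transition, merges may bridge spaces and yield superword tokens
```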
Stage 1 is equivalent to regular BPE training and continues up to a certain vocabulary size $t$ , which we call the transition point $(t < T)$ . In stage 2, tokenizer training resumes from the vocabulary learned thus far, but this time whitespace pretokenization is skipped. As a result, token pairs that bridge whitespace are considered, enabling superwords to be added to the vocabulary. Intuitively, we intend for our tokenizer to first learn base units of semantic meaning, then combine these units into common sequences for a much more efficient vocabulary. Note that $t = T$ corresponds to BPE, and $t = 0$ corresponds to a naive revision of BPE that foregoes whitespace pretokenization at any point in training. + +We note that training tokenizers requires more system memory and CPU time when done without whitespace pretokenization (as in stage 2 of SuperBPE). This is because the training + +
| Category | Task | BPE | SuperBPE | Δ |
| --- | --- | --- | --- | --- |
| Knowledge | ARC-Easy (MC) | 46.6 | 67.1 | +20.5** |
| | ARC-Challenge (MC) | 35.1 | 50.6 | +15.5** |
| | Jeopardy (EM) | 42.1 | 41.8 | -0.3 |
| | MMLU (MC) | 36.5 | 44.7 | +8.2** |
| | OpenbookQA (MC) | 33.2 | 54.4 | +21.2** |
| | TriviaQA (EM) | 60.6 | 61.3 | +0.7 |
| | WikidataQA (EM) | 69.7 | 70.9 | +1.2* |
| Math & Reasoning | Arithmetic (EM) | 54.8 | 59.3 | +4.5** |
| | GSM8K (EM) | 6.4 | 6.7 | +0.3 |
| | LSAT-AR (MC) | 21.3 | 23.0 | +1.7 |
| | Operators (EM) | 35.5 | 33.6 | -1.9 |
| | Repeat-Copy-Logic (EM) | 3.1 | 6.2 | +3.1 |
| Coding | HumanEval (pass@10) | 15.9 | 13.4 | -2.5 |
| | MBPP (pass@10) | 27.5 | 28.3 | +0.8 |
| Reading Comprehension | BoolQ (MC) | 59.7 | 64.6 | +4.9** |
| | CoQA (EM) | 12.6 | 13.2 | +0.6 |
| | DROP (EM) | 31.3 | 31.4 | +0.1 |
| | HotpotQA (EM) | 53.5 | 55.2 | +1.7* |
| | SQuAD (EM) | 75.1 | 75.8 | +0.7 |
| Commonsense | CommonsenseQA (MC) | 33.5 | 53.8 | +20.3** |
| | COPA (MC) | 77.0 | 85.8 | +8.8** |
| | PIQA (MC) | 55.2 | 59.8 | +4.6* |
| | Winograd (MC) | 50.4 | 53.1 | +2.7 |
| | Winogrande (MC) | 47.3 | 52.6 | +5.3* |
| Language Understanding | HellaSwag (MC) | 29.7 | 33.7 | +4.0** |
| | LAMBADA (EM) | 77.0 | 70.6 | -6.4** |
| | Language Identification (EM) | 8.8 | 9.0 | +0.2 |
| String Manipulation | CS Algorithms (EM) | 46.1 | 48.6 | +2.5 |
| | CUTE (EM) | 31.3 | 32.6 | +1.3 |
| | Dyck-Languages (EM) | 15.9 | 14.2 | -1.7 |
| | Average | 39.8 | 43.8 | +4.0 |
+ +Table 1: Performance of BPE and SuperBPE models (with transition point $t = 180\mathrm{k}$ ) on 30 downstream tasks. The two models are fixed in model parameters (8B), vocabulary size (200k), and training FLOPs (corresponding to ~330B tokens), differing only in their algorithm for learning the vocabulary. The SuperBPE model outperforms the baseline on 25 of 30 tasks and requires $27\%$ less compute at inference time. See Figure 3 for the moving task average during pretraining and §A.4 for further evaluation details. * $p < 0.05$ , ** $p < 0.005$ under a McNemar test. + +data is typically represented by a dictionary of "words" along with their counts. With whitespace pretokenization, the "words" are whitespace-separated chunks (e.g., common words) stored once along with a large count, conferring substantial savings in memory. Without whitespace pretokenization, the "words" are extremely long (e.g., entire training documents), leading to minimal dedduplication of the text and excessively large dictionaries. Fortunately, tokenizer training must be done only once; in our experiments, SuperBPE tokenizers train in a few hours on 100 CPUs, a negligible cost compared to LLM pretraining. + +# 2.3 Encoding efficiency + +A tokenizer's encoding efficiency can be measured in bytes-per-token, i.e., how many UTF-8 bytes are encoded, on average, in each token over a large corpus of text (see calculation in Figure 1). We train a series of tokenizers on a 10GB subset of data from OLMO 2's pretraining corpus and evaluate encoding efficiency on a held-out subset. + +Shown in Figure 1, SuperBPE scales much better with vocabulary size than does BPE. BPE quickly plateaus around a vocabulary size of $\sim 50\mathrm{K}$ , achieving 4.45 bytes-per-token at a vocabulary size of $200\mathrm{k}$ . In fact, even with infinite vocabulary size (namely, if every whitespace-delimited word were in the vocabulary), BPE cannot exceed 4.68 bytes-per-token, i.e., the average word length in the held-out subset. SuperBPE exceeds this upper bound with a mere $\sim 12\mathrm{k}$ vocabulary size and reaches 5.55 bytes-per-token at $50\mathrm{K}$ and 6.63 at $200\mathrm{k}$ . + +Surprisingly, SuperBPE is also more efficient than BPE with whitespace pretokenization completely disabled. Since BPE is a greedy algorithm, completely disabling whitespace pretokenization may cause it to make highly suboptimal choices early on. In particular, tokens in this setting often consist of the end of the previous word and start of the next word, as opposed to sequences of complete words. By keeping whitespace pretokenization on at the beginning, we can avoid suboptimal choices while still obtaining a tokenizer with superwords. + +Figure 2 shows how SuperBPE's encoding efficiency depends on the choice of transition point $t$ . The relationship is smooth, with $t = 80\mathrm{k}$ achieving the best encoding efficiency. However, we will see in our experiments that the optimal tokenizer for LM pretraining is not necessarily the most encoding-efficient. + +![](images/9c33cd3e1169158421e309ba6c1c3d751448572342e45693e32bbce74e5b0ca6.jpg) +Figure 2: Encoding efficiency varies smoothly with the choice of transition point $t$ in SuperBPE's pretokenization curriculum. + +# 3 Experiments + +In our main experiments, we pretrain models from scratch while fixing the total training FLOPs and vocabulary size, changing only the algorithm for learning the vocabulary. + +# 3.1 Setup + +We first pretrain 8B models with BPE and SuperBPE tokenizers. 
We use the OLMO2 7B (OLMo et al., 2024) training configuration, including the model architecture, training hyperparameters, and pretraining corpus, but reduce the total number of training steps to correspond to $\sim 330\mathrm{B}$ tokens (compared to 4T). Following prior work (Pagnoni et al., 2024), we also fix the effective context size (measured in bytes) for each model. This prevents SuperBPE models from gaining an advantage by seeing more textual context for the same next-token prediction (we provide analysis on this in §B.1). Since more efficient models have a shorter context length in tokens, the training steps are adjusted accordingly to match the total train FLOPs at the end of training. Note that in this setting, a same-sized SuperBPE model uses fewer inference FLOPs than the BPE model.

We fix the vocabulary size of all tokenizers to 200,000 (in the same ballpark as, e.g., GEMMA at 250k [Google, 2024], GPT-4O at 200k, and Llama3 at 130k [Meta, 2024]).$^6$ We consider three transition points for SuperBPE: $t = 80\mathrm{k}$, which has the best encoding efficiency, and two later transitions, $t = 160\mathrm{k}$ and $t = 180\mathrm{k}$. All tokenizers are trained on the same 10GB subset of OLMO2's pretraining mix. §A.1 provides further details about tokenizer training.

![](images/13b903da56b7375b8e6dc6f72255fc803e99efb2d92e9a3524c6cb69a93788f9.jpg)
Figure 3: Average task performance on 30 downstream tasks, evaluated at every 5,000 steps in model pretraining. We see that SuperBPE models consistently outperform the baseline that uses a BPE tokenizer. All compared models share the same vocabulary size and train budget; $t$ denotes the transition point in SuperBPE's pretokenization curriculum.

We also train a slightly larger 11B SuperBPE model with $t = 180\mathrm{k}$, which approximately matches the 8B BPE baseline in total bytes of training data seen as well as both train and inference compute. See Table 2 for exact specifications for all runs.

# 3.2 Results on downstream tasks

We evaluate SuperBPE on 30 benchmarks covering knowledge, math & reasoning, coding, reading comprehension, common sense, language understanding, and string manipulation. The full evaluation suite is shown in Table 1 and evaluation details are in §A.4.

Figure 3 shows the task average during pretraining. All SuperBPE models substantially outperform the BPE baseline at the end of training. The strongest 8B SuperBPE model, which has transition point $t = 180\mathrm{k}$ (the latest one we consider), outperforms the baseline by $4.0\%$ on average and wins on 25 of 30 individual tasks. Table 1 shows the per-task performance for this model (see §A.4 for results for the other models). The largest gains are on multiple-choice tasks; when considering these alone, the performance improvement grows to $+9.7\%$. The only task on which SuperBPE loses in a statistically significant way is LAMBADA; here, we observe that SuperBPE is actually ahead for the majority of training checkpoints, but accuracy dips at the end from $75.8\%$ to $70.6\%$ (see Figure 12).

Notably, while the choice of transition point affects the performance of the resulting model, all reasonable choices are significantly stronger than the baseline. When using the most encoding-efficient transition point, i.e., $t = 80\mathrm{k}$, we see a $+3.1\%$ task improvement over BPE and inference compute reduced by $35\%$.
Later transition points empirically cede some gains in encoding efficiency in exchange for further improvements in performance.[7] + +# 4 Analysis + +# 4.1 Language modeling + +Following prior work (Biderman et al., 2023; Xue et al., 2022; Yu et al., 2023; Wang et al., 2024), we evaluate language modeling performance using bits-per-byte (BPB), which normalizes the loss by the tokenizer's encoding efficiency to fairly compare models with different tokenizers. + +
| | BPE 8B | SuperBPE 8B | SuperBPE 8B | SuperBPE 8B | SuperBPE 11B |
| --- | --- | --- | --- | --- | --- |
| SuperBPE transition point | | t = 80k | t = 160k | t = 180k | t = 180k |
| Parameter count (billion) | 8.12 | 8.12 | 8.12 | 8.12 | 11.30 |
| Train steps | 76,543 | 118,419 | 112,722 | 107,982 | 77,525 |
| Average context length (bytes) | 18,262 | 18,272 | 18,263 | 18,268 | 18,268 |
| Vocabulary size | 200k | 200k | 200k | 200k | 200k |
| Context length (tokens) | 4,096 | 2,756 | 2,884 | 3,000 | 3,000 |
| Encoding efficiency (bytes/token) | 4.46 | 6.63 | 6.33 | 6.09 | 6.09 |
| Train compute ($10^{21}$ FLOPs) | 17.2 | 17.2 | 17.2 | 17.2 | 17.2 |
| Inference compute ($10^{9}$ FLOPs/byte) | 3.75 | 2.42 | 2.54 | 2.65 | 3.75 |
+ +Table 2: Training setup for the models we compare. We fix the vocabulary size and effective context size (measured in bytes) for each model and adjust the total number of training steps accordingly so that each model has the same total train budget (in FLOPs). The 8B SuperBPE models match the 8B BPE model in train compute but use less inference compute; the 11B SuperBPE model matches the 8B baseline in both train and inference compute. Numbers fixed across model settings are highlighted in the same color. + +This is necessary because longer tokens, on average, contain more information and therefore are more difficult to predict. Bits-per-byte is defined as $\mathrm{BPB}(x) = \mathcal{L}_{\mathrm{CE}}(x) / (\ln (2)\cdot n_{\mathrm{bytes}})$ , where $n_{\mathrm{bytes}}$ is the length of the text in bytes and $\mathcal{L}_{\mathrm{CE}}(x)$ is the sum of the cross-entropy loss over the entire text. We find that BPE 8B, SuperBPE 8B ( $t = 180\mathrm{k}$ ), and SuperBPE 11B attain 0.7465, 0.7482, and 0.7445 BPB, respectively, at the end of training. Although these numbers do not differ appreciably, the ranking of models according to BPB and downstream task performance are not consistent. + +# 4.2 Loss distribution analysis + +Why does the SuperBPE 8B model achieve slightly higher normalized language modeling loss (§4.1) than the baseline BPE model despite outperforming it on a wide variety of downstream tasks (§3.2)? To investigate this, we plot the distribution of per-token $\mathrm{BPB}^9$ for both models on data sampled from the pretraining data mixture in Figure 4. + +Although the BPE and SuperBPE models have very similar BPB on average, we see that loss is distributed very differently over the training data. Compared to the baseline, the SuperBPE model makes fewer predictions with either very high or very low loss. + +![](images/1edddf8bae5540af35ba41c602b1db1040be2fc9f710e1df93c948f4103a12f1.jpg) +Figure 4: Histogram of per-token losses for both models from Table 1, measured over a large corpus of text. We observe that the SuperBPE model is a more consistent performer, making fewer predictions with very high or very low loss. + +Low-loss tokens. We find that the reduction in low-loss tokens is attributable to a small set of extremely common words that the BPE model can easily predict, but are not available to SuperBPE as they are merged into larger superword tokens. For instance, the tokens _the, _of, and _to (the three most common words in the corpus) appear an order of magnitude more often under BPE than SuperBPE in the same corpus of text. When excluding these three token types alone, the BPB ranking reverses, with SuperBPE achieving 0.02 lower BPB than BPE. + +
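As a reference for these numbers, below is a minimal sketch (not the authors' code) of the BPB computation defined in §4.1. The per-token variant used for the Figure 4 histogram is assumed here to normalize each token's loss by that token's own byte length.

```python
import math

def corpus_bpb(token_losses_nats, text):
    # Bits-per-byte over a text: summed cross-entropy loss (in nats)
    # divided by ln(2) times the number of UTF-8 bytes.
    n_bytes = len(text.encode("utf-8"))
    return sum(token_losses_nats) / (math.log(2) * n_bytes)

def per_token_bpb(token_losses_nats, token_strings):
    # Assumed per-token variant: each token's loss normalized by
    # that token's own byte length.
    return [
        loss / (math.log(2) * len(tok.encode("utf-8")))
        for loss, tok in zip(token_losses_nats, token_strings)
    ]

# Toy example: three tokens with losses of 2.0, 0.1, and 3.5 nats.
print(corpus_bpb([2.0, 0.1, 3.5], "The quick fox"))
```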
| POS tag | # | Example tokens |
| --- | --- | --- |
| NN, IN | 906 | _case_of, _hint_of, _availability_of, _emphasis_on, _distinction_between |
| VB, DT | 566 | _reached_a, _discovered_the, _identify_the, _becomes_a, _issued_a |
| DT, NN | 498 | _this_month, _no_idea, _the_earth, _the_maximum, _this_stuff |
| IN, NN | 406 | _on_top, _by_accident, _in_effects, _for_lunch, _in_front |
| IN, DT | 379 | _on_the, _without_a, _alongside_the, _for_each |
| IN, DT, NN | 333 | _for_a_living, _by_the_way, _into_the_future, _in_the_midst |
| NN, IN, DT | 270 | _position_of_the, _component_of_the, _review_of_the, _example_of_this |
| IN, DT, JJ | 145 | _like_any_other, _with_each_other, _for_a_short, _of_the_entire |
| VB, IN, DT | 121 | _worked_as_a, _based_on_the, _combined_with_the, _turned_into_a |
| IN, DT, NN, IN | 33 | _at_the_time_of, _in_the_presence_of, _in_the_middle_of, _in_a_way_that |
| ,, CC, PRP, VB | 20 | ,_and_it_was, ,_but_I_think, ,_but_I_have, ,_but_I_am |
| IN, DT, JJ, NN | 18 | _in_the_long_run, _on_the_other_hand, _for_the_first_time, _in_the_same_way |
+ +Table 3: The most common POS tags for tokens of 2, 3, and 4 words in SuperBPE, along with random example tokens for each tag. NN = noun, IN = preposition, VB = verb, DT = determiner, CC = conjunction, JJ = adjective, and PRP = pronoun. + +The reduction in low-loss tokens also makes sense from a qualitative linguistic analysis of SuperBPE tokens. In Table 3, we show the most common POS tags among superword tokens in SuperBPE along with random examples for each tag. The tokens often capture common multi-word expressions (by accident, of course, for a living) that function as a single semantic unit (Schneider et al., 2014). As an example, prepositions (IN) figure prominently in superword tokens (e.g., depend on, distinction between) and require lexeme-specific memorization. The individual words in these fixed expressions are often semantically vacuous and have little possible variation in context, so they are easy to predict once memorized. + +High-loss tokens. On the other hand, the much thinner tail of very high-loss tokens shows that, in the worst case, the SuperBPE model consistently puts more probability mass on the correct token. On average, we expect models to suffer high loss on tokens that are difficult to predict. This may explain why SuperBPE can outperform BPE on downstream tasks but have higher average BPB: the tokens scored in task evaluations tend to be among the hardest to predict. This is consistent with prior findings that models generally continue to improve in downstream tasks even as their overall loss plateaus due to improving on a narrow and difficult slice of the distribution (Liu et al., 2023). + +# 4.3 Scaling + +To characterize the scaling behavior of SuperBPE, we also perform experiments at smaller scales. $^{10}$ We train baseline models at 680M and 1.9B and scale the base number of training tokens proportionately to the number of parameters. We also perform runs at $0.5 \times$ , $2 \times$ , and $4 \times$ the base number of tokens to observe the trend with respect to training duration. Then, we train two SuperBPE models that match the training budget of each baseline BPE model, one that matches the baseline in parameter count (analogous to SuperBPE 8B) and a larger model that matches in both train and inference compute (analogous to SuperBPE 11B). We focus on the $t = 180k$ tokenizer to reduce complexity. + +We plot BPB at the end of training for each run in Figure 5. In the under-trained regime, both SuperBPE models achieve lower BPB than the baseline. In the over-trained regime, the ranking from worst to best is SuperBPE (matching parameter count), BPE, and SuperBPE (matching inference compute). Additionally, the separation between the models increases with further over-training. We provide additional results and comments on scaling in §B.4. + +![](images/715318fd5aa7972126c82d9ddbd291b8e70a43f4bd0a178f500c66ec097d1351.jpg) +(a) 680M model size + +![](images/aed3f91740cde20386c7e584c0bea96ad833070d2a9ec1843562d5fae5c712a1.jpg) +(b) 1.9B model size +Figure 5: Scaling results for 680M and 1.9B baseline model sizes. Compared to the BPE baseline, SuperBPE with matching parameter count achieves lower BPB in the under-trained regime, while SuperBPE with matching inference compute achieves lower BPB than the baseline at every model size and every training budget tested. Note that BPB comparisons between BPE and SuperBPE models do not track downstream task accuracy due to differences in how BPE and SuperBPE models distribute loss over tokens (§4.2). 
+ +# 5 Related Work + +Tokenization beyond subwords Prior work has explored processing text at multiple levels of granularity (Lai et al., 2021; Zhang et al., 2021) or creating multi-word tokens through frequency-based identification of $n$ -grams (Gee et al., 2023; Kumar & Thawani, 2022). However, these were explored in limited experimental contexts (mainly for machine translation) and had mixed effectiveness. Naively disabling pretokenization in BPE has been found to severely degrade model performance (Dagan et al., 2024; Schmidt et al., 2024; Kudo, 2018), although this approach may be more promising for unigram tokenization (Kudo & Richardson, 2018), as adopted by JURASSIC (Lieber et al., 2021) and BLOOMBERGGPT (Wu et al., 2023). In concurrent work, Huang et al. (2025) disentangle input and output vocabularies, expanding only the former to include $n$ -gram tokens. Their method requires significant modifications of the LM input component and considers fixed length of $n$ -grams. + +Multi-token prediction Multi-token prediction (MTP) equips LMs with some extra parameters to predict multiple tokens in a single time step (Qi et al., 2020; Gloeckle et al., 2024) and was recently adopted by DEEPSEEK-V3, though the MTP module is discarded at inference-time. MTP's effectiveness corroborates that LMs are capable of predicting more than one subword in a forward pass. However, these approaches fix the number of tokens predicted in each time step and require modifications to the architecture and training objective. We note that the benefits of MTP and superword tokens may be orthogonal. + +Tokenizer-free language modeling Some works have explored the possibility of completely removing tokenization from LMs and directly modeling text as a sequence of bytes (Clark et al., 2022; Xue et al., 2022; Wang et al., 2024). To overcome the increased compute requirement due to expanded sequence lengths, alternative architectures have been proposed that either segment bytes into fixed-length patches (Tay et al., 2022; Yu et al., 2023) or dynamically predict patch boundaries with sub-modules (Nawrot et al., 2023; Pagnoni et al., 2024; Ahia et al., 2024; Hwang et al., 2025); these dynamic patches have been qualitatively observed to span multiple words. + +Tokenizer transfer Many methods have been proposed to adapt models after training to use a different tokenizer. These may rely on intervention during pretraining (Chen et al., 2023), continued training for a subset of layers (Marchisio et al., 2023), or leveraging self-distillation (Minixhofer et al., 2025), heuristic, (Minixhofer et al., 2022; Gee et al., 2022; Tran, 2020; Liu et al., 2024b; Dobler & De Melo, 2023), or hypernetwork-based (Minixhofer + +et al., 2024) initialization of a fresh embedding matrix, optionally followed by fine-tuning. These methods may be used to upgrade existing models to use SuperBPE tokenizers, with the goal of reducing inference cost while maintaining performance. We leave this direction to future work. + +# 6 Conclusion + +Although tokenization lies at the foundation of language modeling, acting as the lens through which models view text, the algorithms in use have remained largely unchanged over the past decade. SuperBPE builds on the observation that tokens need not be limited to subwords, extending the BPE algorithm to superword tokens. 
When replacing subword BPE tokenizers with SuperBPE tokenizers in pretraining, we find that language models perform better over a large suite of downstream tasks, while also being substantially more efficient at inference time. These benefits are achieved without modifying the underlying model architecture, making SuperBPE a compelling alternative to BPE that seamlessly integrates with modern language model ecosystems. + +# Acknowledgments + +We would like to thank Alex Fang for pretraining advice, Vivek Ramanujan for helping debug our distributed training setup, Ian Magnusson for helpful comments on LM evaluation, and Zhaofeng Wu, Alexander Fang, and Xiaochuang Han for feedback on drafts. We are also grateful to Luca Soldaini, Goncalo Faria, Shrimai Prabhumoye, Matt Jordan, Artidoro Pagnoni, Mike Lewis, Doug Downey, Shannon Shen, and the UW NLP community for valuable conversations about this work. Both co-first authors, AL and JH, are supported by the NSF Graduate Research Fellowship Program. JH and SO are supported in part by the Microsoft Grant for Customer Experience Innovation. This work was partially funded by NSF DMS-2134012, NSF CCF-2019844, ONR N00014-24-1-2207, and NSF 2113530 as well as with NVIDIA resources provided through the National AI Research Resource Pilot (NAIRR). + +# References + +Orevaoghene Ahia, Sachin Kumar, Hila Gonen, Jungo Kasai, David Mortensen, Noah Smith, and Yulia Tsvetkov. Do all languages cost the same? tokenization in the era of commercial language models. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 9904-9923, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.614. URL https://aclanthology.org/2023.emnlp-main.614. +Orevaoghene Ahia, Sachin Kumar, Hila Gonen, Valentin Hofmann, Tomasz Limisiewicz, Yulia Tsvetkov, and Noah A. Smith. MAGNET: Improving the multilingual fairness of language models with adaptive gradient-based tokenization. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=1e3MOwHSIX. +Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program synthesis with large language models, 2021. URL https://arxiv.org/abs/2108.07732. +Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. Pythia: A suite for analyzing large language models across training and scaling, 2023. URL https://arxiv.org/abs/2304.01373. +BIG-bench. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=uyTL5Bvosj. + +Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning about physical commonsense in natural language. In *Thirty-Fourth AAAI Conference on Artificial Intelligence*, 2020. +Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. Large language models in machine translation. In Jason Eisner (ed.), Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pp. 
858-867, Prague, Czech Republic, June 2007. Association for Computational Linguistics. URL https://aclanthology.org/D07-1090/. +Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. 2020. +Anthony Chen, Pallavi Gudipati, Shayne Longpre, Xiao Ling, and Sameer Singh. Evaluating entity disambiguation and the role of popularity in retrieval-based NLP. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4472-4485, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.345. URL https://aclanthology.org/2021.acl-long.345. +Y Chen, K Marchisio, R Raileanu, DI Adelani, P Stenetorp, S Riedel, and M Artetxe. Improving language plasticity via pretraining with active forgetting. In Advances in Neural Information Processing Systems. NeurIPS, 2023. +Kenneth Church. How many multiword expressions do people know? In Proceedings of the Workshop on Multiword Expressions: From Parsing and Generation to the Real World, pp. 137-144, Portland, Oregon, USA, 2011. Association for Computational Linguistics. URL https://aclanthology.org/W11-0823/. +Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2924–2936, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1300. URL https://aclanthology.org/N19-1300. +Jonathan H. Clark, Dan Garrette, Iulia Turc, and John Wieting. Canine: Pre-training an efficient tokenization-free encoder for language representation. Transactions of the Association for Computational Linguistics, 10:73-91, 2022. doi: 10.1162/tacl_a_00448. URL https://aclanthology.org/2022.tacl-1.5. +Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge, 2018. URL https://arxiv.org/abs/1803.05457. +Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems, 2021. URL https://arxiv.org/abs/2110.14168. +Pablo Contreras Kallens and Morten H. Christiansen. Models of language and multiword expressions. Frontiers in Artificial Intelligence, 5, 2022. doi: 10.3389/frai.2022.781962. URL https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2022.781962. + +Gautier Dagan, Gabriel Synnaeve, and Baptiste Rozière. 
Getting the most out of your tokenizer for pre-training and domain adaptation. In Proceedings of the 41st International Conference on Machine Learning, ICML'24. JMLR.org, 2024. URL https://dl.acm.org/doi/10.5555/3692070.3692457. +DeepSeek-AI. Deepseek-v3 technical report, 2025. URL https://arxiv.org/abs/2412.19437. +Konstantin Dobler and Gerard De Melo. Focus: Effective embedding initialization for monolingual specialization of multilingual models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 13440-13454, 2023. +Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2368-2378, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1246. URL https://aclanthology.org/N19-1246. +Lukas Edman, Helmut Schmid, and Alexander Fraser. CUTE: Measuring LLMs' understanding of their tokens. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 3017-3026, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.177. URL https://aclanthology.org/2024.emnlp-main.177. +Philip Gage. A new algorithm for data compression. The C Users Journal archive, 12:23-38, 1994. URL https://api-semanticscholar.org/CorpusID:59804030. +Matthias Galle. Investigating the effectiveness of BPE: The power of shorter sequences. In Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun Wan (eds.), Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 1375-1381, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1141. URL https://aclanthology.org/D19-1141. +Leonidas Gee, Andrea Zugarini, Leonardo Rigutini, Paolo Torroni, et al. Fast vocabulary transfer for language model compression. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track, pp. 409-416. Association for Computational Linguistics (ACL), 2022. +Leonidas Gee, Leonardo Rigutini, Marco Ernandes, and Andrea Zugarini. Multi-word tokenization for sequence compression. In Mingxuan Wang and Imed Zitouni (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track, pp. 612-621, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-industry.58. URL https://aclanthology.org/2023.emnlp-industry.58. +Fabian Gloeckle, Badr Youbi Idrissi, Baptiste Roziere, David Lopez-Paz, and Gabriel Synnaeve. Better & faster large language models via multi-token prediction. In *Forty-first International Conference on Machine Learning*, 2024. URL https://openreview.net/forum?id=pEWAcejiU2. +Omer Goldman, Avi Caciularu, Matan Eyal, Kris Cao, Idan Szpektor, and Reut Tsarfaty. Unpacking tokenization: Evaluating text compression and its correlation with model performance. 
In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Findings of the Association for Computational Linguistics: ACL 2024, pp. 2274-2286, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024-findings-acl.134. URL https://aclanthology.org/2024-findings-acl.134. +Google. Gemma: Open models based on gemini research and technology, 2024. + +Jonathan Hayase, Alisa Liu, Yejin Choi, Sewoong Oh, and Noah A. Smith. Data mixture inference: What do BPE tokenizers reveal about their training data? In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=EHXyeImux0. +Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=d7KBjmI3GmQ. +Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. In Proceedings of the 36th International Conference on Neural Information Processing Systems, pp. 30016-30030, 2022. +Valentin Hofmann, Janet Pierrehumbert, and Hinrich Schütze. Superbizarre is not superb: Derivational morphology improves BERT's interpretation of complex words. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 3594-3608, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.279. URL https://aclanthology.org/2021.acl-long.279. +Hongzhi Huang, Defa Zhu, Banggu Wu, Yutao Zeng, Ya Wang, Qiyang Min, and Xun Zhou. Over-tokenized transformer: Vocabulary is generally worth scaling, 2025. URL https://arxiv.org/abs/2501.16975. +Sukjun Hwang, Brandon Wang, and Albert Gu. Dynamic chunking for end-to-end hierarchical sequence modeling, 2025. URL https://arxiv.org/abs/2507.07955. +Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Regina Barzilay and Min-Yen Kan (eds.), Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1601-1611, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1147. URL https://aclanthology.org/P17-1147. +Guy Kaplan, Matanel Oren, Yuval Reif, and Roy Schwartz. From tokens to words: On the inner lexicon of LLMs. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=328vch6tRs. +Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020. +Pride Kavumba, Naoya Inoue, Benjamin Heinzerling, Keshav Singh, Paul Reisert, and Kentaro Inui. When choosing plausible alternatives, clever hans can be clever. In Simon Ostermann, Sheng Zhang, Michael Roth, and Peter Clark (eds.), Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing, pp. 33-42, Hong Kong, China, November 2019. Association for Computational Linguistics. 
doi: 10.18653/v1/D19-6004. URL https://aclanthology.org/D19-6004. +Taku Kudo. Sentencepiece experiments. https://github.com/google/sentencepiece/blob/master/doc/experiments.md, 2018. +Taku Kudo and John Richardson. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Eduardo Blanco and Wei Lu (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 66-71, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-2012. URL https://aclanthology.org/D18-2012. + +Dipesh Kumar and Avijit Thawani. BPE beyond word boundary: How NOT to use multi word expressions in neural machine translation. In Shabnam Tafreshi, João Sedoc, Anna Rogers, Aleksandr Drozd, Anna Rumshisky, and Arjun Akula (eds.), Proceedings of the Third Workshop on Insights from Negative Results in NLP, pp. 172-179, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.insights-1.24. URL https://aclanthology.org/2022.insights-1.24. +Vedang Lad, Wes Gurnee, and Max Tegmark. The remarkable robustness of llms: Stages of inference?, 2024. URL https://arxiv.org/abs/2406.19384. +Yuxuan Lai, Yijia Liu, Yansong Feng, Songfang Huang, and Dongyan Zhao. Lattice-BERT: Leveraging multi-granularity representations in Chinese pre-trained language models. In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou (eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1716-1731, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.137. URL https://aclanthology.org/2021.nacl-main.137. +Sander Land. A short introduction to pre-tokenization weirdness, 2024. URL https://tokencontributions.substack.com/p/a-short-introduction-to-pre-tokenization. +Sander Land and Max Bartolo. Fishing for magikarp: Automatically detecting under-trained tokens in large language models. In Yaser Al-Onaizan, Mohit Bansal, and YunNung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 11631-11646, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.649. URL https://aclanthology.org/2024.emnlp-main.649. +Hector J. Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge. In Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning, pp. 552-561. AAAI Press, 2012. +Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Gadre, Hritik Bansal, Etash Guha, Sedrick Keh, Kushal Arora, Saurabh Garg, Rui Xin, Niklas Muennighoff, Reinhard Heckel, Jean Mercat, Mayee Chen, Suchin Gururangan, Mitchell Wortsman, Alon Albalak, Yonatan Bitton, Marianna Nezhurina, Amro Abbas, Cheng-Yu Hsieh, Dhruba Ghosh, Josh Gardner, Maciej Kilian, Hanlin Zhang, Rulin Shao, Sarah Pratt, Sunny Sanyal, Gabriel Ilharco, Giannis Daras, Kalyani Marathe, Aaron Gokaslan, Jieyu Zhang, Khyathi Chandu, Thao Nguyen, Igor Vasiljevic, Sham Kakade, Shuran Song, Sujay Sanghavi, Fartash Faghri, Sewoong Oh, Luke Zettlemoyer, Kyle Lo, Alaaeldin El-Nouby, Hadi Pouransari, Alexander Toshev, Stephanie Wang, Dirk Groeneveld, Luca Soldaini, Pang Wei Koh, Jenia Jitsev, Thomas Kollar, Alexandros G. 
Dimakis, Yair Carmon, Achal Dave, Ludwig Schmidt, and Vaishaal Shankar. Datacomp-lm: In search of the next generation of training sets for language models, 2024. URL https://arxiv.org/abs/2406.11794. +Opher Lieber, Or Sharir, Barak Lenz, and Yoav Shoham. Jurassic-1: Technical details and evaluation, 2021. URL https:// uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf. +Hong Liu, Sang Michael Xie, Zhiyuan Li, and Tengyu Ma. Same pre-training loss, better downstream: Implicit bias matters for language models. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 22188-22214. PMLR, 23-29 Jul 2023. URL https://proceedings.mlr.press/v202/liu23ao.html. +Jiacheng Liu, Sewon Min, Luke Zettlemoyer, Yejin Choi, and Hannaneh Hajishirzi. Infini-gram: Scaling unbounded n-gram language models to a trillion tokens. In First Conference on Language Modeling, 2024a. URL https://openreview.net/forum?id=u2vAyMeLm. + +Yihong Liu, Peiqin Lin, Mingyang Wang, and Hinrich Schütze. Ofa: A framework of initializing unseen subword embeddings for efficient large-scale multilingual continued pretraining. In Findings of the Association for Computational Linguistics: NAACL 2024, pp. 1067-1097, 2024b. +Scott Lundberg. The art of prompt design: Prompt boundaries and token healing, 2023. URL https://medium.com/towards-data-science/the-art-of-prompt-design-prompt-boundaries-and-token-healing-3b2448b0be38. +Kelly Marchisio, Patrick Lewis, Yihong Chen, and Mikel Artetxe. Mini-model adaptation: Efficiently extending pretrained models to new languages via aligned shallow training. In The 61st Annual Meeting Of The Association For Computational Linguistics, 2023. +Haspelmath Martin. The indeterminacy of word segmentation and the nature of morphology and syntax. Folia Linguistica, 51(s1000):31-80, 2017. doi: doi:10.1515/flin-2017-1005. URL https://doi.org/10.1515/flin-2017-1005. +Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in gpt. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 17359-17372. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/6f1d43d5a82a37e89b0665b33bf3a182-Paper-Conference.pdf. +Meta. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783. +Sabrina J. Mielke, Zaid Alyafeai, Elizabeth Salesky, Colin Raffel, Manan Dey, Matthias Galle, Arun Raja, Chenglei Si, Wilson Y. Lee, Benoit Sagot, and Samson Tan. Between words and characters: A brief history of open-vocabulary modeling and tokenization in nlp, 2021. URL https://arxiv.org/abs/2112.10508. +Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2381-2391, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1260. URL https://aclanthology.org/D18-1260. +Benjamin Minixhofer, Fabian Paischer, and Navid Rekabsaz. 
Wechsel: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 3992–4006, 2022. +Benjamin Minixhofer, Edoardo Ponti, and Ivan Vulic. Zero-shot tokenizer transfer. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. +Benjamin Minixhofer, Ivan Vulic, and Edoardo Maria Ponti. Universal cross-tokenizer distillation via approximate likelihood matching. arXiv preprint arXiv:2503.20083, 2025. +Piotr Nawrot, Jan Chorowski, Adrian Lancucki, and Edoardo Maria Ponti. Efficient transformers with dynamic token pooling. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 6403-6417, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.353. URL https://aclanthology.org/2023.acl-long.353. +Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. Investigating the limitations of transformers with simple arithmetic tasks, 2021. URL https://arxiv.org/abs/2102.13019. +Team OLMo, Pete Walsh, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Shane Arora, Akshita Bhagia, Yuling Gu, Shengyi Huang, Matt Jordan, Nathan Lambert, Dustin Schwenk, Oyvind Tafjord, Taira Anderson, David Atkinson, Faeze Brahman, Christopher Clark, Pradeep Dasigi, Nouha Dziri, Michal Guerquin, Hamish Ivison, Pang Wei Koh, Jiacheng Liu, Saumya Malik, William Merrill, Lester James V. Miranda, Jacob Morrison, + +Tyler Murray, Crystal Nam, Valentina Pyatkin, Aman Rangapur, Michael Schmitz, Sam Skjonsberg, David Wadden, Christopher Wilhelm, Michael Wilson, Luke Zettlemoyer, Ali Farhadi, Noah A. Smith, and Hannaneh Hajishirzi. 2 olmo 2 furious, 2024. URL https://arxiv.org/abs/2501.00656. +OpenAI. Hello GPT-4o, 2024. URL https://openai.com/index/hello-gpt-4o/. +Naoki Otani, Satoru Ozaki, Xingyuan Zhao, Yucen Li, Micaelah St Johns, and Lori Levin. Pre-tokenization of multi-word expressions in cross-lingual word embeddings. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4451-4464, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.360. URL https://aclanthology.org/2020.emnlp-main.360. +Artidoro Pagnoni, Ram Pasunuru, Pedro Rodriguez, John Nguyen, Benjamin Muller, Margaret Li, Chunting Zhou, Lili Yu, Jason Weston, Luke Zettlemoyer, Gargi Ghosh, Mike Lewis, Ari Holtzman, and Srinivasan Iyer. Byte latent transformer: Patches scale better than tokens, 2024. URL https://arxiv.org/abs/2412.09871. +Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Katrin Erk and Noah A. Smith (eds.), Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1525-1534, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1144. URL https://aclanthology.org/P16-1144. +Jackson Petty, Sjoerd van Steenkiste, Fei Sha, Ishita Dasgupta, Dan Garrette, and Tal Linzen. The impact of depth and width on transformer language model generalization. 2023. 
+Buu Phan, Marton Havasi, Matthew Muckley, and Karen Ullrich. Understanding and mitigating tokenization bias in language models, 2024. URL https://arxiv.org/abs/2406.16829. +Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. BPE-dropout: Simple and effective subword regularization. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (eds.), Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 1882-1892, Online, July 2020. Association for Computational Linguistics. doi: 10. 18653/v1/2020.acl-main.170. URL https://aclanthology.org/2020.acl-main.170. +Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. ProphetNet: Predicting future n-gram for sequence-to-SequencePretraining. In Trevor Cohn, Yulan He, and Yang Liu (eds.), Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 2401-2410, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.217. URL https://aclanthology.org/2020-findings-emnlp.217. +Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. URL https://cdn.openai.com/better-language-models/language Models are unsupervised multitask_learners.pdf. +Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Jian Su, Kevin Duh, and Xavier Carreras (eds.), Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383-2392, Austin, Texas, November 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1264. URL https://aclanthology.org/D16-1264. +Siva Reddy, Danqi Chen, and Christopher D. Manning. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266, 2019. doi: 10.1162/tacl_a_00266. URL https://aclanthology.org/Q19-1016. +Marco Tulio Ribeiro. A guidance language for controlling large language models, 2023. URL https://github.com/guidance-ai/guidance?tab=README-ov-file#text-not-tokens. + +Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI) Spring Symposium, 2011. +Jessica Rumbelow and Matthew Watkins. Solidgoldmagikarp (plus, prompt generation), 2023. URL https://www.lesswrong.com/posts/aPeJE8bSo6rAfOaLqg/solidgoldmagikarp-plus-prompt-generation. +Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: an adversarial winograd schema challenge at scale. Commun. ACM, 64(9):99-106, August 2021. ISSN 0001-0782. URL https://doi.org/10.1145/3474381. +Bahar Salehi, Paul Cook, and Timothy Baldwin. A word embedding approach to predicting the compositionality of multiword expressions. In Rada Mihalcea, Joyce Chai, and Anoop Sarkar (eds.), Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 977-983, Denver, Colorado, 2015. Association for Computational Linguistics. doi: 10.3115/v1/N15-1099. URL https://aclanthology.org/N15-1099/. +Craig W Schmidt, Varshini Reddy, Haoran Zhang, Alec Alameddine, Omri Uzan, Yuval Pinter, and Chris Tanner. Tokenization is more than compression. 
In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 678-702, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.40. URL https://aclanthology.org/2024.emnlp-main.40. +Nathan Schneider, Spencer Onuffer, Nora Kazour, Emily Danchik, Michael T. Mordowanec, Henrietta Conrad, and Noah A. Smith. Comprehensive annotation of multiword expressions in a social web corpus. In Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis (eds.), Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pp. 455-461, Reykjavik, Iceland, May 2014. European Language Resources Association (ELRA). URL https://aclanthology.org/L14-1433/. +Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Katrin Erk and Noah A. Smith (eds.), Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715-1725, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1162. URL https://aclanthology.org/P16-1162. +Anya Sims, Cong Lu, Klara Kaleb, Jakob Nicolaus Foerster, and Yee Whye Teh. Stochastok: Improving fine-grained subword understanding in LLMs. In ICLR 2025 Workshop on Building Trust in Language Models and Applications, 2025. URL https://openreview.net/forum?id=PZnDZdkGsE. +Aaditya K. Singh and DJ Strouse. Tokenization counts: the impact of tokenization on arithmetic in frontier llms, 2024. URL https://arxiv.org/abs/2402.14903. +Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024. +Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4149-4158, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1421. URL https://aclanthology.org/N19-1421. +Chaofan Tao, Qian Liu, Longxu Dou, Niklas Muennighoff, Zhongwei Wan, Ping Luo, Min Lin, and Ngai Wong. Scaling laws with vocabulary: Larger models deserve larger + +vocabies. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. +Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Metzler. Scale efficiently: Insights from pre-training and fine-tuning transformers. arXiv preprint arXiv:2109.10686, 2021. +Yi Tay, Vinh Q. Tran, Sebastian Ruder, Jai Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, and Donald Metzler. Charformer: Fast character transformers via gradient-based subword tokenization. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=JtBRnr10EFN. +Avijit Thawani, Jay Pujara, Filip Ilievski, and Pedro Szekely. Representing numbers in NLP: a survey and a vision. 
In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou (eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 644-656, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.53. URL https://aclanthology.org/2021.naacl-main.53. +Ke Tran. From english to foreign languages: Transferring pre-trained language models. arXiv preprint arXiv:2002.07306, 2020. +Menan Velayuthan and Kengatharaiyer Sarveswaran. Egalitarian language representation in language models: It all begins with tokenizers. In Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, and Steven Schockaert (eds.), Proceedings of the 31st International Conference on Computational Linguistics, pp. 5987-5996, Abu Dhabi, UAE, January 2025. Association for Computational Linguistics. URL https://aclanthology.org/2025.coling-main.400/. +Bandhav Veluri, Justin Chan, Malek Itani, Tuochao Chen, Takuya Yoshioka, and Shyamnath Gollakota. Real-time target sound extraction. In ICASSP, pp. 1-5, 2023. URL https://doi.org/10.1109/ICASSP49357.2023.10094573. +Tim Vieira, Ben LeBrun, Mario Giulianielli, Juan Luis Gastaldi, Brian DuSell, John Terilla, Timothy J O'Donnell, and Ryan Cotterell. From language models over tokens to language models over characters. arXiv preprint arXiv:2412.03719, 2024. +Junxiong Wang, Tushaar Gangavarapu, Jing Nathan Yan, and Alexander M Rush. Mambabye: Token-free selective state space model. In First Conference on Language Modeling, 2024. URL https://openreview.net/forum?id=X1xNsuKssb. +Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. Transformers: State-of-the-art natural language processing. In Qun Liu and David Schlangen (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38-45, Online, October 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-demos.6. URL https://aclanthology.org/2020.emnlp-demos.6. +Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. Bloomberggpt: A large language model for finance, 2023. URL https://arxiv.org/abs/2303.17564. +Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. ByT5: Towards a token-free future with pre-trained byte-to-byte models. Transactions of the Association for Computational Linguistics, 10:291-306, 2022. doi: 10.1162/tacl_a_00461. URL https://aclanthology.org/2022.tacl-1.17. + +Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2369-2380, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1259. URL https://aclanthology.org/D18-1259. 
+Lili Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, and Mike Lewis. MEGABYTE: Predicting million-byte sequences with multiscale transformers. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=JTmO2V9Xpz. +Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? In Anna Korhonen, David Traum, and Lluis Márquez (eds.), Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4791-4800, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1472. URL https://aclanthology.org/P19-1472. +Xinsong Zhang, Pengshuai Li, and Hang Li. AMBERT: A pre-trained language model with multi-grained tokenization. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pp. 421-435, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021-findings-acl.37. URL https://aclanthology.org/2021-findings-acl.37. +Wanjun Zhong, Siyuan Wang, Duyu Tang, Zenan Xu, Daya Guo, Yining Chen, Jiahai Wang, Jian Yin, Ming Zhou, and Nan Duan. Analytical reasoning of text. In Marine Carpuat, Marie-Catherine de Marneffé, and Ivan Vladimir Meza Ruiz (eds.), Findings of the Association for Computational Linguistics: NAACL 2022, pp. 2306-2319, Seattle, United States, July 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-naacl.177. URL https://aclanthology.org/2022-findings-naacl.177. +Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. AGIEval: A human-centric benchmark for evaluating foundation models. In Kevin Duh, Helena Gomez, and Steven Bethard (eds.), Findings of the Association for Computational Linguistics: NAACL 2024, pp. 2299-2314, Mexico City, Mexico, June 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-naacl.149. URL https://aclanthology.org/2024.findings-naacl.149. + +# A Experimental setup details + +# A.1 Tokenizer training + +We use the HuggingFace tokenizers (Wolf et al., 2020) library for tokenizer training. + +# A.1.1 Tokenizer training data + +We produce the tokenizer training data by sampling documents uniformly at random from the OLMO2 stage 2 pretraining data, referred to as olmo-mix. We use a 10 GB subset because early experiments showed that data beyond even $\sim 10$ MB does not make a difference in the resulting tokenizer's encoding efficiency. + +We found that olmo-mix had several extremely long documents, with the longest $1\%$ of documents making up $15\%$ of the data. In particular, a full academic paper (specifically Veluri et al., 2023) is duplicated 2,224 times back-to-back inside one document (as delimited by special EOS tokens). Because our tokenizers are trained on small sets of data, these extremely long documents can take up a large proportion of the data, resulting in unusual tokens like chunk-based processing. To circumvent possible data duplication issues, we truncate the longest $1\%$ of documents in the tokenizer training data to the $99\%$ percentile of document lengths. As future practitioners train SuperBPE tokenizers, we encourage especial attention to dedduplication, which may have an outsized impact on SuperBPE tokenizers. 
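For clarity, here is a minimal sketch (not the authors' code) of the truncation step described above, assuming documents are held as Python strings; lengths could equally be measured in bytes.

```python
import numpy as np

def truncate_longest_documents(docs, percentile=99):
    # Cap every document at the 99th percentile of document lengths; only the
    # longest ~1% of documents are affected. Lengths here are in characters.
    cap = int(np.percentile([len(d) for d in docs], percentile))
    return [d[:cap] for d in docs]
```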
# A.1.2 Limit on the size of superword tokens

Even after truncating the longest $1\%$ of documents, we found that SuperBPE tokenizers can still have extremely long tokens consisting of highly duplicated boilerplate text such as the Project Gutenberg license or common internet phrases such as "You're commenting using your". This issue is already present in BPE tokenizers trained on Chinese, which contain sentence-long tokens clearly taken from pornographic content. For instance, tokens in GPT-4O's tokenizer include 最新高清无码 ("latest HD uncensored") and 娱乐网址 ("entertainment website"). To prevent concerns about the tokenizer directly revealing parts of the training data (Hayase et al., 2024), we enforce an upper bound of 4 words in our tokens. Empirically, we found that this had no measurable impact on the encoding efficiency of the tokenizers or the resulting trained LMs.

# A.1.3 Pretokenization rules

We implement whitespace pretokenization with the default regex string from tokenizers, which was adopted by the GPT-2 tokenizer:

` ?\p{L}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+`

Note that the original GPT-2 pretokenization regex string also splits on contractions, e.g., splitting I'm into [I, 'm]. Since this choice is not universal among commercial tokenizers and is not related to whitespace pretokenization (and furthermore creates plenty of undesirable edge cases (Land, 2024)), we do not include this rule.

Independently of whitespace pretokenization (i.e., for both BPE and SuperBPE tokenizers), we follow recent convention (as introduced by GPT-3.5 and borrowed by LLAMA3 and OLMO2) and pretokenize digits into blocks of 3. We make one modification, grouping digits into blocks of 3 from the right rather than from the left, so that, e.g., 1000 would be pretokenized as [1, 000] instead of [100, 0]. This choice was recently found to yield improved performance on math benchmarks, even when applied solely at inference time (Singh & Strouse, 2024). Digit pretokenization is enforced with the following regex:

`(?=(\d{3})+(?!\d))`

# A.1.4 Special casing of colon

In order to make our tokenizer compatible with the common question-answering format where the prompt ends with a colon and the continuation is expected to start with a space, we "special-case" the colon by preventing the algorithm from learning any tokens that contain ": " as a substring. Without this fix, common question/answer prompts might produce distorted distributions over completions. Please see §C.3 for further discussion. This affects the resulting tokenizer minimally in terms of the learned vocabulary.

# A.2 Scaling model configurations

When matching inference compute, the goal is to match the average FLOPs per byte of generated text between two models with different tokenizers. To do so, we scale the model up to cancel the effect of longer tokens, which requires precise control over the model's size. To produce a model config with an arbitrary inference compute cost, we first represent the inference FLOPs per token as a polynomial in terms of the model dimension, MLP hidden dimension, and number of layers. Conveniently, once the model dimension and number of layers are chosen, the FLOPs are affine in the MLP hidden dimension, so we can easily solve for the MLP hidden dimension that gets us closest to the desired budget. We fix the head dimension to that of the base model.
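To illustrate, here is a minimal sketch (not the authors' code) of solving for the MLP hidden dimension under an assumed FLOPs-per-token model: $8d^2$ per layer for the Q/K/V/O projections, $6 d \cdot d_{\mathrm{mlp}}$ per layer for a SwiGLU-style MLP with three weight matrices, and $2dV$ for the unembedding. The authors' exact accounting may differ.

```python
def flops_per_token(d_model, d_mlp, n_layers, vocab_size):
    # Assumed cost model (not the authors' exact accounting); affine in d_mlp
    # once d_model and n_layers are fixed.
    per_layer = 8 * d_model**2 + 6 * d_model * d_mlp
    return n_layers * per_layer + 2 * d_model * vocab_size

def solve_mlp_dim(target_flops, d_model, n_layers, vocab_size, multiple_of=64):
    # Invert the affine relationship:
    #   target = n_layers * (8 d^2 + 6 d * d_mlp) + 2 d V
    d_mlp = (target_flops - n_layers * 8 * d_model**2 - 2 * d_model * vocab_size) / (
        6 * n_layers * d_model
    )
    return multiple_of * round(d_mlp / multiple_of)  # snap to a friendly multiple
```

With the head dimension fixed, the grid search described next calls this solve once per candidate model dimension and layer count.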
+ +To find the best config overall, we grid search over the hidden dimension (which must remain a multiple of the head dimension) and number of layers, solving for the MLP hidden dimension at each step. We choose the config which expands the transformer by the most uniform factors. This is measured by taking the ratios of the current parameters with the + +base config's parameters, applying the logarithm, and taking the standard deviation. While prior work has explored the best way to scale transformer models (Tay et al., 2021; Petty et al., 2023), we believe that scaling all parameters uniformly is reasonable since we are only increasing the model size by a small amount. + +We present the exact model hyperparameters for all model sizes used in our experiments in Table 4. + +
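A small sketch (not the authors' code) of the uniformity score described above; exactly which hyperparameters enter the score is an assumption here. Among budget-matching candidates from the grid search, the config with the lowest score is kept.

```python
import math
import statistics

def uniformity_score(candidate, base):
    # Standard deviation of the log ratios between a candidate config and the
    # base config; lower means the transformer is expanded by more uniform factors.
    log_ratios = [math.log(candidate[k] / base[k]) for k in base]
    return statistics.pstdev(log_ratios)

# Example using the 8B and 11B dimensions from Table 4.
base = {"d_model": 4096, "d_mlp": 22016, "n_layers": 32}
cand = {"d_model": 4608, "d_mlp": 24704, "n_layers": 37}
print(uniformity_score(cand, base))
```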
| | 680M | 910M | 1.9B | 2.5B | 8B | 11B |
| --- | --- | --- | --- | --- | --- | --- |
| Parameter count | 678.2M | 912.5M | 1.893B | 2.536B | 8.115B | 11.30B |
| Model dimension | 1,024 | 1,216 | 2,048 | 2,304 | 4,096 | 4,608 |
| MLP hidden dimension | 8,192 | 9,728 | 16,384 | 18,432 | 22,016 | 24,704 |
| Head dimension | 64 | 64 | 128 | 128 | 128 | 128 |
| Number of heads | 16 | 19 | 16 | 18 | 32 | 36 |
| Number of layers | 16 | 18 | 16 | 19 | 32 | 37 |
| Vocabulary size | 200,005 | 200,005 | 200,005 | 200,005 | 200,005 | 200,005 |
+ +Table 4: Model parameters for all model sizes. Model sizes 910M, 2.5B, and 11B are scaled versions of 680M, 1.9B, and 8B respectively. All other parameters match those of OLMO 300M (from the OLMO model ladder) for sizes 680M and 910M, OLMO 1B for sizes 1.9B and 2.5B, or OLMO2 7B for sizes 8B and 11B, respectively. Maximum sequence length values for various tokenizers are listed in Table 2. + +# A.3 Compute used for model training + +All models were pretrained on $328\times \mathrm{H}100$ nodes. + +# A.4 Evaluation Suite + +Our evaluation suite builds on DataComp-LM's core evaluation of 22 tasks (Li et al., 2024), which was found to provide low-variance signal of learning. We add 8 more popular tasks (e.g., MMLU, GSM8K) while also covering string manipulation tasks (e.g., CUTE), which are known to be challenging for LMs due to their tokenizers. + +All evaluations are based on decoding from the model and scoring the generation by either comparing it to the ground truth or evaluating its functional correctness (in the case of coding tasks). For multiple choice (MC) tasks, we check whether the predicted answer choice is an exact match (EM) to the target (we observe that effectively $100\%$ of model generations are valid answer choices, especially for later checkpoints). For open-ended tasks, we check whether the generated output contains the ground truth answer exactly, and for coding tasks, we report pass@10. + +We provide 5 in-context examples for all tasks, except for CoQA, which naturally contains in-context examples in the conversational context, and the coding tasks (HumanEval and MBPP), which are evaluated zero-shot following prior work. We use a maximum of 5,000 examples from each dataset, though some datasets contain much fewer examples. BB below stands for BIG-Bench. + +ARC consists of 4-way MC questions from grades 3-9 science exams. It contains two splits, ARC-Easy, which require knowledge of basic science, and ARC-Challenge, which require some procedural reasoning (Clark et al., 2018). + +Arithmetic contains simple arithmetic problems (Brown et al., 2020).11 We use the 2da, 2dm, and 2ds splits for addition, multiplication, and division of (up to) 2-digit numbers. + +BoolQ contains naturally occurring yes/no questions paired with passages that provide an answer (Clark et al., 2019). + +CommonsenseQA contains 5-way MC questions that require commonsense knowledge to answer (Talmor et al., 2019). + +COPA contains two-way MC questions about cause and effect (Roemmle et al., 2011; Kavumba et al., 2019). + +CoQA consists of passages with a series of conversational questions about the passage Reddy et al. (2019). Each question requires the prior conversational context, due to possible coreference across questions. Because these contextual questions naturally serve as in-context examples, we do not provide additional in-context examples. + +BB CS Algorithms consists of two subtasks, determining whether a given series of parentheses is balanced and identifying the longest common subsequence in two letter strings (BIG-bench, 2023). + +CUTE contains questions that require the model to understand and manipulate spelling, such as replacing all instances of a particular letter in a word with another letter (Edman et al., 2024). + +DROP contains questions about passages, potentially requiring reasoning over multiple pieces of information in the passage (Dua et al., 2019). 
+ +BB Dyck Languages consists of a sequence of parentheses and requires the model to predict the correct sequence of closing parentheses so that the entire sequence is well-balanced. + +GSM8K contains grade school math word problems that require between 2 and 8 steps to solve. In the in-context examples, we provide the answer passage that contains intermediate steps with calculator annotations removed. The model is expected to provide the final numerical answer after four hashtags (####) that delimit the reasoning and final answer (Cobbe et al., 2021). + +HellaSwag contains 4-way MC questions which ask for the most natural continuation given the context (Zellers et al., 2019). + +HotpotQA contains questions along with a corresponding passage from Wikipedia containing the answer (Yang et al., 2018). + +HumanEval contains programming problems where the model is tasked with completing a Python function given its docstring (Chen et al., 2021). We use "\nclass", "\ndef", "\n#", and "\nif" as stop tokens. Following the original paper, we sample 20 continuations with top $p = 0.95$ and temperature $= 0.8$. Models are allowed to generate for a maximum of 128 new tokens. The functional correctness of generations is automatically evaluated using test cases. We use the 20 generations to make an unbiased estimate of the pass@10 rate, i.e., how likely it is that at least one of 10 sampled solutions to a problem is correct. + +Jeopardy contains open-ended questions from the "Jeopardy!" quiz show.[12] + +Lambada contains narratives without the last word, which is inferrable given the context (Paperno et al., 2016). This task requires models to attend to the full narrative instead of only the local context. + +BB Language Identification contains sentences in different languages, and the task is to choose the language of the sentence from a long list of options. + +LSAT-AR contains MC questions that evaluate the analytical reasoning (AR) ability of LMs (Zhong et al., 2022; 2024). Test questions are drawn from the Law School Admission Test (LSAT) from 1991 to 2016. + +MBPP contains Python programming problems where the model is given a description of the desired function and a series of unit tests. We use the same evaluation setup as HumanEval. + +MMLU contains 4-way MC questions spanning 57 different domains, covering both world knowledge and problem-solving abilities (Hendrycks et al., 2021). Note that we report a straight average over the 5000-example sample, rather than a macro-average over subjects. + +OpenbookQA contains 4-way MC questions that require multi-step reasoning and commonsense knowledge (Mihaylov et al., 2018). + +BB Operators contains questions where the model is given a function definition and asked to compute the output of that function given a particular input. + +PIQA contains MC questions that require physical commonsense reasoning (Bisk et al., 2020). + +BB Repeat-Copy-Logic contains instructions that ask the model to produce a particular string (Austin et al., 2021). + +SQuAD contains passages paired with questions about the passage (Rajpurkar et al., 2016). The answer is always a span from the passage. + +TriviaQA contains open-ended questions about world knowledge (Joshi et al., 2017). + +BB WikidataQA requires models to complete factual statements with the correct continuation. + +Winograd contains binary MC questions where the model is given a context and asked to determine which entity a pronoun refers to, between two options (Levesque et al., 2012).
Correctly answering the question requires commonsense knowledge and contextual reasoning. + +Winogrande contains questions with the same schema as Winograd, but increases both the scale and difficulty of the dataset (Sakaguchi et al., 2021). + +# B Additional Results + +# B.1 How BPB varies with context length + +In our main experiments (§3), we adjust the context size of SuperBPE models to match the effective context size of the BPE model in raw text. To justify this design choice, we show that the next token becomes easier to predict as a function of the preceding context in bytes (not tokens). Figure 6 shows the average BPB at every token index (left) vs byte index (right): when measured at fixed token indices, SuperBPE has an advantage from seeing more context (achieving lower loss on average at the same token index), whereas at fixed byte indices, this advantage goes away. + +# B.2 Task evaluation + +We report the individual task performance of BPE and all SuperBPE models in Table 5 (this is an expansion of Table 1). We also show a subset of task-specific performance curves during pretraining in Figure 12. + +![](images/84f2007d154f77661b1483fc6a7996163488e1e8ff828b06402921ad2557ab4c.jpg) +(a) BPB vs tokens of context + +![](images/c421e5a30f24ec4e8a6d113c372482b96b32219f88c00b1e3e277b791450fae3.jpg) +(b) BPB vs bytes of context +Figure 6: When comparing the normalized loss of the next token, controlling for preceding tokens of context gives SuperBPE an advantage, while controlling for bytes of context gives a close match between BPE and SuperBPE. + +
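The pass@10 numbers reported for HumanEval and MBPP (in Table 1 and in Table 5 below) follow the standard unbiased estimator of Chen et al. (2021), applied to the 20 sampled completions per problem described in §A.4. A minimal sketch of that estimator is shown here; the function name and the example counts are illustrative, not taken from the paper's evaluation code.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k (Chen et al., 2021).

    n: number of sampled completions per problem (20 in this setup)
    c: number of those completions that pass all test cases
    k: sample budget being estimated (10 in this setup)
    """
    if n - c < k:
        # Every size-k subset must contain at least one passing sample.
        return 1.0
    # 1 - P(all k drawn samples fail the tests)
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g., 3 of 20 generations pass the unit tests for some problem
print(round(pass_at_k(20, 3, 10), 3))
```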
| Category | Task | BPE 8B | SuperBPE 8B (t = 80k) | SuperBPE 8B (t = 160k) | SuperBPE 8B (t = 180k) | SuperBPE 11B |
| --- | --- | --- | --- | --- | --- | --- |
| Knowledge | ARC-Easy (MC) | 46.6 | 60.8 | 63.6 | 67.1 | 60.6 |
| | ARC-Challenge (MC) | 35.1 | 46.4 | 43.9 | 50.6 | 43.9 |
| | Jeopardy (EM) | 42.1 | 40.2 | 41.8 | 41.8 | 42.2 |
| | MMLU (MC) | 36.5 | 41.9 | 42.6 | 44.7 | 41.0 |
| | OpenbookQA (MC) | 33.2 | 49.8 | 49.4 | 54.4 | 46.4 |
| | TriviaQA (EM) | 60.6 | 59.7 | 61.9 | 61.3 | 62.3 |
| | WikidataQA (EM) | 69.7 | 68.2 | 69.5 | 70.9 | 70.9 |
| Math & Reasoning | Arithmetic (EM) | 54.8 | 63.2 | 58.6 | 59.3 | 56.9 |
| | GSM8K (EM) | 6.4 | 6.9 | 6.7 | 6.7 | 7.4 |
| | LSAT-AR (MC) | 21.3 | 23.9 | 24.3 | 23.0 | 20.9 |
| | Operators (EM) | 35.5 | 32.2 | 35.5 | 33.6 | 37.9 |
| | Repeat-Copy-Logic (EM) | 3.1 | 6.2 | 6.2 | 6.2 | 3.1 |
| Coding | HumanEval (pass@10) | 15.9 | 15.0 | 14.4 | 13.4 | 15.9 |
| | MBPP (pass@10) | 27.5 | 25.3 | 28.4 | 28.3 | 29.4 |
| Reading Comprehension | BoolQ (MC) | 59.7 | 65.2 | 62.3 | 64.6 | 64.7 |
| | CoQA (EM) | 12.6 | 12.8 | 12.5 | 13.2 | 13.1 |
| | DROP (EM) | 31.3 | 28.6 | 32.8 | 31.4 | 33.1 |
| | HotpotQA (EM) | 53.5 | 52.5 | 54.7 | 55.2 | 54.6 |
| | SQuAD (EM) | 75.1 | 74.3 | 76.2 | 75.8 | 77.2 |
| Commonsense | CommonsenseQA (MC) | 33.5 | 50.0 | 52.3 | 53.8 | 50.5 |
| | COPA (MC) | 77.0 | 86.6 | 87.6 | 85.8 | 97.0 |
| | PIQA (MC) | 55.2 | 57.7 | 61.8 | 59.8 | 59.2 |
| | Winograd (MC) | 50.4 | 52.5 | 55.2 | 53.1 | 52.3 |
| | Winogrande (MC) | 47.3 | 51.2 | 51.6 | 52.6 | 50.2 |
| Language Understanding | HellaSwag (MC) | 29.7 | 31.2 | 30.3 | 33.7 | 36.6 |
| | LAMBADA (EM) | 77.0 | 72.8 | 75.1 | 70.6 | 75.8 |
| | Language Identification (EM) | 8.8 | 10.2 | 9.7 | 9.0 | 10.1 |
| String Manipulation | CS Algorithms (EM) | 46.1 | 47.3 | 42.6 | 48.6 | 49.1 |
| | CUTE (EM) | 31.3 | 32.2 | 32.8 | 32.6 | 35.7 |
| | Dyck-Languages (EM) | 15.9 | 23.2 | 18.8 | 14.2 | 16.7 |
| | Average | 39.8 | 42.9 | 43.4 | 43.8 | 43.8 |
+ +Table 5: Performance of BPE and SuperBPE models on 30 downstream tasks. This is an expansion of Table 1 with more models. + +# B.3 BPB evaluation + +See Figure 7 for the bits-per-byte during pretraining of all models we compare. + +![](images/ad02fa6005023c7a5ffcddcd0f09007a757f860bb2f9c15ab4baf49e6e43bf72.jpg) +Figure 7: Bits-per-byte of BPE and SuperBPE models during pretraining. The BPE 8B, SuperBPE 8B ( $t = 180\mathrm{k}$ ), and SuperBPE 11B attain 0.7465, 0.7482, and 0.7445 BPB respectively at the end of training. + +# B.4 Additional scaling experiments + +Our tokenizer has several interesting interactions with LM scaling, purely due to its increased efficiency. For the purpose of this section, let $\alpha$ denote the ratio of our tokenizer's efficiency to the efficiency of a normal BPE tokenizer. (For example, we have $\alpha \approx 1.49$ for our most efficient tokenizer.) + +The primary advantage of a more efficient tokenizer is a reduction of the context length (in tokens) for the same effective context length (in bytes). All other things being equal, this gives: + +1. A $1 / \alpha^2$ reduction in attention compute. +2. A $1 / \alpha$ reduction in non-attention compute. +3. A $1 / \alpha$ reduction in activation memory during training and KV-cache size during inference. + +Thus, if the context length is short, the total compute savings will be close to $1 / \alpha$ . For longer contexts, the compute savings may approach $1 / \alpha^2$ . Given a fixed training budget, there are two natural ways to convert these savings into improved performance. + +# B.4.1 Matching model parameter count + +In many applications of language models, such as deployment to consumer or edge devices, it is crucial to keep the model's size under control. In this regime, we will assume the model size is fixed. This directly grants the aforementioned benefits, and we will turn to increasing the number of training steps to match the training budget. + +Since the amount of text seen per step remains the same due to the fixed effective context length, a more efficient tokenizer allows the model to see more text during training for the same budget. This may lead to improved performance on downstream tasks since the model is more likely to have seen relevant examples during training. Additionally, although the model is the same size, it requires less compute and memory at inference time to perform the same tasks. In some settings, these gains can be used to amplify inference-time scaling (Snell et al., 2024), leading to further potential gains. + +![](images/d2a85f429d465bfed06712481ee84da313cf21f2e0393cdc138c69019561edbb.jpg) +Figure 8: Results for scaling both model parameters and train tokens proportionally. Compared to the BPE baseline, we consider a SuperBPE model that matches parameter count and a SuperBPE model that matches inference compute. Here we see the spread between the three settings decreases with scale. + +# B.4.2 Matching inference compute + +In other applications of language models, model size is less critical compared to inference compute. In these situations, it may be more desirable to scale the model size up to absorb the extra compute. + +Changing the model size has a strong impact on scaling. Depending on the context length, we may scale the model by a factor of anywhere between $\alpha$ and $\alpha^2$ in order to match inference compute; the short sketch below makes this tradeoff concrete.
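The following sketch combines the attention and non-attention reductions listed above for a given efficiency ratio $\alpha$. The attention-compute fractions used are arbitrary illustrative values (they depend on context length and architecture), not measurements from the paper.

```python
def compute_ratio(alpha: float, attn_fraction: float) -> float:
    """Fraction of the baseline forward-pass compute needed with an
    alpha-times more efficient tokenizer, assuming `attn_fraction` of the
    baseline compute sits in attention (quadratic in sequence length)
    and the remainder scales linearly with sequence length."""
    return attn_fraction / alpha**2 + (1.0 - attn_fraction) / alpha

alpha = 1.49  # efficiency ratio of the most efficient SuperBPE tokenizer
for attn_fraction in (0.05, 0.25, 0.50):  # illustrative splits only
    ratio = compute_ratio(alpha, attn_fraction)
    # To match inference compute instead, the model could be scaled up by
    # roughly the inverse of this ratio.
    print(f"attention share {attn_fraction:.0%}: {ratio:.2f}x of BPE compute")
```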
Since each training step involves $1 / \alpha$ times as many tokens, the ratio of tokens to model parameters per step may be reduced by as much as $1 / \alpha^3$ . Prior work on LM scaling (Hoffmann et al., 2022; Kaplan et al., 2020) reports diminishing gains once the ratio of the numbers of train tokens and model parameters becomes too large. An $\alpha$ times more efficient tokenizer allows us to train for up to $\alpha^3$ times longer while maintaining the same token/parameter ratio and without increasing inference compute, delaying the regime of diminishing gains. + +# B.4.3 Experiments + +We train 680M and 1.9B sized BPE models on various numbers of tokens (ranging from $\approx 20$ to $\approx 80$ tokens per parameter) to establish a baseline scaling trend. We then train two models with SuperBPE tokenizers for each baseline model: one with matching parameter count and one with matching inference compute cost. + +There are a couple of interesting ways to visualize these results: in Figure 5, we hold the model size fixed and increase the number of training tokens, and in Figure 8, we hold the ratio of train tokens to model parameters fixed (the inference-compute-matched models are fixed at a ratio roughly 0.7 times lower) and vary both the model size and the number of training tokens. The general trends observed from these results are that matching inference compute is almost universally the best, while matching parameter count tends to be worse than the baseline except in the undertrained regime, where it is better than the baseline. The differences between the settings increase with overtraining, but decrease when scaling both model size and training tokens at the same time. + +![](images/cc59d2530b2c5247d53eff05c8060a576208af2d40335dd18a2dadc84eec54fc.jpg) +(a) Superword density + +![](images/e5c63215364f50b2bc6e27f939f21909279826c96d46946a07ccb15c4a36f755.jpg) +(b) Superword length distribution +Figure 9: (Left) The number of superword tokens in a SuperBPE tokenizer, as a function of the transition point. A superword token is any token that violates the whitespace pretokenization rule from Stage 1. With an early transition point of $t = 60\mathrm{K}$ , about $85\%$ of the tokens learned in Stage 2 are superword tokens. For $t > 100\mathrm{k}$ , close to $100\%$ of Stage 2 tokens are superwords. (Right) The distribution of superword token lengths in terms of number of words, for $t = 180\mathrm{k}$ . + +# C Analysis of SuperBPE Tokenizers + +# C.1 Superword token analysis + +How many superword tokens are in SuperBPE tokenizers? While the second stage of the pretokenization curriculum allows learning of superword tokens, subword tokens can still be learned. As shown in Figure 9a, for transition points $t < 80\mathrm{k}$ , the number of superword tokens is relatively steady around 120k. For $t > 100\mathrm{k}$ , almost all tokens learned in Stage 2 are superword tokens. Figure 9b shows the number of whitespace-delimited words in the superword tokens of SuperBPE with $t = 180\mathrm{k}$ . + +# C.2 Analysis of token frequencies in encoding + +We also analyze token frequency statistics under BPE versus SuperBPE tokenizers. Figure 10a shows the relation between token rank (in frequency) and frequency. While tokens in BPE demonstrate a standard Zipfian relation, the SuperBPE curves have a shallower slope, meaning that the rate of decay in token frequency is smaller.
The smaller proportion of tokens with very low counts may reduce the prevalence and severity of glitch tokens (Rumbelow & Watkins, 2023; Land & Bartolo, 2024). + +Figure 10b shows the minimum number of tokens from the vocabulary needed to cover any given proportion of data. For BPE, the relation is striking: only $57\%$ of tokens are needed to encode $99\%$ of the data! The remaining tokens make up a long tail of infrequent tokens. In contrast, SuperBPE tokenizers make better use of the vocabulary. For $t = 80k$ and $t = 180k$ , this statistic is $90\%$ and $70\%$ of tokens, respectively. + +# C.3 Distributional Distortion at the Prompt Boundary + +Prior work (Lundberg, 2023; Phan et al., 2024) has shown that LMs using BPE tokenizers may produce distorted generations due to the forced partition in tokenization between a prompt and its completion. This issue stems from the fact that users typically desire completions conditioned on a text prompt. The natural approach to obtaining such completions is to take the prompt, tokenize it with the proper tokenizer, and then sample a completion of the resulting token sequence from the LM. + +![](images/8aeab79b4aa06d677e7e186c851179525d53cd6bf84089fe9de1423c2d97e1a4.jpg) +(a) Token frequency distribution + +![](images/8dd245757e3a67c7d585b6c9e039884acbbdc7b9e840504fdd32355ec36bea42.jpg) +(b) Data covering +Figure 10: (Left) Token counts when ordered by frequency. The rate of decay in token frequency is smaller for SuperBPE than for BPE. (Right) The minimum number of tokens needed to cover a given proportion of the data. SuperBPE tokenizers make better use of the vocabulary, while BPE tokenizers have a long tail of infrequent tokens. + +For a simple example of how this can go wrong, consider a tokenizer with a base vocabulary of A and B and a single merge forming the token AB. Let's suppose we trained a model using this tokenizer on the strings "AA", "AB", and "BB" with equal proportions. If we condition on the text prefix "A", there are two equally probable continuations: "A" and "B". However, A is the only valid completion of the token prefix A, since the token B never follows the token A during training. In other words, the prompt-completion pair (A, B) is canonically tokenized using a token that crosses the boundary between the prompt and the completion. + +While this problem is shared by all BPE tokenizers, it can be partially mitigated by pretokenization: if the prompt and the completion are separated during the pretokenization step, then it is impossible for a token to cross the boundary. This fix tends to work well for English, where the completion is typically expected to begin with whitespace, so whitespace pretokenization would apply. However, there are many settings where whitespace pretokenization cannot fix the underlying issue, including natural languages that do not use whitespace to separate words (like Chinese and Japanese), programming languages, and constrained generation (Lundberg, 2023; Ribeiro, 2023). + +Several fixes for this issue have been proposed: at training time, token merges can be randomly dropped (Provilkov et al., 2020; Sims et al., 2025; DeepSeek-AI, 2025) to expose LMs to the internal makeup of tokens; at inference time, options include token healing (Lundberg, 2023), algorithmic correction (Phan et al., 2024), and enumeration of all relevant segmentations of the prompt (Vieira et al., 2024). We leave a detailed comparison of these techniques to future work.
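The A/B example above can be reproduced in a few lines. The sketch below uses the toy corpus and the single merge from that example (nothing here comes from the paper's actual tokenizers) to show that the text-level distribution allows two continuations of the prompt "A", while the token-level statistics the model is trained on never place the token B after the token A.

```python
from collections import Counter

# Toy tokenizer: base vocabulary {A, B} plus one learned merge A+B -> "AB".
def tokenize(text: str) -> list[str]:
    tokens, i = [], 0
    while i < len(text):
        if text[i:i + 2] == "AB":
            # Apply the merge wherever it occurs, as BPE encoding would.
            tokens.append("AB")
            i += 2
        else:
            tokens.append(text[i])
            i += 1
    return tokens

corpus = ["AA", "AB", "BB"]  # equal proportions, as in the example above

# Text level: which characters follow the character prefix "A"?
text_next = Counter(doc[1] for doc in corpus if doc.startswith("A"))
print("text-level continuations of 'A':", dict(text_next))       # {'A': 1, 'B': 1}

# Token level: which tokens ever follow the token 'A' in the tokenized corpus?
token_next = Counter()
for doc in corpus:
    toks = tokenize(doc)
    for first, second in zip(toks, toks[1:]):
        if first == "A":
            token_next[second] += 1
print("token-level continuations of ['A']:", dict(token_next))   # {'A': 1}; 'B' never occurs
```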
+ +Additionally, the issue does not apply at all to models that separate the user's input from the model's response using special tokens, as is typical for chat models. + +# D Other Related Work + +Please see Mielke et al. (2021) for a survey of subword tokenization. + +**Pretokenization** Pretokenization defines how the text is split in order to prevent certain pairs of tokens from being merged. GPT-2 (Radford et al., 2019) introduced a regular expression (regex) which defines the pretokenization pattern. These regex strings have gained complexity over time; GPT-3.5 limits the number of digits in numerical tokens to 3, and allows single punctuation to be merged with the start of words (presumably to accommodate code, as it allows .get to be a single token). Prior work has shown that, for instance, digit pretokenization choices (Nogueira et al., 2021; Thawani et al., 2021; Singh & Strouse, 2024) can significantly impact arithmetic performance. It is also likely that pretokenization affects different languages differently (Velayuthan & Sarveswaran, 2025; Ahia et al., 2023), due to natural statistics of the average word length, which acts as an upper bound on encoding efficiency in that language under subword tokenization. Nonetheless, the effectiveness of many pretokenization choices has not been thoroughly studied; a minimal example of GPT-2-style pretokenization is sketched at the end of this section. + +![](images/af554d01661668c30fdd6aecc729da3932b6d036243cdf6b02f2c9f548475d99.jpg) +Figure 11: Token counts when ordered by token ID, which reflects the order in which tokens were learned in tokenizer training. + +**$n$-gram language models** Our work is loosely related to $n$-gram LMs, which incorporate $n$-gram statistics into next-word prediction (Brants et al., 2007; Liu et al., 2024a). + +**Internal representation of semantic units** Previous work has shown that the early layers of the LM may "aggregate" information over multi-token entities (e.g., [New, York]) into the last token's (e.g., York) hidden representation (Meng et al., 2022; Kaplan et al., 2025; Lad et al., 2024). This suggests that LMs naturally learn multi-word representations, and segmenting text into more semantically cohesive units at the input level (e.g., having New York as a single token) may simplify this process. + +![](images/3c79a7e71a91e682ec753425b6d4ace409c89e385949a63f471c66eb47df09c4.jpg) + +![](images/83c7ac0a33955fc2008194ff3470f2802a9403c0140208f940882c81d5503629.jpg) + +![](images/1d69a62d6c61c9405fb5435c05854a3230f8599f4d8c605b56af5368930749bf.jpg) +Figure 12: Performance during pretraining for a subset of tasks in our evaluation suite.
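As a concrete reference for the pretokenization paragraph above, the sketch below applies the pretokenization regex released with OpenAI's GPT-2 code (Radford et al., 2019) to a sample sentence. The sample text and the bytes-per-chunk calculation are illustrative additions, not artifacts from this paper.

```python
import regex  # third-party `regex` module, needed for the \p{L} / \p{N} classes

# Pretokenization pattern from the GPT-2 release: splits off common English
# contractions, runs of letters, runs of digits, punctuation, and whitespace.
GPT2_PAT = r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+"""

text = "By the way, New York's area code is 212."  # illustrative sample
chunks = regex.findall(GPT2_PAT, text)
print(chunks)

# BPE merges can only happen inside these chunks, so under whitespace-style
# pretokenization the average chunk length in UTF-8 bytes is an upper bound on
# the achievable bytes-per-token for this text.
avg_chunk_bytes = sum(len(c.encode("utf-8")) for c in chunks) / len(chunks)
print(f"upper bound on bytes-per-token for this text: {avg_chunk_bytes:.2f}")
```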
+ +![](images/fb634c50fcf9f347aa3bb35ced7dfb51e0f1b27248ceb8eaf70bf7a1af7fee11.jpg) \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13423/images.zip b/data/2025/2503_13xxx/2503.13423/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e683d0f6b68756c2a17713ad8f3d4ef360fc45f0 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13423/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7dbb94913a22c6ac9cca71011ad6dd70655cffc30cbe5dd83d4aa7d6a0f3ae8d +size 896437 diff --git a/data/2025/2503_13xxx/2503.13423/layout.json b/data/2025/2503_13xxx/2503.13423/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..4d201c9199bfcdf5603b9fa44298b77cff4da745 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13423/layout.json @@ -0,0 +1,15840 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 105, + 78, + 408, + 97 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 78, + 408, + 97 + ], + "spans": [ + { + "bbox": [ + 105, + 78, + 408, + 97 + ], + "type": "text", + "content": "SuperBPE: Space Travel for Language Models" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 111, + 114, + 271, + 129 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 114, + 271, + 129 + ], + "spans": [ + { + "bbox": [ + 111, + 114, + 271, + 129 + ], + "type": "text", + "content": "*Alisa Liu♥♠ *Jonathan Hayase♥" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 111, + 129, + 441, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 129, + 441, + 144 + ], + "spans": [ + { + "bbox": [ + 111, + 129, + 441, + 144 + ], + "type": "text", + "content": "Valentin Hofmann" + }, + { + "bbox": [ + 111, + 129, + 441, + 144 + ], + "type": "inline_equation", + "content": "^{\\diamond \\diamond}" + }, + { + "bbox": [ + 111, + 129, + 441, + 144 + ], + "type": "text", + "content": " Sewoong Oh" + }, + { + "bbox": [ + 111, + 129, + 441, + 144 + ], + "type": "inline_equation", + "content": "^{\\diamond}" + }, + { + "bbox": [ + 111, + 129, + 441, + 144 + ], + "type": "text", + "content": " Noah A. 
Smith" + }, + { + "bbox": [ + 111, + 129, + 441, + 144 + ], + "type": "inline_equation", + "content": "^{\\diamond \\diamond}" + }, + { + "bbox": [ + 111, + 129, + 441, + 144 + ], + "type": "text", + "content": " Yejin Choi" + }, + { + "bbox": [ + 111, + 129, + 441, + 144 + ], + "type": "inline_equation", + "content": "^{\\spadesuit}" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 112, + 144, + 396, + 158 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 112, + 144, + 396, + 158 + ], + "spans": [ + { + "bbox": [ + 112, + 144, + 396, + 158 + ], + "type": "inline_equation", + "content": "\\diamond" + }, + { + "bbox": [ + 112, + 144, + 396, + 158 + ], + "type": "text", + "content": "University of Washington " + }, + { + "bbox": [ + 112, + 144, + 396, + 158 + ], + "type": "inline_equation", + "content": "\\spadesuit" + }, + { + "bbox": [ + 112, + 144, + 396, + 158 + ], + "type": "text", + "content": "NVIDIA " + }, + { + "bbox": [ + 112, + 144, + 396, + 158 + ], + "type": "inline_equation", + "content": "\\diamond" + }, + { + "bbox": [ + 112, + 144, + 396, + 158 + ], + "type": "text", + "content": "Allen Institute for AI" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 280, + 185, + 331, + 198 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 280, + 185, + 331, + 198 + ], + "spans": [ + { + "bbox": [ + 280, + 185, + 331, + 198 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 140, + 209, + 471, + 498 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 209, + 471, + 498 + ], + "spans": [ + { + "bbox": [ + 140, + 209, + 471, + 498 + ], + "type": "text", + "content": "The assumption across nearly all language model (LM) tokenization schemes is that tokens should be subwords, i.e., contained within word boundaries. Despite providing a seemingly reasonable inductive bias, we question whether this common practice limits the potential of modern LMs. Whitespace is not a reliable delimiter of meaning, as evidenced by multi-word expressions (e.g., by the way), cross-lingual variation in the number of words needed to express a concept (e.g., spacesuit helmet in German is raumanzugehm), and languages that do not use whitespace at all (e.g., Chinese). To explore the potential of tokenization beyond subwords, we introduce a \"superword\" tokenizer, SuperBPE, that incorporates a simple pretokensization curriculum into the byte-pair encoding (BPE) algorithm to first learn subwords and then superwords that bridge whitespace. This modification dramatically improves encoding efficiency: when limiting vocabulary size to 200k, SuperBPE encodes a fixed piece of text with up to " + }, + { + "bbox": [ + 140, + 209, + 471, + 498 + ], + "type": "inline_equation", + "content": "33\\%" + }, + { + "bbox": [ + 140, + 209, + 471, + 498 + ], + "type": "text", + "content": " fewer tokens on average than BPE. In experiments, we pretrain 8B transformer LMs from scratch while fixing model size, vocabulary size, and train compute, varying only the algorithm for learning the vocabulary. 
Our model trained with SuperBPE achieves an average " + }, + { + "bbox": [ + 140, + 209, + 471, + 498 + ], + "type": "inline_equation", + "content": "+4.0\\%" + }, + { + "bbox": [ + 140, + 209, + 471, + 498 + ], + "type": "text", + "content": " absolute improvement over the BPE baseline across 30 downstream tasks (including " + }, + { + "bbox": [ + 140, + 209, + 471, + 498 + ], + "type": "inline_equation", + "content": "+8.2\\%" + }, + { + "bbox": [ + 140, + 209, + 471, + 498 + ], + "type": "text", + "content": " on MMLU), while simultaneously requiring " + }, + { + "bbox": [ + 140, + 209, + 471, + 498 + ], + "type": "inline_equation", + "content": "27\\%" + }, + { + "bbox": [ + 140, + 209, + 471, + 498 + ], + "type": "text", + "content": " less compute at inference time. In analysis, we find that SuperBPE produces segmentations of text that are more uniform in per-token difficulty, perhaps because SuperBPE tokens often capture common multi-word expressions that function semantically as a single unit. In sum, SuperBPE offers a straightforward and local modification to tokenization that improves both encoding efficiency and downstream performance, yielding better LMs overall." + }, + { + "bbox": [ + 140, + 209, + 471, + 498 + ], + "type": "inline_equation", + "content": "^1" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 517, + 196, + 530 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 517, + 196, + 530 + ], + "spans": [ + { + "bbox": [ + 105, + 517, + 196, + 530 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 541, + 506, + 599 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 541, + 506, + 599 + ], + "spans": [ + { + "bbox": [ + 104, + 541, + 506, + 599 + ], + "type": "text", + "content": "Tokenizer are the lens through which language models (LMs) view data: they segment a stream of bytes into a sequence of tokens in the LM vocabulary. In the era of transformer LMs, tokenization is done at the level of subwords, meaning that tokens consist of parts of words (including complete words), but they cannot bridge whitespace. Intuitively, subword tokens capture meaningful and composable semantic units." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 602, + 506, + 703 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 602, + 506, + 703 + ], + "spans": [ + { + "bbox": [ + 104, + 602, + 506, + 703 + ], + "type": "text", + "content": "Although seemingly reasonable, is this common practice a good one? Whitespace is an unreliable delimiter of meaning (Martin, 2017); many groups of words (e.g., a lot of or search engine) function semantically as single units, and English speakers store thousands of such multi-word expressions in their mental lexicon (Church, 2011; Contreras Kallens & Christiansen, 2022). Cross-lingually, there is considerable variation in whether a given meaning is conveyed by a single word or several words. 
At the extreme, languages such as Chinese and Japanese do not use whitespace at all, and tokens in these languages can span multiple words or even entire sentences (e.g., the tokenizers of GPT-4O [OpenAI, 2024] or DEEPSEEKV3 [DeepSeek-AI, 2025]), but this has seemingly not hindered LMs from" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 118, + 709, + 201, + 720 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 709, + 201, + 720 + ], + "spans": [ + { + "bbox": [ + 118, + 709, + 201, + 720 + ], + "type": "text", + "content": "*Equal contribution." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 118, + 720, + 384, + 731 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 720, + 384, + 731 + ], + "spans": [ + { + "bbox": [ + 118, + 720, + 384, + 731 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 118, + 720, + 384, + 731 + ], + "type": "text", + "content": "Code and artifacts are available at https://superbpe.github.io/." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 14, + 218, + 37, + 574 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 14, + 218, + 37, + 574 + ], + "spans": [ + { + "bbox": [ + 14, + 218, + 37, + 574 + ], + "type": "text", + "content": "arXiv:2503.13423v3 [cs.CL] 26 Aug 2025" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "text", + "content": "1" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 107, + 75, + 445, + 116 + ], + "blocks": [ + { + "bbox": [ + 107, + 75, + 445, + 116 + ], + "lines": [ + { + "bbox": [ + 107, + 75, + 445, + 116 + ], + "spans": [ + { + "bbox": [ + 107, + 75, + 445, + 116 + ], + "type": "image", + "image_path": "c3aefc783508cbdf35c8b351a764278b60a5a93edab00efc917dd0942af8d010.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 184, + 125, + 425, + 312 + ], + "blocks": [ + { + "bbox": [ + 184, + 125, + 425, + 312 + ], + "lines": [ + { + "bbox": [ + 184, + 125, + 425, + 312 + ], + "spans": [ + { + "bbox": [ + 184, + 125, + 425, + 312 + ], + "type": "image", + "image_path": "9dd8cd205ee4d7a5d68b14276daffb2574f1374c4f02bcd5be1bb88161cc4bab.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 322, + 506, + 477 + ], + "lines": [ + { + "bbox": [ + 104, + 322, + 506, + 477 + ], + "spans": [ + { + "bbox": [ + 104, + 322, + 506, + 477 + ], + "type": "text", + "content": "Figure 1: SuperBPE tokenizers encode text much more efficiently than BPE, and this advantage grows with larger vocabulary size. 
Encoding efficiency (" + }, + { + "bbox": [ + 104, + 322, + 506, + 477 + ], + "type": "inline_equation", + "content": "y" + }, + { + "bbox": [ + 104, + 322, + 506, + 477 + ], + "type": "text", + "content": "-axis) is measured in bytes-per-token, the number of bytes encoded per token over a large corpus. In the 40 bytes of text shown on the top of this figure, SuperBPE uses 7 tokens while BPE uses 13, so the methods' efficiencies are " + }, + { + "bbox": [ + 104, + 322, + 506, + 477 + ], + "type": "inline_equation", + "content": "40/7 = 5.7" + }, + { + "bbox": [ + 104, + 322, + 506, + 477 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 322, + 506, + 477 + ], + "type": "inline_equation", + "content": "40/13 = 3.1" + }, + { + "bbox": [ + 104, + 322, + 506, + 477 + ], + "type": "text", + "content": " bytes-per-token, respectively. In the graph, the encoding efficiency of BPE plateaus early because it exhausts the valuable whitespace-delimited words in the training data. In fact, it is bounded above by the gray dotted line, which shows the maximum achievable encoding efficiency with BPE if every whitespace-delimited word were in the vocabulary. In contrast, SuperBPE has dramatically better encoding efficiency that continues to improve with increased vocabulary size, as it can continue to add common word sequences to treat as tokens in the vocabulary. The different gradient lines show different transition points from learning subword to superword tokens, which always yields an immediate improvement. SuperBPE also encodes text more efficiently than a naive variant of BPE that does not use whitespace pretokensization at all." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 489, + 504, + 534 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 489, + 504, + 534 + ], + "spans": [ + { + "bbox": [ + 104, + 489, + 504, + 534 + ], + "type": "text", + "content": "performing well on these languages. In fact, including multi-word tokens promises to be beneficial in many ways: it may shorten token sequences, lowering the costs of LM training and inference, and offer representational advantages by segmenting text into more semantically cohesive units (Salehi et al., 2015; Otani et al., 2020; Hofmann et al., 2021)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 538, + 506, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 538, + 506, + 651 + ], + "spans": [ + { + "bbox": [ + 104, + 538, + 506, + 651 + ], + "type": "text", + "content": "In this work, we introduce a superword tokenization algorithm that produces a vocabulary of both subword and \"superword\" tokens, which we use to describe tokens bridging more than one word. Our method, SuperBPE, introduces a pretokenization curriculum to the popular byte-pair encoding (BPE) algorithm (Sennrich et al., 2016): whitespace pretokenization is initially used to enforce learning of subword tokens only (as done in conventional BPE), but it is disabled in a second stage, where the tokenizer transitions to learning superword tokens. Notably, SuperBPE tokenizers scale much better with vocabulary size: BPE quickly hits a point of diminishing returns and begins adding increasingly rare subwords to the vocabulary, while SuperBPE continues to discover common word sequences to treat as single tokens and improve encoding efficiency (see Figure 1)." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "text", + "content": "In our experiments, we pretrain English LMs at 8B scale from scratch. When fixing the model size, vocabulary size, and training compute—varying only the algorithm for learning the vocabulary—we find that models trained with SuperBPE tokenizers consistently and significantly improve over counterparts trained with a BPE tokenizer while also being " + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "inline_equation", + "content": "27\\%" + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "inline_equation", + "content": "33\\%" + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "text", + "content": " more efficient at inference time. Our best SuperBPE model achieves an average improvement of " + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "inline_equation", + "content": "+4.0\\%" + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "text", + "content": " over 30 downstream tasks, including " + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "inline_equation", + "content": "+8.2\\%" + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "text", + "content": " on MMLU, and wins on 25 of the 30 individual tasks (Table 1)." + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 301, + 750, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 750, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 301, + 750, + 309, + 760 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 506, + 160 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 506, + 160 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 506, + 160 + ], + "type": "text", + "content": "In analysis, we find that SuperBPE tokenizers produce segmentations that are more evenly distributed in difficulty. This makes sense from a qualitative linguistic analysis: SuperBPE tokens often correspond to multi-word expressions in English, i.e., word sequences that function as a single semantic unit (see Table 3 for examples). For instance, many prepositional phrases (e.g., by accident or in the long run) are essentially fixed and require memorization. The individual words in these expressions have very little possible variation in context, leading to very low-loss predictions under BPE models." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 164, + 504, + 209 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 164, + 504, + 209 + ], + "spans": [ + { + "bbox": [ + 104, + 164, + 504, + 209 + ], + "type": "text", + "content": "SuperBPE is a straightforward and local modification to tokenization, requiring no changes to the model architecture, training framework, or decoding strategy. Under the same training setup, SuperBPE provides a remarkable boost in both encoding efficiency and performance, yielding better language models overall." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 225, + 181, + 239 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 225, + 181, + 239 + ], + "spans": [ + { + "bbox": [ + 105, + 225, + 181, + 239 + ], + "type": "text", + "content": "2 SuperBPE" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 250, + 504, + 275 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 250, + 504, + 275 + ], + "spans": [ + { + "bbox": [ + 104, + 250, + 504, + 275 + ], + "type": "text", + "content": "We first explain the standard byte-pair encoding (BPE; Sennrich et al., 2016) tokenization algorithm (§2.1), and then introduce SuperBPE, which extends BPE to superwords (§2.2)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 286, + 222, + 298 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 286, + 222, + 298 + ], + "spans": [ + { + "bbox": [ + 105, + 286, + 222, + 298 + ], + "type": "text", + "content": "2.1 Background on BPE" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 307, + 504, + 333 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 307, + 504, + 333 + ], + "spans": [ + { + "bbox": [ + 104, + 307, + 504, + 333 + ], + "type": "text", + "content": "BPE is a tokenization algorithm that greedily learns a subword vocabulary given training data. The algorithm takes a sample of text and a target vocabulary size " + }, + { + "bbox": [ + 104, + 307, + 504, + 333 + ], + "type": "inline_equation", + "content": "T" + }, + { + "bbox": [ + 104, + 307, + 504, + 333 + ], + "type": "text", + "content": " as input." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 336, + 505, + 403 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 336, + 505, + 403 + ], + "spans": [ + { + "bbox": [ + 104, + 336, + 505, + 403 + ], + "type": "text", + "content": "The first step of BPE is pretokenization, which splits the text into chunks that limit the extent of tokenization; merges cannot bridge these chunks, so the final learned tokens are parts of these chunks. Canonically, pretokenization in BPE consists of splitting on whitespace so that common word sequences do not become a single token. This made sense given the historical context of Sennrich et al. (2016), which aimed to improve word-level tokenization by segmenting words into morphologically meaningful subwords." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 408, + 506, + 498 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 408, + 506, + 498 + ], + "spans": [ + { + "bbox": [ + 104, + 408, + 506, + 498 + ], + "type": "text", + "content": "After pretokenization, the iterative learning algorithm begins. Training text is first split into bytes; the starting vocabulary is the set of all bytes. 
Then, the frequencies of all pairs of neighboring tokens are recorded, and the most frequent pair is merged into a single, new token at every position in the text where it occurs. The newly merged token is added to the vocabulary. For instance, if the merge is (t, he), then all instances of the token sequence [t, he] will be replaced with the, which is added to the vocabulary. The token pair frequencies are then updated, and the next most frequent pair is again merged into a new token. This continues until the vocabulary reaches the target size " + }, + { + "bbox": [ + 104, + 408, + 506, + 498 + ], + "type": "inline_equation", + "content": "T" + }, + { + "bbox": [ + 104, + 408, + 506, + 498 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 510, + 236, + 522 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 510, + 236, + 522 + ], + "spans": [ + { + "bbox": [ + 105, + 510, + 236, + 522 + ], + "type": "text", + "content": "2.2 SuperBPE tokenization" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 531, + 506, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 531, + 506, + 665 + ], + "spans": [ + { + "bbox": [ + 104, + 531, + 506, + 665 + ], + "type": "text", + "content": "SuperBPE introduces a simple intervention in the pretokenization step, separating tokenizer training into two discrete phases, wherein the tokenizer (1) first learns subwords (by using pretokenization to prevent merges across whitespace) and then (2) learns superwords (by lifting this restriction). Stage 1 is equivalent to regular BPE training and continues up to a certain vocabulary size " + }, + { + "bbox": [ + 104, + 531, + 506, + 665 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 104, + 531, + 506, + 665 + ], + "type": "text", + "content": ", which we call the transition point " + }, + { + "bbox": [ + 104, + 531, + 506, + 665 + ], + "type": "inline_equation", + "content": "(t < T)" + }, + { + "bbox": [ + 104, + 531, + 506, + 665 + ], + "type": "text", + "content": ". In stage 2, tokenizer training resumes from the vocabulary learned thus far, but this time whitespace pretokenization is skipped. As a result, token pairs that bridge whitespace are considered, enabling superwords to be added to the vocabulary. Intuitively, we intend for our tokenizer to first learn base units of semantic meaning, then combine these units into common sequences for a much more efficient vocabulary. Note that " + }, + { + "bbox": [ + 104, + 531, + 506, + 665 + ], + "type": "inline_equation", + "content": "t = T" + }, + { + "bbox": [ + 104, + 531, + 506, + 665 + ], + "type": "text", + "content": " corresponds to BPE, and " + }, + { + "bbox": [ + 104, + 531, + 506, + 665 + ], + "type": "inline_equation", + "content": "t = 0" + }, + { + "bbox": [ + 104, + 531, + 506, + 665 + ], + "type": "text", + "content": " corresponds to a naive revision of BPE that foregoes whitespace pretokenization at any point in training." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 669, + 504, + 693 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 669, + 504, + 693 + ], + "spans": [ + { + "bbox": [ + 104, + 669, + 504, + 693 + ], + "type": "text", + "content": "We note that training tokenizers requires more system memory and CPU time when done without whitespace pretokenization (as in stage 2 of SuperBPE). 
This is because the training" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 116, + 700, + 436, + 711 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 116, + 700, + 436, + 711 + ], + "spans": [ + { + "bbox": [ + 116, + 700, + 436, + 711 + ], + "type": "text", + "content": "2The algorithm originated in 1994 in the field of data compression (Gage, 1994)." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 710, + 504, + 733 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 710, + 504, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 710, + 504, + 733 + ], + "type": "text", + "content": "Note that although the creation of a tokenizer is referred to as \"learning,\" there are no parameters involved in the case of BPE, and the algorithm is completely deterministic given the data." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 143, + 79, + 467, + 446 + ], + "blocks": [ + { + "bbox": [ + 143, + 79, + 467, + 446 + ], + "lines": [ + { + "bbox": [ + 143, + 79, + 467, + 446 + ], + "spans": [ + { + "bbox": [ + 143, + 79, + 467, + 446 + ], + "type": "table", + "html": "
<table>
<tr><td>Category</td><td>Task</td><td>BPE</td><td>SuperBPE</td><td>Δ</td></tr>
<tr><td>Knowledge</td><td>ARC-Easy (MC)</td><td>46.6</td><td>67.1</td><td>+20.5**</td></tr>
<tr><td></td><td>ARC-Challenge (MC)</td><td>35.1</td><td>50.6</td><td>+15.5**</td></tr>
<tr><td></td><td>Jeopardy (EM)</td><td>42.1</td><td>41.8</td><td>-0.3</td></tr>
<tr><td></td><td>MMLU (MC)</td><td>36.5</td><td>44.7</td><td>+8.2**</td></tr>
<tr><td></td><td>OpenbookQA (MC)</td><td>33.2</td><td>54.4</td><td>+21.2**</td></tr>
<tr><td></td><td>TriviaQA (EM)</td><td>60.6</td><td>61.3</td><td>+0.7</td></tr>
<tr><td></td><td>WikidataQA (EM)</td><td>69.7</td><td>70.9</td><td>+1.2*</td></tr>
<tr><td>Math & Reasoning</td><td>Arithmetic (EM)</td><td>54.8</td><td>59.3</td><td>+4.5**</td></tr>
<tr><td></td><td>GSM8K (EM)</td><td>6.4</td><td>6.7</td><td>+0.3</td></tr>
<tr><td></td><td>LSAT-AR (MC)</td><td>21.3</td><td>23.0</td><td>+1.7</td></tr>
<tr><td></td><td>Operators (EM)</td><td>35.5</td><td>33.6</td><td>-1.9</td></tr>
<tr><td></td><td>Repeat-Copy-Logic (EM)</td><td>3.1</td><td>6.2</td><td>+3.1</td></tr>
<tr><td>Coding</td><td>HumanEval (pass@10)</td><td>15.9</td><td>13.4</td><td>-2.5</td></tr>
<tr><td></td><td>MBPP (pass@10)</td><td>27.5</td><td>28.3</td><td>+0.8</td></tr>
<tr><td>Reading Comprehension</td><td>BoolQ (MC)</td><td>59.7</td><td>64.6</td><td>+4.9**</td></tr>
<tr><td></td><td>CoQA (EM)</td><td>12.6</td><td>13.2</td><td>+0.6</td></tr>
<tr><td></td><td>DROP (EM)</td><td>31.3</td><td>31.4</td><td>+0.1</td></tr>
<tr><td></td><td>HotpotQA (EM)</td><td>53.5</td><td>55.2</td><td>+1.7*</td></tr>
<tr><td></td><td>SQuAD (EM)</td><td>75.1</td><td>75.8</td><td>+0.7</td></tr>
<tr><td>Commonsense</td><td>CommonsenseQA (MC)</td><td>33.5</td><td>53.8</td><td>+20.3**</td></tr>
<tr><td></td><td>COPA (MC)</td><td>77.0</td><td>85.8</td><td>+8.8**</td></tr>
<tr><td></td><td>PIQA (MC)</td><td>55.2</td><td>59.8</td><td>+4.6*</td></tr>
<tr><td></td><td>Winograd (MC)</td><td>50.4</td><td>53.1</td><td>+2.7</td></tr>
<tr><td></td><td>Winogrande (MC)</td><td>47.3</td><td>52.6</td><td>+5.3*</td></tr>
<tr><td>Language Understanding</td><td>HellaSwag (MC)</td><td>29.7</td><td>33.7</td><td>+4.0**</td></tr>
<tr><td></td><td>LAMBADA (EM)</td><td>77.0</td><td>70.6</td><td>-6.4**</td></tr>
<tr><td></td><td>Language Identification (EM)</td><td>8.8</td><td>9.0</td><td>+0.2</td></tr>
<tr><td>String Manipulation</td><td>CS Algorithms (EM)</td><td>46.1</td><td>48.6</td><td>+2.5</td></tr>
<tr><td></td><td>CUTE (EM)</td><td>31.3</td><td>32.6</td><td>+1.3</td></tr>
<tr><td></td><td>Dyck-Languages (EM)</td><td>15.9</td><td>14.2</td><td>-1.7</td></tr>
<tr><td></td><td>Average</td><td>39.8</td><td>43.8</td><td>+4.0</td></tr>
</table>
", + "image_path": "5269389453ca1c2c61479f7ee22b95b41121fbe5b9825fc94ea68f8f9118b9be.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 454, + 506, + 533 + ], + "lines": [ + { + "bbox": [ + 104, + 454, + 506, + 533 + ], + "spans": [ + { + "bbox": [ + 104, + 454, + 506, + 533 + ], + "type": "text", + "content": "Table 1: Performance of BPE and SuperBPE models (with transition point " + }, + { + "bbox": [ + 104, + 454, + 506, + 533 + ], + "type": "inline_equation", + "content": "t = 180\\mathrm{k}" + }, + { + "bbox": [ + 104, + 454, + 506, + 533 + ], + "type": "text", + "content": ") on 30 downstream tasks. The two models are fixed in model parameters (8B), vocabulary size (200k), and training FLOPs (corresponding to ~330B tokens), differing only in their algorithm for learning the vocabulary. The SuperBPE model outperforms the baseline on 25 of 30 tasks and requires " + }, + { + "bbox": [ + 104, + 454, + 506, + 533 + ], + "type": "inline_equation", + "content": "27\\%" + }, + { + "bbox": [ + 104, + 454, + 506, + 533 + ], + "type": "text", + "content": " less compute at inference time. See Figure 3 for the moving task average during pretraining and §A.4 for further evaluation details. *" + }, + { + "bbox": [ + 104, + 454, + 506, + 533 + ], + "type": "inline_equation", + "content": "p < 0.05" + }, + { + "bbox": [ + 104, + 454, + 506, + 533 + ], + "type": "text", + "content": ", **" + }, + { + "bbox": [ + 104, + 454, + 506, + 533 + ], + "type": "inline_equation", + "content": "p < 0.005" + }, + { + "bbox": [ + 104, + 454, + 506, + 533 + ], + "type": "text", + "content": " under a McNemar test." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 104, + 562, + 506, + 641 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 562, + 506, + 641 + ], + "spans": [ + { + "bbox": [ + 104, + 562, + 506, + 641 + ], + "type": "text", + "content": "data is typically represented by a dictionary of \"words\" along with their counts. With whitespace pretokenization, the \"words\" are whitespace-separated chunks (e.g., common words) stored once along with a large count, conferring substantial savings in memory. Without whitespace pretokenization, the \"words\" are extremely long (e.g., entire training documents), leading to minimal dedduplication of the text and excessively large dictionaries. Fortunately, tokenizer training must be done only once; in our experiments, SuperBPE tokenizers train in a few hours on 100 CPUs, a negligible cost compared to LLM pretraining." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 662, + 221, + 676 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 662, + 221, + 676 + ], + "spans": [ + { + "bbox": [ + 105, + 662, + 221, + 676 + ], + "type": "text", + "content": "2.3 Encoding efficiency" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 687, + 505, + 734 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 687, + 505, + 734 + ], + "spans": [ + { + "bbox": [ + 104, + 687, + 505, + 734 + ], + "type": "text", + "content": "A tokenizer's encoding efficiency can be measured in bytes-per-token, i.e., how many UTF-8 bytes are encoded, on average, in each token over a large corpus of text (see calculation in Figure 1). We train a series of tokenizers on a 10GB subset of data from OLMO 2's pretraining corpus and evaluate encoding efficiency on a held-out subset." 
+ } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 81, + 506, + 159 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 81, + 506, + 159 + ], + "spans": [ + { + "bbox": [ + 104, + 81, + 506, + 159 + ], + "type": "text", + "content": "Shown in Figure 1, SuperBPE scales much better with vocabulary size than does BPE. BPE quickly plateaus around a vocabulary size of " + }, + { + "bbox": [ + 104, + 81, + 506, + 159 + ], + "type": "inline_equation", + "content": "\\sim 50\\mathrm{K}" + }, + { + "bbox": [ + 104, + 81, + 506, + 159 + ], + "type": "text", + "content": ", achieving 4.45 bytes-per-token at a vocabulary size of " + }, + { + "bbox": [ + 104, + 81, + 506, + 159 + ], + "type": "inline_equation", + "content": "200\\mathrm{k}" + }, + { + "bbox": [ + 104, + 81, + 506, + 159 + ], + "type": "text", + "content": ". In fact, even with infinite vocabulary size (namely, if every whitespace-delimited word were in the vocabulary), BPE cannot exceed 4.68 bytes-per-token, i.e., the average word length in the held-out subset. SuperBPE exceeds this upper bound with a mere " + }, + { + "bbox": [ + 104, + 81, + 506, + 159 + ], + "type": "inline_equation", + "content": "\\sim 12\\mathrm{k}" + }, + { + "bbox": [ + 104, + 81, + 506, + 159 + ], + "type": "text", + "content": " vocabulary size and reaches 5.55 bytes-per-token at " + }, + { + "bbox": [ + 104, + 81, + 506, + 159 + ], + "type": "inline_equation", + "content": "50\\mathrm{K}" + }, + { + "bbox": [ + 104, + 81, + 506, + 159 + ], + "type": "text", + "content": " and 6.63 at " + }, + { + "bbox": [ + 104, + 81, + 506, + 159 + ], + "type": "inline_equation", + "content": "200\\mathrm{k}" + }, + { + "bbox": [ + 104, + 81, + 506, + 159 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 164, + 339, + 287 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 164, + 339, + 287 + ], + "spans": [ + { + "bbox": [ + 104, + 164, + 339, + 287 + ], + "type": "text", + "content": "Surprisingly, SuperBPE is also more efficient than BPE with whitespace pretokenization completely disabled. Since BPE is a greedy algorithm, completely disabling whitespace pretokenization may cause it to make highly suboptimal choices early on. In particular, tokens in this setting often consist of the end of the previous word and start of the next word, as opposed to sequences of complete words. By keeping whitespace pretokenization on at the beginning, we can avoid suboptimal choices while still obtaining a tokenizer with superwords." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 291, + 339, + 359 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 291, + 339, + 359 + ], + "spans": [ + { + "bbox": [ + 104, + 291, + 339, + 359 + ], + "type": "text", + "content": "Figure 2 shows how SuperBPE's encoding efficiency depends on the choice of transition point " + }, + { + "bbox": [ + 104, + 291, + 339, + 359 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 104, + 291, + 339, + 359 + ], + "type": "text", + "content": ". The relationship is smooth, with " + }, + { + "bbox": [ + 104, + 291, + 339, + 359 + ], + "type": "inline_equation", + "content": "t = 80\\mathrm{k}" + }, + { + "bbox": [ + 104, + 291, + 339, + 359 + ], + "type": "text", + "content": " achieving the best encoding efficiency. However, we will see in our experiments that the optimal tokenizer for LM pretraining is not necessarily the most encoding-efficient." + } + ] + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 344, + 172, + 504, + 293 + ], + "blocks": [ + { + "bbox": [ + 344, + 172, + 504, + 293 + ], + "lines": [ + { + "bbox": [ + 344, + 172, + 504, + 293 + ], + "spans": [ + { + "bbox": [ + 344, + 172, + 504, + 293 + ], + "type": "image", + "image_path": "9c33cd3e1169158421e309ba6c1c3d751448572342e45693e32bbce74e5b0ca6.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 342, + 306, + 506, + 351 + ], + "lines": [ + { + "bbox": [ + 342, + 306, + 506, + 351 + ], + "spans": [ + { + "bbox": [ + 342, + 306, + 506, + 351 + ], + "type": "text", + "content": "Figure 2: Encoding efficiency varies smoothly with the choice of transition point " + }, + { + "bbox": [ + 342, + 306, + 506, + 351 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 342, + 306, + 506, + 351 + ], + "type": "text", + "content": " in SuperBPE's pretokenization curriculum." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 380, + 195, + 394 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 380, + 195, + 394 + ], + "spans": [ + { + "bbox": [ + 105, + 380, + 195, + 394 + ], + "type": "text", + "content": "3 Experiments" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 408, + 504, + 433 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 408, + 504, + 433 + ], + "spans": [ + { + "bbox": [ + 104, + 408, + 504, + 433 + ], + "type": "text", + "content": "In our main experiments, we pretrain models from scratch while fixing the total training FLOPs and vocabulary size, changing only the algorithm for learning the vocabulary." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 450, + 157, + 464 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 450, + 157, + 464 + ], + "spans": [ + { + "bbox": [ + 105, + 450, + 157, + 464 + ], + "type": "text", + "content": "3.1 Setup" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 473, + 506, + 588 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 473, + 506, + 588 + ], + "spans": [ + { + "bbox": [ + 104, + 473, + 506, + 588 + ], + "type": "text", + "content": "We first pretrain 8B models with BPE and SuperBPE tokenizers. 
We use the OLMO2 7B (OLMo et al., 2024) training configuration, including the model architecture, training hyperparameters, and pretraining corpus, but reduce the total number of training steps to correspond to " + }, + { + "bbox": [ + 104, + 473, + 506, + 588 + ], + "type": "inline_equation", + "content": "\\sim 330" + }, + { + "bbox": [ + 104, + 473, + 506, + 588 + ], + "type": "text", + "content": " tokens (compared to 4T). Following prior work (Pagnoni et al., 2024), we also fix the effective context size (measured in bytes) for each model. This prevents SuperBPE models from gaining an advantage by seeing more textual context for the same next-token prediction (we provide analysis on this in §B.1). Since more efficient models have a shorter context length in tokens, the training steps are adjusted accordingly to match the total train FLOPs at the end of training. Note that in this setting, a same-sized SuperBPE model uses fewer inference FLOPs than the BPE model." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 593, + 506, + 652 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 593, + 506, + 652 + ], + "spans": [ + { + "bbox": [ + 104, + 593, + 506, + 652 + ], + "type": "text", + "content": "We fix the vocabulary size of all tokenizers to 200,000 (in the same ballpark as, e.g., GEMMA at 250k [Google, 2024], GPT-4O at 200k, and Llama3 at 130k [Meta, 2024]).6 We consider three transition points for SuperBPE: " + }, + { + "bbox": [ + 104, + 593, + 506, + 652 + ], + "type": "inline_equation", + "content": "t = 80\\mathrm{k}" + }, + { + "bbox": [ + 104, + 593, + 506, + 652 + ], + "type": "text", + "content": ", which has the best encoding efficiency, and two later transitions, " + }, + { + "bbox": [ + 104, + 593, + 506, + 652 + ], + "type": "inline_equation", + "content": "t = 160\\mathrm{k}" + }, + { + "bbox": [ + 104, + 593, + 506, + 652 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 593, + 506, + 652 + ], + "type": "inline_equation", + "content": "t = 180\\mathrm{k}" + }, + { + "bbox": [ + 104, + 593, + 506, + 652 + ], + "type": "text", + "content": ". All tokenizers are trained on the same 10GB subset of OLMO2's pretraining mix. SA.1 provides further details about tokenizer training." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 668, + 505, + 689 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 668, + 505, + 689 + ], + "spans": [ + { + "bbox": [ + 104, + 668, + 505, + 689 + ], + "type": "inline_equation", + "content": "^{4}" + }, + { + "bbox": [ + 104, + 668, + 505, + 689 + ], + "type": "text", + "content": "OLMO2 7B has 7.30B parameters, while our 8B BPE and SuperBPE models have 8.12B parameters due to their increased vocabulary size." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 689, + 506, + 711 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 689, + 506, + 711 + ], + "spans": [ + { + "bbox": [ + 104, + 689, + 506, + 711 + ], + "type": "text", + "content": "In practice, models using our more efficient tokenizers could shift some or all of the \"saved\" context FLOPs to longer effective contexts instead of more training steps." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 710, + 504, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 710, + 504, + 732 + ], + "spans": [ + { + "bbox": [ + 104, + 710, + 504, + 732 + ], + "type": "text", + "content": "For 8B models, a 200k vocabulary size is close to the recommendation of Tao et al. (2024) based on primarily English data. We fix the vocabulary size to simplify comparisons between models." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 750, + 309, + 760 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 186, + 74, + 425, + 254 + ], + "blocks": [ + { + "bbox": [ + 186, + 74, + 425, + 254 + ], + "lines": [ + { + "bbox": [ + 186, + 74, + 425, + 254 + ], + "spans": [ + { + "bbox": [ + 186, + 74, + 425, + 254 + ], + "type": "image", + "image_path": "13b903da56b7375b8e6dc6f72255fc803e99efb2d92e9a3524c6cb69a93788f9.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 263, + 504, + 309 + ], + "lines": [ + { + "bbox": [ + 104, + 263, + 504, + 309 + ], + "spans": [ + { + "bbox": [ + 104, + 263, + 504, + 309 + ], + "type": "text", + "content": "Figure 3: Average task performance on 30 downstream tasks, evaluated at every 5000 steps in model pretraining. We see that SuperBPE models consistently outperform the baseline that uses a BPE tokenizer. All compared models share the same vocabulary size and train budget; " + }, + { + "bbox": [ + 104, + 263, + 504, + 309 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 104, + 263, + 504, + 309 + ], + "type": "text", + "content": " denotes the transition point in SuperBPE's pretokenization curriculum." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 328, + 504, + 363 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 328, + 504, + 363 + ], + "spans": [ + { + "bbox": [ + 104, + 328, + 504, + 363 + ], + "type": "text", + "content": "We also train a slightly larger 11B SuperBPE model with " + }, + { + "bbox": [ + 104, + 328, + 504, + 363 + ], + "type": "inline_equation", + "content": "t = 180\\mathrm{k}" + }, + { + "bbox": [ + 104, + 328, + 504, + 363 + ], + "type": "text", + "content": ", which approximately matches the 8B BPE baseline in total bytes of training data seen as well as both train and inference compute. See Table 2 for exact specifications for all runs." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 376, + 263, + 387 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 376, + 263, + 387 + ], + "spans": [ + { + "bbox": [ + 105, + 376, + 263, + 387 + ], + "type": "text", + "content": "3.2 Results on downstream tasks" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 396, + 506, + 431 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 396, + 506, + 431 + ], + "spans": [ + { + "bbox": [ + 104, + 396, + 506, + 431 + ], + "type": "text", + "content": "We evaluate SuperBPE on 30 benchmarks covering knowledge, math & reasoning, coding, reading comprehension, common sense, language understanding, and string manipulation. The full evaluation suite is shown in Table 1 and evaluation details are in SA.4." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 435, + 506, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 435, + 506, + 536 + ], + "spans": [ + { + "bbox": [ + 104, + 435, + 506, + 536 + ], + "type": "text", + "content": "Figure 3 shows the task average during pretraining. All SuperBPE models substantially outperform the BPE baseline at the end of training. The strongest 8B SuperBPE model, which has transition point " + }, + { + "bbox": [ + 104, + 435, + 506, + 536 + ], + "type": "inline_equation", + "content": "t = 180\\mathrm{k}" + }, + { + "bbox": [ + 104, + 435, + 506, + 536 + ], + "type": "text", + "content": " (the latest one we consider), outperforms the baseline by " + }, + { + "bbox": [ + 104, + 435, + 506, + 536 + ], + "type": "inline_equation", + "content": "4.0\\%" + }, + { + "bbox": [ + 104, + 435, + 506, + 536 + ], + "type": "text", + "content": " on average and wins on 25 of 30 individual tasks. Table 1 shows the per-task performance for this model (see §A.4 for results for the other models). The largest gains are on multiple choice tasks; when considering these alone, the performance improvement grows to " + }, + { + "bbox": [ + 104, + 435, + 506, + 536 + ], + "type": "inline_equation", + "content": "+9.7\\%" + }, + { + "bbox": [ + 104, + 435, + 506, + 536 + ], + "type": "text", + "content": ". The only task on which SuperBPE loses in a statistically significant way is LAMBADA; here, we observe that SuperBPE is actually ahead for the majority of training checkpoints, but accuracy dips at the end from " + }, + { + "bbox": [ + 104, + 435, + 506, + 536 + ], + "type": "inline_equation", + "content": "75.8\\%" + }, + { + "bbox": [ + 104, + 435, + 506, + 536 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 104, + 435, + 506, + 536 + ], + "type": "inline_equation", + "content": "70.6\\%" + }, + { + "bbox": [ + 104, + 435, + 506, + 536 + ], + "type": "text", + "content": " (see Figure 12)." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 540, + 506, + 598 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 540, + 506, + 598 + ], + "spans": [ + { + "bbox": [ + 104, + 540, + 506, + 598 + ], + "type": "text", + "content": "Notably, while the choice of transition point affects the performance of the resulting model, all reasonable choices are significantly stronger than the baseline. 
When using the most encoding-efficient transition point, i.e., " + }, + { + "bbox": [ + 104, + 540, + 506, + 598 + ], + "type": "inline_equation", + "content": "t = 80\\mathrm{k}" + }, + { + "bbox": [ + 104, + 540, + 506, + 598 + ], + "type": "text", + "content": ", we see a " + }, + { + "bbox": [ + 104, + 540, + 506, + 598 + ], + "type": "inline_equation", + "content": "+3.1\\%" + }, + { + "bbox": [ + 104, + 540, + 506, + 598 + ], + "type": "text", + "content": " task improvement over BPE and inference compute reduced by " + }, + { + "bbox": [ + 104, + 540, + 506, + 598 + ], + "type": "inline_equation", + "content": "35\\%" + }, + { + "bbox": [ + 104, + 540, + 506, + 598 + ], + "type": "text", + "content": ". Later transition points empirically cede some gains in encoding efficiency in exchange for further improvements in performance.[7]" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 613, + 174, + 627 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 613, + 174, + 627 + ], + "spans": [ + { + "bbox": [ + 105, + 613, + 174, + 627 + ], + "type": "text", + "content": "4 Analysis" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 639, + 222, + 651 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 639, + 222, + 651 + ], + "spans": [ + { + "bbox": [ + 105, + 639, + 222, + 651 + ], + "type": "text", + "content": "4.1 Language modeling" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 658, + 506, + 693 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 658, + 506, + 693 + ], + "spans": [ + { + "bbox": [ + 104, + 658, + 506, + 693 + ], + "type": "text", + "content": "Following prior work (Biderman et al., 2023; Xue et al., 2022; Yu et al., 2023; Wang et al., 2024), we evaluate language modeling performance using bits-per-byte (BPB), which normalizes the loss by the tokenizer's encoding efficiency to fairly compare models with different tokenizers." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 700, + 504, + 733 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 700, + 504, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 700, + 504, + 733 + ], + "type": "text", + "content": "7This finding adds to the ongoing debate about the relationship between tokenization compression and LM performance (Galle, 2019; Goldman et al., 2024; Schmidt et al., 2024), providing further evidence that higher compression does not necessarily improve performance." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 107, + 79, + 504, + 205 + ], + "blocks": [ + { + "bbox": [ + 107, + 79, + 504, + 205 + ], + "lines": [ + { + "bbox": [ + 107, + 79, + 504, + 205 + ], + "spans": [ + { + "bbox": [ + 107, + 79, + 504, + 205 + ], + "type": "table", + "html": "
SuperBPE transition pointBPE 8BSuperBPE 8BSuperBPE 11B
t = 80kt = 160kt = 180kt = 180k
Parameter count (billion)8.128.128.128.1211.30
Train steps76,543118,419112,722107,98277,525
Average context length (bytes)18,26218,27218,26318,26818,268
Vocabulary size200k200k200k200k200k
Context length (tokens)4,0962,7562,8843,0003,000
Encoding efficiency (bytes/token)4.466.636.336.096.09
Train compute (1021 FLOPs)17.217.217.217.217.2
Inference compute (109 FLOPs/byte)3.752.422.542.653.75
", + "image_path": "35633b046a0c8fcfd9159f25a1d19b38b878f444e9cf5672d853601061d6c040.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 213, + 506, + 288 + ], + "lines": [ + { + "bbox": [ + 104, + 213, + 506, + 288 + ], + "spans": [ + { + "bbox": [ + 104, + 213, + 506, + 288 + ], + "type": "text", + "content": "Table 2: Training setup for the models we compare. We fix the vocabulary size and effective context size (measured in bytes) for each model and adjust the total number of training steps accordingly so that each model has the same total train budget (in FLOPs). The 8B SuperBPE models match the 8B BPE model in train compute but use less inference compute; the 11B SuperBPE model matches the 8B baseline in both train and inference compute. Numbers fixed across model settings are highlighted in the same color." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 104, + 309, + 504, + 392 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 309, + 504, + 392 + ], + "spans": [ + { + "bbox": [ + 104, + 309, + 504, + 392 + ], + "type": "text", + "content": "This is necessary because longer tokens, on average, contain more information and therefore are more difficult to predict. Bits-per-byte is defined as " + }, + { + "bbox": [ + 104, + 309, + 504, + 392 + ], + "type": "inline_equation", + "content": "\\mathrm{BPB}(x) = \\mathcal{L}_{\\mathrm{CE}}(x) / (\\ln (2)\\cdot n_{\\mathrm{bytes}})" + }, + { + "bbox": [ + 104, + 309, + 504, + 392 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 104, + 309, + 504, + 392 + ], + "type": "inline_equation", + "content": "n_{\\mathrm{bytes}}" + }, + { + "bbox": [ + 104, + 309, + 504, + 392 + ], + "type": "text", + "content": " is the length of the text in bytes and " + }, + { + "bbox": [ + 104, + 309, + 504, + 392 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\mathrm{CE}}(x)" + }, + { + "bbox": [ + 104, + 309, + 504, + 392 + ], + "type": "text", + "content": " is the sum of the cross-entropy loss over the entire text. We find that BPE 8B, SuperBPE 8B (" + }, + { + "bbox": [ + 104, + 309, + 504, + 392 + ], + "type": "inline_equation", + "content": "t = 180\\mathrm{k}" + }, + { + "bbox": [ + 104, + 309, + 504, + 392 + ], + "type": "text", + "content": "), and SuperBPE 11B attain 0.7465, 0.7482, and 0.7445 BPB, respectively, at the end of training. Although these numbers do not differ appreciably, the ranking of models according to BPB and downstream task performance are not consistent." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 409, + 248, + 422 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 409, + 248, + 422 + ], + "spans": [ + { + "bbox": [ + 105, + 409, + 248, + 422 + ], + "type": "text", + "content": "4.2 Loss distribution analysis" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 431, + 317, + 523 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 431, + 317, + 523 + ], + "spans": [ + { + "bbox": [ + 104, + 431, + 317, + 523 + ], + "type": "text", + "content": "Why does the SuperBPE 8B model achieve slightly higher normalized language modeling loss (§4.1) than the baseline BPE model despite outperforming it on a wide variety of downstream tasks (§3.2)? 
To investigate this, we plot the distribution of per-token " + }, + { + "bbox": [ + 104, + 431, + 317, + 523 + ], + "type": "inline_equation", + "content": "\\mathrm{BPB}^9" + }, + { + "bbox": [ + 104, + 431, + 317, + 523 + ], + "type": "text", + "content": " for both models on data sampled from the pretraining data mixture in Figure 4." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 526, + 317, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 526, + 317, + 594 + ], + "spans": [ + { + "bbox": [ + 104, + 526, + 317, + 594 + ], + "type": "text", + "content": "Although the BPE and SuperBPE models have very similar BPB on average, we see that loss is distributed very differently over the training data. Compared to the baseline, the SuperBPE model makes fewer predictions with either very high or very low loss." + } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 323, + 394, + 504, + 525 + ], + "blocks": [ + { + "bbox": [ + 323, + 394, + 504, + 525 + ], + "lines": [ + { + "bbox": [ + 323, + 394, + 504, + 525 + ], + "spans": [ + { + "bbox": [ + 323, + 394, + 504, + 525 + ], + "type": "image", + "image_path": "1edddf8bae5540af35ba41c602b1db1040be2fc9f710e1df93c948f4103a12f1.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 322, + 533, + 506, + 601 + ], + "lines": [ + { + "bbox": [ + 322, + 533, + 506, + 601 + ], + "spans": [ + { + "bbox": [ + 322, + 533, + 506, + 601 + ], + "type": "text", + "content": "Figure 4: Histogram of per-token losses for both models from Table 1, measured over a large corpus of text. We observe that the SuperBPE model is a more consistent performer, making fewer predictions with very high or very low loss." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 609, + 506, + 676 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 609, + 506, + 676 + ], + "spans": [ + { + "bbox": [ + 104, + 609, + 506, + 676 + ], + "type": "text", + "content": "Low-loss tokens. We find that the reduction in low-loss tokens is attributable to a small set of extremely common words that the BPE model can easily predict, but are not available to SuperBPE as they are merged into larger superword tokens. For instance, the tokens _the, _of, and _to (the three most common words in the corpus) appear an order of magnitude more often under BPE than SuperBPE in the same corpus of text. When excluding these three token types alone, the BPB ranking reverses, with SuperBPE achieving 0.02 lower BPB than BPE." + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 689, + 504, + 721 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 689, + 504, + 721 + ], + "spans": [ + { + "bbox": [ + 104, + 689, + 504, + 721 + ], + "type": "text", + "content": "8Bits-per-byte of different models are considered comparable because total cross-entropy loss is a universal quantity representing the number of additional bits required to reconstruct the text given the model. This quantity is normalized by the number of bytes for easier interpretation." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 118, + 721, + 485, + 733 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 721, + 485, + 733 + ], + "spans": [ + { + "bbox": [ + 118, + 721, + 485, + 733 + ], + "type": "text", + "content": "The per-token BPB is the per-token loss (in bits) divided by the average encoding efficiency." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 301, + 750, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 301, + 750, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 301, + 750, + 309, + 760 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 106, + 80, + 511, + 232 + ], + "blocks": [ + { + "bbox": [ + 106, + 80, + 511, + 232 + ], + "lines": [ + { + "bbox": [ + 106, + 80, + 511, + 232 + ], + "spans": [ + { + "bbox": [ + 106, + 80, + 511, + 232 + ], + "type": "table", + "html": "
POS tag#Example Tokens
NN, IN906(case_of,_hint_of,_availability_of,_emphasis_on,_distinction_between
VB, DT566_reached_a,_discovered_the,_identify_the,_becomes_a,_issued_a
DT, NN498.this_month,_no Idea,_the_earth,_the_maximum,_this Stuff
IN, NN406_on_top,_by'accident,_in_effects,_for_lunch,_in_front
IN, DT379_on_the,_without_a,_alongside_the,_for_each
IN, DT, NN333_for_a_living,_by_the_way,_into_the_future,_in_the_midst
NN, IN, DT270_position_of_the,_component_of_the,_review_of_the,_example_of.this
IN, DT, JJ145_like_any_other,_with_each_other,_for_a_short,_of_the Entire
VB, IN, DT121_worked_as_a,_based_on_the,_combined_with_the,_turnedInto_a
IN, DT, NN, IN33_at_the_time_of,_in_the Presence_of,_in_the_middle_of,_in_a_way-that
,, CC, PRP, VB20,_and_it_was,_but_IThink,_but_I_have,_but_I_am
IN, DT, JJ, NN18_in_the_long_run,_on_the_other_hand,_for_the_first_time,_in_the_same_way
", + "image_path": "a46ee469fccab5f11901d641b1007db9c5048f40856d774f9b6c27b558ca1c29.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 240, + 506, + 275 + ], + "lines": [ + { + "bbox": [ + 104, + 240, + 506, + 275 + ], + "spans": [ + { + "bbox": [ + 104, + 240, + 506, + 275 + ], + "type": "text", + "content": "Table 3: The most common POS tags for tokens of 2, 3, and 4 words in SuperBPE, along with random example tokens for each tag. NN = noun, IN = preposition, VB = verb, DT = determiner, CC = conjunction, JJ = adjective, and PRP = pronoun." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 104, + 300, + 506, + 390 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 300, + 506, + 390 + ], + "spans": [ + { + "bbox": [ + 104, + 300, + 506, + 390 + ], + "type": "text", + "content": "The reduction in low-loss tokens also makes sense from a qualitative linguistic analysis of SuperBPE tokens. In Table 3, we show the most common POS tags among superword tokens in SuperBPE along with random examples for each tag. The tokens often capture common multi-word expressions (by accident, of course, for a living) that function as a single semantic unit (Schneider et al., 2014). As an example, prepositions (IN) figure prominently in superword tokens (e.g., depend on, distinction between) and require lexeme-specific memorization. The individual words in these fixed expressions are often semantically vacuous and have little possible variation in context, so they are easy to predict once memorized." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 408, + 506, + 497 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 408, + 506, + 497 + ], + "spans": [ + { + "bbox": [ + 104, + 408, + 506, + 497 + ], + "type": "text", + "content": "High-loss tokens. On the other hand, the much thinner tail of very high-loss tokens shows that, in the worst case, the SuperBPE model consistently puts more probability mass on the correct token. On average, we expect models to suffer high loss on tokens that are difficult to predict. This may explain why SuperBPE can outperform BPE on downstream tasks but have higher average BPB: the tokens scored in task evaluations tend to be among the hardest to predict. This is consistent with prior findings that models generally continue to improve in downstream tasks even as their overall loss plateaus due to improving on a narrow and difficult slice of the distribution (Liu et al., 2023)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 517, + 165, + 529 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 517, + 165, + 529 + ], + "spans": [ + { + "bbox": [ + 105, + 517, + 165, + 529 + ], + "type": "text", + "content": "4.3 Scaling" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 540, + 506, + 631 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 540, + 506, + 631 + ], + "spans": [ + { + "bbox": [ + 104, + 540, + 506, + 631 + ], + "type": "text", + "content": "To characterize the scaling behavior of SuperBPE, we also perform experiments at smaller scales." 
+ }, + { + "bbox": [ + 104, + 540, + 506, + 631 + ], + "type": "inline_equation", + "content": "^{10}" + }, + { + "bbox": [ + 104, + 540, + 506, + 631 + ], + "type": "text", + "content": " We train baseline models at 680M and 1.9B and scale the base number of training tokens proportionately to the number of parameters. We also perform runs at " + }, + { + "bbox": [ + 104, + 540, + 506, + 631 + ], + "type": "inline_equation", + "content": "0.5 \\times" + }, + { + "bbox": [ + 104, + 540, + 506, + 631 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 104, + 540, + 506, + 631 + ], + "type": "inline_equation", + "content": "2 \\times" + }, + { + "bbox": [ + 104, + 540, + 506, + 631 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 104, + 540, + 506, + 631 + ], + "type": "inline_equation", + "content": "4 \\times" + }, + { + "bbox": [ + 104, + 540, + 506, + 631 + ], + "type": "text", + "content": " the base number of tokens to observe the trend with respect to training duration. Then, we train two SuperBPE models that match the training budget of each baseline BPE model, one that matches the baseline in parameter count (analogous to SuperBPE 8B) and a larger model that matches in both train and inference compute (analogous to SuperBPE 11B). We focus on the " + }, + { + "bbox": [ + 104, + 540, + 506, + 631 + ], + "type": "inline_equation", + "content": "t = 180k" + }, + { + "bbox": [ + 104, + 540, + 506, + 631 + ], + "type": "text", + "content": " tokenizer to reduce complexity." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 635, + 506, + 693 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 635, + 506, + 693 + ], + "spans": [ + { + "bbox": [ + 104, + 635, + 506, + 693 + ], + "type": "text", + "content": "We plot BPB at the end of training for each run in Figure 5. In the under-trained regime, both SuperBPE models achieve lower BPB than the baseline. In the over-trained regime, the ranking from worst to best is SuperBPE (matching parameter count), BPE, and SuperBPE (matching inference compute). Additionally, the separation between the models increases with further over-training. We provide additional results and comments on scaling in §B.4." + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 710, + 504, + 733 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 710, + 504, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 710, + 504, + 733 + ], + "type": "text", + "content": "10For scaling, we focus on BPB since our downstream evaluations are too noisy for our small models to make meaningful comparisons." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 111, + 81, + 299, + 207 + ], + "blocks": [ + { + "bbox": [ + 111, + 81, + 299, + 207 + ], + "lines": [ + { + "bbox": [ + 111, + 81, + 299, + 207 + ], + "spans": [ + { + "bbox": [ + 111, + 81, + 299, + 207 + ], + "type": "image", + "image_path": "715318fd5aa7972126c82d9ddbd291b8e70a43f4bd0a178f500c66ec097d1351.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 166, + 214, + 247, + 225 + ], + "lines": [ + { + "bbox": [ + 166, + 214, + 247, + 225 + ], + "spans": [ + { + "bbox": [ + 166, + 214, + 247, + 225 + ], + "type": "text", + "content": "(a) 680M model size" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 307, + 81, + 498, + 208 + ], + "blocks": [ + { + "bbox": [ + 307, + 81, + 498, + 208 + ], + "lines": [ + { + "bbox": [ + 307, + 81, + 498, + 208 + ], + "spans": [ + { + "bbox": [ + 307, + 81, + 498, + 208 + ], + "type": "image", + "image_path": "aed3f91740cde20386c7e584c0bea96ad833070d2a9ec1843562d5fae5c712a1.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 365, + 214, + 441, + 225 + ], + "lines": [ + { + "bbox": [ + 365, + 214, + 441, + 225 + ], + "spans": [ + { + "bbox": [ + 365, + 214, + 441, + 225 + ], + "type": "text", + "content": "(b) 1.9B model size" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 104, + 234, + 506, + 302 + ], + "lines": [ + { + "bbox": [ + 104, + 234, + 506, + 302 + ], + "spans": [ + { + "bbox": [ + 104, + 234, + 506, + 302 + ], + "type": "text", + "content": "Figure 5: Scaling results for 680M and 1.9B baseline model sizes. Compared to the BPE baseline, SuperBPE with matching parameter count achieves lower BPB in the under-trained regime, while SuperBPE with matching inference compute achieves lower BPB than the baseline at every model size and every training budget tested. Note that BPB comparisons between BPE and SuperBPE models do not track downstream task accuracy due to differences in how BPE and SuperBPE models distribute loss over tokens (§4.2)." 
+ } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 322, + 201, + 335 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 322, + 201, + 335 + ], + "spans": [ + { + "bbox": [ + 105, + 322, + 201, + 335 + ], + "type": "text", + "content": "5 Related Work" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 347, + 506, + 469 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 347, + 506, + 469 + ], + "spans": [ + { + "bbox": [ + 104, + 347, + 506, + 469 + ], + "type": "text", + "content": "Tokenization beyond subwords Prior work has explored processing text at multiple levels of granularity (Lai et al., 2021; Zhang et al., 2021) or creating multi-word tokens through frequency-based identification of " + }, + { + "bbox": [ + 104, + 347, + 506, + 469 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 104, + 347, + 506, + 469 + ], + "type": "text", + "content": "-grams (Gee et al., 2023; Kumar & Thawani, 2022). However, these were explored in limited experimental contexts (mainly for machine translation) and had mixed effectiveness. Naively disabling pretokenization in BPE has been found to severely degrade model performance (Dagan et al., 2024; Schmidt et al., 2024; Kudo, 2018), although this approach may be more promising for unigram tokenization (Kudo & Richardson, 2018), as adopted by JURASSIC (Lieber et al., 2021) and BLOOMBERGGPT (Wu et al., 2023). In concurrent work, Huang et al. (2025) disentangle input and output vocabularies, expanding only the former to include " + }, + { + "bbox": [ + 104, + 347, + 506, + 469 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 104, + 347, + 506, + 469 + ], + "type": "text", + "content": "-gram tokens. Their method requires significant modifications of the LM input component and considers fixed length of " + }, + { + "bbox": [ + 104, + 347, + 506, + 469 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 104, + 347, + 506, + 469 + ], + "type": "text", + "content": "-grams." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 483, + 506, + 561 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 483, + 506, + 561 + ], + "spans": [ + { + "bbox": [ + 104, + 483, + 506, + 561 + ], + "type": "text", + "content": "Multi-token prediction Multi-token prediction (MTP) equips LMs with some extra parameters to predict multiple tokens in a single time step (Qi et al., 2020; Gloeckle et al., 2024) and was recently adopted by DEEPSEEK-V3, though the MTP module is discarded at inference-time. MTP's effectiveness corroborates that LMs are capable of predicting more than one subword in a forward pass. However, these approaches fix the number of tokens predicted in each time step and require modifications to the architecture and training objective. We note that the benefits of MTP and superword tokens may be orthogonal." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 574, + 506, + 664 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 574, + 506, + 664 + ], + "spans": [ + { + "bbox": [ + 104, + 574, + 506, + 664 + ], + "type": "text", + "content": "Tokenizer-free language modeling Some works have explored the possibility of completely removing tokenization from LMs and directly modeling text as a sequence of bytes (Clark et al., 2022; Xue et al., 2022; Wang et al., 2024). 
To overcome the increased compute requirement due to expanded sequence lengths, alternative architectures have been proposed that either segment bytes into fixed-length patches (Tay et al., 2022; Yu et al., 2023) or dynamically predict patch boundaries with sub-modules (Nawrot et al., 2023; Pagnoni et al., 2024; Ahia et al., 2024; Hwang et al., 2025); these dynamic patches have been qualitatively observed to span multiple words." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 676, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 676, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 676, + 506, + 733 + ], + "type": "text", + "content": "Tokenizer transfer Many methods have been proposed to adapt models after training to use a different tokenizer. These may rely on intervention during pretraining (Chen et al., 2023), continued training for a subset of layers (Marchisio et al., 2023), or leveraging self-distillation (Minixhofer et al., 2025), heuristic, (Minixhofer et al., 2022; Gee et al., 2022; Tran, 2020; Liu et al., 2024b; Dobler & De Melo, 2023), or hypernetwork-based (Minixhofer" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 308, + 759 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 308, + 759 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 308, + 759 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 506, + 127 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 506, + 127 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 506, + 127 + ], + "type": "text", + "content": "et al., 2024) initialization of a fresh embedding matrix, optionally followed by fine-tuning. These methods may be used to upgrade existing models to use SuperBPE tokenizers, with the goal of reducing inference cost while maintaining performance. We leave this direction to future work." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 142, + 189, + 155 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 142, + 189, + 155 + ], + "spans": [ + { + "bbox": [ + 105, + 142, + 189, + 155 + ], + "type": "text", + "content": "6 Conclusion" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 167, + 506, + 268 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 167, + 506, + 268 + ], + "spans": [ + { + "bbox": [ + 104, + 167, + 506, + 268 + ], + "type": "text", + "content": "Although tokenization lies at the foundation of language modeling, acting as the lens through which models view text, the algorithms in use have remained largely unchanged over the past decade. SuperBPE builds on the observation that tokens need not be limited to subwords, extending the BPE algorithm to superword tokens. When replacing subword BPE tokenizers with SuperBPE tokenizers in pretraining, we find that language models perform better over a large suite of downstream tasks, while also being substantially more efficient at inference time. 
These benefits are achieved without modifying the underlying model architecture, making SuperBPE a compelling alternative to BPE that seamlessly integrates with modern language model ecosystems." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 282, + 212, + 296 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 282, + 212, + 296 + ], + "spans": [ + { + "bbox": [ + 105, + 282, + 212, + 296 + ], + "type": "text", + "content": "Acknowledgments" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 307, + 507, + 429 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 307, + 507, + 429 + ], + "spans": [ + { + "bbox": [ + 104, + 307, + 507, + 429 + ], + "type": "text", + "content": "We would like to thank Alex Fang for pretraining advice, Vivek Ramanujan for helping debug our distributed training setup, Ian Magnusson for helpful comments on LM evaluation, and Zhaofeng Wu, Alexander Fang, and Xiaochuang Han for feedback on drafts. We are also grateful to Luca Soldaini, Goncalo Faria, Shrimai Prabhumoye, Matt Jordan, Artidoro Pagnoni, Mike Lewis, Doug Downey, Shannon Shen, and the UW NLP community for valuable conversations about this work. Both co-first authors, AL and JH, are supported by the NSF Graduate Research Fellowship Program. JH and SO are supported in part by the Microsoft Grant for Customer Experience Innovation. This work was partially funded by NSF DMS-2134012, NSF CCF-2019844, ONR N00014-24-1-2207, and NSF 2113530 as well as with NVIDIA resources provided through the National AI Research Resource Pilot (NAIRR)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 444, + 169, + 456 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 444, + 169, + 456 + ], + "spans": [ + { + "bbox": [ + 105, + 444, + 169, + 456 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 106, + 462, + 507, + 732 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 106, + 462, + 507, + 531 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 462, + 507, + 531 + ], + "spans": [ + { + "bbox": [ + 106, + 462, + 507, + 531 + ], + "type": "text", + "content": "Orevaoghene Ahia, Sachin Kumar, Hila Gonen, Jungo Kasai, David Mortensen, Noah Smith, and Yulia Tsvetkov. Do all languages cost the same? tokenization in the era of commercial language models. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 9904-9923, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.614. URL https://aclanthology.org/2023.emnlp-main.614." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 106, + 535, + 507, + 591 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 535, + 507, + 591 + ], + "spans": [ + { + "bbox": [ + 106, + 535, + 507, + 591 + ], + "type": "text", + "content": "Orevaoghene Ahia, Sachin Kumar, Hila Gonen, Valentin Hofmann, Tomasz Limisiewicz, Yulia Tsvetkov, and Noah A. Smith. MAGNET: Improving the multilingual fairness of language models with adaptive gradient-based tokenization. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=1e3MOwHSIX." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 106, + 597, + 505, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 597, + 505, + 632 + ], + "spans": [ + { + "bbox": [ + 106, + 597, + 505, + 632 + ], + "type": "text", + "content": "Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program synthesis with large language models, 2021. URL https://arxiv.org/abs/2108.07732." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 106, + 636, + 507, + 692 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 636, + 507, + 692 + ], + "spans": [ + { + "bbox": [ + 106, + 636, + 507, + 692 + ], + "type": "text", + "content": "Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. Pythia: A suite for analyzing large language models across training and scaling, 2023. URL https://arxiv.org/abs/2304.01373." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 106, + 698, + 507, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 698, + 507, + 732 + ], + "spans": [ + { + "bbox": [ + 106, + 698, + 507, + 732 + ], + "type": "text", + "content": "BIG-bench. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=uyTL5Bvosj." + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 506, + 732 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "spans": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "type": "text", + "content": "Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning about physical commonsense in natural language. In *Thirty-Fourth AAAI Conference on Artificial Intelligence*, 2020." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 122, + 506, + 181 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 122, + 506, + 181 + ], + "spans": [ + { + "bbox": [ + 105, + 122, + 506, + 181 + ], + "type": "text", + "content": "Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. Large language models in machine translation. In Jason Eisner (ed.), Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pp. 
858-867, Prague, Czech Republic, June 2007. Association for Computational Linguistics. URL https://aclanthology.org/D07-1090/." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 186, + 506, + 266 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 186, + 506, + 266 + ], + "spans": [ + { + "bbox": [ + 105, + 186, + 506, + 266 + ], + "type": "text", + "content": "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. 2020." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 273, + 506, + 352 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 273, + 506, + 352 + ], + "spans": [ + { + "bbox": [ + 105, + 273, + 506, + 352 + ], + "type": "text", + "content": "Anthony Chen, Pallavi Gudipati, Shayne Longpre, Xiao Ling, and Sameer Singh. Evaluating entity disambiguation and the role of popularity in retrieval-based NLP. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4472-4485, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.345. URL https://aclanthology.org/2021.acl-long.345." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 358, + 506, + 394 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 358, + 506, + 394 + ], + "spans": [ + { + "bbox": [ + 105, + 358, + 506, + 394 + ], + "type": "text", + "content": "Y Chen, K Marchisio, R Raileanu, DI Adelani, P Stenetorp, S Riedel, and M Artetxe. Improving language plasticity via pretraining with active forgetting. In Advances in Neural Information Processing Systems. NeurIPS, 2023." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 400, + 506, + 446 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 400, + 506, + 446 + ], + "spans": [ + { + "bbox": [ + 105, + 400, + 506, + 446 + ], + "type": "text", + "content": "Kenneth Church. How many multiword expressions do people know? In Proceedings of the Workshop on Multiword Expressions: From Parsing and Generation to the Real World, pp. 137-144, Portland, Oregon, USA, 2011. Association for Computational Linguistics. URL https://aclanthology.org/W11-0823/." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 453, + 506, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 453, + 506, + 533 + ], + "spans": [ + { + "bbox": [ + 105, + 453, + 506, + 533 + ], + "type": "text", + "content": "Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 
2924–2936, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1300. URL https://aclanthology.org/N19-1300." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 539, + 506, + 586 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 539, + 506, + 586 + ], + "spans": [ + { + "bbox": [ + 105, + 539, + 506, + 586 + ], + "type": "text", + "content": "Jonathan H. Clark, Dan Garrette, Iulia Turc, and John Wieting. Canine: Pre-training an efficient tokenization-free encoder for language representation. Transactions of the Association for Computational Linguistics, 10:73-91, 2022. doi: 10.1162/tacl_a_00448. URL https://aclanthology.org/2022.tacl-1.5." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 591, + 506, + 628 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 591, + 506, + 628 + ], + "spans": [ + { + "bbox": [ + 105, + 591, + 506, + 628 + ], + "type": "text", + "content": "Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge, 2018. URL https://arxiv.org/abs/1803.05457." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 633, + 506, + 680 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 633, + 506, + 680 + ], + "spans": [ + { + "bbox": [ + 105, + 633, + 506, + 680 + ], + "type": "text", + "content": "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems, 2021. URL https://arxiv.org/abs/2110.14168." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 686, + 506, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 686, + 506, + 732 + ], + "spans": [ + { + "bbox": [ + 105, + 686, + 506, + 732 + ], + "type": "text", + "content": "Pablo Contreras Kallens and Morten H. Christiansen. Models of language and multiword expressions. Frontiers in Artificial Intelligence, 5, 2022. doi: 10.3389/frai.2022.781962. URL https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2022.781962." + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 506, + 733 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 107, + 81, + 506, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 81, + 506, + 127 + ], + "spans": [ + { + "bbox": [ + 107, + 81, + 506, + 127 + ], + "type": "text", + "content": "Gautier Dagan, Gabriel Synnaeve, and Baptiste Rozière. 
Getting the most out of your tokenizer for pre-training and domain adaptation. In Proceedings of the 41st International Conference on Machine Learning, ICML'24. JMLR.org, 2024. URL https://dl.acm.org/doi/10.5555/3692070.3692457." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 133, + 506, + 156 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 133, + 506, + 156 + ], + "spans": [ + { + "bbox": [ + 107, + 133, + 506, + 156 + ], + "type": "text", + "content": "DeepSeek-AI. Deepseek-v3 technical report, 2025. URL https://arxiv.org/abs/2412.19437." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 163, + 506, + 199 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 163, + 506, + 199 + ], + "spans": [ + { + "bbox": [ + 107, + 163, + 506, + 199 + ], + "type": "text", + "content": "Konstantin Dobler and Gerard De Melo. Focus: Effective embedding initialization for monolingual specialization of multilingual models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 13440-13454, 2023." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 107, + 205, + 506, + 283 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 205, + 506, + 283 + ], + "spans": [ + { + "bbox": [ + 107, + 205, + 506, + 283 + ], + "type": "text", + "content": "Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2368-2378, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1246. URL https://aclanthology.org/N19-1246." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 289, + 506, + 357 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 289, + 506, + 357 + ], + "spans": [ + { + "bbox": [ + 107, + 289, + 506, + 357 + ], + "type": "text", + "content": "Lukas Edman, Helmut Schmid, and Alexander Fraser. CUTE: Measuring LLMs' understanding of their tokens. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 3017-3026, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.177. URL https://aclanthology.org/2024.emnlp-main.177." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 364, + 506, + 387 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 364, + 506, + 387 + ], + "spans": [ + { + "bbox": [ + 107, + 364, + 506, + 387 + ], + "type": "text", + "content": "Philip Gage. A new algorithm for data compression. The C Users Journal archive, 12:23-38, 1994. URL https://api-semanticscholar.org/CorpusID:59804030." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 393, + 506, + 460 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 393, + 506, + 460 + ], + "spans": [ + { + "bbox": [ + 107, + 393, + 506, + 460 + ], + "type": "text", + "content": "Matthias Galle. Investigating the effectiveness of BPE: The power of shorter sequences. 
In Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun Wan (eds.), Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 1375-1381, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1141. URL https://aclanthology.org/D19-1141." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 468, + 506, + 514 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 468, + 506, + 514 + ], + "spans": [ + { + "bbox": [ + 107, + 468, + 506, + 514 + ], + "type": "text", + "content": "Leonidas Gee, Andrea Zugarini, Leonardo Rigutini, Paolo Torroni, et al. Fast vocabulary transfer for language model compression. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track, pp. 409-416. Association for Computational Linguistics (ACL), 2022." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 107, + 520, + 506, + 586 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 520, + 506, + 586 + ], + "spans": [ + { + "bbox": [ + 107, + 520, + 506, + 586 + ], + "type": "text", + "content": "Leonidas Gee, Leonardo Rigutini, Marco Ernandes, and Andrea Zugarini. Multi-word tokenization for sequence compression. In Mingxuan Wang and Imed Zitouni (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track, pp. 612-621, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-industry.58. URL https://aclanthology.org/2023.emnlp-industry.58." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 107, + 594, + 506, + 639 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 594, + 506, + 639 + ], + "spans": [ + { + "bbox": [ + 107, + 594, + 506, + 639 + ], + "type": "text", + "content": "Fabian Gloeckle, Badr Youbi Idrissi, Baptiste Roziere, David Lopez-Paz, and Gabriel Synnaeve. Better & faster large language models via multi-token prediction. In *Forty-first International Conference on Machine Learning*, 2024. URL https://openreview.net/forum?id=pEWAcejiU2." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 107, + 646, + 506, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 646, + 506, + 713 + ], + "spans": [ + { + "bbox": [ + 107, + 646, + 506, + 713 + ], + "type": "text", + "content": "Omer Goldman, Avi Caciularu, Matan Eyal, Kris Cao, Idan Szpektor, and Reut Tsarfaty. Unpacking tokenization: Evaluating text compression and its correlation with model performance. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Findings of the Association for Computational Linguistics: ACL 2024, pp. 2274-2286, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024-findings-acl.134. URL https://aclanthology.org/2024-findings-acl.134." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 719, + 458, + 733 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 719, + 458, + 733 + ], + "spans": [ + { + "bbox": [ + 105, + 719, + 458, + 733 + ], + "type": "text", + "content": "Google. Gemma: Open models based on gemini research and technology, 2024." 
+ } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 507, + 732 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 105, + 81, + 507, + 128 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 81, + 507, + 128 + ], + "spans": [ + { + "bbox": [ + 105, + 81, + 507, + 128 + ], + "type": "text", + "content": "Jonathan Hayase, Alisa Liu, Yejin Choi, Sewoong Oh, and Noah A. Smith. Data mixture inference: What do BPE tokenizers reveal about their training data? In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=EHXyeImux0." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 134, + 505, + 181 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 134, + 505, + 181 + ], + "spans": [ + { + "bbox": [ + 105, + 134, + 505, + 181 + ], + "type": "text", + "content": "Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=d7KBjmI3GmQ." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 188, + 505, + 235 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 188, + 505, + 235 + ], + "spans": [ + { + "bbox": [ + 105, + 188, + 505, + 235 + ], + "type": "text", + "content": "Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. In Proceedings of the 36th International Conference on Neural Information Processing Systems, pp. 30016-30030, 2022." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 240, + 507, + 320 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 240, + 507, + 320 + ], + "spans": [ + { + "bbox": [ + 105, + 240, + 507, + 320 + ], + "type": "text", + "content": "Valentin Hofmann, Janet Pierrehumbert, and Hinrich Schütze. Superbizarre is not superb: Derivational morphology improves BERT's interpretation of complex words. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 3594-3608, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.279. URL https://aclanthology.org/2021.acl-long.279." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 327, + 505, + 361 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 327, + 505, + 361 + ], + "spans": [ + { + "bbox": [ + 105, + 327, + 505, + 361 + ], + "type": "text", + "content": "Hongzhi Huang, Defa Zhu, Banggu Wu, Yutao Zeng, Ya Wang, Qiyang Min, and Xun Zhou. Over-tokenized transformer: Vocabulary is generally worth scaling, 2025. URL https://arxiv.org/abs/2501.16975." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 369, + 506, + 393 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 369, + 506, + 393 + ], + "spans": [ + { + "bbox": [ + 105, + 369, + 506, + 393 + ], + "type": "text", + "content": "Sukjun Hwang, Brandon Wang, and Albert Gu. Dynamic chunking for end-to-end hierarchical sequence modeling, 2025. URL https://arxiv.org/abs/2507.07955." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 399, + 506, + 468 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 399, + 506, + 468 + ], + "spans": [ + { + "bbox": [ + 105, + 399, + 506, + 468 + ], + "type": "text", + "content": "Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Regina Barzilay and Min-Yen Kan (eds.), Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1601-1611, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1147. URL https://aclanthology.org/P17-1147." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 475, + 506, + 510 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 475, + 506, + 510 + ], + "spans": [ + { + "bbox": [ + 105, + 475, + 506, + 510 + ], + "type": "text", + "content": "Guy Kaplan, Matanel Oren, Yuval Reif, and Roy Schwartz. From tokens to words: On the inner lexicon of LLMs. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=328vch6tRs." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 517, + 505, + 552 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 517, + 505, + 552 + ], + "spans": [ + { + "bbox": [ + 105, + 517, + 505, + 552 + ], + "type": "text", + "content": "Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 559, + 506, + 626 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 559, + 506, + 626 + ], + "spans": [ + { + "bbox": [ + 105, + 559, + 506, + 626 + ], + "type": "text", + "content": "Pride Kavumba, Naoya Inoue, Benjamin Heinzerling, Keshav Singh, Paul Reisert, and Kentaro Inui. When choosing plausible alternatives, clever hans can be clever. In Simon Ostermann, Sheng Zhang, Michael Roth, and Peter Clark (eds.), Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing, pp. 33-42, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-6004. URL https://aclanthology.org/D19-6004." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 634, + 505, + 657 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 634, + 505, + 657 + ], + "spans": [ + { + "bbox": [ + 105, + 634, + 505, + 657 + ], + "type": "text", + "content": "Taku Kudo. Sentencepiece experiments. https://github.com/google/sentencepiece/blob/master/doc/experiments.md, 2018." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 665, + 506, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 665, + 506, + 732 + ], + "spans": [ + { + "bbox": [ + 105, + 665, + 506, + 732 + ], + "type": "text", + "content": "Taku Kudo and John Richardson. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Eduardo Blanco and Wei Lu (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 66-71, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-2012. URL https://aclanthology.org/D18-2012." + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 506, + 732 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 107, + 81, + 506, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 81, + 506, + 149 + ], + "spans": [ + { + "bbox": [ + 107, + 81, + 506, + 149 + ], + "type": "text", + "content": "Dipesh Kumar and Avijit Thawani. BPE beyond word boundary: How NOT to use multi word expressions in neural machine translation. In Shabnam Tafreshi, João Sedoc, Anna Rogers, Aleksandr Drozd, Anna Rumshisky, and Arjun Akula (eds.), Proceedings of the Third Workshop on Insights from Negative Results in NLP, pp. 172-179, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.insights-1.24. URL https://aclanthology.org/2022.insights-1.24." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 156, + 505, + 179 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 156, + 505, + 179 + ], + "spans": [ + { + "bbox": [ + 107, + 156, + 505, + 179 + ], + "type": "text", + "content": "Vedang Lad, Wes Gurnee, and Max Tegmark. The remarkable robustness of llms: Stages of inference?, 2024. URL https://arxiv.org/abs/2406.19384." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 186, + 506, + 276 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 186, + 506, + 276 + ], + "spans": [ + { + "bbox": [ + 107, + 186, + 506, + 276 + ], + "type": "text", + "content": "Yuxuan Lai, Yijia Liu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 
Lattice-BERT: Leveraging multi-granularity representations in Chinese pre-trained language models. In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou (eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1716-1731, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.137. URL https://aclanthology.org/2021.nacl-main.137." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 284, + 506, + 307 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 284, + 506, + 307 + ], + "spans": [ + { + "bbox": [ + 105, + 284, + 506, + 307 + ], + "type": "text", + "content": "Sander Land. A short introduction to pre-tokenization weirdness, 2024. URL https://tokencontributions.substack.com/p/a-short-introduction-to-pre-tokenization." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 315, + 506, + 381 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 315, + 506, + 381 + ], + "spans": [ + { + "bbox": [ + 107, + 315, + 506, + 381 + ], + "type": "text", + "content": "Sander Land and Max Bartolo. Fishing for magikarp: Automatically detecting under-trained tokens in large language models. In Yaser Al-Onaizan, Mohit Bansal, and YunNung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 11631-11646, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.649. URL https://aclanthology.org/2024.emnlp-main.649." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 389, + 506, + 423 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 389, + 506, + 423 + ], + "spans": [ + { + "bbox": [ + 107, + 389, + 506, + 423 + ], + "type": "text", + "content": "Hector J. Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge. In Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning, pp. 552-561. AAAI Press, 2012." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 430, + 506, + 573 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 430, + 506, + 573 + ], + "spans": [ + { + "bbox": [ + 107, + 430, + 506, + 573 + ], + "type": "text", + "content": "Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Gadre, Hritik Bansal, Etash Guha, Sedrick Keh, Kushal Arora, Saurabh Garg, Rui Xin, Niklas Muennighoff, Reinhard Heckel, Jean Mercat, Mayee Chen, Suchin Gururangan, Mitchell Wortsman, Alon Albalak, Yonatan Bitton, Marianna Nezhurina, Amro Abbas, Cheng-Yu Hsieh, Dhruba Ghosh, Josh Gardner, Maciej Kilian, Hanlin Zhang, Rulin Shao, Sarah Pratt, Sunny Sanyal, Gabriel Ilharco, Giannis Daras, Kalyani Marathe, Aaron Gokaslan, Jieyu Zhang, Khyathi Chandu, Thao Nguyen, Igor Vasiljevic, Sham Kakade, Shuran Song, Sujay Sanghavi, Fartash Faghri, Sewoong Oh, Luke Zettlemoyer, Kyle Lo, Alaaeldin El-Nouby, Hadi Pouransari, Alexander Toshev, Stephanie Wang, Dirk Groeneveld, Luca Soldaini, Pang Wei Koh, Jenia Jitsev, Thomas Kollar, Alexandros G. Dimakis, Yair Carmon, Achal Dave, Ludwig Schmidt, and Vaishaal Shankar. Datacomp-lm: In search of the next generation of training sets for language models, 2024. URL https://arxiv.org/abs/2406.11794." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 582, + 506, + 616 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 582, + 506, + 616 + ], + "spans": [ + { + "bbox": [ + 107, + 582, + 506, + 616 + ], + "type": "text", + "content": "Opher Lieber, Or Sharir, Barak Lenz, and Yoav Shoham. Jurassic-1: Technical details and evaluation, 2021. URL https:// uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 107, + 624, + 506, + 690 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 624, + 506, + 690 + ], + "spans": [ + { + "bbox": [ + 107, + 624, + 506, + 690 + ], + "type": "text", + "content": "Hong Liu, Sang Michael Xie, Zhiyuan Li, and Tengyu Ma. Same pre-training loss, better downstream: Implicit bias matters for language models. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 22188-22214. PMLR, 23-29 Jul 2023. URL https://proceedings.mlr.press/v202/liu23ao.html." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 107, + 698, + 506, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 698, + 506, + 732 + ], + "spans": [ + { + "bbox": [ + 107, + 698, + 506, + 732 + ], + "type": "text", + "content": "Jiacheng Liu, Sewon Min, Luke Zettlemoyer, Yejin Choi, and Hannaneh Hajishirzi. Infini-gram: Scaling unbounded n-gram language models to a trillion tokens. In First Conference on Language Modeling, 2024a. URL https://openreview.net/forum?id=u2vAyMeLm." + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 506, + 732 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 107, + 81, + 506, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 81, + 506, + 127 + ], + "spans": [ + { + "bbox": [ + 107, + 81, + 506, + 127 + ], + "type": "text", + "content": "Yihong Liu, Peiqin Lin, Mingyang Wang, and Hinrich Schütze. Ofa: A framework of initializing unseen subword embeddings for efficient large-scale multilingual continued pretraining. In Findings of the Association for Computational Linguistics: NAACL 2024, pp. 1067-1097, 2024b." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 133, + 506, + 167 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 133, + 506, + 167 + ], + "spans": [ + { + "bbox": [ + 105, + 133, + 506, + 167 + ], + "type": "text", + "content": "Scott Lundberg. The art of prompt design: Prompt boundaries and token healing, 2023. 
URL https://medium.com/towards-data-science/the-art-of-prompt-design-prompt-boundaries-and-token-healing-3b2448b0be38." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 174, + 506, + 209 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 174, + 506, + 209 + ], + "spans": [ + { + "bbox": [ + 105, + 174, + 506, + 209 + ], + "type": "text", + "content": "Kelly Marchisio, Patrick Lewis, Yihong Chen, and Mikel Artetxe. Mini-model adaptation: Efficiently extending pretrained models to new languages via aligned shallow training. In The 61st Annual Meeting Of The Association For Computational Linguistics, 2023." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 215, + 506, + 249 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 215, + 506, + 249 + ], + "spans": [ + { + "bbox": [ + 105, + 215, + 506, + 249 + ], + "type": "text", + "content": "Haspelmath Martin. The indeterminacy of word segmentation and the nature of morphology and syntax. Folia Linguistica, 51(s1000):31-80, 2017. doi: doi:10.1515/flin-2017-1005. URL https://doi.org/10.1515/flin-2017-1005." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 255, + 506, + 311 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 255, + 506, + 311 + ], + "spans": [ + { + "bbox": [ + 105, + 255, + 506, + 311 + ], + "type": "text", + "content": "Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in gpt. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 17359-17372. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/6f1d43d5a82a37e89b0665b33bf3a182-Paper-Conference.pdf." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 318, + 473, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 318, + 473, + 330 + ], + "spans": [ + { + "bbox": [ + 105, + 318, + 473, + 330 + ], + "type": "text", + "content": "Meta. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 336, + 506, + 381 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 336, + 506, + 381 + ], + "spans": [ + { + "bbox": [ + 105, + 336, + 506, + 381 + ], + "type": "text", + "content": "Sabrina J. Mielke, Zaid Alyafeai, Elizabeth Salesky, Colin Raffel, Manan Dey, Matthias Galle, Arun Raja, Chenglei Si, Wilson Y. Lee, Benoit Sagot, and Samson Tan. Between words and characters: A brief history of open-vocabulary modeling and tokenization in nlp, 2021. URL https://arxiv.org/abs/2112.10508." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 388, + 506, + 455 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 388, + 506, + 455 + ], + "spans": [ + { + "bbox": [ + 105, + 388, + 506, + 455 + ], + "type": "text", + "content": "Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2381-2391, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1260. URL https://aclanthology.org/D18-1260." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 462, + 506, + 507 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 462, + 506, + 507 + ], + "spans": [ + { + "bbox": [ + 105, + 462, + 506, + 507 + ], + "type": "text", + "content": "Benjamin Minixhofer, Fabian Paischer, and Navid Rekabsaz. Wechsel: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 3992–4006, 2022." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 514, + 504, + 537 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 514, + 504, + 537 + ], + "spans": [ + { + "bbox": [ + 105, + 514, + 504, + 537 + ], + "type": "text", + "content": "Benjamin Minixhofer, Edoardo Ponti, and Ivan Vulic. Zero-shot tokenizer transfer. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 543, + 504, + 567 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 543, + 504, + 567 + ], + "spans": [ + { + "bbox": [ + 105, + 543, + 504, + 567 + ], + "type": "text", + "content": "Benjamin Minixhofer, Ivan Vulic, and Edoardo Maria Ponti. Universal cross-tokenizer distillation via approximate likelihood matching. arXiv preprint arXiv:2503.20083, 2025." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 573, + 506, + 639 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 573, + 506, + 639 + ], + "spans": [ + { + "bbox": [ + 105, + 573, + 506, + 639 + ], + "type": "text", + "content": "Piotr Nawrot, Jan Chorowski, Adrian Lancucki, and Edoardo Maria Ponti. Efficient transformers with dynamic token pooling. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 6403-6417, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.353. URL https://aclanthology.org/2023.acl-long.353." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 646, + 506, + 670 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 646, + 506, + 670 + ], + "spans": [ + { + "bbox": [ + 105, + 646, + 506, + 670 + ], + "type": "text", + "content": "Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. Investigating the limitations of transformers with simple arithmetic tasks, 2021. URL https://arxiv.org/abs/2102.13019." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 676, + 506, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 676, + 506, + 732 + ], + "spans": [ + { + "bbox": [ + 105, + 676, + 506, + 732 + ], + "type": "text", + "content": "Team OLMo, Pete Walsh, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Shane Arora, Akshita Bhagia, Yuling Gu, Shengyi Huang, Matt Jordan, Nathan Lambert, Dustin Schwenk, Oyvind Tafjord, Taira Anderson, David Atkinson, Faeze Brahman, Christopher Clark, Pradeep Dasigi, Nouha Dziri, Michal Guerquin, Hamish Ivison, Pang Wei Koh, Jiacheng Liu, Saumya Malik, William Merrill, Lester James V. 
Miranda, Jacob Morrison," + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 507, + 732 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 115, + 81, + 506, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 81, + 506, + 127 + ], + "spans": [ + { + "bbox": [ + 115, + 81, + 506, + 127 + ], + "type": "text", + "content": "Tyler Murray, Crystal Nam, Valentina Pyatkin, Aman Rangapur, Michael Schmitz, Sam Skjonsberg, David Wadden, Christopher Wilhelm, Michael Wilson, Luke Zettlemoyer, Ali Farhadi, Noah A. Smith, and Hannaneh Hajishirzi. 2 olmo 2 furious, 2024. URL https://arxiv.org/abs/2501.00656." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 133, + 457, + 146 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 133, + 457, + 146 + ], + "spans": [ + { + "bbox": [ + 105, + 133, + 457, + 146 + ], + "type": "text", + "content": "OpenAI. Hello GPT-4o, 2024. URL https://openai.com/index/hello-gpt-4o/." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 106, + 152, + 507, + 219 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 152, + 507, + 219 + ], + "spans": [ + { + "bbox": [ + 106, + 152, + 507, + 219 + ], + "type": "text", + "content": "Naoki Otani, Satoru Ozaki, Xingyuan Zhao, Yucen Li, Micaelah St Johns, and Lori Levin. Pre-tokenization of multi-word expressions in cross-lingual word embeddings. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4451-4464, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.360. URL https://aclanthology.org/2020.emnlp-main.360." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 106, + 224, + 506, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 224, + 506, + 270 + ], + "spans": [ + { + "bbox": [ + 106, + 224, + 506, + 270 + ], + "type": "text", + "content": "Artidoro Pagnoni, Ram Pasunuru, Pedro Rodriguez, John Nguyen, Benjamin Muller, Margaret Li, Chunting Zhou, Lili Yu, Jason Weston, Luke Zettlemoyer, Gargi Ghosh, Mike Lewis, Ari Holtzman, and Srinivasan Iyer. Byte latent transformer: Patches scale better than tokens, 2024. URL https://arxiv.org/abs/2412.09871." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 106, + 276, + 506, + 354 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 276, + 506, + 354 + ], + "spans": [ + { + "bbox": [ + 106, + 276, + 506, + 354 + ], + "type": "text", + "content": "Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. 
The LAMBADA dataset: Word prediction requiring a broad discourse context. In Katrin Erk and Noah A. Smith (eds.), Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1525-1534, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1144. URL https://aclanthology.org/P16-1144." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 360, + 506, + 384 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 360, + 506, + 384 + ], + "spans": [ + { + "bbox": [ + 105, + 360, + 506, + 384 + ], + "type": "text", + "content": "Jackson Petty, Sjoerd van Steenkiste, Fei Sha, Ishita Dasgupta, Dan Garrette, and Tal Linzen. The impact of depth and width on transformer language model generalization. 2023." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 106, + 390, + 506, + 423 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 390, + 506, + 423 + ], + "spans": [ + { + "bbox": [ + 106, + 390, + 506, + 423 + ], + "type": "text", + "content": "Buu Phan, Marton Havasi, Matthew Muckley, and Karen Ullrich. Understanding and mitigating tokenization bias in language models, 2024. URL https://arxiv.org/abs/2406.16829." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 106, + 430, + 506, + 487 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 430, + 506, + 487 + ], + "spans": [ + { + "bbox": [ + 106, + 430, + 506, + 487 + ], + "type": "text", + "content": "Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. BPE-dropout: Simple and effective subword regularization. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (eds.), Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 1882-1892, Online, July 2020. Association for Computational Linguistics. doi: 10. 18653/v1/2020.acl-main.170. URL https://aclanthology.org/2020.acl-main.170." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 106, + 492, + 506, + 559 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 492, + 506, + 559 + ], + "spans": [ + { + "bbox": [ + 106, + 492, + 506, + 559 + ], + "type": "text", + "content": "Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. ProphetNet: Predicting future n-gram for sequence-to-SequencePretraining. In Trevor Cohn, Yulan He, and Yang Liu (eds.), Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 2401-2410, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.217. URL https://aclanthology.org/2020-findings-emnlp.217." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 106, + 566, + 506, + 600 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 566, + 506, + 600 + ], + "spans": [ + { + "bbox": [ + 106, + 566, + 506, + 600 + ], + "type": "text", + "content": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. URL https://cdn.openai.com/better-language-models/language Models are unsupervised multitask_learners.pdf." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 106, + 606, + 506, + 662 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 606, + 506, + 662 + ], + "spans": [ + { + "bbox": [ + 106, + 606, + 506, + 662 + ], + "type": "text", + "content": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Jian Su, Kevin Duh, and Xavier Carreras (eds.), Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383-2392, Austin, Texas, November 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1264. URL https://aclanthology.org/D16-1264." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 106, + 669, + 506, + 703 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 669, + 506, + 703 + ], + "spans": [ + { + "bbox": [ + 106, + 669, + 506, + 703 + ], + "type": "text", + "content": "Siva Reddy, Danqi Chen, and Christopher D. Manning. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266, 2019. doi: 10.1162/tacl_a_00266. URL https://aclanthology.org/Q19-1016." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 106, + 709, + 506, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 709, + 506, + 732 + ], + "spans": [ + { + "bbox": [ + 106, + 709, + 506, + 732 + ], + "type": "text", + "content": "Marco Tulio Ribeiro. A guidance language for controlling large language models, 2023. URL https://github.com/guidance-ai/guidance?tab=README-ov-file#text-not-tokens." + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 751, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 751, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 751, + 311, + 760 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 506, + 733 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "spans": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "type": "text", + "content": "Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI) Spring Symposium, 2011." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 122, + 505, + 157 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 122, + 505, + 157 + ], + "spans": [ + { + "bbox": [ + 105, + 122, + 505, + 157 + ], + "type": "text", + "content": "Jessica Rumbelow and Matthew Watkins. Solidgoldmagikarp (plus, prompt generation), 2023. URL https://www.lesswrong.com/posts/aPeJE8bSo6rAfOaLqg/solidgoldmagikarp-plus-prompt-generation." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 163, + 505, + 198 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 163, + 505, + 198 + ], + "spans": [ + { + "bbox": [ + 105, + 163, + 505, + 198 + ], + "type": "text", + "content": "Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: an adversarial winograd schema challenge at scale. Commun. ACM, 64(9):99-106, August 2021. ISSN 0001-0782. URL https://doi.org/10.1145/3474381." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 205, + 506, + 272 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 205, + 506, + 272 + ], + "spans": [ + { + "bbox": [ + 105, + 205, + 506, + 272 + ], + "type": "text", + "content": "Bahar Salehi, Paul Cook, and Timothy Baldwin. A word embedding approach to predicting the compositionality of multiword expressions. In Rada Mihalcea, Joyce Chai, and Anoop Sarkar (eds.), Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 977-983, Denver, Colorado, 2015. Association for Computational Linguistics. doi: 10.3115/v1/N15-1099. URL https://aclanthology.org/N15-1099/." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 279, + 506, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 279, + 506, + 346 + ], + "spans": [ + { + "bbox": [ + 105, + 279, + 506, + 346 + ], + "type": "text", + "content": "Craig W Schmidt, Varshini Reddy, Haoran Zhang, Alec Alameddine, Omri Uzan, Yuval Pinter, and Chris Tanner. Tokenization is more than compression. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 678-702, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.40. URL https://aclanthology.org/2024.emnlp-main.40." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 353, + 506, + 431 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 353, + 506, + 431 + ], + "spans": [ + { + "bbox": [ + 105, + 353, + 506, + 431 + ], + "type": "text", + "content": "Nathan Schneider, Spencer Onuffer, Nora Kazour, Emily Danchik, Michael T. Mordowanec, Henrietta Conrad, and Noah A. Smith. Comprehensive annotation of multiword expressions in a social web corpus. In Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis (eds.), Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pp. 455-461, Reykjavik, Iceland, May 2014. European Language Resources Association (ELRA). URL https://aclanthology.org/L14-1433/." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 437, + 506, + 494 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 437, + 506, + 494 + ], + "spans": [ + { + "bbox": [ + 105, + 437, + 506, + 494 + ], + "type": "text", + "content": "Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Katrin Erk and Noah A. Smith (eds.), Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715-1725, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1162. 
URL https://aclanthology.org/P16-1162." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 501, + 506, + 545 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 501, + 506, + 545 + ], + "spans": [ + { + "bbox": [ + 105, + 501, + 506, + 545 + ], + "type": "text", + "content": "Anya Sims, Cong Lu, Klara Kaleb, Jakob Nicolaus Foerster, and Yee Whye Teh. Stochastok: Improving fine-grained subword understanding in LLMs. In ICLR 2025 Workshop on Building Trust in Language Models and Applications, 2025. URL https://openreview.net/forum?id=PZnDZdkGsE." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 552, + 504, + 576 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 552, + 504, + 576 + ], + "spans": [ + { + "bbox": [ + 105, + 552, + 504, + 576 + ], + "type": "text", + "content": "Aaditya K. Singh and DJ Strouse. Tokenization counts: the impact of tokenization on arithmetic in frontier llms, 2024. URL https://arxiv.org/abs/2402.14903." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 582, + 506, + 616 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 582, + 506, + 616 + ], + "spans": [ + { + "bbox": [ + 105, + 582, + 506, + 616 + ], + "type": "text", + "content": "Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 624, + 506, + 701 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 624, + 506, + 701 + ], + "spans": [ + { + "bbox": [ + 105, + 624, + 506, + 701 + ], + "type": "text", + "content": "Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4149-4158, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1421. URL https://aclanthology.org/N19-1421." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 708, + 506, + 733 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 708, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 105, + 708, + 506, + 733 + ], + "type": "text", + "content": "Chaofan Tao, Qian Liu, Longxu Dou, Niklas Muennighoff, Zhongwei Wan, Ping Luo, Min Lin, and Ngai Wong. 
Scaling laws with vocabulary: Larger models deserve larger" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 507, + 732 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 115, + 81, + 505, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 81, + 505, + 105 + ], + "spans": [ + { + "bbox": [ + 115, + 81, + 505, + 105 + ], + "type": "text", + "content": "vocabies. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 112, + 507, + 158 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 112, + 507, + 158 + ], + "spans": [ + { + "bbox": [ + 107, + 112, + 507, + 158 + ], + "type": "text", + "content": "Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Metzler. Scale efficiently: Insights from pre-training and fine-tuning transformers. arXiv preprint arXiv:2109.10686, 2021." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 106, + 165, + 505, + 212 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 165, + 505, + 212 + ], + "spans": [ + { + "bbox": [ + 106, + 165, + 505, + 212 + ], + "type": "text", + "content": "Yi Tay, Vinh Q. Tran, Sebastian Ruder, Jai Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, and Donald Metzler. Charformer: Fast character transformers via gradient-based subword tokenization. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=JtBRnr10EFN." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 106, + 219, + 506, + 297 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 219, + 506, + 297 + ], + "spans": [ + { + "bbox": [ + 106, + 219, + 506, + 297 + ], + "type": "text", + "content": "Avijit Thawani, Jay Pujara, Filip Ilievski, and Pedro Szekely. Representing numbers in NLP: a survey and a vision. In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou (eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 644-656, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.53. URL https://aclanthology.org/2021.naacl-main.53." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 304, + 506, + 327 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 304, + 506, + 327 + ], + "spans": [ + { + "bbox": [ + 105, + 304, + 506, + 327 + ], + "type": "text", + "content": "Ke Tran. From english to foreign languages: Transferring pre-trained language models. arXiv preprint arXiv:2002.07306, 2020." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 106, + 335, + 506, + 403 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 335, + 506, + 403 + ], + "spans": [ + { + "bbox": [ + 106, + 335, + 506, + 403 + ], + "type": "text", + "content": "Menan Velayuthan and Kengatharaiyer Sarveswaran. Egalitarian language representation in language models: It all begins with tokenizers. In Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, and Steven Schockaert (eds.), Proceedings of the 31st International Conference on Computational Linguistics, pp. 5987-5996, Abu Dhabi, UAE, January 2025. Association for Computational Linguistics. URL https://aclanthology.org/2025.coling-main.400/." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 411, + 506, + 445 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 411, + 506, + 445 + ], + "spans": [ + { + "bbox": [ + 107, + 411, + 506, + 445 + ], + "type": "text", + "content": "Bandhav Veluri, Justin Chan, Malek Itani, Tuochao Chen, Takuya Yoshioka, and Shyamnath Gollakota. Real-time target sound extraction. In ICASSP, pp. 1-5, 2023. URL https://doi.org/10.1109/ICASSP49357.2023.10094573." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 453, + 506, + 487 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 453, + 506, + 487 + ], + "spans": [ + { + "bbox": [ + 107, + 453, + 506, + 487 + ], + "type": "text", + "content": "Tim Vieira, Ben LeBrun, Mario Giulianielli, Juan Luis Gastaldi, Brian DuSell, John Terilla, Timothy J O'Donnell, and Ryan Cotterell. From language models over tokens to language models over characters. arXiv preprint arXiv:2412.03719, 2024." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 106, + 495, + 506, + 529 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 495, + 506, + 529 + ], + "spans": [ + { + "bbox": [ + 106, + 495, + 506, + 529 + ], + "type": "text", + "content": "Junxiong Wang, Tushaar Gangavarapu, Jing Nathan Yan, and Alexander M Rush. Mambabye: Token-free selective state space model. In First Conference on Language Modeling, 2024. URL https://openreview.net/forum?id=X1xNsuKssb." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 106, + 536, + 506, + 636 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 536, + 506, + 636 + ], + "spans": [ + { + "bbox": [ + 106, + 536, + 506, + 636 + ], + "type": "text", + "content": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. Transformers: State-of-the-art natural language processing. In Qun Liu and David Schlangen (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38-45, Online, October 2020. Association for Computational Linguistics. 
doi: 10.18653/v1/2020.emnlp-demos.6. URL https://aclanthology.org/2020.emnlp-demos.6." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 106, + 644, + 506, + 679 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 644, + 506, + 679 + ], + "spans": [ + { + "bbox": [ + 106, + 644, + 506, + 679 + ], + "type": "text", + "content": "Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. Bloomberggpt: A large language model for finance, 2023. URL https://arxiv.org/abs/2303.17564." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 106, + 687, + 506, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 687, + 506, + 732 + ], + "spans": [ + { + "bbox": [ + 106, + 687, + 506, + 732 + ], + "type": "text", + "content": "Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. ByT5: Towards a token-free future with pre-trained byte-to-byte models. Transactions of the Association for Computational Linguistics, 10:291-306, 2022. doi: 10.1162/tacl_a_00461. URL https://aclanthology.org/2022.tacl-1.17." + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "18" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 507, + 477 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 105, + 81, + 507, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 81, + 507, + 160 + ], + "spans": [ + { + "bbox": [ + 105, + 81, + 507, + 160 + ], + "type": "text", + "content": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2369-2380, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1259. URL https://aclanthology.org/D18-1259." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 164, + 507, + 209 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 164, + 507, + 209 + ], + "spans": [ + { + "bbox": [ + 105, + 164, + 507, + 209 + ], + "type": "text", + "content": "Lili Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, and Mike Lewis. MEGABYTE: Predicting million-byte sequences with multiscale transformers. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=JTmO2V9Xpz." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 215, + 507, + 271 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 215, + 507, + 271 + ], + "spans": [ + { + "bbox": [ + 105, + 215, + 507, + 271 + ], + "type": "text", + "content": "Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? In Anna Korhonen, David Traum, and Lluis Márquez (eds.), Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4791-4800, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1472. URL https://aclanthology.org/P19-1472." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 276, + 507, + 332 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 276, + 507, + 332 + ], + "spans": [ + { + "bbox": [ + 105, + 276, + 507, + 332 + ], + "type": "text", + "content": "Xinsong Zhang, Pengshuai Li, and Hang Li. AMBERT: A pre-trained language model with multi-grained tokenization. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli (eds.), Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pp. 421-435, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021-findings-acl.37. URL https://aclanthology.org/2021-findings-acl.37." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 337, + 507, + 405 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 337, + 507, + 405 + ], + "spans": [ + { + "bbox": [ + 105, + 337, + 507, + 405 + ], + "type": "text", + "content": "Wanjun Zhong, Siyuan Wang, Duyu Tang, Zenan Xu, Daya Guo, Yining Chen, Jiahai Wang, Jian Yin, Ming Zhou, and Nan Duan. Analytical reasoning of text. In Marine Carpuat, Marie-Catherine de Marneffé, and Ivan Vladimir Meza Ruiz (eds.), Findings of the Association for Computational Linguistics: NAACL 2022, pp. 2306-2319, Seattle, United States, July 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-naacl.177. URL https://aclanthology.org/2022-findings-naacl.177." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 410, + 507, + 477 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 410, + 507, + 477 + ], + "spans": [ + { + "bbox": [ + 105, + 410, + 507, + 477 + ], + "type": "text", + "content": "Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. AGIEval: A human-centric benchmark for evaluating foundation models. In Kevin Duh, Helena Gomez, and Steven Bethard (eds.), Findings of the Association for Computational Linguistics: NAACL 2024, pp. 2299-2314, Mexico City, Mexico, June 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-naacl.149. URL https://aclanthology.org/2024.findings-naacl.149." 
+ } + ] + } + ], + "index": 6 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 105, + 495, + 276, + 510 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 495, + 276, + 510 + ], + "spans": [ + { + "bbox": [ + 105, + 495, + 276, + 510 + ], + "type": "text", + "content": "A Experimental setup details" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 519, + 219, + 533 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 519, + 219, + 533 + ], + "spans": [ + { + "bbox": [ + 105, + 519, + 219, + 533 + ], + "type": "text", + "content": "A.1 Tokenizer training" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 540, + 479, + 553 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 540, + 479, + 553 + ], + "spans": [ + { + "bbox": [ + 105, + 540, + 479, + 553 + ], + "type": "text", + "content": "We use the HuggingFace tokenizers (Wolf et al., 2020) library for tokenizer training." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 563, + 246, + 576 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 563, + 246, + 576 + ], + "spans": [ + { + "bbox": [ + 105, + 563, + 246, + 576 + ], + "type": "text", + "content": "A.1.1 Tokenizer training data" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 582, + 504, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 582, + 504, + 628 + ], + "spans": [ + { + "bbox": [ + 105, + 582, + 504, + 628 + ], + "type": "text", + "content": "We produce the tokenizer training data by sampling documents uniformly at random from the OLMO2 stage 2 pretraining data, referred to as olmo-mix. We use a 10 GB subset because early experiments showed that data beyond even " + }, + { + "bbox": [ + 105, + 582, + 504, + 628 + ], + "type": "inline_equation", + "content": "\\sim 10" + }, + { + "bbox": [ + 105, + 582, + 504, + 628 + ], + "type": "text", + "content": " MB does not make a difference in the resulting tokenizer's encoding efficiency." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "text", + "content": "We found that olmo-mix had several extremely long documents, with the longest " + }, + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "text", + "content": " of documents making up " + }, + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "inline_equation", + "content": "15\\%" + }, + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "text", + "content": " of the data. In particular, a full academic paper (specifically Veluri et al., 2023) is duplicated 2,224 times back-to-back inside one document (as delimited by special EOS tokens). Because our tokenizers are trained on small sets of data, these extremely long documents can take up a large proportion of the data, resulting in unusual tokens like chunk-based processing. 
To circumvent possible data duplication issues, we truncate the longest " + }, + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "text", + "content": " of documents in the tokenizer training data to the " + }, + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "inline_equation", + "content": "99\\%" + }, + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "text", + "content": " percentile of document lengths. As future practitioners train SuperBPE tokenizers, we encourage especial attention to dedduplication, which may have an outsized impact on SuperBPE tokenizers." + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "19" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 82, + 308, + 95 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 82, + 308, + 95 + ], + "spans": [ + { + "bbox": [ + 105, + 82, + 308, + 95 + ], + "type": "text", + "content": "A.1.2 Limit on the size of superword tokens" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 102, + 506, + 214 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 102, + 506, + 214 + ], + "spans": [ + { + "bbox": [ + 104, + 102, + 506, + 214 + ], + "type": "text", + "content": "Even after truncating the longest " + }, + { + "bbox": [ + 104, + 102, + 506, + 214 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 104, + 102, + 506, + 214 + ], + "type": "text", + "content": " of documents, we found that SuperBPE tokenizers can still have extremely long tokens consisting of highly duplicated boiler-plate text such as the Project Gutenberg license or common internet phrases such as You’re commenting using your. This issue is already present in BPE tokenizers trained on Chinese, which contain sentence-long tokens clearly taken from pornographic content. For instance, tokens in GPT-4O’s tokenizer include 最新高清无码 = latest HD uncensored and 娱乐网址 = entertainment website. To prevent concerns about the tokenizer directly revealing parts of the training data (Hayase et al., 2024), we enforce an upper bound of 4 words in our tokens. Empirically, we found that this had no measurable impact on the encoding efficiency of the tokenizers or the resulting trained LMs." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 226, + 236, + 237 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 226, + 236, + 237 + ], + "spans": [ + { + "bbox": [ + 105, + 226, + 236, + 237 + ], + "type": "text", + "content": "A.1.3 Pretokenization rules" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 245, + 504, + 268 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 245, + 504, + 268 + ], + "spans": [ + { + "bbox": [ + 104, + 245, + 504, + 268 + ], + "type": "text", + "content": "We implement whitespace pretokenization with the default regex string from tokenizers which was adopted by the GPT-2 tokenizer." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 110, + 278, + 313, + 290 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 278, + 313, + 290 + ], + "spans": [ + { + "bbox": [ + 110, + 278, + 313, + 290 + ], + "type": "interline_equation", + "content": "? \\backslash p \\{L \\} + |? [ ^ {\\wedge} \\backslash s \\backslash p \\{L \\} \\backslash p \\{N \\} ] + | \\backslash s + (? ! \\backslash S) | \\backslash s +", + "image_path": "12ee9ff47cec804659ff7f988abfc7d3f96b75ef90e8545e5ca4324cbb830a7b.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 300, + 504, + 346 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 300, + 504, + 346 + ], + "spans": [ + { + "bbox": [ + 104, + 300, + 504, + 346 + ], + "type": "text", + "content": "Note that the original GPT-2 pretokenization regex string also splits on contractions, e.g., splitting I'm into [I, 'm]. Since this choice is not universal among commercial tokenizers and is not related to whitespace pretokenization (and furthermore creates plenty of undesirable edge cases [Land, 2024]), we do not include this rule." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 350, + 506, + 429 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 350, + 506, + 429 + ], + "spans": [ + { + "bbox": [ + 104, + 350, + 506, + 429 + ], + "type": "text", + "content": "Independently of whitespace pretokenization (i.e., for both BPE and SuperBPE tokenizers), we follow recent convention (as introduced by GPT-3.5 and borrowed by LLAMA3, OLMo2) and pretokenize digits into blocks of 3. We make one modification, by grouping digits into 3 from the right rather than from the left, so that, e.g., 1000 would be pretokenized as [1, 000] instead of [100, 0]. This choice was recently found to yield improved performance on math benchmarks, even when applied solely at inference time (Singh & Strouse, 2024). Digit pretokenization is enforced with the following regex." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 111, + 437, + 202, + 450 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 437, + 202, + 450 + ], + "spans": [ + { + "bbox": [ + 111, + 437, + 202, + 450 + ], + "type": "interline_equation", + "content": "(? = (\\backslash d \\{3 \\}) + (?! 
\\backslash d))", + "image_path": "6e03a6a497acffe9a3ced57df406a9ec7096d1ed75a8a51d5158a1d286adea85.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 462, + 242, + 475 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 462, + 242, + 475 + ], + "spans": [ + { + "bbox": [ + 105, + 462, + 242, + 475 + ], + "type": "text", + "content": "A.1.4 Special casing of colon" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 481, + 506, + 550 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 481, + 506, + 550 + ], + "spans": [ + { + "bbox": [ + 104, + 481, + 506, + 550 + ], + "type": "text", + "content": "In order to make our tokenizer compatible with the common question-answering format where the prompt ends with a colon and the continuation is expected to start with a space, we \"special-case\" colon by preventing the algorithm from learning any tokens that contain\": as a substring. Without this fix, common question/answer prompts might produce distorted distributions over completions. Please see " + }, + { + "bbox": [ + 104, + 481, + 506, + 550 + ], + "type": "inline_equation", + "content": "\\S C.3" + }, + { + "bbox": [ + 104, + 481, + 506, + 550 + ], + "type": "text", + "content": " for further discussion. This affects the resulting tokenizer minimally in terms of the learned vocabulary." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 562, + 266, + 574 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 562, + 266, + 574 + ], + "spans": [ + { + "bbox": [ + 105, + 562, + 266, + 574 + ], + "type": "text", + "content": "A.2 Scaling model configurations" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 582, + 506, + 682 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 582, + 506, + 682 + ], + "spans": [ + { + "bbox": [ + 104, + 582, + 506, + 682 + ], + "type": "text", + "content": "When matching inference compute, the goal is to match the average flops per byte of generated text between two models with different tokenizers. To do so, we scale the model up to cancel the effect of longer tokens, which requires precise control over the model's size. To produce a model config with an arbitrary inference compute cost, we first represent the inference flops per token as a polynomial in terms of the model dimension, MLP hidden dimension, and number of layers. Conveniently, once the model dimension and number of layers are chosen, the flops are affine in the MLP hidden dimension, so we can easily solve for the MLP hidden dimension that gets us closest to the desired budget. We fix the head dimension to that of the base model." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 687, + 504, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 687, + 504, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 687, + 504, + 733 + ], + "type": "text", + "content": "To find the best config overall, we grid search over the hidden dimension (which must remain a multiple of the head dimension) and number of layers, solving for the MLP hidden dimension at each step. We choose the config which expands the transformer by the most uniform factors. 
This is measured by taking the ratios of the current parameters with the" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "20" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 504, + 127 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 504, + 127 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 504, + 127 + ], + "type": "text", + "content": "base config's parameters, applying the logarithm, and taking the standard deviation. While prior work has explored the best way to scale transformer models (Tay et al., 2021; Petty et al., 2023), we believe that scaling all parameters uniformly is reasonable since we are only increasing the model size by a small amount." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 132, + 504, + 154 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 132, + 504, + 154 + ], + "spans": [ + { + "bbox": [ + 104, + 132, + 504, + 154 + ], + "type": "text", + "content": "We present the exact model hyperparameters for all model sizes used in our experiments in Table 4." + } + ] + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 110, + 161, + 501, + 255 + ], + "blocks": [ + { + "bbox": [ + 110, + 161, + 501, + 255 + ], + "lines": [ + { + "bbox": [ + 110, + 161, + 501, + 255 + ], + "spans": [ + { + "bbox": [ + 110, + 161, + 501, + 255 + ], + "type": "table", + "html": "
680M910M1.9B2.5B8B11B
Parameter count678.2M912.5M1.893B2.536B8.115B11.30B
Model dimension10241,2162,0482,3044,0964,608
MLP hidden dimension8,1929,72816,38418,43222,01624,704
Head dimension6464128128128128
Number of heads161916183236
Number of layers161816193237
Vocabulary size200,005200,005200,005200,005200,005200,005
", + "image_path": "f06d843bf9b139148351b3eacedc5b01c1008f2f1ee0395ed7672c979ff3e3ba.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 262, + 504, + 318 + ], + "lines": [ + { + "bbox": [ + 104, + 262, + 504, + 318 + ], + "spans": [ + { + "bbox": [ + 104, + 262, + 504, + 318 + ], + "type": "text", + "content": "Table 4: Model parameters for all model sizes. Model sizes 910M, 2.5B, and 11B are scaled versions of 680M, 1.9B, and 8B respectively. All other parameters match those of OLMO 300M (from the OLMO model ladder) for sizes 680M and 910M, OLMO 1B for sizes 1.9B and 2.5B, or OLMO2 7B for sizes 8B and 11B, respectively. Maximum sequence length values for various tokenizers are listed in Table 2." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 105, + 344, + 286, + 357 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 344, + 286, + 357 + ], + "spans": [ + { + "bbox": [ + 105, + 344, + 286, + 357 + ], + "type": "text", + "content": "A.3 Compute used for model training" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 365, + 324, + 376 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 365, + 324, + 376 + ], + "spans": [ + { + "bbox": [ + 104, + 365, + 324, + 376 + ], + "type": "text", + "content": "All models were pretrained on " + }, + { + "bbox": [ + 104, + 365, + 324, + 376 + ], + "type": "inline_equation", + "content": "328\\times \\mathrm{H}100" + }, + { + "bbox": [ + 104, + 365, + 324, + 376 + ], + "type": "text", + "content": " nodes." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 390, + 209, + 401 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 390, + 209, + 401 + ], + "spans": [ + { + "bbox": [ + 105, + 390, + 209, + 401 + ], + "type": "text", + "content": "A.4 Evaluation Suite" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 410, + 506, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 410, + 506, + 456 + ], + "spans": [ + { + "bbox": [ + 104, + 410, + 506, + 456 + ], + "type": "text", + "content": "Our evaluation suite builds on DataComp-LM's core evaluation of 22 tasks (Li et al., 2024), which was found to provide low-variance signal of learning. We add 8 more popular tasks (e.g., MMLU, GSM8K) while also covering string manipulation tasks (e.g., CUTE), which are known to be challenging for LMs due to their tokenizers." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 460, + 505, + 538 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 460, + 505, + 538 + ], + "spans": [ + { + "bbox": [ + 104, + 460, + 505, + 538 + ], + "type": "text", + "content": "All evaluations are based on decoding from the model and scoring the generation by either comparing it to the ground truth or evaluating its functional correctness (in the case of coding tasks). For multiple choice (MC) tasks, we check whether the predicted answer choice is an exact match (EM) to the target (we observe that effectively " + }, + { + "bbox": [ + 104, + 460, + 505, + 538 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 104, + 460, + 505, + 538 + ], + "type": "text", + "content": " of model generations are valid answer choices, especially for later checkpoints). For open-ended tasks, we check whether the generated output contains the ground truth answer exactly, and for coding tasks, we report pass@10." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 543, + 504, + 598 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 543, + 504, + 598 + ], + "spans": [ + { + "bbox": [ + 104, + 543, + 504, + 598 + ], + "type": "text", + "content": "We provide 5 in-context examples for all tasks, except for CoQA, which naturally contains in-context examples in the conversational context, and the coding tasks (HumanEval and MBPP), which are evaluated zero-shot following prior work. We use a maximum of 5,000 examples from each dataset, though some datasets contain much fewer examples. BB below stands for BIG-Bench." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 610, + 504, + 644 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 610, + 504, + 644 + ], + "spans": [ + { + "bbox": [ + 104, + 610, + 504, + 644 + ], + "type": "text", + "content": "ARC consists of 4-way MC questions from grades 3-9 science exams. It contains two splits, ARC-Easy, which require knowledge of basic science, and ARC-Challenge, which require some procedural reasoning (Clark et al., 2018)." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 656, + 504, + 681 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 656, + 504, + 681 + ], + "spans": [ + { + "bbox": [ + 104, + 656, + 504, + 681 + ], + "type": "text", + "content": "Arithmetic contains simple arithmetic problems (Brown et al., 2020).11 We use the 2da, 2dm, and 2ds splits for addition, multiplication, and division of (up to) 2-digit numbers." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 691, + 504, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 691, + 504, + 715 + ], + "spans": [ + { + "bbox": [ + 104, + 691, + 504, + 715 + ], + "type": "text", + "content": "BoolQ contains naturally occurring yes/no questions paired with passages that provide an answer (Clark et al., 2019)." + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 113, + 719, + 362, + 731 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 113, + 719, + 362, + 731 + ], + "spans": [ + { + "bbox": [ + 113, + 719, + 362, + 731 + ], + "type": "text", + "content": "11https://huggingface.co/datasets/EleutherAI/arithmetic" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "21" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 20 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 504, + 105 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 504, + 105 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 504, + 105 + ], + "type": "text", + "content": "CommonsenseQA contains 5-way MC questions that require commonsense knowledge to answer (Talmor et al., 2019)." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 118, + 506, + 140 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 118, + 506, + 140 + ], + "spans": [ + { + "bbox": [ + 104, + 118, + 506, + 140 + ], + "type": "text", + "content": "COPA contains two-way MC questions about cause and effect (Roemmle et al., 2011; Kavumba et al., 2019)." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 153, + 506, + 199 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 153, + 506, + 199 + ], + "spans": [ + { + "bbox": [ + 104, + 153, + 506, + 199 + ], + "type": "text", + "content": "CoQA consists of passages with a series of conversational questions about the passage Reddy et al. (2019). Each question requires the prior conversational context, due to possible coreference across questions. Because these contextual questions naturally serve as in-context examples, we do not provide additional in-context examples." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 211, + 506, + 245 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 211, + 506, + 245 + ], + "spans": [ + { + "bbox": [ + 104, + 211, + 506, + 245 + ], + "type": "text", + "content": "BB CS Algorithms consists of two subtasks, determining whether a given series of parentheses is balanced and identifying the longest common subsequence in two letter strings (BIG-bench, 2023)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 257, + 506, + 291 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 257, + 506, + 291 + ], + "spans": [ + { + "bbox": [ + 104, + 257, + 506, + 291 + ], + "type": "text", + "content": "CUTE contains questions that require the model to understand and manipulate spelling, such as replacing all instances of a particular letter in a word with another letter (Edman et al., 2024)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 304, + 504, + 327 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 304, + 504, + 327 + ], + "spans": [ + { + "bbox": [ + 104, + 304, + 504, + 327 + ], + "type": "text", + "content": "DROP contains questions about passages, potentially requiring reasoning over multiple pieces of information in the passage (Dua et al., 2019)." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 339, + 506, + 373 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 339, + 506, + 373 + ], + "spans": [ + { + "bbox": [ + 104, + 339, + 506, + 373 + ], + "type": "text", + "content": "BB Dyck Languages consists of a sequence of parentheses and requires the model to predict the correct sequence of closing parentheses so that the entire sequence is well-balanced." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 385, + 504, + 441 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 385, + 504, + 441 + ], + "spans": [ + { + "bbox": [ + 104, + 385, + 504, + 441 + ], + "type": "text", + "content": "GSM8K contains grade school math word problems that require between 2 and 8 steps to solve. In the in-context examples, we provide the answer passage that contains intermediate steps with calculator annotations removed. The model is expected to provide the final numerical answer after four hashtags (# ###) that delimit the reasoning and final answer (Cobbe et al., 2021)." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 454, + 504, + 477 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 454, + 504, + 477 + ], + "spans": [ + { + "bbox": [ + 104, + 454, + 504, + 477 + ], + "type": "text", + "content": "HellaSwag contains 4-way MC questions which ask for the most natural continuation given the context (Zellers et al., 2019)." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 490, + 506, + 513 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 490, + 506, + 513 + ], + "spans": [ + { + "bbox": [ + 104, + 490, + 506, + 513 + ], + "type": "text", + "content": "HotpotQA contains questions along with a corresponding passage from Wikipedia containing the answer (Yang et al., 2018)." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 525, + 506, + 604 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 525, + 506, + 604 + ], + "spans": [ + { + "bbox": [ + 104, + 525, + 506, + 604 + ], + "type": "text", + "content": "HumanEval contains programming problems where the model is tasked with completing a Python function given its docstring (Chen et al., 2021). We use \"\\nclass,\" \"\\ndef,\" \"\\n#,\"\\nif,\" as stop tokens. Following the original paper, we sample 20 continuations with top " + }, + { + "bbox": [ + 104, + 525, + 506, + 604 + ], + "type": "inline_equation", + "content": "p = 0.95" + }, + { + "bbox": [ + 104, + 525, + 506, + 604 + ], + "type": "text", + "content": " and temperature " + }, + { + "bbox": [ + 104, + 525, + 506, + 604 + ], + "type": "inline_equation", + "content": "= 0.8" + }, + { + "bbox": [ + 104, + 525, + 506, + 604 + ], + "type": "text", + "content": ". Models are allowed to generate for a maximum of 128 new tokens. The functional correctness of generations is automatically evaluated using test cases. We use the 20 generation to make an unbiased estimate of the pass@10 rate, i.e., how likely at least one of 10 sampled solutions for a problem is correct." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 616, + 449, + 630 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 616, + 449, + 630 + ], + "spans": [ + { + "bbox": [ + 104, + 616, + 449, + 630 + ], + "type": "text", + "content": "Jeopardy contains open-ended questions from the \"Jeopardy!\" quiz show.[12]" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 642, + 506, + 676 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 642, + 506, + 676 + ], + "spans": [ + { + "bbox": [ + 104, + 642, + 506, + 676 + ], + "type": "text", + "content": "Lambada contains narratives without the last word, which is inferrable given the context (Paperno et al., 2016). This task requires models to attend to the full narrative instead of only the local context." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 689, + 504, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 689, + 504, + 712 + ], + "spans": [ + { + "bbox": [ + 104, + 689, + 504, + 712 + ], + "type": "text", + "content": "BB Language Identification contains sentences in different languages, and the task is to choose the language of the sentence from a long list of options." 
+ } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 113, + 719, + 412, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 113, + 719, + 412, + 732 + ], + "spans": [ + { + "bbox": [ + 113, + 719, + 412, + 732 + ], + "type": "text", + "content": "12https://www.kaggle.com/datasets/tunguz/200000-jeopardy-questions" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "22" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 21 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 504, + 117 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 504, + 117 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 504, + 117 + ], + "type": "text", + "content": "LSAT-AR contains MC questions that evaluate the analytical reasoning (AR) ability of LMs (Zhong et al., 2022; 2024). Test questions are drawn from the Law School Admission Test (LSAT) from 1991 to 2016." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 128, + 504, + 161 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 128, + 504, + 161 + ], + "spans": [ + { + "bbox": [ + 104, + 128, + 504, + 161 + ], + "type": "text", + "content": "MBPP contains Python programming problems where the model is given a description of the desired function and a series of unit tests. We use the same evaluation setup as HumanEval." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 174, + 505, + 209 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 174, + 505, + 209 + ], + "spans": [ + { + "bbox": [ + 104, + 174, + 505, + 209 + ], + "type": "text", + "content": "MMLU contains 4-way MC questions covering 57 different domains, covering both world knowledge and problem-solving abilities (Hendrycks et al., 2021). Note that we report a straight average over the 5000-example sample, rather than a macro-average over subjects." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 220, + 506, + 244 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 220, + 506, + 244 + ], + "spans": [ + { + "bbox": [ + 104, + 220, + 506, + 244 + ], + "type": "text", + "content": "OpenbookQA contains 4-way MC questions that require multi-step reasoning and commonsense knowledge (Mihaylov et al., 2018)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 255, + 504, + 280 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 255, + 504, + 280 + ], + "spans": [ + { + "bbox": [ + 105, + 255, + 504, + 280 + ], + "type": "text", + "content": "BB Operators contains questions where the model is given a function definition and asked to compute the output of that function given a particular input." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 290, + 505, + 313 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 290, + 505, + 313 + ], + "spans": [ + { + "bbox": [ + 104, + 290, + 505, + 313 + ], + "type": "text", + "content": "PIQA contains MC questions that require physical commonsense reasoning (Bisk et al., 2020)." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 325, + 504, + 349 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 325, + 504, + 349 + ], + "spans": [ + { + "bbox": [ + 104, + 325, + 504, + 349 + ], + "type": "text", + "content": "BB Repeat-Copy-Logic contains instructions that ask the model to produce a particular string (Austin et al., 2021)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 361, + 506, + 384 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 361, + 506, + 384 + ], + "spans": [ + { + "bbox": [ + 104, + 361, + 506, + 384 + ], + "type": "text", + "content": "SQuAD contains passages paired with questions about the passage (Rajpurkar et al., 2016). The answer is always a span from the passage." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 396, + 487, + 408 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 396, + 487, + 408 + ], + "spans": [ + { + "bbox": [ + 105, + 396, + 487, + 408 + ], + "type": "text", + "content": "TriviaQA contains open-ended questions about world knowledge (Joshi et al., 2017)." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 419, + 506, + 442 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 419, + 506, + 442 + ], + "spans": [ + { + "bbox": [ + 104, + 419, + 506, + 442 + ], + "type": "text", + "content": "BB WikidataQA require models to complete factual statements with the correct continuation." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 455, + 506, + 489 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 455, + 506, + 489 + ], + "spans": [ + { + "bbox": [ + 104, + 455, + 506, + 489 + ], + "type": "text", + "content": "Winograd contains binary MC questions where the model is given a context and asked to determine which entity a pronoun refers to, between two options (Levesque et al., 2012). Correctly answer the question requires commonsense knowledge and contextual reasoning." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 501, + 504, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 501, + 504, + 525 + ], + "spans": [ + { + "bbox": [ + 104, + 501, + 504, + 525 + ], + "type": "text", + "content": "Winogrande contain questions with the same schema as Winograd, but increases both the scale and difficulty of the dataset (Sakaguchi et al., 2021)." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 540, + 231, + 553 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 540, + 231, + 553 + ], + "spans": [ + { + "bbox": [ + 105, + 540, + 231, + 553 + ], + "type": "text", + "content": "B Additional Results" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 565, + 296, + 578 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 565, + 296, + 578 + ], + "spans": [ + { + "bbox": [ + 105, + 565, + 296, + 578 + ], + "type": "text", + "content": "B.1 How BPB varies with context length" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 103, + 586, + 504, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 103, + 586, + 504, + 665 + ], + "spans": [ + { + "bbox": [ + 103, + 586, + 504, + 665 + ], + "type": "text", + "content": "In our main experiments (§3), we adjust the context size of SuperBPE models to match the effective context size of the BPE model in raw text. To justify this design choice, we show that the next token becomes easier to predict as a function of the preceding context in bytes (not tokens). Figure 6 shows the average BPB at every token index (left) vs byte index (right) — when measured at fixed token indices, SuperBPE has an advantage from seeing more context (achieving lower loss on average at the same token index), whereas at fixed byte indices, this advantage goes away." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 105, + 677, + 204, + 689 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 677, + 204, + 689 + ], + "spans": [ + { + "bbox": [ + 105, + 677, + 204, + 689 + ], + "type": "text", + "content": "B.2 Task evaluation" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 104, + 698, + 504, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 698, + 504, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 698, + 504, + 733 + ], + "type": "text", + "content": "We report the individual task performance of BPE and all SuperBPE models in Table 5 (this an expansion of Table 1). We also show a subset of task-specific performance curves during pretraining in Figure 12." 
+ } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "23" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 22 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 122, + 88, + 292, + 228 + ], + "blocks": [ + { + "bbox": [ + 122, + 88, + 292, + 228 + ], + "lines": [ + { + "bbox": [ + 122, + 88, + 292, + 228 + ], + "spans": [ + { + "bbox": [ + 122, + 88, + 292, + 228 + ], + "type": "image", + "image_path": "84f2007d154f77661b1483fc6a7996163488e1e8ff828b06402921ad2557ab4c.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 153, + 243, + 265, + 254 + ], + "lines": [ + { + "bbox": [ + 153, + 243, + 265, + 254 + ], + "spans": [ + { + "bbox": [ + 153, + 243, + 265, + 254 + ], + "type": "text", + "content": "(a) BPB vs tokens of context" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 314, + 88, + 484, + 228 + ], + "blocks": [ + { + "bbox": [ + 314, + 88, + 484, + 228 + ], + "lines": [ + { + "bbox": [ + 314, + 88, + 484, + 228 + ], + "spans": [ + { + "bbox": [ + 314, + 88, + 484, + 228 + ], + "type": "image", + "image_path": "c421e5a30f24ec4e8a6d113c372482b96b32219f88c00b1e3e277b791450fae3.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 348, + 243, + 455, + 254 + ], + "lines": [ + { + "bbox": [ + 348, + 243, + 455, + 254 + ], + "spans": [ + { + "bbox": [ + 348, + 243, + 455, + 254 + ], + "type": "text", + "content": "(b) BPB vs bytes of context" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 104, + 262, + 504, + 298 + ], + "lines": [ + { + "bbox": [ + 104, + 262, + 504, + 298 + ], + "spans": [ + { + "bbox": [ + 104, + 262, + 504, + 298 + ], + "type": "text", + "content": "Figure 6: When comparing the normalized loss of the next token, controlling for preceding tokens of context gives SuperBPE an advantage, while controlling for bytes of context gives a close match between BPE and SuperBPE." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "table", + "bbox": [ + 119, + 319, + 491, + 630 + ], + "blocks": [ + { + "bbox": [ + 119, + 319, + 491, + 630 + ], + "lines": [ + { + "bbox": [ + 119, + 319, + 491, + 630 + ], + "spans": [ + { + "bbox": [ + 119, + 319, + 491, + 630 + ], + "type": "table", + "html": "
CategoryTaskBPE 8BSuperBPE 8BSuperBPE 11B
t = 80kt = 160kt = 180k
KnowledgeARC-Easy (MC)46.660.863.667.160.6
ARC-Challenge (MC)35.146.443.950.643.9
Jeopardy (EM)42.140.241.841.842.2
MMLU (MC)36.541.942.644.741.0
OpenbookQA (MC)33.249.849.454.446.4
TriviaQA (EM)60.659.761.961.362.3
WikidataQA (EM)69.768.269.570.970.9
Math & ReasoningArithmetic (EM)54.863.258.659.356.9
GSM8K (EM)6.46.96.76.77.4
LSAT-AR (MC)21.323.924.323.020.9
Operators (EM)35.532.235.533.637.9
Repeat-Copy-Logic (EM)3.16.26.26.23.1
CodingHumanEval (pass@10)15.915.014.413.415.9
MBPP (pass@10)27.525.328.428.329.4
Reading ComprehensionBoolQ (MC)59.765.262.364.664.7
CoQA (EM)12.612.812.513.213.1
DROP (EM)31.328.632.831.433.1
HotpotQA (EM)53.552.554.755.254.6
SQuAD (EM)75.174.376.275.877.2
CommonsenseCommonsenseQA (MC)33.550.052.353.850.5
COPA (MC)77.086.687.685.897.0
PIQA (MC)55.257.761.859.859.2
Winograd (MC)50.452.555.253.152.3
Winogrande (MC)47.351.251.652.650.2
Language UnderstandingHellaSwag (MC)29.731.230.333.736.6
LAMBADA (EM)77.072.875.170.675.8
Language Identification (EM)8.810.29.79.010.1
String ManipulationCS Algorithms (EM)46.147.342.648.649.1
CUTE (EM)31.332.232.832.635.7
Dyck-Languages (EM)15.923.218.814.216.7
Average39.842.943.443.843.8
", + "image_path": "37a1775229b0c6dc7311ec0893d78fc5a542f3c4c1c2fade67db25d48e8e2c30.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 638, + 504, + 662 + ], + "lines": [ + { + "bbox": [ + 104, + 638, + 504, + 662 + ], + "spans": [ + { + "bbox": [ + 104, + 638, + 504, + 662 + ], + "type": "text", + "content": "Table 5: Performance of BPE and SuperBPE models on 30 downstream tasks. This is an expansion of Table 1 with more models." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 105, + 694, + 202, + 704 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 694, + 202, + 704 + ], + "spans": [ + { + "bbox": [ + 105, + 694, + 202, + 704 + ], + "type": "text", + "content": "B.3 BPB evaluation" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 719, + 453, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 719, + 453, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 719, + 453, + 733 + ], + "type": "text", + "content": "See Figure 7 for the bits-per-byte during pretraining of all models we compare." + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "24" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 23 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 165, + 79, + 446, + 289 + ], + "blocks": [ + { + "bbox": [ + 165, + 79, + 446, + 289 + ], + "lines": [ + { + "bbox": [ + 165, + 79, + 446, + 289 + ], + "spans": [ + { + "bbox": [ + 165, + 79, + 446, + 289 + ], + "type": "image", + "image_path": "ad02fa6005023c7a5ffcddcd0f09007a757f860bb2f9c15ab4baf49e6e43bf72.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 297, + 506, + 333 + ], + "lines": [ + { + "bbox": [ + 104, + 297, + 506, + 333 + ], + "spans": [ + { + "bbox": [ + 104, + 297, + 506, + 333 + ], + "type": "text", + "content": "Figure 7: Bits-per-byte of BPE and SuperBPE models during pretraining. The BPE 8B, SuperBPE 8B (" + }, + { + "bbox": [ + 104, + 297, + 506, + 333 + ], + "type": "inline_equation", + "content": "t = 180\\mathrm{k}" + }, + { + "bbox": [ + 104, + 297, + 506, + 333 + ], + "type": "text", + "content": "), and SuperBPE 11B attain 0.7465, 0.7482, and 0.7445 BPB respectively at the end of training." 
+ } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 354, + 277, + 367 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 354, + 277, + 367 + ], + "spans": [ + { + "bbox": [ + 105, + 354, + 277, + 367 + ], + "type": "text", + "content": "B.4 Additional scaling experiments" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 375, + 506, + 421 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 375, + 506, + 421 + ], + "spans": [ + { + "bbox": [ + 104, + 375, + 506, + 421 + ], + "type": "text", + "content": "Our tokenizer has several interesting interactions with LM scaling, purely due to its increased efficiency. For the purpose of this section, let " + }, + { + "bbox": [ + 104, + 375, + 506, + 421 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 104, + 375, + 506, + 421 + ], + "type": "text", + "content": " denote the ratio of our tokenizer's efficiency to the efficiency of a normal BPE tokenizer. (For example, we have " + }, + { + "bbox": [ + 104, + 375, + 506, + 421 + ], + "type": "inline_equation", + "content": "\\alpha \\approx 1.49" + }, + { + "bbox": [ + 104, + 375, + 506, + 421 + ], + "type": "text", + "content": " for our most efficient tokenizer.)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 425, + 505, + 461 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 425, + 505, + 461 + ], + "spans": [ + { + "bbox": [ + 104, + 425, + 505, + 461 + ], + "type": "text", + "content": "The primary advantage of a more efficient tokenizer is a reduction of the context length (in tokens) for the same effective context length (in bytes). All other things being equal, this gives:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 127, + 469, + 504, + 525 + ], + "type": "list", + "angle": 0, + "index": 9, + "blocks": [ + { + "bbox": [ + 128, + 469, + 317, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 128, + 469, + 317, + 483 + ], + "spans": [ + { + "bbox": [ + 128, + 469, + 317, + 483 + ], + "type": "text", + "content": "1. A " + }, + { + "bbox": [ + 128, + 469, + 317, + 483 + ], + "type": "inline_equation", + "content": "1 / \\alpha^2" + }, + { + "bbox": [ + 128, + 469, + 317, + 483 + ], + "type": "text", + "content": " reduction in attention compute." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 127, + 487, + 332, + 499 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 487, + 332, + 499 + ], + "spans": [ + { + "bbox": [ + 127, + 487, + 332, + 499 + ], + "type": "text", + "content": "2. A " + }, + { + "bbox": [ + 127, + 487, + 332, + 499 + ], + "type": "inline_equation", + "content": "1 / \\alpha" + }, + { + "bbox": [ + 127, + 487, + 332, + 499 + ], + "type": "text", + "content": " reduction in non-attention compute." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 127, + 502, + 504, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 502, + 504, + 525 + ], + "spans": [ + { + "bbox": [ + 127, + 502, + 504, + 525 + ], + "type": "text", + "content": "3. A " + }, + { + "bbox": [ + 127, + 502, + 504, + 525 + ], + "type": "inline_equation", + "content": "1 / \\alpha" + }, + { + "bbox": [ + 127, + 502, + 504, + 525 + ], + "type": "text", + "content": " reduction in activation memory during training and KV-cache size during inference." 
+ } + ] + } + ], + "index": 8 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 104, + 536, + 504, + 572 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 536, + 504, + 572 + ], + "spans": [ + { + "bbox": [ + 104, + 536, + 504, + 572 + ], + "type": "text", + "content": "Thus, if the context length is short, the total compute savings will be close to " + }, + { + "bbox": [ + 104, + 536, + 504, + 572 + ], + "type": "inline_equation", + "content": "1 / \\alpha" + }, + { + "bbox": [ + 104, + 536, + 504, + 572 + ], + "type": "text", + "content": ". For longer contexts, the compute savings may approach " + }, + { + "bbox": [ + 104, + 536, + 504, + 572 + ], + "type": "inline_equation", + "content": "1 / \\alpha^2" + }, + { + "bbox": [ + 104, + 536, + 504, + 572 + ], + "type": "text", + "content": ". Given a fixed training budget, there are two natural ways to convert these savings into improved performance." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 585, + 288, + 597 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 585, + 288, + 597 + ], + "spans": [ + { + "bbox": [ + 105, + 585, + 288, + 597 + ], + "type": "text", + "content": "B.4.1 Matching model parameter count" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 605, + 506, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 605, + 506, + 651 + ], + "spans": [ + { + "bbox": [ + 104, + 605, + 506, + 651 + ], + "type": "text", + "content": "In many applications of language models, such as deployment to consumer or edge devices, it is crucial to keep the model's size under control. In this regime, we will assume the model size fixed. This directly grants the aforementioned benefits, and we will turn to increasing the number of training steps to match the training budget." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 654, + 506, + 734 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 654, + 506, + 734 + ], + "spans": [ + { + "bbox": [ + 104, + 654, + 506, + 734 + ], + "type": "text", + "content": "Since the amount of text seen per step is remains the same due to the fixed effective context length, a more efficient tokenizer allows the model to see more text during training for the same budget. This may lead to improved performance on downstream tasks since the model is more likely to have seen relevant training examples during training. Additionally, although the model is the same size, it requires less compute and memory at inference time to perform the same tasks. In some settings, these gains can be used to amplify inference-time scaling (Snell et al., 2024), leading to further potential gains." 
+ } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "25" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 24 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 171, + 103, + 419, + 288 + ], + "blocks": [ + { + "bbox": [ + 171, + 103, + 419, + 288 + ], + "lines": [ + { + "bbox": [ + 171, + 103, + 419, + 288 + ], + "spans": [ + { + "bbox": [ + 171, + 103, + 419, + 288 + ], + "type": "image", + "image_path": "d2a85f429d465bfed06712481ee84da313cf21f2e0393cdc138c69019561edbb.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 297, + 506, + 344 + ], + "lines": [ + { + "bbox": [ + 104, + 297, + 506, + 344 + ], + "spans": [ + { + "bbox": [ + 104, + 297, + 506, + 344 + ], + "type": "text", + "content": "Figure 8: Results for scaling both model parameters and train tokens proportionally. Compared to the BPE baseline, we consider a SuperBPE model that matches parameter count and a SuperBPE model that matches inference compute. Here we see the spread between the three settings decreases with scale." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 372, + 265, + 385 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 372, + 265, + 385 + ], + "spans": [ + { + "bbox": [ + 105, + 372, + 265, + 385 + ], + "type": "text", + "content": "B.4.2 Matching inference compute" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 395, + 504, + 430 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 395, + 504, + 430 + ], + "spans": [ + { + "bbox": [ + 104, + 395, + 504, + 430 + ], + "type": "text", + "content": "In other applications of language models, model size is less critical compared to inference compute. In these situations, it may be more desirable to scale the model size up to absorb the extra compute." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 434, + 506, + 540 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 434, + 506, + 540 + ], + "spans": [ + { + "bbox": [ + 104, + 434, + 506, + 540 + ], + "type": "text", + "content": "Changing the model size has a strong impact on scaling. Depending on the context length, we may scale the model by a factor of anywhere between " + }, + { + "bbox": [ + 104, + 434, + 506, + 540 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 104, + 434, + 506, + 540 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 434, + 506, + 540 + ], + "type": "inline_equation", + "content": "\\alpha^2" + }, + { + "bbox": [ + 104, + 434, + 506, + 540 + ], + "type": "text", + "content": " in order to match inference compute. 
Since each training step involves " + }, + { + "bbox": [ + 104, + 434, + 506, + 540 + ], + "type": "inline_equation", + "content": "1 / \\alpha" + }, + { + "bbox": [ + 104, + 434, + 506, + 540 + ], + "type": "text", + "content": " as many tokens, the ratio of tokens to model parameters per step may be reduced by as much as " + }, + { + "bbox": [ + 104, + 434, + 506, + 540 + ], + "type": "inline_equation", + "content": "1 / \\alpha^3" + }, + { + "bbox": [ + 104, + 434, + 506, + 540 + ], + "type": "text", + "content": ". Prior work on LM scaling (Hoffmann et al., 2022; Kaplan et al., 2020) reports diminishing gains once the ratio of the numbers of train tokens and model parameters becomes too large. An " + }, + { + "bbox": [ + 104, + 434, + 506, + 540 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 104, + 434, + 506, + 540 + ], + "type": "text", + "content": " times more efficient tokenizer allows us to train for up to " + }, + { + "bbox": [ + 104, + 434, + 506, + 540 + ], + "type": "inline_equation", + "content": "\\alpha^3" + }, + { + "bbox": [ + 104, + 434, + 506, + 540 + ], + "type": "text", + "content": " times longer while maintaining the same token/parameter ratio and without increasing inference compute, delaying the regime of diminishing gains." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 559, + 195, + 572 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 559, + 195, + 572 + ], + "spans": [ + { + "bbox": [ + 105, + 559, + 195, + 572 + ], + "type": "text", + "content": "B.4.3 Experiments" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 582, + 504, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 582, + 504, + 628 + ], + "spans": [ + { + "bbox": [ + 104, + 582, + 504, + 628 + ], + "type": "text", + "content": "We train 680M and 1.9B sized BPE models on various numbers of tokens—ranging from " + }, + { + "bbox": [ + 104, + 582, + 504, + 628 + ], + "type": "inline_equation", + "content": "\\approx 20" + }, + { + "bbox": [ + 104, + 582, + 504, + 628 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 104, + 582, + 504, + 628 + ], + "type": "inline_equation", + "content": "\\approx 80" + }, + { + "bbox": [ + 104, + 582, + 504, + 628 + ], + "type": "text", + "content": " tokens per parameter—to establish a baseline scaling trend. We then train two models with SuperBPE tokenizers for each baseline model: one with matching parameter count and one with matching inference compute cost." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "text", + "content": "There are a couple interesting ways to visualize these results: in Figure 5, we hold the model size fixed and increase the number of training tokens, and in Figure 8, we hold the ratio of train tokens to model parameters fixed (inference compute matched will be fixed 0.7 times lower) and vary both the model size and the number of training tokens. The general trends observed from these results are that matching inference compute is almost universally the best, while matching parameter count tends to be worse than the baseline except in the undertrained regime, where it is better than the baseline. 
The differences between the different settings increase with overtraining, but decreases when scaling both model size and training tokens at the same time." + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "26" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 25 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 113, + 79, + 301, + 237 + ], + "blocks": [ + { + "bbox": [ + 113, + 79, + 301, + 237 + ], + "lines": [ + { + "bbox": [ + 113, + 79, + 301, + 237 + ], + "spans": [ + { + "bbox": [ + 113, + 79, + 301, + 237 + ], + "type": "image", + "image_path": "cc59d2530b2c5247d53eff05c8060a576208af2d40335dd18a2dadc84eec54fc.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 163, + 242, + 254, + 255 + ], + "lines": [ + { + "bbox": [ + 163, + 242, + 254, + 255 + ], + "spans": [ + { + "bbox": [ + 163, + 242, + 254, + 255 + ], + "type": "text", + "content": "(a) Superword density" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 316, + 79, + 496, + 237 + ], + "blocks": [ + { + "bbox": [ + 316, + 79, + 496, + 237 + ], + "lines": [ + { + "bbox": [ + 316, + 79, + 496, + 237 + ], + "spans": [ + { + "bbox": [ + 316, + 79, + 496, + 237 + ], + "type": "image", + "image_path": "e5c63215364f50b2bc6e27f939f21909279826c96d46946a07ccb15c4a36f755.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 333, + 242, + 470, + 255 + ], + "lines": [ + { + "bbox": [ + 333, + 242, + 470, + 255 + ], + "spans": [ + { + "bbox": [ + 333, + 242, + 470, + 255 + ], + "type": "text", + "content": "(b) Superword length distribution" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 104, + 262, + 506, + 330 + ], + "lines": [ + { + "bbox": [ + 104, + 262, + 506, + 330 + ], + "spans": [ + { + "bbox": [ + 104, + 262, + 506, + 330 + ], + "type": "text", + "content": "Figure 9: (Left) The number of superword tokens in a SuperBPE tokenizer, as a function of the transition point. A superword token is any token that violates the whitespace pretokensization rule from Stage 1. With an early transition point of " + }, + { + "bbox": [ + 104, + 262, + 506, + 330 + ], + "type": "inline_equation", + "content": "t = 60\\mathrm{K}" + }, + { + "bbox": [ + 104, + 262, + 506, + 330 + ], + "type": "text", + "content": ", about " + }, + { + "bbox": [ + 104, + 262, + 506, + 330 + ], + "type": "inline_equation", + "content": "85\\%" + }, + { + "bbox": [ + 104, + 262, + 506, + 330 + ], + "type": "text", + "content": " of the tokens learned in Stage 2 are superword tokens. 
For " + }, + { + "bbox": [ + 104, + 262, + 506, + 330 + ], + "type": "inline_equation", + "content": "t > 100\\mathrm{k}" + }, + { + "bbox": [ + 104, + 262, + 506, + 330 + ], + "type": "text", + "content": ", close to " + }, + { + "bbox": [ + 104, + 262, + 506, + 330 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 104, + 262, + 506, + 330 + ], + "type": "text", + "content": " of Stage 2 tokens are superwords. (Right) The distribution of superword token lengths in terms of number of words, for " + }, + { + "bbox": [ + 104, + 262, + 506, + 330 + ], + "type": "inline_equation", + "content": "t = 180\\mathrm{k}" + }, + { + "bbox": [ + 104, + 262, + 506, + 330 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 350, + 312, + 365 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 350, + 312, + 365 + ], + "spans": [ + { + "bbox": [ + 104, + 350, + 312, + 365 + ], + "type": "text", + "content": "C Analysis of SuperBPE Tokenizers" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 376, + 253, + 389 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 376, + 253, + 389 + ], + "spans": [ + { + "bbox": [ + 104, + 376, + 253, + 389 + ], + "type": "text", + "content": "C.1 Superword token analysis" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 398, + 504, + 465 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 398, + 504, + 465 + ], + "spans": [ + { + "bbox": [ + 104, + 398, + 504, + 465 + ], + "type": "text", + "content": "How many superword tokens are in SuperBPE tokenizers? While the second stage of the pretokenization curriculum allows learning of superword tokens, subword tokens can still be learned. Shown in Figure 9a, for transition points " + }, + { + "bbox": [ + 104, + 398, + 504, + 465 + ], + "type": "inline_equation", + "content": "t < 80\\mathrm{k}" + }, + { + "bbox": [ + 104, + 398, + 504, + 465 + ], + "type": "text", + "content": ", the number of superword tokens is relatively steady around 120k. Past " + }, + { + "bbox": [ + 104, + 398, + 504, + 465 + ], + "type": "inline_equation", + "content": "t > 100\\mathrm{k}" + }, + { + "bbox": [ + 104, + 398, + 504, + 465 + ], + "type": "text", + "content": ", almost all tokens learned in Stage 2 are superword tokens. Figure 9b shows the number of whitespace-delimited words in the superword tokens of SuperBPE with " + }, + { + "bbox": [ + 104, + 398, + 504, + 465 + ], + "type": "inline_equation", + "content": "t = 180\\mathrm{k}" + }, + { + "bbox": [ + 104, + 398, + 504, + 465 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 480, + 326, + 493 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 480, + 326, + 493 + ], + "spans": [ + { + "bbox": [ + 104, + 480, + 326, + 493 + ], + "type": "text", + "content": "C.2 Analysis of token frequencies in encoding" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 501, + 506, + 568 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 501, + 506, + 568 + ], + "spans": [ + { + "bbox": [ + 104, + 501, + 506, + 568 + ], + "type": "text", + "content": "We also analyze token frequency statistics under BPE versus SuperBPE tokenizers. Figure 10a shows the relation between token rank (in frequency) and frequency. 
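The rank-frequency comparison described here can be reproduced with a few lines of Python. The sketch below is illustrative only and not from the paper's code; `encode` stands in for any tokenizer's encode function, and the toy whitespace tokenizer at the bottom is a hypothetical placeholder.

```python
# Count token occurrences in an encoded corpus and order them by frequency,
# the kind of rank-frequency (Zipf-style) analysis discussed above.
from collections import Counter
from typing import Callable, Iterable, List, Tuple

def rank_frequency(corpus: Iterable[str],
                   encode: Callable[[str], List]) -> List[Tuple]:
    """Return (token, count) pairs sorted from most to least frequent."""
    counts = Counter()
    for doc in corpus:
        counts.update(encode(doc))
    return counts.most_common()

if __name__ == "__main__":
    # Toy whitespace "tokenizer", purely a placeholder for a real encoder.
    toy_encode = lambda s: s.split()
    for rank, (tok, cnt) in enumerate(
            rank_frequency(["the cat sat", "the dog sat on the mat"], toy_encode), 1):
        print(rank, tok, cnt)
```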
While tokens in BPE demonstrate a standard Zipfian relation, the slope of SuperBPE curves have a more shallow slope, meaning that the rate of decay in token frequency is smaller. The smaller proportion of tokens with very low counts may reduce prevalence and severity of glitch tokens (Rumbelow & Watkins, 2023; Land & Bartolo, 2024)." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 573, + 506, + 630 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 573, + 506, + 630 + ], + "spans": [ + { + "bbox": [ + 104, + 573, + 506, + 630 + ], + "type": "text", + "content": "Figure 10b shows the minimum number of tokens from the vocabulary needed to cover any given proportion of data. For BPE, the relation is striking—only " + }, + { + "bbox": [ + 104, + 573, + 506, + 630 + ], + "type": "inline_equation", + "content": "57\\%" + }, + { + "bbox": [ + 104, + 573, + 506, + 630 + ], + "type": "text", + "content": " of tokens are needed to encode " + }, + { + "bbox": [ + 104, + 573, + 506, + 630 + ], + "type": "inline_equation", + "content": "99\\%" + }, + { + "bbox": [ + 104, + 573, + 506, + 630 + ], + "type": "text", + "content": " of the data! The remaining tokens make up a long tail of infrequent tokens. In contrast, SuperBPE tokenizers make better use of the vocabulary. For " + }, + { + "bbox": [ + 104, + 573, + 506, + 630 + ], + "type": "inline_equation", + "content": "t = 80k" + }, + { + "bbox": [ + 104, + 573, + 506, + 630 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 573, + 506, + 630 + ], + "type": "inline_equation", + "content": "t = 180k" + }, + { + "bbox": [ + 104, + 573, + 506, + 630 + ], + "type": "text", + "content": ", this statistic is " + }, + { + "bbox": [ + 104, + 573, + 506, + 630 + ], + "type": "inline_equation", + "content": "90\\%" + }, + { + "bbox": [ + 104, + 573, + 506, + 630 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 573, + 506, + 630 + ], + "type": "inline_equation", + "content": "70\\%" + }, + { + "bbox": [ + 104, + 573, + 506, + 630 + ], + "type": "text", + "content": " of tokens, respectively." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 644, + 359, + 657 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 644, + 359, + 657 + ], + "spans": [ + { + "bbox": [ + 104, + 644, + 359, + 657 + ], + "type": "text", + "content": "C.3 Distributional Distortion at the Prompt Boundary" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 665, + 504, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 665, + 504, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 665, + 504, + 733 + ], + "type": "text", + "content": "Prior work (Lundberg, 2023; Phan et al., 2024) has shown that LMs using BPE tokenizers may produce distorted generations due to the forced partition in tokenization between a prompt and its completion. This issue stems from the fact that users typically desire completions conditioned on a text prompt. The natural approach to obtaining such completions is to take the prompt, tokenize it with the proper tokenizer, and then sample a completion of the resulting token sequence from the LM." 
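The data-coverage statistic mentioned above (the minimum number of most-frequent tokens needed to cover a given fraction of the encoded data, e.g. 57% of BPE tokens covering 99% of the data) can be computed as in the following sketch. This is an illustrative reconstruction, not the authors' evaluation code.

```python
# Smallest number of most-frequent tokens whose occurrences cover a given
# fraction of all tokens in an encoded corpus.
from collections import Counter
from typing import Iterable

def min_tokens_for_coverage(token_ids: Iterable, fraction: float = 0.99) -> int:
    counts = Counter(token_ids)
    total = sum(counts.values())
    covered, needed = 0, 0
    for _, cnt in counts.most_common():
        covered += cnt
        needed += 1
        if covered >= fraction * total:
            break
    return needed

if __name__ == "__main__":
    toy_ids = [1, 1, 1, 1, 2, 2, 3, 4]             # toy encoded corpus
    print(min_tokens_for_coverage(toy_ids, 0.75))  # -> 2 (tokens 1 and 2 cover 6 of 8)
```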
+ } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "27" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 26 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 112, + 79, + 302, + 236 + ], + "blocks": [ + { + "bbox": [ + 112, + 79, + 302, + 236 + ], + "lines": [ + { + "bbox": [ + 112, + 79, + 302, + 236 + ], + "spans": [ + { + "bbox": [ + 112, + 79, + 302, + 236 + ], + "type": "image", + "image_path": "8aeab79b4aa06d677e7e186c851179525d53cd6bf84089fe9de1423c2d97e1a4.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 143, + 242, + 274, + 255 + ], + "lines": [ + { + "bbox": [ + 143, + 242, + 274, + 255 + ], + "spans": [ + { + "bbox": [ + 143, + 242, + 274, + 255 + ], + "type": "text", + "content": "(a) Token frequency distribution" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 306, + 79, + 496, + 237 + ], + "blocks": [ + { + "bbox": [ + 306, + 79, + 496, + 237 + ], + "lines": [ + { + "bbox": [ + 306, + 79, + 496, + 237 + ], + "spans": [ + { + "bbox": [ + 306, + 79, + 496, + 237 + ], + "type": "image", + "image_path": "8dd245757e3a67c7d585b6c9e039884acbbdc7b9e840504fdd32355ec36bea42.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 365, + 242, + 437, + 255 + ], + "lines": [ + { + "bbox": [ + 365, + 242, + 437, + 255 + ], + "spans": [ + { + "bbox": [ + 365, + 242, + 437, + 255 + ], + "type": "text", + "content": "(b) Data covering" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 104, + 262, + 504, + 308 + ], + "lines": [ + { + "bbox": [ + 104, + 262, + 504, + 308 + ], + "spans": [ + { + "bbox": [ + 104, + 262, + 504, + 308 + ], + "type": "text", + "content": "Figure 10: (Left) Token counts when ordered by frequency. The rate of decay in token frequency is smaller. (Right) The minimum number of tokens needed to cover a given proportion of the data. SuperBPE tokenizers make better use of the vocabulary, while BPE tokenizers have a long tail of infrequent tokens." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 328, + 506, + 407 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 328, + 506, + 407 + ], + "spans": [ + { + "bbox": [ + 104, + 328, + 506, + 407 + ], + "type": "text", + "content": "For a simple example of how this can go wrong, consider a tokenizer with base vocabulary of A and B and a single merge forming the token AB. Let's suppose we trained a model using this tokenizer on the strings \"AA\", \"AB\", and \"BB\" with equal proportions. If we condition on the text prefix \"A\", there are two equally probable continuations: \"A\" and \"B\". However, A is the only valid completion of the token prefix A, since the token B never follows the token A during training. 
In other words, the prompt-completion pair (A,B) is canonically tokenized using a token that crosses the boundary between the prompt and the completion." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 411, + 506, + 501 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 411, + 506, + 501 + ], + "spans": [ + { + "bbox": [ + 104, + 411, + 506, + 501 + ], + "type": "text", + "content": "While this problem is shared by all BPE tokenizers, it can be partially mitigated by pre-tokensization: if the prompt and the completion are separated during the pretokenization step, then it is impossible for a token to cross the boundary. This fix tends to work well for English, where the completion is typically expected to begin with whitespace, so whitespace pretokenization would apply. However, there are many settings where whitespace pretokenization cannot fix the underlying issue, including natural languages that do not use whitespace to separate words (like Chinese and Japanese), programming languages, and constrained generation (Lundberg, 2023; Ribeiro, 2023)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 505, + 504, + 572 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 505, + 504, + 572 + ], + "spans": [ + { + "bbox": [ + 104, + 505, + 504, + 572 + ], + "type": "text", + "content": "Several fixes for this issue have been proposed: at training time, token merges can be randomly dropped (Provilkov et al., 2020; Sims et al., 2025; DeepSeek-AI, 2025) to expose LMs to the internal makeup of tokens; at inference time, options include token healing (Lundberg, 2023), algorithmic correction (Phan et al., 2024), and enumeration of all relevant segmentations of the prompt (Vieira et al., 2024). We leave a detailed comparison of these techniques to future work." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 576, + 504, + 601 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 576, + 504, + 601 + ], + "spans": [ + { + "bbox": [ + 104, + 576, + 504, + 601 + ], + "type": "text", + "content": "Additionally, the issue does not apply at all to models that separate the user's input from the model's response using special tokens, as is typical for chat models." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 616, + 240, + 628 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 616, + 240, + 628 + ], + "spans": [ + { + "bbox": [ + 105, + 616, + 240, + 628 + ], + "type": "text", + "content": "D Other Related Work" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 641, + 405, + 653 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 641, + 405, + 653 + ], + "spans": [ + { + "bbox": [ + 105, + 641, + 405, + 653 + ], + "type": "text", + "content": "Please see Mielke et al. (2021) for a survey of subword tokenization." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": "Pretokenization Pretokenization defines how the text is split in order to prevent certain pairs of tokens from being merged. GPT-2 (Radford et al., 2019) introduced a regular expression (regex) which defines the pretokenization pattern. 
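The A/B example above can be made concrete with a toy script. The snippet below is a deliberately simplified illustration (greedy longest-match lookup rather than a real BPE merge procedure) of how tokenizing the prompt and completion together uses a token that crosses the boundary, whereas conditioning on the tokenized prompt forces the two-token segmentation.

```python
# Vocabulary with a merged token "AB"; longest entries are tried first.
VOCAB = ["AB", "A", "B"]

def greedy_tokenize(text: str) -> list:
    """Toy longest-match tokenizer (a stand-in for canonical BPE segmentation)."""
    tokens, i = [], 0
    while i < len(text):
        for tok in VOCAB:
            if text.startswith(tok, i):
                tokens.append(tok)
                i += len(tok)
                break
        else:
            raise ValueError(f"no vocabulary entry matches at position {i}")
    return tokens

print(greedy_tokenize("AB"))         # ['AB']      -- the merged token crosses the boundary
print(greedy_tokenize("A") + ["B"])  # ['A', 'B']  -- what conditioning on the prompt yields
```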
These regex strings have gained complexity over time; GPT-3.5 limits the number of digits in numerical tokens to 3, and allows single punctuation to be merged with the start of words (presumably to accommodate code, as it allows .get to be a single token). Prior work has shown that, for" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 25, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "28" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 27 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 106, + 79, + 504, + 257 + ], + "blocks": [ + { + "bbox": [ + 106, + 79, + 504, + 257 + ], + "lines": [ + { + "bbox": [ + 106, + 79, + 504, + 257 + ], + "spans": [ + { + "bbox": [ + 106, + 79, + 504, + 257 + ], + "type": "image", + "image_path": "af554d01661668c30fdd6aecc729da3932b6d036243cdf6b02f2c9f548475d99.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 267, + 504, + 292 + ], + "lines": [ + { + "bbox": [ + 104, + 267, + 504, + 292 + ], + "spans": [ + { + "bbox": [ + 104, + 267, + 504, + 292 + ], + "type": "text", + "content": "Figure 11: Token counts when ordered by token ID, which reflects the order in which tokens were learned in tokenizer training." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 312, + 506, + 380 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 312, + 506, + 380 + ], + "spans": [ + { + "bbox": [ + 104, + 312, + 506, + 380 + ], + "type": "text", + "content": "instance, digit pretokenization choices (Nogueira et al., 2021; Thawani et al., 2021; Singh & Strouse, 2024) can significantly impact arithmetic performance. It is also likely that pretokenization affects different languages differently (Velayuthan & Sarveswaran, 2025; Ahia et al., 2023), due to natural statistics of the average word length, which acts as an upper bound on encoding efficiency in that language under subword tokenization. Nonetheless, the effectiveness of many pretokenization choices have not been thoroughly studied." 
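As a concrete illustration of one such pretokenization choice, the sketch below splits digit runs into chunks of at most three digits before any merges are learned, in the spirit of the 3-digit limit mentioned above; production pretokenization regexes are more elaborate, so this is only an approximation.

```python
import re

def pretokenize_digits(text: str) -> list:
    """Split text into runs of at most three digits or runs of non-digits."""
    return re.findall(r"\d{1,3}|\D+", text)

print(pretokenize_digits("pi is 314159 approx"))
# ['pi is ', '314', '159', ' approx']
```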
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 391, + 504, + 415 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 391, + 504, + 415 + ], + "spans": [ + { + "bbox": [ + 104, + 391, + 504, + 415 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 104, + 391, + 504, + 415 + ], + "type": "text", + "content": "-gram language models Our work is loosely related to " + }, + { + "bbox": [ + 104, + 391, + 504, + 415 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 104, + 391, + 504, + 415 + ], + "type": "text", + "content": "-gram LMs, which incorporate " + }, + { + "bbox": [ + 104, + 391, + 504, + 415 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 104, + 391, + 504, + 415 + ], + "type": "text", + "content": "-gram statistics into the next-word prediction (Brants et al., 2007; Liu et al., 2024a)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 426, + 506, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 426, + 506, + 495 + ], + "spans": [ + { + "bbox": [ + 104, + 426, + 506, + 495 + ], + "type": "text", + "content": "Internal representation of semantic units Previous work has showed that the early layers of the LM may \"aggregate\" information over multi-token entities (e.g., [New, York]) into the last token's (e.g., York) hidden representation (Meng et al., 2022; Kaplan et al., 2025; Lad et al., 2024). This suggests that LMs naturally learn multi-word representations, and segmentating text into more semantically cohesive units at the input level (e.g., having New York as a single token) may simplify this process." + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "29" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 28 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 105, + 244, + 299, + 388 + ], + "blocks": [ + { + "bbox": [ + 105, + 244, + 299, + 388 + ], + "lines": [ + { + "bbox": [ + 105, + 244, + 299, + 388 + ], + "spans": [ + { + "bbox": [ + 105, + 244, + 299, + 388 + ], + "type": "image", + "image_path": "3c79a7e71a91e682ec753425b6d4ace409c89e385949a63f471c66eb47df09c4.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 309, + 244, + 504, + 388 + ], + "blocks": [ + { + "bbox": [ + 309, + 244, + 504, + 388 + ], + "lines": [ + { + "bbox": [ + 309, + 244, + 504, + 388 + ], + "spans": [ + { + "bbox": [ + 309, + 244, + 504, + 388 + ], + "type": "image", + "image_path": "83c7ac0a33955fc2008194ff3470f2802a9403c0140208f940882c81d5503629.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 105, + 398, + 299, + 543 + ], + "blocks": [ + { + "bbox": [ + 105, + 398, + 299, + 543 + ], + "lines": [ + { + "bbox": [ + 105, + 398, + 299, + 543 + ], + "spans": [ + { + "bbox": [ + 
105, + 398, + 299, + 543 + ], + "type": "image", + "image_path": "1d69a62d6c61c9405fb5435c05854a3230f8599f4d8c605b56af5368930749bf.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 112, + 552, + 498, + 565 + ], + "lines": [ + { + "bbox": [ + 112, + 552, + 498, + 565 + ], + "spans": [ + { + "bbox": [ + 112, + 552, + 498, + 565 + ], + "type": "text", + "content": "Figure 12: Performance during pretraining for a subset of tasks in our evaluation suite." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 312, + 399, + 505, + 543 + ], + "blocks": [ + { + "bbox": [ + 312, + 399, + 505, + 543 + ], + "lines": [ + { + "bbox": [ + 312, + 399, + 505, + 543 + ], + "spans": [ + { + "bbox": [ + 312, + 399, + 505, + 543 + ], + "type": "image", + "image_path": "fb634c50fcf9f347aa3bb35ced7dfb51e0f1b27248ceb8eaf70bf7a1af7fee11.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 317, + 38 + ], + "type": "text", + "content": "Published as a conference paper at COLM 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "text", + "content": "30" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 29 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13436/53224727-d9f9-435a-ad74-c91800bf8f34_content_list.json b/data/2025/2503_13xxx/2503.13436/53224727-d9f9-435a-ad74-c91800bf8f34_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..5623f06085b77ddd137df6ffd99367372fc91a77 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13436/53224727-d9f9-435a-ad74-c91800bf8f34_content_list.json @@ -0,0 +1,1985 @@ +[ + { + "type": "text", + "text": "Unified Autoregressive Visual Generation and Understanding with Continuous Tokens", + "text_level": 1, + "bbox": [ + 222, + 122, + 774, + 172 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Lijie Fan $^{1,*}$ Luming Tang $^{1,*}$ Siyang Qin $^{1,*}$ Tianhong Li $^{2}$ Xuan Yang $^{1}$ Siyuan Qiao $^{1}$ Andreas Steiner $^{1}$ Chen Sun $^{1}$ Yuanzhen Li $^{1}$ Tao Zhu $^{1}$ Michael Rubinstein $^{1}$ Michalis Raptis $^{1}$ Deqing Sun $^{1,\\dagger}$ Radu Soricut $^{1,\\dagger}$ Google DeepMind ${}^{2}\\mathrm{MIT}\\quad {*},$ equal contribution", + "bbox": [ + 186, + 223, + 821, + 290 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/fe57fbb071286d72cde1ae0a4dc870c142112e69c0e374902ceb8453aa9cf6c0.jpg", + "image_caption": [ + "Figure 1: Generated images from our UniFluid autoregressive model after aesthetic finetuning." + ], + "image_footnote": [], + "bbox": [ + 178, + 304, + 815, + 665 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 459, + 686, + 537, + 702 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "We present UniFluid, a unified autoregressive framework for joint visual generation and understanding leveraging continuous visual tokens. 
Our unified autoregressive architecture processes multimodal image and text inputs, generating discrete tokens for text and continuous tokens for image. We find though there is an inherent trade-off between the image generation and understanding task, a carefully tuned training recipe enables them to improve each other. By selecting an appropriate loss balance weight, the unified model achieves results comparable to or exceeding those of single-task baselines on both tasks. Furthermore, we demonstrate that employing stronger pre-trained LLMs and random-order generation during training is important to achieve high-fidelity image generation within this unified framework. Built upon the Gemma model series, UniFluid exhibits competitive performance across both image generation and understanding, demonstrating strong transferability to various downstream tasks, including image editing for generation, as well as visual captioning and question answering for understanding.", + "bbox": [ + 228, + 710, + 767, + 904 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2503.13436v1 [cs.CV] 17 Mar 2025", + "bbox": [ + 22, + 260, + 57, + 705 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "Tech Report.", + "bbox": [ + 171, + 922, + 250, + 936 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 174, + 89, + 310, + 106 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Large Language Models (LLMs) have recently advanced from text-centric architectures, such as BERT [11] and GPT [35], toward multimodal systems capable of understanding and generating content across different modalities. GPT-3 [5] and PaLM [9] show that scaling language models leads to emergent capabilities, while Flamingo [2] further demonstrates that incorporating visual inputs facilitates unified multimodal reasoning. This trend toward unified vision-language model—using a single model for diverse tasks of visual understanding and generation—has promising potential for leveraging knowledge and reasoning abilities that transfer across different vision and language tasks, ultimately facilitating more robust and generalizable multimodal representation and modeling capabilities.", + "bbox": [ + 174, + 122, + 823, + 247 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Motivated by the advantages and strong scaling properties of autoregressive models, coupled with their simplicity, we investigate a pure autoregressive framework for unified visual generation and understanding, without the limitations introduced by vector quantization (VQ). In this paper, we introduce UniFluid, a unified framework that leverages continuous visual tokens within an autoregressive architecture to jointly handle vision-language generation and understanding tasks. Building upon pre-trained Gemma [48] on large-scale text corpus, UniFluid unlocks powerful visual generation and understanding capabilities through training with paired image-text data, and further allows these two tasks to mutually benefit each other within a single architecture.", + "bbox": [ + 174, + 253, + 823, + 364 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Specifically, UniFluid adopts a unified autoregressive framework where both text and continuous visual inputs are embedded as tokens in the same space, enabling seamless joint training of image generation and understanding tasks. 
UniFluid integrates a continuous tokenizer [12, 21] for image generation and a pre-trained SigLIP [59] image encoder for understanding tasks, while textual inputs are processed using a standard SentencePiece tokenizer [20]. The resulting multimodal sequences are modeled autoregressively using Gemma [48] as the underlying transformer backbone. Task-specific prediction heads—a diffusion-based head for image generation and a cross-entropy head for text generation—ensure effective modality-specific training and inference, enabling UniFluid to efficiently learn shared representations that mutually enhance its generation and understanding capabilities.", + "bbox": [ + 174, + 369, + 823, + 494 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Our experiments demonstrate several key advantages of the proposed unified training strategy. We find though there is a trade-off between the two tasks, a carefully tuned training recipe can allow the tasks to support each other and outperform the single-task baselines. Effectively balancing the loss between the tasks allows a single model that performs both with results superior to or on par with single-task models. Moreover, the choice of pre-trained LLM backbone significantly impacts visual generation performance. We also find that while employing random generation order is essential for high-quality image synthesis, it is less critical for understanding tasks. Finally, our unified pre-trained models show strong generalization and transferability, achieving compelling results in downstream applications, including image editing and various vision-language understanding benchmarks.", + "bbox": [ + 174, + 501, + 823, + 626 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Related Works", + "text_level": 1, + "bbox": [ + 174, + 648, + 328, + 666 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Multimodal Large Language Models. Multimodal Large Language Models [2, 10, 23, 26, 25, 3, 42] have shown significant performance in visual understanding tasks. Flamingo [2] adopted a frozen LLM and vision encoder, utilizing perceiver with cross-attention to bridge the modalities. LLaVA [26, 25] proposed instruction tuning over pre-trained LLMs with multimodal inputs to align a pre-trained image encoder into the LLM's embedding space, thereby enabling it with visual understanding and instruction following capabilities. MiniGPT-4 [62] and mPLUG-Owl [56] have shown vision encoders can be connected to LLMs through projection layers, demonstrating sophisticated visual reasoning capabilities. The PaliGemma [3, 42] series built upon the Gemma [47, 48] model family to develop versatile vision-language models capable of strong transfer to diverse downstream visual understanding tasks.", + "bbox": [ + 174, + 681, + 823, + 821 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Autoregressive Image Generation. While diffusion models [41, 36] have achieved impressive success in image generation, autoregressive image generation methods have also shown significant development, driven by their simplicity and closeness to LLM training paradigms. A large body of research centers on tokenizing images into discrete tokens and applying autoregressive objectives to these discrete representations. Notable examples include Parti [57] and Muse [6]. 
[49] proposes an approach that operates on image scales, progressively refining resolutions from coarse to fine through", + "bbox": [ + 174, + 827, + 823, + 911 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 493, + 936, + 503, + 946 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/21fcf027d2bdd513ed513d39f97fd528b50bf3f4df10285dbb085df06e72a1cd.jpg", + "image_caption": [ + "Figure 2: High-level illustration of UniFluid. UniFluid performs joint training of image generation and understanding tasks through next token prediction. For image embeddings, we use a VAE image tokenizer for generation, and a SigLIP image encoder for understanding. BOI/BOS stands for beginning of Image/Sentence." + ], + "image_footnote": [], + "bbox": [ + 174, + 95, + 823, + 366 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "next-scale prediction. Alternatively, works such as MAR [21], Fluid [12], and techniques employing per-token diffusion heads on top of LLM-predicted embeddings have explored autoregressive image generation with continuous visual tokens.", + "bbox": [ + 169, + 439, + 823, + 481 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Unified Multimodal Models. There is growing research interests in unifying visual generation and understanding within a single model. VQ-based models, such as Chameleon [46], Emu [44], and Janus [54, 8], propose encoding visual inputs into discrete tokens and unifying tasks into next-token prediction within this discrete token space. Models with hybrid training targets, such as Transfusion [61, 38] and Show-O [55], aim to unify next-token prediction objectives with diffusion objectives within a single framework. MetaMorph [50] maintains the autoregressive objective by regressing visual SigLIP [59] features using an LLM, but necessitates a separate diffusion model to decode the predicted latent features into images. Our approach distinguishes itself by performing per-token autoregressive generation using continuous visual tokens. This maintains the next-token prediction objective, while not being limited by the vector quantized tokens. [45] also explores using continuous tokens to generate multimodal outputs.", + "bbox": [ + 169, + 487, + 826, + 640 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3 Method", + "text_level": 1, + "bbox": [ + 171, + 659, + 272, + 674 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In this section, we illustrate the architecture of our UniFluid model. The model expects both image and text sequences as input and achieves joint training on both generation and understanding tasks, using next-token prediction as its training objective.", + "bbox": [ + 169, + 689, + 823, + 733 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1 Unified Autoregressive Modeling with Continuous Visual Tokens", + "text_level": 1, + "bbox": [ + 169, + 747, + 660, + 762 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Our approach leverages the autoregressive paradigm to unify visual understanding and generation within a single framework. Given an ordered sequence of tokens $X = \\{x^{1},\\dots,x^{n}\\}$ , the autoregressive model [34, 14, 51, 52] factorizes the joint probability distribution as a product of conditional probabilities, effectively framing the generation task as a sequential \"next token prediction\" problem: $p(X) = \\prod_{i = 1}^{n}p(x^{i}\\mid x^{1},\\dots,x^{i - 1})$ . 
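The factorization above corresponds directly to summing per-step conditional log-probabilities, which is the next-token-prediction objective. The following minimal sketch is not UniFluid code; `log_prob_next` is a stand-in for any autoregressive model.

```python
# Chain-rule factorization: log p(X) = sum_i log p(x_i | x_1, ..., x_{i-1}).
import math
from typing import Callable, Sequence

def sequence_log_prob(tokens: Sequence,
                      log_prob_next: Callable[[Sequence, object], float]) -> float:
    return sum(log_prob_next(tokens[:i], tokens[i]) for i in range(len(tokens)))

if __name__ == "__main__":
    # Toy model: uniform next-token distribution over a 4-symbol vocabulary.
    uniform = lambda prefix, tok: math.log(1.0 / 4.0)
    print(sequence_log_prob(["x1", "x2", "x3"], uniform))  # 3 * log(1/4)
```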
As shown in MAR [21] and Fluid [12], this autoregressive formulation is applicable for both discrete tokens and continuous tokens. In UniFluid, we exploit this property to enable the generation of continuous visual tokens under the unified decoder-only framework. Our model is modality-agnostic; both text and image tokens are treated as elements within a long unified sequence, and their respective logits are predicted iteratively in an autoregressive manner by the backbone transformer.", + "bbox": [ + 169, + 772, + 826, + 912 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 493, + 935, + 503, + 946 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "To accommodate the distinct nature of text and image modalities, we employ modality-specific prediction heads to calculate the appropriate loss functions and sampling for each modality. This unified approach allows the model to learn a shared representation space through the unified training procedure, facilitating synergistic learning and enabling seamless transitions between visual generation and understanding.", + "bbox": [ + 169, + 90, + 823, + 161 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2 Architecture", + "text_level": 1, + "bbox": [ + 171, + 181, + 302, + 196 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "As depicted in Figure 2, UniFluid employs a unified framework where both text and image inputs are tokenized and projected into a shared embedding space. This allows it to leverage a decoder-only transformer as the core backbone for the unified autoregressive task. Text inputs are tokenized using the SentencePiece tokenizer. This results in discrete tokens with a vocabulary size of $V$ . For image generation, images are encoded into continuous visual tokens using a continuous variational autoencoder. To facilitate the process of image understanding, following PaliGemma, we used SigLIP as a separate image encoder to extract high-level information from the visual inputs. UniFluid consists of a classification head to convert the transformer's text logits into a categorical distribution, and a diffusion head to convert image logits into a per-token probability distribution.", + "bbox": [ + 169, + 209, + 823, + 335 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The inherent structure of text as a linear sequence aligns well with the standard 1D positional embeddings of the LLM, which are sufficient for text modeling and image understanding tasks. However, image tokens possess a 2D spatial structure. To capture this inherent 2D nature, we incorporate learnable 2D positional embeddings, which are added to the image token embeddings. Meanwhile, inspired by [58], to achieve random order generation, a position embedding for the next predicted token is also added to each image token. To enhance the model's ability to initiate and guide image generation, we prepend a \"Beginning of Image\" (BOI) token to the sequence of continuous image tokens. This BOI token serves as a distinct signal, indicating the start of the visual generation process. 
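A hedged sketch of the positional scheme described in this paragraph is given below: learnable 2D position embeddings added to the image token embeddings, plus an embedding for the position of the next token to be predicted, which is what supports random-order generation. The module name, embedding dimension, and the wrap-around at the final position are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn as nn

class ImagePositionEmbedding(nn.Module):
    def __init__(self, grid: int = 16, dim: int = 2048):  # 16x16 = 256 image tokens; dim is arbitrary here
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(grid * grid, dim))       # 2D position of the current token
        self.next_pos = nn.Parameter(torch.zeros(grid * grid, dim))  # position of the token to predict next
        nn.init.normal_(self.pos, std=0.02)
        nn.init.normal_(self.next_pos, std=0.02)

    def forward(self, img_tokens: torch.Tensor, order: torch.Tensor) -> torch.Tensor:
        # img_tokens: [B, N, dim] token embeddings already permuted into `order`
        # order:      [B, N] flattened grid indices giving the generation order
        cur = self.pos[order]
        # For the sketch, the last position simply wraps around to the first.
        nxt = self.next_pos[torch.roll(order, shifts=-1, dims=1)]
        return img_tokens + cur + nxt
```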
Given that the sequence length for generated image tokens is predefined (256 tokens for 256x256 images), an explicit \"End of Image\" token is unnecessary in our case.", + "bbox": [ + 169, + 340, + 826, + 481 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4 Implementation", + "text_level": 1, + "bbox": [ + 171, + 505, + 339, + 522 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4.1 Training", + "text_level": 1, + "bbox": [ + 171, + 537, + 274, + 553 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Per-token Classification Head for Discrete Text Tokens. We employ the same SentencePiece tokenizer as Gemma for text tokenization. The transformer's output logits for text are transformed into categorical probability distributions over the vocabulary, and we apply the standard cross-entropy loss, denoted as $L_{Text}$ , to optimize the prediction of these discrete text tokens.", + "bbox": [ + 169, + 566, + 823, + 622 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Per-token Diffusion Head for Continuous Visual Tokens. We adopt the same continuous tokenizer as Fluid to embed 256x256 images into 32x32x4 continuous tokens, and use a patch size of 2 to merge 4 tokens into one. To model the per-token distribution of these predicted continuous visual tokens, we employ a lightweight MLP as a diffusion head. We adopt the same diffusion process and loss function, denoted as $\\mathcal{L}_{\\text{Visual}}$ , as in [21, 12], which is specifically tailored for continuous visual token prediction. For the understanding task, the input image resolution is $224 \\times 224$ , and we use SigLIP as the image encoder. Note that the SigLIP features are only used as prefix for the understanding task during training, and no losses are added on top of them.", + "bbox": [ + 169, + 628, + 823, + 739 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Task-Specific Training Configurations.", + "text_level": 1, + "bbox": [ + 171, + 744, + 444, + 760 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Image Understanding: For image understanding tasks, the model is provided with image embeddings and question tokens as input prefix. Following PaliGemma, we apply a bidirectional attention mask to both image and question tokens. A causal attention mask is applied to the answer tokens, ensuring that the model only attends to previous answer tokens during autoregressive generation. The text token loss, $\\mathcal{L}_{\\text{Text}}$ , is calculated specifically on the answer text tokens.", + "bbox": [ + 169, + 766, + 823, + 835 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Image Generation: Conversely, for image generation tasks, text prompts are provided as conditioning inputs. To maintain the appropriate information flow, we employ a bidirectional attention mask for the text prompt tokens, enabling them to attend to all other text tokens. A causal attention mask is applied to the image tokens, ensuring that each image token only attends to preceding image tokens. The visual token loss, $L_{\\text{Visual}}$ , is calculated on the generated image tokens.", + "bbox": [ + 169, + 842, + 826, + 912 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 493, + 935, + 504, + 946 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Unified Loss Function. 
The total training loss for UniFluid is a weighted sum of the text token prediction loss and the visual token prediction loss, defined as: $\\mathcal{L} = \\mathcal{L}_{\\text{Visual}} + \\lambda \\cdot \\mathcal{L}_{\\text{Text}}$ where $\\lambda$ is a hyper-parameter that represents the weight assigned to the text token prediction loss, allowing us to balance the contributions of the two modalities during training.", + "bbox": [ + 169, + 90, + 823, + 148 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Training Details. We train the model with a batch size of 2048 using the AdamW optimizer with a learning rate of 1e-4. The training process consists of 1 million steps with a constant learning rate schedule and a warm-up period of 65k steps. Following [58], for image generation, the image token order is randomly permuted during the initial 300k training iterations, then linearly anneals to raster between 300k and 600k iterations, and finally sticks to raster order for the subsequent 400k steps. Except for the comparison with Gemma-1, we use the Gemma-2 model series as the backbone transformer for all experiments in this paper.", + "bbox": [ + 169, + 152, + 826, + 252 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.2 Inference", + "text_level": 1, + "bbox": [ + 171, + 267, + 279, + 282 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "For text decoding, we employ categorical sampling for each generated text prediction. The predicted token is then selected from the vocabulary $V$ based on the sampled probability distribution. We use the same decoding strategy as PaliGemma. Greedy decoding is used for all tasks except for downstream COCOcap (beam search $n = 2$ ) and TextCaps (beam search $n = 3$ ). For image decoding, we use a diffusion sampling process to generate continuous visual tokens with diffusion sampling step set to 100 in our implementation.", + "bbox": [ + 169, + 292, + 823, + 377 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "As both text and image generation are performed at the token level, with predictions occurring one token at a time under a causal attention mechanism, we can efficiently utilize Key-Value (KV) caching. This optimization is applicable to both discrete text tokens and continuous visual tokens, significantly accelerating the inference process.", + "bbox": [ + 169, + 383, + 826, + 439 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "5 Experiment", + "text_level": 1, + "bbox": [ + 171, + 458, + 305, + 474 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "5.1 Setup", + "text_level": 1, + "bbox": [ + 171, + 489, + 254, + 506 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Datasets. We train our models using the WebLI dataset [7], a collection of high-quality image-text pairs. For visual generation, we follow Fluid to employ a WebLI subset of image and text descriptions specifically for the generation task. For visual understanding, consistent with PaliGemma, we leverage the image-text description pairs and image question-answer pairs that are also available within WebLI.", + "bbox": [ + 169, + 516, + 826, + 571 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Evaluation Metrics. We assess the image generation quality using the FID [16] score on 30K images of the MS-COCO [24] training set and evaluate performance on the GenEval [13] benchmark, where we use the original text prompt without any rewrites. 
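The unified objective described above can be written compactly as follows. This is a minimal sketch under assumed tensor shapes, not the actual training code; the visual term is taken as a precomputed scalar from the per-token diffusion head, and lambda = 0.005 is the value recommended later in the paper.

```python
import torch
import torch.nn.functional as F

def unified_loss(text_logits: torch.Tensor,   # [B, T, vocab] logits for answer tokens
                 text_targets: torch.Tensor,  # [B, T] target token ids
                 visual_loss: torch.Tensor,   # scalar loss from the per-token diffusion head
                 lam: float = 0.005) -> torch.Tensor:
    """L = L_visual + lambda * L_text, the weighted sum used for joint training."""
    text_loss = F.cross_entropy(text_logits.flatten(0, 1), text_targets.flatten())
    return visual_loss + lam * text_loss
```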
For evaluating visual understanding performance, we use the caption CIDEr score on MS-COCO. Given our similar training dataset and setup to PaliGemma, we also evaluate the finetuning performance on a variety of captioning and question answering tasks. We report the average score on 4 Captioning tasks, including COCOcap [24], Screen2Words [53], TextCaps [39], WidgetCap [22], and 20 QA tasks, including OKVQA [29], AOKVQA-MC [37], AOKVQA-DA [37], GQA [17], NLVR2 [43], AI2D [19], ScienceQA [28], RSVQA-lr [27], RSVQA-hr (test/test2) [27], ChartQA (human/aug) [30], VizWizVQA [15], TallyQA (simple/complex) [1], CountBenchQA [3], TextVQA [40], DocVQA [32], InfoVQA [31], ST-VQA [4].", + "bbox": [ + 169, + 578, + 826, + 731 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In the following sections, we present the experimental results obtained under different configurations of UniFluid, providing insights into the relationship between the two tasks and highlighting key design choices for UniFluid training.", + "bbox": [ + 169, + 736, + 823, + 779 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "5.2 Main Results", + "text_level": 1, + "bbox": [ + 171, + 795, + 305, + 809 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Unified Training Improves Generation Performance. To evaluate the effectiveness of the unified training framework and determine whether unified training offers advantages compared to training separate models for different tasks, we perform controlled experiments to analyze the performance of models trained with a single task.", + "bbox": [ + 169, + 821, + 823, + 878 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We first compare the visual generation performance of the model trained under the unified training objective with the performance of a text-to-image model (T2I only), trained solely with the visual", + "bbox": [ + 169, + 883, + 823, + 912 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 493, + 935, + 504, + 946 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/0c74970ac44a50ef2f5559ce53b6e18bbed02e637bc1ab1b59f7a8e7900b29bb.jpg", + "table_caption": [ + "Table 1: Unified training achieves better generation performance than text-to-image only training. We evaluate the performance using MS-COCO zero-shot FID and GenEval score." + ], + "table_footnote": [], + "table_body": "
Training Target | Size | FID ↓ | GenEval ↑
T2I only | 0.7B | 9.71 | 0.50
Unified | 0.7B | 8.39 | 0.52
T2I only | 2B | 7.88 | 0.59
Unified | 2B | 7.20 | 0.59
", + "bbox": [ + 348, + 130, + 645, + 212 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "autoregressive objective for the generation task. We ensure that the total number of visual tokens for training is the same for the visual generation loss in both the unified model training and text-to-image only training scenarios. The generation performance comparison is presented in Table 1. The unified model achieves better performance compared to the T2I only model, despite both models having observed the same number of tokens for the visual generation task. This suggests that unified model training can be beneficial for visual generation tasks, and that visual understanding ability has the potential to unlock enhanced visual generation quality.", + "bbox": [ + 169, + 239, + 823, + 338 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Trade-off Between Generation and Understanding. We also investigate whether the visual generation task can contribute to improved visual understanding performance. In the UniFluid unified training setup, the hyperparameter $\\lambda$ controls the balance between the losses applied to image tokens and text tokens.", + "bbox": [ + 169, + 344, + 823, + 398 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In Table 3 and Figure 3, we present the understanding and generation results with varying $\\lambda$ of the 0.7B model. We compare the transfer performance to downstream understanding tasks between the unified model with different $\\lambda$ and a image-to-text model (I2T only), trained solely with the image understanding objective. Within the unified training setup, a trade-off exists between visual generation and understanding tasks, which can be effectively controlled by adjusting the loss mixing weight, $\\lambda$ . While increasing $\\lambda$ can improve image understanding performance, ultimately exceeding the I2T-only baseline for downstream captioning, it concurrently diminishes the image generation capabilities.", + "bbox": [ + 169, + 406, + 823, + 517 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "In most scenarios, a smaller $\\lambda$ value (e.g., 0.005) is advisable, maintaining a significant proportion (over $90\\%$ ) of image understanding while supporting the generation of high-fidelity images. Larger $\\lambda$ values, in contrast, strongly favor image understanding but result in a rapid drop of image generation ability, as indicated by a sharp rise in FID score when $\\lambda$ exceeds 0.1. Qualitative results for image captioning and question answering, demonstrating the understanding capabilities of the fine-tuned model based on the unified model with Gemma-2 2B as backbone LLM and $\\lambda = 0.005$ are presented in Figure 6.", + "bbox": [ + 169, + 523, + 823, + 619 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Better Pre-trained LLM Backbone Leads to Better Visual Generation and Understanding. We investigate the effect of pre-trained LLMs within the unified model training setup, specifically examining whether more powerful LLMs contribute to enhanced image understanding performance and superior visual generation quality. To this end, we conducted experiments using Gemma-1 2B [47] and Gemma-2 2B [47] as backbone LLMs. 
Gemma-2 is a stronger LLM than Gemma-1 with $10\\%$ average improvements across different text benchmarks.", + "bbox": [ + 169, + 626, + 826, + 709 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/7b4d738402e13bf19379cf0a30676469c8f6afa22957a77ca92c17d04ce946cb.jpg", + "table_caption": [ + "Table 2: Performance comparison of image generation and understanding of UniFluid trained with different LLM backbone. FID and CIDEr are measured on MS-COCO. Gemma-2 gets much better performance compared to Gemma-1, for both image understanding and generation tasks." + ], + "table_footnote": [], + "table_body": "
Pretrained Model | Generation | Understanding
 | COCO FID ↓ | GenEval ↑ | COCO CIDEr ↑ | Cap Avg ↑ | QA Avg ↑
Gemma-1 | 9.73 | 0.52 | 38.02 | 113.40 | 60.21
Gemma-2 | 7.20 | 0.59 | 40.91 | 116.13 | 62.10
", + "bbox": [ + 233, + 773, + 763, + 839 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The experimental results are presented in Table 2. Here we used $\\lambda = 0.005$ for all models. The results demonstrate that employing a stronger LLM is crucial for generating images with higher fidelity and quality. Gemma-2 achieves significantly lower FID scores compared to Gemma-1, highlighting that even though LLM pre-training is unimodal, without exposure to visual data, using a better LLM", + "bbox": [ + 169, + 854, + 823, + 912 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 493, + 936, + 504, + 946 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/27d4b879d1e6ed0c16ebbf7e8c1a05a9de1f4ec816c64e73bb7cd204bec2ff52.jpg", + "image_caption": [ + "Figure 3: Plot of image understanding and generation performance with varying $\\lambda$ during training. We show the FID on MS-COCO, the average score for downstream captioning tasks (Cap Avg), and the average score for downstream QA tasks (QA Avg) with different $\\lambda$ on the three figures. Green dashed lines show the FID of T2I-only model, blue and red dashed lines show the downstream captioning average and QA average for I2T-only model, respectively. In practice, smaller $\\lambda$ has better trade-off between two tasks." + ], + "image_footnote": [], + "bbox": [ + 91, + 89, + 344, + 270 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/c28e0b1d3d4b55152d911ee6364845722e0f928715ebd8b2ba2f2245e04ae38f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 366, + 90, + 620, + 270 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/f01bbba059cb1c487d2b7bad0c5ebf483e7592f5c196f7ec35b2335a3abc2546.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 650, + 90, + 897, + 268 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/ed7e8a9c62dec847919f2f338a44a638eacc3f82ec9e8af56e81b4b02edc4e68.jpg", + "table_caption": [ + "Table 3: Image generation and understanding results with different $\\lambda$ . $\\lambda = 0.1$ roughly makes the loss for generation and understanding of the same scale. We present MS-COCO FID and the average captioning and QA results for downstream understanding tasks, compared to the image-to-text (I2T) only baseline. A smaller $\\lambda$ like 0.005 is recommended in most cases as it preserves most of the image understanding capability while enabling the generation of high-quality images and outperforms the text-to-image (T2I) only baseline." + ], + "table_footnote": [], + "table_body": "
Task | T2I only | I2T only | Unified, λ
 | | | 0.005 | 0.01 | 0.05 | 0.1 | 0.2 | 1.0 | 2.0
Generation (FID) ↓ | 9.71 | - | 8.39 | 9.65 | 12.48 | 17.90 | 28.60 | 47.89 | 54.02
Understanding (Cap Avg) ↑ | - | 125.05 | 120.25 | 120.25 | 121.18 | 121.68 | 123.65 | 126.25 | 125.80
Understanding (QA Avg) ↑ | - | 65.95 | 60.29 | 61.37 | 61.48 | 61.87 | 65.17 | 65.64 | 65.88
", + "bbox": [ + 173, + 426, + 821, + 494 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/f5b44fb22df8ee423bbb5c991ec67c67c311aac003801a68e36de4d1a80aa028.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 114, + 529, + 241, + 609 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/e604f122f83cb527d28352cf1e5e4b880d31afe49ece6c4f36cdeba5b992241f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 245, + 529, + 370, + 609 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/c46e9fbda409dd9b0d6fe25cf1f0c7088f846324cf25ae0edc1adb8a3f609780.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 375, + 529, + 500, + 609 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/f47624129a101646b71e64eed6aa1212c7690071847fef6726e962f91d51a79a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 504, + 529, + 629, + 609 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/0f43ac3efb9385ab77e2cb077a6165efd276ed993423b635fcddef5a6344bb93.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 635, + 529, + 758, + 609 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/0fa9c60d7e9f9b58905e06c1a86f5e7d8e6d6f7ac03fd1e9ce36f146966739e7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 766, + 529, + 888, + 609 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/b7e9b91e62274b99585c58f8e8402e7149ff8e358ebfe68f4b2866fc814b6e9f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 116, + 612, + 241, + 691 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/4af61b319ebef54c220450bb22d985e5d8a8ac0d51b79951a276a4bd1185093d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 245, + 612, + 370, + 691 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/1268d8942fcae33e355da233b87b1ba2aa1da48b738edce9e834235d40b7d2ef.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 375, + 612, + 500, + 691 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/055827d4421ec7ad88afd48e896d72caedb2e029a4b9e0133d779db345ac66b4.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 504, + 612, + 629, + 691 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/3360fd5ca9e05220182a0d5bb8c2c82ab50272d09ff5a52bbccf97b5f84a9be0.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 635, + 612, + 758, + 691 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/69f5c722e1cfdf648c8c320239579694d5e00c5f385fd8359ec77dc239059df8.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 766, + 612, + 888, + 691 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/65d7e2652966061b99c7248f64aefbfe77768d356615fa25906dab470109ad38.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 116, + 695, + 241, + 773 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/2fd077dbbd0d8c49b3d104308ffd8f662dbabfe5228d171df188dcd388f68d7e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 245, + 695, + 370, + 773 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/6142c847daaa0b2e444e2eaee75b2301277eeb6f084c630d7a11e3e3d7715b7e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 375, + 695, + 500, + 773 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": 
"images/db70eb466e41c1079e55240a87d754c1630717a9f61f3af9f006a8036540d3c6.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 504, + 695, + 629, + 773 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/ec817598f9c016649943bcaf3e7f9eea8c6d6ea635c3dbd66d86aac41e302c90.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 635, + 695, + 758, + 773 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/4c71302f59a6e2c12cdc6bab1d67480bad20db33ec424562bec26d4d367ff76b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 766, + 695, + 888, + 773 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/cba4f32c6197716dd8754f4789b7a8866571df39c69c4d735d0ac593280aa660.jpg", + "image_caption": [ + "Figure 4: Downstream image understanding performance (after finetuning) on each benchmark (higher is better). Blue bars show performance of model backbone initialized with Gemma-1 2B and green bars show model backbone initialized with Gemma-2 2B. Initializing with stronger LLM help improve UniFluid visual understanding performance on most datasets." + ], + "image_footnote": [], + "bbox": [ + 116, + 777, + 241, + 858 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/f181a225d21891e5163a357e3e714abad9344f5552bb4dc59b2bd38c830fa9da.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 245, + 777, + 370, + 858 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/cb7ec8bfd66ea26fb70e6ee0daa7cf9e7b4a7c654ee225e88386ca6f754b128b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 375, + 777, + 500, + 872 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/7fab743345e3073b4153a8b710ead7da07fac7174f879e5a3a8ffd62deef737a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 504, + 777, + 629, + 872 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/11601a09dc1fd36d9290ed9af67eaa56e8cbb16f8407e81c13812bc1a5c2c8b7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 635, + 777, + 758, + 858 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/f0ce9ba61697d3680f8cc472b3daf1280274538d8e0d6a2eed7026b12c75840b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 766, + 777, + 888, + 858 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 493, + 936, + 504, + 946 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/c956187abea4dd8794f3600eded98ff56e89b52dfb2e2578cc297cfc9c1e2c4b.jpg", + "image_caption": [ + "A black basketball shoe with a lightning bolt on it." + ], + "image_footnote": [], + "bbox": [ + 75, + 18, + 267, + 167 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/1dda4edef0f15fb07c736b4c19bc3834934ff9cadc02e9f0f49659946ff6d814.jpg", + "image_caption": [ + "A melting apple." + ], + "image_footnote": [], + "bbox": [ + 292, + 18, + 486, + 167 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/466d2fe43bf8d14bf7059dbedc9088860424dd3ee80301f589abee1a3c733d74.jpg", + "image_caption": [ + "A space elevator in the universe." + ], + "image_footnote": [], + "bbox": [ + 509, + 18, + 702, + 167 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/9bd5a18be00334727765b9a18254bc1ed3b8dc9e3dc00bc5cec75969e5d733d0.jpg", + "image_caption": [ + "human life depicted entirely out of fractals." 
+ ], + "image_footnote": [], + "bbox": [ + 728, + 18, + 919, + 167 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/2efd5160ecbb6255aa5aa53f9e7c7990aed82e7424c5e14d7ec64fd6295f8a8b.jpg", + "image_caption": [ + "Graffiti of a funny dog on a street wall." + ], + "image_footnote": [], + "bbox": [ + 75, + 217, + 266, + 367 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/eedd97bda7977dc3c2b8ddcaa4797ae048a9f58b38c7e304952ee35bdeb0e617.jpg", + "image_caption": [ + "Turtle swimming \nunderwater, aesthetic, \nfantasy." + ], + "image_footnote": [], + "bbox": [ + 292, + 215, + 483, + 364 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/2d819e1f841d2622c73d61555687a7195917f493c4944afb10b5dba54f3e5ca1.jpg", + "image_caption": [ + "A transparent sculpture of a duck made out of glass." + ], + "image_footnote": [], + "bbox": [ + 511, + 217, + 702, + 364 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/7ff46f16950151c5e77b47b936d2c67bdc5ecdccfbe3f2e8131383c933a526f9.jpg", + "image_caption": [ + "An armchair in the shape of an avocado." + ], + "image_footnote": [], + "bbox": [ + 730, + 217, + 919, + 366 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/db9b12aa29a17e233f26cae458623aad7335736add2998309d2ecc99fc278a1c.jpg", + "image_caption": [ + "A hyper-detailed rendering of a geyser erupting in a colorful, geothermal landscape." + ], + "image_footnote": [], + "bbox": [ + 73, + 431, + 267, + 580 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/f54f46e66e32e2eec00ae7b8d752e50c4814067b201f685dac85b03a07a273d6.jpg", + "image_caption": [ + "An astronaut riding a horse on the moon, oil painting by Van Gogh." + ], + "image_footnote": [], + "bbox": [ + 292, + 431, + 483, + 580 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/8d1ac576da2e319c5322e81c14503269b1d8b27ac23b92ebc04c6a1e63258500.jpg", + "image_caption": [ + "photo of an eagle with a golden crown resting upon its head." + ], + "image_footnote": [], + "bbox": [ + 511, + 431, + 702, + 579 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/a15ecf2565252d6785ad118897c8a9997898e511a7971087b628b411aee66098.jpg", + "image_caption": [ + "A window with raindrops trickling down, overlooking a blurry city." + ], + "image_footnote": [], + "bbox": [ + 730, + 431, + 919, + 579 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/1268a3fc86a3637bab1e17dd7e19f49d8912d1bec2f9c972de4d7c3bbe98d92a.jpg", + "image_caption": [ + "An image of a modern architectural building with large glass windows, situated on a cliff overlooking a serene ocean at sunset." + ], + "image_footnote": [], + "bbox": [ + 73, + 662, + 267, + 811 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/0b1b1b92592474bf268039868377a95c9b10fbd9c272acae0077e0faa47382d0.jpg", + "image_caption": [ + "A cozy cabin in the middle of a snowy forest, surrounded by tall trees with lights glowing through the windows, a northern lights display visible in the sky." + ], + "image_footnote": [], + "bbox": [ + 292, + 662, + 483, + 811 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/a05f6050aea65fd8c472eca0b0aa84d22ae31d07fc18623d18f8abcf22e27deb.jpg", + "image_caption": [ + "A still life of a vase overflowing with vibrant flowers, painted in bold colors and textured brushstrokes, reminiscent of van Gogh's iconic style." 
+ ], + "image_footnote": [], + "bbox": [ + 511, + 662, + 702, + 811 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/66c56563088f32892912db5825e6709d4e50a71e4d13352801783949bdaace75.jpg", + "image_caption": [ + "An otherworldly forest of giant glowing mushrooms under a vibrant night sky filled with distant planets and stars, creating a dreamlike, cosmic landscape.", + "Figure 5: Images generated from UniFluid autoregressive model after aesthetic fine-tuning." + ], + "image_footnote": [], + "bbox": [ + 730, + 662, + 919, + 811 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 493, + 936, + 504, + 946 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/36f4a8eceafcdb00d110ca517d7b8a9cf4704c36dd59aee4f5c7d4974d7e9486.jpg", + "image_caption": [], + "image_footnote": [ + "A close up view of a wooden pole with the word \"sushi\" in silver letters on it. The pole is in front of a parking lot. The pole is in the foreground, with a concrete sidewalk in the foreground. There are plants and trees in the background. There is a brown brick building with a black awning on the left side of it. There are trees in the background. The sky is clear and blue." + ], + "bbox": [ + 107, + 36, + 277, + 209 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/3b5e953dc74551e08a0fd1c416c0915653361ba3cff1e61815598e1a8cd421bb.jpg", + "image_caption": [], + "image_footnote": [ + "A close up view of a wet white and brown labradoodle dog walking on a gray cement surface. The dog has a red leash attached to its collar. The dog has a silver metal hook attached to its collar. The dog is walking towards the camera. The dog is casting a shadow on the cement surface. Behind the dog is a building with a large window on the side. A tree line is visible in the background. The sky is blue with a few white clouds." + ], + "bbox": [ + 496, + 37, + 661, + 205 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/04fe165de52995499a22c711c69eafb3a57240f4b865c524c8ee2c812a8e402b.jpg", + "image_caption": [], + "image_footnote": [ + "Question: In what year did Panama's population reach 4.28 million?", + "Answer: 2020" + ], + "bbox": [ + 109, + 227, + 341, + 354 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/3affb1716f45e3f8a8e892a46ed5c32d321d2f28697ab2b0d93d6ccf76afcff1.jpg", + "image_caption": [], + "image_footnote": [ + "Question: What device's average duration of in-app engagement sessions was 6.6 minutes?", + "Answer: android" + ], + "bbox": [ + 493, + 227, + 727, + 352 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/d247a1dbb553478afd47f826cff4a46e58101f933b13bce059a3897dcaba7b76.jpg", + "image_caption": [ + "Figure 6: Finetuned UniFluid model demonstrates strong image-to-text capability on image captioning and question answering." 
+ ], + "image_footnote": [ + "Question: The bus is likely driving through which American city?", + "Answer: new york" + ], + "bbox": [ + 107, + 369, + 334, + 503 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/56f7755653a55050558209e33a525944678ce07f18db276e8244c8c56aa35766.jpg", + "image_caption": [], + "image_footnote": [ + "Question: Which number birthday is probably being celebrated?", + "Answer: thirty" + ], + "bbox": [ + 493, + 371, + 720, + 503 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/79f93e931fd94d63593c874d6ac1fb094c20d25d037a24a190c7c96be9620be3.jpg", + "image_caption": [ + "Change the black suitcase to a white one" + ], + "image_footnote": [], + "bbox": [ + 104, + 545, + 232, + 645 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/d19bf5d7e7f56008e163ebc8e441f8a1112671cf8a8ef094221c9f5680a30ced.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 233, + 546, + 362, + 645 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/1d019c5e0c764eb1e8b95793df132237f61dc32dfaf1373ae7c746f9613670e2.jpg", + "image_caption": [ + "Remove the fence from the photo" + ], + "image_footnote": [], + "bbox": [ + 367, + 546, + 496, + 645 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/7f3981b5455644ddc8ab004d938034d1d81756f69c154a9d395fc70887dc2689.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 496, + 546, + 625, + 645 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/857c3d2b4d90a930f09ebe7c2eaecd6e41843ff5f2eb1e4dfd4d5f0fbb8c4eae.jpg", + "image_caption": [ + "Change this into a 1950s Flintstone cartoon art style" + ], + "image_footnote": [], + "bbox": [ + 632, + 546, + 761, + 645 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/5d06f1f7f1bc3e93747fda98c60b99db1e7b3bfc38ea08d5f99f70ff408d4703.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 761, + 546, + 890, + 645 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/480023cdc110c691db2b63895df6e29d8f62b6c948074069eb234e06d6a1643d.jpg", + "image_caption": [ + "Change the trees from flowers to icicles" + ], + "image_footnote": [], + "bbox": [ + 104, + 667, + 232, + 767 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/3cf06912c8adfa112e4cec26c1ef46e0c4e30e821c79f1b2de0b87c712b9bc5c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 233, + 667, + 362, + 767 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/eb0786feda26f92aea48692bb7ee90a2bd953e0f5f3bb17a29cbf991e6540cde.jpg", + "image_caption": [ + "Change the elephant into a giant rabbit" + ], + "image_footnote": [], + "bbox": [ + 367, + 667, + 496, + 767 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/ab435942c6a892c5e005c6f3e596ec777b7002aee1398e2d94d8acf12d940801.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 496, + 667, + 625, + 767 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/9c446d91fda981918537212203a547eb8e4fb4c4bbdb313a91d2a0e202c7d290.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 633, + 667, + 761, + 767 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/7f9899abfaf1d95a495617d9c2b59fbcd4e91d93fcf676ed5bac64c54a57d59d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 761, + 667, + 890, + 767 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": 
"images/37e1926ec2131e720adf5a00ec75683674a2fdf5d3477f9049de45486d0cd9f1.jpg", + "image_caption": [ + "Change the bird to all white" + ], + "image_footnote": [], + "bbox": [ + 104, + 789, + 232, + 888 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/00ace96392ae997da28586fe1293583f6d92d6429a3fcfed2d0751233fbb1bb4.jpg", + "image_caption": [ + "Figure 7: Image editing results on evaluation benchmark from UniFluid autoregressive model after fine-tuning. It handles multiple editing tasks effectively, including object removal, insertion, style and color changes." + ], + "image_footnote": [], + "bbox": [ + 233, + 789, + 362, + 888 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/83cafdcca03ca33853f930748ce61120aaab1ee886c0c13f8fcc11862d110f05.jpg", + "image_caption": [ + "Turn it into a Paul Cezanne still life painting" + ], + "image_footnote": [], + "bbox": [ + 393, + 790, + 488, + 887 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/5117cd2da0dfc63a8e7e5811316065d98885423bf2410418a9ecb417bb9c410b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 500, + 790, + 629, + 888 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/2f04b59d14ad478ebcffecd91ec88aa335419c135b386a9254939ca44fbb078b.jpg", + "image_caption": [ + "Change the meat into carrots" + ], + "image_footnote": [], + "bbox": [ + 632, + 789, + 761, + 887 + ], + "page_idx": 8 + }, + { + "type": "image", + "img_path": "images/46973e49a359e902ed1f7d9f02c2def2ed4f8fe3f1d72a0e4d3fdeb7fc3d6011.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 761, + 789, + 890, + 887 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 493, + 935, + 504, + 946 + ], + "page_idx": 8 + }, + { + "type": "table", + "img_path": "images/8027d54f52a4ad71cfa0c7bea294a4fb80ff464f7ea42d394d811564e25d0f95.jpg", + "table_caption": [ + "Table 4: Performance comparison of image generation and understanding of UniFluid trained with different image generation order. FID and CIDEr is measured on MS-COCO." + ], + "table_footnote": [], + "table_body": "
<table>
<tr><td rowspan="2">Generation Order</td><td colspan="2">Generation</td><td colspan="3">Understanding</td></tr>
<tr><td>COCO FID ↓</td><td>GenEval ↑</td><td>COCO CIDEr ↑</td><td>Cap Avg ↑</td><td>QA Avg ↑</td></tr>
<tr><td>Raster</td><td>8.28</td><td>0.59</td><td>45.57</td><td>116.55</td><td>61.81</td></tr>
<tr><td>Random</td><td>7.20</td><td>0.59</td><td>40.91</td><td>116.13</td><td>62.10</td></tr>
</table>
", + "bbox": [ + 233, + 123, + 763, + 189 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "as a backbone is important for unlocking improved visual quality in a unified model training setup. The image understanding performance also improved slightly when using stronger LLM, which is a trend also noted in PaliGemma2. We show the detailed comparison on each downstream visual understanding benchmark in Figure 4.", + "bbox": [ + 169, + 214, + 826, + 272 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Training with Random Order Helps Generation But Not Understanding. Images inherently possess 2D patterns. As demonstrated in Fluid, raster-order training can be problematic, potentially leading to collapse and artifacts characterized by disappearing patterns. Approaches such as RAR [58] and RandAR [33] propose training image generation AR models with random-order training, which can improve ImageNet FID and result in better visual quality.", + "bbox": [ + 169, + 276, + 823, + 347 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Here we study the effect of different visual generation orders during training within our unified framework setup. We compare the performance between random-order and raster-order training, both with Gemma-2 2B as backbone LLM. The results presented in Table 4 indicate that for per-token image generation within a unified framework, raster-order training continues to underperform compared to random-order generation. Incorporating random-order during training could ensure the generated images are of high quality. However, it does not necessarily improve the visual understanding performance, where raster-order achieves better performance on MS-COCO CIDEr and downstream captioning task average (Cap Avg).", + "bbox": [ + 169, + 352, + 826, + 465 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "5.3 More Generation Capabilities", + "text_level": 1, + "bbox": [ + 171, + 479, + 421, + 494 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "We also verify the transferability of the trained model to various downstream generation tasks.", + "bbox": [ + 169, + 505, + 790, + 520 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Aesthetic Fine-Tuning. To enhance the visual quality and aesthetic appeal of the generated images, we perform aesthetic fine-tuning on a publicly available dataset. The results are shown in Figure 5.", + "bbox": [ + 169, + 525, + 826, + 555 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Image Editing Task. Since our unified framework is trained with multimodal inputs, it can naturally extend to image editing tasks that involve both image and text prompt inputs. We fine-tune the 2B UniFluid model with 4M image editing pairs from HQEdit [18] and UltraEdit [60]. In Figure 7, we apply the fine-tuned model to the input images and editing prompts from a public available benchmark. Although preliminary, the experiments show that UniFluid is able to adapt and generalize to tasks that involve interleaved data modalities.", + "bbox": [ + 169, + 560, + 826, + 643 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "6 Conclusion", + "text_level": 1, + "bbox": [ + 171, + 662, + 302, + 679 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "In this paper, we presented UniFluid, a pure autoregressive framework for joint visual generation and understanding, utilizing continuous visual tokens. 
We identified an inherent trade-off between the visual generation and understanding tasks, but the two tasks can benefit each other with tuned training recipes. Careful choice of the loss balance between the two tasks allows a single unified model to achieve performance comparable to or exceeding single-task baselines. We conducted investigation of key design choices for UniFluid training, revealing the critical importance of employing strong backbone LLM and random-order generation to unlock high-quality visual generation capabilities. We believe that this work encourages future research into the exploration of continuous visual tokens for unified vision-language model training, paving the way for more efficient and powerful autoregressive multimodal systems.", + "bbox": [ + 169, + 693, + 826, + 833 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "Acknowledgements. We are grateful to Alex Rizkowsky and Amy Shen for their support in securing computational resources. We also wish to thank Charles Herrmann, Junhwa Hur, Shangbang Long, André Susano Pinto, Srinivas Kaza, David Salesin, and the VisCam team for their insightful discussions and constructive feedback, which greatly improved this work.", + "bbox": [ + 169, + 839, + 826, + 895 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 490, + 935, + 509, + 946 + ], + "page_idx": 9 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 173, + 89, + 269, + 106 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Manoj Acharya, Kushal Kafle, and Christopher Kanan. Tallyqa: Answering complex counting questions. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 8076-8084, 2019.", + "[2] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in neural information processing systems, 35:23716-23736, 2022.", + "[3] Lucas Beyer, Andreas Steiner, André Susano Pinto, Alexander Kolesnikov, Xiao Wang, Daniel Salz, Maxim Neumann, Ibrahim Alabdulmohsin, Michael Tschannen, Emanuele Bugliarello, et al. Paligemma: A versatile 3b vlm for transfer. arXiv preprint arXiv:2407.07726, 2024.", + "[4] Ali Furkan Biten, Ruben Tito, Andres Mafla, Lluis Gomez, Marçal Rusinol, Ernest Valveny, CV Jawahar, and Dimosthenis Karatzas. Scene text visual question answering. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4291-4301, 2019.", + "[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901, 2020.", + "[6] Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. Muse: Text-to-image generation via masked generative transformers. arXiv preprint arXiv:2301.00704, 2023.", + "[7] Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. Pali: A jointly-scaled multilingual language-image model. 
arXiv preprint arXiv:2209.06794, 2022.", + "[8] Xiaokang Chen, Zhiyu Wu, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, and Chong Ruan. Janus-pro: Unified multimodal understanding and generation with data and model scaling. arXiv preprint arXiv:2501.17811, 2025.", + "[9] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1-113, 2023.", + "[10] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructlip: Towards general-purpose vision-language models with instruction tuning, 2023.", + "[11] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, volume 1 (long and short papers), pages 4171–4186, 2019.", + "[12] Lijie Fan, Tianhong Li, Siyang Qin, Yuanzhen Li, Chen Sun, Michael Rubinstein, Deqing Sun, Kaiming He, and Yonglong Tian. Fluid: Scaling autoregressive text-to-image generative models with continuous tokens. arXiv preprint arXiv:2410.13863, 2024.", + "[13] Dhruba Ghosh, Hannaneh Hajishirzi, and Ludwig Schmidt. Geneval: An object-focused framework for evaluating text-to-image alignment. Advances in Neural Information Processing Systems, 36:52132-52152, 2023.", + "[14] Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive networks. In International Conference on Machine Learning, pages 1242–1250. PMLR, 2014.", + "[15] Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. Vizwiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3608-3617, 2018." + ], + "bbox": [ + 171, + 112, + 828, + 910 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 490, + 935, + 506, + 946 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[16] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017.", + "[17] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6700-6709, 2019.", + "[18] Mude Hui, Siwei Yang, Bingchen Zhao, Yichun Shi, Heng Wang, Peng Wang, Yuyin Zhou, and Cihang Xie. Hq-edit: A high-quality dataset for instruction-based image editing. arXiv preprint arXiv:2404.09990, 2024.", + "[19] Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. A diagram is worth a dozen images. In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV 14, pages 235-251. Springer, 2016.", + "[20] Taku Kudo and John Richardson. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. 
arXiv preprint arXiv:1808.06226, 2018.", + "[21] Tianhong Li, Yonglong Tian, He Li, Mingyang Deng, and Kaiming He. Autoregressive image generation without vector quantization. Advances in Neural Information Processing Systems, 37:56424-56445, 2024.", + "[22] Yang Li, Gang Li, Luheng He, Jingjie Zheng, Hong Li, and Zhiwei Guan. Widget captioning: Generating natural language description for mobile user interface elements. arXiv preprint arXiv:2010.04295, 2020.", + "[23] Ji Lin, Hongxu Yin, Wei Ping, Pavlo Molchanov, Mohammad Shoeybi, and Song Han. Vila: On pre-training for visual language models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 26689-26699, 2024.", + "[24] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer vision-ECCV 2014: 13th European conference, zurich, Switzerland, September 6-12, 2014, proceedings, part v 13, pages 740-755. Springer, 2014.", + "[25] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26296-26306, 2024.", + "[26] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36:34892-34916, 2023.", + "[27] Sylvain Lobry, Diego Marcos, Jesse Murray, and Devis Tuia. Rsvqa: Visual question answering for remote sensing data. IEEE Transactions on Geoscience and Remote Sensing, 58(12):8555-8566, 2020.", + "[28] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507–2521, 2022.", + "[29] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pages 3195-3204, 2019.", + "[30] Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. Chartqa: A benchmark for question answering about charts with visual and logical reasoning. arXiv preprint arXiv:2203.10244, 2022.", + "[31] Minesh Mathew, Viraj Bagal, Ruben Tito, Dimosthenis Karatzas, Ernest Valveny, and CV Jawahar. Infographicvqa. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1697-1706, 2022." + ], + "bbox": [ + 173, + 90, + 826, + 912 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[32] Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. Docvqa: A dataset for vqa on document images. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 2200-2209, 2021.", + "[33] Ziqi Pang, Tianyuan Zhang, Fujun Luan, Yunze Man, Hao Tan, Kai Zhang, William T Freeman, and Yu-Xiong Wang. Randar: Decoder-only autoregressive visual generation in random orders. arXiv preprint arXiv:2412.01827, 2024.", + "[34] Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. In International conference on machine learning, pages 4055-4064. 
PMLR, 2018.", + "[35] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. Technical Report, 2018.", + "[36] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models, 2021.", + "[37] Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. A-okvqa: A benchmark for visual question answering using world knowledge. In European conference on computer vision, pages 146–162. Springer, 2022.", + "[38] Weijia Shi, Xiaochuang Han, Chunting Zhou, Weixin Liang, Xi Victoria Lin, Luke Zettlemoyer, and Lili Yu. Llamafusion: Adapting pretrained language models for multimodal generation. arXiv preprint arXiv:2412.15188, 2024.", + "[39] Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, and Amanpreet Singh. Textcaps: a dataset for image captioning with reading comprehension. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part II 16, pages 742-758. Springer, 2020.", + "[40] Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards vqa models that can read. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8317-8326, 2019.", + "[41] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020.", + "[42] Andreas Steiner, André Susano Pinto, Michael Tschannen, Daniel Keysers, Xiao Wang, Yonatan Bitton, Alexey Gritsanko, Matthias Minderer, Anthony Sherbondy, Shangbang Long, et al. Paligemma 2: A family of versatile vlms for transfer. arXiv preprint arXiv:2412.03555, 2024.", + "[43] Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. A corpus for reasoning about natural language grounded in photographs. arXiv preprint arXiv:1811.00491, 2018.", + "[44] Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Emu: Generative pretraining in multimodality. arXiv preprint arXiv:2307.05222, 2023.", + "[45] Yutao Sun, Hangbo Bao, Wenhui Wang, Zhiliang Peng, Li Dong, Shaohan Huang, Jianyong Wang, and Furu Wei. Multimodal latent language modeling with next-token diffusion. arXiv preprint arXiv:2412.08635, 2024.", + "[46] Chameleon Team. Chameleon: Mixed-modal early-fusion foundation models. arXiv preprint arXiv:2405.09818, 2024.", + "[47] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Riviere, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024." + ], + "bbox": [ + 173, + 90, + 826, + 912 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[48] Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118, 2024.", + "[49] Keyu Tian, Yi Jiang, Zehuan Yuan, Bingyue Peng, and Liwei Wang. 
Visual autoregressive modeling: Scalable image generation via next-scale prediction. Advances in neural information processing systems, 37:84839-84865, 2024.", + "[50] Shengbang Tong, David Fan, Jiachen Zhu, Yunyang Xiong, Xinlei Chen, Koustuv Sinha, Michael Rabbat, Yann LeCun, Saining Xie, and Zhuang Liu. Metamorph: Multimodal understanding and generation via instruction tuning. arXiv preprint arXiv:2412.14164, 2024.", + "[51] Aaron Van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelCNN decoders. Advances in neural information processing systems, 29, 2016.", + "[52] Aäron Van Den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International conference on machine learning, pages 1747-1756. PMLR, 2016.", + "[53] Bryan Wang, Gang Li, Xin Zhou, Zhourong Chen, Tovi Grossman, and Yang Li. Screen2words: Automatic mobile ui summarization with multimodal learning. In The 34th Annual ACM Symposium on User Interface Software and Technology, pages 498-510, 2021.", + "[54] Chengyue Wu, Xiaokang Chen, Zhiyu Wu, Yiyang Ma, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, Chong Ruan, et al. Janus: Decoupling visual encoding for unified multimodal understanding and generation. arXiv preprint arXiv:2410.13848, 2024.", + "[55] Jinheng Xie, Weijia Mao, Zechen Bai, David Junhao Zhang, Weihao Wang, Kevin Qinghong Lin, Yuchao Gu, Zhijie Chen, Zhenheng Yang, and Mike Zheng Shou. Show-o: One single transformer to unify multimodal understanding and generation. arXiv preprint arXiv:2408.12528, 2024.", + "[56] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023.", + "[57] Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789, 2(3):5, 2022.", + "[58] Qihang Yu, Ju He, Xueqing Deng, Xiaohui Shen, and Liang-Chieh Chen. Randomized autoregressive visual generation. arXiv preprint arXiv:2411.00776, 2024.", + "[59] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF international conference on computer vision, pages 11975-11986, 2023.", + "[60] Haozhe Zhao, Xiaojian Shawn Ma, Liang Chen, Shuzheng Si, Rujie Wu, Kaikai An, Peiyu Yu, Minjia Zhang, Qing Li, and Baobao Chang. Ultraedit: Instruction-based fine-grained image editing at scale. Advances in Neural Information Processing Systems, 37:3058-3093, 2024.", + "[61] Chunting Zhou, Lili Yu, Arun Babu, Kushal Tirumala, Michihiro Yasunaga, Leonid Shamis, Jacob Kahn, Xuezhe Ma, Luke Zettlemoyer, and Omer Levy. Transfusion: Predict the next token and diffuse images with one multi-modal model. arXiv preprint arXiv:2408.11039, 2024.", + "[62] Deyao Zhu, Jun Chen, Xiaogian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023." 
+ ], + "bbox": [ + 173, + 90, + 826, + 866 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 13 + } +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13436/53224727-d9f9-435a-ad74-c91800bf8f34_model.json b/data/2025/2503_13xxx/2503.13436/53224727-d9f9-435a-ad74-c91800bf8f34_model.json new file mode 100644 index 0000000000000000000000000000000000000000..788a03a80167281fc67208f2ab97ceaf2cee8a4e --- /dev/null +++ b/data/2025/2503_13xxx/2503.13436/53224727-d9f9-435a-ad74-c91800bf8f34_model.json @@ -0,0 +1,2868 @@ +[ + [ + { + "type": "aside_text", + "bbox": [ + 0.023, + 0.261, + 0.058, + 0.707 + ], + "angle": 270, + "content": "arXiv:2503.13436v1 [cs.CV] 17 Mar 2025" + }, + { + "type": "title", + "bbox": [ + 0.223, + 0.123, + 0.776, + 0.174 + ], + "angle": 0, + "content": "Unified Autoregressive Visual Generation and Understanding with Continuous Tokens" + }, + { + "type": "text", + "bbox": [ + 0.187, + 0.224, + 0.823, + 0.291 + ], + "angle": 0, + "content": "Lijie Fan\\(^{1,*}\\) Luming Tang\\(^{1,*}\\) Siyang Qin\\(^{1,*}\\) Tianhong Li\\(^{2}\\) Xuan Yang\\(^{1}\\) Siyuan Qiao\\(^{1}\\) Andreas Steiner\\(^{1}\\) Chen Sun\\(^{1}\\) Yuanzhen Li\\(^{1}\\) Tao Zhu\\(^{1}\\) Michael Rubinstein\\(^{1}\\) Michalis Raptis\\(^{1}\\) Deqing Sun\\(^{1,\\dagger}\\) Radu Soricut\\(^{1,\\dagger}\\) Google DeepMind \\({}^{2}\\mathrm{MIT}\\quad {*},\\) equal contribution" + }, + { + "type": "image", + "bbox": [ + 0.179, + 0.305, + 0.816, + 0.666 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.214, + 0.667, + 0.782, + 0.681 + ], + "angle": 0, + "content": "Figure 1: Generated images from our UniFluid autoregressive model after aesthetic finetuning." + }, + { + "type": "title", + "bbox": [ + 0.46, + 0.688, + 0.538, + 0.703 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.23, + 0.711, + 0.769, + 0.905 + ], + "angle": 0, + "content": "We present UniFluid, a unified autoregressive framework for joint visual generation and understanding leveraging continuous visual tokens. Our unified autoregressive architecture processes multimodal image and text inputs, generating discrete tokens for text and continuous tokens for image. We find though there is an inherent trade-off between the image generation and understanding task, a carefully tuned training recipe enables them to improve each other. By selecting an appropriate loss balance weight, the unified model achieves results comparable to or exceeding those of single-task baselines on both tasks. Furthermore, we demonstrate that employing stronger pre-trained LLMs and random-order generation during training is important to achieve high-fidelity image generation within this unified framework. Built upon the Gemma model series, UniFluid exhibits competitive performance across both image generation and understanding, demonstrating strong transferability to various downstream tasks, including image editing for generation, as well as visual captioning and question answering for understanding." + }, + { + "type": "footer", + "bbox": [ + 0.172, + 0.923, + 0.251, + 0.937 + ], + "angle": 0, + "content": "Tech Report." 
+ } + ], + [ + { + "type": "title", + "bbox": [ + 0.176, + 0.09, + 0.312, + 0.107 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.175, + 0.123, + 0.825, + 0.248 + ], + "angle": 0, + "content": "Large Language Models (LLMs) have recently advanced from text-centric architectures, such as BERT [11] and GPT [35], toward multimodal systems capable of understanding and generating content across different modalities. GPT-3 [5] and PaLM [9] show that scaling language models leads to emergent capabilities, while Flamingo [2] further demonstrates that incorporating visual inputs facilitates unified multimodal reasoning. This trend toward unified vision-language model—using a single model for diverse tasks of visual understanding and generation—has promising potential for leveraging knowledge and reasoning abilities that transfer across different vision and language tasks, ultimately facilitating more robust and generalizable multimodal representation and modeling capabilities." + }, + { + "type": "text", + "bbox": [ + 0.175, + 0.254, + 0.825, + 0.365 + ], + "angle": 0, + "content": "Motivated by the advantages and strong scaling properties of autoregressive models, coupled with their simplicity, we investigate a pure autoregressive framework for unified visual generation and understanding, without the limitations introduced by vector quantization (VQ). In this paper, we introduce UniFluid, a unified framework that leverages continuous visual tokens within an autoregressive architecture to jointly handle vision-language generation and understanding tasks. Building upon pre-trained Gemma [48] on large-scale text corpus, UniFluid unlocks powerful visual generation and understanding capabilities through training with paired image-text data, and further allows these two tasks to mutually benefit each other within a single architecture." + }, + { + "type": "text", + "bbox": [ + 0.175, + 0.371, + 0.825, + 0.496 + ], + "angle": 0, + "content": "Specifically, UniFluid adopts a unified autoregressive framework where both text and continuous visual inputs are embedded as tokens in the same space, enabling seamless joint training of image generation and understanding tasks. UniFluid integrates a continuous tokenizer [12, 21] for image generation and a pre-trained SigLIP [59] image encoder for understanding tasks, while textual inputs are processed using a standard SentencePiece tokenizer [20]. The resulting multimodal sequences are modeled autoregressively using Gemma [48] as the underlying transformer backbone. Task-specific prediction heads—a diffusion-based head for image generation and a cross-entropy head for text generation—ensure effective modality-specific training and inference, enabling UniFluid to efficiently learn shared representations that mutually enhance its generation and understanding capabilities." + }, + { + "type": "text", + "bbox": [ + 0.175, + 0.502, + 0.825, + 0.627 + ], + "angle": 0, + "content": "Our experiments demonstrate several key advantages of the proposed unified training strategy. We find though there is a trade-off between the two tasks, a carefully tuned training recipe can allow the tasks to support each other and outperform the single-task baselines. Effectively balancing the loss between the tasks allows a single model that performs both with results superior to or on par with single-task models. Moreover, the choice of pre-trained LLM backbone significantly impacts visual generation performance. 
We also find that while employing random generation order is essential for high-quality image synthesis, it is less critical for understanding tasks. Finally, our unified pre-trained models show strong generalization and transferability, achieving compelling results in downstream applications, including image editing and various vision-language understanding benchmarks." + }, + { + "type": "title", + "bbox": [ + 0.176, + 0.65, + 0.33, + 0.667 + ], + "angle": 0, + "content": "2 Related Works" + }, + { + "type": "text", + "bbox": [ + 0.175, + 0.683, + 0.825, + 0.822 + ], + "angle": 0, + "content": "Multimodal Large Language Models. Multimodal Large Language Models [2, 10, 23, 26, 25, 3, 42] have shown significant performance in visual understanding tasks. Flamingo [2] adopted a frozen LLM and vision encoder, utilizing perceiver with cross-attention to bridge the modalities. LLaVA [26, 25] proposed instruction tuning over pre-trained LLMs with multimodal inputs to align a pre-trained image encoder into the LLM's embedding space, thereby enabling it with visual understanding and instruction following capabilities. MiniGPT-4 [62] and mPLUG-Owl [56] have shown vision encoders can be connected to LLMs through projection layers, demonstrating sophisticated visual reasoning capabilities. The PaliGemma [3, 42] series built upon the Gemma [47, 48] model family to develop versatile vision-language models capable of strong transfer to diverse downstream visual understanding tasks." + }, + { + "type": "text", + "bbox": [ + 0.175, + 0.828, + 0.825, + 0.912 + ], + "angle": 0, + "content": "Autoregressive Image Generation. While diffusion models [41, 36] have achieved impressive success in image generation, autoregressive image generation methods have also shown significant development, driven by their simplicity and closeness to LLM training paradigms. A large body of research centers on tokenizing images into discrete tokens and applying autoregressive objectives to these discrete representations. Notable examples include Parti [57] and Muse [6]. [49] proposes an approach that operates on image scales, progressively refining resolutions from coarse to fine through" + }, + { + "type": "page_number", + "bbox": [ + 0.495, + 0.937, + 0.504, + 0.947 + ], + "angle": 0, + "content": "2" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.175, + 0.097, + 0.825, + 0.367 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.379, + 0.825, + 0.419 + ], + "angle": 0, + "content": "Figure 2: High-level illustration of UniFluid. UniFluid performs joint training of image generation and understanding tasks through next token prediction. For image embeddings, we use a VAE image tokenizer for generation, and a SigLIP image encoder for understanding. BOI/BOS stands for beginning of Image/Sentence." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.44, + 0.825, + 0.482 + ], + "angle": 0, + "content": "next-scale prediction. Alternatively, works such as MAR [21], Fluid [12], and techniques employing per-token diffusion heads on top of LLM-predicted embeddings have explored autoregressive image generation with continuous visual tokens." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.488, + 0.827, + 0.641 + ], + "angle": 0, + "content": "Unified Multimodal Models. There is growing research interests in unifying visual generation and understanding within a single model. 
VQ-based models, such as Chameleon [46], Emu [44], and Janus [54, 8], propose encoding visual inputs into discrete tokens and unifying tasks into next-token prediction within this discrete token space. Models with hybrid training targets, such as Transfusion [61, 38] and Show-O [55], aim to unify next-token prediction objectives with diffusion objectives within a single framework. MetaMorph [50] maintains the autoregressive objective by regressing visual SigLIP [59] features using an LLM, but necessitates a separate diffusion model to decode the predicted latent features into images. Our approach distinguishes itself by performing per-token autoregressive generation using continuous visual tokens. This maintains the next-token prediction objective, while not being limited by the vector quantized tokens. [45] also explores using continuous tokens to generate multimodal outputs." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.66, + 0.273, + 0.675 + ], + "angle": 0, + "content": "3 Method" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.69, + 0.825, + 0.734 + ], + "angle": 0, + "content": "In this section, we illustrate the architecture of our UniFluid model. The model expects both image and text sequences as input and achieves joint training on both generation and understanding tasks, using next-token prediction as its training objective." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.748, + 0.661, + 0.763 + ], + "angle": 0, + "content": "3.1 Unified Autoregressive Modeling with Continuous Visual Tokens" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.773, + 0.827, + 0.913 + ], + "angle": 0, + "content": "Our approach leverages the autoregressive paradigm to unify visual understanding and generation within a single framework. Given an ordered sequence of tokens \\( X = \\{x^{1},\\dots,x^{n}\\} \\), the autoregressive model [34, 14, 51, 52] factorizes the joint probability distribution as a product of conditional probabilities, effectively framing the generation task as a sequential \"next token prediction\" problem: \\( p(X) = \\prod_{i = 1}^{n}p(x^{i}\\mid x^{1},\\dots,x^{i - 1}) \\). As shown in MAR [21] and Fluid [12], this autoregressive formulation is applicable for both discrete tokens and continuous tokens. In UniFluid, we exploit this property to enable the generation of continuous visual tokens under the unified decoder-only framework. Our model is modality-agnostic; both text and image tokens are treated as elements within a long unified sequence, and their respective logits are predicted iteratively in an autoregressive manner by the backbone transformer." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.504, + 0.948 + ], + "angle": 0, + "content": "3" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.171, + 0.092, + 0.825, + 0.162 + ], + "angle": 0, + "content": "To accommodate the distinct nature of text and image modalities, we employ modality-specific prediction heads to calculate the appropriate loss functions and sampling for each modality. This unified approach allows the model to learn a shared representation space through the unified training procedure, facilitating synergistic learning and enabling seamless transitions between visual generation and understanding." 
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.183, + 0.303, + 0.198 + ], + "angle": 0, + "content": "3.2 Architecture" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.21, + 0.825, + 0.336 + ], + "angle": 0, + "content": "As depicted in Figure 2, UniFluid employs a unified framework where both text and image inputs are tokenized and projected into a shared embedding space. This allows it to leverage a decoder-only transformer as the core backbone for the unified autoregressive task. Text inputs are tokenized using the SentencePiece tokenizer. This results in discrete tokens with a vocabulary size of \\( V \\). For image generation, images are encoded into continuous visual tokens using a continuous variational autoencoder. To facilitate the process of image understanding, following PaliGemma, we used SigLIP as a separate image encoder to extract high-level information from the visual inputs. UniFluid consists of a classification head to convert the transformer's text logits into a categorical distribution, and a diffusion head to convert image logits into a per-token probability distribution." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.342, + 0.827, + 0.482 + ], + "angle": 0, + "content": "The inherent structure of text as a linear sequence aligns well with the standard 1D positional embeddings of the LLM, which are sufficient for text modeling and image understanding tasks. However, image tokens possess a 2D spatial structure. To capture this inherent 2D nature, we incorporate learnable 2D positional embeddings, which are added to the image token embeddings. Meanwhile, inspired by [58], to achieve random order generation, a position embedding for the next predicted token is also added to each image token. To enhance the model's ability to initiate and guide image generation, we prepend a \"Beginning of Image\" (BOI) token to the sequence of continuous image tokens. This BOI token serves as a distinct signal, indicating the start of the visual generation process. Given that the sequence length for generated image tokens is predefined (256 tokens for 256x256 images), an explicit \"End of Image\" token is unnecessary in our case." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.506, + 0.341, + 0.523 + ], + "angle": 0, + "content": "4 Implementation" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.539, + 0.275, + 0.554 + ], + "angle": 0, + "content": "4.1 Training" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.567, + 0.825, + 0.623 + ], + "angle": 0, + "content": "Per-token Classification Head for Discrete Text Tokens. We employ the same SentencePiece tokenizer as Gemma for text tokenization. The transformer's output logits for text are transformed into categorical probability distributions over the vocabulary, and we apply the standard cross-entropy loss, denoted as \\( L_{Text} \\), to optimize the prediction of these discrete text tokens." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.629, + 0.825, + 0.74 + ], + "angle": 0, + "content": "Per-token Diffusion Head for Continuous Visual Tokens. We adopt the same continuous tokenizer as Fluid to embed 256x256 images into 32x32x4 continuous tokens, and use a patch size of 2 to merge 4 tokens into one. To model the per-token distribution of these predicted continuous visual tokens, we employ a lightweight MLP as a diffusion head. 
We adopt the same diffusion process and loss function, denoted as \\(\\mathcal{L}_{\\text{Visual}}\\), as in [21, 12], which is specifically tailored for continuous visual token prediction. For the understanding task, the input image resolution is \\(224 \\times 224\\), and we use SigLIP as the image encoder. Note that the SigLIP features are only used as prefix for the understanding task during training, and no losses are added on top of them." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.746, + 0.445, + 0.761 + ], + "angle": 0, + "content": "Task-Specific Training Configurations." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.767, + 0.825, + 0.837 + ], + "angle": 0, + "content": "Image Understanding: For image understanding tasks, the model is provided with image embeddings and question tokens as input prefix. Following PaliGemma, we apply a bidirectional attention mask to both image and question tokens. A causal attention mask is applied to the answer tokens, ensuring that the model only attends to previous answer tokens during autoregressive generation. The text token loss, \\(\\mathcal{L}_{\\text{Text}}\\), is calculated specifically on the answer text tokens." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.843, + 0.827, + 0.913 + ], + "angle": 0, + "content": "Image Generation: Conversely, for image generation tasks, text prompts are provided as conditioning inputs. To maintain the appropriate information flow, we employ a bidirectional attention mask for the text prompt tokens, enabling them to attend to all other text tokens. A causal attention mask is applied to the image tokens, ensuring that each image token only attends to preceding image tokens. The visual token loss, \\( L_{\\text{Visual}} \\), is calculated on the generated image tokens." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.505, + 0.948 + ], + "angle": 0, + "content": "4" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.171, + 0.092, + 0.825, + 0.149 + ], + "angle": 0, + "content": "Unified Loss Function. The total training loss for UniFluid is a weighted sum of the text token prediction loss and the visual token prediction loss, defined as: \\(\\mathcal{L} = \\mathcal{L}_{\\text{Visual}} + \\lambda \\cdot \\mathcal{L}_{\\text{Text}}\\) where \\(\\lambda\\) is a hyper-parameter that represents the weight assigned to the text token prediction loss, allowing us to balance the contributions of the two modalities during training." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.154, + 0.827, + 0.253 + ], + "angle": 0, + "content": "Training Details. We train the model with a batch size of 2048 using the AdamW optimizer with a learning rate of 1e-4. The training process consists of 1 million steps with a constant learning rate schedule and a warm-up period of 65k steps. Following [58], for image generation, the image token order is randomly permuted during the initial 300k training iterations, then linearly anneals to raster between 300k and 600k iterations, and finally sticks to raster order for the subsequent 400k steps. Except for the comparison with Gemma-1, we use the Gemma-2 model series as the backbone transformer for all experiments in this paper." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.268, + 0.281, + 0.283 + ], + "angle": 0, + "content": "4.2 Inference" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.294, + 0.825, + 0.378 + ], + "angle": 0, + "content": "For text decoding, we employ categorical sampling for each generated text prediction. 
The predicted token is then selected from the vocabulary \\( V \\) based on the sampled probability distribution. We use the same decoding strategy as PaliGemma. Greedy decoding is used for all tasks except for downstream COCOcap (beam search \\( n = 2 \\)) and TextCaps (beam search \\( n = 3 \\)). For image decoding, we use a diffusion sampling process to generate continuous visual tokens with diffusion sampling step set to 100 in our implementation." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.384, + 0.827, + 0.44 + ], + "angle": 0, + "content": "As both text and image generation are performed at the token level, with predictions occurring one token at a time under a causal attention mechanism, we can efficiently utilize Key-Value (KV) caching. This optimization is applicable to both discrete text tokens and continuous visual tokens, significantly accelerating the inference process." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.459, + 0.307, + 0.476 + ], + "angle": 0, + "content": "5 Experiment" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.491, + 0.255, + 0.507 + ], + "angle": 0, + "content": "5.1 Setup" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.517, + 0.827, + 0.573 + ], + "angle": 0, + "content": "Datasets. We train our models using the WebLI dataset [7], a collection of high-quality image-text pairs. For visual generation, we follow Fluid to employ a WebLI subset of image and text descriptions specifically for the generation task. For visual understanding, consistent with PaliGemma, we leverage the image-text description pairs and image question-answer pairs that are also available within WebLI." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.579, + 0.827, + 0.732 + ], + "angle": 0, + "content": "Evaluation Metrics. We assess the image generation quality using the FID [16] score on 30K images of the MS-COCO [24] training set and evaluate performance on the GenEval [13] benchmark, where we use the original text prompt without any rewrites. For evaluating visual understanding performance, we use the caption CIDEr score on MS-COCO. Given our similar training dataset and setup to PaliGemma, we also evaluate the finetuning performance on a variety of captioning and question answering tasks. We report the average score on 4 Captioning tasks, including COCOcap [24], Screen2Words [53], TextCaps [39], WidgetCap [22], and 20 QA tasks, including OKVQA [29], AOKVQA-MC [37], AOKVQA-DA [37], GQA [17], NLVR2 [43], AI2D [19], ScienceQA [28], RSVQA-lr [27], RSVQA-hr (test/test2) [27], ChartQA (human/aug) [30], VizWizVQA [15], TallyQA (simple/complex) [1], CountBenchQA [3], TextVQA [40], DocVQA [32], InfoVQA [31], ST-VQA [4]." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.737, + 0.825, + 0.78 + ], + "angle": 0, + "content": "In the following sections, we present the experimental results obtained under different configurations of UniFluid, providing insights into the relationship between the two tasks and highlighting key design choices for UniFluid training." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.796, + 0.307, + 0.81 + ], + "angle": 0, + "content": "5.2 Main Results" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.822, + 0.825, + 0.879 + ], + "angle": 0, + "content": "Unified Training Improves Generation Performance. 
To evaluate the effectiveness of the unified training framework and determine whether unified training offers advantages compared to training separate models for different tasks, we perform controlled experiments to analyze the performance of models trained with a single task." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.884, + 0.825, + 0.913 + ], + "angle": 0, + "content": "We first compare the visual generation performance of the model trained under the unified training objective with the performance of a text-to-image model (T2I only), trained solely with the visual" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.505, + 0.948 + ], + "angle": 0, + "content": "5" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.099, + 0.825, + 0.124 + ], + "angle": 0, + "content": "Table 1: Unified training achieves better generation performance than text-to-image only training. We evaluate the performance using MS-COCO zero-shot FID and GenEval score." + }, + { + "type": "table", + "bbox": [ + 0.349, + 0.131, + 0.647, + 0.213 + ], + "angle": 0, + "content": "
Training TargetSizeFID ↓GenEval ↑
T2I only0.7B9.710.50
Unified0.7B8.390.52
T2I only2B7.880.59
Unified2B7.200.59
" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.241, + 0.825, + 0.339 + ], + "angle": 0, + "content": "autoregressive objective for the generation task. We ensure that the total number of visual tokens for training is the same for the visual generation loss in both the unified model training and text-to-image only training scenarios. The generation performance comparison is presented in Table 1. The unified model achieves better performance compared to the T2I only model, despite both models having observed the same number of tokens for the visual generation task. This suggests that unified model training can be beneficial for visual generation tasks, and that visual understanding ability has the potential to unlock enhanced visual generation quality." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.345, + 0.825, + 0.4 + ], + "angle": 0, + "content": "Trade-off Between Generation and Understanding. We also investigate whether the visual generation task can contribute to improved visual understanding performance. In the UniFluid unified training setup, the hyperparameter \\(\\lambda\\) controls the balance between the losses applied to image tokens and text tokens." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.407, + 0.825, + 0.518 + ], + "angle": 0, + "content": "In Table 3 and Figure 3, we present the understanding and generation results with varying \\(\\lambda\\) of the 0.7B model. We compare the transfer performance to downstream understanding tasks between the unified model with different \\(\\lambda\\) and a image-to-text model (I2T only), trained solely with the image understanding objective. Within the unified training setup, a trade-off exists between visual generation and understanding tasks, which can be effectively controlled by adjusting the loss mixing weight, \\(\\lambda\\). While increasing \\(\\lambda\\) can improve image understanding performance, ultimately exceeding the I2T-only baseline for downstream captioning, it concurrently diminishes the image generation capabilities." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.524, + 0.825, + 0.621 + ], + "angle": 0, + "content": "In most scenarios, a smaller \\(\\lambda\\) value (e.g., 0.005) is advisable, maintaining a significant proportion (over \\(90\\%\\)) of image understanding while supporting the generation of high-fidelity images. Larger \\(\\lambda\\) values, in contrast, strongly favor image understanding but result in a rapid drop of image generation ability, as indicated by a sharp rise in FID score when \\(\\lambda\\) exceeds 0.1. Qualitative results for image captioning and question answering, demonstrating the understanding capabilities of the fine-tuned model based on the unified model with Gemma-2 2B as backbone LLM and \\(\\lambda = 0.005\\) are presented in Figure 6." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.627, + 0.827, + 0.71 + ], + "angle": 0, + "content": "Better Pre-trained LLM Backbone Leads to Better Visual Generation and Understanding. We investigate the effect of pre-trained LLMs within the unified model training setup, specifically examining whether more powerful LLMs contribute to enhanced image understanding performance and superior visual generation quality. To this end, we conducted experiments using Gemma-1 2B [47] and Gemma-2 2B [47] as backbone LLMs. Gemma-2 is a stronger LLM than Gemma-1 with \\(10\\%\\) average improvements across different text benchmarks." 
+ }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.735, + 0.825, + 0.773 + ], + "angle": 0, + "content": "Table 2: Performance comparison of image generation and understanding of UniFluid trained with different LLM backbone. FID and CIDEr are measured on MS-COCO. Gemma-2 gets much better performance compared to Gemma-1, for both image understanding and generation tasks." + }, + { + "type": "table", + "bbox": [ + 0.235, + 0.774, + 0.764, + 0.84 + ], + "angle": 0, + "content": "
Pretrained ModelGenerationUnderstanding
COCO FID ↓GenEval ↑COCO CIDEr ↑Cap Avg ↑QA Avg ↑
Gemma-19.730.5238.02113.4060.21
Gemma-27.200.5940.91116.1362.10
" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.856, + 0.825, + 0.913 + ], + "angle": 0, + "content": "The experimental results are presented in Table 2. Here we used \\(\\lambda = 0.005\\) for all models. The results demonstrate that employing a stronger LLM is crucial for generating images with higher fidelity and quality. Gemma-2 achieves significantly lower FID scores compared to Gemma-1, highlighting that even though LLM pre-training is unimodal, without exposure to visual data, using a better LLM" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.937, + 0.505, + 0.948 + ], + "angle": 0, + "content": "6" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.092, + 0.09, + 0.345, + 0.271 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.367, + 0.091, + 0.621, + 0.271 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.651, + 0.092, + 0.898, + 0.27 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.17, + 0.281, + 0.825, + 0.348 + ], + "angle": 0, + "content": "Figure 3: Plot of image understanding and generation performance with varying \\(\\lambda\\) during training. We show the FID on MS-COCO, the average score for downstream captioning tasks (Cap Avg), and the average score for downstream QA tasks (QA Avg) with different \\(\\lambda\\) on the three figures. Green dashed lines show the FID of T2I-only model, blue and red dashed lines show the downstream captioning average and QA average for I2T-only model, respectively. In practice, smaller \\(\\lambda\\) has better trade-off between two tasks." + }, + { + "type": "table_caption", + "bbox": [ + 0.17, + 0.361, + 0.825, + 0.426 + ], + "angle": 0, + "content": "Table 3: Image generation and understanding results with different \\(\\lambda\\). \\(\\lambda = 0.1\\) roughly makes the loss for generation and understanding of the same scale. We present MS-COCO FID and the average captioning and QA results for downstream understanding tasks, compared to the image-to-text (I2T) only baseline. A smaller \\(\\lambda\\) like 0.005 is recommended in most cases as it preserves most of the image understanding capability while enabling the generation of high-quality images and outperforms the text-to-image (T2I) only baseline." + }, + { + "type": "table", + "bbox": [ + 0.174, + 0.427, + 0.822, + 0.495 + ], + "angle": 0, + "content": "
TaskT2I onlyI2T onlyUnified, λ
0.0050.010.050.10.21.02.0
Generation (FID) ↓9.71-8.399.6512.4817.9028.6047.8954.02
Understanding (Cap Avg) ↑-125.05120.25120.25121.18121.68123.65126.25125.80
Understanding (QA Avg) ↑-65.9560.2961.3761.4861.8765.1765.6465.88
" + }, + { + "type": "image", + "bbox": [ + 0.116, + 0.53, + 0.242, + 0.611 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.246, + 0.53, + 0.371, + 0.61 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.377, + 0.53, + 0.501, + 0.61 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.505, + 0.53, + 0.63, + 0.61 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.636, + 0.53, + 0.759, + 0.61 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.767, + 0.53, + 0.889, + 0.61 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.117, + 0.613, + 0.242, + 0.692 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.246, + 0.613, + 0.371, + 0.692 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.377, + 0.613, + 0.501, + 0.692 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.505, + 0.613, + 0.63, + 0.692 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.636, + 0.613, + 0.759, + 0.692 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.767, + 0.613, + 0.889, + 0.692 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.117, + 0.696, + 0.242, + 0.775 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.246, + 0.696, + 0.371, + 0.775 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.377, + 0.696, + 0.501, + 0.775 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.505, + 0.696, + 0.63, + 0.775 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.636, + 0.696, + 0.759, + 0.775 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.767, + 0.696, + 0.889, + 0.775 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.117, + 0.779, + 0.242, + 0.859 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.246, + 0.779, + 0.371, + 0.859 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.377, + 0.779, + 0.501, + 0.873 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.505, + 0.779, + 0.63, + 0.873 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.636, + 0.779, + 0.759, + 0.859 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.767, + 0.779, + 0.889, + 0.859 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.17, + 0.885, + 0.825, + 0.937 + ], + "angle": 0, + "content": "Figure 4: Downstream image understanding performance (after finetuning) on each benchmark (higher is better). Blue bars show performance of model backbone initialized with Gemma-1 2B and green bars show model backbone initialized with Gemma-2 2B. Initializing with stronger LLM help improve UniFluid visual understanding performance on most datasets." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.937, + 0.505, + 0.947 + ], + "angle": 0, + "content": "7" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.076, + 0.019, + 0.268, + 0.168 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.086, + 0.175, + 0.258, + 0.205 + ], + "angle": 0, + "content": "A black basketball shoe with a lightning bolt on it." 
+ }, + { + "type": "image", + "bbox": [ + 0.293, + 0.019, + 0.487, + 0.168 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.334, + 0.176, + 0.444, + 0.192 + ], + "angle": 0, + "content": "A melting apple." + }, + { + "type": "image", + "bbox": [ + 0.511, + 0.019, + 0.703, + 0.168 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.518, + 0.176, + 0.674, + 0.204 + ], + "angle": 0, + "content": "A space elevator in the universe." + }, + { + "type": "image", + "bbox": [ + 0.729, + 0.019, + 0.921, + 0.168 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.736, + 0.176, + 0.89, + 0.205 + ], + "angle": 0, + "content": "human life depicted entirely out of fractals." + }, + { + "type": "image", + "bbox": [ + 0.076, + 0.218, + 0.267, + 0.368 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.087, + 0.375, + 0.26, + 0.402 + ], + "angle": 0, + "content": "Graffiti of a funny dog on a street wall." + }, + { + "type": "image", + "bbox": [ + 0.293, + 0.217, + 0.485, + 0.366 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.299, + 0.375, + 0.454, + 0.417 + ], + "angle": 0, + "content": "Turtle swimming \nunderwater, aesthetic, \nfantasy." + }, + { + "type": "image", + "bbox": [ + 0.512, + 0.218, + 0.704, + 0.366 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.517, + 0.375, + 0.68, + 0.417 + ], + "angle": 0, + "content": "A transparent sculpture of a duck made out of glass." + }, + { + "type": "image", + "bbox": [ + 0.731, + 0.218, + 0.921, + 0.367 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.736, + 0.375, + 0.905, + 0.402 + ], + "angle": 0, + "content": "An armchair in the shape of an avocado." + }, + { + "type": "image", + "bbox": [ + 0.075, + 0.432, + 0.268, + 0.581 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.079, + 0.59, + 0.264, + 0.645 + ], + "angle": 0, + "content": "A hyper-detailed rendering of a geyser erupting in a colorful, geothermal landscape." + }, + { + "type": "image", + "bbox": [ + 0.293, + 0.432, + 0.485, + 0.581 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.299, + 0.59, + 0.452, + 0.633 + ], + "angle": 0, + "content": "An astronaut riding a horse on the moon, oil painting by Van Gogh." + }, + { + "type": "image", + "bbox": [ + 0.512, + 0.432, + 0.704, + 0.58 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.517, + 0.59, + 0.682, + 0.631 + ], + "angle": 0, + "content": "photo of an eagle with a golden crown resting upon its head." + }, + { + "type": "image", + "bbox": [ + 0.731, + 0.432, + 0.921, + 0.58 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.736, + 0.59, + 0.904, + 0.632 + ], + "angle": 0, + "content": "A window with raindrops trickling down, overlooking a blurry city." + }, + { + "type": "image", + "bbox": [ + 0.075, + 0.663, + 0.268, + 0.812 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.08, + 0.822, + 0.258, + 0.904 + ], + "angle": 0, + "content": "An image of a modern architectural building with large glass windows, situated on a cliff overlooking a serene ocean at sunset." 
+ }, + { + "type": "image", + "bbox": [ + 0.293, + 0.663, + 0.485, + 0.812 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.293, + 0.822, + 0.477, + 0.918 + ], + "angle": 0, + "content": "A cozy cabin in the middle of a snowy forest, surrounded by tall trees with lights glowing through the windows, a northern lights display visible in the sky." + }, + { + "type": "image", + "bbox": [ + 0.512, + 0.663, + 0.704, + 0.812 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.517, + 0.822, + 0.695, + 0.918 + ], + "angle": 0, + "content": "A still life of a vase overflowing with vibrant flowers, painted in bold colors and textured brushstrokes, reminiscent of van Gogh's iconic style." + }, + { + "type": "image", + "bbox": [ + 0.731, + 0.663, + 0.921, + 0.812 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.735, + 0.822, + 0.912, + 0.918 + ], + "angle": 0, + "content": "An otherworldly forest of giant glowing mushrooms under a vibrant night sky filled with distant planets and stars, creating a dreamlike, cosmic landscape." + }, + { + "type": "image_caption", + "bbox": [ + 0.225, + 0.919, + 0.773, + 0.933 + ], + "angle": 0, + "content": "Figure 5: Images generated from UniFluid autoregressive model after aesthetic fine-tuning." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.937, + 0.505, + 0.948 + ], + "angle": 0, + "content": "8" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.109, + 0.037, + 0.278, + 0.21 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.283, + 0.046, + 0.477, + 0.188 + ], + "angle": 0, + "content": "A close up view of a wooden pole with the word \"sushi\" in silver letters on it. The pole is in front of a parking lot. The pole is in the foreground, with a concrete sidewalk in the foreground. There are plants and trees in the background. There is a brown brick building with a black awning on the left side of it. There are trees in the background. The sky is clear and blue." + }, + { + "type": "image", + "bbox": [ + 0.497, + 0.038, + 0.663, + 0.206 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.671, + 0.044, + 0.884, + 0.187 + ], + "angle": 0, + "content": "A close up view of a wet white and brown labradoodle dog walking on a gray cement surface. The dog has a red leash attached to its collar. The dog has a silver metal hook attached to its collar. The dog is walking towards the camera. The dog is casting a shadow on the cement surface. Behind the dog is a building with a large window on the side. A tree line is visible in the background. The sky is blue with a few white clouds." + }, + { + "type": "image", + "bbox": [ + 0.11, + 0.228, + 0.342, + 0.355 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.352, + 0.254, + 0.477, + 0.302 + ], + "angle": 0, + "content": "Question: In what year did Panama's population reach 4.28 million?" + }, + { + "type": "image_footnote", + "bbox": [ + 0.353, + 0.314, + 0.43, + 0.326 + ], + "angle": 0, + "content": "Answer: 2020" + }, + { + "type": "image", + "bbox": [ + 0.495, + 0.228, + 0.728, + 0.353 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.747, + 0.243, + 0.864, + 0.302 + ], + "angle": 0, + "content": "Question: What device's average duration of in-app engagement sessions was 6.6 minutes?" 
+ }, + { + "type": "image_footnote", + "bbox": [ + 0.748, + 0.314, + 0.839, + 0.325 + ], + "angle": 0, + "content": "Answer: android" + }, + { + "type": "image", + "bbox": [ + 0.108, + 0.371, + 0.335, + 0.504 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.345, + 0.408, + 0.46, + 0.444 + ], + "angle": 0, + "content": "Question: The bus is likely driving through which American city?" + }, + { + "type": "image_footnote", + "bbox": [ + 0.346, + 0.456, + 0.443, + 0.468 + ], + "angle": 0, + "content": "Answer: new york" + }, + { + "type": "image", + "bbox": [ + 0.494, + 0.372, + 0.721, + 0.504 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.747, + 0.408, + 0.851, + 0.455 + ], + "angle": 0, + "content": "Question: Which number birthday is probably being celebrated?" + }, + { + "type": "image_footnote", + "bbox": [ + 0.748, + 0.468, + 0.827, + 0.48 + ], + "angle": 0, + "content": "Answer: thirty" + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.508, + 0.825, + 0.536 + ], + "angle": 0, + "content": "Figure 6: Finetuned UniFluid model demonstrates strong image-to-text capability on image captioning and question answering." + }, + { + "type": "image", + "bbox": [ + 0.106, + 0.546, + 0.233, + 0.646 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.143, + 0.652, + 0.329, + 0.662 + ], + "angle": 0, + "content": "Change the black suitcase to a white one" + }, + { + "type": "image", + "bbox": [ + 0.234, + 0.547, + 0.363, + 0.646 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.368, + 0.547, + 0.497, + 0.646 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.498, + 0.547, + 0.627, + 0.646 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.406, + 0.652, + 0.562, + 0.663 + ], + "angle": 0, + "content": "Remove the fence from the photo" + }, + { + "type": "image", + "bbox": [ + 0.633, + 0.547, + 0.762, + 0.646 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.763, + 0.547, + 0.892, + 0.646 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.106, + 0.668, + 0.233, + 0.768 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.142, + 0.773, + 0.323, + 0.784 + ], + "angle": 0, + "content": "Change the trees from flowers to icicles" + }, + { + "type": "image", + "bbox": [ + 0.234, + 0.668, + 0.363, + 0.768 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.368, + 0.668, + 0.498, + 0.768 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.498, + 0.668, + 0.627, + 0.768 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.646, + 0.652, + 0.88, + 0.663 + ], + "angle": 0, + "content": "Change this into a 1950s Flintstone cartoon art style" + }, + { + "type": "image", + "bbox": [ + 0.634, + 0.668, + 0.762, + 0.768 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.763, + 0.668, + 0.892, + 0.768 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.106, + 0.79, + 0.233, + 0.89 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.234, + 0.79, + 0.363, + 0.89 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.407, + 0.773, + 0.585, + 0.784 + ], + "angle": 0, + "content": "Change the 
elephant into a giant rabbit" + }, + { + "type": "image", + "bbox": [ + 0.395, + 0.791, + 0.49, + 0.888 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.501, + 0.791, + 0.63, + 0.89 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.633, + 0.79, + 0.762, + 0.888 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.763, + 0.79, + 0.892, + 0.888 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.17, + 0.896, + 0.295, + 0.906 + ], + "angle": 0, + "content": "Change the bird to all white" + }, + { + "type": "image_caption", + "bbox": [ + 0.397, + 0.896, + 0.59, + 0.906 + ], + "angle": 0, + "content": "Turn it into a Paul Cezanne still life painting" + }, + { + "type": "image_caption", + "bbox": [ + 0.692, + 0.896, + 0.827, + 0.906 + ], + "angle": 0, + "content": "Change the meat into carrots" + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.907, + 0.827, + 0.933 + ], + "angle": 0, + "content": "Figure 7: Image editing results on evaluation benchmark from UniFluid autoregressive model after fine-tuning. It handles multiple editing tasks effectively, including object removal, insertion, style and color changes." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.505, + 0.948 + ], + "angle": 0, + "content": "9" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.172, + 0.099, + 0.825, + 0.123 + ], + "angle": 0, + "content": "Table 4: Performance comparison of image generation and understanding of UniFluid trained with different image generation order. FID and CIDEr is measured on MS-COCO." + }, + { + "type": "table", + "bbox": [ + 0.235, + 0.124, + 0.764, + 0.19 + ], + "angle": 0, + "content": "
Generation OrderGenerationUnderstanding
COCO FID ↓GenEval ↑COCO CIDEr ↑Cap Avg ↑QA Avg ↑
Raster8.280.5945.57116.5561.81
Random7.200.5940.91116.1362.10
" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.215, + 0.827, + 0.273 + ], + "angle": 0, + "content": "as a backbone is important for unlocking improved visual quality in a unified model training setup. The image understanding performance also improved slightly when using stronger LLM, which is a trend also noted in PaliGemma2. We show the detailed comparison on each downstream visual understanding benchmark in Figure 4." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.277, + 0.825, + 0.348 + ], + "angle": 0, + "content": "Training with Random Order Helps Generation But Not Understanding. Images inherently possess 2D patterns. As demonstrated in Fluid, raster-order training can be problematic, potentially leading to collapse and artifacts characterized by disappearing patterns. Approaches such as RAR [58] and RandAR [33] propose training image generation AR models with random-order training, which can improve ImageNet FID and result in better visual quality." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.353, + 0.827, + 0.466 + ], + "angle": 0, + "content": "Here we study the effect of different visual generation orders during training within our unified framework setup. We compare the performance between random-order and raster-order training, both with Gemma-2 2B as backbone LLM. The results presented in Table 4 indicate that for per-token image generation within a unified framework, raster-order training continues to underperform compared to random-order generation. Incorporating random-order during training could ensure the generated images are of high quality. However, it does not necessarily improve the visual understanding performance, where raster-order achieves better performance on MS-COCO CIDEr and downstream captioning task average (Cap Avg)." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.48, + 0.422, + 0.495 + ], + "angle": 0, + "content": "5.3 More Generation Capabilities" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.506, + 0.791, + 0.521 + ], + "angle": 0, + "content": "We also verify the transferability of the trained model to various downstream generation tasks." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.526, + 0.827, + 0.556 + ], + "angle": 0, + "content": "Aesthetic Fine-Tuning. To enhance the visual quality and aesthetic appeal of the generated images, we perform aesthetic fine-tuning on a publicly available dataset. The results are shown in Figure 5." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.561, + 0.827, + 0.645 + ], + "angle": 0, + "content": "Image Editing Task. Since our unified framework is trained with multimodal inputs, it can naturally extend to image editing tasks that involve both image and text prompt inputs. We fine-tune the 2B UniFluid model with 4M image editing pairs from HQEdit [18] and UltraEdit [60]. In Figure 7, we apply the fine-tuned model to the input images and editing prompts from a public available benchmark. Although preliminary, the experiments show that UniFluid is able to adapt and generalize to tasks that involve interleaved data modalities." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.664, + 0.303, + 0.68 + ], + "angle": 0, + "content": "6 Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.694, + 0.827, + 0.834 + ], + "angle": 0, + "content": "In this paper, we presented UniFluid, a pure autoregressive framework for joint visual generation and understanding, utilizing continuous visual tokens. 
We identified an inherent trade-off between the visual generation and understanding tasks, but the two tasks can benefit each other with tuned training recipes. Careful choice of the loss balance between the two tasks allows a single unified model to achieve performance comparable to or exceeding single-task baselines. We conducted investigation of key design choices for UniFluid training, revealing the critical importance of employing strong backbone LLM and random-order generation to unlock high-quality visual generation capabilities. We believe that this work encourages future research into the exploration of continuous visual tokens for unified vision-language model training, paving the way for more efficient and powerful autoregressive multimodal systems." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.84, + 0.827, + 0.896 + ], + "angle": 0, + "content": "Acknowledgements. We are grateful to Alex Rizkowsky and Amy Shen for their support in securing computational resources. We also wish to thank Charles Herrmann, Junhwa Hur, Shangbang Long, André Susano Pinto, Srinivas Kaza, David Salesin, and the VisCam team for their insightful discussions and constructive feedback, which greatly improved this work." + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.51, + 0.948 + ], + "angle": 0, + "content": "10" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.174, + 0.09, + 0.27, + 0.107 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.113, + 0.826, + 0.155 + ], + "angle": 0, + "content": "[1] Manoj Acharya, Kushal Kafle, and Christopher Kanan. Tallyqa: Answering complex counting questions. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 8076-8084, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.163, + 0.829, + 0.22 + ], + "angle": 0, + "content": "[2] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in neural information processing systems, 35:23716-23736, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.226, + 0.826, + 0.271 + ], + "angle": 0, + "content": "[3] Lucas Beyer, Andreas Steiner, André Susano Pinto, Alexander Kolesnikov, Xiao Wang, Daniel Salz, Maxim Neumann, Ibrahim Alabdulmohsin, Michael Tschannen, Emanuele Bugliarello, et al. Paligemma: A versatile 3b vlm for transfer. arXiv preprint arXiv:2407.07726, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.277, + 0.826, + 0.321 + ], + "angle": 0, + "content": "[4] Ali Furkan Biten, Ruben Tito, Andres Mafla, Lluis Gomez, Marçal Rusinol, Ernest Valveny, CV Jawahar, and Dimosthenis Karatzas. Scene text visual question answering. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4291-4301, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.327, + 0.826, + 0.371 + ], + "angle": 0, + "content": "[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901, 2020." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.377, + 0.826, + 0.421 + ], + "angle": 0, + "content": "[6] Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. Muse: Text-to-image generation via masked generative transformers. arXiv preprint arXiv:2301.00704, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.427, + 0.826, + 0.47 + ], + "angle": 0, + "content": "[7] Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. Pali: A jointly-scaled multilingual language-image model. arXiv preprint arXiv:2209.06794, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.477, + 0.826, + 0.521 + ], + "angle": 0, + "content": "[8] Xiaokang Chen, Zhiyu Wu, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, and Chong Ruan. Janus-pro: Unified multimodal understanding and generation with data and model scaling. arXiv preprint arXiv:2501.17811, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.527, + 0.829, + 0.582 + ], + "angle": 0, + "content": "[9] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1-113, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.591, + 0.826, + 0.635 + ], + "angle": 0, + "content": "[10] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructlip: Towards general-purpose vision-language models with instruction tuning, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.641, + 0.829, + 0.699 + ], + "angle": 0, + "content": "[11] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, volume 1 (long and short papers), pages 4171–4186, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.705, + 0.826, + 0.749 + ], + "angle": 0, + "content": "[12] Lijie Fan, Tianhong Li, Siyang Qin, Yuanzhen Li, Chen Sun, Michael Rubinstein, Deqing Sun, Kaiming He, and Yonglong Tian. Fluid: Scaling autoregressive text-to-image generative models with continuous tokens. arXiv preprint arXiv:2410.13863, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.755, + 0.826, + 0.798 + ], + "angle": 0, + "content": "[13] Dhruba Ghosh, Hannaneh Hajishirzi, and Ludwig Schmidt. Geneval: An object-focused framework for evaluating text-to-image alignment. Advances in Neural Information Processing Systems, 36:52132-52152, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.805, + 0.826, + 0.848 + ], + "angle": 0, + "content": "[14] Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive networks. In International Conference on Machine Learning, pages 1242–1250. PMLR, 2014." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.855, + 0.826, + 0.911 + ], + "angle": 0, + "content": "[15] Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. 
Vizwiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3608-3617, 2018." + }, + { + "type": "list", + "bbox": [ + 0.173, + 0.113, + 0.829, + 0.911 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "11" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.091, + 0.828, + 0.135 + ], + "angle": 0, + "content": "[16] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.142, + 0.826, + 0.186 + ], + "angle": 0, + "content": "[17] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6700-6709, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.193, + 0.825, + 0.235 + ], + "angle": 0, + "content": "[18] Mude Hui, Siwei Yang, Bingchen Zhao, Yichun Shi, Heng Wang, Peng Wang, Yuyin Zhou, and Cihang Xie. Hq-edit: A high-quality dataset for instruction-based image editing. arXiv preprint arXiv:2404.09990, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.244, + 0.825, + 0.301 + ], + "angle": 0, + "content": "[19] Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. A diagram is worth a dozen images. In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV 14, pages 235-251. Springer, 2016." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.309, + 0.825, + 0.339 + ], + "angle": 0, + "content": "[20] Taku Kudo and John Richardson. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.346, + 0.826, + 0.388 + ], + "angle": 0, + "content": "[21] Tianhong Li, Yonglong Tian, He Li, Mingyang Deng, and Kaiming He. Autoregressive image generation without vector quantization. Advances in Neural Information Processing Systems, 37:56424-56445, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.397, + 0.826, + 0.44 + ], + "angle": 0, + "content": "[22] Yang Li, Gang Li, Luheng He, Jingjie Zheng, Hong Li, and Zhiwei Guan. Widget captioning: Generating natural language description for mobile user interface elements. arXiv preprint arXiv:2010.04295, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.448, + 0.826, + 0.491 + ], + "angle": 0, + "content": "[23] Ji Lin, Hongxu Yin, Wei Ping, Pavlo Molchanov, Mohammad Shoeybi, and Song Han. Vila: On pre-training for visual language models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 26689-26699, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.499, + 0.826, + 0.556 + ], + "angle": 0, + "content": "[24] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. 
In Computer vision-ECCV 2014: 13th European conference, zurich, Switzerland, September 6-12, 2014, proceedings, part v 13, pages 740-755. Springer, 2014." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.564, + 0.826, + 0.607 + ], + "angle": 0, + "content": "[25] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26296-26306, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.615, + 0.826, + 0.644 + ], + "angle": 0, + "content": "[26] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36:34892-34916, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.652, + 0.827, + 0.693 + ], + "angle": 0, + "content": "[27] Sylvain Lobry, Diego Marcos, Jesse Murray, and Devis Tuia. Rsvqa: Visual question answering for remote sensing data. IEEE Transactions on Geoscience and Remote Sensing, 58(12):8555-8566, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.703, + 0.827, + 0.759 + ], + "angle": 0, + "content": "[28] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507–2521, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.767, + 0.827, + 0.811 + ], + "angle": 0, + "content": "[29] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pages 3195-3204, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.818, + 0.826, + 0.86 + ], + "angle": 0, + "content": "[30] Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. Chartqa: A benchmark for question answering about charts with visual and logical reasoning. arXiv preprint arXiv:2203.10244, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.869, + 0.827, + 0.913 + ], + "angle": 0, + "content": "[31] Minesh Mathew, Viraj Bagal, Ruben Tito, Dimosthenis Karatzas, Ernest Valveny, and CV Jawahar. Infographicvqa. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1697-1706, 2022." + }, + { + "type": "list", + "bbox": [ + 0.174, + 0.091, + 0.828, + 0.913 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "12" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.091, + 0.828, + 0.135 + ], + "angle": 0, + "content": "[32] Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. Docvqa: A dataset for vqa on document images. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 2200-2209, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.145, + 0.828, + 0.188 + ], + "angle": 0, + "content": "[33] Ziqi Pang, Tianyuan Zhang, Fujun Luan, Yunze Man, Hao Tan, Kai Zhang, William T Freeman, and Yu-Xiong Wang. Randar: Decoder-only autoregressive visual generation in random orders. arXiv preprint arXiv:2412.01827, 2024." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.199, + 0.827, + 0.241 + ], + "angle": 0, + "content": "[34] Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. In International conference on machine learning, pages 4055-4064. PMLR, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.252, + 0.825, + 0.283 + ], + "angle": 0, + "content": "[35] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. Technical Report, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.293, + 0.827, + 0.322 + ], + "angle": 0, + "content": "[36] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.332, + 0.825, + 0.375 + ], + "angle": 0, + "content": "[37] Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. A-okvqa: A benchmark for visual question answering using world knowledge. In European conference on computer vision, pages 146–162. Springer, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.386, + 0.827, + 0.429 + ], + "angle": 0, + "content": "[38] Weijia Shi, Xiaochuang Han, Chunting Zhou, Weixin Liang, Xi Victoria Lin, Luke Zettlemoyer, and Lili Yu. Llamafusion: Adapting pretrained language models for multimodal generation. arXiv preprint arXiv:2412.15188, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.44, + 0.825, + 0.496 + ], + "angle": 0, + "content": "[39] Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, and Amanpreet Singh. Textcaps: a dataset for image captioning with reading comprehension. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part II 16, pages 742-758. Springer, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.507, + 0.825, + 0.551 + ], + "angle": 0, + "content": "[40] Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards vqa models that can read. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8317-8326, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.561, + 0.825, + 0.604 + ], + "angle": 0, + "content": "[41] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.614, + 0.827, + 0.657 + ], + "angle": 0, + "content": "[42] Andreas Steiner, André Susano Pinto, Michael Tschannen, Daniel Keysers, Xiao Wang, Yonatan Bitton, Alexey Gritsanko, Matthias Minderer, Anthony Sherbondy, Shangbang Long, et al. Paligemma 2: A family of versatile vlms for transfer. arXiv preprint arXiv:2412.03555, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.668, + 0.827, + 0.711 + ], + "angle": 0, + "content": "[43] Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. A corpus for reasoning about natural language grounded in photographs. arXiv preprint arXiv:1811.00491, 2018." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.722, + 0.827, + 0.765 + ], + "angle": 0, + "content": "[44] Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Emu: Generative pretraining in multimodality. arXiv preprint arXiv:2307.05222, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.775, + 0.825, + 0.819 + ], + "angle": 0, + "content": "[45] Yutao Sun, Hangbo Bao, Wenhui Wang, Zhiliang Peng, Li Dong, Shaohan Huang, Jianyong Wang, and Furu Wei. Multimodal latent language modeling with next-token diffusion. arXiv preprint arXiv:2412.08635, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.83, + 0.825, + 0.857 + ], + "angle": 0, + "content": "[46] Chameleon Team. Chameleon: Mixed-modal early-fusion foundation models. arXiv preprint arXiv:2405.09818, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.869, + 0.825, + 0.913 + ], + "angle": 0, + "content": "[47] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Riviere, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024." + }, + { + "type": "list", + "bbox": [ + 0.174, + 0.091, + 0.828, + 0.913 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "13" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.091, + 0.828, + 0.148 + ], + "angle": 0, + "content": "[48] Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.157, + 0.826, + 0.2 + ], + "angle": 0, + "content": "[49] Keyu Tian, Yi Jiang, Zehuan Yuan, Bingyue Peng, and Liwei Wang. Visual autoregressive modeling: Scalable image generation via next-scale prediction. Advances in neural information processing systems, 37:84839-84865, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.208, + 0.827, + 0.252 + ], + "angle": 0, + "content": "[50] Shengbang Tong, David Fan, Jiachen Zhu, Yunyang Xiong, Xinlei Chen, Koustuv Sinha, Michael Rabbat, Yann LeCun, Saining Xie, and Zhuang Liu. Metamorph: Multimodal understanding and generation via instruction tuning. arXiv preprint arXiv:2412.14164, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.259, + 0.827, + 0.302 + ], + "angle": 0, + "content": "[51] Aaron Van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelCNN decoders. Advances in neural information processing systems, 29, 2016." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.31, + 0.825, + 0.34 + ], + "angle": 0, + "content": "[52] Aäron Van Den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International conference on machine learning, pages 1747-1756. PMLR, 2016." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.348, + 0.826, + 0.392 + ], + "angle": 0, + "content": "[53] Bryan Wang, Gang Li, Xin Zhou, Zhourong Chen, Tovi Grossman, and Yang Li. Screen2words: Automatic mobile ui summarization with multimodal learning. 
In The 34th Annual ACM Symposium on User Interface Software and Technology, pages 498-510, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.4, + 0.826, + 0.443 + ], + "angle": 0, + "content": "[54] Chengyue Wu, Xiaokang Chen, Zhiyu Wu, Yiyang Ma, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, Chong Ruan, et al. Janus: Decoupling visual encoding for unified multimodal understanding and generation. arXiv preprint arXiv:2410.13848, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.451, + 0.827, + 0.506 + ], + "angle": 0, + "content": "[55] Jinheng Xie, Weijia Mao, Zechen Bai, David Junhao Zhang, Weihao Wang, Kevin Qinghong Lin, Yuchao Gu, Zhijie Chen, Zhenheng Yang, and Mike Zheng Shou. Show-o: One single transformer to unify multimodal understanding and generation. arXiv preprint arXiv:2408.12528, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.516, + 0.826, + 0.56 + ], + "angle": 0, + "content": "[56] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.568, + 0.827, + 0.623 + ], + "angle": 0, + "content": "[57] Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789, 2(3):5, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.633, + 0.827, + 0.663 + ], + "angle": 0, + "content": "[58] Qihang Yu, Ju He, Xueqing Deng, Xiaohui Shen, and Liang-Chieh Chen. Randomized autoregressive visual generation. arXiv preprint arXiv:2411.00776, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.671, + 0.826, + 0.714 + ], + "angle": 0, + "content": "[59] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF international conference on computer vision, pages 11975-11986, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.722, + 0.826, + 0.765 + ], + "angle": 0, + "content": "[60] Haozhe Zhao, Xiaojian Shawn Ma, Liang Chen, Shuzheng Si, Rujie Wu, Kaikai An, Peiyu Yu, Minjia Zhang, Qing Li, and Baobao Chang. Ultraedit: Instruction-based fine-grained image editing at scale. Advances in Neural Information Processing Systems, 37:3058-3093, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.773, + 0.827, + 0.817 + ], + "angle": 0, + "content": "[61] Chunting Zhou, Lili Yu, Arun Babu, Kushal Tirumala, Michihiro Yasunaga, Leonid Shamis, Jacob Kahn, Xuezhe Ma, Luke Zettlemoyer, and Omer Levy. Transfusion: Predict the next token and diffuse images with one multi-modal model. arXiv preprint arXiv:2408.11039, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.825, + 0.827, + 0.867 + ], + "angle": 0, + "content": "[62] Deyao Zhu, Jun Chen, Xiaogian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023." 
+ }, + { + "type": "list", + "bbox": [ + 0.174, + 0.091, + 0.828, + 0.867 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "14" + } + ] +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13436/53224727-d9f9-435a-ad74-c91800bf8f34_origin.pdf b/data/2025/2503_13xxx/2503.13436/53224727-d9f9-435a-ad74-c91800bf8f34_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..33e576d9db15ec129fb64eb1edf6377a4a471768 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13436/53224727-d9f9-435a-ad74-c91800bf8f34_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:085b1288cd16795c3f6ef8bf1df020a900b5b77e36e4f82a3eb1a3273476e707 +size 2673740 diff --git a/data/2025/2503_13xxx/2503.13436/full.md b/data/2025/2503_13xxx/2503.13436/full.md new file mode 100644 index 0000000000000000000000000000000000000000..8ab6c25759bca5c6a80438a18c8288ae77041b47 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13436/full.md @@ -0,0 +1,380 @@ +# Unified Autoregressive Visual Generation and Understanding with Continuous Tokens + +Lijie Fan $^{1,*}$ Luming Tang $^{1,*}$ Siyang Qin $^{1,*}$ Tianhong Li $^{2}$ Xuan Yang $^{1}$ Siyuan Qiao $^{1}$ Andreas Steiner $^{1}$ Chen Sun $^{1}$ Yuanzhen Li $^{1}$ Tao Zhu $^{1}$ Michael Rubinstein $^{1}$ Michalis Raptis $^{1}$ Deqing Sun $^{1,\dagger}$ Radu Soricut $^{1,\dagger}$ Google DeepMind ${}^{2}\mathrm{MIT}\quad {*},$ equal contribution + +![](images/fe57fbb071286d72cde1ae0a4dc870c142112e69c0e374902ceb8453aa9cf6c0.jpg) +Figure 1: Generated images from our UniFluid autoregressive model after aesthetic finetuning. + +# Abstract + +We present UniFluid, a unified autoregressive framework for joint visual generation and understanding leveraging continuous visual tokens. Our unified autoregressive architecture processes multimodal image and text inputs, generating discrete tokens for text and continuous tokens for image. We find though there is an inherent trade-off between the image generation and understanding task, a carefully tuned training recipe enables them to improve each other. By selecting an appropriate loss balance weight, the unified model achieves results comparable to or exceeding those of single-task baselines on both tasks. Furthermore, we demonstrate that employing stronger pre-trained LLMs and random-order generation during training is important to achieve high-fidelity image generation within this unified framework. Built upon the Gemma model series, UniFluid exhibits competitive performance across both image generation and understanding, demonstrating strong transferability to various downstream tasks, including image editing for generation, as well as visual captioning and question answering for understanding. + +# 1 Introduction + +Large Language Models (LLMs) have recently advanced from text-centric architectures, such as BERT [11] and GPT [35], toward multimodal systems capable of understanding and generating content across different modalities. GPT-3 [5] and PaLM [9] show that scaling language models leads to emergent capabilities, while Flamingo [2] further demonstrates that incorporating visual inputs facilitates unified multimodal reasoning. 
This trend toward unified vision-language model—using a single model for diverse tasks of visual understanding and generation—has promising potential for leveraging knowledge and reasoning abilities that transfer across different vision and language tasks, ultimately facilitating more robust and generalizable multimodal representation and modeling capabilities. + +Motivated by the advantages and strong scaling properties of autoregressive models, coupled with their simplicity, we investigate a pure autoregressive framework for unified visual generation and understanding, without the limitations introduced by vector quantization (VQ). In this paper, we introduce UniFluid, a unified framework that leverages continuous visual tokens within an autoregressive architecture to jointly handle vision-language generation and understanding tasks. Building upon pre-trained Gemma [48] on large-scale text corpus, UniFluid unlocks powerful visual generation and understanding capabilities through training with paired image-text data, and further allows these two tasks to mutually benefit each other within a single architecture. + +Specifically, UniFluid adopts a unified autoregressive framework where both text and continuous visual inputs are embedded as tokens in the same space, enabling seamless joint training of image generation and understanding tasks. UniFluid integrates a continuous tokenizer [12, 21] for image generation and a pre-trained SigLIP [59] image encoder for understanding tasks, while textual inputs are processed using a standard SentencePiece tokenizer [20]. The resulting multimodal sequences are modeled autoregressively using Gemma [48] as the underlying transformer backbone. Task-specific prediction heads—a diffusion-based head for image generation and a cross-entropy head for text generation—ensure effective modality-specific training and inference, enabling UniFluid to efficiently learn shared representations that mutually enhance its generation and understanding capabilities. + +Our experiments demonstrate several key advantages of the proposed unified training strategy. We find though there is a trade-off between the two tasks, a carefully tuned training recipe can allow the tasks to support each other and outperform the single-task baselines. Effectively balancing the loss between the tasks allows a single model that performs both with results superior to or on par with single-task models. Moreover, the choice of pre-trained LLM backbone significantly impacts visual generation performance. We also find that while employing random generation order is essential for high-quality image synthesis, it is less critical for understanding tasks. Finally, our unified pre-trained models show strong generalization and transferability, achieving compelling results in downstream applications, including image editing and various vision-language understanding benchmarks. + +# 2 Related Works + +Multimodal Large Language Models. Multimodal Large Language Models [2, 10, 23, 26, 25, 3, 42] have shown significant performance in visual understanding tasks. Flamingo [2] adopted a frozen LLM and vision encoder, utilizing perceiver with cross-attention to bridge the modalities. LLaVA [26, 25] proposed instruction tuning over pre-trained LLMs with multimodal inputs to align a pre-trained image encoder into the LLM's embedding space, thereby enabling it with visual understanding and instruction following capabilities. 
MiniGPT-4 [62] and mPLUG-Owl [56] have shown vision encoders can be connected to LLMs through projection layers, demonstrating sophisticated visual reasoning capabilities. The PaliGemma [3, 42] series built upon the Gemma [47, 48] model family to develop versatile vision-language models capable of strong transfer to diverse downstream visual understanding tasks. + +Autoregressive Image Generation. While diffusion models [41, 36] have achieved impressive success in image generation, autoregressive image generation methods have also shown significant development, driven by their simplicity and closeness to LLM training paradigms. A large body of research centers on tokenizing images into discrete tokens and applying autoregressive objectives to these discrete representations. Notable examples include Parti [57] and Muse [6]. [49] proposes an approach that operates on image scales, progressively refining resolutions from coarse to fine through + +![](images/21fcf027d2bdd513ed513d39f97fd528b50bf3f4df10285dbb085df06e72a1cd.jpg) +Figure 2: High-level illustration of UniFluid. UniFluid performs joint training of image generation and understanding tasks through next token prediction. For image embeddings, we use a VAE image tokenizer for generation, and a SigLIP image encoder for understanding. BOI/BOS stands for beginning of Image/Sentence. + +next-scale prediction. Alternatively, works such as MAR [21], Fluid [12], and techniques employing per-token diffusion heads on top of LLM-predicted embeddings have explored autoregressive image generation with continuous visual tokens. + +Unified Multimodal Models. There is growing research interests in unifying visual generation and understanding within a single model. VQ-based models, such as Chameleon [46], Emu [44], and Janus [54, 8], propose encoding visual inputs into discrete tokens and unifying tasks into next-token prediction within this discrete token space. Models with hybrid training targets, such as Transfusion [61, 38] and Show-O [55], aim to unify next-token prediction objectives with diffusion objectives within a single framework. MetaMorph [50] maintains the autoregressive objective by regressing visual SigLIP [59] features using an LLM, but necessitates a separate diffusion model to decode the predicted latent features into images. Our approach distinguishes itself by performing per-token autoregressive generation using continuous visual tokens. This maintains the next-token prediction objective, while not being limited by the vector quantized tokens. [45] also explores using continuous tokens to generate multimodal outputs. + +# 3 Method + +In this section, we illustrate the architecture of our UniFluid model. The model expects both image and text sequences as input and achieves joint training on both generation and understanding tasks, using next-token prediction as its training objective. + +# 3.1 Unified Autoregressive Modeling with Continuous Visual Tokens + +Our approach leverages the autoregressive paradigm to unify visual understanding and generation within a single framework. Given an ordered sequence of tokens $X = \{x^{1},\dots,x^{n}\}$ , the autoregressive model [34, 14, 51, 52] factorizes the joint probability distribution as a product of conditional probabilities, effectively framing the generation task as a sequential "next token prediction" problem: $p(X) = \prod_{i = 1}^{n}p(x^{i}\mid x^{1},\dots,x^{i - 1})$ . 
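To make this factorization concrete, the following is a minimal NumPy sketch, not UniFluid's implementation: a toy bigram table stands in for the transformer backbone, and the sequence log-likelihood is accumulated one conditional term at a time, exactly as in the next-token-prediction view above. The vocabulary size, table values, and function names are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for the backbone: a fixed bigram table over a 4-symbol vocabulary.
# The only point being illustrated is the chain-rule factorization
#   p(X) = prod_i p(x^i | x^1, ..., x^{i-1}),
# scored here one conditional term at a time.
rng = np.random.default_rng(0)
V = 4
bigram = rng.random((V, V))
bigram /= bigram.sum(axis=1, keepdims=True)  # each row is p(next | previous)
prior = np.full(V, 1.0 / V)                  # distribution of the first token

def sequence_log_prob(tokens):
    """Sum of per-token conditional log-probabilities (next-token prediction view)."""
    logp = np.log(prior[tokens[0]])
    for prev, cur in zip(tokens[:-1], tokens[1:]):
        logp += np.log(bigram[prev, cur])
    return logp

print(sequence_log_prob([0, 2, 1, 3]))
```
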
As shown in MAR [21] and Fluid [12], this autoregressive formulation is applicable for both discrete tokens and continuous tokens. In UniFluid, we exploit this property to enable the generation of continuous visual tokens under the unified decoder-only framework. Our model is modality-agnostic; both text and image tokens are treated as elements within a long unified sequence, and their respective logits are predicted iteratively in an autoregressive manner by the backbone transformer. + +To accommodate the distinct nature of text and image modalities, we employ modality-specific prediction heads to calculate the appropriate loss functions and sampling for each modality. This unified approach allows the model to learn a shared representation space through the unified training procedure, facilitating synergistic learning and enabling seamless transitions between visual generation and understanding. + +# 3.2 Architecture + +As depicted in Figure 2, UniFluid employs a unified framework where both text and image inputs are tokenized and projected into a shared embedding space. This allows it to leverage a decoder-only transformer as the core backbone for the unified autoregressive task. Text inputs are tokenized using the SentencePiece tokenizer. This results in discrete tokens with a vocabulary size of $V$ . For image generation, images are encoded into continuous visual tokens using a continuous variational autoencoder. To facilitate the process of image understanding, following PaliGemma, we used SigLIP as a separate image encoder to extract high-level information from the visual inputs. UniFluid consists of a classification head to convert the transformer's text logits into a categorical distribution, and a diffusion head to convert image logits into a per-token probability distribution. + +The inherent structure of text as a linear sequence aligns well with the standard 1D positional embeddings of the LLM, which are sufficient for text modeling and image understanding tasks. However, image tokens possess a 2D spatial structure. To capture this inherent 2D nature, we incorporate learnable 2D positional embeddings, which are added to the image token embeddings. Meanwhile, inspired by [58], to achieve random order generation, a position embedding for the next predicted token is also added to each image token. To enhance the model's ability to initiate and guide image generation, we prepend a "Beginning of Image" (BOI) token to the sequence of continuous image tokens. This BOI token serves as a distinct signal, indicating the start of the visual generation process. Given that the sequence length for generated image tokens is predefined (256 tokens for 256x256 images), an explicit "End of Image" token is unnecessary in our case. + +# 4 Implementation + +# 4.1 Training + +Per-token Classification Head for Discrete Text Tokens. We employ the same SentencePiece tokenizer as Gemma for text tokenization. The transformer's output logits for text are transformed into categorical probability distributions over the vocabulary, and we apply the standard cross-entropy loss, denoted as $L_{Text}$ , to optimize the prediction of these discrete text tokens. + +Per-token Diffusion Head for Continuous Visual Tokens. We adopt the same continuous tokenizer as Fluid to embed 256x256 images into 32x32x4 continuous tokens, and use a patch size of 2 to merge 4 tokens into one. To model the per-token distribution of these predicted continuous visual tokens, we employ a lightweight MLP as a diffusion head. 
We adopt the same diffusion process and loss function, denoted as $\mathcal{L}_{\text{Visual}}$ , as in [21, 12], which is specifically tailored for continuous visual token prediction. For the understanding task, the input image resolution is $224 \times 224$ , and we use SigLIP as the image encoder. Note that the SigLIP features are only used as prefix for the understanding task during training, and no losses are added on top of them. + +# Task-Specific Training Configurations. + +Image Understanding: For image understanding tasks, the model is provided with image embeddings and question tokens as input prefix. Following PaliGemma, we apply a bidirectional attention mask to both image and question tokens. A causal attention mask is applied to the answer tokens, ensuring that the model only attends to previous answer tokens during autoregressive generation. The text token loss, $\mathcal{L}_{\text{Text}}$ , is calculated specifically on the answer text tokens. + +Image Generation: Conversely, for image generation tasks, text prompts are provided as conditioning inputs. To maintain the appropriate information flow, we employ a bidirectional attention mask for the text prompt tokens, enabling them to attend to all other text tokens. A causal attention mask is applied to the image tokens, ensuring that each image token only attends to preceding image tokens. The visual token loss, $L_{\text{Visual}}$ , is calculated on the generated image tokens. + +Unified Loss Function. The total training loss for UniFluid is a weighted sum of the text token prediction loss and the visual token prediction loss, defined as: $\mathcal{L} = \mathcal{L}_{\text{Visual}} + \lambda \cdot \mathcal{L}_{\text{Text}}$ where $\lambda$ is a hyper-parameter that represents the weight assigned to the text token prediction loss, allowing us to balance the contributions of the two modalities during training. + +Training Details. We train the model with a batch size of 2048 using the AdamW optimizer with a learning rate of 1e-4. The training process consists of 1 million steps with a constant learning rate schedule and a warm-up period of 65k steps. Following [58], for image generation, the image token order is randomly permuted during the initial 300k training iterations, then linearly anneals to raster between 300k and 600k iterations, and finally sticks to raster order for the subsequent 400k steps. Except for the comparison with Gemma-1, we use the Gemma-2 model series as the backbone transformer for all experiments in this paper. + +# 4.2 Inference + +For text decoding, we employ categorical sampling for each generated text prediction. The predicted token is then selected from the vocabulary $V$ based on the sampled probability distribution. We use the same decoding strategy as PaliGemma. Greedy decoding is used for all tasks except for downstream COCOcap (beam search $n = 2$ ) and TextCaps (beam search $n = 3$ ). For image decoding, we use a diffusion sampling process to generate continuous visual tokens with diffusion sampling step set to 100 in our implementation. + +As both text and image generation are performed at the token level, with predictions occurring one token at a time under a causal attention mechanism, we can efficiently utilize Key-Value (KV) caching. This optimization is applicable to both discrete text tokens and continuous visual tokens, significantly accelerating the inference process. + +# 5 Experiment + +# 5.1 Setup + +Datasets. 
We train our models using the WebLI dataset [7], a collection of high-quality image-text pairs. For visual generation, we follow Fluid to employ a WebLI subset of image and text descriptions specifically for the generation task. For visual understanding, consistent with PaliGemma, we leverage the image-text description pairs and image question-answer pairs that are also available within WebLI. + +Evaluation Metrics. We assess the image generation quality using the FID [16] score on 30K images of the MS-COCO [24] training set and evaluate performance on the GenEval [13] benchmark, where we use the original text prompt without any rewrites. For evaluating visual understanding performance, we use the caption CIDEr score on MS-COCO. Given our similar training dataset and setup to PaliGemma, we also evaluate the finetuning performance on a variety of captioning and question answering tasks. We report the average score on 4 Captioning tasks, including COCOcap [24], Screen2Words [53], TextCaps [39], WidgetCap [22], and 20 QA tasks, including OKVQA [29], AOKVQA-MC [37], AOKVQA-DA [37], GQA [17], NLVR2 [43], AI2D [19], ScienceQA [28], RSVQA-lr [27], RSVQA-hr (test/test2) [27], ChartQA (human/aug) [30], VizWizVQA [15], TallyQA (simple/complex) [1], CountBenchQA [3], TextVQA [40], DocVQA [32], InfoVQA [31], ST-VQA [4]. + +In the following sections, we present the experimental results obtained under different configurations of UniFluid, providing insights into the relationship between the two tasks and highlighting key design choices for UniFluid training. + +# 5.2 Main Results + +Unified Training Improves Generation Performance. To evaluate the effectiveness of the unified training framework and determine whether unified training offers advantages compared to training separate models for different tasks, we perform controlled experiments to analyze the performance of models trained with a single task. + +We first compare the visual generation performance of the model trained under the unified training objective with the performance of a text-to-image model (T2I only), trained solely with the visual + +Table 1: Unified training achieves better generation performance than text-to-image only training. We evaluate the performance using MS-COCO zero-shot FID and GenEval score. + +
| Training Target | Size | FID ↓ | GenEval ↑ |
| --- | --- | --- | --- |
| T2I only | 0.7B | 9.71 | 0.50 |
| Unified | 0.7B | 8.39 | 0.52 |
| T2I only | 2B | 7.88 | 0.59 |
| Unified | 2B | 7.20 | 0.59 |
+ +autoregressive objective for the generation task. We ensure that the total number of visual tokens for training is the same for the visual generation loss in both the unified model training and text-to-image only training scenarios. The generation performance comparison is presented in Table 1. The unified model achieves better performance compared to the T2I only model, despite both models having observed the same number of tokens for the visual generation task. This suggests that unified model training can be beneficial for visual generation tasks, and that visual understanding ability has the potential to unlock enhanced visual generation quality. + +Trade-off Between Generation and Understanding. We also investigate whether the visual generation task can contribute to improved visual understanding performance. In the UniFluid unified training setup, the hyperparameter $\lambda$ controls the balance between the losses applied to image tokens and text tokens. + +In Table 3 and Figure 3, we present the understanding and generation results with varying $\lambda$ of the 0.7B model. We compare the transfer performance to downstream understanding tasks between the unified model with different $\lambda$ and a image-to-text model (I2T only), trained solely with the image understanding objective. Within the unified training setup, a trade-off exists between visual generation and understanding tasks, which can be effectively controlled by adjusting the loss mixing weight, $\lambda$ . While increasing $\lambda$ can improve image understanding performance, ultimately exceeding the I2T-only baseline for downstream captioning, it concurrently diminishes the image generation capabilities. + +In most scenarios, a smaller $\lambda$ value (e.g., 0.005) is advisable, maintaining a significant proportion (over $90\%$ ) of image understanding while supporting the generation of high-fidelity images. Larger $\lambda$ values, in contrast, strongly favor image understanding but result in a rapid drop of image generation ability, as indicated by a sharp rise in FID score when $\lambda$ exceeds 0.1. Qualitative results for image captioning and question answering, demonstrating the understanding capabilities of the fine-tuned model based on the unified model with Gemma-2 2B as backbone LLM and $\lambda = 0.005$ are presented in Figure 6. + +Better Pre-trained LLM Backbone Leads to Better Visual Generation and Understanding. We investigate the effect of pre-trained LLMs within the unified model training setup, specifically examining whether more powerful LLMs contribute to enhanced image understanding performance and superior visual generation quality. To this end, we conducted experiments using Gemma-1 2B [47] and Gemma-2 2B [47] as backbone LLMs. Gemma-2 is a stronger LLM than Gemma-1 with $10\%$ average improvements across different text benchmarks. + +Table 2: Performance comparison of image generation and understanding of UniFluid trained with different LLM backbone. FID and CIDEr are measured on MS-COCO. Gemma-2 gets much better performance compared to Gemma-1, for both image understanding and generation tasks. + +
| Pretrained Model | Generation: COCO FID ↓ | Generation: GenEval ↑ | Understanding: COCO CIDEr ↑ | Understanding: Cap Avg ↑ | Understanding: QA Avg ↑ |
| --- | --- | --- | --- | --- | --- |
| Gemma-1 | 9.73 | 0.52 | 38.02 | 113.40 | 60.21 |
| Gemma-2 | 7.20 | 0.59 | 40.91 | 116.13 | 62.10 |
+ +The experimental results are presented in Table 2. Here we used $\lambda = 0.005$ for all models. The results demonstrate that employing a stronger LLM is crucial for generating images with higher fidelity and quality. Gemma-2 achieves significantly lower FID scores compared to Gemma-1, highlighting that even though LLM pre-training is unimodal, without exposure to visual data, using a better LLM + +![](images/27d4b879d1e6ed0c16ebbf7e8c1a05a9de1f4ec816c64e73bb7cd204bec2ff52.jpg) +Figure 3: Plot of image understanding and generation performance with varying $\lambda$ during training. We show the FID on MS-COCO, the average score for downstream captioning tasks (Cap Avg), and the average score for downstream QA tasks (QA Avg) with different $\lambda$ on the three figures. Green dashed lines show the FID of T2I-only model, blue and red dashed lines show the downstream captioning average and QA average for I2T-only model, respectively. In practice, smaller $\lambda$ has better trade-off between two tasks. + +![](images/c28e0b1d3d4b55152d911ee6364845722e0f928715ebd8b2ba2f2245e04ae38f.jpg) + +![](images/f01bbba059cb1c487d2b7bad0c5ebf483e7592f5c196f7ec35b2335a3abc2546.jpg) + +Table 3: Image generation and understanding results with different $\lambda$ . $\lambda = 0.1$ roughly makes the loss for generation and understanding of the same scale. We present MS-COCO FID and the average captioning and QA results for downstream understanding tasks, compared to the image-to-text (I2T) only baseline. A smaller $\lambda$ like 0.005 is recommended in most cases as it preserves most of the image understanding capability while enabling the generation of high-quality images and outperforms the text-to-image (T2I) only baseline. + +
| Task | T2I only | I2T only | Unified (λ=0.005) | Unified (λ=0.01) | Unified (λ=0.05) | Unified (λ=0.1) | Unified (λ=0.2) | Unified (λ=1.0) | Unified (λ=2.0) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Generation (FID) ↓ | 9.71 | - | 8.39 | 9.65 | 12.48 | 17.90 | 28.60 | 47.89 | 54.02 |
| Understanding (Cap Avg) ↑ | - | 125.05 | 120.25 | 120.25 | 121.18 | 121.68 | 123.65 | 126.25 | 125.80 |
| Understanding (QA Avg) ↑ | - | 65.95 | 60.29 | 61.37 | 61.48 | 61.87 | 65.17 | 65.64 | 65.88 |
+ +![](images/f5b44fb22df8ee423bbb5c991ec67c67c311aac003801a68e36de4d1a80aa028.jpg) + +![](images/e604f122f83cb527d28352cf1e5e4b880d31afe49ece6c4f36cdeba5b992241f.jpg) + +![](images/c46e9fbda409dd9b0d6fe25cf1f0c7088f846324cf25ae0edc1adb8a3f609780.jpg) + +![](images/f47624129a101646b71e64eed6aa1212c7690071847fef6726e962f91d51a79a.jpg) + +![](images/0f43ac3efb9385ab77e2cb077a6165efd276ed993423b635fcddef5a6344bb93.jpg) + +![](images/0fa9c60d7e9f9b58905e06c1a86f5e7d8e6d6f7ac03fd1e9ce36f146966739e7.jpg) + +![](images/b7e9b91e62274b99585c58f8e8402e7149ff8e358ebfe68f4b2866fc814b6e9f.jpg) + +![](images/4af61b319ebef54c220450bb22d985e5d8a8ac0d51b79951a276a4bd1185093d.jpg) + +![](images/1268d8942fcae33e355da233b87b1ba2aa1da48b738edce9e834235d40b7d2ef.jpg) + +![](images/055827d4421ec7ad88afd48e896d72caedb2e029a4b9e0133d779db345ac66b4.jpg) + +![](images/3360fd5ca9e05220182a0d5bb8c2c82ab50272d09ff5a52bbccf97b5f84a9be0.jpg) + +![](images/69f5c722e1cfdf648c8c320239579694d5e00c5f385fd8359ec77dc239059df8.jpg) + +![](images/65d7e2652966061b99c7248f64aefbfe77768d356615fa25906dab470109ad38.jpg) + +![](images/2fd077dbbd0d8c49b3d104308ffd8f662dbabfe5228d171df188dcd388f68d7e.jpg) + +![](images/6142c847daaa0b2e444e2eaee75b2301277eeb6f084c630d7a11e3e3d7715b7e.jpg) + +![](images/db70eb466e41c1079e55240a87d754c1630717a9f61f3af9f006a8036540d3c6.jpg) + +![](images/ec817598f9c016649943bcaf3e7f9eea8c6d6ea635c3dbd66d86aac41e302c90.jpg) + +![](images/4c71302f59a6e2c12cdc6bab1d67480bad20db33ec424562bec26d4d367ff76b.jpg) + +![](images/cba4f32c6197716dd8754f4789b7a8866571df39c69c4d735d0ac593280aa660.jpg) +Figure 4: Downstream image understanding performance (after finetuning) on each benchmark (higher is better). Blue bars show performance of model backbone initialized with Gemma-1 2B and green bars show model backbone initialized with Gemma-2 2B. Initializing with stronger LLM help improve UniFluid visual understanding performance on most datasets. + +![](images/f181a225d21891e5163a357e3e714abad9344f5552bb4dc59b2bd38c830fa9da.jpg) + +![](images/cb7ec8bfd66ea26fb70e6ee0daa7cf9e7b4a7c654ee225e88386ca6f754b128b.jpg) + +![](images/7fab743345e3073b4153a8b710ead7da07fac7174f879e5a3a8ffd62deef737a.jpg) + +![](images/11601a09dc1fd36d9290ed9af67eaa56e8cbb16f8407e81c13812bc1a5c2c8b7.jpg) + +![](images/f0ce9ba61697d3680f8cc472b3daf1280274538d8e0d6a2eed7026b12c75840b.jpg) + +![](images/c956187abea4dd8794f3600eded98ff56e89b52dfb2e2578cc297cfc9c1e2c4b.jpg) +A black basketball shoe with a lightning bolt on it. + +![](images/1dda4edef0f15fb07c736b4c19bc3834934ff9cadc02e9f0f49659946ff6d814.jpg) +A melting apple. + +![](images/466d2fe43bf8d14bf7059dbedc9088860424dd3ee80301f589abee1a3c733d74.jpg) +A space elevator in the universe. + +![](images/9bd5a18be00334727765b9a18254bc1ed3b8dc9e3dc00bc5cec75969e5d733d0.jpg) +human life depicted entirely out of fractals. + +![](images/2efd5160ecbb6255aa5aa53f9e7c7990aed82e7424c5e14d7ec64fd6295f8a8b.jpg) +Graffiti of a funny dog on a street wall. + +![](images/eedd97bda7977dc3c2b8ddcaa4797ae048a9f58b38c7e304952ee35bdeb0e617.jpg) +Turtle swimming +underwater, aesthetic, +fantasy. + +![](images/2d819e1f841d2622c73d61555687a7195917f493c4944afb10b5dba54f3e5ca1.jpg) +A transparent sculpture of a duck made out of glass. + +![](images/7ff46f16950151c5e77b47b936d2c67bdc5ecdccfbe3f2e8131383c933a526f9.jpg) +An armchair in the shape of an avocado. + +![](images/db9b12aa29a17e233f26cae458623aad7335736add2998309d2ecc99fc278a1c.jpg) +A hyper-detailed rendering of a geyser erupting in a colorful, geothermal landscape. 
+ +![](images/f54f46e66e32e2eec00ae7b8d752e50c4814067b201f685dac85b03a07a273d6.jpg) +An astronaut riding a horse on the moon, oil painting by Van Gogh. + +![](images/8d1ac576da2e319c5322e81c14503269b1d8b27ac23b92ebc04c6a1e63258500.jpg) +photo of an eagle with a golden crown resting upon its head. + +![](images/a15ecf2565252d6785ad118897c8a9997898e511a7971087b628b411aee66098.jpg) +A window with raindrops trickling down, overlooking a blurry city. + +![](images/1268a3fc86a3637bab1e17dd7e19f49d8912d1bec2f9c972de4d7c3bbe98d92a.jpg) +An image of a modern architectural building with large glass windows, situated on a cliff overlooking a serene ocean at sunset. + +![](images/0b1b1b92592474bf268039868377a95c9b10fbd9c272acae0077e0faa47382d0.jpg) +A cozy cabin in the middle of a snowy forest, surrounded by tall trees with lights glowing through the windows, a northern lights display visible in the sky. + +![](images/a05f6050aea65fd8c472eca0b0aa84d22ae31d07fc18623d18f8abcf22e27deb.jpg) +A still life of a vase overflowing with vibrant flowers, painted in bold colors and textured brushstrokes, reminiscent of van Gogh's iconic style. + +![](images/66c56563088f32892912db5825e6709d4e50a71e4d13352801783949bdaace75.jpg) +An otherworldly forest of giant glowing mushrooms under a vibrant night sky filled with distant planets and stars, creating a dreamlike, cosmic landscape. +Figure 5: Images generated from UniFluid autoregressive model after aesthetic fine-tuning. + +![](images/36f4a8eceafcdb00d110ca517d7b8a9cf4704c36dd59aee4f5c7d4974d7e9486.jpg) +A close up view of a wooden pole with the word "sushi" in silver letters on it. The pole is in front of a parking lot. The pole is in the foreground, with a concrete sidewalk in the foreground. There are plants and trees in the background. There is a brown brick building with a black awning on the left side of it. There are trees in the background. The sky is clear and blue. + +![](images/3b5e953dc74551e08a0fd1c416c0915653361ba3cff1e61815598e1a8cd421bb.jpg) +A close up view of a wet white and brown labradoodle dog walking on a gray cement surface. The dog has a red leash attached to its collar. The dog has a silver metal hook attached to its collar. The dog is walking towards the camera. The dog is casting a shadow on the cement surface. Behind the dog is a building with a large window on the side. A tree line is visible in the background. The sky is blue with a few white clouds. + +![](images/04fe165de52995499a22c711c69eafb3a57240f4b865c524c8ee2c812a8e402b.jpg) +Question: In what year did Panama's population reach 4.28 million? +Answer: 2020 + +![](images/3affb1716f45e3f8a8e892a46ed5c32d321d2f28697ab2b0d93d6ccf76afcff1.jpg) +Question: What device's average duration of in-app engagement sessions was 6.6 minutes? +Answer: android + +Figure 6: Finetuned UniFluid model demonstrates strong image-to-text capability on image captioning and question answering. +![](images/d247a1dbb553478afd47f826cff4a46e58101f933b13bce059a3897dcaba7b76.jpg) +Question: The bus is likely driving through which American city? +Answer: new york + +![](images/56f7755653a55050558209e33a525944678ce07f18db276e8244c8c56aa35766.jpg) +Question: Which number birthday is probably being celebrated? 
+Answer: thirty + +![](images/79f93e931fd94d63593c874d6ac1fb094c20d25d037a24a190c7c96be9620be3.jpg) +Change the black suitcase to a white one + +![](images/d19bf5d7e7f56008e163ebc8e441f8a1112671cf8a8ef094221c9f5680a30ced.jpg) + +![](images/1d019c5e0c764eb1e8b95793df132237f61dc32dfaf1373ae7c746f9613670e2.jpg) +Remove the fence from the photo + +![](images/7f3981b5455644ddc8ab004d938034d1d81756f69c154a9d395fc70887dc2689.jpg) + +![](images/857c3d2b4d90a930f09ebe7c2eaecd6e41843ff5f2eb1e4dfd4d5f0fbb8c4eae.jpg) +Change this into a 1950s Flintstone cartoon art style + +![](images/5d06f1f7f1bc3e93747fda98c60b99db1e7b3bfc38ea08d5f99f70ff408d4703.jpg) + +![](images/480023cdc110c691db2b63895df6e29d8f62b6c948074069eb234e06d6a1643d.jpg) +Change the trees from flowers to icicles + +![](images/3cf06912c8adfa112e4cec26c1ef46e0c4e30e821c79f1b2de0b87c712b9bc5c.jpg) + +![](images/eb0786feda26f92aea48692bb7ee90a2bd953e0f5f3bb17a29cbf991e6540cde.jpg) +Change the elephant into a giant rabbit + +![](images/ab435942c6a892c5e005c6f3e596ec777b7002aee1398e2d94d8acf12d940801.jpg) + +![](images/9c446d91fda981918537212203a547eb8e4fb4c4bbdb313a91d2a0e202c7d290.jpg) + +![](images/7f9899abfaf1d95a495617d9c2b59fbcd4e91d93fcf676ed5bac64c54a57d59d.jpg) + +![](images/37e1926ec2131e720adf5a00ec75683674a2fdf5d3477f9049de45486d0cd9f1.jpg) +Change the bird to all white + +![](images/00ace96392ae997da28586fe1293583f6d92d6429a3fcfed2d0751233fbb1bb4.jpg) +Figure 7: Image editing results on evaluation benchmark from UniFluid autoregressive model after fine-tuning. It handles multiple editing tasks effectively, including object removal, insertion, style and color changes. + +![](images/83cafdcca03ca33853f930748ce61120aaab1ee886c0c13f8fcc11862d110f05.jpg) +Turn it into a Paul Cezanne still life painting + +![](images/5117cd2da0dfc63a8e7e5811316065d98885423bf2410418a9ecb417bb9c410b.jpg) + +![](images/2f04b59d14ad478ebcffecd91ec88aa335419c135b386a9254939ca44fbb078b.jpg) +Change the meat into carrots + +![](images/46973e49a359e902ed1f7d9f02c2def2ed4f8fe3f1d72a0e4d3fdeb7fc3d6011.jpg) + +Table 4: Performance comparison of image generation and understanding of UniFluid trained with different image generation order. FID and CIDEr is measured on MS-COCO. + +
| Generation Order | Generation: COCO FID ↓ | Generation: GenEval ↑ | Understanding: COCO CIDEr ↑ | Understanding: Cap Avg ↑ | Understanding: QA Avg ↑ |
| --- | --- | --- | --- | --- | --- |
| Raster | 8.28 | 0.59 | 45.57 | 116.55 | 61.81 |
| Random | 7.20 | 0.59 | 40.91 | 116.13 | 62.10 |
+ +as a backbone is important for unlocking improved visual quality in a unified model training setup. The image understanding performance also improved slightly when using stronger LLM, which is a trend also noted in PaliGemma2. We show the detailed comparison on each downstream visual understanding benchmark in Figure 4. + +Training with Random Order Helps Generation But Not Understanding. Images inherently possess 2D patterns. As demonstrated in Fluid, raster-order training can be problematic, potentially leading to collapse and artifacts characterized by disappearing patterns. Approaches such as RAR [58] and RandAR [33] propose training image generation AR models with random-order training, which can improve ImageNet FID and result in better visual quality. + +Here we study the effect of different visual generation orders during training within our unified framework setup. We compare the performance between random-order and raster-order training, both with Gemma-2 2B as backbone LLM. The results presented in Table 4 indicate that for per-token image generation within a unified framework, raster-order training continues to underperform compared to random-order generation. Incorporating random-order during training could ensure the generated images are of high quality. However, it does not necessarily improve the visual understanding performance, where raster-order achieves better performance on MS-COCO CIDEr and downstream captioning task average (Cap Avg). + +# 5.3 More Generation Capabilities + +We also verify the transferability of the trained model to various downstream generation tasks. + +Aesthetic Fine-Tuning. To enhance the visual quality and aesthetic appeal of the generated images, we perform aesthetic fine-tuning on a publicly available dataset. The results are shown in Figure 5. + +Image Editing Task. Since our unified framework is trained with multimodal inputs, it can naturally extend to image editing tasks that involve both image and text prompt inputs. We fine-tune the 2B UniFluid model with 4M image editing pairs from HQEdit [18] and UltraEdit [60]. In Figure 7, we apply the fine-tuned model to the input images and editing prompts from a public available benchmark. Although preliminary, the experiments show that UniFluid is able to adapt and generalize to tasks that involve interleaved data modalities. + +# 6 Conclusion + +In this paper, we presented UniFluid, a pure autoregressive framework for joint visual generation and understanding, utilizing continuous visual tokens. We identified an inherent trade-off between the visual generation and understanding tasks, but the two tasks can benefit each other with tuned training recipes. Careful choice of the loss balance between the two tasks allows a single unified model to achieve performance comparable to or exceeding single-task baselines. We conducted investigation of key design choices for UniFluid training, revealing the critical importance of employing strong backbone LLM and random-order generation to unlock high-quality visual generation capabilities. We believe that this work encourages future research into the exploration of continuous visual tokens for unified vision-language model training, paving the way for more efficient and powerful autoregressive multimodal systems. + +Acknowledgements. We are grateful to Alex Rizkowsky and Amy Shen for their support in securing computational resources. 
We also wish to thank Charles Herrmann, Junhwa Hur, Shangbang Long, André Susano Pinto, Srinivas Kaza, David Salesin, and the VisCam team for their insightful discussions and constructive feedback, which greatly improved this work. + +# References + +[1] Manoj Acharya, Kushal Kafle, and Christopher Kanan. Tallyqa: Answering complex counting questions. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 8076-8084, 2019. +[2] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in neural information processing systems, 35:23716-23736, 2022. +[3] Lucas Beyer, Andreas Steiner, André Susano Pinto, Alexander Kolesnikov, Xiao Wang, Daniel Salz, Maxim Neumann, Ibrahim Alabdulmohsin, Michael Tschannen, Emanuele Bugliarello, et al. Paligemma: A versatile 3b vlm for transfer. arXiv preprint arXiv:2407.07726, 2024. +[4] Ali Furkan Biten, Ruben Tito, Andres Mafla, Lluis Gomez, Marçal Rusinol, Ernest Valveny, CV Jawahar, and Dimosthenis Karatzas. Scene text visual question answering. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4291-4301, 2019. +[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901, 2020. +[6] Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. Muse: Text-to-image generation via masked generative transformers. arXiv preprint arXiv:2301.00704, 2023. +[7] Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. Pali: A jointly-scaled multilingual language-image model. arXiv preprint arXiv:2209.06794, 2022. +[8] Xiaokang Chen, Zhiyu Wu, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, and Chong Ruan. Janus-pro: Unified multimodal understanding and generation with data and model scaling. arXiv preprint arXiv:2501.17811, 2025. +[9] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1-113, 2023. +[10] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructlip: Towards general-purpose vision-language models with instruction tuning, 2023. +[11] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, volume 1 (long and short papers), pages 4171–4186, 2019. +[12] Lijie Fan, Tianhong Li, Siyang Qin, Yuanzhen Li, Chen Sun, Michael Rubinstein, Deqing Sun, Kaiming He, and Yonglong Tian. Fluid: Scaling autoregressive text-to-image generative models with continuous tokens. arXiv preprint arXiv:2410.13863, 2024. +[13] Dhruba Ghosh, Hannaneh Hajishirzi, and Ludwig Schmidt. 
Geneval: An object-focused framework for evaluating text-to-image alignment. Advances in Neural Information Processing Systems, 36:52132-52152, 2023. +[14] Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive networks. In International Conference on Machine Learning, pages 1242–1250. PMLR, 2014. +[15] Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. Vizwiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3608-3617, 2018. + +[16] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. +[17] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6700-6709, 2019. +[18] Mude Hui, Siwei Yang, Bingchen Zhao, Yichun Shi, Heng Wang, Peng Wang, Yuyin Zhou, and Cihang Xie. Hq-edit: A high-quality dataset for instruction-based image editing. arXiv preprint arXiv:2404.09990, 2024. +[19] Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. A diagram is worth a dozen images. In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV 14, pages 235-251. Springer, 2016. +[20] Taku Kudo and John Richardson. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226, 2018. +[21] Tianhong Li, Yonglong Tian, He Li, Mingyang Deng, and Kaiming He. Autoregressive image generation without vector quantization. Advances in Neural Information Processing Systems, 37:56424-56445, 2024. +[22] Yang Li, Gang Li, Luheng He, Jingjie Zheng, Hong Li, and Zhiwei Guan. Widget captioning: Generating natural language description for mobile user interface elements. arXiv preprint arXiv:2010.04295, 2020. +[23] Ji Lin, Hongxu Yin, Wei Ping, Pavlo Molchanov, Mohammad Shoeybi, and Song Han. Vila: On pre-training for visual language models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 26689-26699, 2024. +[24] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer vision-ECCV 2014: 13th European conference, zurich, Switzerland, September 6-12, 2014, proceedings, part v 13, pages 740-755. Springer, 2014. +[25] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26296-26306, 2024. +[26] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36:34892-34916, 2023. +[27] Sylvain Lobry, Diego Marcos, Jesse Murray, and Devis Tuia. Rsvqa: Visual question answering for remote sensing data. IEEE Transactions on Geoscience and Remote Sensing, 58(12):8555-8566, 2020. +[28] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. 
Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507–2521, 2022. +[29] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pages 3195-3204, 2019. +[30] Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. Chartqa: A benchmark for question answering about charts with visual and logical reasoning. arXiv preprint arXiv:2203.10244, 2022. +[31] Minesh Mathew, Viraj Bagal, Ruben Tito, Dimosthenis Karatzas, Ernest Valveny, and CV Jawahar. Infographicvqa. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1697-1706, 2022. + +[32] Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. Docvqa: A dataset for vqa on document images. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 2200-2209, 2021. +[33] Ziqi Pang, Tianyuan Zhang, Fujun Luan, Yunze Man, Hao Tan, Kai Zhang, William T Freeman, and Yu-Xiong Wang. Randar: Decoder-only autoregressive visual generation in random orders. arXiv preprint arXiv:2412.01827, 2024. +[34] Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. In International conference on machine learning, pages 4055-4064. PMLR, 2018. +[35] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. Technical Report, 2018. +[36] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models, 2021. +[37] Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. A-okvqa: A benchmark for visual question answering using world knowledge. In European conference on computer vision, pages 146–162. Springer, 2022. +[38] Weijia Shi, Xiaochuang Han, Chunting Zhou, Weixin Liang, Xi Victoria Lin, Luke Zettlemoyer, and Lili Yu. Llamafusion: Adapting pretrained language models for multimodal generation. arXiv preprint arXiv:2412.15188, 2024. +[39] Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, and Amanpreet Singh. Textcaps: a dataset for image captioning with reading comprehension. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part II 16, pages 742-758. Springer, 2020. +[40] Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards vqa models that can read. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8317-8326, 2019. +[41] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020. +[42] Andreas Steiner, André Susano Pinto, Michael Tschannen, Daniel Keysers, Xiao Wang, Yonatan Bitton, Alexey Gritsanko, Matthias Minderer, Anthony Sherbondy, Shangbang Long, et al. Paligemma 2: A family of versatile vlms for transfer. arXiv preprint arXiv:2412.03555, 2024. +[43] Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. A corpus for reasoning about natural language grounded in photographs. arXiv preprint arXiv:1811.00491, 2018. 
+[44] Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Emu: Generative pretraining in multimodality. arXiv preprint arXiv:2307.05222, 2023. +[45] Yutao Sun, Hangbo Bao, Wenhui Wang, Zhiliang Peng, Li Dong, Shaohan Huang, Jianyong Wang, and Furu Wei. Multimodal latent language modeling with next-token diffusion. arXiv preprint arXiv:2412.08635, 2024. +[46] Chameleon Team. Chameleon: Mixed-modal early-fusion foundation models. arXiv preprint arXiv:2405.09818, 2024. +[47] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Riviere, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024. + +[48] Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118, 2024. +[49] Keyu Tian, Yi Jiang, Zehuan Yuan, Bingyue Peng, and Liwei Wang. Visual autoregressive modeling: Scalable image generation via next-scale prediction. Advances in neural information processing systems, 37:84839-84865, 2024. +[50] Shengbang Tong, David Fan, Jiachen Zhu, Yunyang Xiong, Xinlei Chen, Koustuv Sinha, Michael Rabbat, Yann LeCun, Saining Xie, and Zhuang Liu. Metamorph: Multimodal understanding and generation via instruction tuning. arXiv preprint arXiv:2412.14164, 2024. +[51] Aaron Van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelCNN decoders. Advances in neural information processing systems, 29, 2016. +[52] Aäron Van Den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International conference on machine learning, pages 1747-1756. PMLR, 2016. +[53] Bryan Wang, Gang Li, Xin Zhou, Zhourong Chen, Tovi Grossman, and Yang Li. Screen2words: Automatic mobile ui summarization with multimodal learning. In The 34th Annual ACM Symposium on User Interface Software and Technology, pages 498-510, 2021. +[54] Chengyue Wu, Xiaokang Chen, Zhiyu Wu, Yiyang Ma, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, Chong Ruan, et al. Janus: Decoupling visual encoding for unified multimodal understanding and generation. arXiv preprint arXiv:2410.13848, 2024. +[55] Jinheng Xie, Weijia Mao, Zechen Bai, David Junhao Zhang, Weihao Wang, Kevin Qinghong Lin, Yuchao Gu, Zhijie Chen, Zhenheng Yang, and Mike Zheng Shou. Show-o: One single transformer to unify multimodal understanding and generation. arXiv preprint arXiv:2408.12528, 2024. +[56] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023. +[57] Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789, 2(3):5, 2022. +[58] Qihang Yu, Ju He, Xueqing Deng, Xiaohui Shen, and Liang-Chieh Chen. Randomized autoregressive visual generation. arXiv preprint arXiv:2411.00776, 2024. +[59] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. 
Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF international conference on computer vision, pages 11975-11986, 2023. +[60] Haozhe Zhao, Xiaojian Shawn Ma, Liang Chen, Shuzheng Si, Rujie Wu, Kaikai An, Peiyu Yu, Minjia Zhang, Qing Li, and Baobao Chang. Ultraedit: Instruction-based fine-grained image editing at scale. Advances in Neural Information Processing Systems, 37:3058-3093, 2024. +[61] Chunting Zhou, Lili Yu, Arun Babu, Kushal Tirumala, Michihiro Yasunaga, Leonid Shamis, Jacob Kahn, Xuezhe Ma, Luke Zettlemoyer, and Omer Levy. Transfusion: Predict the next token and diffuse images with one multi-modal model. arXiv preprint arXiv:2408.11039, 2024. +[62] Deyao Zhu, Jun Chen, Xiaogian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023. \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13436/images.zip b/data/2025/2503_13xxx/2503.13436/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..00f3368012c715d9225395d921311ac17c54aa8d --- /dev/null +++ b/data/2025/2503_13xxx/2503.13436/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:579336f9a285ffbe14ab19b8bbf987d33954350d83967b2aca26c1f6c1241d26 +size 965784 diff --git a/data/2025/2503_13xxx/2503.13436/layout.json b/data/2025/2503_13xxx/2503.13436/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d2112101facd05eaf6ccd542083145eb367e4b68 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13436/layout.json @@ -0,0 +1,10398 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 136, + 97, + 474, + 137 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 136, + 97, + 474, + 137 + ], + "spans": [ + { + "bbox": [ + 136, + 97, + 474, + 137 + ], + "type": "text", + "content": "Unified Autoregressive Visual Generation and Understanding with Continuous Tokens" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "spans": [ + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "text", + "content": "Lijie Fan" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "inline_equation", + "content": "^{1,*}" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "text", + "content": " Luming Tang" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "inline_equation", + "content": "^{1,*}" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "text", + "content": " Siyang Qin" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "inline_equation", + "content": "^{1,*}" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "text", + "content": " Tianhong Li" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "text", + "content": " Xuan Yang" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "text", + "content": " Siyuan Qiao" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "text", + "content": " Andreas Steiner" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": 
"inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "text", + "content": " Chen Sun" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "text", + "content": " Yuanzhen Li" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "text", + "content": " Tao Zhu" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "text", + "content": " Michael Rubinstein" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "text", + "content": " Michalis Raptis" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "text", + "content": " Deqing Sun" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "inline_equation", + "content": "^{1,\\dagger}" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "text", + "content": " Radu Soricut" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "inline_equation", + "content": "^{1,\\dagger}" + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "text", + "content": " Google DeepMind " + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "inline_equation", + "content": "{}^{2}\\mathrm{MIT}\\quad {*}," + }, + { + "bbox": [ + 114, + 177, + 503, + 230 + ], + "type": "text", + "content": " equal contribution" + } + ] + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 109, + 241, + 499, + 527 + ], + "blocks": [ + { + "bbox": [ + 109, + 241, + 499, + 527 + ], + "lines": [ + { + "bbox": [ + 109, + 241, + 499, + 527 + ], + "spans": [ + { + "bbox": [ + 109, + 241, + 499, + 527 + ], + "type": "image", + "image_path": "fe57fbb071286d72cde1ae0a4dc870c142112e69c0e374902ceb8453aa9cf6c0.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 130, + 528, + 478, + 539 + ], + "lines": [ + { + "bbox": [ + 130, + 528, + 478, + 539 + ], + "spans": [ + { + "bbox": [ + 130, + 528, + 478, + 539 + ], + "type": "text", + "content": "Figure 1: Generated images from our UniFluid autoregressive model after aesthetic finetuning." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 281, + 544, + 329, + 556 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 281, + 544, + 329, + 556 + ], + "spans": [ + { + "bbox": [ + 281, + 544, + 329, + 556 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 140, + 563, + 470, + 716 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 563, + 470, + 716 + ], + "spans": [ + { + "bbox": [ + 140, + 563, + 470, + 716 + ], + "type": "text", + "content": "We present UniFluid, a unified autoregressive framework for joint visual generation and understanding leveraging continuous visual tokens. Our unified autoregressive architecture processes multimodal image and text inputs, generating discrete tokens for text and continuous tokens for image. 
We find though there is an inherent trade-off between the image generation and understanding task, a carefully tuned training recipe enables them to improve each other. By selecting an appropriate loss balance weight, the unified model achieves results comparable to or exceeding those of single-task baselines on both tasks. Furthermore, we demonstrate that employing stronger pre-trained LLMs and random-order generation during training is important to achieve high-fidelity image generation within this unified framework. Built upon the Gemma model series, UniFluid exhibits competitive performance across both image generation and understanding, demonstrating strong transferability to various downstream tasks, including image editing for generation, as well as visual captioning and question answering for understanding." + } + ] + } + ], + "index": 6 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 14, + 206, + 35, + 559 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 14, + 206, + 35, + 559 + ], + "spans": [ + { + "bbox": [ + 14, + 206, + 35, + 559 + ], + "type": "text", + "content": "arXiv:2503.13436v1 [cs.CV] 17 Mar 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 105, + 731, + 153, + 742 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 731, + 153, + 742 + ], + "spans": [ + { + "bbox": [ + 105, + 731, + 153, + 742 + ], + "type": "text", + "content": "Tech Report." + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 107, + 71, + 190, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 71, + 190, + 84 + ], + "spans": [ + { + "bbox": [ + 107, + 71, + 190, + 84 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 107, + 97, + 504, + 196 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 97, + 504, + 196 + ], + "spans": [ + { + "bbox": [ + 107, + 97, + 504, + 196 + ], + "type": "text", + "content": "Large Language Models (LLMs) have recently advanced from text-centric architectures, such as BERT [11] and GPT [35], toward multimodal systems capable of understanding and generating content across different modalities. GPT-3 [5] and PaLM [9] show that scaling language models leads to emergent capabilities, while Flamingo [2] further demonstrates that incorporating visual inputs facilitates unified multimodal reasoning. This trend toward unified vision-language model—using a single model for diverse tasks of visual understanding and generation—has promising potential for leveraging knowledge and reasoning abilities that transfer across different vision and language tasks, ultimately facilitating more robust and generalizable multimodal representation and modeling capabilities." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 201, + 504, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 201, + 504, + 289 + ], + "spans": [ + { + "bbox": [ + 107, + 201, + 504, + 289 + ], + "type": "text", + "content": "Motivated by the advantages and strong scaling properties of autoregressive models, coupled with their simplicity, we investigate a pure autoregressive framework for unified visual generation and understanding, without the limitations introduced by vector quantization (VQ). 
In this paper, we introduce UniFluid, a unified framework that leverages continuous visual tokens within an autoregressive architecture to jointly handle vision-language generation and understanding tasks. Building upon pre-trained Gemma [48] on large-scale text corpus, UniFluid unlocks powerful visual generation and understanding capabilities through training with paired image-text data, and further allows these two tasks to mutually benefit each other within a single architecture." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 293, + 504, + 392 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 293, + 504, + 392 + ], + "spans": [ + { + "bbox": [ + 107, + 293, + 504, + 392 + ], + "type": "text", + "content": "Specifically, UniFluid adopts a unified autoregressive framework where both text and continuous visual inputs are embedded as tokens in the same space, enabling seamless joint training of image generation and understanding tasks. UniFluid integrates a continuous tokenizer [12, 21] for image generation and a pre-trained SigLIP [59] image encoder for understanding tasks, while textual inputs are processed using a standard SentencePiece tokenizer [20]. The resulting multimodal sequences are modeled autoregressively using Gemma [48] as the underlying transformer backbone. Task-specific prediction heads—a diffusion-based head for image generation and a cross-entropy head for text generation—ensure effective modality-specific training and inference, enabling UniFluid to efficiently learn shared representations that mutually enhance its generation and understanding capabilities." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 107, + 397, + 504, + 496 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 397, + 504, + 496 + ], + "spans": [ + { + "bbox": [ + 107, + 397, + 504, + 496 + ], + "type": "text", + "content": "Our experiments demonstrate several key advantages of the proposed unified training strategy. We find though there is a trade-off between the two tasks, a carefully tuned training recipe can allow the tasks to support each other and outperform the single-task baselines. Effectively balancing the loss between the tasks allows a single model that performs both with results superior to or on par with single-task models. Moreover, the choice of pre-trained LLM backbone significantly impacts visual generation performance. We also find that while employing random generation order is essential for high-quality image synthesis, it is less critical for understanding tasks. Finally, our unified pre-trained models show strong generalization and transferability, achieving compelling results in downstream applications, including image editing and various vision-language understanding benchmarks." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 514, + 201, + 528 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 514, + 201, + 528 + ], + "spans": [ + { + "bbox": [ + 107, + 514, + 201, + 528 + ], + "type": "text", + "content": "2 Related Works" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 540, + 504, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 540, + 504, + 651 + ], + "spans": [ + { + "bbox": [ + 107, + 540, + 504, + 651 + ], + "type": "text", + "content": "Multimodal Large Language Models. Multimodal Large Language Models [2, 10, 23, 26, 25, 3, 42] have shown significant performance in visual understanding tasks. 
Flamingo [2] adopted a frozen LLM and vision encoder, utilizing perceiver with cross-attention to bridge the modalities. LLaVA [26, 25] proposed instruction tuning over pre-trained LLMs with multimodal inputs to align a pre-trained image encoder into the LLM's embedding space, thereby enabling it with visual understanding and instruction following capabilities. MiniGPT-4 [62] and mPLUG-Owl [56] have shown vision encoders can be connected to LLMs through projection layers, demonstrating sophisticated visual reasoning capabilities. The PaliGemma [3, 42] series built upon the Gemma [47, 48] model family to develop versatile vision-language models capable of strong transfer to diverse downstream visual understanding tasks." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 655, + 504, + 722 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 655, + 504, + 722 + ], + "spans": [ + { + "bbox": [ + 107, + 655, + 504, + 722 + ], + "type": "text", + "content": "Autoregressive Image Generation. While diffusion models [41, 36] have achieved impressive success in image generation, autoregressive image generation methods have also shown significant development, driven by their simplicity and closeness to LLM training paradigms. A large body of research centers on tokenizing images into discrete tokens and applying autoregressive objectives to these discrete representations. Notable examples include Parti [57] and Muse [6]. [49] proposes an approach that operates on image scales, progressively refining resolutions from coarse to fine through" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 742, + 308, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 742, + 308, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 742, + 308, + 750 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 107, + 76, + 504, + 290 + ], + "blocks": [ + { + "bbox": [ + 107, + 76, + 504, + 290 + ], + "lines": [ + { + "bbox": [ + 107, + 76, + 504, + 290 + ], + "spans": [ + { + "bbox": [ + 107, + 76, + 504, + 290 + ], + "type": "image", + "image_path": "21fcf027d2bdd513ed513d39f97fd528b50bf3f4df10285dbb085df06e72a1cd.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 300, + 504, + 331 + ], + "lines": [ + { + "bbox": [ + 104, + 300, + 504, + 331 + ], + "spans": [ + { + "bbox": [ + 104, + 300, + 504, + 331 + ], + "type": "text", + "content": "Figure 2: High-level illustration of UniFluid. UniFluid performs joint training of image generation and understanding tasks through next token prediction. For image embeddings, we use a VAE image tokenizer for generation, and a SigLIP image encoder for understanding. BOI/BOS stands for beginning of Image/Sentence." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 348, + 504, + 381 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 348, + 504, + 381 + ], + "spans": [ + { + "bbox": [ + 104, + 348, + 504, + 381 + ], + "type": "text", + "content": "next-scale prediction. Alternatively, works such as MAR [21], Fluid [12], and techniques employing per-token diffusion heads on top of LLM-predicted embeddings have explored autoregressive image generation with continuous visual tokens." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 386, + 506, + 507 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 386, + 506, + 507 + ], + "spans": [ + { + "bbox": [ + 104, + 386, + 506, + 507 + ], + "type": "text", + "content": "Unified Multimodal Models. There is growing research interests in unifying visual generation and understanding within a single model. VQ-based models, such as Chameleon [46], Emu [44], and Janus [54, 8], propose encoding visual inputs into discrete tokens and unifying tasks into next-token prediction within this discrete token space. Models with hybrid training targets, such as Transfusion [61, 38] and Show-O [55], aim to unify next-token prediction objectives with diffusion objectives within a single framework. MetaMorph [50] maintains the autoregressive objective by regressing visual SigLIP [59] features using an LLM, but necessitates a separate diffusion model to decode the predicted latent features into images. Our approach distinguishes itself by performing per-token autoregressive generation using continuous visual tokens. This maintains the next-token prediction objective, while not being limited by the vector quantized tokens. [45] also explores using continuous tokens to generate multimodal outputs." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 522, + 167, + 534 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 522, + 167, + 534 + ], + "spans": [ + { + "bbox": [ + 105, + 522, + 167, + 534 + ], + "type": "text", + "content": "3 Method" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 546, + 504, + 581 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 546, + 504, + 581 + ], + "spans": [ + { + "bbox": [ + 104, + 546, + 504, + 581 + ], + "type": "text", + "content": "In this section, we illustrate the architecture of our UniFluid model. The model expects both image and text sequences as input and achieves joint training on both generation and understanding tasks, using next-token prediction as its training objective." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 592, + 404, + 604 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 592, + 404, + 604 + ], + "spans": [ + { + "bbox": [ + 104, + 592, + 404, + 604 + ], + "type": "text", + "content": "3.1 Unified Autoregressive Modeling with Continuous Visual Tokens" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 612, + 506, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 612, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 612, + 506, + 723 + ], + "type": "text", + "content": "Our approach leverages the autoregressive paradigm to unify visual understanding and generation within a single framework. Given an ordered sequence of tokens " + }, + { + "bbox": [ + 104, + 612, + 506, + 723 + ], + "type": "inline_equation", + "content": "X = \\{x^{1},\\dots,x^{n}\\}" + }, + { + "bbox": [ + 104, + 612, + 506, + 723 + ], + "type": "text", + "content": ", the autoregressive model [34, 14, 51, 52] factorizes the joint probability distribution as a product of conditional probabilities, effectively framing the generation task as a sequential \"next token prediction\" problem: " + }, + { + "bbox": [ + 104, + 612, + 506, + 723 + ], + "type": "inline_equation", + "content": "p(X) = \\prod_{i = 1}^{n}p(x^{i}\\mid x^{1},\\dots,x^{i - 1})" + }, + { + "bbox": [ + 104, + 612, + 506, + 723 + ], + "type": "text", + "content": ". 
As shown in MAR [21] and Fluid [12], this autoregressive formulation is applicable for both discrete tokens and continuous tokens. In UniFluid, we exploit this property to enable the generation of continuous visual tokens under the unified decoder-only framework. Our model is modality-agnostic; both text and image tokens are treated as elements within a long unified sequence, and their respective logits are predicted iteratively in an autoregressive manner by the backbone transformer." + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 72, + 504, + 128 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 72, + 504, + 128 + ], + "spans": [ + { + "bbox": [ + 104, + 72, + 504, + 128 + ], + "type": "text", + "content": "To accommodate the distinct nature of text and image modalities, we employ modality-specific prediction heads to calculate the appropriate loss functions and sampling for each modality. This unified approach allows the model to learn a shared representation space through the unified training procedure, facilitating synergistic learning and enabling seamless transitions between visual generation and understanding." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 105, + 144, + 185, + 156 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 144, + 185, + 156 + ], + "spans": [ + { + "bbox": [ + 105, + 144, + 185, + 156 + ], + "type": "text", + "content": "3.2 Architecture" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 166, + 504, + 266 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 166, + 504, + 266 + ], + "spans": [ + { + "bbox": [ + 104, + 166, + 504, + 266 + ], + "type": "text", + "content": "As depicted in Figure 2, UniFluid employs a unified framework where both text and image inputs are tokenized and projected into a shared embedding space. This allows it to leverage a decoder-only transformer as the core backbone for the unified autoregressive task. Text inputs are tokenized using the SentencePiece tokenizer. This results in discrete tokens with a vocabulary size of " + }, + { + "bbox": [ + 104, + 166, + 504, + 266 + ], + "type": "inline_equation", + "content": "V" + }, + { + "bbox": [ + 104, + 166, + 504, + 266 + ], + "type": "text", + "content": ". For image generation, images are encoded into continuous visual tokens using a continuous variational autoencoder. To facilitate the process of image understanding, following PaliGemma, we used SigLIP as a separate image encoder to extract high-level information from the visual inputs. UniFluid consists of a classification head to convert the transformer's text logits into a categorical distribution, and a diffusion head to convert image logits into a per-token probability distribution." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 270, + 506, + 381 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 270, + 506, + 381 + ], + "spans": [ + { + "bbox": [ + 104, + 270, + 506, + 381 + ], + "type": "text", + "content": "The inherent structure of text as a linear sequence aligns well with the standard 1D positional embeddings of the LLM, which are sufficient for text modeling and image understanding tasks. However, image tokens possess a 2D spatial structure. To capture this inherent 2D nature, we incorporate learnable 2D positional embeddings, which are added to the image token embeddings. Meanwhile, inspired by [58], to achieve random order generation, a position embedding for the next predicted token is also added to each image token. To enhance the model's ability to initiate and guide image generation, we prepend a \"Beginning of Image\" (BOI) token to the sequence of continuous image tokens. This BOI token serves as a distinct signal, indicating the start of the visual generation process. Given that the sequence length for generated image tokens is predefined (256 tokens for 256x256 images), an explicit \"End of Image\" token is unnecessary in our case." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 400, + 208, + 414 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 400, + 208, + 414 + ], + "spans": [ + { + "bbox": [ + 105, + 400, + 208, + 414 + ], + "type": "text", + "content": "4 Implementation" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 426, + 168, + 438 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 426, + 168, + 438 + ], + "spans": [ + { + "bbox": [ + 105, + 426, + 168, + 438 + ], + "type": "text", + "content": "4.1 Training" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 449, + 504, + 493 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 449, + 504, + 493 + ], + "spans": [ + { + "bbox": [ + 104, + 449, + 504, + 493 + ], + "type": "text", + "content": "Per-token Classification Head for Discrete Text Tokens. We employ the same SentencePiece tokenizer as Gemma for text tokenization. The transformer's output logits for text are transformed into categorical probability distributions over the vocabulary, and we apply the standard cross-entropy loss, denoted as " + }, + { + "bbox": [ + 104, + 449, + 504, + 493 + ], + "type": "inline_equation", + "content": "L_{Text}" + }, + { + "bbox": [ + 104, + 449, + 504, + 493 + ], + "type": "text", + "content": ", to optimize the prediction of these discrete text tokens." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 498, + 504, + 586 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 498, + 504, + 586 + ], + "spans": [ + { + "bbox": [ + 104, + 498, + 504, + 586 + ], + "type": "text", + "content": "Per-token Diffusion Head for Continuous Visual Tokens. We adopt the same continuous tokenizer as Fluid to embed 256x256 images into 32x32x4 continuous tokens, and use a patch size of 2 to merge 4 tokens into one. To model the per-token distribution of these predicted continuous visual tokens, we employ a lightweight MLP as a diffusion head. 
We adopt the same diffusion process and loss function, denoted as " + }, + { + "bbox": [ + 104, + 498, + 504, + 586 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\text{Visual}}" + }, + { + "bbox": [ + 104, + 498, + 504, + 586 + ], + "type": "text", + "content": ", as in [21, 12], which is specifically tailored for continuous visual token prediction. For the understanding task, the input image resolution is " + }, + { + "bbox": [ + 104, + 498, + 504, + 586 + ], + "type": "inline_equation", + "content": "224 \\times 224" + }, + { + "bbox": [ + 104, + 498, + 504, + 586 + ], + "type": "text", + "content": ", and we use SigLIP as the image encoder. Note that the SigLIP features are only used as prefix for the understanding task during training, and no losses are added on top of them." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 590, + 272, + 602 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 590, + 272, + 602 + ], + "spans": [ + { + "bbox": [ + 105, + 590, + 272, + 602 + ], + "type": "text", + "content": "Task-Specific Training Configurations." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 607, + 504, + 662 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 607, + 504, + 662 + ], + "spans": [ + { + "bbox": [ + 104, + 607, + 504, + 662 + ], + "type": "text", + "content": "Image Understanding: For image understanding tasks, the model is provided with image embeddings and question tokens as input prefix. Following PaliGemma, we apply a bidirectional attention mask to both image and question tokens. A causal attention mask is applied to the answer tokens, ensuring that the model only attends to previous answer tokens during autoregressive generation. The text token loss, " + }, + { + "bbox": [ + 104, + 607, + 504, + 662 + ], + "type": "inline_equation", + "content": "\\mathcal{L}_{\\text{Text}}" + }, + { + "bbox": [ + 104, + 607, + 504, + 662 + ], + "type": "text", + "content": ", is calculated specifically on the answer text tokens." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 667, + 506, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 667, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 667, + 506, + 723 + ], + "type": "text", + "content": "Image Generation: Conversely, for image generation tasks, text prompts are provided as conditioning inputs. To maintain the appropriate information flow, we employ a bidirectional attention mask for the text prompt tokens, enabling them to attend to all other text tokens. A causal attention mask is applied to the image tokens, ensuring that each image token only attends to preceding image tokens. The visual token loss, " + }, + { + "bbox": [ + 104, + 667, + 506, + 723 + ], + "type": "inline_equation", + "content": "L_{\\text{Visual}}" + }, + { + "bbox": [ + 104, + 667, + 506, + 723 + ], + "type": "text", + "content": ", is calculated on the generated image tokens." 
+ } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "spans": [ + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "text", + "content": "Unified Loss Function. The total training loss for UniFluid is a weighted sum of the text token prediction loss and the visual token prediction loss, defined as: " + }, + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "inline_equation", + "content": "\\mathcal{L} = \\mathcal{L}_{\\text{Visual}} + \\lambda \\cdot \\mathcal{L}_{\\text{Text}}" + }, + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 72, + 504, + 118 + ], + "type": "text", + "content": " is a hyper-parameter that represents the weight assigned to the text token prediction loss, allowing us to balance the contributions of the two modalities during training." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 121, + 506, + 200 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 121, + 506, + 200 + ], + "spans": [ + { + "bbox": [ + 104, + 121, + 506, + 200 + ], + "type": "text", + "content": "Training Details. We train the model with a batch size of 2048 using the AdamW optimizer with a learning rate of 1e-4. The training process consists of 1 million steps with a constant learning rate schedule and a warm-up period of 65k steps. Following [58], for image generation, the image token order is randomly permuted during the initial 300k training iterations, then linearly anneals to raster between 300k and 600k iterations, and finally sticks to raster order for the subsequent 400k steps. Except for the comparison with Gemma-1, we use the Gemma-2 model series as the backbone transformer for all experiments in this paper." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 212, + 171, + 224 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 212, + 171, + 224 + ], + "spans": [ + { + "bbox": [ + 105, + 212, + 171, + 224 + ], + "type": "text", + "content": "4.2 Inference" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 232, + 504, + 299 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 232, + 504, + 299 + ], + "spans": [ + { + "bbox": [ + 104, + 232, + 504, + 299 + ], + "type": "text", + "content": "For text decoding, we employ categorical sampling for each generated text prediction. The predicted token is then selected from the vocabulary " + }, + { + "bbox": [ + 104, + 232, + 504, + 299 + ], + "type": "inline_equation", + "content": "V" + }, + { + "bbox": [ + 104, + 232, + 504, + 299 + ], + "type": "text", + "content": " based on the sampled probability distribution. We use the same decoding strategy as PaliGemma. 
Greedy decoding is used for all tasks except for downstream COCOcap (beam search " + }, + { + "bbox": [ + 104, + 232, + 504, + 299 + ], + "type": "inline_equation", + "content": "n = 2" + }, + { + "bbox": [ + 104, + 232, + 504, + 299 + ], + "type": "text", + "content": ") and TextCaps (beam search " + }, + { + "bbox": [ + 104, + 232, + 504, + 299 + ], + "type": "inline_equation", + "content": "n = 3" + }, + { + "bbox": [ + 104, + 232, + 504, + 299 + ], + "type": "text", + "content": "). For image decoding, we use a diffusion sampling process to generate continuous visual tokens with diffusion sampling step set to 100 in our implementation." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 304, + 506, + 348 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 304, + 506, + 348 + ], + "spans": [ + { + "bbox": [ + 104, + 304, + 506, + 348 + ], + "type": "text", + "content": "As both text and image generation are performed at the token level, with predictions occurring one token at a time under a causal attention mechanism, we can efficiently utilize Key-Value (KV) caching. This optimization is applicable to both discrete text tokens and continuous visual tokens, significantly accelerating the inference process." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 363, + 187, + 376 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 363, + 187, + 376 + ], + "spans": [ + { + "bbox": [ + 105, + 363, + 187, + 376 + ], + "type": "text", + "content": "5 Experiment" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 388, + 156, + 401 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 388, + 156, + 401 + ], + "spans": [ + { + "bbox": [ + 105, + 388, + 156, + 401 + ], + "type": "text", + "content": "5.1 Setup" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 409, + 506, + 453 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 409, + 506, + 453 + ], + "spans": [ + { + "bbox": [ + 104, + 409, + 506, + 453 + ], + "type": "text", + "content": "Datasets. We train our models using the WebLI dataset [7], a collection of high-quality image-text pairs. For visual generation, we follow Fluid to employ a WebLI subset of image and text descriptions specifically for the generation task. For visual understanding, consistent with PaliGemma, we leverage the image-text description pairs and image question-answer pairs that are also available within WebLI." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 458, + 506, + 579 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 458, + 506, + 579 + ], + "spans": [ + { + "bbox": [ + 104, + 458, + 506, + 579 + ], + "type": "text", + "content": "Evaluation Metrics. We assess the image generation quality using the FID [16] score on 30K images of the MS-COCO [24] training set and evaluate performance on the GenEval [13] benchmark, where we use the original text prompt without any rewrites. For evaluating visual understanding performance, we use the caption CIDEr score on MS-COCO. Given our similar training dataset and setup to PaliGemma, we also evaluate the finetuning performance on a variety of captioning and question answering tasks. 
We report the average score on 4 Captioning tasks, including COCOcap [24], Screen2Words [53], TextCaps [39], WidgetCap [22], and 20 QA tasks, including OKVQA [29], AOKVQA-MC [37], AOKVQA-DA [37], GQA [17], NLVR2 [43], AI2D [19], ScienceQA [28], RSVQA-lr [27], RSVQA-hr (test/test2) [27], ChartQA (human/aug) [30], VizWizVQA [15], TallyQA (simple/complex) [1], CountBenchQA [3], TextVQA [40], DocVQA [32], InfoVQA [31], ST-VQA [4]." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 583, + 504, + 617 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 583, + 504, + 617 + ], + "spans": [ + { + "bbox": [ + 104, + 583, + 504, + 617 + ], + "type": "text", + "content": "In the following sections, we present the experimental results obtained under different configurations of UniFluid, providing insights into the relationship between the two tasks and highlighting key design choices for UniFluid training." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 630, + 187, + 641 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 630, + 187, + 641 + ], + "spans": [ + { + "bbox": [ + 105, + 630, + 187, + 641 + ], + "type": "text", + "content": "5.2 Main Results" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 651, + 504, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 651, + 504, + 696 + ], + "spans": [ + { + "bbox": [ + 104, + 651, + 504, + 696 + ], + "type": "text", + "content": "Unified Training Improves Generation Performance. To evaluate the effectiveness of the unified training framework and determine whether unified training offers advantages compared to training separate models for different tasks, we perform controlled experiments to analyze the performance of models trained with a single task." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 700, + 504, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 700, + 504, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 700, + 504, + 723 + ], + "type": "text", + "content": "We first compare the visual generation performance of the model trained under the unified training objective with the performance of a text-to-image model (T2I only), trained solely with the visual" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 213, + 103, + 395, + 168 + ], + "blocks": [ + { + "bbox": [ + 104, + 78, + 504, + 98 + ], + "lines": [ + { + "bbox": [ + 104, + 78, + 504, + 98 + ], + "spans": [ + { + "bbox": [ + 104, + 78, + 504, + 98 + ], + "type": "text", + "content": "Table 1: Unified training achieves better generation performance than text-to-image only training. We evaluate the performance using MS-COCO zero-shot FID and GenEval score." + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 213, + 103, + 395, + 168 + ], + "lines": [ + { + "bbox": [ + 213, + 103, + 395, + 168 + ], + "spans": [ + { + "bbox": [ + 213, + 103, + 395, + 168 + ], + "type": "table", + "html": "
<tr><td>Training Target</td><td>Size</td><td>FID ↓</td><td>GenEval ↑</td></tr>
<tr><td>T2I only</td><td>0.7B</td><td>9.71</td><td>0.50</td></tr>
<tr><td>Unified</td><td>0.7B</td><td>8.39</td><td>0.52</td></tr>
<tr><td>T2I only</td><td>2B</td><td>7.88</td><td>0.59</td></tr>
<tr><td>Unified</td><td>2B</td><td>7.20</td><td>0.59</td></tr>
", + "image_path": "0c74970ac44a50ef2f5559ce53b6e18bbed02e637bc1ab1b59f7a8e7900b29bb.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 190, + 504, + 268 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 190, + 504, + 268 + ], + "spans": [ + { + "bbox": [ + 104, + 190, + 504, + 268 + ], + "type": "text", + "content": "autoregressive objective for the generation task. We ensure that the total number of visual tokens for training is the same for the visual generation loss in both the unified model training and text-to-image only training scenarios. The generation performance comparison is presented in Table 1. The unified model achieves better performance compared to the T2I only model, despite both models having observed the same number of tokens for the visual generation task. This suggests that unified model training can be beneficial for visual generation tasks, and that visual understanding ability has the potential to unlock enhanced visual generation quality." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 273, + 504, + 316 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 273, + 504, + 316 + ], + "spans": [ + { + "bbox": [ + 104, + 273, + 504, + 316 + ], + "type": "text", + "content": "Trade-off Between Generation and Understanding. We also investigate whether the visual generation task can contribute to improved visual understanding performance. In the UniFluid unified training setup, the hyperparameter " + }, + { + "bbox": [ + 104, + 273, + 504, + 316 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 273, + 504, + 316 + ], + "type": "text", + "content": " controls the balance between the losses applied to image tokens and text tokens." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 322, + 504, + 410 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 322, + 504, + 410 + ], + "spans": [ + { + "bbox": [ + 104, + 322, + 504, + 410 + ], + "type": "text", + "content": "In Table 3 and Figure 3, we present the understanding and generation results with varying " + }, + { + "bbox": [ + 104, + 322, + 504, + 410 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 322, + 504, + 410 + ], + "type": "text", + "content": " of the 0.7B model. We compare the transfer performance to downstream understanding tasks between the unified model with different " + }, + { + "bbox": [ + 104, + 322, + 504, + 410 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 322, + 504, + 410 + ], + "type": "text", + "content": " and a image-to-text model (I2T only), trained solely with the image understanding objective. Within the unified training setup, a trade-off exists between visual generation and understanding tasks, which can be effectively controlled by adjusting the loss mixing weight, " + }, + { + "bbox": [ + 104, + 322, + 504, + 410 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 322, + 504, + 410 + ], + "type": "text", + "content": ". 
While increasing " + }, + { + "bbox": [ + 104, + 322, + 504, + 410 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 322, + 504, + 410 + ], + "type": "text", + "content": " can improve image understanding performance, ultimately exceeding the I2T-only baseline for downstream captioning, it concurrently diminishes the image generation capabilities." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 415, + 504, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 415, + 504, + 491 + ], + "spans": [ + { + "bbox": [ + 104, + 415, + 504, + 491 + ], + "type": "text", + "content": "In most scenarios, a smaller " + }, + { + "bbox": [ + 104, + 415, + 504, + 491 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 415, + 504, + 491 + ], + "type": "text", + "content": " value (e.g., 0.005) is advisable, maintaining a significant proportion (over " + }, + { + "bbox": [ + 104, + 415, + 504, + 491 + ], + "type": "inline_equation", + "content": "90\\%" + }, + { + "bbox": [ + 104, + 415, + 504, + 491 + ], + "type": "text", + "content": ") of image understanding while supporting the generation of high-fidelity images. Larger " + }, + { + "bbox": [ + 104, + 415, + 504, + 491 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 415, + 504, + 491 + ], + "type": "text", + "content": " values, in contrast, strongly favor image understanding but result in a rapid drop of image generation ability, as indicated by a sharp rise in FID score when " + }, + { + "bbox": [ + 104, + 415, + 504, + 491 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 415, + 504, + 491 + ], + "type": "text", + "content": " exceeds 0.1. Qualitative results for image captioning and question answering, demonstrating the understanding capabilities of the fine-tuned model based on the unified model with Gemma-2 2B as backbone LLM and " + }, + { + "bbox": [ + 104, + 415, + 504, + 491 + ], + "type": "inline_equation", + "content": "\\lambda = 0.005" + }, + { + "bbox": [ + 104, + 415, + 504, + 491 + ], + "type": "text", + "content": " are presented in Figure 6." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 496, + 506, + 562 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 496, + 506, + 562 + ], + "spans": [ + { + "bbox": [ + 104, + 496, + 506, + 562 + ], + "type": "text", + "content": "Better Pre-trained LLM Backbone Leads to Better Visual Generation and Understanding. We investigate the effect of pre-trained LLMs within the unified model training setup, specifically examining whether more powerful LLMs contribute to enhanced image understanding performance and superior visual generation quality. To this end, we conducted experiments using Gemma-1 2B [47] and Gemma-2 2B [47] as backbone LLMs. Gemma-2 is a stronger LLM than Gemma-1 with " + }, + { + "bbox": [ + 104, + 496, + 506, + 562 + ], + "type": "inline_equation", + "content": "10\\%" + }, + { + "bbox": [ + 104, + 496, + 506, + 562 + ], + "type": "text", + "content": " average improvements across different text benchmarks." 
+ } + ] + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 143, + 613, + 467, + 665 + ], + "blocks": [ + { + "bbox": [ + 104, + 582, + 504, + 612 + ], + "lines": [ + { + "bbox": [ + 104, + 582, + 504, + 612 + ], + "spans": [ + { + "bbox": [ + 104, + 582, + 504, + 612 + ], + "type": "text", + "content": "Table 2: Performance comparison of image generation and understanding of UniFluid trained with different LLM backbone. FID and CIDEr are measured on MS-COCO. Gemma-2 gets much better performance compared to Gemma-1, for both image understanding and generation tasks." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 143, + 613, + 467, + 665 + ], + "lines": [ + { + "bbox": [ + 143, + 613, + 467, + 665 + ], + "spans": [ + { + "bbox": [ + 143, + 613, + 467, + 665 + ], + "type": "table", + "html": "
<tr><td rowspan="2">Pretrained Model</td><td colspan="2">Generation</td><td colspan="3">Understanding</td></tr>
<tr><td>COCO FID ↓</td><td>GenEval ↑</td><td>COCO CIDEr ↑</td><td>Cap Avg ↑</td><td>QA Avg ↑</td></tr>
<tr><td>Gemma-1</td><td>9.73</td><td>0.52</td><td>38.02</td><td>113.40</td><td>60.21</td></tr>
<tr><td>Gemma-2</td><td>7.20</td><td>0.59</td><td>40.91</td><td>116.13</td><td>62.10</td></tr>
", + "image_path": "7b4d738402e13bf19379cf0a30676469c8f6afa22957a77ca92c17d04ce946cb.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_body" + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 677, + 504, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 677, + 504, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 677, + 504, + 723 + ], + "type": "text", + "content": "The experimental results are presented in Table 2. Here we used " + }, + { + "bbox": [ + 104, + 677, + 504, + 723 + ], + "type": "inline_equation", + "content": "\\lambda = 0.005" + }, + { + "bbox": [ + 104, + 677, + 504, + 723 + ], + "type": "text", + "content": " for all models. The results demonstrate that employing a stronger LLM is crucial for generating images with higher fidelity and quality. Gemma-2 achieves significantly lower FID scores compared to Gemma-1, highlighting that even though LLM pre-training is unimodal, without exposure to visual data, using a better LLM" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 56, + 71, + 211, + 214 + ], + "blocks": [ + { + "bbox": [ + 56, + 71, + 211, + 214 + ], + "lines": [ + { + "bbox": [ + 56, + 71, + 211, + 214 + ], + "spans": [ + { + "bbox": [ + 56, + 71, + 211, + 214 + ], + "type": "image", + "image_path": "27d4b879d1e6ed0c16ebbf7e8c1a05a9de1f4ec816c64e73bb7cd204bec2ff52.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 222, + 504, + 275 + ], + "lines": [ + { + "bbox": [ + 104, + 222, + 504, + 275 + ], + "spans": [ + { + "bbox": [ + 104, + 222, + 504, + 275 + ], + "type": "text", + "content": "Figure 3: Plot of image understanding and generation performance with varying " + }, + { + "bbox": [ + 104, + 222, + 504, + 275 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 222, + 504, + 275 + ], + "type": "text", + "content": " during training. We show the FID on MS-COCO, the average score for downstream captioning tasks (Cap Avg), and the average score for downstream QA tasks (QA Avg) with different " + }, + { + "bbox": [ + 104, + 222, + 504, + 275 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 222, + 504, + 275 + ], + "type": "text", + "content": " on the three figures. Green dashed lines show the FID of T2I-only model, blue and red dashed lines show the downstream captioning average and QA average for I2T-only model, respectively. In practice, smaller " + }, + { + "bbox": [ + 104, + 222, + 504, + 275 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 222, + 504, + 275 + ], + "type": "text", + "content": " has better trade-off between two tasks." 
+ } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 224, + 72, + 380, + 214 + ], + "blocks": [ + { + "bbox": [ + 224, + 72, + 380, + 214 + ], + "lines": [ + { + "bbox": [ + 224, + 72, + 380, + 214 + ], + "spans": [ + { + "bbox": [ + 224, + 72, + 380, + 214 + ], + "type": "image", + "image_path": "c28e0b1d3d4b55152d911ee6364845722e0f928715ebd8b2ba2f2245e04ae38f.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 398, + 72, + 549, + 213 + ], + "blocks": [ + { + "bbox": [ + 398, + 72, + 549, + 213 + ], + "lines": [ + { + "bbox": [ + 398, + 72, + 549, + 213 + ], + "spans": [ + { + "bbox": [ + 398, + 72, + 549, + 213 + ], + "type": "image", + "image_path": "f01bbba059cb1c487d2b7bad0c5ebf483e7592f5c196f7ec35b2335a3abc2546.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 106, + 338, + 503, + 392 + ], + "blocks": [ + { + "bbox": [ + 104, + 285, + 504, + 337 + ], + "lines": [ + { + "bbox": [ + 104, + 285, + 504, + 337 + ], + "spans": [ + { + "bbox": [ + 104, + 285, + 504, + 337 + ], + "type": "text", + "content": "Table 3: Image generation and understanding results with different " + }, + { + "bbox": [ + 104, + 285, + 504, + 337 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 285, + 504, + 337 + ], + "type": "text", + "content": ". " + }, + { + "bbox": [ + 104, + 285, + 504, + 337 + ], + "type": "inline_equation", + "content": "\\lambda = 0.1" + }, + { + "bbox": [ + 104, + 285, + 504, + 337 + ], + "type": "text", + "content": " roughly makes the loss for generation and understanding of the same scale. We present MS-COCO FID and the average captioning and QA results for downstream understanding tasks, compared to the image-to-text (I2T) only baseline. A smaller " + }, + { + "bbox": [ + 104, + 285, + 504, + 337 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 285, + 504, + 337 + ], + "type": "text", + "content": " like 0.005 is recommended in most cases as it preserves most of the image understanding capability while enabling the generation of high-quality images and outperforms the text-to-image (T2I) only baseline." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 106, + 338, + 503, + 392 + ], + "lines": [ + { + "bbox": [ + 106, + 338, + 503, + 392 + ], + "spans": [ + { + "bbox": [ + 106, + 338, + 503, + 392 + ], + "type": "table", + "html": "
<tr><td rowspan="2">Task</td><td rowspan="2">T2I only</td><td rowspan="2">I2T only</td><td colspan="7">Unified, λ</td></tr>
<tr><td>0.005</td><td>0.01</td><td>0.05</td><td>0.1</td><td>0.2</td><td>1.0</td><td>2.0</td></tr>
<tr><td>Generation (FID) ↓</td><td>9.71</td><td>-</td><td>8.39</td><td>9.65</td><td>12.48</td><td>17.90</td><td>28.60</td><td>47.89</td><td>54.02</td></tr>
<tr><td>Understanding (Cap Avg) ↑</td><td>-</td><td>125.05</td><td>120.25</td><td>120.25</td><td>121.18</td><td>121.68</td><td>123.65</td><td>126.25</td><td>125.80</td></tr>
<tr><td>Understanding (QA Avg) ↑</td><td>-</td><td>65.95</td><td>60.29</td><td>61.37</td><td>61.48</td><td>61.87</td><td>65.17</td><td>65.64</td><td>65.88</td></tr>
", + "image_path": "ed7e8a9c62dec847919f2f338a44a638eacc3f82ec9e8af56e81b4b02edc4e68.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 70, + 419, + 148, + 483 + ], + "blocks": [ + { + "bbox": [ + 70, + 419, + 148, + 483 + ], + "lines": [ + { + "bbox": [ + 70, + 419, + 148, + 483 + ], + "spans": [ + { + "bbox": [ + 70, + 419, + 148, + 483 + ], + "type": "image", + "image_path": "f5b44fb22df8ee423bbb5c991ec67c67c311aac003801a68e36de4d1a80aa028.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 150, + 419, + 227, + 483 + ], + "blocks": [ + { + "bbox": [ + 150, + 419, + 227, + 483 + ], + "lines": [ + { + "bbox": [ + 150, + 419, + 227, + 483 + ], + "spans": [ + { + "bbox": [ + 150, + 419, + 227, + 483 + ], + "type": "image", + "image_path": "e604f122f83cb527d28352cf1e5e4b880d31afe49ece6c4f36cdeba5b992241f.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 230, + 419, + 306, + 483 + ], + "blocks": [ + { + "bbox": [ + 230, + 419, + 306, + 483 + ], + "lines": [ + { + "bbox": [ + 230, + 419, + 306, + 483 + ], + "spans": [ + { + "bbox": [ + 230, + 419, + 306, + 483 + ], + "type": "image", + "image_path": "c46e9fbda409dd9b0d6fe25cf1f0c7088f846324cf25ae0edc1adb8a3f609780.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 309, + 419, + 385, + 483 + ], + "blocks": [ + { + "bbox": [ + 309, + 419, + 385, + 483 + ], + "lines": [ + { + "bbox": [ + 309, + 419, + 385, + 483 + ], + "spans": [ + { + "bbox": [ + 309, + 419, + 385, + 483 + ], + "type": "image", + "image_path": "f47624129a101646b71e64eed6aa1212c7690071847fef6726e962f91d51a79a.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 389, + 419, + 464, + 483 + ], + "blocks": [ + { + "bbox": [ + 389, + 419, + 464, + 483 + ], + "lines": [ + { + "bbox": [ + 389, + 419, + 464, + 483 + ], + "spans": [ + { + "bbox": [ + 389, + 419, + 464, + 483 + ], + "type": "image", + "image_path": "0f43ac3efb9385ab77e2cb077a6165efd276ed993423b635fcddef5a6344bb93.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 469, + 419, + 544, + 483 + ], + "blocks": [ + { + "bbox": [ + 469, + 419, + 544, + 483 + ], + "lines": [ + { + "bbox": [ + 469, + 419, + 544, + 483 + ], + "spans": [ + { + "bbox": [ + 469, + 419, + 544, + 483 + ], + "type": "image", + "image_path": "0fa9c60d7e9f9b58905e06c1a86f5e7d8e6d6f7ac03fd1e9ce36f146966739e7.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 71, + 485, + 148, + 548 + ], + "blocks": [ + { + "bbox": [ + 71, + 485, + 148, + 548 + ], + "lines": [ + { + "bbox": [ + 71, + 485, + 148, + 548 + ], + "spans": [ + { + "bbox": [ + 71, + 485, + 148, + 548 + ], + "type": "image", + "image_path": "b7e9b91e62274b99585c58f8e8402e7149ff8e358ebfe68f4b2866fc814b6e9f.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 150, + 485, + 227, + 548 + ], + "blocks": [ + { + "bbox": [ + 150, + 485, + 227, + 548 + ], + "lines": [ + { + "bbox": [ + 150, + 485, + 227, + 548 + ], + "spans": [ + { 
+ "bbox": [ + 150, + 485, + 227, + 548 + ], + "type": "image", + "image_path": "4af61b319ebef54c220450bb22d985e5d8a8ac0d51b79951a276a4bd1185093d.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 230, + 485, + 306, + 548 + ], + "blocks": [ + { + "bbox": [ + 230, + 485, + 306, + 548 + ], + "lines": [ + { + "bbox": [ + 230, + 485, + 306, + 548 + ], + "spans": [ + { + "bbox": [ + 230, + 485, + 306, + 548 + ], + "type": "image", + "image_path": "1268d8942fcae33e355da233b87b1ba2aa1da48b738edce9e834235d40b7d2ef.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 309, + 485, + 385, + 548 + ], + "blocks": [ + { + "bbox": [ + 309, + 485, + 385, + 548 + ], + "lines": [ + { + "bbox": [ + 309, + 485, + 385, + 548 + ], + "spans": [ + { + "bbox": [ + 309, + 485, + 385, + 548 + ], + "type": "image", + "image_path": "055827d4421ec7ad88afd48e896d72caedb2e029a4b9e0133d779db345ac66b4.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 389, + 485, + 464, + 548 + ], + "blocks": [ + { + "bbox": [ + 389, + 485, + 464, + 548 + ], + "lines": [ + { + "bbox": [ + 389, + 485, + 464, + 548 + ], + "spans": [ + { + "bbox": [ + 389, + 485, + 464, + 548 + ], + "type": "image", + "image_path": "3360fd5ca9e05220182a0d5bb8c2c82ab50272d09ff5a52bbccf97b5f84a9be0.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 469, + 485, + 544, + 548 + ], + "blocks": [ + { + "bbox": [ + 469, + 485, + 544, + 548 + ], + "lines": [ + { + "bbox": [ + 469, + 485, + 544, + 548 + ], + "spans": [ + { + "bbox": [ + 469, + 485, + 544, + 548 + ], + "type": "image", + "image_path": "69f5c722e1cfdf648c8c320239579694d5e00c5f385fd8359ec77dc239059df8.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 71, + 551, + 148, + 613 + ], + "blocks": [ + { + "bbox": [ + 71, + 551, + 148, + 613 + ], + "lines": [ + { + "bbox": [ + 71, + 551, + 148, + 613 + ], + "spans": [ + { + "bbox": [ + 71, + 551, + 148, + 613 + ], + "type": "image", + "image_path": "65d7e2652966061b99c7248f64aefbfe77768d356615fa25906dab470109ad38.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 150, + 551, + 227, + 613 + ], + "blocks": [ + { + "bbox": [ + 150, + 551, + 227, + 613 + ], + "lines": [ + { + "bbox": [ + 150, + 551, + 227, + 613 + ], + "spans": [ + { + "bbox": [ + 150, + 551, + 227, + 613 + ], + "type": "image", + "image_path": "2fd077dbbd0d8c49b3d104308ffd8f662dbabfe5228d171df188dcd388f68d7e.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 230, + 551, + 306, + 613 + ], + "blocks": [ + { + "bbox": [ + 230, + 551, + 306, + 613 + ], + "lines": [ + { + "bbox": [ + 230, + 551, + 306, + 613 + ], + "spans": [ + { + "bbox": [ + 230, + 551, + 306, + 613 + ], + "type": "image", + "image_path": "6142c847daaa0b2e444e2eaee75b2301277eeb6f084c630d7a11e3e3d7715b7e.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 309, + 551, + 385, + 613 + ], + "blocks": [ + { + "bbox": [ + 309, + 551, + 385, + 613 + ], + 
"lines": [ + { + "bbox": [ + 309, + 551, + 385, + 613 + ], + "spans": [ + { + "bbox": [ + 309, + 551, + 385, + 613 + ], + "type": "image", + "image_path": "db70eb466e41c1079e55240a87d754c1630717a9f61f3af9f006a8036540d3c6.jpg" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_body" + } + ], + "index": 21 + }, + { + "type": "image", + "bbox": [ + 389, + 551, + 464, + 613 + ], + "blocks": [ + { + "bbox": [ + 389, + 551, + 464, + 613 + ], + "lines": [ + { + "bbox": [ + 389, + 551, + 464, + 613 + ], + "spans": [ + { + "bbox": [ + 389, + 551, + 464, + 613 + ], + "type": "image", + "image_path": "ec817598f9c016649943bcaf3e7f9eea8c6d6ea635c3dbd66d86aac41e302c90.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + } + ], + "index": 22 + }, + { + "type": "image", + "bbox": [ + 469, + 551, + 544, + 613 + ], + "blocks": [ + { + "bbox": [ + 469, + 551, + 544, + 613 + ], + "lines": [ + { + "bbox": [ + 469, + 551, + 544, + 613 + ], + "spans": [ + { + "bbox": [ + 469, + 551, + 544, + 613 + ], + "type": "image", + "image_path": "4c71302f59a6e2c12cdc6bab1d67480bad20db33ec424562bec26d4d367ff76b.jpg" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_body" + } + ], + "index": 23 + }, + { + "type": "image", + "bbox": [ + 71, + 616, + 148, + 680 + ], + "blocks": [ + { + "bbox": [ + 71, + 616, + 148, + 680 + ], + "lines": [ + { + "bbox": [ + 71, + 616, + 148, + 680 + ], + "spans": [ + { + "bbox": [ + 71, + 616, + 148, + 680 + ], + "type": "image", + "image_path": "cba4f32c6197716dd8754f4789b7a8866571df39c69c4d735d0ac593280aa660.jpg" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 700, + 504, + 742 + ], + "lines": [ + { + "bbox": [ + 104, + 700, + 504, + 742 + ], + "spans": [ + { + "bbox": [ + 104, + 700, + 504, + 742 + ], + "type": "text", + "content": "Figure 4: Downstream image understanding performance (after finetuning) on each benchmark (higher is better). Blue bars show performance of model backbone initialized with Gemma-1 2B and green bars show model backbone initialized with Gemma-2 2B. Initializing with stronger LLM help improve UniFluid visual understanding performance on most datasets." 
+ } + ] + } + ], + "index": 30, + "angle": 0, + "type": "image_caption" + } + ], + "index": 24 + }, + { + "type": "image", + "bbox": [ + 150, + 616, + 227, + 680 + ], + "blocks": [ + { + "bbox": [ + 150, + 616, + 227, + 680 + ], + "lines": [ + { + "bbox": [ + 150, + 616, + 227, + 680 + ], + "spans": [ + { + "bbox": [ + 150, + 616, + 227, + 680 + ], + "type": "image", + "image_path": "f181a225d21891e5163a357e3e714abad9344f5552bb4dc59b2bd38c830fa9da.jpg" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_body" + } + ], + "index": 25 + }, + { + "type": "image", + "bbox": [ + 230, + 616, + 306, + 691 + ], + "blocks": [ + { + "bbox": [ + 230, + 616, + 306, + 691 + ], + "lines": [ + { + "bbox": [ + 230, + 616, + 306, + 691 + ], + "spans": [ + { + "bbox": [ + 230, + 616, + 306, + 691 + ], + "type": "image", + "image_path": "cb7ec8bfd66ea26fb70e6ee0daa7cf9e7b4a7c654ee225e88386ca6f754b128b.jpg" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_body" + } + ], + "index": 26 + }, + { + "type": "image", + "bbox": [ + 309, + 616, + 385, + 691 + ], + "blocks": [ + { + "bbox": [ + 309, + 616, + 385, + 691 + ], + "lines": [ + { + "bbox": [ + 309, + 616, + 385, + 691 + ], + "spans": [ + { + "bbox": [ + 309, + 616, + 385, + 691 + ], + "type": "image", + "image_path": "7fab743345e3073b4153a8b710ead7da07fac7174f879e5a3a8ffd62deef737a.jpg" + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_body" + } + ], + "index": 27 + }, + { + "type": "image", + "bbox": [ + 389, + 616, + 464, + 680 + ], + "blocks": [ + { + "bbox": [ + 389, + 616, + 464, + 680 + ], + "lines": [ + { + "bbox": [ + 389, + 616, + 464, + 680 + ], + "spans": [ + { + "bbox": [ + 389, + 616, + 464, + 680 + ], + "type": "image", + "image_path": "11601a09dc1fd36d9290ed9af67eaa56e8cbb16f8407e81c13812bc1a5c2c8b7.jpg" + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_body" + } + ], + "index": 28 + }, + { + "type": "image", + "bbox": [ + 469, + 616, + 544, + 680 + ], + "blocks": [ + { + "bbox": [ + 469, + 616, + 544, + 680 + ], + "lines": [ + { + "bbox": [ + 469, + 616, + 544, + 680 + ], + "spans": [ + { + "bbox": [ + 469, + 616, + 544, + 680 + ], + "type": "image", + "image_path": "f0ce9ba61697d3680f8cc472b3daf1280274538d8e0d6a2eed7026b12c75840b.jpg" + } + ] + } + ], + "index": 29, + "angle": 0, + "type": "image_body" + } + ], + "index": 29 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 31 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 46, + 15, + 164, + 133 + ], + "blocks": [ + { + "bbox": [ + 46, + 15, + 164, + 133 + ], + "lines": [ + { + "bbox": [ + 46, + 15, + 164, + 133 + ], + "spans": [ + { + "bbox": [ + 46, + 15, + 164, + 133 + ], + "type": "image", + "image_path": "c956187abea4dd8794f3600eded98ff56e89b52dfb2e2578cc297cfc9c1e2c4b.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 52, + 138, + 157, + 162 + ], + "lines": [ + { + "bbox": [ + 52, + 138, + 157, + 162 + ], + "spans": [ + { + "bbox": [ + 52, + 138, + 157, + 162 + ], + "type": "text", + "content": "A black basketball shoe with a lightning bolt on it." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 179, + 15, + 298, + 133 + ], + "blocks": [ + { + "bbox": [ + 179, + 15, + 298, + 133 + ], + "lines": [ + { + "bbox": [ + 179, + 15, + 298, + 133 + ], + "spans": [ + { + "bbox": [ + 179, + 15, + 298, + 133 + ], + "type": "image", + "image_path": "1dda4edef0f15fb07c736b4c19bc3834934ff9cadc02e9f0f49659946ff6d814.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 204, + 139, + 271, + 152 + ], + "lines": [ + { + "bbox": [ + 204, + 139, + 271, + 152 + ], + "spans": [ + { + "bbox": [ + 204, + 139, + 271, + 152 + ], + "type": "text", + "content": "A melting apple." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 312, + 15, + 430, + 133 + ], + "blocks": [ + { + "bbox": [ + 312, + 15, + 430, + 133 + ], + "lines": [ + { + "bbox": [ + 312, + 15, + 430, + 133 + ], + "spans": [ + { + "bbox": [ + 312, + 15, + 430, + 133 + ], + "type": "image", + "image_path": "466d2fe43bf8d14bf7059dbedc9088860424dd3ee80301f589abee1a3c733d74.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 317, + 139, + 412, + 161 + ], + "lines": [ + { + "bbox": [ + 317, + 139, + 412, + 161 + ], + "spans": [ + { + "bbox": [ + 317, + 139, + 412, + 161 + ], + "type": "text", + "content": "A space elevator in the universe." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 446, + 15, + 563, + 133 + ], + "blocks": [ + { + "bbox": [ + 446, + 15, + 563, + 133 + ], + "lines": [ + { + "bbox": [ + 446, + 15, + 563, + 133 + ], + "spans": [ + { + "bbox": [ + 446, + 15, + 563, + 133 + ], + "type": "image", + "image_path": "9bd5a18be00334727765b9a18254bc1ed3b8dc9e3dc00bc5cec75969e5d733d0.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 450, + 139, + 544, + 162 + ], + "lines": [ + { + "bbox": [ + 450, + 139, + 544, + 162 + ], + "spans": [ + { + "bbox": [ + 450, + 139, + 544, + 162 + ], + "type": "text", + "content": "human life depicted entirely out of fractals." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 46, + 172, + 163, + 291 + ], + "blocks": [ + { + "bbox": [ + 46, + 172, + 163, + 291 + ], + "lines": [ + { + "bbox": [ + 46, + 172, + 163, + 291 + ], + "spans": [ + { + "bbox": [ + 46, + 172, + 163, + 291 + ], + "type": "image", + "image_path": "2efd5160ecbb6255aa5aa53f9e7c7990aed82e7424c5e14d7ec64fd6295f8a8b.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 53, + 297, + 159, + 318 + ], + "lines": [ + { + "bbox": [ + 53, + 297, + 159, + 318 + ], + "spans": [ + { + "bbox": [ + 53, + 297, + 159, + 318 + ], + "type": "text", + "content": "Graffiti of a funny dog on a street wall." 
+ } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 179, + 171, + 296, + 289 + ], + "blocks": [ + { + "bbox": [ + 179, + 171, + 296, + 289 + ], + "lines": [ + { + "bbox": [ + 179, + 171, + 296, + 289 + ], + "spans": [ + { + "bbox": [ + 179, + 171, + 296, + 289 + ], + "type": "image", + "image_path": "eedd97bda7977dc3c2b8ddcaa4797ae048a9f58b38c7e304952ee35bdeb0e617.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 182, + 297, + 277, + 330 + ], + "lines": [ + { + "bbox": [ + 182, + 297, + 277, + 330 + ], + "spans": [ + { + "bbox": [ + 182, + 297, + 277, + 330 + ], + "type": "text", + "content": "Turtle swimming \nunderwater, aesthetic, \nfantasy." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 313, + 172, + 430, + 289 + ], + "blocks": [ + { + "bbox": [ + 313, + 172, + 430, + 289 + ], + "lines": [ + { + "bbox": [ + 313, + 172, + 430, + 289 + ], + "spans": [ + { + "bbox": [ + 313, + 172, + 430, + 289 + ], + "type": "image", + "image_path": "2d819e1f841d2622c73d61555687a7195917f493c4944afb10b5dba54f3e5ca1.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 316, + 297, + 416, + 330 + ], + "lines": [ + { + "bbox": [ + 316, + 297, + 416, + 330 + ], + "spans": [ + { + "bbox": [ + 316, + 297, + 416, + 330 + ], + "type": "text", + "content": "A transparent sculpture of a duck made out of glass." + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 447, + 172, + 563, + 290 + ], + "blocks": [ + { + "bbox": [ + 447, + 172, + 563, + 290 + ], + "lines": [ + { + "bbox": [ + 447, + 172, + 563, + 290 + ], + "spans": [ + { + "bbox": [ + 447, + 172, + 563, + 290 + ], + "type": "image", + "image_path": "7ff46f16950151c5e77b47b936d2c67bdc5ecdccfbe3f2e8131383c933a526f9.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 450, + 297, + 553, + 318 + ], + "lines": [ + { + "bbox": [ + 450, + 297, + 553, + 318 + ], + "spans": [ + { + "bbox": [ + 450, + 297, + 553, + 318 + ], + "type": "text", + "content": "An armchair in the shape of an avocado." + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 45, + 342, + 164, + 460 + ], + "blocks": [ + { + "bbox": [ + 45, + 342, + 164, + 460 + ], + "lines": [ + { + "bbox": [ + 45, + 342, + 164, + 460 + ], + "spans": [ + { + "bbox": [ + 45, + 342, + 164, + 460 + ], + "type": "image", + "image_path": "db9b12aa29a17e233f26cae458623aad7335736add2998309d2ecc99fc278a1c.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 48, + 467, + 161, + 510 + ], + "lines": [ + { + "bbox": [ + 48, + 467, + 161, + 510 + ], + "spans": [ + { + "bbox": [ + 48, + 467, + 161, + 510 + ], + "type": "text", + "content": "A hyper-detailed rendering of a geyser erupting in a colorful, geothermal landscape." 
+ } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 179, + 342, + 296, + 460 + ], + "blocks": [ + { + "bbox": [ + 179, + 342, + 296, + 460 + ], + "lines": [ + { + "bbox": [ + 179, + 342, + 296, + 460 + ], + "spans": [ + { + "bbox": [ + 179, + 342, + 296, + 460 + ], + "type": "image", + "image_path": "f54f46e66e32e2eec00ae7b8d752e50c4814067b201f685dac85b03a07a273d6.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 182, + 467, + 276, + 501 + ], + "lines": [ + { + "bbox": [ + 182, + 467, + 276, + 501 + ], + "spans": [ + { + "bbox": [ + 182, + 467, + 276, + 501 + ], + "type": "text", + "content": "An astronaut riding a horse on the moon, oil painting by Van Gogh." + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_caption" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 313, + 342, + 430, + 459 + ], + "blocks": [ + { + "bbox": [ + 313, + 342, + 430, + 459 + ], + "lines": [ + { + "bbox": [ + 313, + 342, + 430, + 459 + ], + "spans": [ + { + "bbox": [ + 313, + 342, + 430, + 459 + ], + "type": "image", + "image_path": "8d1ac576da2e319c5322e81c14503269b1d8b27ac23b92ebc04c6a1e63258500.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 316, + 467, + 417, + 499 + ], + "lines": [ + { + "bbox": [ + 316, + 467, + 417, + 499 + ], + "spans": [ + { + "bbox": [ + 316, + 467, + 417, + 499 + ], + "type": "text", + "content": "photo of an eagle with a golden crown resting upon its head." + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_caption" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 447, + 342, + 563, + 459 + ], + "blocks": [ + { + "bbox": [ + 447, + 342, + 563, + 459 + ], + "lines": [ + { + "bbox": [ + 447, + 342, + 563, + 459 + ], + "spans": [ + { + "bbox": [ + 447, + 342, + 563, + 459 + ], + "type": "image", + "image_path": "a15ecf2565252d6785ad118897c8a9997898e511a7971087b628b411aee66098.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 450, + 467, + 553, + 500 + ], + "lines": [ + { + "bbox": [ + 450, + 467, + 553, + 500 + ], + "spans": [ + { + "bbox": [ + 450, + 467, + 553, + 500 + ], + "type": "text", + "content": "A window with raindrops trickling down, overlooking a blurry city." + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_caption" + } + ], + "index": 22 + }, + { + "type": "image", + "bbox": [ + 45, + 525, + 164, + 643 + ], + "blocks": [ + { + "bbox": [ + 45, + 525, + 164, + 643 + ], + "lines": [ + { + "bbox": [ + 45, + 525, + 164, + 643 + ], + "spans": [ + { + "bbox": [ + 45, + 525, + 164, + 643 + ], + "type": "image", + "image_path": "1268a3fc86a3637bab1e17dd7e19f49d8912d1bec2f9c972de4d7c3bbe98d92a.jpg" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 48, + 651, + 157, + 715 + ], + "lines": [ + { + "bbox": [ + 48, + 651, + 157, + 715 + ], + "spans": [ + { + "bbox": [ + 48, + 651, + 157, + 715 + ], + "type": "text", + "content": "An image of a modern architectural building with large glass windows, situated on a cliff overlooking a serene ocean at sunset." 
+ } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_caption" + } + ], + "index": 24 + }, + { + "type": "image", + "bbox": [ + 179, + 525, + 296, + 643 + ], + "blocks": [ + { + "bbox": [ + 179, + 525, + 296, + 643 + ], + "lines": [ + { + "bbox": [ + 179, + 525, + 296, + 643 + ], + "spans": [ + { + "bbox": [ + 179, + 525, + 296, + 643 + ], + "type": "image", + "image_path": "0b1b1b92592474bf268039868377a95c9b10fbd9c272acae0077e0faa47382d0.jpg" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 179, + 651, + 291, + 727 + ], + "lines": [ + { + "bbox": [ + 179, + 651, + 291, + 727 + ], + "spans": [ + { + "bbox": [ + 179, + 651, + 291, + 727 + ], + "type": "text", + "content": "A cozy cabin in the middle of a snowy forest, surrounded by tall trees with lights glowing through the windows, a northern lights display visible in the sky." + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_caption" + } + ], + "index": 26 + }, + { + "type": "image", + "bbox": [ + 313, + 525, + 430, + 643 + ], + "blocks": [ + { + "bbox": [ + 313, + 525, + 430, + 643 + ], + "lines": [ + { + "bbox": [ + 313, + 525, + 430, + 643 + ], + "spans": [ + { + "bbox": [ + 313, + 525, + 430, + 643 + ], + "type": "image", + "image_path": "a05f6050aea65fd8c472eca0b0aa84d22ae31d07fc18623d18f8abcf22e27deb.jpg" + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 316, + 651, + 425, + 727 + ], + "lines": [ + { + "bbox": [ + 316, + 651, + 425, + 727 + ], + "spans": [ + { + "bbox": [ + 316, + 651, + 425, + 727 + ], + "type": "text", + "content": "A still life of a vase overflowing with vibrant flowers, painted in bold colors and textured brushstrokes, reminiscent of van Gogh's iconic style." + } + ] + } + ], + "index": 29, + "angle": 0, + "type": "image_caption" + } + ], + "index": 28 + }, + { + "type": "image", + "bbox": [ + 447, + 525, + 563, + 643 + ], + "blocks": [ + { + "bbox": [ + 447, + 525, + 563, + 643 + ], + "lines": [ + { + "bbox": [ + 447, + 525, + 563, + 643 + ], + "spans": [ + { + "bbox": [ + 447, + 525, + 563, + 643 + ], + "type": "image", + "image_path": "66c56563088f32892912db5825e6709d4e50a71e4d13352801783949bdaace75.jpg" + } + ] + } + ], + "index": 30, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 449, + 651, + 558, + 727 + ], + "lines": [ + { + "bbox": [ + 449, + 651, + 558, + 727 + ], + "spans": [ + { + "bbox": [ + 449, + 651, + 558, + 727 + ], + "type": "text", + "content": "An otherworldly forest of giant glowing mushrooms under a vibrant night sky filled with distant planets and stars, creating a dreamlike, cosmic landscape." + } + ] + } + ], + "index": 31, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 137, + 727, + 473, + 738 + ], + "lines": [ + { + "bbox": [ + 137, + 727, + 473, + 738 + ], + "spans": [ + { + "bbox": [ + 137, + 727, + 473, + 738 + ], + "type": "text", + "content": "Figure 5: Images generated from UniFluid autoregressive model after aesthetic fine-tuning." 
+ } + ] + } + ], + "index": 32, + "angle": 0, + "type": "image_caption" + } + ], + "index": 30 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 33 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 66, + 29, + 170, + 166 + ], + "blocks": [ + { + "bbox": [ + 66, + 29, + 170, + 166 + ], + "lines": [ + { + "bbox": [ + 66, + 29, + 170, + 166 + ], + "spans": [ + { + "bbox": [ + 66, + 29, + 170, + 166 + ], + "type": "image", + "image_path": "36f4a8eceafcdb00d110ca517d7b8a9cf4704c36dd59aee4f5c7d4974d7e9486.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 173, + 36, + 291, + 148 + ], + "lines": [ + { + "bbox": [ + 173, + 36, + 291, + 148 + ], + "spans": [ + { + "bbox": [ + 173, + 36, + 291, + 148 + ], + "type": "text", + "content": "A close up view of a wooden pole with the word \"sushi\" in silver letters on it. The pole is in front of a parking lot. The pole is in the foreground, with a concrete sidewalk in the foreground. There are plants and trees in the background. There is a brown brick building with a black awning on the left side of it. There are trees in the background. The sky is clear and blue." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 304, + 30, + 405, + 163 + ], + "blocks": [ + { + "bbox": [ + 304, + 30, + 405, + 163 + ], + "lines": [ + { + "bbox": [ + 304, + 30, + 405, + 163 + ], + "spans": [ + { + "bbox": [ + 304, + 30, + 405, + 163 + ], + "type": "image", + "image_path": "3b5e953dc74551e08a0fd1c416c0915653361ba3cff1e61815598e1a8cd421bb.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 410, + 34, + 541, + 148 + ], + "lines": [ + { + "bbox": [ + 410, + 34, + 541, + 148 + ], + "spans": [ + { + "bbox": [ + 410, + 34, + 541, + 148 + ], + "type": "text", + "content": "A close up view of a wet white and brown labradoodle dog walking on a gray cement surface. The dog has a red leash attached to its collar. The dog has a silver metal hook attached to its collar. The dog is walking towards the camera. The dog is casting a shadow on the cement surface. Behind the dog is a building with a large window on the side. A tree line is visible in the background. The sky is blue with a few white clouds." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 67, + 180, + 209, + 281 + ], + "blocks": [ + { + "bbox": [ + 67, + 180, + 209, + 281 + ], + "lines": [ + { + "bbox": [ + 67, + 180, + 209, + 281 + ], + "spans": [ + { + "bbox": [ + 67, + 180, + 209, + 281 + ], + "type": "image", + "image_path": "04fe165de52995499a22c711c69eafb3a57240f4b865c524c8ee2c812a8e402b.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 215, + 201, + 291, + 239 + ], + "lines": [ + { + "bbox": [ + 215, + 201, + 291, + 239 + ], + "spans": [ + { + "bbox": [ + 215, + 201, + 291, + 239 + ], + "type": "text", + "content": "Question: In what year did Panama's population reach 4.28 million?" 
+ } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 216, + 248, + 263, + 258 + ], + "lines": [ + { + "bbox": [ + 216, + 248, + 263, + 258 + ], + "spans": [ + { + "bbox": [ + 216, + 248, + 263, + 258 + ], + "type": "text", + "content": "Answer: 2020" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 302, + 180, + 445, + 279 + ], + "blocks": [ + { + "bbox": [ + 302, + 180, + 445, + 279 + ], + "lines": [ + { + "bbox": [ + 302, + 180, + 445, + 279 + ], + "spans": [ + { + "bbox": [ + 302, + 180, + 445, + 279 + ], + "type": "image", + "image_path": "3affb1716f45e3f8a8e892a46ed5c32d321d2f28697ab2b0d93d6ccf76afcff1.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 457, + 192, + 528, + 239 + ], + "lines": [ + { + "bbox": [ + 457, + 192, + 528, + 239 + ], + "spans": [ + { + "bbox": [ + 457, + 192, + 528, + 239 + ], + "type": "text", + "content": "Question: What device's average duration of in-app engagement sessions was 6.6 minutes?" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 457, + 248, + 513, + 257 + ], + "lines": [ + { + "bbox": [ + 457, + 248, + 513, + 257 + ], + "spans": [ + { + "bbox": [ + 457, + 248, + 513, + 257 + ], + "type": "text", + "content": "Answer: android" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 66, + 293, + 205, + 399 + ], + "blocks": [ + { + "bbox": [ + 66, + 293, + 205, + 399 + ], + "lines": [ + { + "bbox": [ + 66, + 293, + 205, + 399 + ], + "spans": [ + { + "bbox": [ + 66, + 293, + 205, + 399 + ], + "type": "image", + "image_path": "d247a1dbb553478afd47f826cff4a46e58101f933b13bce059a3897dcaba7b76.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 211, + 323, + 281, + 351 + ], + "lines": [ + { + "bbox": [ + 211, + 323, + 281, + 351 + ], + "spans": [ + { + "bbox": [ + 211, + 323, + 281, + 351 + ], + "type": "text", + "content": "Question: The bus is likely driving through which American city?" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 211, + 361, + 271, + 370 + ], + "lines": [ + { + "bbox": [ + 211, + 361, + 271, + 370 + ], + "spans": [ + { + "bbox": [ + 211, + 361, + 271, + 370 + ], + "type": "text", + "content": "Answer: new york" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 104, + 402, + 504, + 424 + ], + "lines": [ + { + "bbox": [ + 104, + 402, + 504, + 424 + ], + "spans": [ + { + "bbox": [ + 104, + 402, + 504, + 424 + ], + "type": "text", + "content": "Figure 6: Finetuned UniFluid model demonstrates strong image-to-text capability on image captioning and question answering." 
+ } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 302, + 294, + 441, + 399 + ], + "blocks": [ + { + "bbox": [ + 302, + 294, + 441, + 399 + ], + "lines": [ + { + "bbox": [ + 302, + 294, + 441, + 399 + ], + "spans": [ + { + "bbox": [ + 302, + 294, + 441, + 399 + ], + "type": "image", + "image_path": "56f7755653a55050558209e33a525944678ce07f18db276e8244c8c56aa35766.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 457, + 323, + 520, + 360 + ], + "lines": [ + { + "bbox": [ + 457, + 323, + 520, + 360 + ], + "spans": [ + { + "bbox": [ + 457, + 323, + 520, + 360 + ], + "type": "text", + "content": "Question: Which number birthday is probably being celebrated?" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 457, + 370, + 506, + 380 + ], + "lines": [ + { + "bbox": [ + 457, + 370, + 506, + 380 + ], + "spans": [ + { + "bbox": [ + 457, + 370, + 506, + 380 + ], + "type": "text", + "content": "Answer: thirty" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 64, + 432, + 142, + 511 + ], + "blocks": [ + { + "bbox": [ + 64, + 432, + 142, + 511 + ], + "lines": [ + { + "bbox": [ + 64, + 432, + 142, + 511 + ], + "spans": [ + { + "bbox": [ + 64, + 432, + 142, + 511 + ], + "type": "image", + "image_path": "79f93e931fd94d63593c874d6ac1fb094c20d25d037a24a190c7c96be9620be3.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 87, + 516, + 201, + 524 + ], + "lines": [ + { + "bbox": [ + 87, + 516, + 201, + 524 + ], + "spans": [ + { + "bbox": [ + 87, + 516, + 201, + 524 + ], + "type": "text", + "content": "Change the black suitcase to a white one" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 143, + 433, + 222, + 511 + ], + "blocks": [ + { + "bbox": [ + 143, + 433, + 222, + 511 + ], + "lines": [ + { + "bbox": [ + 143, + 433, + 222, + 511 + ], + "spans": [ + { + "bbox": [ + 143, + 433, + 222, + 511 + ], + "type": "image", + "image_path": "d19bf5d7e7f56008e163ebc8e441f8a1112671cf8a8ef094221c9f5680a30ced.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 225, + 433, + 304, + 511 + ], + "blocks": [ + { + "bbox": [ + 225, + 433, + 304, + 511 + ], + "lines": [ + { + "bbox": [ + 225, + 433, + 304, + 511 + ], + "spans": [ + { + "bbox": [ + 225, + 433, + 304, + 511 + ], + "type": "image", + "image_path": "1d019c5e0c764eb1e8b95793df132237f61dc32dfaf1373ae7c746f9613670e2.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 248, + 516, + 343, + 525 + ], + "lines": [ + { + "bbox": [ + 248, + 516, + 343, + 525 + ], + "spans": [ + { + "bbox": [ + 248, + 516, + 343, + 525 + ], + "type": "text", + "content": "Remove the fence from the photo" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_caption" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 304, + 433, + 383, + 511 + ], + "blocks": [ + { + "bbox": [ + 304, + 433, + 383, + 511 + ], + "lines": [ + { + "bbox": [ + 304, + 433, + 383, + 511 + ], + "spans": [ + { + "bbox": [ + 304, + 433, + 383, + 511 + ], + "type": "image", + "image_path": "7f3981b5455644ddc8ab004d938034d1d81756f69c154a9d395fc70887dc2689.jpg" + } + ] + } + ], 
+ "index": 21, + "angle": 0, + "type": "image_body" + } + ], + "index": 21 + }, + { + "type": "image", + "bbox": [ + 387, + 433, + 466, + 511 + ], + "blocks": [ + { + "bbox": [ + 387, + 433, + 466, + 511 + ], + "lines": [ + { + "bbox": [ + 387, + 433, + 466, + 511 + ], + "spans": [ + { + "bbox": [ + 387, + 433, + 466, + 511 + ], + "type": "image", + "image_path": "857c3d2b4d90a930f09ebe7c2eaecd6e41843ff5f2eb1e4dfd4d5f0fbb8c4eae.jpg" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 395, + 516, + 538, + 525 + ], + "lines": [ + { + "bbox": [ + 395, + 516, + 538, + 525 + ], + "spans": [ + { + "bbox": [ + 395, + 516, + 538, + 525 + ], + "type": "text", + "content": "Change this into a 1950s Flintstone cartoon art style" + } + ] + } + ], + "index": 30, + "angle": 0, + "type": "image_caption" + } + ], + "index": 23 + }, + { + "type": "image", + "bbox": [ + 466, + 433, + 545, + 511 + ], + "blocks": [ + { + "bbox": [ + 466, + 433, + 545, + 511 + ], + "lines": [ + { + "bbox": [ + 466, + 433, + 545, + 511 + ], + "spans": [ + { + "bbox": [ + 466, + 433, + 545, + 511 + ], + "type": "image", + "image_path": "5d06f1f7f1bc3e93747fda98c60b99db1e7b3bfc38ea08d5f99f70ff408d4703.jpg" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_body" + } + ], + "index": 24 + }, + { + "type": "image", + "bbox": [ + 64, + 529, + 142, + 608 + ], + "blocks": [ + { + "bbox": [ + 64, + 529, + 142, + 608 + ], + "lines": [ + { + "bbox": [ + 64, + 529, + 142, + 608 + ], + "spans": [ + { + "bbox": [ + 64, + 529, + 142, + 608 + ], + "type": "image", + "image_path": "480023cdc110c691db2b63895df6e29d8f62b6c948074069eb234e06d6a1643d.jpg" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 86, + 612, + 197, + 620 + ], + "lines": [ + { + "bbox": [ + 86, + 612, + 197, + 620 + ], + "spans": [ + { + "bbox": [ + 86, + 612, + 197, + 620 + ], + "type": "text", + "content": "Change the trees from flowers to icicles" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_caption" + } + ], + "index": 25 + }, + { + "type": "image", + "bbox": [ + 143, + 529, + 222, + 608 + ], + "blocks": [ + { + "bbox": [ + 143, + 529, + 222, + 608 + ], + "lines": [ + { + "bbox": [ + 143, + 529, + 222, + 608 + ], + "spans": [ + { + "bbox": [ + 143, + 529, + 222, + 608 + ], + "type": "image", + "image_path": "3cf06912c8adfa112e4cec26c1ef46e0c4e30e821c79f1b2de0b87c712b9bc5c.jpg" + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_body" + } + ], + "index": 27 + }, + { + "type": "image", + "bbox": [ + 225, + 529, + 304, + 608 + ], + "blocks": [ + { + "bbox": [ + 225, + 529, + 304, + 608 + ], + "lines": [ + { + "bbox": [ + 225, + 529, + 304, + 608 + ], + "spans": [ + { + "bbox": [ + 225, + 529, + 304, + 608 + ], + "type": "image", + "image_path": "eb0786feda26f92aea48692bb7ee90a2bd953e0f5f3bb17a29cbf991e6540cde.jpg" + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 249, + 612, + 358, + 620 + ], + "lines": [ + { + "bbox": [ + 249, + 612, + 358, + 620 + ], + "spans": [ + { + "bbox": [ + 249, + 612, + 358, + 620 + ], + "type": "text", + "content": "Change the elephant into a giant rabbit" + } + ] + } + ], + "index": 35, + "angle": 0, + "type": "image_caption" + } + ], + "index": 28 + }, + { + "type": "image", + "bbox": [ + 304, + 529, + 383, + 608 + ], + "blocks": [ + { + "bbox": [ + 304, + 529, + 383, + 608 + ], + "lines": [ + { + "bbox": [ + 304, + 529, + 383, + 608 + ], + "spans": [ + { + "bbox": [ + 304, + 529, + 
383, + 608 + ], + "type": "image", + "image_path": "ab435942c6a892c5e005c6f3e596ec777b7002aee1398e2d94d8acf12d940801.jpg" + } + ] + } + ], + "index": 29, + "angle": 0, + "type": "image_body" + } + ], + "index": 29 + }, + { + "type": "image", + "bbox": [ + 388, + 529, + 466, + 608 + ], + "blocks": [ + { + "bbox": [ + 388, + 529, + 466, + 608 + ], + "lines": [ + { + "bbox": [ + 388, + 529, + 466, + 608 + ], + "spans": [ + { + "bbox": [ + 388, + 529, + 466, + 608 + ], + "type": "image", + "image_path": "9c446d91fda981918537212203a547eb8e4fb4c4bbdb313a91d2a0e202c7d290.jpg" + } + ] + } + ], + "index": 31, + "angle": 0, + "type": "image_body" + } + ], + "index": 31 + }, + { + "type": "image", + "bbox": [ + 466, + 529, + 545, + 608 + ], + "blocks": [ + { + "bbox": [ + 466, + 529, + 545, + 608 + ], + "lines": [ + { + "bbox": [ + 466, + 529, + 545, + 608 + ], + "spans": [ + { + "bbox": [ + 466, + 529, + 545, + 608 + ], + "type": "image", + "image_path": "7f9899abfaf1d95a495617d9c2b59fbcd4e91d93fcf676ed5bac64c54a57d59d.jpg" + } + ] + } + ], + "index": 32, + "angle": 0, + "type": "image_body" + } + ], + "index": 32 + }, + { + "type": "image", + "bbox": [ + 64, + 625, + 142, + 704 + ], + "blocks": [ + { + "bbox": [ + 64, + 625, + 142, + 704 + ], + "lines": [ + { + "bbox": [ + 64, + 625, + 142, + 704 + ], + "spans": [ + { + "bbox": [ + 64, + 625, + 142, + 704 + ], + "type": "image", + "image_path": "37e1926ec2131e720adf5a00ec75683674a2fdf5d3477f9049de45486d0cd9f1.jpg" + } + ] + } + ], + "index": 33, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 709, + 180, + 717 + ], + "lines": [ + { + "bbox": [ + 104, + 709, + 180, + 717 + ], + "spans": [ + { + "bbox": [ + 104, + 709, + 180, + 717 + ], + "type": "text", + "content": "Change the bird to all white" + } + ] + } + ], + "index": 40, + "angle": 0, + "type": "image_caption" + } + ], + "index": 33 + }, + { + "type": "image", + "bbox": [ + 143, + 625, + 222, + 704 + ], + "blocks": [ + { + "bbox": [ + 143, + 625, + 222, + 704 + ], + "lines": [ + { + "bbox": [ + 143, + 625, + 222, + 704 + ], + "spans": [ + { + "bbox": [ + 143, + 625, + 222, + 704 + ], + "type": "image", + "image_path": "00ace96392ae997da28586fe1293583f6d92d6429a3fcfed2d0751233fbb1bb4.jpg" + } + ] + } + ], + "index": 34, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 718, + 506, + 738 + ], + "lines": [ + { + "bbox": [ + 104, + 718, + 506, + 738 + ], + "spans": [ + { + "bbox": [ + 104, + 718, + 506, + 738 + ], + "type": "text", + "content": "Figure 7: Image editing results on evaluation benchmark from UniFluid autoregressive model after fine-tuning. It handles multiple editing tasks effectively, including object removal, insertion, style and color changes." 
+ } + ] + } + ], + "index": 43, + "angle": 0, + "type": "image_caption" + } + ], + "index": 34 + }, + { + "type": "image", + "bbox": [ + 241, + 626, + 299, + 703 + ], + "blocks": [ + { + "bbox": [ + 241, + 626, + 299, + 703 + ], + "lines": [ + { + "bbox": [ + 241, + 626, + 299, + 703 + ], + "spans": [ + { + "bbox": [ + 241, + 626, + 299, + 703 + ], + "type": "image", + "image_path": "83cafdcca03ca33853f930748ce61120aaab1ee886c0c13f8fcc11862d110f05.jpg" + } + ] + } + ], + "index": 36, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 242, + 709, + 361, + 717 + ], + "lines": [ + { + "bbox": [ + 242, + 709, + 361, + 717 + ], + "spans": [ + { + "bbox": [ + 242, + 709, + 361, + 717 + ], + "type": "text", + "content": "Turn it into a Paul Cezanne still life painting" + } + ] + } + ], + "index": 41, + "angle": 0, + "type": "image_caption" + } + ], + "index": 36 + }, + { + "type": "image", + "bbox": [ + 306, + 626, + 385, + 704 + ], + "blocks": [ + { + "bbox": [ + 306, + 626, + 385, + 704 + ], + "lines": [ + { + "bbox": [ + 306, + 626, + 385, + 704 + ], + "spans": [ + { + "bbox": [ + 306, + 626, + 385, + 704 + ], + "type": "image", + "image_path": "5117cd2da0dfc63a8e7e5811316065d98885423bf2410418a9ecb417bb9c410b.jpg" + } + ] + } + ], + "index": 37, + "angle": 0, + "type": "image_body" + } + ], + "index": 37 + }, + { + "type": "image", + "bbox": [ + 387, + 625, + 466, + 703 + ], + "blocks": [ + { + "bbox": [ + 387, + 625, + 466, + 703 + ], + "lines": [ + { + "bbox": [ + 387, + 625, + 466, + 703 + ], + "spans": [ + { + "bbox": [ + 387, + 625, + 466, + 703 + ], + "type": "image", + "image_path": "2f04b59d14ad478ebcffecd91ec88aa335419c135b386a9254939ca44fbb078b.jpg" + } + ] + } + ], + "index": 38, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 423, + 709, + 506, + 717 + ], + "lines": [ + { + "bbox": [ + 423, + 709, + 506, + 717 + ], + "spans": [ + { + "bbox": [ + 423, + 709, + 506, + 717 + ], + "type": "text", + "content": "Change the meat into carrots" + } + ] + } + ], + "index": 42, + "angle": 0, + "type": "image_caption" + } + ], + "index": 38 + }, + { + "type": "image", + "bbox": [ + 466, + 625, + 545, + 703 + ], + "blocks": [ + { + "bbox": [ + 466, + 625, + 545, + 703 + ], + "lines": [ + { + "bbox": [ + 466, + 625, + 545, + 703 + ], + "spans": [ + { + "bbox": [ + 466, + 625, + 545, + 703 + ], + "type": "image", + "image_path": "46973e49a359e902ed1f7d9f02c2def2ed4f8fe3f1d72a0e4d3fdeb7fc3d6011.jpg" + } + ] + } + ], + "index": 39, + "angle": 0, + "type": "image_body" + } + ], + "index": 39 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 44 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 143, + 98, + 467, + 150 + ], + "blocks": [ + { + "bbox": [ + 105, + 78, + 504, + 97 + ], + "lines": [ + { + "bbox": [ + 105, + 78, + 504, + 97 + ], + "spans": [ + { + "bbox": [ + 105, + 78, + 504, + 97 + ], + "type": "text", + "content": "Table 4: Performance comparison of image generation and understanding of UniFluid trained with different image generation order. FID and CIDEr is measured on MS-COCO." 
+ } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 143, + 98, + 467, + 150 + ], + "lines": [ + { + "bbox": [ + 143, + 98, + 467, + 150 + ], + "spans": [ + { + "bbox": [ + 143, + 98, + 467, + 150 + ], + "type": "table", + "html": "
<table><tr><td rowspan=2>Generation Order</td><td colspan=2>Generation</td><td colspan=3>Understanding</td></tr><tr><td>COCO FID ↓</td><td>GenEval ↑</td><td>COCO CIDEr ↑</td><td>Cap Avg ↑</td><td>QA Avg ↑</td></tr><tr><td>Raster</td><td>8.28</td><td>0.59</td><td>45.57</td><td>116.55</td><td>61.81</td></tr><tr><td>Random</td><td>7.20</td><td>0.59</td><td>40.91</td><td>116.13</td><td>62.10</td></tr></table>
", + "image_path": "8027d54f52a4ad71cfa0c7bea294a4fb80ff464f7ea42d394d811564e25d0f95.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 170, + 506, + 216 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 170, + 506, + 216 + ], + "spans": [ + { + "bbox": [ + 104, + 170, + 506, + 216 + ], + "type": "text", + "content": "as a backbone is important for unlocking improved visual quality in a unified model training setup. The image understanding performance also improved slightly when using stronger LLM, which is a trend also noted in PaliGemma2. We show the detailed comparison on each downstream visual understanding benchmark in Figure 4." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 219, + 504, + 275 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 219, + 504, + 275 + ], + "spans": [ + { + "bbox": [ + 104, + 219, + 504, + 275 + ], + "type": "text", + "content": "Training with Random Order Helps Generation But Not Understanding. Images inherently possess 2D patterns. As demonstrated in Fluid, raster-order training can be problematic, potentially leading to collapse and artifacts characterized by disappearing patterns. Approaches such as RAR [58] and RandAR [33] propose training image generation AR models with random-order training, which can improve ImageNet FID and result in better visual quality." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 279, + 506, + 369 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 279, + 506, + 369 + ], + "spans": [ + { + "bbox": [ + 104, + 279, + 506, + 369 + ], + "type": "text", + "content": "Here we study the effect of different visual generation orders during training within our unified framework setup. We compare the performance between random-order and raster-order training, both with Gemma-2 2B as backbone LLM. The results presented in Table 4 indicate that for per-token image generation within a unified framework, raster-order training continues to underperform compared to random-order generation. Incorporating random-order during training could ensure the generated images are of high quality. However, it does not necessarily improve the visual understanding performance, where raster-order achieves better performance on MS-COCO CIDEr and downstream captioning task average (Cap Avg)." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 380, + 258, + 392 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 380, + 258, + 392 + ], + "spans": [ + { + "bbox": [ + 105, + 380, + 258, + 392 + ], + "type": "text", + "content": "5.3 More Generation Capabilities" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 400, + 484, + 412 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 400, + 484, + 412 + ], + "spans": [ + { + "bbox": [ + 104, + 400, + 484, + 412 + ], + "type": "text", + "content": "We also verify the transferability of the trained model to various downstream generation tasks." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 416, + 506, + 440 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 416, + 506, + 440 + ], + "spans": [ + { + "bbox": [ + 104, + 416, + 506, + 440 + ], + "type": "text", + "content": "Aesthetic Fine-Tuning. To enhance the visual quality and aesthetic appeal of the generated images, we perform aesthetic fine-tuning on a publicly available dataset. 
The results are shown in Figure 5." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 444, + 506, + 510 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 444, + 506, + 510 + ], + "spans": [ + { + "bbox": [ + 104, + 444, + 506, + 510 + ], + "type": "text", + "content": "Image Editing Task. Since our unified framework is trained with multimodal inputs, it can naturally extend to image editing tasks that involve both image and text prompt inputs. We fine-tune the 2B UniFluid model with 4M image editing pairs from HQEdit [18] and UltraEdit [60]. In Figure 7, we apply the fine-tuned model to the input images and editing prompts from a publicly available benchmark. Although preliminary, the experiments show that UniFluid is able to adapt and generalize to tasks that involve interleaved data modalities." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 525, + 185, + 538 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 525, + 185, + 538 + ], + "spans": [ + { + "bbox": [ + 105, + 525, + 185, + 538 + ], + "type": "text", + "content": "6 Conclusion" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 549, + 506, + 660 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 549, + 506, + 660 + ], + "spans": [ + { + "bbox": [ + 104, + 549, + 506, + 660 + ], + "type": "text", + "content": "In this paper, we presented UniFluid, a pure autoregressive framework for joint visual generation and understanding, utilizing continuous visual tokens. We identified an inherent trade-off between the visual generation and understanding tasks, but the two tasks can benefit each other with tuned training recipes. Careful choice of the loss balance between the two tasks allows a single unified model to achieve performance comparable to or exceeding single-task baselines. We conducted an investigation of key design choices for UniFluid training, revealing the critical importance of employing a strong backbone LLM and random-order generation to unlock high-quality visual generation capabilities. We believe that this work encourages future research into the exploration of continuous visual tokens for unified vision-language model training, paving the way for more efficient and powerful autoregressive multimodal systems." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 665, + 506, + 709 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 665, + 506, + 709 + ], + "spans": [ + { + "bbox": [ + 104, + 665, + 506, + 709 + ], + "type": "text", + "content": "Acknowledgements. We are grateful to Alex Rizkowsky and Amy Shen for their support in securing computational resources. We also wish to thank Charles Herrmann, Junhwa Hur, Shangbang Long, André Susano Pinto, Srinivas Kaza, David Salesin, and the VisCam team for their insightful discussions and constructive feedback, which greatly improved this work." 
+ } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 312, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 312, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 312, + 750 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 71, + 165, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 71, + 165, + 84 + ], + "spans": [ + { + "bbox": [ + 106, + 71, + 165, + 84 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 105, + 89, + 507, + 721 + ], + "type": "list", + "angle": 0, + "index": 16, + "blocks": [ + { + "bbox": [ + 111, + 89, + 505, + 122 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 89, + 505, + 122 + ], + "spans": [ + { + "bbox": [ + 111, + 89, + 505, + 122 + ], + "type": "text", + "content": "[1] Manoj Acharya, Kushal Kafle, and Christopher Kanan. Tallyqa: Answering complex counting questions. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 8076-8084, 2019." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 111, + 129, + 507, + 174 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 129, + 507, + 174 + ], + "spans": [ + { + "bbox": [ + 111, + 129, + 507, + 174 + ], + "type": "text", + "content": "[2] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in neural information processing systems, 35:23716-23736, 2022." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 111, + 178, + 505, + 214 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 178, + 505, + 214 + ], + "spans": [ + { + "bbox": [ + 111, + 178, + 505, + 214 + ], + "type": "text", + "content": "[3] Lucas Beyer, Andreas Steiner, André Susano Pinto, Alexander Kolesnikov, Xiao Wang, Daniel Salz, Maxim Neumann, Ibrahim Alabdulmohsin, Michael Tschannen, Emanuele Bugliarello, et al. Paligemma: A versatile 3b vlm for transfer. arXiv preprint arXiv:2407.07726, 2024." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 111, + 219, + 505, + 254 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 219, + 505, + 254 + ], + "spans": [ + { + "bbox": [ + 111, + 219, + 505, + 254 + ], + "type": "text", + "content": "[4] Ali Furkan Biten, Ruben Tito, Andres Mafla, Lluis Gomez, Marçal Rusinol, Ernest Valveny, CV Jawahar, and Dimosthenis Karatzas. Scene text visual question answering. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4291-4301, 2019." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 111, + 258, + 505, + 293 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 258, + 505, + 293 + ], + "spans": [ + { + "bbox": [ + 111, + 258, + 505, + 293 + ], + "type": "text", + "content": "[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901, 2020." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 111, + 298, + 505, + 333 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 298, + 505, + 333 + ], + "spans": [ + { + "bbox": [ + 111, + 298, + 505, + 333 + ], + "type": "text", + "content": "[6] Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. Muse: Text-to-image generation via masked generative transformers. arXiv preprint arXiv:2301.00704, 2023." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 111, + 338, + 505, + 372 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 338, + 505, + 372 + ], + "spans": [ + { + "bbox": [ + 111, + 338, + 505, + 372 + ], + "type": "text", + "content": "[7] Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. Pali: A jointly-scaled multilingual language-image model. arXiv preprint arXiv:2209.06794, 2022." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 111, + 377, + 505, + 412 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 377, + 505, + 412 + ], + "spans": [ + { + "bbox": [ + 111, + 377, + 505, + 412 + ], + "type": "text", + "content": "[8] Xiaokang Chen, Zhiyu Wu, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, and Chong Ruan. Janus-pro: Unified multimodal understanding and generation with data and model scaling. arXiv preprint arXiv:2501.17811, 2025." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 111, + 417, + 507, + 460 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 417, + 507, + 460 + ], + "spans": [ + { + "bbox": [ + 111, + 417, + 507, + 460 + ], + "type": "text", + "content": "[9] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1-113, 2023." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 468, + 505, + 502 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 468, + 505, + 502 + ], + "spans": [ + { + "bbox": [ + 105, + 468, + 505, + 502 + ], + "type": "text", + "content": "[10] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructlip: Towards general-purpose vision-language models with instruction tuning, 2023." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 507, + 507, + 553 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 507, + 507, + 553 + ], + "spans": [ + { + "bbox": [ + 105, + 507, + 507, + 553 + ], + "type": "text", + "content": "[11] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, volume 1 (long and short papers), pages 4171–4186, 2019." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 558, + 505, + 593 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 558, + 505, + 593 + ], + "spans": [ + { + "bbox": [ + 105, + 558, + 505, + 593 + ], + "type": "text", + "content": "[12] Lijie Fan, Tianhong Li, Siyang Qin, Yuanzhen Li, Chen Sun, Michael Rubinstein, Deqing Sun, Kaiming He, and Yonglong Tian. Fluid: Scaling autoregressive text-to-image generative models with continuous tokens. arXiv preprint arXiv:2410.13863, 2024." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 597, + 505, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 597, + 505, + 632 + ], + "spans": [ + { + "bbox": [ + 105, + 597, + 505, + 632 + ], + "type": "text", + "content": "[13] Dhruba Ghosh, Hannaneh Hajishirzi, and Ludwig Schmidt. Geneval: An object-focused framework for evaluating text-to-image alignment. Advances in Neural Information Processing Systems, 36:52132-52152, 2023." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 637, + 505, + 671 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 637, + 505, + 671 + ], + "spans": [ + { + "bbox": [ + 105, + 637, + 505, + 671 + ], + "type": "text", + "content": "[14] Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive networks. In International Conference on Machine Learning, pages 1242–1250. PMLR, 2014." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 677, + 505, + 721 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 677, + 505, + 721 + ], + "spans": [ + { + "bbox": [ + 105, + 677, + 505, + 721 + ], + "type": "text", + "content": "[15] Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. Vizwiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3608-3617, 2018." + } + ] + } + ], + "index": 15 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 72, + 506, + 723 + ], + "type": "list", + "angle": 0, + "index": 16, + "blocks": [ + { + "bbox": [ + 106, + 72, + 506, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 72, + 506, + 106 + ], + "spans": [ + { + "bbox": [ + 106, + 72, + 506, + 106 + ], + "type": "text", + "content": "[16] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 106, + 112, + 505, + 147 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 112, + 505, + 147 + ], + "spans": [ + { + "bbox": [ + 106, + 112, + 505, + 147 + ], + "type": "text", + "content": "[17] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. 
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6700-6709, 2019." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 152, + 504, + 186 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 152, + 504, + 186 + ], + "spans": [ + { + "bbox": [ + 107, + 152, + 504, + 186 + ], + "type": "text", + "content": "[18] Mude Hui, Siwei Yang, Bingchen Zhao, Yichun Shi, Heng Wang, Peng Wang, Yuyin Zhou, and Cihang Xie. Hq-edit: A high-quality dataset for instruction-based image editing. arXiv preprint arXiv:2404.09990, 2024." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 193, + 504, + 238 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 193, + 504, + 238 + ], + "spans": [ + { + "bbox": [ + 107, + 193, + 504, + 238 + ], + "type": "text", + "content": "[19] Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. A diagram is worth a dozen images. In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV 14, pages 235-251. Springer, 2016." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 107, + 244, + 504, + 268 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 244, + 504, + 268 + ], + "spans": [ + { + "bbox": [ + 107, + 244, + 504, + 268 + ], + "type": "text", + "content": "[20] Taku Kudo and John Richardson. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226, 2018." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 274, + 505, + 307 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 274, + 505, + 307 + ], + "spans": [ + { + "bbox": [ + 107, + 274, + 505, + 307 + ], + "type": "text", + "content": "[21] Tianhong Li, Yonglong Tian, He Li, Mingyang Deng, and Kaiming He. Autoregressive image generation without vector quantization. Advances in Neural Information Processing Systems, 37:56424-56445, 2024." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 314, + 505, + 348 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 314, + 505, + 348 + ], + "spans": [ + { + "bbox": [ + 107, + 314, + 505, + 348 + ], + "type": "text", + "content": "[22] Yang Li, Gang Li, Luheng He, Jingjie Zheng, Hong Li, and Zhiwei Guan. Widget captioning: Generating natural language description for mobile user interface elements. arXiv preprint arXiv:2010.04295, 2020." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 354, + 505, + 388 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 354, + 505, + 388 + ], + "spans": [ + { + "bbox": [ + 107, + 354, + 505, + 388 + ], + "type": "text", + "content": "[23] Ji Lin, Hongxu Yin, Wei Ping, Pavlo Molchanov, Mohammad Shoeybi, and Song Han. Vila: On pre-training for visual language models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 26689-26699, 2024." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 395, + 505, + 440 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 395, + 505, + 440 + ], + "spans": [ + { + "bbox": [ + 107, + 395, + 505, + 440 + ], + "type": "text", + "content": "[24] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. 
In Computer vision-ECCV 2014: 13th European conference, zurich, Switzerland, September 6-12, 2014, proceedings, part v 13, pages 740-755. Springer, 2014." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 107, + 446, + 505, + 480 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 446, + 505, + 480 + ], + "spans": [ + { + "bbox": [ + 107, + 446, + 505, + 480 + ], + "type": "text", + "content": "[25] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26296-26306, 2024." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 107, + 487, + 505, + 510 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 487, + 505, + 510 + ], + "spans": [ + { + "bbox": [ + 107, + 487, + 505, + 510 + ], + "type": "text", + "content": "[26] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36:34892-34916, 2023." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 107, + 516, + 506, + 548 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 516, + 506, + 548 + ], + "spans": [ + { + "bbox": [ + 107, + 516, + 506, + 548 + ], + "type": "text", + "content": "[27] Sylvain Lobry, Diego Marcos, Jesse Murray, and Devis Tuia. Rsvqa: Visual question answering for remote sensing data. IEEE Transactions on Geoscience and Remote Sensing, 58(12):8555-8566, 2020." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 107, + 556, + 506, + 601 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 556, + 506, + 601 + ], + "spans": [ + { + "bbox": [ + 107, + 556, + 506, + 601 + ], + "type": "text", + "content": "[28] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507–2521, 2022." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 107, + 607, + 506, + 642 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 607, + 506, + 642 + ], + "spans": [ + { + "bbox": [ + 107, + 607, + 506, + 642 + ], + "type": "text", + "content": "[29] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pages 3195-3204, 2019." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 107, + 647, + 505, + 681 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 647, + 505, + 681 + ], + "spans": [ + { + "bbox": [ + 107, + 647, + 505, + 681 + ], + "type": "text", + "content": "[30] Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. Chartqa: A benchmark for question answering about charts with visual and logical reasoning. arXiv preprint arXiv:2203.10244, 2022." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 107, + 688, + 506, + 723 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 688, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 107, + 688, + 506, + 723 + ], + "type": "text", + "content": "[31] Minesh Mathew, Viraj Bagal, Ruben Tito, Dimosthenis Karatzas, Ernest Valveny, and CV Jawahar. Infographicvqa. 
In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1697-1706, 2022." + } + ] + } + ], + "index": 15 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 72, + 506, + 723 + ], + "type": "list", + "angle": 0, + "index": 16, + "blocks": [ + { + "bbox": [ + 106, + 72, + 506, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 72, + 506, + 106 + ], + "spans": [ + { + "bbox": [ + 106, + 72, + 506, + 106 + ], + "type": "text", + "content": "[32] Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. Docvqa: A dataset for vqa on document images. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 2200-2209, 2021." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 106, + 114, + 506, + 148 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 114, + 506, + 148 + ], + "spans": [ + { + "bbox": [ + 106, + 114, + 506, + 148 + ], + "type": "text", + "content": "[33] Ziqi Pang, Tianyuan Zhang, Fujun Luan, Yunze Man, Hao Tan, Kai Zhang, William T Freeman, and Yu-Xiong Wang. Randar: Decoder-only autoregressive visual generation in random orders. arXiv preprint arXiv:2412.01827, 2024." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 157, + 506, + 190 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 157, + 506, + 190 + ], + "spans": [ + { + "bbox": [ + 107, + 157, + 506, + 190 + ], + "type": "text", + "content": "[34] Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. In International conference on machine learning, pages 4055-4064. PMLR, 2018." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 199, + 504, + 224 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 199, + 504, + 224 + ], + "spans": [ + { + "bbox": [ + 107, + 199, + 504, + 224 + ], + "type": "text", + "content": "[35] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. Technical Report, 2018." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 107, + 232, + 506, + 255 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 232, + 506, + 255 + ], + "spans": [ + { + "bbox": [ + 107, + 232, + 506, + 255 + ], + "type": "text", + "content": "[36] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models, 2021." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 262, + 504, + 297 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 262, + 504, + 297 + ], + "spans": [ + { + "bbox": [ + 107, + 262, + 504, + 297 + ], + "type": "text", + "content": "[37] Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. A-okvqa: A benchmark for visual question answering using world knowledge. In European conference on computer vision, pages 146–162. Springer, 2022." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 305, + 506, + 339 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 305, + 506, + 339 + ], + "spans": [ + { + "bbox": [ + 107, + 305, + 506, + 339 + ], + "type": "text", + "content": "[38] Weijia Shi, Xiaochuang Han, Chunting Zhou, Weixin Liang, Xi Victoria Lin, Luke Zettlemoyer, and Lili Yu. Llamafusion: Adapting pretrained language models for multimodal generation. arXiv preprint arXiv:2412.15188, 2024." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 348, + 504, + 392 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 348, + 504, + 392 + ], + "spans": [ + { + "bbox": [ + 107, + 348, + 504, + 392 + ], + "type": "text", + "content": "[39] Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, and Amanpreet Singh. Textcaps: a dataset for image captioning with reading comprehension. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part II 16, pages 742-758. Springer, 2020." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 401, + 504, + 436 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 401, + 504, + 436 + ], + "spans": [ + { + "bbox": [ + 107, + 401, + 504, + 436 + ], + "type": "text", + "content": "[40] Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards vqa models that can read. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8317-8326, 2019." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 107, + 444, + 504, + 478 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 444, + 504, + 478 + ], + "spans": [ + { + "bbox": [ + 107, + 444, + 504, + 478 + ], + "type": "text", + "content": "[41] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 107, + 486, + 506, + 520 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 486, + 506, + 520 + ], + "spans": [ + { + "bbox": [ + 107, + 486, + 506, + 520 + ], + "type": "text", + "content": "[42] Andreas Steiner, André Susano Pinto, Michael Tschannen, Daniel Keysers, Xiao Wang, Yonatan Bitton, Alexey Gritsanko, Matthias Minderer, Anthony Sherbondy, Shangbang Long, et al. Paligemma 2: A family of versatile vlms for transfer. arXiv preprint arXiv:2412.03555, 2024." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 107, + 529, + 506, + 563 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 529, + 506, + 563 + ], + "spans": [ + { + "bbox": [ + 107, + 529, + 506, + 563 + ], + "type": "text", + "content": "[43] Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. A corpus for reasoning about natural language grounded in photographs. arXiv preprint arXiv:1811.00491, 2018." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 107, + 571, + 506, + 605 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 571, + 506, + 605 + ], + "spans": [ + { + "bbox": [ + 107, + 571, + 506, + 605 + ], + "type": "text", + "content": "[44] Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. 
Emu: Generative pretraining in multimodality. arXiv preprint arXiv:2307.05222, 2023." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 107, + 613, + 504, + 648 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 613, + 504, + 648 + ], + "spans": [ + { + "bbox": [ + 107, + 613, + 504, + 648 + ], + "type": "text", + "content": "[45] Yutao Sun, Hangbo Bao, Wenhui Wang, Zhiliang Peng, Li Dong, Shaohan Huang, Jianyong Wang, and Furu Wei. Multimodal latent language modeling with next-token diffusion. arXiv preprint arXiv:2412.08635, 2024." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 107, + 657, + 504, + 678 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 657, + 504, + 678 + ], + "spans": [ + { + "bbox": [ + 107, + 657, + 504, + 678 + ], + "type": "text", + "content": "[46] Chameleon Team. Chameleon: Mixed-modal early-fusion foundation models. arXiv preprint arXiv:2405.09818, 2024." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 107, + 688, + 504, + 723 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 688, + 504, + 723 + ], + "spans": [ + { + "bbox": [ + 107, + 688, + 504, + 723 + ], + "type": "text", + "content": "[47] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Riviere, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024." + } + ] + } + ], + "index": 15 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 72, + 506, + 686 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 106, + 72, + 506, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 72, + 506, + 117 + ], + "spans": [ + { + "bbox": [ + 106, + 72, + 506, + 117 + ], + "type": "text", + "content": "[48] Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118, 2024." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 106, + 124, + 505, + 158 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 124, + 505, + 158 + ], + "spans": [ + { + "bbox": [ + 106, + 124, + 505, + 158 + ], + "type": "text", + "content": "[49] Keyu Tian, Yi Jiang, Zehuan Yuan, Bingyue Peng, and Liwei Wang. Visual autoregressive modeling: Scalable image generation via next-scale prediction. Advances in neural information processing systems, 37:84839-84865, 2024." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 106, + 164, + 506, + 199 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 164, + 506, + 199 + ], + "spans": [ + { + "bbox": [ + 106, + 164, + 506, + 199 + ], + "type": "text", + "content": "[50] Shengbang Tong, David Fan, Jiachen Zhu, Yunyang Xiong, Xinlei Chen, Koustuv Sinha, Michael Rabbat, Yann LeCun, Saining Xie, and Zhuang Liu. 
Metamorph: Multimodal understanding and generation via instruction tuning. arXiv preprint arXiv:2412.14164, 2024." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 106, + 205, + 506, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 205, + 506, + 239 + ], + "spans": [ + { + "bbox": [ + 106, + 205, + 506, + 239 + ], + "type": "text", + "content": "[51] Aaron Van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelCNN decoders. Advances in neural information processing systems, 29, 2016." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 107, + 245, + 504, + 269 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 245, + 504, + 269 + ], + "spans": [ + { + "bbox": [ + 107, + 245, + 504, + 269 + ], + "type": "text", + "content": "[52] Aäron Van Den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International conference on machine learning, pages 1747-1756. PMLR, 2016." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 275, + 505, + 310 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 275, + 505, + 310 + ], + "spans": [ + { + "bbox": [ + 107, + 275, + 505, + 310 + ], + "type": "text", + "content": "[53] Bryan Wang, Gang Li, Xin Zhou, Zhourong Chen, Tovi Grossman, and Yang Li. Screen2words: Automatic mobile ui summarization with multimodal learning. In The 34th Annual ACM Symposium on User Interface Software and Technology, pages 498-510, 2021." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 316, + 505, + 350 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 316, + 505, + 350 + ], + "spans": [ + { + "bbox": [ + 107, + 316, + 505, + 350 + ], + "type": "text", + "content": "[54] Chengyue Wu, Xiaokang Chen, Zhiyu Wu, Yiyang Ma, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, Chong Ruan, et al. Janus: Decoupling visual encoding for unified multimodal understanding and generation. arXiv preprint arXiv:2410.13848, 2024." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 357, + 506, + 400 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 357, + 506, + 400 + ], + "spans": [ + { + "bbox": [ + 107, + 357, + 506, + 400 + ], + "type": "text", + "content": "[55] Jinheng Xie, Weijia Mao, Zechen Bai, David Junhao Zhang, Weihao Wang, Kevin Qinghong Lin, Yuchao Gu, Zhijie Chen, Zhenheng Yang, and Mike Zheng Shou. Show-o: One single transformer to unify multimodal understanding and generation. arXiv preprint arXiv:2408.12528, 2024." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 408, + 505, + 443 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 408, + 505, + 443 + ], + "spans": [ + { + "bbox": [ + 107, + 408, + 505, + 443 + ], + "type": "text", + "content": "[56] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 107, + 449, + 506, + 493 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 449, + 506, + 493 + ], + "spans": [ + { + "bbox": [ + 107, + 449, + 506, + 493 + ], + "type": "text", + "content": "[57] Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789, 2(3):5, 2022." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 107, + 501, + 506, + 525 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 501, + 506, + 525 + ], + "spans": [ + { + "bbox": [ + 107, + 501, + 506, + 525 + ], + "type": "text", + "content": "[58] Qihang Yu, Ju He, Xueqing Deng, Xiaohui Shen, and Liang-Chieh Chen. Randomized autoregressive visual generation. arXiv preprint arXiv:2411.00776, 2024." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 107, + 531, + 505, + 565 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 531, + 505, + 565 + ], + "spans": [ + { + "bbox": [ + 107, + 531, + 505, + 565 + ], + "type": "text", + "content": "[59] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF international conference on computer vision, pages 11975-11986, 2023." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 107, + 571, + 505, + 605 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 571, + 505, + 605 + ], + "spans": [ + { + "bbox": [ + 107, + 571, + 505, + 605 + ], + "type": "text", + "content": "[60] Haozhe Zhao, Xiaojian Shawn Ma, Liang Chen, Shuzheng Si, Rujie Wu, Kaikai An, Peiyu Yu, Minjia Zhang, Qing Li, and Baobao Chang. Ultraedit: Instruction-based fine-grained image editing at scale. Advances in Neural Information Processing Systems, 37:3058-3093, 2024." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 107, + 612, + 506, + 647 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 612, + 506, + 647 + ], + "spans": [ + { + "bbox": [ + 107, + 612, + 506, + 647 + ], + "type": "text", + "content": "[61] Chunting Zhou, Lili Yu, Arun Babu, Kushal Tirumala, Michihiro Yasunaga, Leonid Shamis, Jacob Kahn, Xuezhe Ma, Luke Zettlemoyer, and Omer Levy. Transfusion: Predict the next token and diffuse images with one multi-modal model. arXiv preprint arXiv:2408.11039, 2024." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 107, + 653, + 506, + 686 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 653, + 506, + 686 + ], + "spans": [ + { + "bbox": [ + 107, + 653, + 506, + 686 + ], + "type": "text", + "content": "[62] Deyao Zhu, Jun Chen, Xiaogian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023." 
+ } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13439/696dfadf-883e-4109-9010-e095246a341a_content_list.json b/data/2025/2503_13xxx/2503.13439/696dfadf-883e-4109-9010-e095246a341a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..e686810bbdac7680d5bba228dce1396c725ec586 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13439/696dfadf-883e-4109-9010-e095246a341a_content_list.json @@ -0,0 +1,4966 @@ +[ + { + "type": "text", + "text": "Amodal3R: Amodal 3D Reconstruction from Occluded 2D Images", + "text_level": 1, + "bbox": [ + 165, + 130, + 831, + 152 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Tianhao Wu $^{1*}$ , Chuanxia Zheng $^{2\\dagger}$ , Frank Guan $^{3}$ , Andrea Vedaldi $^{2}$ , Tat-Jen Cham $^{1}$", + "bbox": [ + 174, + 179, + 818, + 199 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "$^{*}$ S-Lab, $^{1}$ Nanyang Technological University, $^{2}$ Visual Geometry Group, University of Oxford $^{3}$ Singapore Institute of Technology", + "bbox": [ + 132, + 205, + 864, + 244 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{tianhao001,astjcham}@ntu.edu.sg,{cxzheng,vedaldi} $@$ robots.ox.ac.uk Frank.guan@singaporetech.edu.sg", + "bbox": [ + 202, + 250, + 792, + 279 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/4143f763f0017e95d50b3bfa0df41273b2f73431d8dd3e70b30d7b6b03fd374b.jpg", + "image_caption": [ + "3D Recast", + "2D Input" + ], + "image_footnote": [], + "bbox": [ + 99, + 306, + 179, + 421 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/53928c56c2fe7023f935c8ed7e190c3c99585158a99df33f98e4c617f9a52d82.jpg", + "image_caption": [ + "Stochastic sample 1" + ], + "image_footnote": [], + "bbox": [ + 179, + 292, + 321, + 420 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/b2eac496a0db86dd84347dc3365cd86f4d9d9dbebf9db5f474bbce4cba6e3601.jpg", + "image_caption": [ + "Stochastic sample 2" + ], + "image_footnote": [], + "bbox": [ + 330, + 292, + 470, + 421 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/3f426092ee64dead895249bbba8131fb3b6e9802518102ab8dd167ce2d8b47aa.jpg", + "image_caption": [ + "2D Input" + ], + "image_footnote": [], + "bbox": [ + 478, + 311, + 614, + 415 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/9ead7c9cb53f7909b29e7f85b60b6bdac4b80a3a2f295f3707c7c8f057e70b5c.jpg", + "image_caption": [ + "In-the-wild Reconstruction" + ], + "image_footnote": [], + "bbox": [ + 616, + 303, + 756, + 386 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/465bfb5cb8402acef9e313ef6594a15529e8bea8756855a43fef3610bc64338a.jpg", + "image_caption": [ + "Stochastic sample 1" + ], + "image_footnote": [], + "bbox": [ + 616, + 386, + 686, + 417 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/11eb59d8c9fdfcd0e9dd496e456bfa47d8b2b36fe75963ece0988b80f5c87583.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 763, + 301, + 903, + 386 + ], + "page_idx": 0 + }, + 
{ + "type": "image", + "img_path": "images/340194d0841a1f92e679b6b0daba73136cb660484090381712dd57d67deb3dfc.jpg", + "image_caption": [ + "Stochastic sample 2" + ], + "image_footnote": [], + "bbox": [ + 763, + 386, + 834, + 417 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/a8eb7dcb0fe28ebd2f0dafc836463edc5beb06c4ddc6ccc58bd84a56089f2e40.jpg", + "image_caption": [ + "3D Scene Decomposition", + "2D Input", + "Figure 1. Example results of Amodal3R. Given partially visible objects within images (occluded regions are shown in black, visible areas in red outlines), our Amodal3R generates diverse semantically meaningful 3D assets with reasonable geometry and plausible appearance. We sample multiple times to get different results from the same occluded input. Trained on synthetic datasets, it generalizes well to real-scene and in-the-wild images, where most objects are partially visible, and reconstructs reasonable 3D assets." + ], + "image_footnote": [], + "bbox": [ + 106, + 450, + 220, + 540 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/ac961d2f9dbd923ec7b9abaa05db1e5574bddbc55d3e3a7e5686690e26040fe0.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 228, + 452, + 444, + 539 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/57e6957b2cf630b806a6c83f6eb4350841401464b6edcd0f59c622a62f1e78b9.jpg", + "image_caption": [ + "Decomposed 3D assets" + ], + "image_footnote": [], + "bbox": [ + 452, + 450, + 669, + 540 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/47b75ad456c10ac0a22005fe83b7b75c7a51c488ce3da0a89e9c6ba62d467e22.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 678, + 450, + 893, + 540 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 248, + 635, + 326, + 650 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Most image-based 3D object reconstructors assume that objects are fully visible, ignoring occlusions that commonly occur in real-world scenarios. In this paper, we introduce Amodal3R, a conditional 3D generative model designed to reconstruct 3D objects from partial observations. We start from a \"foundation\" 3D generative model and extend it to recover plausible 3D geometry and appearance from occluded objects. We introduce a mask-weighted multi-head cross-attention mechanism followed by an occlusion-aware attention layer that explicitly leverages occlusion priors to guide the reconstruction process. We demonstrate that, by training solely on synthetic data, Amodal3R learns to recover full 3D objects even in the presence of occlusions in real scenes. It substantially outperforms existing meth", + "bbox": [ + 86, + 666, + 485, + 878 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "ods that independently perform 2D amodal completion followed by 3D reconstruction, thereby establishing a new benchmark for occlusion-aware 3D reconstruction. See our project page https://sm0kywu.github.io/Amodal3R/.", + "bbox": [ + 511, + 637, + 906, + 699 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. Introduction", + "text_level": 1, + "bbox": [ + 513, + 724, + 643, + 739 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Humans possess a remarkable ability to infer the complete 3D shape and appearance of objects from single views, even when the objects are partly hidden behind occluders. 
This ability, namely amodal completion, is key to operating in complex real-world environments, where objects are often partially occluded by surrounding clutter. However, existing image-based 3D reconstruction models [5, 6, 24, 42, 49, 50, 67, 72, 73, 76, 78, 81, 88, 89] fail to recover full 3D assets when the object is partially occluded. We thus consider the problem of reconstructing 3D objects", + "bbox": [ + 509, + 750, + 906, + 902 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2503.13439v1 [cs.CV] 17 Mar 2025", + "bbox": [ + 22, + 260, + 57, + 705 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "†Project Lead.", + "bbox": [ + 109, + 886, + 187, + 898 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "1", + "bbox": [ + 493, + 925, + 503, + 935 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "from one or more partially-occluded views, a novel task that we call amodal 3D reconstruction.", + "bbox": [ + 89, + 90, + 482, + 119 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Amodal 3D reconstruction is a challenging task that requires reconstructing an object's 3D geometry and appearance while completing its occluded parts, both of which are highly ambiguous. Previous approaches to amodal 3D reconstruction [7, 56] have decomposed the task into 2D amodal completion [85, 86, 91], followed by conventional 3D reconstruction [42, 76, 89]. While these two-stage pipelines are easy to implement, they have some drawbacks. First, 2D amodal completion methods rely predominantly on appearance-based priors rather than explicit 3D structural cues. This lack of 3D geometric reasoning is suboptimal. Second, some 3D reconstruction methods can use or require multiple views to function. In this case, 2D amodal completion may lack multi-view consistency, particularly when it is performed independently for different views, which confuses the 3D reconstruction process.", + "bbox": [ + 89, + 122, + 482, + 364 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this paper, we introduce Amodal3R, a novel single-stage paradigm for amodal 3D reconstruction that surpasses previous state-of-the-art approaches (see Fig. 1). Amodal3R augments the \"foundation\" 3D generator TRELLIS [81] with an additional branch to condition on occlusions. Its key advantage is performing reconstruction, completion, and occlusion reasoning directly in a 3D latent space instead of using a two-stage scheme. In this way, the model can reconstruct both visible and occluded regions of the object coherently and plausibly. To adapt TRELLIS to amodal reconstruction, we introduce mask weighting in multi-head cross-attention and a new occlusion-aware layer. These guide the model to focus more on visible parts of the object without perturbing the pre-trained model too much.", + "bbox": [ + 89, + 364, + 482, + 575 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We evaluated the effectiveness of Amodal3R on datasets like Google Scanned Objects [16] and Toys4K [69] augmented with occlusions, on 3D scenes from Replica [70], and on real-world monocular images. 
Without relying on additional heuristics, Amodal3R achieves significantly superior performance compared to state-of-the-art models and generalizes well to different datasets, including real ones.", + "bbox": [ + 89, + 578, + 482, + 683 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In summary, our main contributions are as follows:", + "bbox": [ + 109, + 685, + 447, + 699 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- We propose a novel occlusion-aware 3D reconstruction model that directly reconstructs complete and high-quality 3D objects from one or more partially occluded views, without relying on 2D amodal completion models.", + "- We introduce a mask-weighted cross-attention mechanism and an occlusion-aware layer to inject occlusion awareness into an existing 3D reconstruction model, improving both the geometry and appearance of the reconstructed objects when they are partially occluded.", + "- We demonstrate via experiments on the 3D scene dataset Replica and real-world images that our one-stage pipeline significantly outperforms existing two-stage ones, establishing a new benchmark for amodal 3D reconstruction." + ], + "bbox": [ + 91, + 702, + 482, + 897 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Background", + "text_level": 1, + "bbox": [ + 513, + 89, + 640, + 107 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2D Amodal Completion. Recent advances in 2D amodal completion [2, 56, 83, 85, 86, 91] have achieved significant success in reconstructing occluded regions of objects in 2D images. While these methods show promise for 3D generation pipelines [2, 56], they still have limitations. First, 2D amodal completion models have limited 3D geometric understanding as they interpret images as 2D patterns. While excelling at completing textures, they may generate physically implausible geometries when handling complex occlusions. This stems from their lack of explicit 3D reasoning and reliance on 2D appearance priors, without true volumetric understanding. Second, for models that use multi-view images as input, the results of the 2D amodal completion are often inconsistent across views. Inconsistent views need to be corrected by the 3D reconstructor, which cause confusion (see Sec. 4). Although there has been significant work on multi-view consistent generation [7, 64, 72, 74, 82, 94], multi-view consistent completion is less explored. RenderDiffusion [1] contains an example, but the results are often blurry or lack details. This motivates our 3D-centric reconstruction framework.", + "bbox": [ + 511, + 114, + 903, + 430 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "3D Shape Completion. Several methods [12, 14, 71, 95] start from a partial 3D reconstruction, then complete it in 3D. They use encoder-decoder architectures [25] or volumetric representations [10, 30] to robustly recover 3D shape but not texture, which is necessary in many applications. They also still require recovering the partial 3D geometry from an occluded image, a challenge in itself. Furthermore, they ignore the input image when completing the object in 3D, disregarding the occlusion pattern that caused the 3D geometry to be recovered only partially. Recent work [11] utilizes the partially visible object as input specifically for 3D shape completion. However, it focuses solely on geometry reconstruction, without recovering the object's texture and appearance. 
In contrast, our approach is end-to-end, recovering the complete 3D shape and appearance of the object from the occluded image.", + "bbox": [ + 511, + 431, + 903, + 672 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "3D Generative Models. Early advancements in 3D generation are based on GANs [19], exploring various 3D representations such as point clouds [26, 40], voxel grids [84, 96], view sets [50, 57], NeRF [4, 5, 52, 62], SDF [18], and 3D Gaussian mixtures [79]. While GANs can capture complex 3D structures, they struggle with stability, scalability, and data diversity. The focus then shifted to diffusion models [23, 61, 68], which were more capable and versatile. They were first applied to novel view synthesis [77] in image space, before expanding to model a variety of 3D representations, including point clouds [46, 47, 51, 80, 93], voxel grids [38, 48], triplanes [66, 97], meshes [20, 44], and 3D Gaussian mixtures [9, 36, 39, 87]. Autoregressive models [8, 49, 67] have also been explored for mesh generation, focusing on improving the topology of the final 3D mesh.", + "bbox": [ + 511, + 674, + 903, + 900 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 493, + 924, + 504, + 935 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/46e0fb0ee3858047a89e9ac6e8e2762da5cdf37cc141d2d1a5005dc02c86b545.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 98, + 92, + 898, + 224 + ], + "page_idx": 2 + }, + { + "type": "image", + "img_path": "images/0dfbd6bc3d0e73132a8794d2bf334fd7a6b88689f397e20c56056d8c9a2dbaa5.jpg", + "image_caption": [ + "Figure 2. Overview of Amodal3R. Given an image as input and the regions of interest, Amodal3R first extracts the partially visible target object, along with the visibility and occlusion masks using an off-the-shelf 2D segmenter. It then applies DINOv2 [55] to extract features $\\pmb{c}_{dino}$ as additional conditioning for the 3D reconstructor. To enhance occlusion reasoning, each transformer block incorporates a mask-weighted cross-attention (via $\\pmb{c}_{vis}$ ) and occlusion-aware attention layer (via $\\pmb{c}_{occ}$ ), ensuring the 3D reconstructor accurately perceives visible information while effectively inferring occluded parts. For conditioning details, see Sec. 3.2." + ], + "image_footnote": [], + "bbox": [ + 98, + 228, + 898, + 349 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Authors have also shifted from performing diffusion in 2D image space [64, 72, 89] to 3D latent spaces [29, 53, 60, 65, 75, 81]. Such methods typically consist of two stages: the first for generating geometry and the second for generating appearance, and are capable of high-quality imaged-to-3D generation. However, they assume that input objects are fully visible, which limits their application to real-world scenes, where occlusions are frequent. Here, we extend such approaches to generate high-quality 3D assets from occluded input images — a challenging task that requires inferring complete 3D objects from partial information.", + "bbox": [ + 88, + 455, + 482, + 622 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Method", + "text_level": 1, + "bbox": [ + 89, + 633, + 181, + 648 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Consider an image $x$ containing a partially occluded view of an object of interest $o_i$ , we wish to reconstruct the complete 3D shape and appearance $y$ of the object $o_i$ . 
This task is thus conceptually similar to existing image-to-3D reconstruction, except that here the object is partially occluded instead of being fully visible in $x$ . We call this problem amodal 3D reconstruction.", + "bbox": [ + 88, + 657, + 482, + 763 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Here, we elaborate on Amodal3R (Fig. 2), a new method for amodal 3D reconstruction. Unlike recent two-stage methods [7, 56] that first perform 2D amodal completion followed by 3D reconstruction, Amodal3R is an end-to-end occlusion-aware 3D reconstruction model that generates complete shapes and multi-view geometry directly within the 3D space. Formally, Amodal3R is a conditional generator $v_{\\theta}(y|o_i,M_{\\mathrm{vis}},M_{\\mathrm{occ}})$ that takes as input the image $x$ , centered on the partially visible object $o_i$ , as well as the", + "bbox": [ + 88, + 763, + 482, + 901 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "visibility mask $M_{\\mathrm{vis}}$ and occlusion mask $M_{\\mathrm{occ}}$ . The visibility mask $M_{\\mathrm{vis}}$ marks the pixels of image $x$ containing the object $o_i$ , while $M_{\\mathrm{occ}}$ marks the pixels containing the oc-clusters, i.e., all other objects that potentially obscure $o_i$ . For real images, these masks can be efficiently obtained using pre-trained 2D segmentation models like Segment Anything [35]. By providing point coordinate prompts for the interest object $o_i$ and its occlusters respectively, the segmentation model can generate the corresponding masks. In cases where multiple occlusters are present or when occlusters fragment the visible area of the target object, a sequential process is employed. Specifically, point prompts for each visible/occluding region are provided to the 2D segmenter one at a time, with the model generating an individual mask for each region. These masks are then aggregated to form a comprehensive visibility/occlusion mask.", + "bbox": [ + 511, + 455, + 906, + 698 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "The challenges for Amodal3R are how to: (1) produce a plausible and complete 3D shape and appearance from partial observations, even in the absence of multi-view information; and (2) ensure 3D consistency in terms of geometry and photometry, maintaining seamless visual coherence between visible and generated regions.", + "bbox": [ + 511, + 698, + 908, + 789 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Preliminaries: TRELLIS", + "text_level": 1, + "bbox": [ + 511, + 801, + 743, + 816 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "We begin by briefly reviewing the TRELLIS [81] model on which our model is based. TRELLIS is a conditional 3D diffusion model that performs denoising in a sparse 3D latent space. First, it introduces a transformer-based variational autoencoder (VAE) $(\\mathcal{E},\\mathcal{D})$ , where the encoder $\\mathcal{E}$ maps", + "bbox": [ + 511, + 824, + 906, + 902 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 493, + 924, + 503, + 935 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "sparse voxel features to structured latents $\\mathbf{z}$ , and the decoder $\\mathcal{D}$ converts them into desired output representations, including 3D Gaussians [31], radiance fields, and meshes. 
In particular, a 3D object $\\mathbf{o}_i$ is encoded using its structured latent variables (SLAT) defined as $\\mathbf{z} = \\{(z_i, p_i)\\}_{i=1}^L$ , where $z_i \\in \\mathbb{R}^C$ is a local latent feature attached to the voxel at position $p_i \\in \\{0, 1, \\dots, N-1\\}^3$ , $N$ is the spatial resolution of the grid, and $L \\ll N^3$ is the number of active voxels intersecting the object's surface. This representation encodes both coarse geometric structures and fine appearance details by associating local latents with active voxels.", + "bbox": [ + 89, + 90, + 483, + 256 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "TRELLIS comprises two diffusion models, one to predict the active voxel centers $\\{\\pmb{p}_i\\}_{i=1}^L$ (stage 1) and the other to recover the corresponding latents $\\{\\pmb{z}_i\\}_{i=1}^L$ (stage 2). Each model can be viewed as a denoising neural network $v_\\theta$ operating in a latent space $\\ell$ , and trained to remove Gaussian noise $\\epsilon \\sim \\mathcal{N}(0, I)$ added to the latent code, i.e., $\\ell^{(t)} = (1 - t)\\ell^{(0)} + t\\epsilon$ , where $t \\in [0,1]$ is the noise level [43]. The denoising network $v_\\theta$ is trained to minimize the flow loss:", + "bbox": [ + 89, + 257, + 483, + 393 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\min _ {\\theta} \\mathbb {E} _ {(\\boldsymbol {\\ell} ^ {(0)}, x), t, \\epsilon} \\| v _ {\\theta} (\\boldsymbol {\\ell} ^ {(t)}, x, t) - (\\epsilon - \\boldsymbol {\\ell} ^ {(0)}) \\| _ {2} ^ {2}, \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 133, + 400, + 482, + 424 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $x$ is the image prompt. In stage 1, the latent code is a compressed version of the $N \\times N \\times N$ occupancy volume, where the spatial resolution is reduced from $N = 64$ to $r = 16$ . Hence, in this case the latent vector is a matrix $\\ell \\in \\mathbb{R}^{L' \\times C'}$ of $L' = r^3 = 4096C'$ -dimensional tokens. In stage 2, the latent code $\\ell = \\{z_i\\}_{i=1}^L \\in \\mathbb{R}^{L \\times C}$ is a matrix of $LC$ -dimensional tokens, where $L$ is now the number of active voxels. Similar transformers are implemented to the corresponding denoising networks $v_{\\theta}$ (Fig. 3). The conditioning image $x$ is read via cross-attention layers that pool information from the tokens $c_{\\mathrm{dino}}$ extracted by DINO [55] from the image $x$ .", + "bbox": [ + 89, + 431, + 483, + 613 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2. Mask-Conditional Generative Models", + "text_level": 1, + "bbox": [ + 89, + 622, + 416, + 636 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "The key change needed to fine-tune the generator to work with partially occluded images is to condition the transformers $v_{\\theta}$ not only on the image $x$ , but also on the masks $M_{\\mathrm{vis}}$ and $M_{\\mathrm{occ}}$ . A naive approach is to downsample or embed the masks to obtain tokens $(c_{\\mathrm{vis}}, c_{\\mathrm{occ}})$ that can be concatenated to the image tokens $c_{\\mathrm{dino}}$ , to be processed by cross-attention by the transformer as before. However, the image $x$ contains significantly more information than the binary masks $M_{\\mathrm{vis}}$ and $M_{\\mathrm{occ}}$ , so the transformer, which is initially trained to consider $x$ only, may simply ignore this information. 
This is compounded by the fact that learning to use this new information, which involves a new type of tokens that are incompatible with image ones, may require aggressive fine-tuning of the transformer.", + "bbox": [ + 89, + 643, + 483, + 854 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To sidestep this problem, inspired by masked attention approaches in 2D completion [92], we introduce mask-weighted cross-attention and an occlusion-aware attention", + "bbox": [ + 89, + 854, + 483, + 900 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/c28eb32c3ce9ca4c19b888186c109fcd0b6e8954a43fed553318ca76a53e098b.jpg", + "image_caption": [ + "(a) TRELLIS [81]", + "Figure 3. The transformer structure of Amodal3R. Compared with the original TRELLIS [81] design, we further introduce the mask-weighted cross-attention and occlusion-aware layer. It applies to both sparse structure and SLAT diffusion models." + ], + "image_footnote": [], + "bbox": [ + 517, + 95, + 666, + 304 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/e3ed020a61c74b01acd0ad7190934e06b7482437b4cce85de31b597a5ed49498.jpg", + "image_caption": [ + "(b) Amodal3R (Ours)" + ], + "image_footnote": [], + "bbox": [ + 674, + 89, + 903, + 304 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "layer to better utilize the visibility mask $M_{\\mathrm{vis}}$ and the occlusion mask $M_{\\mathrm{occ}}$ without disrupting the pre-trained 3D generator too much. These are described next.", + "bbox": [ + 511, + 412, + 903, + 455 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Mask-weighted Cross-Attention. A key novel component of our model is mask-weighted cross-attention, which allows the model to focus its attention on the visible parts of the object. We implement this mechanism in each attention block in the transformers $v_{\\theta}$ of Sec. 3.1. Given the latent tokens $\\ell \\in \\mathbb{R}^{L \\times C}$ input to a cross-attention layer as well as the image features $c_{\\mathrm{dino}} \\in \\mathbb{R}^{K \\times C'}$ of the partially visible object, cross-attention computes the similarity score matrix", + "bbox": [ + 511, + 457, + 906, + 579 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {q} = W _ {q} \\boldsymbol {\\ell}, \\quad [ \\boldsymbol {k}, \\boldsymbol {v} ] = W _ {k v} \\boldsymbol {c} _ {\\text {d i n o}}, \\quad S = \\boldsymbol {q} \\boldsymbol {k} ^ {\\top} / \\sqrt {D}, \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 531, + 585, + 903, + 606 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "where $W_{q}$ and $W_{kv}$ are learnable projections that map the latents $\\ell$ to the query $q$ and the conditioning image feature $c_{\\mathrm{dino}}$ to the key $k$ and the value $v$ , respectively, and $D$ is the dimension of the query and key vectors.", + "bbox": [ + 511, + 612, + 903, + 672 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We wish to bias the token similarity matrix $S \\in \\mathbb{R}^{L \\times K}$ towards the visible parts of the object. Recall that $K$ is the number of tokens $c_{\\mathrm{dino}}$ extracted by DINOv2 [54] from the occluded image $x \\cdot M_{\\mathrm{vis}}$ . Each token thus corresponds to a $P \\times P$ patch in the input image (where $P = 14$ ). 
We extract analogous flatten 1-D tokens $c_{\\mathrm{vis}} = [\\rho(M_{\\mathrm{vis},1}), \\dots, \\rho(M_{\\mathrm{vis},K})]$ from the visibility mask by calculating the fraction of $P \\times P$ pixels that are visible in the $j$ -th image patch $M_{\\mathrm{vis},j}$ . $c_{\\mathrm{vis},j} = \\rho(M_{\\mathrm{vis},j}) \\in [0,1]$ . We then use these quantities to bias the computation of the attention matrix $A = \\text{softmax}(S + \\log c_{\\mathrm{vis}}) \\in [0,1]^{L \\times M}$ via broadcasting. Hence:", + "bbox": [ + 511, + 672, + 906, + 854 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nA _ {i j} = \\frac {\\boldsymbol {c} _ {\\text {v i s} , j} \\exp \\left(S _ {i j}\\right)}{\\sum_ {k = 1} ^ {K} \\boldsymbol {c} _ {\\text {v i s} , k} \\exp \\left(S _ {i k}\\right)}. \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 607, + 862, + 903, + 898 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 493, + 924, + 504, + 935 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/5965ea718b9d135af93c8a920e517f6aac40214a9ac9d61495389ea154071baa.jpg", + "image_caption": [ + "Figure 4. 3D-consistent mask example. Given a 3D mesh, we render selected triangles in a distinct color from the others to generate multi-view consistent masks. It allows the evaluation of multi-view methods in handling contact occlusion. (The occluded regions are shown in red.)" + ], + "image_footnote": [], + "bbox": [ + 109, + 88, + 457, + 154 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In this manner, the visibility flag modulates the attention matrix $A$ smoothly. If there are no visible pixels in a patch $j$ , then $A_{ij} = 0$ , so the corresponding image tokens are skipped in cross attention. While we have illustrated how this works for a single head, in practice Amodal3R uses a multiple-head transformer, to which Eq. (3) extends trivially. Please see the supplementary materials for details.", + "bbox": [ + 89, + 260, + 483, + 366 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Occlusion-aware Attention Layer. In addition to encouraging the network to focus its attention on the visible part of the object, we also introduce an occlusion-aware attention layer. For amodal completion, it is not enough to specify the visible information; instead, we must also differentiate foreground occluders from the background, as this explicitly identifies the potential regions requiring completion. Namely, if a pixel is denoted as invisible in the mask $M_{\\mathrm{vis}}$ , this might be because there is an occluder in front of that pixel (so the pixel could have contained the object except due to occlusion), or because the pixel is entirely off the object. This information is encoded by the mask $M_{\\mathrm{occ}}$ .", + "bbox": [ + 89, + 367, + 483, + 547 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To allow the model to distinguish between visible, occluded and background areas, we add one more cross-attention layer placed immediately after the mask-weighted cross-attention layer, and pooling solely the occlusion mask $M_{\\mathrm{occ}}$ . In order to do so, we encode the occlusions mask $M_{\\mathrm{vis}}$ as a set of flatten 1-D tokens $c_{\\mathrm{occ}} = [\\rho(M_{\\mathrm{occ},1}), \\ldots, \\rho(M_{\\mathrm{occ},M})]$ , as before and then simply pool $c_{\\mathrm{occ}}$ using a cross-attention layer.", + "bbox": [ + 89, + 549, + 483, + 670 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.3. 
Simulating Occluded 3D Data", + "text_level": 1, + "bbox": [ + 89, + 681, + 356, + 696 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To train and evaluate our model, we require a dataset of 3D assets imaged in scenarios with clutter and occlusions. It is challenging to obtain such data in the real world, so we resorted to synthetic data and simulation to train our model.", + "bbox": [ + 89, + 703, + 482, + 762 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Random 2D Occlusions. In order to train our model, each training sample $(o,x,M_{\\mathrm{vis}},M_{\\mathrm{occ}})$ consists of a 3D object $o$ (from which ground truth latents can be obtained by using the encoders of Sec. 3.1), an image $x$ with partial occlusions, and corresponding visibility and occlusion masks $M_{\\mathrm{vis}}$ and $M_{\\mathrm{occ}}$ . In a real scenario, occlusions arise from other objects in the scene. In a multi-view setting, like the one discussed below, these occlusions need to be consistent across views, reflecting the underlying scene geometry.", + "bbox": [ + 89, + 763, + 483, + 901 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "try. However, because our model is trained for single-view reconstruction, we can simulate occlusions by randomly masking parts of the object after rendering it in 2D. This is simpler and allows us to generate fresh occlusion patterns each time a view is sampled for training.", + "bbox": [ + 511, + 90, + 903, + 167 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Thus, given $o$ and an image $x$ rendered from a random viewpoint, we generate random occlusion masks as follows. Inspired by work on 2D completion [27, 41, 58, 90], we randomly place lines, ellipses, and rectangles, simulating diverse masking patterns. The union of these random shapes gives us the occlusion mask $M_{\\mathrm{occ}}$ . Then, if $M_{\\mathrm{obj}}$ is the mask of the unoccluded object, the visible mask is given by $M_{\\mathrm{vis}} = M_{\\mathrm{obj}} \\odot (1 - M_{\\mathrm{occ}})$ . Examples of such patterns are given in the supplementary material.", + "bbox": [ + 511, + 167, + 905, + 303 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3D-consistent occlusions. In a real scene imaged from multiple viewpoints, occlusions are not random but consistent across views as they are caused by other objects. This is particularly true for contact occlusions, where part of an object remains occluded by another from all viewpoints. To evaluate the model's performance under such challenging conditions, 3D-consistent masks are required.", + "bbox": [ + 511, + 304, + 903, + 409 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "To efficiently generate such contact occlusion masks, we leverage 3D meshes during rendering. Starting from a randomly chosen triangle, we apply a random-walk strategy to iteratively select neighboring triangles, forming continuous occluded regions until the predefined mask ratio is met. By rendering these masked meshes using the same camera parameters as the RGB images, we ensure multi-view consistency in the generated occlusion masks (see Fig. 4).", + "bbox": [ + 511, + 410, + 905, + 531 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4. 
Reconstruction with Multi-view Input", + "text_level": 1, + "bbox": [ + 511, + 542, + 841, + 559 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Since our flow-based model performs multiple denoising steps and does not require known camera poses for input views, Amodal3R can accept multi-view reference images $\\mathcal{X} = \\{x_i\\}_{i=1}^N$ as conditioning inputs at different steps of the denoising process. While multi-view conditioning naturally improves reconstruction performance, a potential issue with such multi-view conditioning is that an image used earlier in the denoising process has a stronger influence on the final 3D geometry. This is because early denoising steps establish the coarse geometry, whereas later steps refine higher-frequency details [33, 34]. Therefore, we prioritize input images based on their visibility. Specifically, when experimenting with multi-view inputs, we sort the images in proportion to the object visibility $|M_{\\mathrm{vis}}|$ , ensuring that images with higher visibility are used as primary inputs.", + "bbox": [ + 511, + 565, + 906, + 792 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. Experiments", + "text_level": 1, + "bbox": [ + 511, + 806, + 645, + 824 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1. Experiment Settings", + "text_level": 1, + "bbox": [ + 511, + 832, + 705, + 848 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Datasets. Amodal3R is trained on a combination of 3D synthetic dataset: 3D-FUTURE (9,472 objects [17]), ABO (4,485 objects [13]), and HSSD (6,670 objects [32]). Once", + "bbox": [ + 511, + 854, + 905, + 901 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 493, + 924, + 503, + 935 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/b29270b02390ef686f91f13023c3fe889c64b63a45444e98460ec0d9fa2347a8.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<tr><td>Method</td><td>V-num</td><td>2D-Comp</td><td>FID ↓</td><td>KID(%) ↓</td><td>CLIP ↑</td><td>P-FID ↓</td><td>COV(%) ↑</td><td>MMD(‰) ↓</td></tr>&#13;
<tr><td>GaussianAnything [37]</td><td>1</td><td>pix2gestalt [56]</td><td>92.26</td><td>1.30</td><td>0.74</td><td>34.69</td><td>35.92</td><td>5.03</td></tr>&#13;
<tr><td>Real3D [28]</td><td>1</td><td>pix2gestalt [56]</td><td>91.21</td><td>2.02</td><td>0.75</td><td>23.92</td><td>19.61</td><td>9.21</td></tr>&#13;
<tr><td>TRELLIS [81]</td><td>1</td><td>pix2gestalt [56]</td><td>58.82</td><td>5.87</td><td>0.76</td><td>26.43</td><td>31.65</td><td>4.17</td></tr>&#13;
<tr><td>Amodal3R (Ours)</td><td>1</td><td>-</td><td>30.64</td><td>0.35</td><td>0.81</td><td>7.69</td><td>39.61</td><td>3.62</td></tr>&#13;
<tr><td>LaRa [6]</td><td>4</td><td>pix2gestalt [56]</td><td>172.84</td><td>4.54</td><td>0.70</td><td>66.34</td><td>24.56</td><td>8.11</td></tr>&#13;
<tr><td>LaRa [6]</td><td>4</td><td>pix2gestalt [56]+MV</td><td>97.53</td><td>2.63</td><td>0.75</td><td>21.80</td><td>26.21</td><td>8.61</td></tr>&#13;
<tr><td>TRELLIS [81]</td><td>4</td><td>pix2gestalt [56]</td><td>65.69</td><td>6.92</td><td>0.78</td><td>24.64</td><td>32.33</td><td>4.26</td></tr>&#13;
<tr><td>TRELLIS [81]</td><td>4</td><td>pix2gestalt [56]+MV</td><td>60.37</td><td>1.85</td><td>0.83</td><td>19.68</td><td>31.75</td><td>4.21</td></tr>&#13;
<tr><td>Amodal3R (Ours)</td><td>4</td><td>-</td><td>26.27</td><td>0.22</td><td>0.84</td><td>5.03</td><td>38.74</td><td>3.61</td></tr>&#13;
", + "bbox": [ + 93, + 88, + 906, + 250 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/7785c1f05f18d792d32248c31ceb5b4f2262fd60cff1f3bb586b7cc7a502d695.jpg", + "table_caption": [ + "Table 1. Amodal 3D reconstruction results on GSO [16]. Here, V-num denotes the number of input views, and 2D-Comp refers to the 2D completion method. For single-view image-to-3D, we first complete occluded objects using pix2gestalt [56] before passing them to various 3D models. For multi-view image-to-3D, we adopt two variants: 1) pix2gestalt [56] is applied independently on each view; 2) pix2gestalt $[56] + \\mathrm{MV}$ : a single-view completion is generated first, followed by multi-view diffusion [63] to synthesize 4 views as inputs. Without bells and whistles, Amodal3R consistently outperforms state-of-the-art models across all evaluation metrics." + ], + "table_footnote": [], + "table_body": "
<tr><td>Method</td><td>V-num</td><td>2D-Comp</td><td>FID ↓</td><td>KID(%) ↓</td><td>CLIP ↑</td><td>P-FID ↓</td><td>COV(%) ↑</td><td>MMD(‰) ↓</td></tr>&#13;
<tr><td>GaussianAnything [37]</td><td>1</td><td>pix2gestalt [56]</td><td>57.17</td><td>1.22</td><td>0.80</td><td>21.97</td><td>33.56</td><td>7.23</td></tr>&#13;
<tr><td>Real3D [28]</td><td>1</td><td>pix2gestalt [56]</td><td>59.92</td><td>1.63</td><td>0.79</td><td>23.31</td><td>24.35</td><td>9.53</td></tr>&#13;
<tr><td>TRELLIS [81]</td><td>1</td><td>pix2gestalt [56]</td><td>43.05</td><td>6.83</td><td>0.80</td><td>26.04</td><td>26.28</td><td>6.87</td></tr>&#13;
<tr><td>Amodal3R (Ours)</td><td>1</td><td>-</td><td>23.45</td><td>0.42</td><td>0.83</td><td>5.00</td><td>37.09</td><td>5.89</td></tr>&#13;
<tr><td>LaRa [6]</td><td>4</td><td>pix2gestalt [56]</td><td>123.52</td><td>3.61</td><td>0.75</td><td>45.91</td><td>27.89</td><td>9.67</td></tr>&#13;
<tr><td>LaRa [6]</td><td>4</td><td>pix2gestalt [56]+MV</td><td>75.33</td><td>4.14</td><td>0.80</td><td>13.00</td><td>24.82</td><td>10.93</td></tr>&#13;
<tr><td>TRELLIS [81]</td><td>4</td><td>pix2gestalt [56]</td><td>46.34</td><td>8.77</td><td>0.81</td><td>28.76</td><td>25.35</td><td>7.13</td></tr>&#13;
<tr><td>TRELLIS [81]</td><td>4</td><td>pix2gestalt [56]+MV</td><td>43.00</td><td>7.53</td><td>0.81</td><td>24.41</td><td>26.55</td><td>7.05</td></tr>&#13;
<tr><td>Amodal3R (Ours)</td><td>4</td><td>-</td><td>20.93</td><td>0.50</td><td>0.85</td><td>3.78</td><td>39.03</td><td>5.75</td></tr>&#13;
", + "bbox": [ + 93, + 337, + 906, + 498 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 2. Amodal 3D reconstruction results on Toys4K [69]. The experimental setting is the same to Tab. 1, except for the dataset.", + "bbox": [ + 107, + 502, + 885, + 517 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Amodal3R is trained, we first assess its effectiveness on Toys4K (randomly sampling 1,500 objects [69]) and Google Scanned Objects (GSO) (1,030 objects [16]), which are excluded from our training data for our and the baseline model. During inference, a 3D-consistent mask occludes each input object, and each view is augmented with additional randomly generated occlusion areas. This ensures that the model cannot directly extract the region required to be completed from the occlusion regions. To further assess the out-of-distribution generalization ability in practical applications, we also evaluate all models on the 3D scene dataset Replica [70] as well as on in-the-wild images. The training and evaluation dataset will be released.", + "bbox": [ + 88, + 542, + 482, + 741 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Metrics. To measure the quality of the rendered images, we use the Fréchet Inception Distance (FID) [21] and the Kernel Inception Distance (KID) [3]. To measure the quality of the reconstructed 3D geometry, we use the Coverage Score (COV), the Point cloud FID (P-FID) [51], and the Minimum Matching Distance (MMD) using the Chamfer Distance (CD). COV measures the diversity of the results and P-FID and MMD the quality of the 3D reconstruction. We also use the CLIP score [59] to evaluate the consistency between each pair generated and ground-truth objects.", + "bbox": [ + 88, + 750, + 482, + 902 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Baselines. Most 3D generative models are trained on complete object inputs. To ensure fair comparisons, we complete the partially visible 2D objects before passing them to 3D generators. Sepecifically, we use pix2gestalt [56], a state-of-the-art 2D amodal completion network. Using this protocol, we compare Amodal3R to state-of-the-art methods such as TRELLIS [81], Real3D [28], GaussianAnything [37]ICLR'25, and LaRa [6]ECCV'24.", + "bbox": [ + 511, + 542, + 906, + 666 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Implementation Details. Amodal3R is trained on 4 A100 GPUs (40G) for 20K steps with a batch size of 16, taking approximately a day. Following TRELLIS [81], we implement classifier-free guidance (CFG [22]) with a drop rate of 0.1 and AdamW [45] optimizer with a learning rate of 1e-4. More details are provided in the supplementary material.", + "bbox": [ + 511, + 666, + 908, + 758 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4.2. Main Results", + "text_level": 1, + "bbox": [ + 511, + 770, + 653, + 786 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Quantitative Results. We compare Amodal3R to the state-of-the-art for amodal 3D reconstruction in Tabs. 1 and 2. Amodal3R significantly outperforms the baselines across all metrics in both single- and multi-view 3D reconstruction, demonstrating its effectiveness. 
Interestingly, baselines that use multiple but potentially inconsistently-completed views (\"4 V-num + pix2gestalt\") are worse than", + "bbox": [ + 511, + 794, + 906, + 902 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 493, + 925, + 504, + 936 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/4b8d63cfeb365c168c06a219dc90ea157ceb77c53fd03485af894f8574f4b1fa.jpg", + "image_caption": [ + "Input" + ], + "image_footnote": [], + "bbox": [ + 99, + 88, + 176, + 189 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/344c6fd2fe665ca0ce9a8f9179dbb7878728c433a2dd43bd58d637e4948f50cf.jpg", + "image_caption": [ + "pix2gestalt [56]" + ], + "image_footnote": [], + "bbox": [ + 187, + 90, + 241, + 188 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/16e94e10bf7663bc001757c6ebe65f1b7b94567cdffeb0398ac46bf6e56d3e57.jpg", + "image_caption": [ + "GaussianAnything [37]" + ], + "image_footnote": [], + "bbox": [ + 263, + 90, + 330, + 186 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/5a1da316ffe4ed72d7be02956ca4e03b0d234d5db15a8162959d54e1d1e6dfe8.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 334, + 90, + 401, + 186 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "nything [37]", + "bbox": [ + 334, + 191, + 398, + 203 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/ddc7a6df708c27a38eaf717a8aec85c34b164d6a97f8e0d778beae5ec5f9f310.jpg", + "image_caption": [ + "Real3D [28]" + ], + "image_footnote": [], + "bbox": [ + 421, + 90, + 480, + 186 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/53065c4743828b25fc578e7b842aa49005b69b45655eb966cf5d59c062cdae21.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 493, + 90, + 555, + 186 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/3bca9cdae6c37dd6588bf565d6e9080fbded9ce3398cf90ae1e0e62214b9d844.jpg", + "image_caption": [ + "TRELLIS [81]" + ], + "image_footnote": [], + "bbox": [ + 586, + 90, + 633, + 185 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/4aae53f2a0e5edd86fb299ae111186e8b48fbc9ea078f0c202ed0c0010334b89.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 643, + 90, + 709, + 185 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/ff797a21c781d899dc09562ba90d6384c222e9ad2c7f5bfd1d718e730abed118.jpg", + "image_caption": [ + "Amodal3R (Ours)" + ], + "image_footnote": [], + "bbox": [ + 740, + 90, + 803, + 186 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/24633556114d5a1d318f35be6d2097f51e2e72d27af2b872c7713a407b004ee8.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 818, + 90, + 880, + 186 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/fc8f85350f0e48cf02df3b0419da21402ace9dd34d2f514b8e2615466d2415fd.jpg", + "image_caption": [ + "Figure 5. Single-view amodal 3D reconstruction. The occlusion regions are shown in black and the visible regions are highlighted with red outlines. More examples are provided in supplementary material Fig. C.4.", + "Input", + "Figure 6. Multi-view amodal 3D reconstruction. The above results are reconstructed using 4 occluded input views. Due to limited space, we present only the best results for LaRa and TRELLIS under the \"pix2gestalt+MV\" setting. We apply 3d-consistent mask and random extended occlusion areas. More examples are provided in supplementary material Fig. C.5." 
+ ], + "image_footnote": [], + "bbox": [ + 91, + 247, + 194, + 351 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/f7805e3946002aa68a40022a399e5caa06e504fa500c83933080db55dfd6dab4.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 212, + 251, + 279, + 290 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/5339ec554482daf2e2b186da5f98bd2ba1dbad05ab5f4ea3186b550e61a5e906.jpg", + "image_caption": [ + "pix2gestalt [56]" + ], + "image_footnote": [], + "bbox": [ + 222, + 303, + 254, + 344 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/a50ea849adc47903143abd5e8b28edbcaeb90432984776e24f975dad521fb428.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 313, + 252, + 385, + 289 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/b5e9f37b71f7f904705327927ff5b42973efda9d537d57fd5fc2f8b77fdd81d0.jpg", + "image_caption": [ + "LaRa [6]" + ], + "image_footnote": [], + "bbox": [ + 312, + 300, + 380, + 340 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/eb1459a55bc21480741609bfe8d472593a3382ef284c65f29b396b9a855330cc.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 400, + 252, + 470, + 286 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/08c973d3912e5936e18998074a07045d201ec77a25069277dd7ae43a3791ac01.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 400, + 303, + 470, + 339 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/c3ed58be125412bbae5a4b13f4510c85405227759cdedbec02d084f4d8e353fb.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 514, + 253, + 586, + 295 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/0ae607b0ed818e015cd57d7087130d6ce02033f15d780990f3987937fd9ce03c.jpg", + "image_caption": [ + "TRELLIS [81]" + ], + "image_footnote": [], + "bbox": [ + 514, + 299, + 584, + 340 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/41d1151c236a7cec6d230fea72a0f2caa1d9a5274fe82667022ece097f847a36.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 607, + 253, + 676, + 292 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/76f887eb88ebf2c6160d9c327f4bf15d0edfd97f184077642698e01d28a2a6d7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 607, + 303, + 678, + 339 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/8b6750b249027312a8ede14ba56685d18050f471e8c5276fb89b63ce7f37a3ff.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 715, + 252, + 790, + 295 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/2214f9535709da7a66e819a76fed68d43724677f6fb237c4c0f16ae40cce9c3b.jpg", + "image_caption": [ + "Amodal3R (Ours)" + ], + "image_footnote": [], + "bbox": [ + 715, + 296, + 792, + 345 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/af774c2718c4ddc1604bb1329c96d6898f06b93c429021097ad4f2c16fd21cee.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 816, + 253, + 883, + 295 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/a42058f2e19492d3aa48221c2c95e3d28f0636c89d280016eca2cd55950ba8ec.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 813, + 299, + 888, + 342 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/dbff7913cd8a313d4d155aa78cbab97d8e0023cfcee1bef29ec045ca5ba7f12b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 91, + 
434, + 160, + 481 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/170a8c5842860fb67710a18f8d7e53ca8d1ee22168712426fc88da4a3c3f7e2d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 93, + 481, + 156, + 532 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/35a7947758f0102a26fee60a36285b0655e2f366cddf71340a6b72a4c2391df8.jpg", + "image_caption": [ + "Input", + "' Naive concatenation", + "Figure 7. Ablation examples. Naive concatenation fails to reconstruct complete shape and appearance. Mask-weighted attention alone extends geometry into background regions, while occlusion-aware attention alone cannot guarantee photorealistic appearance." + ], + "image_footnote": [], + "bbox": [ + 94, + 532, + 158, + 580 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/7c81d0cddabd718eb9edf4e8e3b2a8a16407800d75d548cd2907a4c38bd9bea0.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 173, + 441, + 225, + 482 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/4ca588e11862304c6a919e4a6f6fa6e8bbaae77b1fe6d06e08bd164094c6c0e9.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 165, + 496, + 227, + 523 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/28ce4b6da60edc15f838a61563cd751ed3180e19828fe1f64678481295a8197b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 173, + 537, + 218, + 571 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/e87e5b38346fb07edb29b77cd1f21a4dd50ca17335e4996826642195b95ff208.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 238, + 434, + 305, + 489 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/33fe9e4da816b749ece4560cd589d4bea0c510d1f66b97c80af08c91a5959dc2.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 238, + 492, + 313, + 526 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/282a12ceadacfc33dc7090398b24cbde1b529ccb2a9de736439c241daada2ff2.jpg", + "image_caption": [ + "w/only $M_{\\mathrm{vis}}$ attention" + ], + "image_footnote": [], + "bbox": [ + 254, + 537, + 299, + 571 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/a475ad7bf12abea80cb0ba9edd5f93c6aa0826811af10dc901c047e0cd304868.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 333, + 441, + 385, + 484 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/45e13209ef78b48205eadcb06c0615e6bc18a41dd4a749ac5ded2f055c6e65f0.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 330, + 497, + 387, + 523 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/e3167b745021840cfe0c3aeba487e726a61f5d1ee451d65b929016dff2e6ffc9.jpg", + "image_caption": [ + "w/only $M_{\\mathrm{occ}}$ attention" + ], + "image_footnote": [], + "bbox": [ + 338, + 537, + 385, + 575 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/77d9f6728e5665e1306318e198443c8a1031444766ad537fd7d9d609d84da3dd.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 413, + 440, + 467, + 484 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/0292327dff33db77e2f43cfdcf0aca252b42b48e115c1bbff578282a1e557d98.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 413, + 494, + 470, + 523 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/40ac8e5cd597b91c59eb1ea0902c1c0bc39caba18c756f0f933280933cf02a9c.jpg", + "image_caption": [ + "Full model" + ], + 
"image_footnote": [], + "bbox": [ + 421, + 537, + 462, + 573 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "using a single completed view (\"1 V-num + pix2gestalt\"). This shows that inconsistent 2D completion does confuse reconstruction models to the point that using a single view is preferable (Sec. 2). This issue does not affect Amodal3R as it does not rely on 2D completion; in our case, utilizing more occluded views does improve the results as expected.", + "bbox": [ + 89, + 699, + 482, + 790 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Qualitative Results. The qualitative results are shown in Figs. 5 and 6 and in supplementary material Figs. C.4 and C.5. For all baselines, pix2gestalt is first applied for 2D amodal completion (second column), and the completed images are passed to each baseline image-to-3D model. Amodal3R produces high-quality 3D assets even under challenging conditions in both single-view and multi-view", + "bbox": [ + 89, + 794, + 483, + 900 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/c24c5b76fb282ec366c012952b591f859bcb5ba2288c1d1aee8b47f6b51fed93.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
<table><thead><tr><th>Method</th><th>FID ↓</th><th>KID(%) ↓</th><th>COV(%) ↑</th><th>MMD(‰) ↓</th></tr></thead><tbody>
<tr><td>naive conditioning</td><td>31.96</td><td>0.49</td><td>37.96</td><td>3.61</td></tr>
<tr><td>w/ only mask-weighted attention</td><td>30.53</td><td>0.38</td><td>36.90</td><td>3.69</td></tr>
<tr><td>w/ only occlusion-aware layer</td><td>31.77</td><td>0.57</td><td>40.19</td><td>3.51</td></tr>
<tr><td>full model (Ours)</td><td>30.64</td><td>0.35</td><td>39.61</td><td>3.62</td></tr></tbody></table>
", + "bbox": [ + 516, + 435, + 905, + 503 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 3. Ablations on different mask conditioning designs. The Best and the second best results are highlighted.", + "bbox": [ + 513, + 508, + 905, + 536 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "image-to-3D tasks. In contrast, 2D amodal completions inconsistencies accumulate as more views are added, particularly when pix2gestalt is more uncertain, confusing the reconstructor models downstream. For instance, in the giraffe example in Fig. 5, the pix2gestalt completion fails to capture the overall structure of the 3D object correctly, which in turn leads the 3D generator models to misinterpret it as a woodstick-like shape. In contrast, Amodal3R reconstructs the 3D geometry and appearance well, with good alignment to the occluded inputs. These findings highlight not only the effectiveness of Amodal3R but also the advantage of completing objects while reconstructing them, which avoids relying on monocular 2D completion models that may introduce inconsistencies.", + "bbox": [ + 511, + 564, + 906, + 773 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.3. Ablation Study", + "text_level": 1, + "bbox": [ + 511, + 786, + 666, + 801 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We conducted several ablations to study the impact of the different components of Amodal3R and report the results in Tab. 3 and Fig. 7. For these experiments, we test single-view image-to-3D reconstruction on the GSO dataset.", + "bbox": [ + 511, + 809, + 905, + 869 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Naive Conditioning. We first evaluated a version of the model that still conditions the reconstruction on the visi", + "bbox": [ + 511, + 869, + 905, + 898 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 493, + 924, + 503, + 935 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/200397b453ee819ae7dd4f2ea646793e770a7310034f8a2f32d950b559d1aaad.jpg", + "image_caption": [ + "Occluded input" + ], + "image_footnote": [], + "bbox": [ + 101, + 87, + 189, + 286 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/7063db8c5a5bd2b6301c906bc345c66adcbd53290a21cbd3959582d1ae385ba6.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 205, + 113, + 254, + 138 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/0b5a2809eb2eb70a683759fa3294f926e8118b6f3f5bc2e8298be8c0dda5acf6.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 205, + 180, + 254, + 194 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/10bb96547372de9b4f65d4eaf0c6141fbc7f0c222a00a2e942f4714b00276065.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 205, + 234, + 250, + 276 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/ba5eb638fabfa1d44f967225d2a2705ce9672b003cd7ee8dbbffca2815b08da6.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 295, + 112, + 343, + 138 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/350c99a7dd911ccf593f469842074ab9d5ce521a997ceaef24354976106ec05f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 292, + 180, + 343, + 198 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/76ab33323ee953b7484b82fd71034ba1c0694d70c026740e58f0e37071d04d7b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 300, + 229, + 336, + 276 + ], + 
"page_idx": 7 + }, + { + "type": "image", + "img_path": "images/e9fc53c83e64d69d56473b13d9cf025038ffb0a22ca274e9c7cc9d215834edbd.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 369, + 98, + 485, + 142 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/be3264b8b941c2ab0f81fe0ad141d6ac5078eec8d0feb03fdac2e59db115630d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 370, + 174, + 483, + 209 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/76822cbfe9df726693fb6a192713b4c26a42092d8f023015bcff394b26c0e8ba.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 380, + 222, + 475, + 276 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/be50f897dc7b5833fd3799b532973036dc6f43eca836308d1eae730060cf5386.jpg", + "image_caption": [ + "pix2gestalt [56]" + ], + "image_footnote": [], + "bbox": [ + 369, + 297, + 467, + 369 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/bc2fc0f147d342da6fc5086cdbc462ef9a6133e193a57d59e3342584957f4600.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 393, + 364, + 467, + 416 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Novel views", + "bbox": [ + 348, + 422, + 416, + 434 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/3f0690db07f3e2de1dcb4539f91f6fd2da780fcf0b23ecdd19b09f10c4a2b482.jpg", + "image_caption": [ + "Real3D [28]" + ], + "image_footnote": [], + "bbox": [ + 470, + 297, + 545, + 366 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/6be0ab649a90c7e78d3f937affcceda47d57a38310d041186c17d451bbd7fc15.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 472, + 364, + 545, + 416 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/8c88269e51c66fd1d7a2f8e4eee5496f8dd65a9c22e0427e65b3071ee782d987.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 496, + 98, + 611, + 146 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/e9461051aa711aa2a02f6151e859e9065c3cc6830a48ca9819d07d694df23137.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 496, + 174, + 607, + 203 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/9887c95bebc5c836fb6365b6d00fa9131761303e1f18801193509e0736ae6d6d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 514, + 227, + 601, + 277 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/5b3325bbcf41da6807dfe91632d5b72309bb8f548f7117299add63bfb36f0dd5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 625, + 98, + 730, + 150 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/0436f7b4f917ac40bc1178bcecd4d2507c73de79c39d3247c622f3b88ec8dab8.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 622, + 174, + 738, + 208 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/de8e3973fc101975ccc4d55e99daa0e75ecdf0ffe76d175c3a4eb8f66839420b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 624, + 234, + 738, + 266 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/16074b541591a6c0599039739318680c096fc55bc1bd6d6c758d7914adccda39.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 759, + 95, + 883, + 157 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/736e8cf2151488d8fa503ff4669c5f471d9197d18f871258fe25745f54af420e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 759, + 174, + 
892, + 208 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/fc1c7b906106d3c2535af2b3ff68f48d36973bf0713210e9b5530f249970eedb.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 759, + 233, + 898, + 268 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/958925104b6a6de5fb02fda3c3d1e28e93fccfbdbb89d9f987c768b2ff365f3a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 102, + 297, + 187, + 359 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/a50c8ecd263a6364cf47396dc2a5a0e1535a8c6165bebb4a6e03063e8a105a8c.jpg", + "image_caption": [ + "Occluded input", + "Figure 8. Examples on Replica [70] and in-the-wild images. The target objects and the occluders are shown in red and green outlines." + ], + "image_footnote": [], + "bbox": [ + 102, + 362, + 187, + 422 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/45e6b6daacb0e381376c64d91c368b9d73b9f823fb4c0240369ddaa998806e7c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 223, + 310, + 294, + 361 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/17048f440df24527f3766f08317540eebaa019d762cb04164ad02bdc06cb028d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 225, + 368, + 292, + 416 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/800958d49af741babbe642e9558420b3494187539a3e11efe91645f4be6f77ef.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 305, + 311, + 375, + 362 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/f7b7f66e83d2bf216ea9dbe2aa9de530f935d25f68b932e37f802f1b1eaa07a4.jpg", + "image_caption": [ + "Novel views" + ], + "image_footnote": [], + "bbox": [ + 303, + 367, + 372, + 416 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "", + "bbox": [ + 393, + 299, + 465, + 369 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "", + "bbox": [ + 395, + 364, + 467, + 416 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/3ca9dcb41744fa7472a5dc51f67271f0714f4d2f4c1467b5e01052836b1d0e0e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 576, + 310, + 645, + 359 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/3a243e98dffc9f87e7ad296a48b11c97595783f58d9893880df71e77bb952838.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 578, + 366, + 645, + 416 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/f1d752d3d5e77e56be8c66c9c56e72dffb7bfdcbfe2586e72410d9c2b8500fe0.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 656, + 310, + 725, + 361 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/2dbda0b4fbc6011b286ddeb6f2034aeffe7b9797fc27cc220c8d72b5abde4352.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 651, + 366, + 720, + 416 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/4ef4fb4a6a9188008e9ff35804e9704402f4e8a130e42e4a5758ef9c1d7656a5.jpg", + "image_caption": [ + "TRELLIS [81]", + "Amodal3R (Ours)" + ], + "image_footnote": [], + "bbox": [ + 735, + 299, + 883, + 369 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/789951e6b632a16df3aa69afad044cfe8b02196fc2f0ae28e80fc511b5fcdc94.jpg", + "image_caption": [ + "Normal maps" + ], + "image_footnote": [], + "bbox": [ + 735, + 369, + 883, + 417 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "bility and occlusion masks, but without using the modules of Sec. 3.2. 
Instead, we directly concatenate the tokens $c_{\\mathrm{vis}}$ to $c_{\\mathrm{dino}}$ to form the condition for the cross-attention layer. The results (first row in Tab. 3 and second column in Fig. 7) show that, while the resulting model can still perform basic completion, the textures in the occluded regions are inconsistent with those in the visible ones, and the reconstructed geometry is inaccurate, e.g. the hole in the shoe.", + "bbox": [ + 89, + 481, + 482, + 602 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Mask-weighted Attention. To evaluate the effectiveness of our proposed mask-weighted attention mechanism, we integrate it into the training while omitting the occlusion-aware layer. The results demonstrate improved rendering quality — especially in capturing texture details in the visible areas — and significantly enhanced appearance consistency. However, the geometries exhibit deficiencies, as seen in the problematic shoe in Fig. 7 (first row), and the toy monster with a broken back (third row).", + "bbox": [ + 89, + 614, + 483, + 751 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Occlusion-aware Layer. The integration of the occlusion-aware layer improves the geometry both quantitatively and qualitatively. This aligns with our motivation for introducing an additional cross-attention layer, aimed at reconstructing the visible areas via the image-conditioned layer and occluded areas via the subsequent layer. However, occlusion-aware layer alone still results in unsatisfactory appearances, which again indirectly attests to the effectiveness of mask-weighted attention mechanism.", + "bbox": [ + 89, + 763, + 483, + 900 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Full Model. Consequently, the full model integrating both modules achieves optimal 3D generation results characterized by complete geometry and consistent textures.", + "bbox": [ + 511, + 481, + 905, + 527 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.4. Real-World Generation / Completion", + "text_level": 1, + "bbox": [ + 511, + 536, + 834, + 553 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Amodal3R is inherently superior in generalizing to out-of-distribution amodal 3D reconstruction, primarily due to the fact that we build upon the model on a \"foundation\" 3D generator, and fine-tune it with diverse categories. To demonstrate this advantage, we conduct scene-level dataset evaluations: Replica [70] (first two rows) and on in-the-wild images (3rd-5th rows) in Fig. 8. Here we adopt Segment Anything [35] to get the visibility and occlusion masks. The results show that Amodal3R generates reasonable 3D assets, whereas pix2gestalt fails to infer complete shapes from the same inputs, leading to unsatisfactory results from subsequent image-to-3D models. We also visualize the colored normal maps, which show that the results of Amodal3R are simultaneously rich in geometric and textural detail.", + "bbox": [ + 511, + 558, + 906, + 771 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5. Conclusion", + "text_level": 1, + "bbox": [ + 511, + 784, + 633, + 800 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We have introduced Amodal3R, a novel approach to reconstruct complete 3D shape and appearance from partially visible 2D images. The key to our success is that we construct mask-weighted cross-attention mechanism and occlusion-aware layer to effectively exploit visible and occluded information. 
Compared to the state-of-the-art methods that rely", + "bbox": [ + 511, + 809, + 906, + 902 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 493, + 924, + 503, + 935 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "on sequential 2D completion followed by 3D generation, our Amodal3R achieves remarkably better performance by operating directly in 3D space. Furthermore, results on inthe-wild images indicate its potential for subsequent applications in 3D decomposition and scene understanding, marking a step towards robust 3D asset reconstruction in real-world environments with complex occlusion.", + "bbox": [ + 89, + 90, + 485, + 196 + ], + "page_idx": 8 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 91, + 208, + 187, + 224 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Titas Anciukevicius, Zexiang Xu, Matthew Fisher, Paul Henderson, Hakan Bilen, Niloy J Mitra, and Paul Guerrero. Renderdiffusion: Image diffusion for 3d reconstruction, inpainting and generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12608-12618, 2023. 2", + "[2] Jiayang Ao, Yanbei Jiang, Qiuhong Ke, and Krista A Ehinger. Open-world amodal appearance completion. arXiv preprint arXiv:2411.13019, 2024. 2", + "[3] Mikołaj Binkowski, Danica J Sutherland, Michael Arbel, and Arthur Gretton. Demystifying mmd gans. arXiv preprint arXiv:1801.01401, 2018. 6, 2", + "[4] Eric R Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 5799-5809, 2021. 2", + "[5] Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J Guibas, Jonathan Tremblay, Sameh Khamis, et al. Efficient geometry-aware 3d generative adversarial networks. In CVPR, 2022. 1, 2", + "[6] Anpei Chen, Haofei Xu, Stefano Esposito, Siyu Tang, and Andreas Geiger. Lara: Efficient large-baseline radiance fields. In European Conference on Computer Vision, pages 338-355. Springer, 2024. 1, 6, 7, 5", + "[7] Minghao Chen, Roman Shapovalov, Iro Laina, Tom Monnier, Jianyuan Wang, David Novotny, and Andrea Vedaldi. Partgen: Part-level 3d generation and reconstruction with multi-view diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. 2, 3", + "[8] Yiwen Chen, Tong He, Di Huang, Weicai Ye, Sijin Chen, Ji-axiang Tang, Xin Chen, Zhongang Cai, Lei Yang, Gang Yu, et al. Meshanything: Artist-created mesh generation with autoregressive transformers. arXiv preprint arXiv:2406.10163, 2024. 2", + "[9] Yuedong Chen, Chuanxia Zheng, Haofei Xu, Bohan Zhuang, Andrea Vedaldi, Tat-Jen Cham, and Jianfei Cai. Mvsplat360: Feed-forward 360 scene synthesis from sparse views. In Neural Information Processing Systems (NeurIPS), 2024. 2", + "[10] Yen-Chi Cheng, Hsin-Ying Lee, Sergey Tulyakov, Alexander G Schwing, and Liang-Yan Gui. Sdfusion: Multimodal 3d shape completion, reconstruction, and generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4456-4465, 2023. 2", + "[11] Junhyeong Cho, Kim Youwang, Hunmin Yang, and TaeHyun Oh. 
Robust 3d shape reconstruction in zero-shot from" + ], + "bbox": [ + 93, + 233, + 483, + 900 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "a single image in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. 2", + "[12] Ruihang Chu, Enze Xie, Shentong Mo, Zhenguo Li, Matthias Nießner, Chi-Wing Fu, and Jiaya Jia. Diffcomplete: Diffusion-based generative 3d shape completion. Advances in Neural Information Processing Systems, 36, 2024. 2", + "[13] Jasmine Collins, Shubham Goel, Kenan Deng, Achleshwar Luthra, Leon Xu, Erhan Gundogdu, Xi Zhang, Tomas F Yago Vicente, Thomas Dideriksen, Himanshu Arora, et al. Abo: Dataset and benchmarks for real-world 3d object understanding. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 21126-21136, 2022. 5", + "[14] Ruikai Cui, Weizhe Liu, Weixuan Sun, Senbo Wang, Taizhang Shang, Yang Li, Xibin Song, Han Yan, Zhennan Wu, Shenzhen Chen, et al. Neusdfusion: A spatial-aware generative model for 3d shape completion, reconstruction, and generation. In European Conference on Computer Vision, pages 1-18. Springer, 2024. 2", + "[15] Matt Deitke, Ruoshi Liu, Matthew Wallingford, Huong Ngo, Oscar Michel, Aditya Kusupati, Alan Fan, Christian Laforte, Vikram Voleti, Samir Yitzhak Gadre, et al. Objverse-xl: A universe of $10\\mathrm{m} + 3\\mathrm{d}$ objects. Advances in Neural Information Processing Systems, 36, 2024. 3", + "[16] Laura Downs, Anthony Francis, Nate Koenig, Brandon Kinman, Ryan Hickman, Krista Reymann, Thomas B McHugh, and Vincent Vanhoucke. Google scanned objects: A high-quality dataset of 3d scanned household items. In 2022 International Conference on Robotics and Automation (ICRA), pages 2553-2560. IEEE, 2022. 2, 6", + "[17] Huan Fu, Rongfei Jia, Lin Gao, Mingming Gong, Binqiang Zhao, Steve Maybank, and Dacheng Tao. 3d-future: 3d furniture shape with texture. International Journal of Computer Vision, 129:3313-3337, 2021. 5", + "[18] Jun Gao, Tianchang Shen, Zian Wang, Wenzheng Chen, Kangxue Yin, Daiqing Li, Or Litany, Zan Gojcic, and Sanja Fidler. Get3d: A generative model of high quality 3d textured shapes learned from images. Advances In Neural Information Processing Systems, 35:31841-31854, 2022. 2", + "[19] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139-144, 2020. 2", + "[20] Anchit Gupta, Wenhan Xiong, Yixin Nie, Ian Jones, and Barlas Oğuz. 3dgen: Triplane latent diffusion for textured mesh generation. arXiv preprint arXiv:2303.05371, 2023. 2", + "[21] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. 6, 2", + "[22] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 6", + "[23] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems (NeurIPS), pages 6840-6851, 2020. 
2" + ], + "bbox": [ + 516, + 92, + 903, + 898 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 493, + 924, + 504, + 936 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[24] Yicong Hong, Kai Zhang, Juixiang Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. Lrm: Large reconstruction model for single image to 3d. In International Conference on Learning Representations (ICLR), 2024. 1", + "[25] Tao Hu, Zhizhong Han, and Matthias Zwicker. 3d shape completion with multi-view consistent inference. In Proceedings of the AAAI conference on artificial intelligence, pages 10997-11004, 2020. 2", + "[26] Zitian Huang, Yikuan Yu, Jiawen Xu, Feng Ni, and Xinyi Le. Pf-net: Point fractal network for 3d point cloud completion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 7662-7670, 2020. 2", + "[27] Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and locally consistent image completion. ACM Transactions on Graphics (TOG), 36(4):107, 2017. 5", + "[28] Hanwen Jiang, Qixing Huang, and Georgios Pavlakos. Real3d: Scaling up large reconstruction models with real-world images. arXiv preprint arXiv:2406.08479, 2024. 6, 7, 8, 2, 4", + "[29] Heewoo Jun and Alex Nichol. Shap-e: Generating conditional 3d implicit functions. arXiv preprint arXiv:2305.02463, 2023. 3", + "[30] Yoni Kasten, Ohad Rahamim, and Gal Chechik. Point cloud completion with pretrained text-to-image diffusion models. Advances in Neural Information Processing Systems, 36, 2024. 2", + "[31] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139-1, 2023. 4", + "[32] Mukul Khanna, Yongsen Mao, Hanxiao Jiang, Sanjay Haresh, Brennan Shacklett, Dhruv Batra, Alexander Clegg, Eric Undersander, Angel X Chang, and Manolis Savva. Habitat synthetic scenes dataset (hssd-200): An analysis of 3d scene scale and realism tradeoffs for objectgoal navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16384–16393, 2024. 5", + "[33] Gwanghyun Kim, Taesung Kwon, and Jong Chul Ye. Diffus ionclip: Text-guided diffusion models for robust image man i plication. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2426-2435, 2022. 5", + "[34] Yulhwa Kim, Dongwon Jo, Hyesung Jeon, Taesu Kim, Daehyun Ahn, Hyungjun Kim, et al. Leveraging early-stage robustness in diffusion models for efficient and high-quality image synthesis. Advances in Neural Information Processing Systems, 36:1229-1244, 2023. 5", + "[35] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4015-4026, 2023. 3, 8", + "[36] Yushi Lan, Feitong Tan, Di Qiu, Qiangeng Xu, Kyle Genova, Zeng Huang, Sean Fanello, Rohit Pandey, Thomas" + ], + "bbox": [ + 91, + 90, + 480, + 900 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Funkhouser, Chen Change Loy, et al. Gaussian3diff: 3d gaussian diffusion for 3d full head synthesis and editing. arXiv preprint arXiv:2312.03763, 2023. 
2", + "[37] Yushi Lan, Shangchen Zhou, Zhaoyang Lyu, Fangzhou Hong, Shuai Yang, Bo Dai, Xingang Pan, and Chen Change Loy. Gaussiananything: Interactive point cloud latent diffusion for 3d generation. arXiv preprint arXiv:2411.08033, 2024. 6, 7, 8, 2, 4", + "[38] Muheng Li, Yueqi Duan, Jie Zhou, and Jiwen Lu. Diffusionsdf: Text-to-shape via voxelized diffusion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12642-12651, 2023. 2", + "[39] Xinhai Li, Huaibin Wang, and Kuo-Kun Tseng. Gaussian diffusion: 3d gaussian splatting for denoising diffusion probabilistic models with structured noise. arXiv preprint arXiv:2311.11221, 2023. 2", + "[40] Chen-Hsuan Lin, Chen Kong, and Simon Lucey. Learning efficient point cloud generation for dense 3d object reconstruction. In proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2018. 2", + "[41] Guilin Liu, Fitsum A. Reda, Kevin J. Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), 2018. 5", + "[42] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3d object. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV), pages 9298–9309, 2023. 1, 2", + "[43] Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv preprint arXiv:2209.03003, 2022. 4", + "[44] Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, et al. Wonder3d: Single image to 3d using cross-domain diffusion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 9970-9980, 2024. 2", + "[45] I Loshchilov. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. 6", + "[46] Shitong Luo and Wei Hu. Diffusion probabilistic models for 3d point cloud generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 2837-2845, 2021. 2", + "[47] Luke Melas-Kyriazi, Christian Rupprecht, and Andrea Vedaldi. Pc2: Projection-conditioned point cloud diffusion for single-image 3d reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12923-12932, 2023. 2", + "[48] Norman Müller, Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulo, Peter Kontschieder, and Matthias Nießner. Diffrf: Rendering-guided 3d radiance field diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4328-4338, 2023. 2", + "[49] Charlie Nash, Yaroslav Ganin, SM Ali Eslami, and Peter Battaglia. *Polygon: An autoregressive generative model of" + ], + "bbox": [ + 516, + 90, + 906, + 900 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "10", + "bbox": [ + 490, + 925, + 508, + 936 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "3d meshes. In International Conference on Machine Learning (ICML), pages 7220-7229. PMLR, 2020. 1, 2", + "[50] Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. Hologan: Unsupervised learning of 3d representations from natural images. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV), pages 7588–7597, 2019. 
1, 2", + "[51] Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, and Mark Chen. Point-e: A system for generating 3d point clouds from complex prompts. arXiv preprint arXiv:2212.08751, 2022. 2, 6", + "[52] Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 11453-11464, 2021. 2", + "[53] Evangelos Ntavelis, Aliaksandr Siarohin, Kyle Olszewski, Chaoyang Wang, Luc V Gool, and Sergey Tulyakov. Autodecoding latent 3d diffusion models. Advances in Neural Information Processing Systems, 36:67021-67047, 2023. 3", + "[54] Maxime Oquab, Timothee Darcet, Theo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023. 4, 1", + "[55] Maxime Aquab, Timothee Darcet, Théo Moutakanni, Huy V Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel HAZIZA, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. Transactions on Machine Learning Research, 2023. 3, 4", + "[56] Ege Ozguroglu, Ruoshi Liu, Dídac Surís, Dian Chen, Achal Dave, Pavel Tokmakov, and Carl Vondrick. pix2gestalt: Amodal segmentation by synthesizing wholes. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3931-3940. IEEE Computer Society, 2024. 2, 3, 6, 7, 8, 4, 5", + "[57] Eunbyung Park, Jimei Yang, Ersin Yumer, Duygu Ceylan, and Alexander C Berg. Transformation-grounded image generation network for novel 3d view synthesis. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pages 3500-3509, 2017. 2", + "[58] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2536-2544. IEEE, 2016. 5", + "[59] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning (ICML), pages 8748-8763. PmLR, 2021. 6, 2", + "[60] Barbara Roessle, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Angela Dai, and Matthias Nießner. L3dg: Latent 3d gaussian diffusion. In SIGGRAPH Asia 2024 Conference Papers, pages 1-11, 2024. 3" + ], + "bbox": [ + 91, + 90, + 482, + 900 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[61] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 10684-10695, 2022. 2", + "[62] Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. Graf: Generative radiance fields for 3d-aware image synthesis. Advances in Neural Information Processing Systems (NeurIPS), 33:20154-20166, 2020. 2", + "[63] Ruoxi Shi, Hansheng Chen, Zhuoyang Zhang, Minghua Liu, Chao Xu, Xinyue Wei, Linghao Chen, Chong Zeng, and Hao Su. Zero123++: a single image to consistent multi-view diffusion base model. arXiv preprint arXiv:2310.15110, 2023. 
6, 2", + "[64] Yichun Shi, Peng Wang, Jianglong Ye, Mai Long, Kejie Li, and Xiao Yang. Mvdream: Multi-view diffusion for 3d generation. arXiv preprint arXiv:2308.16512, 2023. 2, 3", + "[65] Wu Shuang, Youtian Lin, Yifei Zeng, Feihu Zhang, Jingxi Xu, Philip Torr, Xun Cao, and Yao Yao. Direct3d: Scalable image-to-3d generation via 3d latent diffusion transformer. Advances in Neural Information Processing Systems, 37:121859–121881, 2025. 3", + "[66] J Ryan Shue, Eric Ryan Chan, Ryan Po, Zachary Ankner, Jiajun Wu, and Gordon Wetzstein. 3d neural field generation using triplane diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20875-20886, 2023. 2", + "[67] Yawar Siddiqui, Antonio Alliegro, Alexey Artemov, Tatiana Tommasi, Daniele Sirigatti, Vladislav Rosov, Angela Dai, and Matthias Nießner. Meshgpt: Generating triangle meshes with decoder-only transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19615-19625, 2024. 1, 2", + "[68] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning (ICML), pages 2256-2265. pmlr, 2015. 2", + "[69] Stefan Stojanov, Anh Thai, and James M Rehg. Using shape to categorize: Low-shot learning with an explicit shape bias. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1798-1808, 2021. 2, 6", + "[70] Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, et al. The replica dataset: A digital replica of indoor spaces. arXiv preprint arXiv:1906.05797, 2019. 2, 6, 8", + "[71] David Stutz and Andreas Geiger. Learning 3d shape completion from laser scan data with weak supervision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1955-1964, 2018. 2", + "[72] Stanislaw Szymanowicz, Christian Rupprecht, and Andrea Vedaldi. Viewset diffusion: (0-)image-conditioned 3d generative models from 2d data. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 8863-8873, 2023. 1, 2, 3", + "[73] Stanislaw Szymanowicz, Chrisitian Rupprecht, and Andrea Vedaldi. Splatter image: Ultra-fast single-view 3d recon" + ], + "bbox": [ + 516, + 92, + 906, + 900 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 490, + 924, + 506, + 936 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "struction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10208-10217, 2024. 1", + "[74] Shitao Tang, Jiacheng Chen, Dilin Wang, Chengzhou Tang, Fuyang Zhang, Yuchen Fan, Vikas Chandra, Yasutaka Furukawa, and Rakesh Ranjan. Mvdiffusion++: A dense high-resolution multi-view diffusion model for single or sparse-view 3d object reconstruction. In European Conference on Computer Vision, pages 175-191. Springer, 2024. 2", + "[75] Arash Vahdat, Francis Williams, Zan Gojcic, Or Litany, Sanja Fidler, Karsten Kreis, et al. Lion: Latent point diffusion models for 3d shape generation. Advances in Neural Information Processing Systems, 35:10021-10039, 2022. 3", + "[76] Vikram Voleti, Chun-Han Yao, Mark Boss, Adam Letts, David Pankratz, Dmitry Tochilkin, Christian Laforte, Robin Rombach, and Varun Jampani. 
Sv3d: Novel multi-view synthesis and 3d generation from a single image using latent video diffusion. In European Conference on Computer Vision (ECCV), pages 439-457. Springer, 2024. 1, 2", + "[77] Daniel Watson, William Chan, Ricardo Martin-Brualla, Jonathan Ho, Andrea Tagliasacchi, and Mohammad Norouzi. Novel view synthesis with diffusion models. arXiv preprint arXiv:2210.04628, 2022. 2", + "[78] Daniel Watson, William Chan, Ricardo Martin Brulla, Jonathan Ho, Andrea Tagliasacchi, and Mohammad Norouzi. Novel view synthesis with diffusion models. In The Eleventh International Conference on Learning Representations (ICLR), 2023. 1", + "[79] Christopher Wewer, Kevin Raj, Eddy Ilg, Bernt Schiele, and Jan Eric Lenssen. latentsplat: Autoencoding variational gaussians for fast generalizable 3d reconstruction. In European Conference on Computer Vision (ECCV), pages 456-473. Springer, 2024. 2", + "[80] Zijie Wu, Yaonan Wang, Mingtao Feng, He Xie, and Ajmal Mian. Sketch and text guided diffusion model for colored point cloud generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8929-8939, 2023. 2", + "[81] Jianfeng Xiang, Zelong Lv, Sicheng Xu, Yu Deng, Ruicheng Wang, Bowen Zhang, Dong Chen, Xin Tong, and Jiaolong Yang. Structured 3d latents for scalable and versatile 3d generation. arXiv preprint arXiv:2412.01506, 2024. 1, 2, 3, 4, 6, 7, 8, 5", + "[82] Desai Xie, Jiahao Li, Hao Tan, Xin Sun, Zhixin Shu, Yi Zhou, Sai Bi, Soren Pirk, and Arie E Kaufman. Carve3d: Improving multi-view reconstruction consistency for diffusion models with rl finetuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6369-6379, 2024. 2", + "[83] Katherine Xu, Lingzhi Zhang, and Jianbo Shi. Amodal completion via progressive mixed context diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9099-9109, 2024. 2", + "[84] Bo Yang, Hongkai Wen, Sen Wang, Ronald Clark, Andrew Markham, and Niki Trigoni. 3d object reconstruction from a single depth view with adversarial learning. In Proceedings of the IEEE international conference on computer vision workshops (ICCVW), pages 679-688, 2017. 2" + ], + "bbox": [ + 91, + 90, + 482, + 900 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[85] Guanqi Zhan, Chuanxia Zheng, Weidi Xie, and Andrew Zisserman. Amodal ground truth and completion in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 28003-28013, 2024. 2", + "[86] Xiaohang Zhan, Xingang Pan, Bo Dai, Ziwei Liu, Dahua Lin, and Chen Change Loy. Self-supervised scene de- occlusion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 3784-3792, 2020. 2", + "[87] Bowen Zhang, Yiji Cheng, Jiaolong Yang, Chunyu Wang, Feng Zhao, Yansong Tang, Dong Chen, and Baining Guo. Gaussiancube: Structuring gaussian splatting using optimal transport for 3d generative modeling. arXiv preprint arXiv:2403.19655, 2024. 2", + "[88] Longwen Zhang, Ziyu Wang, Qixuan Zhang, Qiwei Qiu, Anqi Pang, Haoran Jiang, Wei Yang, Lan Xu, and Jingyi Yu. Clay: A controllable large-scale generative model for creating high-quality 3d assets. ACM Transactions on Graphics (TOG), 43(4):1-20, 2024. 1", + "[89] Chuanxia Zheng and Andrea Vedaldi. Free3d: Consistent novel view synthesis without 3d representation. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9720-9731, 2024. 1, 2, 3", + "[90] Chuanxia Zheng, Tat-Jen Cham, and Jianfei Cai. Pluralistic image completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1438-1447, 2019. 5", + "[91] Chuanxia Zheng, Duy-Son Dao, Guoxian Song, Tat-Jen Cham, and Jianfei Cai. Visiting the invisible: Layer-by-layer completed scene decomposition. International Journal of Computer Vision (IJCV), 129:3195-3215, 2021. 2", + "[92] Chuanxia Zheng, Tat-Jen Cham, Jianfei Cai, and Dinh Phung. Bridging global context interactions for high-fidelity image completion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 11512-11522, 2022. 4", + "[93] Xiao Zheng, Xiaoshui Huang, Guofeng Mei, Yuenan Hou, Zhaoyang Lyu, Bo Dai, Wanli Ouyang, and Yongshun Gong. Point cloud pre-training with diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22935-22945, 2024. 2", + "[94] Xin-Yang Zheng, Hao Pan, Yu-Xiao Guo, Xin Tong, and Yang Liu. Mvd^2: Efficient multiview 3d reconstruction for multiview diffusion. In ACM SIGGRAPH 2024 Conference Papers, pages 1-11, 2024. 2", + "[95] Linqi Zhou, Yilun Du, and Jiajun Wu. 3d shape generation and completion through point-voxel diffusion. In Proceedings of the IEEE/CVF international conference on computer vision, pages 5826-5835, 2021. 2", + "[96] Jun-Yan Zhu, Zhoutong Zhang, Chengkai Zhang, Jiajun Wu, Antonio Torralba, Josh Tenenbaum, and Bill Freeman. Visual object networks: Image generation with disentangled 3d representations. Advances in neural information processing systems (NeurIPS), 31, 2018. 2", + "[97] Zi-Xin Zou, Zhipeng Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Yan-Pei Cao, and Song-Hai Zhang. Triplane" + ], + "bbox": [ + 516, + 90, + 903, + 900 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 490, + 924, + 508, + 936 + ], + "page_idx": 11 + }, + { + "type": "text", + "text": "meets gaussian splatting: Fast and generalizable single-view 3d reconstruction with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10324-10335, 2024. 2", + "bbox": [ + 122, + 90, + 485, + 148 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 490, + 924, + 508, + 936 + ], + "page_idx": 12 + }, + { + "type": "text", + "text": "Amodal3R: Amodal 3D Reconstruction from Occluded 2D Images Supplementary Material", + "text_level": 1, + "bbox": [ + 165, + 85, + 831, + 138 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "A. Implementation Details", + "text_level": 1, + "bbox": [ + 89, + 155, + 316, + 171 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "A.1. Network Design", + "text_level": 1, + "bbox": [ + 89, + 180, + 253, + 196 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "We adopt the network design in TRELLIS [81] to load the pre-trained image-to-3D weights and integrate the mask-weighted cross-attention mechanism to each DiT block (24 blocks in total). 
And each image-conditioned cross-attention layer is immediately followed by an occlusion-aware cross-attention layer.", + "bbox": [ + 89, + 200, + 483, + 292 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(a) Patchify and weight of visibility/occlusion mask The input condition image has a resolution of $512 \\times 512$ , which is resized to $518 \\times 518$ to facilitate splitting into patches of size $14 \\times 14$ , as required by DINOv2 [54]. The resulting condition is then flattened into a tensor $\\mathbf{c}_{dino} \\in \\mathbb{R}^{1374 \\times 1024}$ , where the sequence length corresponds to $37 \\times 37$ patches plus 1 CLS token and 4 register tokens. To better align the visibility and occlusion masks with the DINOv2 features, we first split the masks into patches of the same size, then calculate the weight score for each patch using Eq. 4 and Eq. 5. The final $\\mathbf{c}_{vis} \\in \\mathbb{R}^{1374 \\times 1}$ and $\\mathbf{c}_{occ} \\in \\mathbb{R}^{1374 \\times 1}$ are obtained by flattening the weight scores, with a value of 1 assigned to the CLS and register token positions.", + "(b) Occlusion-aware cross-attention layer. We set the feature dimension of the occlusion-aware cross-attention layers to 1024, matching that of the image-conditioned cross-attention layers. To maintain consistent dimensions, we replicate the flattened occlusion masks to form a tensor $\\mathbf{c}_{\\text{occ\\_stack}} \\in \\mathbb{R}^{1374 \\times 1024}$ .", + "(c) Multi-head Cross-Attention. Our mask-weighted multi-head cross-attention (MHA) layer, which is implemented to encourage the model to focus its attention on the visible parts of the object, is an extension of the cross-attention described in the main paper. Specifically, $H$ heads are run in parallel, resulting in $H$ attention scores. For mask-weighted attention mechanism, we impose $c_{\\mathrm{vis}}$ simultaneously to each head:" + ], + "bbox": [ + 89, + 292, + 483, + 700 + ], + "page_idx": 13 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {A} _ {h} = \\operatorname {s o f t m a x} \\left(\\boldsymbol {S} _ {h} + \\log c _ {\\text {v i s}}\\right), \\tag {A.1}\n$$\n", + "text_format": "latex", + "bbox": [ + 178, + 727, + 480, + 743 + ], + "page_idx": 13 + }, + { + "type": "equation", + "text": "\n$$\n\\mathrm {M H A} = \\left[ \\boldsymbol {A} _ {1} \\boldsymbol {v}; \\boldsymbol {A} _ {2} \\boldsymbol {v}; \\dots ; \\boldsymbol {A} _ {H} \\boldsymbol {v} \\right] \\tag {A.2}\n$$\n", + "text_format": "latex", + "bbox": [ + 178, + 746, + 482, + 762 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "A.2. Training Details", + "text_level": 1, + "bbox": [ + 89, + 772, + 256, + 787 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "(a) Pre-trained model loading. While TRELLIS is split into multiple modules, in our work we only train the sparse structure flow transformer and the structured latent flow transformer (see the overview figure where the \"fire\" symbols indicate the parts that are fine-tuned, and \"snowflake\" symbols indicate that we directly use the pretrained weights).", + "bbox": [ + 89, + 794, + 483, + 900 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "(b) Data Augmentation. As described in Sec. 3.3, we generate random masks during training for data augmentation. Specifically, we begin by randomly drawing 1 to 3 lines, circles, and ellipses in the mask image. 
Next, to ensure these regions connect — thereby better simulating real-world occlusions, where mask regions are typically not highly fragmented — we randomly add 3 to 7 rectangular regions that have undergone an expansion operation. This results in a stable masking of the objects in the training data. Example inputs are presented in Fig. A.1.", + "bbox": [ + 511, + 156, + 906, + 309 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/b04b0527dc65b40fe900a0bb5245fee2ca569c931b5126483f78c3e1eb5d89d7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 521, + 338, + 645, + 433 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/e473421bda3a60dfa3a9ac5d71ce8d3d0be3247fe5c3eb8a7b8a514a7ac65a4c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 650, + 338, + 769, + 433 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/705a3efa88479430786188109444797316a4b50d5391f193ea586c1243e89535.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 776, + 338, + 897, + 433 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/d62f7a0625431a474dc72369f8aeddce76336606025ee6c862bf83766204c772.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 522, + 435, + 643, + 530 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/c1883fbc94e049a66af85bf3efd9200fd3ffaf4eff7aa042007643987dfcbbbf.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 650, + 435, + 769, + 530 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/08394c2eb97d7c0453e361592e75fef7fdc4a1a8791d4a1d772760388c47551c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 776, + 435, + 897, + 530 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/596129c8c6c42a54370e7e075fff5cdb848c8ad31815f9ac36baa3aa8e493904.jpg", + "image_caption": [ + "Original image" + ], + "image_footnote": [], + "bbox": [ + 522, + 531, + 643, + 626 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/ac1fecb53dd177f8397f60a2a1b0ae32e3d2d0f89ebaa3ecb8d06a193b2f9337.jpg", + "image_caption": [ + "Masked image" + ], + "image_footnote": [], + "bbox": [ + 650, + 531, + 769, + 626 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/e10d9e59eaa16897267857117a05853cf8cbb446ccfa5611419002eeb268c0bd.jpg", + "image_caption": [ + "Occlusion mask", + "Figure A.1. Examples of random mask generation. The visible areas are shown in white, occluded areas in gray and background in black." + ], + "image_footnote": [], + "bbox": [ + 776, + 531, + 897, + 626 + ], + "page_idx": 13 + }, + { + "type": "text", + "text": "A.3. Inference Details", + "text_level": 1, + "bbox": [ + 513, + 747, + 684, + 762 + ], + "page_idx": 13 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "(a) 3D-consistent mask ratio. For the multi-view 3D-consistent masks described in Sec. 3.3, we set the mask ratio to a random number between 0.4 and 0.6 for each object, which results in a variety of reasonable mask areas.", + "(b) Time consumption. Despite the introduction of additional cross-attention layers, our inference time remains comparable to that of the baselines. Amodal3R can generate and render each object in under 10 seconds." 
+ ], + "bbox": [ + 511, + 773, + 906, + 900 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "1", + "bbox": [ + 493, + 924, + 503, + 935 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/9c2ef8895af5bc56a4e22ac31ba78caceebce58eaa6b1587f72aa517f01ca087.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 109, + 95, + 189, + 159 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/615aab167763d7d564aa2c043f298f360c03cd068519012f496cea56c402d227.jpg", + "image_caption": [ + "Occluded input" + ], + "image_footnote": [], + "bbox": [ + 114, + 162, + 194, + 236 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/92616bc57210944caf7bac8ffcd90b9a67a8a7c6dbde9f7f2aeb54b6b08db6eb.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 220, + 97, + 285, + 159 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/9f79bf6a3e693fc48d5f856d4bbe95c8b716127d1e40aa554ba0a670ae682e57.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 285, + 98, + 357, + 159 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/35269d85c0ccd1276b1a039bef54ae08104a9b17d31f6d3822c4377144211c03.jpg", + "image_caption": [ + "GaussianAnything [37]" + ], + "image_footnote": [], + "bbox": [ + 209, + 165, + 285, + 227 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/6a37b06161e088662aaa499b14ba3cbed12315720d8ca0de6d5770dfa5caa4fd.jpg", + "image_caption": [ + "Figure A.2. Examples using occluded images directly as the input of baseline models." + ], + "image_footnote": [], + "bbox": [ + 285, + 167, + 367, + 228 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/df95bfe63946dca7058cf2d4d385bca8b0cbc56bee3a96355da3c17b7da46a7d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 380, + 97, + 450, + 159 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/2f2a03cbbf9479eaf62690774c6f7f0ad40af3149594ddf1f89c57b4a6660468.jpg", + "image_caption": [ + "Real3D [28]" + ], + "image_footnote": [], + "bbox": [ + 392, + 164, + 454, + 222 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/876ecbf2904d5d5ba41a09e1642ef3a94b1d577e0832f68e3d8210ceb08e9465.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 464, + 95, + 527, + 157 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/a78b1617d2c8fa14a4ed066af6b7ae056aa1f5cab50f8367db695bc937d65401.jpg", + "image_caption": [ + "TRELLIS [81]" + ], + "image_footnote": [], + "bbox": [ + 468, + 161, + 529, + 223 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/78fa239609ad9b322309c6000a6d97b1fa4731ce41715a07b7d7a24d1adcdb1b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 555, + 94, + 627, + 160 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/8c3e4a89f0670e172fadded10975eee54aebfa304b9e6815e71adc3cdc6a4b2e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 557, + 167, + 617, + 223 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/313f3ceb6f3d0da0644f0be2a0c948874779d72b805c97e3f0e8afc23b6ca136.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 637, + 95, + 705, + 159 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/df3d6e97efca84d18606fb24d27a150c0e19b7bed6dc0235a72e0f460a05c3a2.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 637, + 169, + 696, + 223 + ], + "page_idx": 
14 + }, + { + "type": "image", + "img_path": "images/117bbb669d8338533595438417c5cd2c4062a58aa591ef0dbef7be0f3f0975ca.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 733, + 95, + 803, + 161 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/12e4b12feb71553492b2bba143a90efbae32c993ab3feaddbf364d9d0b4da9f1.jpg", + "image_caption": [ + "Amodal3R (Ours)" + ], + "image_footnote": [], + "bbox": [ + 728, + 169, + 805, + 233 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/1e0fbdc5fba5f402fc4063a139413b5100bc2a80dcba73fd638f35d60287d4e7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 812, + 95, + 882, + 161 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/4f77cadcb832bb42842ceb4bc7c97687976ee5d613d7316f2b53f0858f0814fd.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 813, + 171, + 895, + 234 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "B. Experimental Details", + "text_level": 1, + "bbox": [ + 89, + 305, + 295, + 321 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "B.1. Evaluation Protocol", + "text_level": 1, + "bbox": [ + 89, + 329, + 282, + 344 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "We evaluate the results using Google Scanned Objects (GSO) (1,030 objects) [16] and a randomly sampled subset of Toys4K [69] containing 1,500 objects. Here, we provide additional details regarding the computation of our evaluation metrics.", + "bbox": [ + 89, + 351, + 482, + 425 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "(a) Rendering quality and semantic consistency alignment To assess overall rendering quality, we compute the Fréchet Inception Distance (FID) [21] and Kernel Inception Distance (KID) [3]. Moreover, we evaluate semantic consistency using the CLIP score [59] by measuring the cosine similarity between the CLIP features of each generated image and its corresponding ground truth. For each object, we render 4 views using cameras with yaw angles of $\\{0^{\\circ}, 90^{\\circ}, 180^{\\circ}, 270^{\\circ}\\}$ and a pitch angle of $30^{\\circ}$ . The camera is positioned with a radius of 2.0 and looks at the origin with a FoV of $40^{\\circ}$ , consistent with TRELLIS [81]. While FID and KID are calculated between the ground truth and generated sets (6,000 images for Toys4K and 4,120 images for GSO), the CLIP score is calculated in a pair-wise manner, and we report the mean value to evaluate semantic consistency.", + "bbox": [ + 89, + 426, + 482, + 652 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "(b) Geometry quality For 3D geometry evaluation, we adopt Point cloud FID (P-FID) [51], Coverage Score (COV), and Minimum Matching Distance (MMD) using Chamfer Distance (CD). Following previous work [37, 51, 81], we sampled 4096 points from each GT/generated point cloud, which are obtained from the unprojected multi-view depth maps using the farthest point sampling.", + "bbox": [ + 89, + 652, + 482, + 758 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "C. More Results", + "text_level": 1, + "bbox": [ + 89, + 770, + 230, + 786 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "In this section, we provide additional qualitative examples and comparison results to further demonstrate the performance of our Amodal3R.", + "bbox": [ + 89, + 795, + 482, + 840 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "C.1. 
Baselines using occluded input", + "text_level": 1, + "bbox": [ + 89, + 848, + 366, + 864 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "We have stated in the main paper that \"occluded images will lead to incomplete or broken structures\" in current", + "bbox": [ + 89, + 869, + 482, + 900 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "3D generative models. Here, we provide examples where pix2gestalt is omitted and the occluded images are directly used as the input. As shown in Fig. A.2, when baseline models receive images of partially visible objects as input, they often fail to recover complete and intact 3D assets.", + "bbox": [ + 511, + 306, + 905, + 381 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "C.2. More single-view to 3D examples", + "text_level": 1, + "bbox": [ + 511, + 393, + 805, + 410 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Due to the page restriction, we only provide limited examples in the main paper. Here we visualize more examples of single-view to 3D to further demonstrate the effectiveness of our method in Fig. C.4. The results show that compared with the 2D amodal completion + 3D generation baselines, our Amodal3R yields higher quality 3D assets across multiple categories.", + "bbox": [ + 511, + 417, + 905, + 523 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "C.3. More multi-view to 3D examples", + "text_level": 1, + "bbox": [ + 513, + 534, + 803, + 551 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/f8245dfc94ff1585afa4578515cf4ebfaec3862cbc99047d3ff96a7f875b984d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 568, + 574, + 844, + 655 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/52930762b79c0e57ccf7fd99bf51f14ffd04d0b75f4411c8357e139aeb338c72.jpg", + "image_caption": [ + "Multi-view Occluded input", + "pix2gestalt [56]" + ], + "image_footnote": [], + "bbox": [ + 568, + 666, + 767, + 743 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/9f90fd4f57664dfc4099d96b880e1abfb3de5b968deaa8ca18b1f426ca59759d.jpg", + "image_caption": [ + "pix2gestalt [56] + Zero123++ [63]", + "Figure C.3. Example of \"pix2gestalt\" and \"pix2gestalt + MV\" input of multi-view to 3D evaluation." + ], + "image_footnote": [], + "bbox": [ + 568, + 758, + 841, + 838 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 493, + 924, + 503, + 935 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "We first provide visualized examples to explicitly explain the difference between the \"pix2gestalt\" and \"pix2gestalt + MV\" settings in the multi-view to 3D generation in Fig. C.3. For the \"pix2gestalt\" setting, we directly implement pix2gestalt for the amodal completion independently under each view. For the \"pix2gestalt + MV\" setting, we first choose the view with the greatest visibility from the 4 occluded views, then use pix2gestalt to complete the object (which is shown in the pix2gestalt column in the qualitative result), and subsequently use Zero123++ to get the 4 consistent views as the input of LaRa and TRELLIS. It can be observed that pix2gestalt alone results in obvious multiview inconsistency, while with Zero123++ the consistency is significantly improved, thus leading to better 3D generation quality.", + "bbox": [ + 89, + 90, + 480, + 316 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "More multi-view to 3D examples are provided in Fig. 
C.5, where our Amodal3R again generates 3D assets with better quality than the baselines.", + "bbox": [ + 89, + 316, + 480, + 363 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "C.4. More diverse 3D reconstruction results", + "text_level": 1, + "bbox": [ + 89, + 373, + 429, + 388 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "In Fig. C.6, we show more examples to demonstrate that Amodal3R is able to generate diverse reasonable results from the occluded input with multiple stochastic samplings.", + "bbox": [ + 89, + 396, + 480, + 441 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "C.5. More in-the-wild results", + "text_level": 1, + "bbox": [ + 89, + 452, + 316, + 467 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "We provide more examples where we compare Amodal3R with \"pix2gestalt + TRELLIS\" pipeline in Fig. C.7. The results further demonstrate that 2D amodal method lacks 3D geometric understanding, often resulting in improper completion, such as completing the armrest of the chair as the backrest. In contrast, Amodal3R generates more plausible results with reasonable geometry and rich textural details.", + "bbox": [ + 89, + 474, + 480, + 580 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "In Fig. C.8 and Fig. C.9 we provide more visualization results on the in-the-wild images and the corresponding colored normal maps to show the geometry details. We implement Segment Anything [35] to obtain the segmentation masks.", + "bbox": [ + 89, + 580, + 480, + 655 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "C.6. Discussion, Limitation and Future Work", + "text_level": 1, + "bbox": [ + 89, + 666, + 441, + 681 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "While Amodal3R achieves impressive 3D amodal completion, it comes with several limitations we hope to solve in the future. 1) Dataset expansion. Due to the computational resources limitation, Amodal3R is trained on very limited data, consisting of only 20,627 synthetic 3D assets, predominantly confined to the furniture category. Consequently, completions on some complex or out-of-distribution objects may fail or lead to unrealistic structures. We believe that training on larger datasets, e.g. Objaverse-XL [15], could mitigate these issues. 2) Real-World data adaptation. Different from pix2gestalt, Amodal3R is trained exclusively on synthetic data. As a result, it cannot leverage environmental cues and must rely solely on the visible portions of occluded objects for reconstruction. Creating real", + "bbox": [ + 89, + 688, + 482, + 900 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "world 3D modal datasets will further enhance the ability to apply models to real scenes. 3) Controllable completion. Currently, how objects are completed is entirely up to the model itself and lacks control. Therefore, to further enhance the model to accept additional conditions, such as text, and allow users to edit and control the style of the completion process will be an important research direction for us in the future.", + "bbox": [ + 511, + 90, + 903, + 210 + ], + "page_idx": 15 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 493, + 924, + 503, + 935 + ], + "page_idx": 15 + }, + { + "type": "image", + "img_path": "images/200788fa8c59010615e251b7c018191c590445a647523496f6bff0ccc66bf77e.jpg", + "image_caption": [ + "Figure C.4. Additional single-view to 3D comparison examples. 
The occluders are shown in black and the visible regions are highlighted with red outlines." + ], + "image_footnote": [], + "bbox": [ + 89, + 152, + 898, + 806 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 493, + 924, + 504, + 935 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/6e3e8cb3f6e682b55056f262e4f9a2f12e3f1a7a2aebcb739a7935617b01060b.jpg", + "image_caption": [ + "Figure C.5. Additional multi-view to 3D comparison examples. The occluders are shown in black and the visible regions are highlighted with red outlines." + ], + "image_footnote": [], + "bbox": [ + 84, + 148, + 901, + 801 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 493, + 924, + 504, + 936 + ], + "page_idx": 17 + }, + { + "type": "image", + "img_path": "images/bf75ddd1a0b936e11399a4404cec99a271558a5059efd0f5ea400b27d6a896dd.jpg", + "image_caption": [ + "Figure C.6. Additional diverse examples. The occluders are shown in black and the visible regions are highlighted with red outlines." + ], + "image_footnote": [], + "bbox": [ + 94, + 157, + 890, + 797 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 493, + 925, + 504, + 936 + ], + "page_idx": 18 + }, + { + "type": "image", + "img_path": "images/e870017230928572f834d4e3b7b44e9ccc89b671cc7b2dfbdeca47b3555aa063.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 101, + 140, + 209, + 220 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/6492dba26fef53a6b15fcf4f6bd0f48ff1bd3a409eed95af009133f8b85c0fd3.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 101, + 220, + 207, + 301 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/d90c74b16e5686f3fd0a043e1d39ccc701a7b9835351836631c2644d63c39a4e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 101, + 305, + 207, + 385 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/685ad6bc4dc8ba6e901173eaab3f4f5e07a5bb2bcf50d3d5f20d6720d0395192.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 101, + 388, + 207, + 465 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/ecd950c091f52dd14a378d38d688ffb86bfaa6418ed1a46ce552bfb485beda3f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 101, + 467, + 207, + 542 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/98e46a5dcf149f1e0f883787c69fb2eeb6d45f364250f5b58e7230c52ce6f351.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 101, + 545, + 207, + 621 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/72372d8636e2039e9e78d331bc0c155b39ac6b8cbfebfe37d6e1c282235da333.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 101, + 623, + 207, + 698 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/9407f92f2328bc77b19b079886cca8c258b58a670a9d3cb0028c09f6d6f59ea4.jpg", + "image_caption": [ + "Input image", + "Figure C.7. Additional in-the-wild examples compared with pix2gestalt + TRELLIS. The target objects and occluders are marked with the red and green outlines." 
+ ], + "image_footnote": [], + "bbox": [ + 101, + 702, + 207, + 781 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/dcf6c81676d2bfcc52799174f6ea258f27f99878bb1d8ce177ccd911a170df41.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 207, + 157, + 893, + 218 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/7221e58f8fa1621d86ddcfca117da0a9ab550e97df879da747515b50089871ab.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 209, + 239, + 893, + 300 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/ee3d4702f03df512a1dcd2f450f6229b648789fc7b2e62cd2be6ba680f612a1c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 209, + 305, + 893, + 375 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/348936c94c048be73a84a1a35305c574d97223f7ddaecd36f5f5c956c0e0243a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 209, + 388, + 888, + 452 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/e9235990f61e5d288eaf7d5196b8d147e27d2759c3e59bd094647efae78f5478.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 209, + 474, + 888, + 545 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/4f68e7ed21a55dc404a754e72d158812e6855ef3c19eb4fe6da655eaa44f381c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 209, + 547, + 888, + 619 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/5778f0574d6cb26ba5afdae215feb108c626210189ec2e390e3e7bfe3abc61f5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 209, + 623, + 893, + 691 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/d195416a2f5327884ffc98ff5a1d8dcb4ff34a0d0f4a043146c8f62254dfc341.jpg", + "image_caption": [ + "pix2gestalt [56]", + "TRELLIS [81]", + "Amodal3R (Ours)" + ], + "image_footnote": [], + "bbox": [ + 209, + 702, + 901, + 781 + ], + "page_idx": 19 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 493, + 924, + 504, + 935 + ], + "page_idx": 19 + }, + { + "type": "image", + "img_path": "images/ada9408678dd16ac9a17139bc6bcffa21cef5bfdfb1d7de869883fff85659fde.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 106, + 104, + 227, + 188 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/6062e969b144b68bde5c4787caf92635af76db3c1558a9bb607b2b4c8d4b3eef.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 106, + 191, + 227, + 284 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/2d6aa5a394deca4847eb806106419625bb3c8facd8f63c970e6d2e709d187393.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 106, + 287, + 227, + 377 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/b1ee42e3f1c1b55b7c2ad148ca3beddd296d22b422314abeba1672040f573ebb.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 106, + 380, + 227, + 472 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/351f4fc242c00ce05b1a676430843ead379c1561c82870ff79d72e7ef9fad3ec.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 106, + 477, + 227, + 565 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/20456732dc746a6bebf876773ba1fe04717282542debba0a537100f6b3a79c41.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 106, + 569, + 227, + 654 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": 
"images/14513e3cf22f958af8f2643bf88c8e1a382b22520cab9cbf47471ead011e29d4.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 106, + 657, + 227, + 739 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/accebdc45f78add46402aaf69516164e54dcabd80c7e75d86ee723403791d785.jpg", + "image_caption": [ + "Input image", + "Figure C.8. Additional in-the-wild examples. The target objects and occluders are marked with the red and green outlines." + ], + "image_footnote": [], + "bbox": [ + 106, + 744, + 227, + 830 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/7422c5ddae7e24c65457d03b1bd98f2cf53e00b23dfb4d18efada84fb0cf74e5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 243, + 114, + 901, + 180 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/abdf454b1c319bbc3abe24bb70aea384919e5811cfb8e8caaac98e01fb1d1563.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 254, + 212, + 883, + 271 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/217a80eff1e8e6c5b9a4f61bdf9bef041254d406ea49291a148013926a2424ee.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 256, + 310, + 870, + 359 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/590eeb7939486a44b00107d94f5a70ac9d3c78fd64407589f2f325dbfc8c3856.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 233, + 407, + 903, + 452 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/af2ed36a83bcdb256e48f085f9d29306437b2c92f8a174de87bb9c2c8abaf3ca.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 236, + 500, + 900, + 546 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/57cb0acdeb0fddc57c4dc24b70a23ccfb10ceecd0a119ca07fd1fd93d810939b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 241, + 593, + 893, + 630 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/e2ecb2a27e1f5f46cb1e1761e433b07111ed89b0cfa1d826e0401f18bb8190a5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 261, + 662, + 869, + 729 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/3354b81a13aa69181692ebc240c5711c8fd2df331d852e05674a15348bb8042c.jpg", + "image_caption": [ + "Reconstruction result" + ], + "image_footnote": [], + "bbox": [ + 245, + 755, + 888, + 823 + ], + "page_idx": 20 + }, + { + "type": "page_number", + "text": "8", + "bbox": [ + 493, + 925, + 503, + 935 + ], + "page_idx": 20 + }, + { + "type": "image", + "img_path": "images/095b0a8683c9a1abe5d1fd0351f2b0bd251d363bc2faf44ef3b8c6a7aedcea05.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 106, + 99, + 222, + 193 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/c359707e04e058224e1375c5e81fbc8ccddcb047b0de7412b82c559e5f02bc21.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 104, + 196, + 222, + 284 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/769f2f75988fa8c20f7a150b28cc25793d51852498286a3fa481276629ccca9c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 106, + 287, + 222, + 373 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/1184cd58dac7c2417f151966e2cecd6fc93b1d0e7753d2a88787190ba2b76e60.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 107, + 380, + 222, + 473 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": 
"images/33f0534eb2700edc8fc3ea5301c729d4d26448af5eac394994064a31ae54f642.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 107, + 477, + 220, + 565 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/0a8e71eded34cdb29ae8a23f3edcf84087356e3b089e8f2c59718d46781c60a1.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 107, + 568, + 222, + 656 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/d0e2fc738ba13d07dee36c377c1631c2356014708deba436d5f0e69e34bdc11b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 107, + 659, + 222, + 743 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/f4d800f760e32f85a18e0d7b49cd24ec481ab920ef0ced066be5a55ba0cca701.jpg", + "image_caption": [ + "Input image" + ], + "image_footnote": [], + "bbox": [ + 107, + 746, + 222, + 830 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/929298d615a3fb7cc447c0f98e95baf173faa9e04a7d618c8a9009c54c584308.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 236, + 107, + 380, + 176 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/e16f1142d0a59d54d6b9c2c7b7795759e11379f48803ebd164889819bec4222e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 419, + 108, + 467, + 176 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/f7039e6b445a5be5eee0bd5040ae487c6e7aee8781d78bfa73e920a9c75420a8.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 500, + 108, + 545, + 176 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/52bff517732bce595a3fc8d13402ba1d0f1b3b7a1eb2d4299eeefdcf805fcaee.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 598, + 109, + 643, + 176 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/f01a9e0ef9e9acd6cba92a918067dc45c9b7675a402b6e0568af1eb69c21ab5a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 674, + 109, + 720, + 176 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/a15c57d10a0d91aa88495617e816b58fca0cf35ec218ed9cdf2e5a8522be81f0.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 758, + 109, + 802, + 176 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/c740a0bd12726124d22a0278522bd429dd12f45e9318efe95ac729e78ae54dc7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 834, + 109, + 877, + 176 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/6bf181e361af64cbd49bdff2b0cd43c6a50dc0ad36963d735832233052683d74.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 253, + 231, + 300, + 253 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/e61d8fec364efc27a470eff5648933302e713b474924644e2edd3c55bfeda54d.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 330, + 233, + 377, + 253 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/d7115e74e97401735aa2884ea8f3346d25d37ce117b630b4920fac2bc1285067.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 426, + 232, + 473, + 253 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/5cc2da83fe61a2fdc4d3e7041779c61220f5b3b073385721791245264d4a1551.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 503, + 232, + 550, + 253 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/21d725ad0989e383de4f63e90ce664105e05c73d0b034d3c5ebf71613a830643.jpg", + "image_caption": 
[], + "image_footnote": [], + "bbox": [ + 593, + 229, + 640, + 252 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/fd66ab2fe02a9a52c5e590980319f647bf554f99542cd9d6afc3047a2004f208.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 669, + 229, + 718, + 252 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/a8220c93bb0bff26f3701389b85c2241b024c6820bd425f0edf30ea47e53680b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 756, + 229, + 803, + 253 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/762a5473b8c38fc5078ebc4dbaea183295506b6315263d1760274a5aef82e4b4.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 831, + 229, + 880, + 253 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/2d36140df0c83d5417674ee1873c2c32baa4cb1e52aa906d5a68028dd586970c.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 241, + 316, + 308, + 359 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/970aed0843d92ecca9bd858e64a69a83cd4e96e405421cc40641733dd55582c0.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 318, + 316, + 385, + 359 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/89410ff08ecccd04301f629f52ea7c1ca4b6aa0a815571a2208a032b95d229ed.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 421, + 316, + 485, + 359 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/13596732bab156e6c1c51aedacbd0bc7a5776d2e2cc38344b9cc580eec754512.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 500, + 316, + 563, + 359 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/9b298e3f8c6888d371ef219db051868f274af1ba76ad87df5c2ba35ac7b9074b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 584, + 311, + 651, + 363 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/985f93d20324281b66926cb467e7671a0f7de8875707c8b704e7bd81f0b59b9f.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 661, + 311, + 728, + 363 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/6a977f45798fb36ceed91ca7038971b88bf42b34f6281c7a7935b709d39eaec2.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 746, + 311, + 810, + 364 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/da6e2cc1a6c104125f2b358fff6df5fa7d9c602c75f5f555b7a225f5c3d82c80.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 823, + 311, + 888, + 364 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/ea02900ca8a78886a66d9fb14ad43bf1237f53854ed37e043f35a033d10bf8fa.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 258, + 390, + 303, + 455 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/1c189b1c7514283599bb7a16a392b365213526e29463166ec4895e2329bb5384.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 334, + 391, + 382, + 455 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/4598111e22944965925b8b6255fa8536d238e0728b54bc775ab3665a150e976b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 424, + 391, + 468, + 455 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/f787f003adb94debc9461808ba6d8cba55f045b73a279df257fc5bc5af0106d7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 501, + 390, + 547, + 455 + ], + "page_idx": 21 + }, + { + "type": "image", + 
"img_path": "images/d39f4a0e9775078f66933bd2af24c5c55e7c30c9e5950af798f575ded3dae3a4.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 596, + 390, + 643, + 455 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/79d787bb95891662bc37839bf54d0e2b3ad0d240183333d2106b7850bbc79c3b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 673, + 390, + 718, + 455 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/4a57bcadb2ebc10c34dc6b9a62ab981bdefca9bdd0aa3b91e1e53b571cabe5ca.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 756, + 390, + 802, + 455 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/be015bfa3cdaf2b44b0e77531148136a68d7dcb50f87bba546a02fc49b7f56cc.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 828, + 391, + 874, + 455 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/3efd00852b0f34f9a3949add9bb86779268e9d0665af6f4022ec6e8f664dfcfc.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 254, + 494, + 305, + 550 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/79ce326daeeca762ef4a892284df06b42bfe31f307b79d485ade96a33fd8703b.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 333, + 494, + 383, + 550 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/7f282d478cac67c93f20c570e20b2bf512b87fa1c5c30d2ed3ea85a3cfdc8394.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 419, + 494, + 475, + 550 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/7db37af3fa865b27c84c75f3b197817c940260ea28dd6f311150bee7883c2ba3.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 496, + 494, + 552, + 550 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/3172f4ce237b0712571e6a666cdef028fafd9ada1a3e0a3c51f58a2a9e08f2cf.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 594, + 494, + 647, + 551 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/063a1b1a1dece2445c978efb79cb481cb6c2dc5b05b4b900347370a525c3330e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 669, + 493, + 723, + 551 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/d649001cf1a64a9bccb83e00ca44bc0289da7de112ff76e9de8b7b104650577a.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 751, + 493, + 805, + 551 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/397bf7ab6c20cf152a4187b0bd8cb0fee54d69de302d7d64096b293d13739ef9.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 826, + 493, + 880, + 553 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/f3c00f4554388b763523d94ccd0be111d32b9b3979b17b25ace8f3286372199e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 250, + 588, + 310, + 643 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/cb48875f1c399a28a2bf0bfdbd3f70c36f20a6b5f11875bbf048a93e4f984bc1.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 326, + 588, + 387, + 643 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/bc5de79c5c67ade429863cc11eee1a6b0afc78315a836bda44223f7048af9ac7.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 418, + 587, + 478, + 642 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/4bd145682614b11bac9a56dc099cac6e59e04947fbe792c6055ee4d589f19a9a.jpg", + "image_caption": [], + 
"image_footnote": [], + "bbox": [ + 496, + 585, + 555, + 642 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/20de841a7493cc43c2cb949ab3629aae813eba4f26bd57745396dde8a6f55c56.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 593, + 584, + 650, + 642 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/f33003c0021a10760e608a09f1e2174e235a893c03af224878b7982e917691db.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 669, + 584, + 725, + 642 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/3c43596c3225d0d82ee1fa5757da092b782bcbb2c6d31d69f28d886f7e3c50a5.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 750, + 584, + 805, + 642 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/57b44f4b221ef212ab7d4735d105e5f5b1f68a51a901e2996938b59f4901c482.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 823, + 584, + 882, + 643 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/33279c0867af9b38f5de35cfd3cc9b07a12aacdfdd07534a9b1454796260888e.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 264, + 662, + 295, + 727 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/1324ef4f7b4588d0896385dd82eb74a26f890c8a7c2de8af1bd12a8b0492af05.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 343, + 662, + 372, + 727 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/16b11c1c1c3f695af6be023b67866171414828de5ee231504795bca73dc24e87.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 431, + 662, + 460, + 726 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/e0cc24d2030cadf890830426e0016b8aec88c55941529bcb3fc028858245f431.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 509, + 662, + 540, + 727 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/ed1c74da8409a19812447005299ab422bc021b9f4e61e61d6af4d27597b0c2ea.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 606, + 662, + 635, + 727 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/1f58f96a7868db3a8e7b4263ea01128181413837fa70a9aab1033ad5a8d881de.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 681, + 662, + 710, + 727 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/e59d1e6a2e3198192a88bec3b29fe6dec3edb5841dbe4e86bcc01287b56b0458.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 764, + 662, + 794, + 727 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/a833d704f859f81aee44b1e1c5223e6db3d47aac3a3225785bfc2af8fb4313cb.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 834, + 662, + 866, + 727 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/afc33314881650e81bb306937ec04dd25c43a4cea1368f0c20098c2f1c8c3d5d.jpg", + "image_caption": [ + "Figure C.9. Additional in-the-wild examples. The target objects and occluders are marked with the red and green outlines." 
+ ], + "image_footnote": [], + "bbox": [ + 243, + 767, + 313, + 815 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/e24369acd600aa02074e8619cb93b3572d2da16ca27557d198a9c5fea693cab8.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 320, + 768, + 393, + 815 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/8fcfaaec7bcf00678512fe514fdd2d9b17c4675d66f7db3ac9d06b1af67bd111.jpg", + "image_caption": [ + "Reconstruction result" + ], + "image_footnote": [], + "bbox": [ + 413, + 767, + 485, + 815 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/7f78d508d64ff47702c464b05edaa8c3cd052d7b9802f8fae3f7f5d9309c1391.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 490, + 767, + 563, + 815 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/bc66ed7a19f36ca900f75954dd3b53d4decaf085a178460092343926fa3f4cd9.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 581, + 767, + 653, + 815 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/b6cb93b3533b6ba8a98c9de6eba7fefb2244b9e7342055fb2f9fdb67b03c7356.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 656, + 767, + 728, + 815 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/2396eeeede72ea8776bb1f3e698cae247f18e1eb947624d405061a86b31c50e1.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 748, + 767, + 820, + 815 + ], + "page_idx": 21 + }, + { + "type": "image", + "img_path": "images/60ad8f6ac17efd1191fdef40ba05f35cc66e4232ac53ca3680e348d92e5d3380.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 823, + 767, + 895, + 815 + ], + "page_idx": 21 + }, + { + "type": "page_number", + "text": "9", + "bbox": [ + 493, + 925, + 503, + 935 + ], + "page_idx": 21 + } +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13439/696dfadf-883e-4109-9010-e095246a341a_model.json b/data/2025/2503_13xxx/2503.13439/696dfadf-883e-4109-9010-e095246a341a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..ec8d4382b6f4f1fb78728aa8a8e790f83bcf033e --- /dev/null +++ b/data/2025/2503_13xxx/2503.13439/696dfadf-883e-4109-9010-e095246a341a_model.json @@ -0,0 +1,6261 @@ +[ + [ + { + "type": "aside_text", + "bbox": [ + 0.023, + 0.261, + 0.058, + 0.707 + ], + "angle": 270, + "content": "arXiv:2503.13439v1 [cs.CV] 17 Mar 2025" + }, + { + "type": "title", + "bbox": [ + 0.166, + 0.131, + 0.833, + 0.154 + ], + "angle": 0, + "content": "Amodal3R: Amodal 3D Reconstruction from Occluded 2D Images" + }, + { + "type": "text", + "bbox": [ + 0.175, + 0.18, + 0.819, + 0.2 + ], + "angle": 0, + "content": "Tianhao Wu\\(^{1*}\\), Chuanxia Zheng\\(^{2\\dagger}\\), Frank Guan\\(^{3}\\), Andrea Vedaldi\\(^{2}\\), Tat-Jen Cham\\(^{1}\\)" + }, + { + "type": "text", + "bbox": [ + 0.133, + 0.206, + 0.865, + 0.246 + ], + "angle": 0, + "content": "\\(^{*}\\)S-Lab, \\(^{1}\\)Nanyang Technological University, \\(^{2}\\)Visual Geometry Group, University of Oxford \\(^{3}\\)Singapore Institute of Technology" + }, + { + "type": "text", + "bbox": [ + 0.204, + 0.25, + 0.794, + 0.28 + ], + "angle": 0, + "content": "{tianhao001,astjcham}@ntu.edu.sg,{cxzheng,vedaldi} \\(@\\) robots.ox.ac.uk Frank.guan@singaporetech.edu.sg" + }, + { + "type": "image_caption", + "bbox": [ + 0.098, + 0.295, + 0.172, + 0.308 + ], + "angle": 0, + "content": "3D Recast" + }, + { + "type": "image", + "bbox": [ + 0.101, + 0.308, + 0.18, + 0.422 + ], + "angle": 0, + "content": null + }, + 
{ + "type": "image_caption", + "bbox": [ + 0.105, + 0.422, + 0.159, + 0.433 + ], + "angle": 0, + "content": "2D Input" + }, + { + "type": "image", + "bbox": [ + 0.18, + 0.294, + 0.323, + 0.421 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.194, + 0.422, + 0.306, + 0.433 + ], + "angle": 0, + "content": "Stochastic sample 1" + }, + { + "type": "image", + "bbox": [ + 0.331, + 0.294, + 0.471, + 0.422 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.354, + 0.423, + 0.466, + 0.434 + ], + "angle": 0, + "content": "Stochastic sample 2" + }, + { + "type": "image_caption", + "bbox": [ + 0.479, + 0.296, + 0.67, + 0.308 + ], + "angle": 0, + "content": "In-the-wild Reconstruction" + }, + { + "type": "image", + "bbox": [ + 0.479, + 0.313, + 0.616, + 0.416 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.521, + 0.422, + 0.576, + 0.434 + ], + "angle": 0, + "content": "2D Input" + }, + { + "type": "image", + "bbox": [ + 0.617, + 0.304, + 0.758, + 0.387 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.617, + 0.387, + 0.687, + 0.418 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.639, + 0.422, + 0.75, + 0.434 + ], + "angle": 0, + "content": "Stochastic sample 1" + }, + { + "type": "image", + "bbox": [ + 0.764, + 0.302, + 0.904, + 0.387 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.764, + 0.387, + 0.835, + 0.419 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.784, + 0.422, + 0.896, + 0.434 + ], + "angle": 0, + "content": "Stochastic sample 2" + }, + { + "type": "image_caption", + "bbox": [ + 0.098, + 0.438, + 0.273, + 0.452 + ], + "angle": 0, + "content": "3D Scene Decomposition" + }, + { + "type": "image", + "bbox": [ + 0.107, + 0.452, + 0.221, + 0.541 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.131, + 0.542, + 0.184, + 0.553 + ], + "angle": 0, + "content": "2D Input" + }, + { + "type": "image", + "bbox": [ + 0.23, + 0.453, + 0.445, + 0.54 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.453, + 0.452, + 0.671, + 0.541 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.492, + 0.542, + 0.62, + 0.553 + ], + "angle": 0, + "content": "Decomposed 3D assets" + }, + { + "type": "image", + "bbox": [ + 0.679, + 0.452, + 0.895, + 0.541 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.567, + 0.907, + 0.624 + ], + "angle": 0, + "content": "Figure 1. Example results of Amodal3R. Given partially visible objects within images (occluded regions are shown in black, visible areas in red outlines), our Amodal3R generates diverse semantically meaningful 3D assets with reasonable geometry and plausible appearance. We sample multiple times to get different results from the same occluded input. Trained on synthetic datasets, it generalizes well to real-scene and in-the-wild images, where most objects are partially visible, and reconstructs reasonable 3D assets." 
+ }, + { + "type": "title", + "bbox": [ + 0.249, + 0.636, + 0.327, + 0.651 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.088, + 0.667, + 0.486, + 0.879 + ], + "angle": 0, + "content": "Most image-based 3D object reconstructors assume that objects are fully visible, ignoring occlusions that commonly occur in real-world scenarios. In this paper, we introduce Amodal3R, a conditional 3D generative model designed to reconstruct 3D objects from partial observations. We start from a \"foundation\" 3D generative model and extend it to recover plausible 3D geometry and appearance from occluded objects. We introduce a mask-weighted multi-head cross-attention mechanism followed by an occlusion-aware attention layer that explicitly leverages occlusion priors to guide the reconstruction process. We demonstrate that, by training solely on synthetic data, Amodal3R learns to recover full 3D objects even in the presence of occlusions in real scenes. It substantially outperforms existing meth" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.638, + 0.907, + 0.7 + ], + "angle": 0, + "content": "ods that independently perform 2D amodal completion followed by 3D reconstruction, thereby establishing a new benchmark for occlusion-aware 3D reconstruction. See our project page https://sm0kywu.github.io/Amodal3R/." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.725, + 0.645, + 0.74 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.75, + 0.907, + 0.903 + ], + "angle": 0, + "content": "Humans possess a remarkable ability to infer the complete 3D shape and appearance of objects from single views, even when the objects are partly hidden behind occluders. This ability, namely amodal completion, is key to operating in complex real-world environments, where objects are often partially occluded by surrounding clutter. However, existing image-based 3D reconstruction models [5, 6, 24, 42, 49, 50, 67, 72, 73, 76, 78, 81, 88, 89] fail to recover full 3D assets when the object is partially occluded. We thus consider the problem of reconstructing 3D objects" + }, + { + "type": "page_footnote", + "bbox": [ + 0.11, + 0.887, + 0.189, + 0.9 + ], + "angle": 0, + "content": "†Project Lead." + }, + { + "type": "page_number", + "bbox": [ + 0.495, + 0.926, + 0.504, + 0.936 + ], + "angle": 0, + "content": "1" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.483, + 0.121 + ], + "angle": 0, + "content": "from one or more partially-occluded views, a novel task that we call amodal 3D reconstruction." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.123, + 0.483, + 0.365 + ], + "angle": 0, + "content": "Amodal 3D reconstruction is a challenging task that requires reconstructing an object's 3D geometry and appearance while completing its occluded parts, both of which are highly ambiguous. Previous approaches to amodal 3D reconstruction [7, 56] have decomposed the task into 2D amodal completion [85, 86, 91], followed by conventional 3D reconstruction [42, 76, 89]. While these two-stage pipelines are easy to implement, they have some drawbacks. First, 2D amodal completion methods rely predominantly on appearance-based priors rather than explicit 3D structural cues. This lack of 3D geometric reasoning is suboptimal. Second, some 3D reconstruction methods can use or require multiple views to function. 
In this case, 2D amodal completion may lack multi-view consistency, particularly when it is performed independently for different views, which confuses the 3D reconstruction process." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.366, + 0.483, + 0.577 + ], + "angle": 0, + "content": "In this paper, we introduce Amodal3R, a novel single-stage paradigm for amodal 3D reconstruction that surpasses previous state-of-the-art approaches (see Fig. 1). Amodal3R augments the \"foundation\" 3D generator TRELLIS [81] with an additional branch to condition on occlusions. Its key advantage is performing reconstruction, completion, and occlusion reasoning directly in a 3D latent space instead of using a two-stage scheme. In this way, the model can reconstruct both visible and occluded regions of the object coherently and plausibly. To adapt TRELLIS to amodal reconstruction, we introduce mask weighting in multi-head cross-attention and a new occlusion-aware layer. These guide the model to focus more on visible parts of the object without perturbing the pre-trained model too much." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.579, + 0.483, + 0.684 + ], + "angle": 0, + "content": "We evaluated the effectiveness of Amodal3R on datasets like Google Scanned Objects [16] and Toys4K [69] augmented with occlusions, on 3D scenes from Replica [70], and on real-world monocular images. Without relying on additional heuristics, Amodal3R achieves significantly superior performance compared to state-of-the-art models and generalizes well to different datasets, including real ones." + }, + { + "type": "text", + "bbox": [ + 0.11, + 0.686, + 0.449, + 0.7 + ], + "angle": 0, + "content": "In summary, our main contributions are as follows:" + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.703, + 0.483, + 0.761 + ], + "angle": 0, + "content": "- We propose a novel occlusion-aware 3D reconstruction model that directly reconstructs complete and high-quality 3D objects from one or more partially occluded views, without relying on 2D amodal completion models." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.763, + 0.483, + 0.837 + ], + "angle": 0, + "content": "- We introduce a mask-weighted cross-attention mechanism and an occlusion-aware layer to inject occlusion awareness into an existing 3D reconstruction model, improving both the geometry and appearance of the reconstructed objects when they are partially occluded." + }, + { + "type": "text", + "bbox": [ + 0.092, + 0.838, + 0.483, + 0.898 + ], + "angle": 0, + "content": "- We demonstrate via experiments on the 3D scene dataset Replica and real-world images that our one-stage pipeline significantly outperforms existing two-stage ones, establishing a new benchmark for amodal 3D reconstruction." + }, + { + "type": "list", + "bbox": [ + 0.092, + 0.703, + 0.483, + 0.898 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.09, + 0.642, + 0.108 + ], + "angle": 0, + "content": "2. Background" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.115, + 0.905, + 0.431 + ], + "angle": 0, + "content": "2D Amodal Completion. Recent advances in 2D amodal completion [2, 56, 83, 85, 86, 91] have achieved significant success in reconstructing occluded regions of objects in 2D images. While these methods show promise for 3D generation pipelines [2, 56], they still have limitations. First, 2D amodal completion models have limited 3D geometric understanding as they interpret images as 2D patterns. 
While excelling at completing textures, they may generate physically implausible geometries when handling complex occlusions. This stems from their lack of explicit 3D reasoning and reliance on 2D appearance priors, without true volumetric understanding. Second, for models that use multi-view images as input, the results of the 2D amodal completion are often inconsistent across views. Inconsistent views need to be corrected by the 3D reconstructor, which cause confusion (see Sec. 4). Although there has been significant work on multi-view consistent generation [7, 64, 72, 74, 82, 94], multi-view consistent completion is less explored. RenderDiffusion [1] contains an example, but the results are often blurry or lack details. This motivates our 3D-centric reconstruction framework." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.433, + 0.905, + 0.674 + ], + "angle": 0, + "content": "3D Shape Completion. Several methods [12, 14, 71, 95] start from a partial 3D reconstruction, then complete it in 3D. They use encoder-decoder architectures [25] or volumetric representations [10, 30] to robustly recover 3D shape but not texture, which is necessary in many applications. They also still require recovering the partial 3D geometry from an occluded image, a challenge in itself. Furthermore, they ignore the input image when completing the object in 3D, disregarding the occlusion pattern that caused the 3D geometry to be recovered only partially. Recent work [11] utilizes the partially visible object as input specifically for 3D shape completion. However, it focuses solely on geometry reconstruction, without recovering the object's texture and appearance. In contrast, our approach is end-to-end, recovering the complete 3D shape and appearance of the object from the occluded image." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.675, + 0.905, + 0.901 + ], + "angle": 0, + "content": "3D Generative Models. Early advancements in 3D generation are based on GANs [19], exploring various 3D representations such as point clouds [26, 40], voxel grids [84, 96], view sets [50, 57], NeRF [4, 5, 52, 62], SDF [18], and 3D Gaussian mixtures [79]. While GANs can capture complex 3D structures, they struggle with stability, scalability, and data diversity. The focus then shifted to diffusion models [23, 61, 68], which were more capable and versatile. They were first applied to novel view synthesis [77] in image space, before expanding to model a variety of 3D representations, including point clouds [46, 47, 51, 80, 93], voxel grids [38, 48], triplanes [66, 97], meshes [20, 44], and 3D Gaussian mixtures [9, 36, 39, 87]. Autoregressive models [8, 49, 67] have also been explored for mesh generation, focusing on improving the topology of the final 3D mesh." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.505, + 0.936 + ], + "angle": 0, + "content": "2" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.099, + 0.093, + 0.899, + 0.226 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.099, + 0.229, + 0.899, + 0.35 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.36, + 0.907, + 0.43 + ], + "angle": 0, + "content": "Figure 2. Overview of Amodal3R. Given an image as input and the regions of interest, Amodal3R first extracts the partially visible target object, along with the visibility and occlusion masks using an off-the-shelf 2D segmenter. 
It then applies DINOv2 [55] to extract features \\( \\pmb{c}_{dino} \\) as additional conditioning for the 3D reconstructor. To enhance occlusion reasoning, each transformer block incorporates a mask-weighted cross-attention (via \\( \\pmb{c}_{vis} \\)) and occlusion-aware attention layer (via \\( \\pmb{c}_{occ} \\)), ensuring the 3D reconstructor accurately perceives visible information while effectively inferring occluded parts. For conditioning details, see Sec. 3.2." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.456, + 0.483, + 0.623 + ], + "angle": 0, + "content": "Authors have also shifted from performing diffusion in 2D image space [64, 72, 89] to 3D latent spaces [29, 53, 60, 65, 75, 81]. Such methods typically consist of two stages: the first for generating geometry and the second for generating appearance, and are capable of high-quality imaged-to-3D generation. However, they assume that input objects are fully visible, which limits their application to real-world scenes, where occlusions are frequent. Here, we extend such approaches to generate high-quality 3D assets from occluded input images — a challenging task that requires inferring complete 3D objects from partial information." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.634, + 0.182, + 0.649 + ], + "angle": 0, + "content": "3. Method" + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.659, + 0.483, + 0.764 + ], + "angle": 0, + "content": "Consider an image \\( x \\) containing a partially occluded view of an object of interest \\( o_i \\), we wish to reconstruct the complete 3D shape and appearance \\( y \\) of the object \\( o_i \\). This task is thus conceptually similar to existing image-to-3D reconstruction, except that here the object is partially occluded instead of being fully visible in \\( x \\). We call this problem amodal 3D reconstruction." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.765, + 0.483, + 0.902 + ], + "angle": 0, + "content": "Here, we elaborate on Amodal3R (Fig. 2), a new method for amodal 3D reconstruction. Unlike recent two-stage methods [7, 56] that first perform 2D amodal completion followed by 3D reconstruction, Amodal3R is an end-to-end occlusion-aware 3D reconstruction model that generates complete shapes and multi-view geometry directly within the 3D space. Formally, Amodal3R is a conditional generator \\( v_{\\theta}(y|o_i,M_{\\mathrm{vis}},M_{\\mathrm{occ}}) \\) that takes as input the image \\( x \\), centered on the partially visible object \\( o_i \\), as well as the" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.456, + 0.907, + 0.699 + ], + "angle": 0, + "content": "visibility mask \\( M_{\\mathrm{vis}} \\) and occlusion mask \\( M_{\\mathrm{occ}} \\). The visibility mask \\( M_{\\mathrm{vis}} \\) marks the pixels of image \\( x \\) containing the object \\( o_i \\), while \\( M_{\\mathrm{occ}} \\) marks the pixels containing the oc-clusters, i.e., all other objects that potentially obscure \\( o_i \\). For real images, these masks can be efficiently obtained using pre-trained 2D segmentation models like Segment Anything [35]. By providing point coordinate prompts for the interest object \\( o_i \\) and its occlusters respectively, the segmentation model can generate the corresponding masks. In cases where multiple occlusters are present or when occlusters fragment the visible area of the target object, a sequential process is employed. 
Specifically, point prompts for each visible/occluding region are provided to the 2D segmenter one at a time, with the model generating an individual mask for each region. These masks are then aggregated to form a comprehensive visibility/occlusion mask." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.699, + 0.909, + 0.79 + ], + "angle": 0, + "content": "The challenges for Amodal3R are how to: (1) produce a plausible and complete 3D shape and appearance from partial observations, even in the absence of multi-view information; and (2) ensure 3D consistency in terms of geometry and photometry, maintaining seamless visual coherence between visible and generated regions." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.802, + 0.744, + 0.817 + ], + "angle": 0, + "content": "3.1. Preliminaries: TRELLIS" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.825, + 0.907, + 0.903 + ], + "angle": 0, + "content": "We begin by briefly reviewing the TRELLIS [81] model on which our model is based. TRELLIS is a conditional 3D diffusion model that performs denoising in a sparse 3D latent space. First, it introduces a transformer-based variational autoencoder (VAE) \\((\\mathcal{E},\\mathcal{D})\\), where the encoder \\(\\mathcal{E}\\) maps" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.504, + 0.936 + ], + "angle": 0, + "content": "3" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.485, + 0.257 + ], + "angle": 0, + "content": "sparse voxel features to structured latents \\(\\mathbf{z}\\), and the decoder \\(\\mathcal{D}\\) converts them into desired output representations, including 3D Gaussians [31], radiance fields, and meshes. In particular, a 3D object \\(\\mathbf{o}_i\\) is encoded using its structured latent variables (SLAT) defined as \\(\\mathbf{z} = \\{(z_i, p_i)\\}_{i=1}^L\\), where \\(z_i \\in \\mathbb{R}^C\\) is a local latent feature attached to the voxel at position \\(p_i \\in \\{0, 1, \\dots, N-1\\}^3\\), \\(N\\) is the spatial resolution of the grid, and \\(L \\ll N^3\\) is the number of active voxels intersecting the object's surface. This representation encodes both coarse geometric structures and fine appearance details by associating local latents with active voxels." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.258, + 0.484, + 0.395 + ], + "angle": 0, + "content": "TRELLIS comprises two diffusion models, one to predict the active voxel centers \\(\\{\\pmb{p}_i\\}_{i=1}^L\\) (stage 1) and the other to recover the corresponding latents \\(\\{\\pmb{z}_i\\}_{i=1}^L\\) (stage 2). Each model can be viewed as a denoising neural network \\(v_\\theta\\) operating in a latent space \\(\\ell\\), and trained to remove Gaussian noise \\(\\epsilon \\sim \\mathcal{N}(0, I)\\) added to the latent code, i.e., \\(\\ell^{(t)} = (1 - t)\\ell^{(0)} + t\\epsilon\\), where \\(t \\in [0,1]\\) is the noise level [43]. The denoising network \\(v_\\theta\\) is trained to minimize the flow loss:" + }, + { + "type": "equation", + "bbox": [ + 0.135, + 0.401, + 0.483, + 0.425 + ], + "angle": 0, + "content": "\\[\n\\min _ {\\theta} \\mathbb {E} _ {(\\boldsymbol {\\ell} ^ {(0)}, x), t, \\epsilon} \\| v _ {\\theta} (\\boldsymbol {\\ell} ^ {(t)}, x, t) - (\\epsilon - \\boldsymbol {\\ell} ^ {(0)}) \\| _ {2} ^ {2}, \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.433, + 0.484, + 0.614 + ], + "angle": 0, + "content": "where \\( x \\) is the image prompt. 
In stage 1, the latent code is a compressed version of the \\( N \\times N \\times N \\) occupancy volume, where the spatial resolution is reduced from \\( N = 64 \\) to \\( r = 16 \\). Hence, in this case the latent vector is a matrix \\( \\ell \\in \\mathbb{R}^{L' \\times C'} \\) of \\( L' = r^3 = 4096C' \\)-dimensional tokens. In stage 2, the latent code \\( \\ell = \\{z_i\\}_{i=1}^L \\in \\mathbb{R}^{L \\times C} \\) is a matrix of \\( LC \\)-dimensional tokens, where \\( L \\) is now the number of active voxels. Similar transformers are implemented to the corresponding denoising networks \\( v_{\\theta} \\) (Fig. 3). The conditioning image \\( x \\) is read via cross-attention layers that pool information from the tokens \\( c_{\\mathrm{dino}} \\) extracted by DINO [55] from the image \\( x \\)." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.623, + 0.418, + 0.637 + ], + "angle": 0, + "content": "3.2. Mask-Conditional Generative Models" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.645, + 0.484, + 0.856 + ], + "angle": 0, + "content": "The key change needed to fine-tune the generator to work with partially occluded images is to condition the transformers \\( v_{\\theta} \\) not only on the image \\( x \\), but also on the masks \\( M_{\\mathrm{vis}} \\) and \\( M_{\\mathrm{occ}} \\). A naive approach is to downsample or embed the masks to obtain tokens \\( (c_{\\mathrm{vis}}, c_{\\mathrm{occ}}) \\) that can be concatenated to the image tokens \\( c_{\\mathrm{dino}} \\), to be processed by cross-attention by the transformer as before. However, the image \\( x \\) contains significantly more information than the binary masks \\( M_{\\mathrm{vis}} \\) and \\( M_{\\mathrm{occ}} \\), so the transformer, which is initially trained to consider \\( x \\) only, may simply ignore this information. This is compounded by the fact that learning to use this new information, which involves a new type of tokens that are incompatible with image ones, may require aggressive fine-tuning of the transformer." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.856, + 0.484, + 0.901 + ], + "angle": 0, + "content": "To sidestep this problem, inspired by masked attention approaches in 2D completion [92], we introduce mask-weighted cross-attention and an occlusion-aware attention" + }, + { + "type": "image", + "bbox": [ + 0.518, + 0.096, + 0.668, + 0.305 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.547, + 0.316, + 0.65, + 0.328 + ], + "angle": 0, + "content": "(a) TRELLIS [81]" + }, + { + "type": "image", + "bbox": [ + 0.675, + 0.09, + 0.904, + 0.305 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.734, + 0.316, + 0.856, + 0.328 + ], + "angle": 0, + "content": "(b) Amodal3R (Ours)" + }, + { + "type": "image_caption", + "bbox": [ + 0.512, + 0.333, + 0.907, + 0.39 + ], + "angle": 0, + "content": "Figure 3. The transformer structure of Amodal3R. Compared with the original TRELLIS [81] design, we further introduce the mask-weighted cross-attention and occlusion-aware layer. It applies to both sparse structure and SLAT diffusion models." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.413, + 0.905, + 0.457 + ], + "angle": 0, + "content": "layer to better utilize the visibility mask \\( M_{\\mathrm{vis}} \\) and the occlusion mask \\( M_{\\mathrm{occ}} \\) without disrupting the pre-trained 3D generator too much. These are described next." 
+ }, + { + "type": "text", + "bbox": [ + 0.512, + 0.458, + 0.907, + 0.58 + ], + "angle": 0, + "content": "Mask-weighted Cross-Attention. A key novel component of our model is mask-weighted cross-attention, which allows the model to focus its attention on the visible parts of the object. We implement this mechanism in each attention block in the transformers \\( v_{\\theta} \\) of Sec. 3.1. Given the latent tokens \\( \\ell \\in \\mathbb{R}^{L \\times C} \\) input to a cross-attention layer as well as the image features \\( c_{\\mathrm{dino}} \\in \\mathbb{R}^{K \\times C'} \\) of the partially visible object, cross-attention computes the similarity score matrix" + }, + { + "type": "equation", + "bbox": [ + 0.532, + 0.586, + 0.905, + 0.607 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {q} = W _ {q} \\boldsymbol {\\ell}, \\quad [ \\boldsymbol {k}, \\boldsymbol {v} ] = W _ {k v} \\boldsymbol {c} _ {\\text {d i n o}}, \\quad S = \\boldsymbol {q} \\boldsymbol {k} ^ {\\top} / \\sqrt {D}, \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.613, + 0.905, + 0.673 + ], + "angle": 0, + "content": "where \\( W_{q} \\) and \\( W_{kv} \\) are learnable projections that map the latents \\( \\ell \\) to the query \\( q \\) and the conditioning image feature \\( c_{\\mathrm{dino}} \\) to the key \\( k \\) and the value \\( v \\), respectively, and \\( D \\) is the dimension of the query and key vectors." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.673, + 0.907, + 0.855 + ], + "angle": 0, + "content": "We wish to bias the token similarity matrix \\( S \\in \\mathbb{R}^{L \\times K} \\) towards the visible parts of the object. Recall that \\( K \\) is the number of tokens \\( c_{\\mathrm{dino}} \\) extracted by DINOv2 [54] from the occluded image \\( x \\cdot M_{\\mathrm{vis}} \\). Each token thus corresponds to a \\( P \\times P \\) patch in the input image (where \\( P = 14 \\)). We extract analogous flatten 1-D tokens \\( c_{\\mathrm{vis}} = [\\rho(M_{\\mathrm{vis},1}), \\dots, \\rho(M_{\\mathrm{vis},K})] \\) from the visibility mask by calculating the fraction of \\( P \\times P \\) pixels that are visible in the \\( j \\)-th image patch \\( M_{\\mathrm{vis},j} \\). \\( c_{\\mathrm{vis},j} = \\rho(M_{\\mathrm{vis},j}) \\in [0,1] \\). We then use these quantities to bias the computation of the attention matrix \\( A = \\text{softmax}(S + \\log c_{\\mathrm{vis}}) \\in [0,1]^{L \\times M} \\) via broadcasting. Hence:" + }, + { + "type": "equation", + "bbox": [ + 0.609, + 0.863, + 0.905, + 0.9 + ], + "angle": 0, + "content": "\\[\nA _ {i j} = \\frac {\\boldsymbol {c} _ {\\text {v i s} , j} \\exp \\left(S _ {i j}\\right)}{\\sum_ {k = 1} ^ {K} \\boldsymbol {c} _ {\\text {v i s} , k} \\exp \\left(S _ {i k}\\right)}. \\tag {3}\n\\]" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.505, + 0.936 + ], + "angle": 0, + "content": "4" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.11, + 0.089, + 0.458, + 0.155 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.164, + 0.486, + 0.234 + ], + "angle": 0, + "content": "Figure 4. 3D-consistent mask example. Given a 3D mesh, we render selected triangles in a distinct color from the others to generate multi-view consistent masks. It allows the evaluation of multi-view methods in handling contact occlusion. 
(The occluded regions are shown in red.)" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.261, + 0.484, + 0.367 + ], + "angle": 0, + "content": "In this manner, the visibility flag modulates the attention matrix \\( A \\) smoothly. If there are no visible pixels in a patch \\( j \\), then \\( A_{ij} = 0 \\), so the corresponding image tokens are skipped in cross attention. While we have illustrated how this works for a single head, in practice Amodal3R uses a multiple-head transformer, to which Eq. (3) extends trivially. Please see the supplementary materials for details." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.368, + 0.484, + 0.548 + ], + "angle": 0, + "content": "Occlusion-aware Attention Layer. In addition to encouraging the network to focus its attention on the visible part of the object, we also introduce an occlusion-aware attention layer. For amodal completion, it is not enough to specify the visible information; instead, we must also differentiate foreground occluders from the background, as this explicitly identifies the potential regions requiring completion. Namely, if a pixel is denoted as invisible in the mask \\( M_{\\mathrm{vis}} \\), this might be because there is an occluder in front of that pixel (so the pixel could have contained the object except due to occlusion), or because the pixel is entirely off the object. This information is encoded by the mask \\( M_{\\mathrm{occ}} \\)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.55, + 0.484, + 0.671 + ], + "angle": 0, + "content": "To allow the model to distinguish between visible, occluded and background areas, we add one more cross-attention layer placed immediately after the mask-weighted cross-attention layer, and pooling solely the occlusion mask \\( M_{\\mathrm{occ}} \\). In order to do so, we encode the occlusions mask \\( M_{\\mathrm{vis}} \\) as a set of flatten 1-D tokens \\( c_{\\mathrm{occ}} = [\\rho(M_{\\mathrm{occ},1}), \\ldots, \\rho(M_{\\mathrm{occ},M})] \\), as before and then simply pool \\( c_{\\mathrm{occ}} \\) using a cross-attention layer." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.682, + 0.357, + 0.697 + ], + "angle": 0, + "content": "3.3. Simulating Occluded 3D Data" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.704, + 0.483, + 0.763 + ], + "angle": 0, + "content": "To train and evaluate our model, we require a dataset of 3D assets imaged in scenarios with clutter and occlusions. It is challenging to obtain such data in the real world, so we resorted to synthetic data and simulation to train our model." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.765, + 0.484, + 0.902 + ], + "angle": 0, + "content": "Random 2D Occlusions. In order to train our model, each training sample \\((o,x,M_{\\mathrm{vis}},M_{\\mathrm{occ}})\\) consists of a 3D object \\(o\\) (from which ground truth latents can be obtained by using the encoders of Sec. 3.1), an image \\(x\\) with partial occlusions, and corresponding visibility and occlusion masks \\(M_{\\mathrm{vis}}\\) and \\(M_{\\mathrm{occ}}\\). In a real scenario, occlusions arise from other objects in the scene. In a multi-view setting, like the one discussed below, these occlusions need to be consistent across views, reflecting the underlying scene geometry." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.092, + 0.905, + 0.168 + ], + "angle": 0, + "content": "try. 
However, because our model is trained for single-view reconstruction, we can simulate occlusions by randomly masking parts of the object after rendering it in 2D. This is simpler and allows us to generate fresh occlusion patterns each time a view is sampled for training." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.168, + 0.906, + 0.304 + ], + "angle": 0, + "content": "Thus, given \\( o \\) and an image \\( x \\) rendered from a random viewpoint, we generate random occlusion masks as follows. Inspired by work on 2D completion [27, 41, 58, 90], we randomly place lines, ellipses, and rectangles, simulating diverse masking patterns. The union of these random shapes gives us the occlusion mask \\( M_{\\mathrm{occ}} \\). Then, if \\( M_{\\mathrm{obj}} \\) is the mask of the unoccluded object, the visible mask is given by \\( M_{\\mathrm{vis}} = M_{\\mathrm{obj}} \\odot (1 - M_{\\mathrm{occ}}) \\). Examples of such patterns are given in the supplementary material." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.305, + 0.905, + 0.41 + ], + "angle": 0, + "content": "3D-consistent occlusions. In a real scene imaged from multiple viewpoints, occlusions are not random but consistent across views as they are caused by other objects. This is particularly true for contact occlusions, where part of an object remains occluded by another from all viewpoints. To evaluate the model's performance under such challenging conditions, 3D-consistent masks are required." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.411, + 0.906, + 0.532 + ], + "angle": 0, + "content": "To efficiently generate such contact occlusion masks, we leverage 3D meshes during rendering. Starting from a randomly chosen triangle, we apply a random-walk strategy to iteratively select neighboring triangles, forming continuous occluded regions until the predefined mask ratio is met. By rendering these masked meshes using the same camera parameters as the RGB images, we ensure multi-view consistency in the generated occlusion masks (see Fig. 4)." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.543, + 0.842, + 0.56 + ], + "angle": 0, + "content": "3.4. Reconstruction with Multi-view Input" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.566, + 0.907, + 0.793 + ], + "angle": 0, + "content": "Since our flow-based model performs multiple denoising steps and does not require known camera poses for input views, Amodal3R can accept multi-view reference images \\(\\mathcal{X} = \\{x_i\\}_{i=1}^N\\) as conditioning inputs at different steps of the denoising process. While multi-view conditioning naturally improves reconstruction performance, a potential issue with such multi-view conditioning is that an image used earlier in the denoising process has a stronger influence on the final 3D geometry. This is because early denoising steps establish the coarse geometry, whereas later steps refine higher-frequency details [33, 34]. Therefore, we prioritize input images based on their visibility. Specifically, when experimenting with multi-view inputs, we sort the images in proportion to the object visibility \\(|M_{\\mathrm{vis}}|\\), ensuring that images with higher visibility are used as primary inputs." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.808, + 0.646, + 0.825 + ], + "angle": 0, + "content": "4. Experiments" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.833, + 0.707, + 0.849 + ], + "angle": 0, + "content": "4.1. 
Experiment Settings" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.856, + 0.906, + 0.902 + ], + "angle": 0, + "content": "Datasets. Amodal3R is trained on a combination of 3D synthetic dataset: 3D-FUTURE (9,472 objects [17]), ABO (4,485 objects [13]), and HSSD (6,670 objects [32]). Once" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.504, + 0.936 + ], + "angle": 0, + "content": "5" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.094, + 0.089, + 0.907, + 0.25 + ], + "angle": 0, + "content": "
MethodV-num2D-CompFID ↓KID(%) ↓CLIP ↑P-FID ↓COV(%) ↑MMD(‰) ↓
GaussianAnything [37]1pix2gestalt [56]92.261.300.7434.6935.925.03
Real3D [28]1pix2gestalt [56]91.212.020.7523.9219.619.21
TRELLIS [81]1pix2gestalt [56]58.825.870.7626.4331.654.17
Amodal3R (Ours)1-30.640.350.817.6939.613.62
LaRa [6]4pix2gestalt [56]172.844.540.7066.3424.568.11
LaRa [6]4pix2gestalt [56]+MV97.532.630.7521.8026.218.61
TRELLIS [81]4pix2gestalt [56]65.696.920.7824.6432.334.26
TRELLIS [81]4pix2gestalt [56]+MV60.371.850.8319.6831.754.21
Amodal3R (Ours)4-26.270.220.845.0338.743.61
" + }, + { + "type": "table_caption", + "bbox": [ + 0.09, + 0.254, + 0.907, + 0.325 + ], + "angle": 0, + "content": "Table 1. Amodal 3D reconstruction results on GSO [16]. Here, V-num denotes the number of input views, and 2D-Comp refers to the 2D completion method. For single-view image-to-3D, we first complete occluded objects using pix2gestalt [56] before passing them to various 3D models. For multi-view image-to-3D, we adopt two variants: 1) pix2gestalt [56] is applied independently on each view; 2) pix2gestalt \\([56] + \\mathrm{MV}\\): a single-view completion is generated first, followed by multi-view diffusion [63] to synthesize 4 views as inputs. Without bells and whistles, Amodal3R consistently outperforms state-of-the-art models across all evaluation metrics." + }, + { + "type": "table", + "bbox": [ + 0.094, + 0.338, + 0.907, + 0.499 + ], + "angle": 0, + "content": "
MethodV-num2D-CompFID ↓KID(%) ↓CLIP ↑P-FID ↓COV(%) ↑MMD(‰) ↓
GaussianAnything [37]1pix2gestalt [56]57.171.220.8021.9733.567.23
Real3D [28]1pix2gestalt [56]59.921.630.7923.3124.359.53
TRELLIS [81]1pix2gestalt [56]43.056.830.8026.0426.286.87
Amodal3R (Ours)1-23.450.420.835.0037.095.89
LaRa [6]4pix2gestalt [56]123.523.610.7545.9127.899.67
LaRa [6]4pix2gestalt [56]+MV75.334.140.8013.0024.8210.93
TRELLIS [81]4pix2gestalt [56]46.348.770.8128.7625.357.13
TRELLIS [81]4pix2gestalt [56]+MV43.007.530.8124.4126.557.05
Amodal3R (Ours)4-20.930.500.853.7839.035.75
" + }, + { + "type": "table_caption", + "bbox": [ + 0.108, + 0.503, + 0.887, + 0.518 + ], + "angle": 0, + "content": "Table 2. Amodal 3D reconstruction results on Toys4K [69]. The experimental setting is the same to Tab. 1, except for the dataset." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.544, + 0.483, + 0.742 + ], + "angle": 0, + "content": "Amodal3R is trained, we first assess its effectiveness on Toys4K (randomly sampling 1,500 objects [69]) and Google Scanned Objects (GSO) (1,030 objects [16]), which are excluded from our training data for our and the baseline model. During inference, a 3D-consistent mask occludes each input object, and each view is augmented with additional randomly generated occlusion areas. This ensures that the model cannot directly extract the region required to be completed from the occlusion regions. To further assess the out-of-distribution generalization ability in practical applications, we also evaluate all models on the 3D scene dataset Replica [70] as well as on in-the-wild images. The training and evaluation dataset will be released." + }, + { + "type": "text", + "bbox": [ + 0.089, + 0.75, + 0.483, + 0.903 + ], + "angle": 0, + "content": "Metrics. To measure the quality of the rendered images, we use the Fréchet Inception Distance (FID) [21] and the Kernel Inception Distance (KID) [3]. To measure the quality of the reconstructed 3D geometry, we use the Coverage Score (COV), the Point cloud FID (P-FID) [51], and the Minimum Matching Distance (MMD) using the Chamfer Distance (CD). COV measures the diversity of the results and P-FID and MMD the quality of the 3D reconstruction. We also use the CLIP score [59] to evaluate the consistency between each pair generated and ground-truth objects." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.544, + 0.907, + 0.667 + ], + "angle": 0, + "content": "Baselines. Most 3D generative models are trained on complete object inputs. To ensure fair comparisons, we complete the partially visible 2D objects before passing them to 3D generators. Sepecifically, we use pix2gestalt [56], a state-of-the-art 2D amodal completion network. Using this protocol, we compare Amodal3R to state-of-the-art methods such as TRELLIS [81], Real3D [28], GaussianAnything [37]ICLR'25, and LaRa [6]ECCV'24." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.667, + 0.909, + 0.759 + ], + "angle": 0, + "content": "Implementation Details. Amodal3R is trained on 4 A100 GPUs (40G) for 20K steps with a batch size of 16, taking approximately a day. Following TRELLIS [81], we implement classifier-free guidance (CFG [22]) with a drop rate of 0.1 and AdamW [45] optimizer with a learning rate of 1e-4. More details are provided in the supplementary material." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.771, + 0.655, + 0.787 + ], + "angle": 0, + "content": "4.2. Main Results" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.795, + 0.907, + 0.903 + ], + "angle": 0, + "content": "Quantitative Results. We compare Amodal3R to the state-of-the-art for amodal 3D reconstruction in Tabs. 1 and 2. Amodal3R significantly outperforms the baselines across all metrics in both single- and multi-view 3D reconstruction, demonstrating its effectiveness. 
Interestingly, baselines that use multiple but potentially inconsistently-completed views (\"4 V-num + pix2gestalt\") are worse than" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.926, + 0.505, + 0.937 + ], + "angle": 0, + "content": "6" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.1, + 0.089, + 0.177, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.13, + 0.192, + 0.16, + 0.203 + ], + "angle": 0, + "content": "Input" + }, + { + "type": "image", + "bbox": [ + 0.188, + 0.092, + 0.243, + 0.189 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.179, + 0.192, + 0.262, + 0.204 + ], + "angle": 0, + "content": "pix2gestalt [56]" + }, + { + "type": "image", + "bbox": [ + 0.264, + 0.091, + 0.331, + 0.187 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.276, + 0.192, + 0.399, + 0.204 + ], + "angle": 0, + "content": "GaussianAnything [37]" + }, + { + "type": "image", + "bbox": [ + 0.335, + 0.092, + 0.402, + 0.187 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.336, + 0.192, + 0.4, + 0.204 + ], + "angle": 0, + "content": "nything [37]" + }, + { + "type": "image", + "bbox": [ + 0.422, + 0.091, + 0.482, + 0.187 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.46, + 0.192, + 0.526, + 0.203 + ], + "angle": 0, + "content": "Real3D [28]" + }, + { + "type": "image", + "bbox": [ + 0.494, + 0.092, + 0.557, + 0.187 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.587, + 0.092, + 0.635, + 0.186 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.645, + 0.092, + 0.71, + 0.186 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.614, + 0.192, + 0.692, + 0.203 + ], + "angle": 0, + "content": "TRELLIS [81]" + }, + { + "type": "image", + "bbox": [ + 0.741, + 0.091, + 0.804, + 0.187 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.82, + 0.091, + 0.882, + 0.187 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.756, + 0.192, + 0.853, + 0.203 + ], + "angle": 0, + "content": "Amodal3R (Ours)" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.208, + 0.907, + 0.237 + ], + "angle": 0, + "content": "Figure 5. Single-view amodal 3D reconstruction. The occlusion regions are shown in black and the visible regions are highlighted with red outlines. More examples are provided in supplementary material Fig. C.4." 
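For reference, the "randomly generated occlusion areas" used to augment each evaluation view (and the random 2D training occlusions of Sec. 3.3) can be produced with a few lines of NumPy. The sketch below is illustrative only: shape counts and size ranges are our assumptions and line strokes are omitted, but it follows the \( M_{\mathrm{vis}} = M_{\mathrm{obj}} \odot (1 - M_{\mathrm{occ}}) \) construction given in the text.

```python
import numpy as np

def random_occlusion_mask(h, w, n_shapes=4, rng=None):
    """Union of random axis-aligned rectangles and ellipses used as M_occ.
    (Sec. 3.3 also draws random line strokes; omitted here for brevity.)"""
    rng = np.random.default_rng() if rng is None else rng
    yy, xx = np.mgrid[0:h, 0:w]
    m_occ = np.zeros((h, w), dtype=bool)
    for _ in range(n_shapes):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        ry = rng.integers(max(2, h // 16), max(4, h // 4))   # illustrative size range
        rx = rng.integers(max(2, w // 16), max(4, w // 4))
        if rng.random() < 0.5:                               # rectangle
            m_occ |= (np.abs(yy - cy) < ry) & (np.abs(xx - cx) < rx)
        else:                                                # ellipse
            m_occ |= ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 < 1.0
    return m_occ

def visible_mask(m_obj, m_occ):
    """M_vis = M_obj * (1 - M_occ): object pixels that survive the occluders.
    m_obj, m_occ: (H, W) boolean masks."""
    return m_obj & ~m_occ
```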
+ }, + { + "type": "image", + "bbox": [ + 0.092, + 0.248, + 0.196, + 0.352 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.13, + 0.353, + 0.16, + 0.364 + ], + "angle": 0, + "content": "Input" + }, + { + "type": "image", + "bbox": [ + 0.214, + 0.252, + 0.28, + 0.291 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.223, + 0.304, + 0.256, + 0.345 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.203, + 0.353, + 0.287, + 0.365 + ], + "angle": 0, + "content": "pix2gestalt [56]" + }, + { + "type": "image", + "bbox": [ + 0.315, + 0.253, + 0.386, + 0.29 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.313, + 0.301, + 0.381, + 0.342 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.373, + 0.353, + 0.423, + 0.364 + ], + "angle": 0, + "content": "LaRa [6]" + }, + { + "type": "image", + "bbox": [ + 0.401, + 0.253, + 0.472, + 0.287 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.401, + 0.304, + 0.472, + 0.34 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.515, + 0.254, + 0.588, + 0.296 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.515, + 0.3, + 0.585, + 0.341 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.609, + 0.254, + 0.677, + 0.293 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.608, + 0.304, + 0.679, + 0.34 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.562, + 0.353, + 0.64, + 0.364 + ], + "angle": 0, + "content": "TRELLIS [81]" + }, + { + "type": "image", + "bbox": [ + 0.717, + 0.253, + 0.791, + 0.296 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.716, + 0.297, + 0.793, + 0.347 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.817, + 0.255, + 0.884, + 0.296 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.815, + 0.3, + 0.89, + 0.343 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.757, + 0.353, + 0.853, + 0.364 + ], + "angle": 0, + "content": "Amodal3R (Ours)" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.369, + 0.907, + 0.412 + ], + "angle": 0, + "content": "Figure 6. Multi-view amodal 3D reconstruction. The above results are reconstructed using 4 occluded input views. Due to limited space, we present only the best results for LaRa and TRELLIS under the \"pix2gestalt+MV\" setting. We apply 3d-consistent mask and random extended occlusion areas. More examples are provided in supplementary material Fig. C.5." 
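The 3D-consistent masks mentioned in the caption above are produced, per Sec. 3.3, by a random walk over mesh triangles; rendering the selected faces in a distinct colour under the same cameras as the RGB views then yields per-view occlusion masks that agree with the scene geometry. Below is a small sketch of the face-selection step only; the adjacency construction, step budget, and function name are our own illustrative choices.

```python
import random
from collections import defaultdict

def random_walk_occluded_faces(faces, mask_ratio=0.2, seed=0):
    """Select a connected set of triangles covering roughly `mask_ratio` of a mesh
    via a random walk over edge-adjacent faces (Sec. 3.3).
    faces: list of (i, j, k) vertex-index triples. Returns a set of face indices."""
    rng = random.Random(seed)
    # Faces are neighbours if they share an edge.
    edge_to_faces = defaultdict(list)
    for f, (i, j, k) in enumerate(faces):
        for e in ((i, j), (j, k), (k, i)):
            edge_to_faces[frozenset(e)].append(f)
    neighbours = defaultdict(set)
    for fs in edge_to_faces.values():
        for a in fs:
            neighbours[a].update(b for b in fs if b != a)

    target = max(1, int(mask_ratio * len(faces)))
    current = rng.randrange(len(faces))
    selected = {current}
    for _ in range(50 * target):                  # bounded walk; ample in practice
        if len(selected) >= target:
            break
        nxt = list(neighbours[current])
        if not nxt:                               # isolated face: jump to a new seed
            current = rng.randrange(len(faces))
        else:
            current = rng.choice(nxt)
        selected.add(current)
    return selected
```

Because the selection lives on the mesh rather than in image space, the same occluded region reappears consistently in every rendered view.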
+ }, + { + "type": "image", + "bbox": [ + 0.092, + 0.435, + 0.161, + 0.482 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.094, + 0.482, + 0.157, + 0.533 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.096, + 0.533, + 0.159, + 0.582 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.107, + 0.585, + 0.137, + 0.596 + ], + "angle": 0, + "content": "Input" + }, + { + "type": "image", + "bbox": [ + 0.174, + 0.443, + 0.227, + 0.483 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.166, + 0.497, + 0.228, + 0.525 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.174, + 0.539, + 0.22, + 0.573 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.159, + 0.579, + 0.233, + 0.599 + ], + "angle": 0, + "content": "' Naive concatenation" + }, + { + "type": "image", + "bbox": [ + 0.24, + 0.435, + 0.307, + 0.49 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.24, + 0.493, + 0.314, + 0.527 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.256, + 0.539, + 0.3, + 0.573 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.24, + 0.577, + 0.311, + 0.599 + ], + "angle": 0, + "content": "w/only \\(M_{\\mathrm{vis}}\\) attention" + }, + { + "type": "image", + "bbox": [ + 0.334, + 0.443, + 0.387, + 0.485 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.331, + 0.498, + 0.388, + 0.524 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.339, + 0.539, + 0.386, + 0.576 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.323, + 0.578, + 0.396, + 0.599 + ], + "angle": 0, + "content": "w/only \\(M_{\\mathrm{occ}}\\) attention" + }, + { + "type": "image", + "bbox": [ + 0.415, + 0.441, + 0.468, + 0.486 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.415, + 0.496, + 0.471, + 0.525 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.423, + 0.538, + 0.464, + 0.574 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.422, + 0.578, + 0.457, + 0.599 + ], + "angle": 0, + "content": "Full model" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.609, + 0.484, + 0.665 + ], + "angle": 0, + "content": "Figure 7. Ablation examples. Naive concatenation fails to reconstruct complete shape and appearance. Mask-weighted attention alone extends geometry into background regions, while occlusion-aware attention alone cannot guarantee photorealistic appearance." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.7, + 0.483, + 0.791 + ], + "angle": 0, + "content": "using a single completed view (\"1 V-num + pix2gestalt\"). This shows that inconsistent 2D completion does confuse reconstruction models to the point that using a single view is preferable (Sec. 2). This issue does not affect Amodal3R as it does not rely on 2D completion; in our case, utilizing more occluded views does improve the results as expected." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.795, + 0.484, + 0.901 + ], + "angle": 0, + "content": "Qualitative Results. The qualitative results are shown in Figs. 5 and 6 and in supplementary material Figs. C.4 and C.5. 
For all baselines, pix2gestalt is first applied for 2D amodal completion (second column), and the completed images are passed to each baseline image-to-3D model. Amodal3R produces high-quality 3D assets even under challenging conditions in both single-view and multi-view" + }, + { + "type": "table", + "bbox": [ + 0.517, + 0.436, + 0.906, + 0.505 + ], + "angle": 0, + "content": "
MethodFID ↓KID(%) ↓COV(%) ↑MMD(‰) ↓
naive conditioning31.960.4937.963.61
w/ only mask-weighted attention30.530.3836.903.69
w/ only occlusion-aware layer31.770.5740.193.51
full model (Ours)30.640.3539.613.62
" + }, + { + "type": "table_caption", + "bbox": [ + 0.514, + 0.509, + 0.906, + 0.537 + ], + "angle": 0, + "content": "Table 3. Ablations on different mask conditioning designs. The Best and the second best results are highlighted." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.565, + 0.907, + 0.775 + ], + "angle": 0, + "content": "image-to-3D tasks. In contrast, 2D amodal completions inconsistencies accumulate as more views are added, particularly when pix2gestalt is more uncertain, confusing the reconstructor models downstream. For instance, in the giraffe example in Fig. 5, the pix2gestalt completion fails to capture the overall structure of the 3D object correctly, which in turn leads the 3D generator models to misinterpret it as a woodstick-like shape. In contrast, Amodal3R reconstructs the 3D geometry and appearance well, with good alignment to the occluded inputs. These findings highlight not only the effectiveness of Amodal3R but also the advantage of completing objects while reconstructing them, which avoids relying on monocular 2D completion models that may introduce inconsistencies." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.787, + 0.667, + 0.803 + ], + "angle": 0, + "content": "4.3. Ablation Study" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.81, + 0.906, + 0.87 + ], + "angle": 0, + "content": "We conducted several ablations to study the impact of the different components of Amodal3R and report the results in Tab. 3 and Fig. 7. For these experiments, we test single-view image-to-3D reconstruction on the GSO dataset." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.871, + 0.906, + 0.9 + ], + "angle": 0, + "content": "Naive Conditioning. We first evaluated a version of the model that still conditions the reconstruction on the visi" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.504, + 0.936 + ], + "angle": 0, + "content": "7" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.102, + 0.088, + 0.19, + 0.287 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.145, + 0.287, + 0.235, + 0.298 + ], + "angle": 0, + "content": "Occluded input" + }, + { + "type": "image", + "bbox": [ + 0.207, + 0.114, + 0.255, + 0.14 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.206, + 0.181, + 0.255, + 0.195 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.206, + 0.235, + 0.25, + 0.277 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.296, + 0.113, + 0.344, + 0.139 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.293, + 0.181, + 0.344, + 0.199 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.302, + 0.23, + 0.337, + 0.277 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.37, + 0.099, + 0.486, + 0.143 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.372, + 0.175, + 0.485, + 0.21 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.381, + 0.223, + 0.477, + 0.277 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.271, + 0.286, + 0.36, + 0.298 + ], + "angle": 0, + "content": "pix2gestalt [56]" + }, + { + "type": "image", + "bbox": [ + 0.37, + 0.299, + 0.468, + 0.371 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.395, + 0.365, + 0.468, + 0.417 + ], + "angle": 0, + "content": null + }, + { + "type": 
"image_caption", + "bbox": [ + 0.349, + 0.424, + 0.418, + 0.435 + ], + "angle": 0, + "content": "Novel views" + }, + { + "type": "image", + "bbox": [ + 0.472, + 0.299, + 0.546, + 0.367 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.473, + 0.365, + 0.546, + 0.417 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.498, + 0.099, + 0.612, + 0.147 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.498, + 0.175, + 0.608, + 0.204 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.515, + 0.228, + 0.602, + 0.278 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.523, + 0.286, + 0.592, + 0.298 + ], + "angle": 0, + "content": "Real3D [28]" + }, + { + "type": "image", + "bbox": [ + 0.627, + 0.099, + 0.732, + 0.151 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.624, + 0.175, + 0.739, + 0.209 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.625, + 0.235, + 0.739, + 0.267 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.642, + 0.286, + 0.726, + 0.299 + ], + "angle": 0, + "content": "TRELLIS [81]" + }, + { + "type": "image", + "bbox": [ + 0.761, + 0.097, + 0.885, + 0.158 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.76, + 0.175, + 0.893, + 0.209 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.76, + 0.234, + 0.899, + 0.269 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.773, + 0.286, + 0.875, + 0.298 + ], + "angle": 0, + "content": "Amodal3R (Ours)" + }, + { + "type": "image", + "bbox": [ + 0.104, + 0.299, + 0.188, + 0.361 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.104, + 0.363, + 0.188, + 0.424 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.105, + 0.424, + 0.193, + 0.435 + ], + "angle": 0, + "content": "Occluded input" + }, + { + "type": "image", + "bbox": [ + 0.225, + 0.311, + 0.295, + 0.362 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.226, + 0.369, + 0.293, + 0.417 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.307, + 0.312, + 0.376, + 0.363 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.304, + 0.368, + 0.373, + 0.417 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.348, + 0.424, + 0.418, + 0.435 + ], + "angle": 0, + "content": "Novel views" + }, + { + "type": "image", + "bbox": [ + 0.395, + 0.3, + 0.466, + 0.371 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.396, + 0.366, + 0.468, + 0.417 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.577, + 0.311, + 0.647, + 0.361 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.58, + 0.367, + 0.647, + 0.417 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.657, + 0.311, + 0.726, + 0.362 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.652, + 0.367, + 0.721, + 0.417 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.736, + 0.3, + 0.884, + 0.371 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.736, + 0.37, + 0.885, + 0.418 + ], + "angle": 0, + "content": null 
+ }, + { + "type": "image_caption", + "bbox": [ + 0.691, + 0.424, + 0.77, + 0.436 + ], + "angle": 0, + "content": "Normal maps" + }, + { + "type": "image_caption", + "bbox": [ + 0.097, + 0.441, + 0.899, + 0.456 + ], + "angle": 0, + "content": "Figure 8. Examples on Replica [70] and in-the-wild images. The target objects and the occluders are shown in red and green outlines." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.482, + 0.483, + 0.603 + ], + "angle": 0, + "content": "bility and occlusion masks, but without using the modules of Sec. 3.2. Instead, we directly concatenate the tokens \\( c_{\\mathrm{vis}} \\) to \\( c_{\\mathrm{dino}} \\) to form the condition for the cross-attention layer. The results (first row in Tab. 3 and second column in Fig. 7) show that, while the resulting model can still perform basic completion, the textures in the occluded regions are inconsistent with those in the visible ones, and the reconstructed geometry is inaccurate, e.g. the hole in the shoe." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.616, + 0.484, + 0.752 + ], + "angle": 0, + "content": "Mask-weighted Attention. To evaluate the effectiveness of our proposed mask-weighted attention mechanism, we integrate it into the training while omitting the occlusion-aware layer. The results demonstrate improved rendering quality — especially in capturing texture details in the visible areas — and significantly enhanced appearance consistency. However, the geometries exhibit deficiencies, as seen in the problematic shoe in Fig. 7 (first row), and the toy monster with a broken back (third row)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.765, + 0.484, + 0.901 + ], + "angle": 0, + "content": "Occlusion-aware Layer. The integration of the occlusion-aware layer improves the geometry both quantitatively and qualitatively. This aligns with our motivation for introducing an additional cross-attention layer, aimed at reconstructing the visible areas via the image-conditioned layer and occluded areas via the subsequent layer. However, occlusion-aware layer alone still results in unsatisfactory appearances, which again indirectly attests to the effectiveness of mask-weighted attention mechanism." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.482, + 0.906, + 0.528 + ], + "angle": 0, + "content": "Full Model. Consequently, the full model integrating both modules achieves optimal 3D generation results characterized by complete geometry and consistent textures." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.537, + 0.836, + 0.554 + ], + "angle": 0, + "content": "4.4. Real-World Generation / Completion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.559, + 0.907, + 0.772 + ], + "angle": 0, + "content": "Amodal3R is inherently superior in generalizing to out-of-distribution amodal 3D reconstruction, primarily due to the fact that we build upon the model on a \"foundation\" 3D generator, and fine-tune it with diverse categories. To demonstrate this advantage, we conduct scene-level dataset evaluations: Replica [70] (first two rows) and on in-the-wild images (3rd-5th rows) in Fig. 8. Here we adopt Segment Anything [35] to get the visibility and occlusion masks. The results show that Amodal3R generates reasonable 3D assets, whereas pix2gestalt fails to infer complete shapes from the same inputs, leading to unsatisfactory results from subsequent image-to-3D models. 
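As noted above, for Replica and in-the-wild images only the per-object masks come from Segment Anything; assembling them into \( M_{\mathrm{vis}} \) and \( M_{\mathrm{occ}} \) is then straightforward. A hedged sketch follows; treating \( M_{\mathrm{occ}} \) as the union of the occluders' masks is our reading of the setup, not a documented interface.

```python
import numpy as np

def conditioning_masks(target_mask, occluder_masks):
    """Turn instance masks (e.g. predicted by a segmenter such as Segment Anything)
    into visibility/occlusion masks of the kind Amodal3R conditions on.
    target_mask: (H, W) bool mask of the visible part of the object of interest.
    occluder_masks: iterable of (H, W) bool masks of objects occluding it."""
    m_occ = np.zeros_like(target_mask, dtype=bool)
    for m in occluder_masks:
        m_occ |= m                        # union of foreground occluders
    m_vis = target_mask & ~m_occ          # guard against overlapping predictions
    return m_vis, m_occ
```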
We also visualize the colored normal maps, which show that the results of Amodal3R are simultaneously rich in geometric and textural detail." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.785, + 0.634, + 0.801 + ], + "angle": 0, + "content": "5. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.81, + 0.907, + 0.903 + ], + "angle": 0, + "content": "We have introduced Amodal3R, a novel approach to reconstruct complete 3D shape and appearance from partially visible 2D images. The key to our success is that we construct mask-weighted cross-attention mechanism and occlusion-aware layer to effectively exploit visible and occluded information. Compared to the state-of-the-art methods that rely" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.504, + 0.936 + ], + "angle": 0, + "content": "8" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.486, + 0.198 + ], + "angle": 0, + "content": "on sequential 2D completion followed by 3D generation, our Amodal3R achieves remarkably better performance by operating directly in 3D space. Furthermore, results on inthe-wild images indicate its potential for subsequent applications in 3D decomposition and scene understanding, marking a step towards robust 3D asset reconstruction in real-world environments with complex occlusion." + }, + { + "type": "title", + "bbox": [ + 0.093, + 0.209, + 0.188, + 0.225 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.101, + 0.234, + 0.484, + 0.316 + ], + "angle": 0, + "content": "[1] Titas Anciukevicius, Zexiang Xu, Matthew Fisher, Paul Henderson, Hakan Bilen, Niloy J Mitra, and Paul Guerrero. Renderdiffusion: Image diffusion for 3d reconstruction, inpainting and generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12608-12618, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.318, + 0.484, + 0.358 + ], + "angle": 0, + "content": "[2] Jiayang Ao, Yanbei Jiang, Qiuhong Ke, and Krista A Ehinger. Open-world amodal appearance completion. arXiv preprint arXiv:2411.13019, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.36, + 0.483, + 0.4 + ], + "angle": 0, + "content": "[3] Mikołaj Binkowski, Danica J Sutherland, Michael Arbel, and Arthur Gretton. Demystifying mmd gans. arXiv preprint arXiv:1801.01401, 2018. 6, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.402, + 0.482, + 0.47 + ], + "angle": 0, + "content": "[4] Eric R Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 5799-5809, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.471, + 0.482, + 0.538 + ], + "angle": 0, + "content": "[5] Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J Guibas, Jonathan Tremblay, Sameh Khamis, et al. Efficient geometry-aware 3d generative adversarial networks. In CVPR, 2022. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.541, + 0.482, + 0.595 + ], + "angle": 0, + "content": "[6] Anpei Chen, Haofei Xu, Stefano Esposito, Siyu Tang, and Andreas Geiger. Lara: Efficient large-baseline radiance fields. In European Conference on Computer Vision, pages 338-355. Springer, 2024. 
1, 6, 7, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.597, + 0.482, + 0.678 + ], + "angle": 0, + "content": "[7] Minghao Chen, Roman Shapovalov, Iro Laina, Tom Monnier, Jianyuan Wang, David Novotny, and Andrea Vedaldi. Partgen: Part-level 3d generation and reconstruction with multi-view diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.68, + 0.482, + 0.746 + ], + "angle": 0, + "content": "[8] Yiwen Chen, Tong He, Di Huang, Weicai Ye, Sijin Chen, Ji-axiang Tang, Xin Chen, Zhongang Cai, Lei Yang, Gang Yu, et al. Meshanything: Artist-created mesh generation with autoregressive transformers. arXiv preprint arXiv:2406.10163, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.102, + 0.749, + 0.482, + 0.803 + ], + "angle": 0, + "content": "[9] Yuedong Chen, Chuanxia Zheng, Haofei Xu, Bohan Zhuang, Andrea Vedaldi, Tat-Jen Cham, and Jianfei Cai. Mvsplat360: Feed-forward 360 scene synthesis from sparse views. In Neural Information Processing Systems (NeurIPS), 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.805, + 0.482, + 0.873 + ], + "angle": 0, + "content": "[10] Yen-Chi Cheng, Hsin-Ying Lee, Sergey Tulyakov, Alexander G Schwing, and Liang-Yan Gui. Sdfusion: Multimodal 3d shape completion, reconstruction, and generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4456-4465, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.094, + 0.875, + 0.482, + 0.901 + ], + "angle": 0, + "content": "[11] Junhyeong Cho, Kim Youwang, Hunmin Yang, and TaeHyun Oh. Robust 3d shape reconstruction in zero-shot from" + }, + { + "type": "list", + "bbox": [ + 0.094, + 0.234, + 0.484, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.093, + 0.905, + 0.134 + ], + "angle": 0, + "content": "a single image in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.136, + 0.905, + 0.19 + ], + "angle": 0, + "content": "[12] Ruihang Chu, Enze Xie, Shentong Mo, Zhenguo Li, Matthias Nießner, Chi-Wing Fu, and Jiaya Jia. Diffcomplete: Diffusion-based generative 3d shape completion. Advances in Neural Information Processing Systems, 36, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.192, + 0.905, + 0.286 + ], + "angle": 0, + "content": "[13] Jasmine Collins, Shubham Goel, Kenan Deng, Achleshwar Luthra, Leon Xu, Erhan Gundogdu, Xi Zhang, Tomas F Yago Vicente, Thomas Dideriksen, Himanshu Arora, et al. Abo: Dataset and benchmarks for real-world 3d object understanding. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 21126-21136, 2022. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.288, + 0.905, + 0.37 + ], + "angle": 0, + "content": "[14] Ruikai Cui, Weizhe Liu, Weixuan Sun, Senbo Wang, Taizhang Shang, Yang Li, Xibin Song, Han Yan, Zhennan Wu, Shenzhen Chen, et al. Neusdfusion: A spatial-aware generative model for 3d shape completion, reconstruction, and generation. In European Conference on Computer Vision, pages 1-18. Springer, 2024. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.372, + 0.904, + 0.44 + ], + "angle": 0, + "content": "[15] Matt Deitke, Ruoshi Liu, Matthew Wallingford, Huong Ngo, Oscar Michel, Aditya Kusupati, Alan Fan, Christian Laforte, Vikram Voleti, Samir Yitzhak Gadre, et al. Objverse-xl: A universe of \\(10\\mathrm{m} + 3\\mathrm{d}\\) objects. Advances in Neural Information Processing Systems, 36, 2024. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.442, + 0.904, + 0.524 + ], + "angle": 0, + "content": "[16] Laura Downs, Anthony Francis, Nate Koenig, Brandon Kinman, Ryan Hickman, Krista Reymann, Thomas B McHugh, and Vincent Vanhoucke. Google scanned objects: A high-quality dataset of 3d scanned household items. In 2022 International Conference on Robotics and Automation (ICRA), pages 2553-2560. IEEE, 2022. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.526, + 0.905, + 0.579 + ], + "angle": 0, + "content": "[17] Huan Fu, Rongfei Jia, Lin Gao, Mingming Gong, Binqiang Zhao, Steve Maybank, and Dacheng Tao. 3d-future: 3d furniture shape with texture. International Journal of Computer Vision, 129:3313-3337, 2021. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.581, + 0.905, + 0.649 + ], + "angle": 0, + "content": "[18] Jun Gao, Tianchang Shen, Zian Wang, Wenzheng Chen, Kangxue Yin, Daiqing Li, Or Litany, Zan Gojcic, and Sanja Fidler. Get3d: A generative model of high quality 3d textured shapes learned from images. Advances In Neural Information Processing Systems, 35:31841-31854, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.651, + 0.905, + 0.704 + ], + "angle": 0, + "content": "[19] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139-144, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.707, + 0.905, + 0.747 + ], + "angle": 0, + "content": "[20] Anchit Gupta, Wenhan Xiong, Yixin Nie, Ian Jones, and Barlas Oğuz. 3dgen: Triplane latent diffusion for textured mesh generation. arXiv preprint arXiv:2303.05371, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.749, + 0.905, + 0.815 + ], + "angle": 0, + "content": "[21] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. 6, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.818, + 0.905, + 0.845 + ], + "angle": 0, + "content": "[22] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.847, + 0.905, + 0.899 + ], + "angle": 0, + "content": "[23] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems (NeurIPS), pages 6840-6851, 2020. 2" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.905, + 0.899 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.505, + 0.937 + ], + "angle": 0, + "content": "9" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.092, + 0.482, + 0.161 + ], + "angle": 0, + "content": "[24] Yicong Hong, Kai Zhang, Juixiang Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. 
Lrm: Large reconstruction model for single image to 3d. In International Conference on Learning Representations (ICLR), 2024. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.164, + 0.482, + 0.219 + ], + "angle": 0, + "content": "[25] Tao Hu, Zhizhong Han, and Matthias Zwicker. 3d shape completion with multi-view consistent inference. In Proceedings of the AAAI conference on artificial intelligence, pages 10997-11004, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.221, + 0.482, + 0.289 + ], + "angle": 0, + "content": "[26] Zitian Huang, Yikuan Yu, Jiawen Xu, Feng Ni, and Xinyi Le. Pf-net: Point fractal network for 3d point cloud completion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 7662-7670, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.291, + 0.482, + 0.333 + ], + "angle": 0, + "content": "[27] Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and locally consistent image completion. ACM Transactions on Graphics (TOG), 36(4):107, 2017. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.334, + 0.482, + 0.389 + ], + "angle": 0, + "content": "[28] Hanwen Jiang, Qixing Huang, and Georgios Pavlakos. Real3d: Scaling up large reconstruction models with real-world images. arXiv preprint arXiv:2406.08479, 2024. 6, 7, 8, 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.391, + 0.482, + 0.433 + ], + "angle": 0, + "content": "[29] Heewoo Jun and Alex Nichol. Shap-e: Generating conditional 3d implicit functions. arXiv preprint arXiv:2305.02463, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.435, + 0.482, + 0.489 + ], + "angle": 0, + "content": "[30] Yoni Kasten, Ohad Rahamim, and Gal Chechik. Point cloud completion with pretrained text-to-image diffusion models. Advances in Neural Information Processing Systems, 36, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.491, + 0.482, + 0.545 + ], + "angle": 0, + "content": "[31] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139-1, 2023. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.548, + 0.482, + 0.657 + ], + "angle": 0, + "content": "[32] Mukul Khanna, Yongsen Mao, Hanxiao Jiang, Sanjay Haresh, Brennan Shacklett, Dhruv Batra, Alexander Clegg, Eric Undersander, Angel X Chang, and Manolis Savva. Habitat synthetic scenes dataset (hssd-200): An analysis of 3d scene scale and realism tradeoffs for objectgoal navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16384–16393, 2024. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.661, + 0.482, + 0.729 + ], + "angle": 0, + "content": "[33] Gwanghyun Kim, Taesung Kwon, and Jong Chul Ye. Diffus ionclip: Text-guided diffusion models for robust image man i plication. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2426-2435, 2022. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.732, + 0.482, + 0.801 + ], + "angle": 0, + "content": "[34] Yulhwa Kim, Dongwon Jo, Hyesung Jeon, Taesu Kim, Daehyun Ahn, Hyungjun Kim, et al. Leveraging early-stage robustness in diffusion models for efficient and high-quality image synthesis. Advances in Neural Information Processing Systems, 36:1229-1244, 2023. 
5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.803, + 0.482, + 0.871 + ], + "angle": 0, + "content": "[35] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4015-4026, 2023. 3, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.873, + 0.482, + 0.901 + ], + "angle": 0, + "content": "[36] Yushi Lan, Feitong Tan, Di Qiu, Qiangeng Xu, Kyle Genova, Zeng Huang, Sean Fanello, Rohit Pandey, Thomas" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.482, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.545, + 0.092, + 0.905, + 0.134 + ], + "angle": 0, + "content": "Funkhouser, Chen Change Loy, et al. Gaussian3diff: 3d gaussian diffusion for 3d full head synthesis and editing. arXiv preprint arXiv:2312.03763, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.136, + 0.905, + 0.203 + ], + "angle": 0, + "content": "[37] Yushi Lan, Shangchen Zhou, Zhaoyang Lyu, Fangzhou Hong, Shuai Yang, Bo Dai, Xingang Pan, and Chen Change Loy. Gaussiananything: Interactive point cloud latent diffusion for 3d generation. arXiv preprint arXiv:2411.08033, 2024. 6, 7, 8, 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.205, + 0.905, + 0.259 + ], + "angle": 0, + "content": "[38] Muheng Li, Yueqi Duan, Jie Zhou, and Jiwen Lu. Diffusionsdf: Text-to-shape via voxelized diffusion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12642-12651, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.261, + 0.905, + 0.315 + ], + "angle": 0, + "content": "[39] Xinhai Li, Huaibin Wang, and Kuo-Kun Tseng. Gaussian diffusion: 3d gaussian splatting for denoising diffusion probabilistic models with structured noise. arXiv preprint arXiv:2311.11221, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.316, + 0.905, + 0.371 + ], + "angle": 0, + "content": "[40] Chen-Hsuan Lin, Chen Kong, and Simon Lucey. Learning efficient point cloud generation for dense 3d object reconstruction. In proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2018. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.372, + 0.905, + 0.439 + ], + "angle": 0, + "content": "[41] Guilin Liu, Fitsum A. Reda, Kevin J. Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), 2018. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.441, + 0.907, + 0.51 + ], + "angle": 0, + "content": "[42] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3d object. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV), pages 9298–9309, 2023. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.511, + 0.905, + 0.552 + ], + "angle": 0, + "content": "[43] Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv preprint arXiv:2209.03003, 2022. 
4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.554, + 0.905, + 0.636 + ], + "angle": 0, + "content": "[44] Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, et al. Wonder3d: Single image to 3d using cross-domain diffusion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 9970-9980, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.637, + 0.905, + 0.664 + ], + "angle": 0, + "content": "[45] I Loshchilov. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.665, + 0.907, + 0.719 + ], + "angle": 0, + "content": "[46] Shitong Luo and Wei Hu. Diffusion probabilistic models for 3d point cloud generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 2837-2845, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.721, + 0.907, + 0.789 + ], + "angle": 0, + "content": "[47] Luke Melas-Kyriazi, Christian Rupprecht, and Andrea Vedaldi. Pc2: Projection-conditioned point cloud diffusion for single-image 3d reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12923-12932, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.79, + 0.905, + 0.871 + ], + "angle": 0, + "content": "[48] Norman Müller, Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulo, Peter Kontschieder, and Matthias Nießner. Diffrf: Rendering-guided 3d radiance field diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4328-4338, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.873, + 0.905, + 0.901 + ], + "angle": 0, + "content": "[49] Charlie Nash, Yaroslav Ganin, SM Ali Eslami, and Peter Battaglia. *Polygon: An autoregressive generative model of" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.092, + 0.907, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.926, + 0.509, + 0.937 + ], + "angle": 0, + "content": "10" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.125, + 0.092, + 0.482, + 0.12 + ], + "angle": 0, + "content": "3d meshes. In International Conference on Machine Learning (ICML), pages 7220-7229. PMLR, 2020. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.122, + 0.483, + 0.191 + ], + "angle": 0, + "content": "[50] Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. Hologan: Unsupervised learning of 3d representations from natural images. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV), pages 7588–7597, 2019. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.193, + 0.483, + 0.248 + ], + "angle": 0, + "content": "[51] Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, and Mark Chen. Point-e: A system for generating 3d point clouds from complex prompts. arXiv preprint arXiv:2212.08751, 2022. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.25, + 0.482, + 0.318 + ], + "angle": 0, + "content": "[52] Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 11453-11464, 2021. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.321, + 0.483, + 0.377 + ], + "angle": 0, + "content": "[53] Evangelos Ntavelis, Aliaksandr Siarohin, Kyle Olszewski, Chaoyang Wang, Luc V Gool, and Sergey Tulyakov. Autodecoding latent 3d diffusion models. Advances in Neural Information Processing Systems, 36:67021-67047, 2023. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.379, + 0.482, + 0.447 + ], + "angle": 0, + "content": "[54] Maxime Oquab, Timothee Darcet, Theo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023. 4, 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.449, + 0.483, + 0.531 + ], + "angle": 0, + "content": "[55] Maxime Aquab, Timothee Darcet, Théo Moutakanni, Huy V Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel HAZIZA, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. Transactions on Machine Learning Research, 2023. 3, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.534, + 0.483, + 0.617 + ], + "angle": 0, + "content": "[56] Ege Ozguroglu, Ruoshi Liu, Dídac Surís, Dian Chen, Achal Dave, Pavel Tokmakov, and Carl Vondrick. pix2gestalt: Amodal segmentation by synthesizing wholes. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3931-3940. IEEE Computer Society, 2024. 2, 3, 6, 7, 8, 4, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.619, + 0.483, + 0.688 + ], + "angle": 0, + "content": "[57] Eunbyung Park, Jimei Yang, Ersin Yumer, Duygu Ceylan, and Alexander C Berg. Transformation-grounded image generation network for novel 3d view synthesis. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pages 3500-3509, 2017. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.689, + 0.483, + 0.759 + ], + "angle": 0, + "content": "[58] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2536-2544. IEEE, 2016. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.761, + 0.483, + 0.843 + ], + "angle": 0, + "content": "[59] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning (ICML), pages 8748-8763. PmLR, 2021. 6, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.845, + 0.483, + 0.901 + ], + "angle": 0, + "content": "[60] Barbara Roessle, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Angela Dai, and Matthias Nießner. L3dg: Latent 3d gaussian diffusion. In SIGGRAPH Asia 2024 Conference Papers, pages 1-11, 2024. 3" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.483, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.093, + 0.907, + 0.162 + ], + "angle": 0, + "content": "[61] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. 
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 10684-10695, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.163, + 0.906, + 0.217 + ], + "angle": 0, + "content": "[62] Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. Graf: Generative radiance fields for 3d-aware image synthesis. Advances in Neural Information Processing Systems (NeurIPS), 33:20154-20166, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.219, + 0.906, + 0.286 + ], + "angle": 0, + "content": "[63] Ruoxi Shi, Hansheng Chen, Zhuoyang Zhang, Minghua Liu, Chao Xu, Xinyue Wei, Linghao Chen, Chong Zeng, and Hao Su. Zero123++: a single image to consistent multi-view diffusion base model. arXiv preprint arXiv:2310.15110, 2023. 6, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.288, + 0.906, + 0.329 + ], + "angle": 0, + "content": "[64] Yichun Shi, Peng Wang, Jianglong Ye, Mai Long, Kejie Li, and Xiao Yang. Mvdream: Multi-view diffusion for 3d generation. arXiv preprint arXiv:2308.16512, 2023. 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.33, + 0.906, + 0.398 + ], + "angle": 0, + "content": "[65] Wu Shuang, Youtian Lin, Yifei Zeng, Feihu Zhang, Jingxi Xu, Philip Torr, Xun Cao, and Yao Yao. Direct3d: Scalable image-to-3d generation via 3d latent diffusion transformer. Advances in Neural Information Processing Systems, 37:121859–121881, 2025. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.4, + 0.906, + 0.469 + ], + "angle": 0, + "content": "[66] J Ryan Shue, Eric Ryan Chan, Ryan Po, Zachary Ankner, Jiajun Wu, and Gordon Wetzstein. 3d neural field generation using triplane diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20875-20886, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.47, + 0.907, + 0.552 + ], + "angle": 0, + "content": "[67] Yawar Siddiqui, Antonio Alliegro, Alexey Artemov, Tatiana Tommasi, Daniele Sirigatti, Vladislav Rosov, Angela Dai, and Matthias Nießner. Meshgpt: Generating triangle meshes with decoder-only transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19615-19625, 2024. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.553, + 0.906, + 0.621 + ], + "angle": 0, + "content": "[68] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning (ICML), pages 2256-2265. pmlr, 2015. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.623, + 0.906, + 0.678 + ], + "angle": 0, + "content": "[69] Stefan Stojanov, Anh Thai, and James M Rehg. Using shape to categorize: Low-shot learning with an explicit shape bias. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1798-1808, 2021. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.679, + 0.906, + 0.747 + ], + "angle": 0, + "content": "[70] Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, et al. The replica dataset: A digital replica of indoor spaces. arXiv preprint arXiv:1906.05797, 2019. 2, 6, 8" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.748, + 0.906, + 0.803 + ], + "angle": 0, + "content": "[71] David Stutz and Andreas Geiger. Learning 3d shape completion from laser scan data with weak supervision. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1955-1964, 2018. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.804, + 0.906, + 0.872 + ], + "angle": 0, + "content": "[72] Stanislaw Szymanowicz, Christian Rupprecht, and Andrea Vedaldi. Viewset diffusion: (0-)image-conditioned 3d generative models from 2d data. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 8863-8873, 2023. 1, 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.873, + 0.906, + 0.901 + ], + "angle": 0, + "content": "[73] Stanislaw Szymanowicz, Chrisitian Rupprecht, and Andrea Vedaldi. Splatter image: Ultra-fast single-view 3d recon" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.093, + 0.907, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.925, + 0.508, + 0.937 + ], + "angle": 0, + "content": "11" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.126, + 0.092, + 0.482, + 0.134 + ], + "angle": 0, + "content": "struction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10208-10217, 2024. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.135, + 0.483, + 0.218 + ], + "angle": 0, + "content": "[74] Shitao Tang, Jiacheng Chen, Dilin Wang, Chengzhou Tang, Fuyang Zhang, Yuchen Fan, Vikas Chandra, Yasutaka Furukawa, and Rakesh Ranjan. Mvdiffusion++: A dense high-resolution multi-view diffusion model for single or sparse-view 3d object reconstruction. In European Conference on Computer Vision, pages 175-191. Springer, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.219, + 0.483, + 0.273 + ], + "angle": 0, + "content": "[75] Arash Vahdat, Francis Williams, Zan Gojcic, Or Litany, Sanja Fidler, Karsten Kreis, et al. Lion: Latent point diffusion models for 3d shape generation. Advances in Neural Information Processing Systems, 35:10021-10039, 2022. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.274, + 0.482, + 0.357 + ], + "angle": 0, + "content": "[76] Vikram Voleti, Chun-Han Yao, Mark Boss, Adam Letts, David Pankratz, Dmitry Tochilkin, Christian Laforte, Robin Rombach, and Varun Jampani. Sv3d: Novel multi-view synthesis and 3d generation from a single image using latent video diffusion. In European Conference on Computer Vision (ECCV), pages 439-457. Springer, 2024. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.358, + 0.482, + 0.413 + ], + "angle": 0, + "content": "[77] Daniel Watson, William Chan, Ricardo Martin-Brualla, Jonathan Ho, Andrea Tagliasacchi, and Mohammad Norouzi. Novel view synthesis with diffusion models. arXiv preprint arXiv:2210.04628, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.414, + 0.482, + 0.482 + ], + "angle": 0, + "content": "[78] Daniel Watson, William Chan, Ricardo Martin Brulla, Jonathan Ho, Andrea Tagliasacchi, and Mohammad Norouzi. Novel view synthesis with diffusion models. In The Eleventh International Conference on Learning Representations (ICLR), 2023. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.484, + 0.482, + 0.552 + ], + "angle": 0, + "content": "[79] Christopher Wewer, Kevin Raj, Eddy Ilg, Bernt Schiele, and Jan Eric Lenssen. latentsplat: Autoencoding variational gaussians for fast generalizable 3d reconstruction. In European Conference on Computer Vision (ECCV), pages 456-473. Springer, 2024. 
2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.553, + 0.482, + 0.621 + ], + "angle": 0, + "content": "[80] Zijie Wu, Yaonan Wang, Mingtao Feng, He Xie, and Ajmal Mian. Sketch and text guided diffusion model for colored point cloud generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8929-8939, 2023. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.622, + 0.482, + 0.691 + ], + "angle": 0, + "content": "[81] Jianfeng Xiang, Zelong Lv, Sicheng Xu, Yu Deng, Ruicheng Wang, Bowen Zhang, Dong Chen, Xin Tong, and Jiaolong Yang. Structured 3d latents for scalable and versatile 3d generation. arXiv preprint arXiv:2412.01506, 2024. 1, 2, 3, 4, 6, 7, 8, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.692, + 0.482, + 0.775 + ], + "angle": 0, + "content": "[82] Desai Xie, Jiahao Li, Hao Tan, Xin Sun, Zhixin Shu, Yi Zhou, Sai Bi, Soren Pirk, and Arie E Kaufman. Carve3d: Improving multi-view reconstruction consistency for diffusion models with rl finetuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6369-6379, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.776, + 0.482, + 0.831 + ], + "angle": 0, + "content": "[83] Katherine Xu, Lingzhi Zhang, and Jianbo Shi. Amodal completion via progressive mixed context diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9099-9109, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.093, + 0.832, + 0.482, + 0.901 + ], + "angle": 0, + "content": "[84] Bo Yang, Hongkai Wen, Sen Wang, Ronald Clark, Andrew Markham, and Niki Trigoni. 3d object reconstruction from a single depth view with adversarial learning. In Proceedings of the IEEE international conference on computer vision workshops (ICCVW), pages 679-688, 2017. 2" + }, + { + "type": "list", + "bbox": [ + 0.093, + 0.092, + 0.483, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.092, + 0.905, + 0.16 + ], + "angle": 0, + "content": "[85] Guanqi Zhan, Chuanxia Zheng, Weidi Xie, and Andrew Zisserman. Amodal ground truth and completion in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 28003-28013, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.162, + 0.905, + 0.231 + ], + "angle": 0, + "content": "[86] Xiaohang Zhan, Xingang Pan, Bo Dai, Ziwei Liu, Dahua Lin, and Chen Change Loy. Self-supervised scene de- occlusion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 3784-3792, 2020. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.232, + 0.905, + 0.3 + ], + "angle": 0, + "content": "[87] Bowen Zhang, Yiji Cheng, Jiaolong Yang, Chunyu Wang, Feng Zhao, Yansong Tang, Dong Chen, and Baining Guo. Gaussiancube: Structuring gaussian splatting using optimal transport for 3d generative modeling. arXiv preprint arXiv:2403.19655, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.301, + 0.905, + 0.37 + ], + "angle": 0, + "content": "[88] Longwen Zhang, Ziyu Wang, Qixuan Zhang, Qiwei Qiu, Anqi Pang, Haoran Jiang, Wei Yang, Lan Xu, and Jingyi Yu. Clay: A controllable large-scale generative model for creating high-quality 3d assets. ACM Transactions on Graphics (TOG), 43(4):1-20, 2024. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.371, + 0.905, + 0.439 + ], + "angle": 0, + "content": "[89] Chuanxia Zheng and Andrea Vedaldi. 
Free3d: Consistent novel view synthesis without 3d representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9720-9731, 2024. 1, 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.441, + 0.905, + 0.496 + ], + "angle": 0, + "content": "[90] Chuanxia Zheng, Tat-Jen Cham, and Jianfei Cai. Pluralistic image completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1438-1447, 2019. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.497, + 0.905, + 0.552 + ], + "angle": 0, + "content": "[91] Chuanxia Zheng, Duy-Son Dao, Guoxian Song, Tat-Jen Cham, and Jianfei Cai. Visiting the invisible: Layer-by-layer completed scene decomposition. International Journal of Computer Vision (IJCV), 129:3195-3215, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.553, + 0.905, + 0.621 + ], + "angle": 0, + "content": "[92] Chuanxia Zheng, Tat-Jen Cham, Jianfei Cai, and Dinh Phung. Bridging global context interactions for high-fidelity image completion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 11512-11522, 2022. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.622, + 0.905, + 0.691 + ], + "angle": 0, + "content": "[93] Xiao Zheng, Xiaoshui Huang, Guofeng Mei, Yuenan Hou, Zhaoyang Lyu, Bo Dai, Wanli Ouyang, and Yongshun Gong. Point cloud pre-training with diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22935-22945, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.692, + 0.905, + 0.748 + ], + "angle": 0, + "content": "[94] Xin-Yang Zheng, Hao Pan, Yu-Xiao Guo, Xin Tong, and Yang Liu. Mvd^2: Efficient multiview 3d reconstruction for multiview diffusion. In ACM SIGGRAPH 2024 Conference Papers, pages 1-11, 2024. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.748, + 0.905, + 0.803 + ], + "angle": 0, + "content": "[95] Linqi Zhou, Yilun Du, and Jiajun Wu. 3d shape generation and completion through point-voxel diffusion. In Proceedings of the IEEE/CVF international conference on computer vision, pages 5826-5835, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.804, + 0.905, + 0.872 + ], + "angle": 0, + "content": "[96] Jun-Yan Zhu, Zhoutong Zhang, Chengkai Zhang, Jiajun Wu, Antonio Torralba, Josh Tenenbaum, and Bill Freeman. Visual object networks: Image generation with disentangled 3d representations. Advances in neural information processing systems (NeurIPS), 31, 2018. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.873, + 0.905, + 0.901 + ], + "angle": 0, + "content": "[97] Zi-Xin Zou, Zhipeng Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Yan-Pei Cao, and Song-Hai Zhang. Triplane" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.092, + 0.905, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.925, + 0.509, + 0.937 + ], + "angle": 0, + "content": "12" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.124, + 0.092, + 0.486, + 0.15 + ], + "angle": 0, + "content": "meets gaussian splatting: Fast and generalizable single-view 3d reconstruction with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10324-10335, 2024. 
2" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.925, + 0.509, + 0.937 + ], + "angle": 0, + "content": "13" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.166, + 0.086, + 0.833, + 0.14 + ], + "angle": 0, + "content": "Amodal3R: Amodal 3D Reconstruction from Occluded 2D Images Supplementary Material" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.156, + 0.318, + 0.172 + ], + "angle": 0, + "content": "A. Implementation Details" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.181, + 0.254, + 0.197 + ], + "angle": 0, + "content": "A.1. Network Design" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.202, + 0.484, + 0.294 + ], + "angle": 0, + "content": "We adopt the network design in TRELLIS [81] to load the pre-trained image-to-3D weights and integrate the mask-weighted cross-attention mechanism to each DiT block (24 blocks in total). And each image-conditioned cross-attention layer is immediately followed by an occlusion-aware cross-attention layer." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.294, + 0.484, + 0.49 + ], + "angle": 0, + "content": "(a) Patchify and weight of visibility/occlusion mask The input condition image has a resolution of \\(512 \\times 512\\), which is resized to \\(518 \\times 518\\) to facilitate splitting into patches of size \\(14 \\times 14\\), as required by DINOv2 [54]. The resulting condition is then flattened into a tensor \\(\\mathbf{c}_{dino} \\in \\mathbb{R}^{1374 \\times 1024}\\), where the sequence length corresponds to \\(37 \\times 37\\) patches plus 1 CLS token and 4 register tokens. To better align the visibility and occlusion masks with the DINOv2 features, we first split the masks into patches of the same size, then calculate the weight score for each patch using Eq. 4 and Eq. 5. The final \\(\\mathbf{c}_{vis} \\in \\mathbb{R}^{1374 \\times 1}\\) and \\(\\mathbf{c}_{occ} \\in \\mathbb{R}^{1374 \\times 1}\\) are obtained by flattening the weight scores, with a value of 1 assigned to the CLS and register token positions." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.49, + 0.484, + 0.58 + ], + "angle": 0, + "content": "(b) Occlusion-aware cross-attention layer. We set the feature dimension of the occlusion-aware cross-attention layers to 1024, matching that of the image-conditioned cross-attention layers. To maintain consistent dimensions, we replicate the flattened occlusion masks to form a tensor \\(\\mathbf{c}_{\\text{occ\\_stack}} \\in \\mathbb{R}^{1374 \\times 1024}\\)." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.581, + 0.484, + 0.702 + ], + "angle": 0, + "content": "(c) Multi-head Cross-Attention. Our mask-weighted multi-head cross-attention (MHA) layer, which is implemented to encourage the model to focus its attention on the visible parts of the object, is an extension of the cross-attention described in the main paper. Specifically, \\( H \\) heads are run in parallel, resulting in \\( H \\) attention scores. 
For mask-weighted attention mechanism, we impose \\( c_{\\mathrm{vis}} \\) simultaneously to each head:" + }, + { + "type": "list", + "bbox": [ + 0.09, + 0.294, + 0.484, + 0.702 + ], + "angle": 0, + "content": null + }, + { + "type": "equation", + "bbox": [ + 0.179, + 0.728, + 0.482, + 0.744 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {A} _ {h} = \\operatorname {s o f t m a x} \\left(\\boldsymbol {S} _ {h} + \\log c _ {\\text {v i s}}\\right), \\tag {A.1}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.179, + 0.747, + 0.483, + 0.763 + ], + "angle": 0, + "content": "\\[\n\\mathrm {M H A} = \\left[ \\boldsymbol {A} _ {1} \\boldsymbol {v}; \\boldsymbol {A} _ {2} \\boldsymbol {v}; \\dots ; \\boldsymbol {A} _ {H} \\boldsymbol {v} \\right] \\tag {A.2}\n\\]" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.773, + 0.257, + 0.789 + ], + "angle": 0, + "content": "A.2. Training Details" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.795, + 0.484, + 0.901 + ], + "angle": 0, + "content": "(a) Pre-trained model loading. While TRELLIS is split into multiple modules, in our work we only train the sparse structure flow transformer and the structured latent flow transformer (see the overview figure where the \"fire\" symbols indicate the parts that are fine-tuned, and \"snowflake\" symbols indicate that we directly use the pretrained weights)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.157, + 0.907, + 0.31 + ], + "angle": 0, + "content": "(b) Data Augmentation. As described in Sec. 3.3, we generate random masks during training for data augmentation. Specifically, we begin by randomly drawing 1 to 3 lines, circles, and ellipses in the mask image. Next, to ensure these regions connect — thereby better simulating real-world occlusions, where mask regions are typically not highly fragmented — we randomly add 3 to 7 rectangular regions that have undergone an expansion operation. This results in a stable masking of the objects in the training data. Example inputs are presented in Fig. A.1." 
+ }, + { + "type": "image", + "bbox": [ + 0.522, + 0.339, + 0.646, + 0.434 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.651, + 0.339, + 0.771, + 0.434 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.777, + 0.339, + 0.898, + 0.434 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.523, + 0.436, + 0.645, + 0.531 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.651, + 0.436, + 0.771, + 0.531 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.777, + 0.436, + 0.898, + 0.531 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.523, + 0.532, + 0.645, + 0.627 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.651, + 0.532, + 0.771, + 0.627 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.777, + 0.532, + 0.898, + 0.627 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.541, + 0.636, + 0.629, + 0.649 + ], + "angle": 0, + "content": "Original image" + }, + { + "type": "image_caption", + "bbox": [ + 0.668, + 0.636, + 0.752, + 0.649 + ], + "angle": 0, + "content": "Masked image" + }, + { + "type": "image_caption", + "bbox": [ + 0.793, + 0.636, + 0.885, + 0.648 + ], + "angle": 0, + "content": "Occlusion mask" + }, + { + "type": "image_caption", + "bbox": [ + 0.512, + 0.66, + 0.907, + 0.703 + ], + "angle": 0, + "content": "Figure A.1. Examples of random mask generation. The visible areas are shown in white, occluded areas in gray and background in black." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.748, + 0.685, + 0.763 + ], + "angle": 0, + "content": "A.3. Inference Details" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.775, + 0.907, + 0.836 + ], + "angle": 0, + "content": "(a) 3D-consistent mask ratio. For the multi-view 3D-consistent masks described in Sec. 3.3, we set the mask ratio to a random number between 0.4 and 0.6 for each object, which results in a variety of reasonable mask areas." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.84, + 0.907, + 0.901 + ], + "angle": 0, + "content": "(b) Time consumption. Despite the introduction of additional cross-attention layers, our inference time remains comparable to that of the baselines. Amodal3R can generate and render each object in under 10 seconds." 
+ }, + { + "type": "list", + "bbox": [ + 0.512, + 0.775, + 0.907, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.495, + 0.925, + 0.504, + 0.936 + ], + "angle": 0, + "content": "1" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.111, + 0.096, + 0.191, + 0.16 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.115, + 0.164, + 0.196, + 0.237 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.11, + 0.241, + 0.2, + 0.254 + ], + "angle": 0, + "content": "Occluded input" + }, + { + "type": "image", + "bbox": [ + 0.221, + 0.098, + 0.286, + 0.16 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.287, + 0.099, + 0.359, + 0.16 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.21, + 0.166, + 0.287, + 0.228 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.286, + 0.168, + 0.368, + 0.229 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.225, + 0.241, + 0.356, + 0.254 + ], + "angle": 0, + "content": "GaussianAnything [37]" + }, + { + "type": "image", + "bbox": [ + 0.382, + 0.098, + 0.452, + 0.16 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.393, + 0.165, + 0.455, + 0.223 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.465, + 0.097, + 0.528, + 0.158 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.47, + 0.162, + 0.53, + 0.224 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.417, + 0.241, + 0.486, + 0.254 + ], + "angle": 0, + "content": "Real3D [28]" + }, + { + "type": "image", + "bbox": [ + 0.557, + 0.095, + 0.628, + 0.161 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.558, + 0.169, + 0.619, + 0.224 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.588, + 0.241, + 0.672, + 0.254 + ], + "angle": 0, + "content": "TRELLIS [81]" + }, + { + "type": "image", + "bbox": [ + 0.638, + 0.096, + 0.706, + 0.16 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.638, + 0.17, + 0.697, + 0.224 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.734, + 0.096, + 0.805, + 0.162 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.729, + 0.17, + 0.807, + 0.234 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.813, + 0.096, + 0.883, + 0.162 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.815, + 0.172, + 0.896, + 0.235 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.767, + 0.241, + 0.869, + 0.253 + ], + "angle": 0, + "content": "Amodal3R (Ours)" + }, + { + "type": "image_caption", + "bbox": [ + 0.232, + 0.266, + 0.764, + 0.28 + ], + "angle": 0, + "content": "Figure A.2. Examples using occluded images directly as the input of baseline models." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.306, + 0.296, + 0.323 + ], + "angle": 0, + "content": "B. Experimental Details" + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.33, + 0.284, + 0.345 + ], + "angle": 0, + "content": "B.1. 
Evaluation Protocol" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.352, + 0.483, + 0.426 + ], + "angle": 0, + "content": "We evaluate the results using Google Scanned Objects (GSO) (1,030 objects) [16] and a randomly sampled subset of Toys4K [69] containing 1,500 objects. Here, we provide additional details regarding the computation of our evaluation metrics." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.428, + 0.483, + 0.653 + ], + "angle": 0, + "content": "(a) Rendering quality and semantic consistency alignment To assess overall rendering quality, we compute the Fréchet Inception Distance (FID) [21] and Kernel Inception Distance (KID) [3]. Moreover, we evaluate semantic consistency using the CLIP score [59] by measuring the cosine similarity between the CLIP features of each generated image and its corresponding ground truth. For each object, we render 4 views using cameras with yaw angles of \\(\\{0^{\\circ}, 90^{\\circ}, 180^{\\circ}, 270^{\\circ}\\}\\) and a pitch angle of \\(30^{\\circ}\\). The camera is positioned with a radius of 2.0 and looks at the origin with a FoV of \\(40^{\\circ}\\), consistent with TRELLIS [81]. While FID and KID are calculated between the ground truth and generated sets (6,000 images for Toys4K and 4,120 images for GSO), the CLIP score is calculated in a pair-wise manner, and we report the mean value to evaluate semantic consistency." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.654, + 0.483, + 0.76 + ], + "angle": 0, + "content": "(b) Geometry quality For 3D geometry evaluation, we adopt Point cloud FID (P-FID) [51], Coverage Score (COV), and Minimum Matching Distance (MMD) using Chamfer Distance (CD). Following previous work [37, 51, 81], we sampled 4096 points from each GT/generated point cloud, which are obtained from the unprojected multi-view depth maps using the farthest point sampling." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.771, + 0.232, + 0.787 + ], + "angle": 0, + "content": "C. More Results" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.796, + 0.483, + 0.841 + ], + "angle": 0, + "content": "In this section, we provide additional qualitative examples and comparison results to further demonstrate the performance of our Amodal3R." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.849, + 0.367, + 0.866 + ], + "angle": 0, + "content": "C.1. Baselines using occluded input" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.871, + 0.483, + 0.901 + ], + "angle": 0, + "content": "We have stated in the main paper that \"occluded images will lead to incomplete or broken structures\" in current" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.308, + 0.906, + 0.382 + ], + "angle": 0, + "content": "3D generative models. Here, we provide examples where pix2gestalt is omitted and the occluded images are directly used as the input. As shown in Fig. A.2, when baseline models receive images of partially visible objects as input, they often fail to recover complete and intact 3D assets." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.395, + 0.807, + 0.411 + ], + "angle": 0, + "content": "C.2. More single-view to 3D examples" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.418, + 0.906, + 0.524 + ], + "angle": 0, + "content": "Due to the page restriction, we only provide limited examples in the main paper. Here we visualize more examples of single-view to 3D to further demonstrate the effectiveness of our method in Fig. C.4. 
The results show that compared with the 2D amodal completion + 3D generation baselines, our Amodal3R yields higher quality 3D assets across multiple categories." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.535, + 0.804, + 0.552 + ], + "angle": 0, + "content": "C.3. More multi-view to 3D examples" + }, + { + "type": "image", + "bbox": [ + 0.57, + 0.575, + 0.845, + 0.656 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.628, + 0.657, + 0.78, + 0.668 + ], + "angle": 0, + "content": "Multi-view Occluded input" + }, + { + "type": "image", + "bbox": [ + 0.57, + 0.667, + 0.769, + 0.744 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.669, + 0.746, + 0.756, + 0.758 + ], + "angle": 0, + "content": "pix2gestalt [56]" + }, + { + "type": "image", + "bbox": [ + 0.569, + 0.759, + 0.842, + 0.839 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.619, + 0.841, + 0.807, + 0.853 + ], + "angle": 0, + "content": "pix2gestalt [56] + Zero123++ [63]" + }, + { + "type": "image_caption", + "bbox": [ + 0.512, + 0.869, + 0.906, + 0.897 + ], + "angle": 0, + "content": "Figure C.3. Example of \"pix2gestalt\" and \"pix2gestalt + MV\" input of multi-view to 3D evaluation." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.504, + 0.936 + ], + "angle": 0, + "content": "2" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.09, + 0.092, + 0.482, + 0.317 + ], + "angle": 0, + "content": "We first provide visualized examples to explicitly explain the difference between the \"pix2gestalt\" and \"pix2gestalt + MV\" settings in the multi-view to 3D generation in Fig. C.3. For the \"pix2gestalt\" setting, we directly implement pix2gestalt for the amodal completion independently under each view. For the \"pix2gestalt + MV\" setting, we first choose the view with the greatest visibility from the 4 occluded views, then use pix2gestalt to complete the object (which is shown in the pix2gestalt column in the qualitative result), and subsequently use Zero123++ to get the 4 consistent views as the input of LaRa and TRELLIS. It can be observed that pix2gestalt alone results in obvious multiview inconsistency, while with Zero123++ the consistency is significantly improved, thus leading to better 3D generation quality." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.318, + 0.482, + 0.364 + ], + "angle": 0, + "content": "More multi-view to 3D examples are provided in Fig. C.5, where our Amodal3R again generates 3D assets with better quality than the baselines." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.374, + 0.43, + 0.389 + ], + "angle": 0, + "content": "C.4. More diverse 3D reconstruction results" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.397, + 0.482, + 0.443 + ], + "angle": 0, + "content": "In Fig. C.6, we show more examples to demonstrate that Amodal3R is able to generate diverse reasonable results from the occluded input with multiple stochastic samplings." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.453, + 0.317, + 0.468 + ], + "angle": 0, + "content": "C.5. More in-the-wild results" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.475, + 0.482, + 0.581 + ], + "angle": 0, + "content": "We provide more examples where we compare Amodal3R with \"pix2gestalt + TRELLIS\" pipeline in Fig. C.7. 
The results further demonstrate that 2D amodal method lacks 3D geometric understanding, often resulting in improper completion, such as completing the armrest of the chair as the backrest. In contrast, Amodal3R generates more plausible results with reasonable geometry and rich textural details." + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.582, + 0.482, + 0.656 + ], + "angle": 0, + "content": "In Fig. C.8 and Fig. C.9 we provide more visualization results on the in-the-wild images and the corresponding colored normal maps to show the geometry details. We implement Segment Anything [35] to obtain the segmentation masks." + }, + { + "type": "title", + "bbox": [ + 0.091, + 0.667, + 0.442, + 0.683 + ], + "angle": 0, + "content": "C.6. Discussion, Limitation and Future Work" + }, + { + "type": "text", + "bbox": [ + 0.09, + 0.689, + 0.483, + 0.901 + ], + "angle": 0, + "content": "While Amodal3R achieves impressive 3D amodal completion, it comes with several limitations we hope to solve in the future. 1) Dataset expansion. Due to the computational resources limitation, Amodal3R is trained on very limited data, consisting of only 20,627 synthetic 3D assets, predominantly confined to the furniture category. Consequently, completions on some complex or out-of-distribution objects may fail or lead to unrealistic structures. We believe that training on larger datasets, e.g. Objaverse-XL [15], could mitigate these issues. 2) Real-World data adaptation. Different from pix2gestalt, Amodal3R is trained exclusively on synthetic data. As a result, it cannot leverage environmental cues and must rely solely on the visible portions of occluded objects for reconstruction. Creating real" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.092, + 0.905, + 0.212 + ], + "angle": 0, + "content": "world 3D modal datasets will further enhance the ability to apply models to real scenes. 3) Controllable completion. Currently, how objects are completed is entirely up to the model itself and lacks control. Therefore, to further enhance the model to accept additional conditions, such as text, and allow users to edit and control the style of the completion process will be an important research direction for us in the future." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.504, + 0.936 + ], + "angle": 0, + "content": "3" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.09, + 0.153, + 0.899, + 0.808 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.814, + 0.908, + 0.842 + ], + "angle": 0, + "content": "Figure C.4. Additional single-view to 3D comparison examples. The occluders are shown in black and the visible regions are highlighted with red outlines." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.506, + 0.936 + ], + "angle": 0, + "content": "4" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.086, + 0.149, + 0.903, + 0.803 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.811, + 0.907, + 0.839 + ], + "angle": 0, + "content": "Figure C.5. Additional multi-view to 3D comparison examples. The occluders are shown in black and the visible regions are highlighted with red outlines." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.506, + 0.937 + ], + "angle": 0, + "content": "5" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.095, + 0.159, + 0.891, + 0.799 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.102, + 0.813, + 0.893, + 0.829 + ], + "angle": 0, + "content": "Figure C.6. Additional diverse examples. The occluders are shown in black and the visible regions are highlighted with red outlines." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.926, + 0.506, + 0.937 + ], + "angle": 0, + "content": "6" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.102, + 0.141, + 0.21, + 0.221 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.102, + 0.222, + 0.208, + 0.303 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.102, + 0.306, + 0.208, + 0.386 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.102, + 0.389, + 0.208, + 0.466 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.102, + 0.468, + 0.208, + 0.544 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.102, + 0.546, + 0.208, + 0.622 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.102, + 0.624, + 0.208, + 0.699 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.102, + 0.703, + 0.208, + 0.782 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.123, + 0.787, + 0.195, + 0.8 + ], + "angle": 0, + "content": "Input image" + }, + { + "type": "image", + "bbox": [ + 0.209, + 0.158, + 0.895, + 0.219 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.21, + 0.24, + 0.895, + 0.301 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.21, + 0.306, + 0.895, + 0.376 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.21, + 0.39, + 0.89, + 0.453 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.21, + 0.476, + 0.89, + 0.546 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.21, + 0.549, + 0.89, + 0.62 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.21, + 0.624, + 0.895, + 0.692 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.21, + 0.703, + 0.903, + 0.782 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.212, + 0.787, + 0.301, + 0.8 + ], + "angle": 0, + "content": "pix2gestalt [56]" + }, + { + "type": "image_caption", + "bbox": [ + 0.392, + 0.787, + 0.477, + 0.8 + ], + "angle": 0, + "content": "TRELLIS [81]" + }, + { + "type": "image_caption", + "bbox": [ + 0.692, + 0.787, + 0.796, + 0.8 + ], + "angle": 0, + "content": "Amodal3R (Ours)" + }, + { + "type": "image_caption", + "bbox": [ + 0.09, + 0.821, + 0.907, + 0.851 + ], + "angle": 0, + "content": "Figure C.7. Additional in-the-wild examples compared with pix2gestalt + TRELLIS. The target objects and occluders are marked with the red and green outlines." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.925, + 0.505, + 0.936 + ], + "angle": 0, + "content": "7" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.107, + 0.106, + 0.228, + 0.189 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.107, + 0.193, + 0.228, + 0.285 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.107, + 0.288, + 0.228, + 0.378 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.107, + 0.381, + 0.228, + 0.473 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.107, + 0.478, + 0.228, + 0.566 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.107, + 0.57, + 0.228, + 0.655 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.107, + 0.659, + 0.228, + 0.74 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.107, + 0.745, + 0.228, + 0.832 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.131, + 0.841, + 0.203, + 0.854 + ], + "angle": 0, + "content": "Input image" + }, + { + "type": "image", + "bbox": [ + 0.244, + 0.116, + 0.902, + 0.181 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.255, + 0.213, + 0.884, + 0.272 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.258, + 0.311, + 0.872, + 0.36 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.234, + 0.409, + 0.905, + 0.453 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.238, + 0.501, + 0.901, + 0.547 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.243, + 0.594, + 0.895, + 0.631 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.263, + 0.663, + 0.87, + 0.73 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.246, + 0.756, + 0.889, + 0.824 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.514, + 0.841, + 0.637, + 0.853 + ], + "angle": 0, + "content": "Reconstruction result" + }, + { + "type": "image_caption", + "bbox": [ + 0.131, + 0.875, + 0.863, + 0.889 + ], + "angle": 0, + "content": "Figure C.8. Additional in-the-wild examples. The target objects and occluders are marked with the red and green outlines." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.926, + 0.504, + 0.936 + ], + "angle": 0, + "content": "8" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.107, + 0.101, + 0.223, + 0.194 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.106, + 0.198, + 0.223, + 0.285 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.107, + 0.288, + 0.223, + 0.374 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.108, + 0.381, + 0.223, + 0.474 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.108, + 0.478, + 0.222, + 0.566 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.108, + 0.569, + 0.223, + 0.657 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.108, + 0.66, + 0.223, + 0.744 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.108, + 0.747, + 0.223, + 0.832 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.133, + 0.842, + 0.202, + 0.854 + ], + "angle": 0, + "content": "Input image" + }, + { + "type": "image", + "bbox": [ + 0.238, + 0.108, + 0.382, + 0.177 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.421, + 0.109, + 0.468, + 0.177 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.501, + 0.109, + 0.546, + 0.177 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.599, + 0.11, + 0.644, + 0.177 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.676, + 0.11, + 0.721, + 0.177 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.759, + 0.11, + 0.803, + 0.177 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.835, + 0.11, + 0.879, + 0.177 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.254, + 0.232, + 0.301, + 0.255 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.331, + 0.234, + 0.379, + 0.255 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.427, + 0.233, + 0.474, + 0.255 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.504, + 0.233, + 0.552, + 0.255 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.594, + 0.23, + 0.642, + 0.253 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.67, + 0.23, + 0.719, + 0.253 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.757, + 0.23, + 0.805, + 0.254 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.833, + 0.23, + 0.882, + 0.254 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.243, + 0.317, + 0.31, + 0.361 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.32, + 0.318, + 0.387, + 0.361 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.423, + 0.317, + 0.486, + 0.36 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.5, + 0.317, + 0.565, + 0.36 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.586, + 0.313, + 0.653, + 0.364 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.662, + 0.313, + 0.729, + 0.364 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.748, + 0.313, + 0.812, + 
0.366 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.824, + 0.313, + 0.89, + 0.366 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.259, + 0.391, + 0.305, + 0.456 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.336, + 0.392, + 0.383, + 0.456 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.425, + 0.392, + 0.47, + 0.456 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.502, + 0.391, + 0.548, + 0.456 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.598, + 0.391, + 0.645, + 0.456 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.674, + 0.391, + 0.72, + 0.456 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.757, + 0.391, + 0.803, + 0.456 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.829, + 0.392, + 0.875, + 0.456 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.255, + 0.495, + 0.307, + 0.551 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.334, + 0.495, + 0.385, + 0.551 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.421, + 0.495, + 0.477, + 0.551 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.498, + 0.495, + 0.553, + 0.551 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.595, + 0.495, + 0.648, + 0.553 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.671, + 0.494, + 0.724, + 0.553 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.752, + 0.494, + 0.806, + 0.553 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.828, + 0.494, + 0.882, + 0.554 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.251, + 0.589, + 0.311, + 0.644 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.328, + 0.589, + 0.388, + 0.644 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.419, + 0.588, + 0.48, + 0.643 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.498, + 0.587, + 0.556, + 0.643 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.594, + 0.585, + 0.651, + 0.643 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.67, + 0.585, + 0.727, + 0.643 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.75, + 0.585, + 0.807, + 0.643 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.825, + 0.585, + 0.883, + 0.644 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.266, + 0.663, + 0.297, + 0.728 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.344, + 0.664, + 0.374, + 0.728 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.433, + 0.663, + 0.462, + 0.727 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.51, + 0.663, + 0.541, + 0.728 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.607, + 0.663, + 0.637, + 0.728 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.683, + 0.663, + 0.712, + 0.728 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 
0.765, + 0.663, + 0.795, + 0.728 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.836, + 0.663, + 0.867, + 0.728 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.244, + 0.768, + 0.315, + 0.816 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.321, + 0.769, + 0.395, + 0.816 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.414, + 0.768, + 0.486, + 0.816 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.491, + 0.768, + 0.564, + 0.816 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.582, + 0.768, + 0.655, + 0.816 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.658, + 0.768, + 0.729, + 0.816 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.749, + 0.768, + 0.821, + 0.816 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.825, + 0.768, + 0.897, + 0.816 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.515, + 0.842, + 0.637, + 0.853 + ], + "angle": 0, + "content": "Reconstruction result" + }, + { + "type": "image_caption", + "bbox": [ + 0.133, + 0.875, + 0.863, + 0.889 + ], + "angle": 0, + "content": "Figure C.9. Additional in-the-wild examples. The target objects and occluders are marked with the red and green outlines." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.926, + 0.504, + 0.936 + ], + "angle": 0, + "content": "9" + } + ] +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13439/696dfadf-883e-4109-9010-e095246a341a_origin.pdf b/data/2025/2503_13xxx/2503.13439/696dfadf-883e-4109-9010-e095246a341a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0ef73d10bc022cc4820d7e8aebab4d47f4671036 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13439/696dfadf-883e-4109-9010-e095246a341a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0590ed78029929616c3666a7352ef15879e0d427bd4447b2805e82e41432d44c +size 12404791 diff --git a/data/2025/2503_13xxx/2503.13439/full.md b/data/2025/2503_13xxx/2503.13439/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d9c679b0639a2dfb479695d51b7bd11122ff1c91 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13439/full.md @@ -0,0 +1,894 @@ +# Amodal3R: Amodal 3D Reconstruction from Occluded 2D Images + +Tianhao Wu $^{1*}$ , Chuanxia Zheng $^{2\dagger}$ , Frank Guan $^{3}$ , Andrea Vedaldi $^{2}$ , Tat-Jen Cham $^{1}$ + +$^{*}$ S-Lab, $^{1}$ Nanyang Technological University, $^{2}$ Visual Geometry Group, University of Oxford $^{3}$ Singapore Institute of Technology + +{tianhao001,astjcham}@ntu.edu.sg,{cxzheng,vedaldi} $@$ robots.ox.ac.uk Frank.guan@singaporetech.edu.sg + +![](images/4143f763f0017e95d50b3bfa0df41273b2f73431d8dd3e70b30d7b6b03fd374b.jpg) +3D Recast +2D Input + +![](images/53928c56c2fe7023f935c8ed7e190c3c99585158a99df33f98e4c617f9a52d82.jpg) +Stochastic sample 1 + +![](images/b2eac496a0db86dd84347dc3365cd86f4d9d9dbebf9db5f474bbce4cba6e3601.jpg) +Stochastic sample 2 + +![](images/3f426092ee64dead895249bbba8131fb3b6e9802518102ab8dd167ce2d8b47aa.jpg) +2D Input + +![](images/9ead7c9cb53f7909b29e7f85b60b6bdac4b80a3a2f295f3707c7c8f057e70b5c.jpg) +In-the-wild Reconstruction + +![](images/465bfb5cb8402acef9e313ef6594a15529e8bea8756855a43fef3610bc64338a.jpg) +Stochastic sample 1 + 
+![](images/11eb59d8c9fdfcd0e9dd496e456bfa47d8b2b36fe75963ece0988b80f5c87583.jpg) + +![](images/340194d0841a1f92e679b6b0daba73136cb660484090381712dd57d67deb3dfc.jpg) +Stochastic sample 2 + +![](images/a8eb7dcb0fe28ebd2f0dafc836463edc5beb06c4ddc6ccc58bd84a56089f2e40.jpg) +3D Scene Decomposition +2D Input +Figure 1. Example results of Amodal3R. Given partially visible objects within images (occluded regions are shown in black, visible areas in red outlines), our Amodal3R generates diverse semantically meaningful 3D assets with reasonable geometry and plausible appearance. We sample multiple times to get different results from the same occluded input. Trained on synthetic datasets, it generalizes well to real-scene and in-the-wild images, where most objects are partially visible, and reconstructs reasonable 3D assets. + +![](images/ac961d2f9dbd923ec7b9abaa05db1e5574bddbc55d3e3a7e5686690e26040fe0.jpg) + +![](images/57e6957b2cf630b806a6c83f6eb4350841401464b6edcd0f59c622a62f1e78b9.jpg) +Decomposed 3D assets + +![](images/47b75ad456c10ac0a22005fe83b7b75c7a51c488ce3da0a89e9c6ba62d467e22.jpg) + +# Abstract + +Most image-based 3D object reconstructors assume that objects are fully visible, ignoring occlusions that commonly occur in real-world scenarios. In this paper, we introduce Amodal3R, a conditional 3D generative model designed to reconstruct 3D objects from partial observations. We start from a "foundation" 3D generative model and extend it to recover plausible 3D geometry and appearance from occluded objects. We introduce a mask-weighted multi-head cross-attention mechanism followed by an occlusion-aware attention layer that explicitly leverages occlusion priors to guide the reconstruction process. We demonstrate that, by training solely on synthetic data, Amodal3R learns to recover full 3D objects even in the presence of occlusions in real scenes. It substantially outperforms existing meth + +ods that independently perform 2D amodal completion followed by 3D reconstruction, thereby establishing a new benchmark for occlusion-aware 3D reconstruction. See our project page https://sm0kywu.github.io/Amodal3R/. + +# 1. Introduction + +Humans possess a remarkable ability to infer the complete 3D shape and appearance of objects from single views, even when the objects are partly hidden behind occluders. This ability, namely amodal completion, is key to operating in complex real-world environments, where objects are often partially occluded by surrounding clutter. However, existing image-based 3D reconstruction models [5, 6, 24, 42, 49, 50, 67, 72, 73, 76, 78, 81, 88, 89] fail to recover full 3D assets when the object is partially occluded. We thus consider the problem of reconstructing 3D objects + +from one or more partially-occluded views, a novel task that we call amodal 3D reconstruction. + +Amodal 3D reconstruction is a challenging task that requires reconstructing an object's 3D geometry and appearance while completing its occluded parts, both of which are highly ambiguous. Previous approaches to amodal 3D reconstruction [7, 56] have decomposed the task into 2D amodal completion [85, 86, 91], followed by conventional 3D reconstruction [42, 76, 89]. While these two-stage pipelines are easy to implement, they have some drawbacks. First, 2D amodal completion methods rely predominantly on appearance-based priors rather than explicit 3D structural cues. This lack of 3D geometric reasoning is suboptimal. Second, some 3D reconstruction methods can use or require multiple views to function. 
In this case, 2D amodal completion may lack multi-view consistency, particularly when it is performed independently for different views, which confuses the 3D reconstruction process. + +In this paper, we introduce Amodal3R, a novel single-stage paradigm for amodal 3D reconstruction that surpasses previous state-of-the-art approaches (see Fig. 1). Amodal3R augments the "foundation" 3D generator TRELLIS [81] with an additional branch to condition on occlusions. Its key advantage is performing reconstruction, completion, and occlusion reasoning directly in a 3D latent space instead of using a two-stage scheme. In this way, the model can reconstruct both visible and occluded regions of the object coherently and plausibly. To adapt TRELLIS to amodal reconstruction, we introduce mask weighting in multi-head cross-attention and a new occlusion-aware layer. These guide the model to focus more on visible parts of the object without perturbing the pre-trained model too much. + +We evaluated the effectiveness of Amodal3R on datasets like Google Scanned Objects [16] and Toys4K [69] augmented with occlusions, on 3D scenes from Replica [70], and on real-world monocular images. Without relying on additional heuristics, Amodal3R achieves significantly superior performance compared to state-of-the-art models and generalizes well to different datasets, including real ones. + +In summary, our main contributions are as follows: + +- We propose a novel occlusion-aware 3D reconstruction model that directly reconstructs complete and high-quality 3D objects from one or more partially occluded views, without relying on 2D amodal completion models. +- We introduce a mask-weighted cross-attention mechanism and an occlusion-aware layer to inject occlusion awareness into an existing 3D reconstruction model, improving both the geometry and appearance of the reconstructed objects when they are partially occluded. +- We demonstrate via experiments on the 3D scene dataset Replica and real-world images that our one-stage pipeline significantly outperforms existing two-stage ones, establishing a new benchmark for amodal 3D reconstruction. + +# 2. Background + +2D Amodal Completion. Recent advances in 2D amodal completion [2, 56, 83, 85, 86, 91] have achieved significant success in reconstructing occluded regions of objects in 2D images. While these methods show promise for 3D generation pipelines [2, 56], they still have limitations. First, 2D amodal completion models have limited 3D geometric understanding as they interpret images as 2D patterns. While excelling at completing textures, they may generate physically implausible geometries when handling complex occlusions. This stems from their lack of explicit 3D reasoning and reliance on 2D appearance priors, without true volumetric understanding. Second, for models that use multi-view images as input, the results of the 2D amodal completion are often inconsistent across views. Inconsistent views need to be corrected by the 3D reconstructor, which cause confusion (see Sec. 4). Although there has been significant work on multi-view consistent generation [7, 64, 72, 74, 82, 94], multi-view consistent completion is less explored. RenderDiffusion [1] contains an example, but the results are often blurry or lack details. This motivates our 3D-centric reconstruction framework. + +3D Shape Completion. Several methods [12, 14, 71, 95] start from a partial 3D reconstruction, then complete it in 3D. 
They use encoder-decoder architectures [25] or volumetric representations [10, 30] to robustly recover 3D shape but not texture, which is necessary in many applications. They also still require recovering the partial 3D geometry from an occluded image, a challenge in itself. Furthermore, they ignore the input image when completing the object in 3D, disregarding the occlusion pattern that caused the 3D geometry to be recovered only partially. Recent work [11] utilizes the partially visible object as input specifically for 3D shape completion. However, it focuses solely on geometry reconstruction, without recovering the object's texture and appearance. In contrast, our approach is end-to-end, recovering the complete 3D shape and appearance of the object from the occluded image. + +3D Generative Models. Early advancements in 3D generation are based on GANs [19], exploring various 3D representations such as point clouds [26, 40], voxel grids [84, 96], view sets [50, 57], NeRF [4, 5, 52, 62], SDF [18], and 3D Gaussian mixtures [79]. While GANs can capture complex 3D structures, they struggle with stability, scalability, and data diversity. The focus then shifted to diffusion models [23, 61, 68], which were more capable and versatile. They were first applied to novel view synthesis [77] in image space, before expanding to model a variety of 3D representations, including point clouds [46, 47, 51, 80, 93], voxel grids [38, 48], triplanes [66, 97], meshes [20, 44], and 3D Gaussian mixtures [9, 36, 39, 87]. Autoregressive models [8, 49, 67] have also been explored for mesh generation, focusing on improving the topology of the final 3D mesh. + +![](images/46e0fb0ee3858047a89e9ac6e8e2762da5cdf37cc141d2d1a5005dc02c86b545.jpg) + +![](images/0dfbd6bc3d0e73132a8794d2bf334fd7a6b88689f397e20c56056d8c9a2dbaa5.jpg) +Figure 2. Overview of Amodal3R. Given an image as input and the regions of interest, Amodal3R first extracts the partially visible target object, along with the visibility and occlusion masks using an off-the-shelf 2D segmenter. It then applies DINOv2 [55] to extract features $\pmb{c}_{dino}$ as additional conditioning for the 3D reconstructor. To enhance occlusion reasoning, each transformer block incorporates a mask-weighted cross-attention (via $\pmb{c}_{vis}$ ) and occlusion-aware attention layer (via $\pmb{c}_{occ}$ ), ensuring the 3D reconstructor accurately perceives visible information while effectively inferring occluded parts. For conditioning details, see Sec. 3.2. + +Authors have also shifted from performing diffusion in 2D image space [64, 72, 89] to 3D latent spaces [29, 53, 60, 65, 75, 81]. Such methods typically consist of two stages: the first for generating geometry and the second for generating appearance, and are capable of high-quality imaged-to-3D generation. However, they assume that input objects are fully visible, which limits their application to real-world scenes, where occlusions are frequent. Here, we extend such approaches to generate high-quality 3D assets from occluded input images — a challenging task that requires inferring complete 3D objects from partial information. + +# 3. Method + +Consider an image $x$ containing a partially occluded view of an object of interest $o_i$ , we wish to reconstruct the complete 3D shape and appearance $y$ of the object $o_i$ . This task is thus conceptually similar to existing image-to-3D reconstruction, except that here the object is partially occluded instead of being fully visible in $x$ . 
We call this problem amodal 3D reconstruction.

Here, we elaborate on Amodal3R (Fig. 2), a new method for amodal 3D reconstruction. Unlike recent two-stage methods [7, 56] that first perform 2D amodal completion followed by 3D reconstruction, Amodal3R is an end-to-end occlusion-aware 3D reconstruction model that generates complete shapes and multi-view geometry directly within the 3D space. Formally, Amodal3R is a conditional generator $v_{\theta}(y|o_i,M_{\mathrm{vis}},M_{\mathrm{occ}})$ that takes as input the image $x$, centered on the partially visible object $o_i$, as well as the visibility mask $M_{\mathrm{vis}}$ and occlusion mask $M_{\mathrm{occ}}$. The visibility mask $M_{\mathrm{vis}}$ marks the pixels of image $x$ containing the object $o_i$, while $M_{\mathrm{occ}}$ marks the pixels containing the occluders, i.e., all other objects that potentially obscure $o_i$. For real images, these masks can be efficiently obtained using pre-trained 2D segmentation models like Segment Anything [35]. By providing point coordinate prompts for the object of interest $o_i$ and its occluders respectively, the segmentation model can generate the corresponding masks. In cases where multiple occluders are present, or when occluders fragment the visible area of the target object, a sequential process is employed. Specifically, point prompts for each visible/occluding region are provided to the 2D segmenter one at a time, with the model generating an individual mask for each region. These masks are then aggregated to form a comprehensive visibility/occlusion mask.

The challenges for Amodal3R are how to: (1) produce a plausible and complete 3D shape and appearance from partial observations, even in the absence of multi-view information; and (2) ensure 3D consistency in terms of geometry and photometry, maintaining seamless visual coherence between visible and generated regions.

# 3.1. Preliminaries: TRELLIS

We begin by briefly reviewing the TRELLIS [81] model on which our model is based. TRELLIS is a conditional 3D diffusion model that performs denoising in a sparse 3D latent space. First, it introduces a transformer-based variational autoencoder (VAE) $(\mathcal{E},\mathcal{D})$, where the encoder $\mathcal{E}$ maps sparse voxel features to structured latents $\mathbf{z}$, and the decoder $\mathcal{D}$ converts them into desired output representations, including 3D Gaussians [31], radiance fields, and meshes. In particular, a 3D object $\mathbf{o}_i$ is encoded using its structured latent variables (SLAT) defined as $\mathbf{z} = \{(z_i, p_i)\}_{i=1}^L$, where $z_i \in \mathbb{R}^C$ is a local latent feature attached to the voxel at position $p_i \in \{0, 1, \dots, N-1\}^3$, $N$ is the spatial resolution of the grid, and $L \ll N^3$ is the number of active voxels intersecting the object's surface. This representation encodes both coarse geometric structures and fine appearance details by associating local latents with active voxels.

TRELLIS comprises two diffusion models, one to predict the active voxel centers $\{\pmb{p}_i\}_{i=1}^L$ (stage 1) and the other to recover the corresponding latents $\{\pmb{z}_i\}_{i=1}^L$ (stage 2). Each model can be viewed as a denoising neural network $v_\theta$ operating in a latent space $\ell$, and trained to remove Gaussian noise $\epsilon \sim \mathcal{N}(0, I)$ added to the latent code, i.e., $\ell^{(t)} = (1 - t)\ell^{(0)} + t\epsilon$, where $t \in [0,1]$ is the noise level [43].
The denoising network $v_\theta$ is trained to minimize the flow loss:

$$
\min_{\theta} \; \mathbb{E}_{(\boldsymbol{\ell}^{(0)}, x), t, \epsilon} \left\| v_{\theta}(\boldsymbol{\ell}^{(t)}, x, t) - (\epsilon - \boldsymbol{\ell}^{(0)}) \right\|_{2}^{2}, \tag{1}
$$

where $x$ is the image prompt. In stage 1, the latent code is a compressed version of the $N \times N \times N$ occupancy volume, where the spatial resolution is reduced from $N = 64$ to $r = 16$. Hence, in this case the latent code is a matrix $\ell \in \mathbb{R}^{L' \times C'}$ of $L' = r^3 = 4096$ tokens of dimension $C'$. In stage 2, the latent code $\ell = \{z_i\}_{i=1}^L \in \mathbb{R}^{L \times C}$ is a matrix of $L$ tokens of dimension $C$, where $L$ is now the number of active voxels. Similar transformers implement the corresponding denoising networks $v_{\theta}$ (Fig. 3). The conditioning image $x$ is read via cross-attention layers that pool information from the tokens $c_{\mathrm{dino}}$ extracted by DINOv2 [55] from the image $x$.

# 3.2. Mask-Conditional Generative Models

The key change needed to fine-tune the generator to work with partially occluded images is to condition the transformers $v_{\theta}$ not only on the image $x$, but also on the masks $M_{\mathrm{vis}}$ and $M_{\mathrm{occ}}$. A naive approach is to downsample or embed the masks to obtain tokens $(c_{\mathrm{vis}}, c_{\mathrm{occ}})$ that can be concatenated to the image tokens $c_{\mathrm{dino}}$ and processed by cross-attention in the transformer as before. However, the image $x$ contains significantly more information than the binary masks $M_{\mathrm{vis}}$ and $M_{\mathrm{occ}}$, so the transformer, which is initially trained to consider $x$ only, may simply ignore this information. This is compounded by the fact that learning to use this new information, which involves a new type of tokens that are incompatible with the image ones, may require aggressive fine-tuning of the transformer.

To sidestep this problem, inspired by masked attention approaches in 2D completion [92], we introduce mask-weighted cross-attention and an occlusion-aware attention layer to better utilize the visibility mask $M_{\mathrm{vis}}$ and the occlusion mask $M_{\mathrm{occ}}$ without disrupting the pre-trained 3D generator too much. These are described next.

![](images/c28eb32c3ce9ca4c19b888186c109fcd0b6e8954a43fed553318ca76a53e098b.jpg)
(a) TRELLIS [81]

![](images/e3ed020a61c74b01acd0ad7190934e06b7482437b4cce85de31b597a5ed49498.jpg)
(b) Amodal3R (Ours)

Figure 3. The transformer structure of Amodal3R. Compared with the original TRELLIS [81] design, we further introduce the mask-weighted cross-attention and occlusion-aware layer. This applies to both the sparse structure and SLAT diffusion models.

Mask-weighted Cross-Attention. A key novel component of our model is mask-weighted cross-attention, which allows the model to focus its attention on the visible parts of the object. We implement this mechanism in each attention block of the transformers $v_{\theta}$ of Sec. 3.1.
Given the latent tokens $\ell \in \mathbb{R}^{L \times C}$ input to a cross-attention layer, as well as the image features $c_{\mathrm{dino}} \in \mathbb{R}^{K \times C'}$ of the partially visible object, cross-attention computes the similarity score matrix

$$
\boldsymbol{q} = W_{q}\boldsymbol{\ell}, \quad [\boldsymbol{k}, \boldsymbol{v}] = W_{kv}\,\boldsymbol{c}_{\mathrm{dino}}, \quad S = \boldsymbol{q}\boldsymbol{k}^{\top}/\sqrt{D}, \tag{2}
$$

where $W_{q}$ and $W_{kv}$ are learnable projections that map the latents $\ell$ to the query $q$ and the conditioning image feature $c_{\mathrm{dino}}$ to the key $k$ and the value $v$, respectively, and $D$ is the dimension of the query and key vectors.

We wish to bias the token similarity matrix $S \in \mathbb{R}^{L \times K}$ towards the visible parts of the object. Recall that $K$ is the number of tokens $c_{\mathrm{dino}}$ extracted by DINOv2 [54] from the occluded image $x \cdot M_{\mathrm{vis}}$. Each token thus corresponds to a $P \times P$ patch in the input image (where $P = 14$). We extract analogous flattened 1-D tokens $c_{\mathrm{vis}} = [\rho(M_{\mathrm{vis},1}), \dots, \rho(M_{\mathrm{vis},K})]$ from the visibility mask by calculating the fraction of pixels that are visible in the $j$-th $P \times P$ image patch $M_{\mathrm{vis},j}$, i.e., $c_{\mathrm{vis},j} = \rho(M_{\mathrm{vis},j}) \in [0,1]$. We then use these quantities to bias the computation of the attention matrix $A = \operatorname{softmax}(S + \log c_{\mathrm{vis}}) \in [0,1]^{L \times K}$ via broadcasting. Hence:

$$
A_{ij} = \frac{\boldsymbol{c}_{\mathrm{vis},j} \exp\left(S_{ij}\right)}{\sum_{k=1}^{K} \boldsymbol{c}_{\mathrm{vis},k} \exp\left(S_{ik}\right)}. \tag{3}
$$

In this manner, the visibility weight modulates the attention matrix $A$ smoothly. If there are no visible pixels in a patch $j$, then $A_{ij} = 0$, so the corresponding image tokens are skipped in cross-attention. While we have illustrated how this works for a single head, in practice Amodal3R uses a multi-head transformer, to which Eq. (3) extends trivially. Please see the supplementary material for details.

![](images/5965ea718b9d135af93c8a920e517f6aac40214a9ac9d61495389ea154071baa.jpg)
Figure 4. 3D-consistent mask example. Given a 3D mesh, we render selected triangles in a distinct color from the others to generate multi-view consistent masks. This allows the evaluation of multi-view methods in handling contact occlusion. (The occluded regions are shown in red.)

Occlusion-aware Attention Layer. In addition to encouraging the network to focus its attention on the visible part of the object, we also introduce an occlusion-aware attention layer. For amodal completion, it is not enough to specify the visible information; we must also differentiate foreground occluders from the background, as this explicitly identifies the potential regions requiring completion. Namely, if a pixel is denoted as invisible in the mask $M_{\mathrm{vis}}$, this might be because there is an occluder in front of that pixel (so the pixel could have contained the object were it not for the occlusion), or because the pixel is entirely off the object. This information is encoded by the mask $M_{\mathrm{occ}}$.

To allow the model to distinguish between visible, occluded, and background areas, we add one more cross-attention layer, placed immediately after the mask-weighted cross-attention layer, that pools solely the occlusion mask $M_{\mathrm{occ}}$. To do so, we encode the occlusion mask $M_{\mathrm{occ}}$ as a set of flattened 1-D tokens $c_{\mathrm{occ}} = [\rho(M_{\mathrm{occ},1}), \ldots, \rho(M_{\mathrm{occ},K})]$, as before, and then simply pool $c_{\mathrm{occ}}$ using a cross-attention layer.
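For concreteness, the following PyTorch-style sketch shows one way the mask-weighted bias of Eq. (3) and the subsequent occlusion-aware pooling could be realized. It is a minimal single-head illustration written for this exposition rather than the released implementation; the module and variable names (`MaskWeightedCrossAttention`, `c_vis`, `c_occ`) are ours, and multi-head and batching details are omitted (see the supplementary material).

```python
import torch
import torch.nn as nn

class MaskWeightedCrossAttention(nn.Module):
    """Single-head sketch of Eqs. (2)-(3): cross-attention from latent tokens
    to DINOv2 image tokens, biased by per-patch visibility weights c_vis."""

    def __init__(self, dim_latent, dim_cond, dim_head):
        super().__init__()
        self.to_q = nn.Linear(dim_latent, dim_head, bias=False)
        self.to_kv = nn.Linear(dim_cond, 2 * dim_head, bias=False)
        self.to_out = nn.Linear(dim_head, dim_latent, bias=False)
        self.scale = dim_head ** -0.5

    def forward(self, latents, c_dino, c_vis):
        # latents: (L, C)   c_dino: (K, C')   c_vis: (K,), values in [0, 1]
        q = self.to_q(latents)                       # (L, D) queries
        k, v = self.to_kv(c_dino).chunk(2, dim=-1)   # (K, D) keys and values
        S = q @ k.t() * self.scale                   # (L, K) similarity scores, Eq. (2)
        # Bias the scores by log c_vis; fully occluded patches (c_vis = 0)
        # receive a very negative score and hence ~zero attention, Eq. (3).
        S = S + torch.log(c_vis.clamp_min(1e-12))[None, :]
        A = S.softmax(dim=-1)                        # (L, K) attention matrix
        return self.to_out(A @ v)

# The occlusion-aware layer follows the same pattern, but pools only the
# occlusion tokens c_occ (one scalar weight per patch, replicated along the
# conditioning dimension) through a second cross-attention layer placed
# immediately after the mask-weighted one.
```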
# 3.3. Simulating Occluded 3D Data

To train and evaluate our model, we require a dataset of 3D assets imaged in scenarios with clutter and occlusions. It is challenging to obtain such data in the real world, so we resorted to synthetic data and simulation to train our model.

Random 2D Occlusions. In order to train our model, each training sample $(o,x,M_{\mathrm{vis}},M_{\mathrm{occ}})$ consists of a 3D object $o$ (from which ground-truth latents can be obtained by using the encoders of Sec. 3.1), an image $x$ with partial occlusions, and corresponding visibility and occlusion masks $M_{\mathrm{vis}}$ and $M_{\mathrm{occ}}$. In a real scenario, occlusions arise from other objects in the scene. In a multi-view setting, like the one discussed below, these occlusions need to be consistent across views, reflecting the underlying scene geometry. However, because our model is trained for single-view reconstruction, we can simulate occlusions by randomly masking parts of the object after rendering it in 2D. This is simpler and allows us to generate fresh occlusion patterns each time a view is sampled for training.

Thus, given $o$ and an image $x$ rendered from a random viewpoint, we generate random occlusion masks as follows. Inspired by work on 2D completion [27, 41, 58, 90], we randomly place lines, ellipses, and rectangles, simulating diverse masking patterns. The union of these random shapes gives us the occlusion mask $M_{\mathrm{occ}}$. Then, if $M_{\mathrm{obj}}$ is the mask of the unoccluded object, the visible mask is given by $M_{\mathrm{vis}} = M_{\mathrm{obj}} \odot (1 - M_{\mathrm{occ}})$. Examples of such patterns are given in the supplementary material.

3D-Consistent Occlusions. In a real scene imaged from multiple viewpoints, occlusions are not random but consistent across views, as they are caused by other objects. This is particularly true for contact occlusions, where part of an object remains occluded by another from all viewpoints. To evaluate the model's performance under such challenging conditions, 3D-consistent masks are required.

To efficiently generate such contact occlusion masks, we leverage 3D meshes during rendering. Starting from a randomly chosen triangle, we apply a random-walk strategy to iteratively select neighboring triangles, forming continuous occluded regions until the predefined mask ratio is met. By rendering these masked meshes using the same camera parameters as the RGB images, we ensure multi-view consistency in the generated occlusion masks (see Fig. 4).
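As an illustration of this random-walk face selection, the sketch below grows a connected set of mesh triangles until a target fraction of the surface area is covered. It assumes a `trimesh`-style mesh exposing per-face areas and face adjacency; the function name and the area-based stopping rule are illustrative assumptions rather than the exact procedure used in our pipeline.

```python
import random
import numpy as np
import trimesh

def random_walk_occlusion_faces(mesh: trimesh.Trimesh, mask_ratio: float, seed: int = 0):
    """Grow a connected set of faces by a random walk until the selected
    faces cover roughly `mask_ratio` of the total surface area."""
    rng = random.Random(seed)
    # Build per-face neighbour lists from the (n, 2) face-adjacency pairs.
    neighbours = [[] for _ in range(len(mesh.faces))]
    for a, b in mesh.face_adjacency:
        neighbours[a].append(b)
        neighbours[b].append(a)

    areas = mesh.area_faces
    target = mask_ratio * areas.sum()
    current = rng.randrange(len(mesh.faces))
    selected, covered = {current}, areas[current]
    frontier = [current]

    while covered < target and frontier:
        current = rng.choice(frontier)
        unvisited = [f for f in neighbours[current] if f not in selected]
        if not unvisited:
            frontier.remove(current)   # dead end: continue the walk elsewhere
            continue
        nxt = rng.choice(unvisited)
        selected.add(nxt)
        covered += areas[nxt]
        frontier.append(nxt)
    return np.array(sorted(selected))

# Rendering the selected faces in a distinct colour with the same cameras as
# the RGB views then yields multi-view-consistent occlusion masks (Fig. 4).
```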
# 3.4. Reconstruction with Multi-view Input

Since our flow-based model performs multiple denoising steps and does not require known camera poses for input views, Amodal3R can accept multi-view reference images $\mathcal{X} = \{x_i\}_{i=1}^N$ as conditioning inputs at different steps of the denoising process. While multi-view conditioning naturally improves reconstruction performance, a potential issue is that an image used earlier in the denoising process has a stronger influence on the final 3D geometry, because early denoising steps establish the coarse geometry, whereas later steps refine higher-frequency details [33, 34]. Therefore, we prioritize input images based on their visibility. Specifically, when experimenting with multi-view inputs, we sort the images by object visibility $|M_{\mathrm{vis}}|$, ensuring that images with higher visibility are used as primary inputs.

# 4. Experiments

# 4.1. Experiment Settings

Datasets. Amodal3R is trained on a combination of 3D synthetic datasets: 3D-FUTURE (9,472 objects [17]), ABO (4,485 objects [13]), and HSSD (6,670 objects [32]). Once
| Method | V-num | 2D-Comp | FID ↓ | KID(%) ↓ | CLIP ↑ | P-FID ↓ | COV(%) ↑ | MMD(‰) ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GaussianAnything [37] | 1 | pix2gestalt [56] | 92.26 | 1.30 | 0.74 | 34.69 | 35.92 | 5.03 |
| Real3D [28] | 1 | pix2gestalt [56] | 91.21 | 2.02 | 0.75 | 23.92 | 19.61 | 9.21 |
| TRELLIS [81] | 1 | pix2gestalt [56] | 58.82 | 5.87 | 0.76 | 26.43 | 31.65 | 4.17 |
| Amodal3R (Ours) | 1 | - | 30.64 | 0.35 | 0.81 | 7.69 | 39.61 | 3.62 |
| LaRa [6] | 4 | pix2gestalt [56] | 172.84 | 4.54 | 0.70 | 66.34 | 24.56 | 8.11 |
| LaRa [6] | 4 | pix2gestalt [56]+MV | 97.53 | 2.63 | 0.75 | 21.80 | 26.21 | 8.61 |
| TRELLIS [81] | 4 | pix2gestalt [56] | 65.69 | 6.92 | 0.78 | 24.64 | 32.33 | 4.26 |
| TRELLIS [81] | 4 | pix2gestalt [56]+MV | 60.37 | 1.85 | 0.83 | 19.68 | 31.75 | 4.21 |
| Amodal3R (Ours) | 4 | - | 26.27 | 0.22 | 0.84 | 5.03 | 38.74 | 3.61 |

Table 1. Amodal 3D reconstruction results on GSO [16]. Here, V-num denotes the number of input views, and 2D-Comp refers to the 2D completion method. For single-view image-to-3D, we first complete occluded objects using pix2gestalt [56] before passing them to various 3D models. For multi-view image-to-3D, we adopt two variants: 1) pix2gestalt [56]: applied independently on each view; 2) pix2gestalt [56]+MV: a single-view completion is generated first, followed by multi-view diffusion [63] to synthesize 4 views as inputs. Without bells and whistles, Amodal3R consistently outperforms state-of-the-art models across all evaluation metrics.
| Method | V-num | 2D-Comp | FID ↓ | KID(%) ↓ | CLIP ↑ | P-FID ↓ | COV(%) ↑ | MMD(‰) ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GaussianAnything [37] | 1 | pix2gestalt [56] | 57.17 | 1.22 | 0.80 | 21.97 | 33.56 | 7.23 |
| Real3D [28] | 1 | pix2gestalt [56] | 59.92 | 1.63 | 0.79 | 23.31 | 24.35 | 9.53 |
| TRELLIS [81] | 1 | pix2gestalt [56] | 43.05 | 6.83 | 0.80 | 26.04 | 26.28 | 6.87 |
| Amodal3R (Ours) | 1 | - | 23.45 | 0.42 | 0.83 | 5.00 | 37.09 | 5.89 |
| LaRa [6] | 4 | pix2gestalt [56] | 123.52 | 3.61 | 0.75 | 45.91 | 27.89 | 9.67 |
| LaRa [6] | 4 | pix2gestalt [56]+MV | 75.33 | 4.14 | 0.80 | 13.00 | 24.82 | 10.93 |
| TRELLIS [81] | 4 | pix2gestalt [56] | 46.34 | 8.77 | 0.81 | 28.76 | 25.35 | 7.13 |
| TRELLIS [81] | 4 | pix2gestalt [56]+MV | 43.00 | 7.53 | 0.81 | 24.41 | 26.55 | 7.05 |
| Amodal3R (Ours) | 4 | - | 20.93 | 0.50 | 0.85 | 3.78 | 39.03 | 5.75 |
+ +Table 2. Amodal 3D reconstruction results on Toys4K [69]. The experimental setting is the same to Tab. 1, except for the dataset. + +Amodal3R is trained, we first assess its effectiveness on Toys4K (randomly sampling 1,500 objects [69]) and Google Scanned Objects (GSO) (1,030 objects [16]), which are excluded from our training data for our and the baseline model. During inference, a 3D-consistent mask occludes each input object, and each view is augmented with additional randomly generated occlusion areas. This ensures that the model cannot directly extract the region required to be completed from the occlusion regions. To further assess the out-of-distribution generalization ability in practical applications, we also evaluate all models on the 3D scene dataset Replica [70] as well as on in-the-wild images. The training and evaluation dataset will be released. + +Metrics. To measure the quality of the rendered images, we use the Fréchet Inception Distance (FID) [21] and the Kernel Inception Distance (KID) [3]. To measure the quality of the reconstructed 3D geometry, we use the Coverage Score (COV), the Point cloud FID (P-FID) [51], and the Minimum Matching Distance (MMD) using the Chamfer Distance (CD). COV measures the diversity of the results and P-FID and MMD the quality of the 3D reconstruction. We also use the CLIP score [59] to evaluate the consistency between each pair generated and ground-truth objects. + +Baselines. Most 3D generative models are trained on complete object inputs. To ensure fair comparisons, we complete the partially visible 2D objects before passing them to 3D generators. Sepecifically, we use pix2gestalt [56], a state-of-the-art 2D amodal completion network. Using this protocol, we compare Amodal3R to state-of-the-art methods such as TRELLIS [81], Real3D [28], GaussianAnything [37]ICLR'25, and LaRa [6]ECCV'24. + +Implementation Details. Amodal3R is trained on 4 A100 GPUs (40G) for 20K steps with a batch size of 16, taking approximately a day. Following TRELLIS [81], we implement classifier-free guidance (CFG [22]) with a drop rate of 0.1 and AdamW [45] optimizer with a learning rate of 1e-4. More details are provided in the supplementary material. + +# 4.2. Main Results + +Quantitative Results. We compare Amodal3R to the state-of-the-art for amodal 3D reconstruction in Tabs. 1 and 2. Amodal3R significantly outperforms the baselines across all metrics in both single- and multi-view 3D reconstruction, demonstrating its effectiveness. 
Interestingly, baselines that use multiple but potentially inconsistently-completed views ("4 V-num + pix2gestalt") are worse than + +![](images/4b8d63cfeb365c168c06a219dc90ea157ceb77c53fd03485af894f8574f4b1fa.jpg) +Input + +![](images/344c6fd2fe665ca0ce9a8f9179dbb7878728c433a2dd43bd58d637e4948f50cf.jpg) +pix2gestalt [56] + +![](images/16e94e10bf7663bc001757c6ebe65f1b7b94567cdffeb0398ac46bf6e56d3e57.jpg) +GaussianAnything [37] + +![](images/5a1da316ffe4ed72d7be02956ca4e03b0d234d5db15a8162959d54e1d1e6dfe8.jpg) + +nything [37] + +![](images/ddc7a6df708c27a38eaf717a8aec85c34b164d6a97f8e0d778beae5ec5f9f310.jpg) +Real3D [28] + +![](images/53065c4743828b25fc578e7b842aa49005b69b45655eb966cf5d59c062cdae21.jpg) + +![](images/3bca9cdae6c37dd6588bf565d6e9080fbded9ce3398cf90ae1e0e62214b9d844.jpg) +TRELLIS [81] + +![](images/4aae53f2a0e5edd86fb299ae111186e8b48fbc9ea078f0c202ed0c0010334b89.jpg) + +![](images/ff797a21c781d899dc09562ba90d6384c222e9ad2c7f5bfd1d718e730abed118.jpg) +Amodal3R (Ours) + +![](images/24633556114d5a1d318f35be6d2097f51e2e72d27af2b872c7713a407b004ee8.jpg) + +![](images/fc8f85350f0e48cf02df3b0419da21402ace9dd34d2f514b8e2615466d2415fd.jpg) +Figure 5. Single-view amodal 3D reconstruction. The occlusion regions are shown in black and the visible regions are highlighted with red outlines. More examples are provided in supplementary material Fig. C.4. +Input +Figure 6. Multi-view amodal 3D reconstruction. The above results are reconstructed using 4 occluded input views. Due to limited space, we present only the best results for LaRa and TRELLIS under the "pix2gestalt+MV" setting. We apply 3d-consistent mask and random extended occlusion areas. More examples are provided in supplementary material Fig. C.5. + +![](images/f7805e3946002aa68a40022a399e5caa06e504fa500c83933080db55dfd6dab4.jpg) + +![](images/5339ec554482daf2e2b186da5f98bd2ba1dbad05ab5f4ea3186b550e61a5e906.jpg) +pix2gestalt [56] + +![](images/a50ea849adc47903143abd5e8b28edbcaeb90432984776e24f975dad521fb428.jpg) + +![](images/b5e9f37b71f7f904705327927ff5b42973efda9d537d57fd5fc2f8b77fdd81d0.jpg) +LaRa [6] + +![](images/eb1459a55bc21480741609bfe8d472593a3382ef284c65f29b396b9a855330cc.jpg) + +![](images/08c973d3912e5936e18998074a07045d201ec77a25069277dd7ae43a3791ac01.jpg) + +![](images/c3ed58be125412bbae5a4b13f4510c85405227759cdedbec02d084f4d8e353fb.jpg) + +![](images/0ae607b0ed818e015cd57d7087130d6ce02033f15d780990f3987937fd9ce03c.jpg) +TRELLIS [81] + +![](images/41d1151c236a7cec6d230fea72a0f2caa1d9a5274fe82667022ece097f847a36.jpg) + +![](images/76f887eb88ebf2c6160d9c327f4bf15d0edfd97f184077642698e01d28a2a6d7.jpg) + +![](images/8b6750b249027312a8ede14ba56685d18050f471e8c5276fb89b63ce7f37a3ff.jpg) + +![](images/2214f9535709da7a66e819a76fed68d43724677f6fb237c4c0f16ae40cce9c3b.jpg) +Amodal3R (Ours) + +![](images/af774c2718c4ddc1604bb1329c96d6898f06b93c429021097ad4f2c16fd21cee.jpg) + +![](images/a42058f2e19492d3aa48221c2c95e3d28f0636c89d280016eca2cd55950ba8ec.jpg) + +![](images/dbff7913cd8a313d4d155aa78cbab97d8e0023cfcee1bef29ec045ca5ba7f12b.jpg) + +![](images/170a8c5842860fb67710a18f8d7e53ca8d1ee22168712426fc88da4a3c3f7e2d.jpg) + +![](images/35a7947758f0102a26fee60a36285b0655e2f366cddf71340a6b72a4c2391df8.jpg) +Input +' Naive concatenation +Figure 7. Ablation examples. Naive concatenation fails to reconstruct complete shape and appearance. Mask-weighted attention alone extends geometry into background regions, while occlusion-aware attention alone cannot guarantee photorealistic appearance. 
+ +![](images/7c81d0cddabd718eb9edf4e8e3b2a8a16407800d75d548cd2907a4c38bd9bea0.jpg) + +![](images/4ca588e11862304c6a919e4a6f6fa6e8bbaae77b1fe6d06e08bd164094c6c0e9.jpg) + +![](images/28ce4b6da60edc15f838a61563cd751ed3180e19828fe1f64678481295a8197b.jpg) + +![](images/e87e5b38346fb07edb29b77cd1f21a4dd50ca17335e4996826642195b95ff208.jpg) + +![](images/33fe9e4da816b749ece4560cd589d4bea0c510d1f66b97c80af08c91a5959dc2.jpg) + +![](images/282a12ceadacfc33dc7090398b24cbde1b529ccb2a9de736439c241daada2ff2.jpg) +w/only $M_{\mathrm{vis}}$ attention + +![](images/a475ad7bf12abea80cb0ba9edd5f93c6aa0826811af10dc901c047e0cd304868.jpg) + +![](images/45e13209ef78b48205eadcb06c0615e6bc18a41dd4a749ac5ded2f055c6e65f0.jpg) + +![](images/e3167b745021840cfe0c3aeba487e726a61f5d1ee451d65b929016dff2e6ffc9.jpg) +w/only $M_{\mathrm{occ}}$ attention + +![](images/77d9f6728e5665e1306318e198443c8a1031444766ad537fd7d9d609d84da3dd.jpg) + +![](images/0292327dff33db77e2f43cfdcf0aca252b42b48e115c1bbff578282a1e557d98.jpg) + +![](images/40ac8e5cd597b91c59eb1ea0902c1c0bc39caba18c756f0f933280933cf02a9c.jpg) +Full model + +using a single completed view ("1 V-num + pix2gestalt"). This shows that inconsistent 2D completion does confuse reconstruction models to the point that using a single view is preferable (Sec. 2). This issue does not affect Amodal3R as it does not rely on 2D completion; in our case, utilizing more occluded views does improve the results as expected. + +Qualitative Results. The qualitative results are shown in Figs. 5 and 6 and in supplementary material Figs. C.4 and C.5. For all baselines, pix2gestalt is first applied for 2D amodal completion (second column), and the completed images are passed to each baseline image-to-3D model. Amodal3R produces high-quality 3D assets even under challenging conditions in both single-view and multi-view + +
| Method | FID ↓ | KID(%) ↓ | COV(%) ↑ | MMD(‰) ↓ |
| --- | --- | --- | --- | --- |
| naive conditioning | 31.96 | 0.49 | 37.96 | 3.61 |
| w/ only mask-weighted attention | 30.53 | 0.38 | 36.90 | 3.69 |
| w/ only occlusion-aware layer | 31.77 | 0.57 | 40.19 | 3.51 |
| full model (Ours) | 30.64 | 0.35 | 39.61 | 3.62 |
+ +Table 3. Ablations on different mask conditioning designs. The Best and the second best results are highlighted. + +image-to-3D tasks. In contrast, 2D amodal completions inconsistencies accumulate as more views are added, particularly when pix2gestalt is more uncertain, confusing the reconstructor models downstream. For instance, in the giraffe example in Fig. 5, the pix2gestalt completion fails to capture the overall structure of the 3D object correctly, which in turn leads the 3D generator models to misinterpret it as a woodstick-like shape. In contrast, Amodal3R reconstructs the 3D geometry and appearance well, with good alignment to the occluded inputs. These findings highlight not only the effectiveness of Amodal3R but also the advantage of completing objects while reconstructing them, which avoids relying on monocular 2D completion models that may introduce inconsistencies. + +# 4.3. Ablation Study + +We conducted several ablations to study the impact of the different components of Amodal3R and report the results in Tab. 3 and Fig. 7. For these experiments, we test single-view image-to-3D reconstruction on the GSO dataset. + +Naive Conditioning. We first evaluated a version of the model that still conditions the reconstruction on the visi + +![](images/200397b453ee819ae7dd4f2ea646793e770a7310034f8a2f32d950b559d1aaad.jpg) +Occluded input + +![](images/7063db8c5a5bd2b6301c906bc345c66adcbd53290a21cbd3959582d1ae385ba6.jpg) + +![](images/0b5a2809eb2eb70a683759fa3294f926e8118b6f3f5bc2e8298be8c0dda5acf6.jpg) + +![](images/10bb96547372de9b4f65d4eaf0c6141fbc7f0c222a00a2e942f4714b00276065.jpg) + +![](images/ba5eb638fabfa1d44f967225d2a2705ce9672b003cd7ee8dbbffca2815b08da6.jpg) + +![](images/350c99a7dd911ccf593f469842074ab9d5ce521a997ceaef24354976106ec05f.jpg) + +![](images/76ab33323ee953b7484b82fd71034ba1c0694d70c026740e58f0e37071d04d7b.jpg) + +![](images/e9fc53c83e64d69d56473b13d9cf025038ffb0a22ca274e9c7cc9d215834edbd.jpg) + +![](images/be3264b8b941c2ab0f81fe0ad141d6ac5078eec8d0feb03fdac2e59db115630d.jpg) + +![](images/76822cbfe9df726693fb6a192713b4c26a42092d8f023015bcff394b26c0e8ba.jpg) + +![](images/be50f897dc7b5833fd3799b532973036dc6f43eca836308d1eae730060cf5386.jpg) +pix2gestalt [56] + +![](images/bc2fc0f147d342da6fc5086cdbc462ef9a6133e193a57d59e3342584957f4600.jpg) + +Novel views + +![](images/3f0690db07f3e2de1dcb4539f91f6fd2da780fcf0b23ecdd19b09f10c4a2b482.jpg) +Real3D [28] + +![](images/6be0ab649a90c7e78d3f937affcceda47d57a38310d041186c17d451bbd7fc15.jpg) + +![](images/8c88269e51c66fd1d7a2f8e4eee5496f8dd65a9c22e0427e65b3071ee782d987.jpg) + +![](images/e9461051aa711aa2a02f6151e859e9065c3cc6830a48ca9819d07d694df23137.jpg) + +![](images/9887c95bebc5c836fb6365b6d00fa9131761303e1f18801193509e0736ae6d6d.jpg) + +![](images/5b3325bbcf41da6807dfe91632d5b72309bb8f548f7117299add63bfb36f0dd5.jpg) + +![](images/0436f7b4f917ac40bc1178bcecd4d2507c73de79c39d3247c622f3b88ec8dab8.jpg) + +![](images/de8e3973fc101975ccc4d55e99daa0e75ecdf0ffe76d175c3a4eb8f66839420b.jpg) + +![](images/16074b541591a6c0599039739318680c096fc55bc1bd6d6c758d7914adccda39.jpg) + +![](images/736e8cf2151488d8fa503ff4669c5f471d9197d18f871258fe25745f54af420e.jpg) + +![](images/fc1c7b906106d3c2535af2b3ff68f48d36973bf0713210e9b5530f249970eedb.jpg) + +![](images/958925104b6a6de5fb02fda3c3d1e28e93fccfbdbb89d9f987c768b2ff365f3a.jpg) + +![](images/a50c8ecd263a6364cf47396dc2a5a0e1535a8c6165bebb4a6e03063e8a105a8c.jpg) +Occluded input +Figure 8. Examples on Replica [70] and in-the-wild images. 
The target objects and the occluders are shown in red and green outlines. + +![](images/45e6b6daacb0e381376c64d91c368b9d73b9f823fb4c0240369ddaa998806e7c.jpg) + +![](images/17048f440df24527f3766f08317540eebaa019d762cb04164ad02bdc06cb028d.jpg) + +![](images/800958d49af741babbe642e9558420b3494187539a3e11efe91645f4be6f77ef.jpg) + +![](images/f7b7f66e83d2bf216ea9dbe2aa9de530f935d25f68b932e37f802f1b1eaa07a4.jpg) +Novel views + +![](images/3ca9dcb41744fa7472a5dc51f67271f0714f4d2f4c1467b5e01052836b1d0e0e.jpg) + +![](images/3a243e98dffc9f87e7ad296a48b11c97595783f58d9893880df71e77bb952838.jpg) + +![](images/f1d752d3d5e77e56be8c66c9c56e72dffb7bfdcbfe2586e72410d9c2b8500fe0.jpg) + +![](images/2dbda0b4fbc6011b286ddeb6f2034aeffe7b9797fc27cc220c8d72b5abde4352.jpg) + +![](images/4ef4fb4a6a9188008e9ff35804e9704402f4e8a130e42e4a5758ef9c1d7656a5.jpg) +TRELLIS [81] +Amodal3R (Ours) + +![](images/789951e6b632a16df3aa69afad044cfe8b02196fc2f0ae28e80fc511b5fcdc94.jpg) +Normal maps + +bility and occlusion masks, but without using the modules of Sec. 3.2. Instead, we directly concatenate the tokens $c_{\mathrm{vis}}$ to $c_{\mathrm{dino}}$ to form the condition for the cross-attention layer. The results (first row in Tab. 3 and second column in Fig. 7) show that, while the resulting model can still perform basic completion, the textures in the occluded regions are inconsistent with those in the visible ones, and the reconstructed geometry is inaccurate, e.g. the hole in the shoe. + +Mask-weighted Attention. To evaluate the effectiveness of our proposed mask-weighted attention mechanism, we integrate it into the training while omitting the occlusion-aware layer. The results demonstrate improved rendering quality — especially in capturing texture details in the visible areas — and significantly enhanced appearance consistency. However, the geometries exhibit deficiencies, as seen in the problematic shoe in Fig. 7 (first row), and the toy monster with a broken back (third row). + +Occlusion-aware Layer. The integration of the occlusion-aware layer improves the geometry both quantitatively and qualitatively. This aligns with our motivation for introducing an additional cross-attention layer, aimed at reconstructing the visible areas via the image-conditioned layer and occluded areas via the subsequent layer. However, occlusion-aware layer alone still results in unsatisfactory appearances, which again indirectly attests to the effectiveness of mask-weighted attention mechanism. + +Full Model. Consequently, the full model integrating both modules achieves optimal 3D generation results characterized by complete geometry and consistent textures. + +# 4.4. Real-World Generation / Completion + +Amodal3R is inherently superior in generalizing to out-of-distribution amodal 3D reconstruction, primarily due to the fact that we build upon the model on a "foundation" 3D generator, and fine-tune it with diverse categories. To demonstrate this advantage, we conduct scene-level dataset evaluations: Replica [70] (first two rows) and on in-the-wild images (3rd-5th rows) in Fig. 8. Here we adopt Segment Anything [35] to get the visibility and occlusion masks. The results show that Amodal3R generates reasonable 3D assets, whereas pix2gestalt fails to infer complete shapes from the same inputs, leading to unsatisfactory results from subsequent image-to-3D models. We also visualize the colored normal maps, which show that the results of Amodal3R are simultaneously rich in geometric and textural detail. + +# 5. 
Conclusion + +We have introduced Amodal3R, a novel approach to reconstruct complete 3D shape and appearance from partially visible 2D images. The key to our success is that we construct mask-weighted cross-attention mechanism and occlusion-aware layer to effectively exploit visible and occluded information. Compared to the state-of-the-art methods that rely + +on sequential 2D completion followed by 3D generation, our Amodal3R achieves remarkably better performance by operating directly in 3D space. Furthermore, results on inthe-wild images indicate its potential for subsequent applications in 3D decomposition and scene understanding, marking a step towards robust 3D asset reconstruction in real-world environments with complex occlusion. + +# References + +[1] Titas Anciukevicius, Zexiang Xu, Matthew Fisher, Paul Henderson, Hakan Bilen, Niloy J Mitra, and Paul Guerrero. Renderdiffusion: Image diffusion for 3d reconstruction, inpainting and generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12608-12618, 2023. 2 +[2] Jiayang Ao, Yanbei Jiang, Qiuhong Ke, and Krista A Ehinger. Open-world amodal appearance completion. arXiv preprint arXiv:2411.13019, 2024. 2 +[3] Mikołaj Binkowski, Danica J Sutherland, Michael Arbel, and Arthur Gretton. Demystifying mmd gans. arXiv preprint arXiv:1801.01401, 2018. 6, 2 +[4] Eric R Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 5799-5809, 2021. 2 +[5] Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J Guibas, Jonathan Tremblay, Sameh Khamis, et al. Efficient geometry-aware 3d generative adversarial networks. In CVPR, 2022. 1, 2 +[6] Anpei Chen, Haofei Xu, Stefano Esposito, Siyu Tang, and Andreas Geiger. Lara: Efficient large-baseline radiance fields. In European Conference on Computer Vision, pages 338-355. Springer, 2024. 1, 6, 7, 5 +[7] Minghao Chen, Roman Shapovalov, Iro Laina, Tom Monnier, Jianyuan Wang, David Novotny, and Andrea Vedaldi. Partgen: Part-level 3d generation and reconstruction with multi-view diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. 2, 3 +[8] Yiwen Chen, Tong He, Di Huang, Weicai Ye, Sijin Chen, Ji-axiang Tang, Xin Chen, Zhongang Cai, Lei Yang, Gang Yu, et al. Meshanything: Artist-created mesh generation with autoregressive transformers. arXiv preprint arXiv:2406.10163, 2024. 2 +[9] Yuedong Chen, Chuanxia Zheng, Haofei Xu, Bohan Zhuang, Andrea Vedaldi, Tat-Jen Cham, and Jianfei Cai. Mvsplat360: Feed-forward 360 scene synthesis from sparse views. In Neural Information Processing Systems (NeurIPS), 2024. 2 +[10] Yen-Chi Cheng, Hsin-Ying Lee, Sergey Tulyakov, Alexander G Schwing, and Liang-Yan Gui. Sdfusion: Multimodal 3d shape completion, reconstruction, and generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4456-4465, 2023. 2 +[11] Junhyeong Cho, Kim Youwang, Hunmin Yang, and TaeHyun Oh. Robust 3d shape reconstruction in zero-shot from + +a single image in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. 2 +[12] Ruihang Chu, Enze Xie, Shentong Mo, Zhenguo Li, Matthias Nießner, Chi-Wing Fu, and Jiaya Jia. 
Diffcomplete: Diffusion-based generative 3d shape completion. Advances in Neural Information Processing Systems, 36, 2024. 2 +[13] Jasmine Collins, Shubham Goel, Kenan Deng, Achleshwar Luthra, Leon Xu, Erhan Gundogdu, Xi Zhang, Tomas F Yago Vicente, Thomas Dideriksen, Himanshu Arora, et al. Abo: Dataset and benchmarks for real-world 3d object understanding. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 21126-21136, 2022. 5 +[14] Ruikai Cui, Weizhe Liu, Weixuan Sun, Senbo Wang, Taizhang Shang, Yang Li, Xibin Song, Han Yan, Zhennan Wu, Shenzhen Chen, et al. Neusdfusion: A spatial-aware generative model for 3d shape completion, reconstruction, and generation. In European Conference on Computer Vision, pages 1-18. Springer, 2024. 2 +[15] Matt Deitke, Ruoshi Liu, Matthew Wallingford, Huong Ngo, Oscar Michel, Aditya Kusupati, Alan Fan, Christian Laforte, Vikram Voleti, Samir Yitzhak Gadre, et al. Objverse-xl: A universe of $10\mathrm{m} + 3\mathrm{d}$ objects. Advances in Neural Information Processing Systems, 36, 2024. 3 +[16] Laura Downs, Anthony Francis, Nate Koenig, Brandon Kinman, Ryan Hickman, Krista Reymann, Thomas B McHugh, and Vincent Vanhoucke. Google scanned objects: A high-quality dataset of 3d scanned household items. In 2022 International Conference on Robotics and Automation (ICRA), pages 2553-2560. IEEE, 2022. 2, 6 +[17] Huan Fu, Rongfei Jia, Lin Gao, Mingming Gong, Binqiang Zhao, Steve Maybank, and Dacheng Tao. 3d-future: 3d furniture shape with texture. International Journal of Computer Vision, 129:3313-3337, 2021. 5 +[18] Jun Gao, Tianchang Shen, Zian Wang, Wenzheng Chen, Kangxue Yin, Daiqing Li, Or Litany, Zan Gojcic, and Sanja Fidler. Get3d: A generative model of high quality 3d textured shapes learned from images. Advances In Neural Information Processing Systems, 35:31841-31854, 2022. 2 +[19] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139-144, 2020. 2 +[20] Anchit Gupta, Wenhan Xiong, Yixin Nie, Ian Jones, and Barlas Oğuz. 3dgen: Triplane latent diffusion for textured mesh generation. arXiv preprint arXiv:2303.05371, 2023. 2 +[21] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. 6, 2 +[22] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 6 +[23] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems (NeurIPS), pages 6840-6851, 2020. 2 + +[24] Yicong Hong, Kai Zhang, Juixiang Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. Lrm: Large reconstruction model for single image to 3d. In International Conference on Learning Representations (ICLR), 2024. 1 +[25] Tao Hu, Zhizhong Han, and Matthias Zwicker. 3d shape completion with multi-view consistent inference. In Proceedings of the AAAI conference on artificial intelligence, pages 10997-11004, 2020. 2 +[26] Zitian Huang, Yikuan Yu, Jiawen Xu, Feng Ni, and Xinyi Le. Pf-net: Point fractal network for 3d point cloud completion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 7662-7670, 2020. 
2 +[27] Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and locally consistent image completion. ACM Transactions on Graphics (TOG), 36(4):107, 2017. 5 +[28] Hanwen Jiang, Qixing Huang, and Georgios Pavlakos. Real3d: Scaling up large reconstruction models with real-world images. arXiv preprint arXiv:2406.08479, 2024. 6, 7, 8, 2, 4 +[29] Heewoo Jun and Alex Nichol. Shap-e: Generating conditional 3d implicit functions. arXiv preprint arXiv:2305.02463, 2023. 3 +[30] Yoni Kasten, Ohad Rahamim, and Gal Chechik. Point cloud completion with pretrained text-to-image diffusion models. Advances in Neural Information Processing Systems, 36, 2024. 2 +[31] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139-1, 2023. 4 +[32] Mukul Khanna, Yongsen Mao, Hanxiao Jiang, Sanjay Haresh, Brennan Shacklett, Dhruv Batra, Alexander Clegg, Eric Undersander, Angel X Chang, and Manolis Savva. Habitat synthetic scenes dataset (hssd-200): An analysis of 3d scene scale and realism tradeoffs for objectgoal navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16384–16393, 2024. 5 +[33] Gwanghyun Kim, Taesung Kwon, and Jong Chul Ye. Diffus ionclip: Text-guided diffusion models for robust image man i plication. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2426-2435, 2022. 5 +[34] Yulhwa Kim, Dongwon Jo, Hyesung Jeon, Taesu Kim, Daehyun Ahn, Hyungjun Kim, et al. Leveraging early-stage robustness in diffusion models for efficient and high-quality image synthesis. Advances in Neural Information Processing Systems, 36:1229-1244, 2023. 5 +[35] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4015-4026, 2023. 3, 8 +[36] Yushi Lan, Feitong Tan, Di Qiu, Qiangeng Xu, Kyle Genova, Zeng Huang, Sean Fanello, Rohit Pandey, Thomas + +Funkhouser, Chen Change Loy, et al. Gaussian3diff: 3d gaussian diffusion for 3d full head synthesis and editing. arXiv preprint arXiv:2312.03763, 2023. 2 +[37] Yushi Lan, Shangchen Zhou, Zhaoyang Lyu, Fangzhou Hong, Shuai Yang, Bo Dai, Xingang Pan, and Chen Change Loy. Gaussiananything: Interactive point cloud latent diffusion for 3d generation. arXiv preprint arXiv:2411.08033, 2024. 6, 7, 8, 2, 4 +[38] Muheng Li, Yueqi Duan, Jie Zhou, and Jiwen Lu. Diffusionsdf: Text-to-shape via voxelized diffusion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12642-12651, 2023. 2 +[39] Xinhai Li, Huaibin Wang, and Kuo-Kun Tseng. Gaussian diffusion: 3d gaussian splatting for denoising diffusion probabilistic models with structured noise. arXiv preprint arXiv:2311.11221, 2023. 2 +[40] Chen-Hsuan Lin, Chen Kong, and Simon Lucey. Learning efficient point cloud generation for dense 3d object reconstruction. In proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2018. 2 +[41] Guilin Liu, Fitsum A. Reda, Kevin J. Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), 2018. 5 +[42] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. 
Zero-1-to-3: Zero-shot one image to 3d object. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV), pages 9298–9309, 2023. 1, 2 +[43] Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv preprint arXiv:2209.03003, 2022. 4 +[44] Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, et al. Wonder3d: Single image to 3d using cross-domain diffusion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 9970-9980, 2024. 2 +[45] I Loshchilov. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. 6 +[46] Shitong Luo and Wei Hu. Diffusion probabilistic models for 3d point cloud generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 2837-2845, 2021. 2 +[47] Luke Melas-Kyriazi, Christian Rupprecht, and Andrea Vedaldi. Pc2: Projection-conditioned point cloud diffusion for single-image 3d reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12923-12932, 2023. 2 +[48] Norman Müller, Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulo, Peter Kontschieder, and Matthias Nießner. Diffrf: Rendering-guided 3d radiance field diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4328-4338, 2023. 2 +[49] Charlie Nash, Yaroslav Ganin, SM Ali Eslami, and Peter Battaglia. *Polygon: An autoregressive generative model of + +3d meshes. In International Conference on Machine Learning (ICML), pages 7220-7229. PMLR, 2020. 1, 2 +[50] Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. Hologan: Unsupervised learning of 3d representations from natural images. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV), pages 7588–7597, 2019. 1, 2 +[51] Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, and Mark Chen. Point-e: A system for generating 3d point clouds from complex prompts. arXiv preprint arXiv:2212.08751, 2022. 2, 6 +[52] Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 11453-11464, 2021. 2 +[53] Evangelos Ntavelis, Aliaksandr Siarohin, Kyle Olszewski, Chaoyang Wang, Luc V Gool, and Sergey Tulyakov. Autodecoding latent 3d diffusion models. Advances in Neural Information Processing Systems, 36:67021-67047, 2023. 3 +[54] Maxime Oquab, Timothee Darcet, Theo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023. 4, 1 +[55] Maxime Aquab, Timothee Darcet, Théo Moutakanni, Huy V Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel HAZIZA, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. Transactions on Machine Learning Research, 2023. 3, 4 +[56] Ege Ozguroglu, Ruoshi Liu, Dídac Surís, Dian Chen, Achal Dave, Pavel Tokmakov, and Carl Vondrick. pix2gestalt: Amodal segmentation by synthesizing wholes. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3931-3940. IEEE Computer Society, 2024. 
2, 3, 6, 7, 8, 4, 5 +[57] Eunbyung Park, Jimei Yang, Ersin Yumer, Duygu Ceylan, and Alexander C Berg. Transformation-grounded image generation network for novel 3d view synthesis. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pages 3500-3509, 2017. 2 +[58] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2536-2544. IEEE, 2016. 5 +[59] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning (ICML), pages 8748-8763. PmLR, 2021. 6, 2 +[60] Barbara Roessle, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Angela Dai, and Matthias Nießner. L3dg: Latent 3d gaussian diffusion. In SIGGRAPH Asia 2024 Conference Papers, pages 1-11, 2024. 3 + +[61] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 10684-10695, 2022. 2 +[62] Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. Graf: Generative radiance fields for 3d-aware image synthesis. Advances in Neural Information Processing Systems (NeurIPS), 33:20154-20166, 2020. 2 +[63] Ruoxi Shi, Hansheng Chen, Zhuoyang Zhang, Minghua Liu, Chao Xu, Xinyue Wei, Linghao Chen, Chong Zeng, and Hao Su. Zero123++: a single image to consistent multi-view diffusion base model. arXiv preprint arXiv:2310.15110, 2023. 6, 2 +[64] Yichun Shi, Peng Wang, Jianglong Ye, Mai Long, Kejie Li, and Xiao Yang. Mvdream: Multi-view diffusion for 3d generation. arXiv preprint arXiv:2308.16512, 2023. 2, 3 +[65] Wu Shuang, Youtian Lin, Yifei Zeng, Feihu Zhang, Jingxi Xu, Philip Torr, Xun Cao, and Yao Yao. Direct3d: Scalable image-to-3d generation via 3d latent diffusion transformer. Advances in Neural Information Processing Systems, 37:121859–121881, 2025. 3 +[66] J Ryan Shue, Eric Ryan Chan, Ryan Po, Zachary Ankner, Jiajun Wu, and Gordon Wetzstein. 3d neural field generation using triplane diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20875-20886, 2023. 2 +[67] Yawar Siddiqui, Antonio Alliegro, Alexey Artemov, Tatiana Tommasi, Daniele Sirigatti, Vladislav Rosov, Angela Dai, and Matthias Nießner. Meshgpt: Generating triangle meshes with decoder-only transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19615-19625, 2024. 1, 2 +[68] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning (ICML), pages 2256-2265. pmlr, 2015. 2 +[69] Stefan Stojanov, Anh Thai, and James M Rehg. Using shape to categorize: Low-shot learning with an explicit shape bias. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1798-1808, 2021. 2, 6 +[70] Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, et al. The replica dataset: A digital replica of indoor spaces. 
arXiv preprint arXiv:1906.05797, 2019. 2, 6, 8 +[71] David Stutz and Andreas Geiger. Learning 3d shape completion from laser scan data with weak supervision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1955-1964, 2018. 2 +[72] Stanislaw Szymanowicz, Christian Rupprecht, and Andrea Vedaldi. Viewset diffusion: (0-)image-conditioned 3d generative models from 2d data. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 8863-8873, 2023. 1, 2, 3 +[73] Stanislaw Szymanowicz, Chrisitian Rupprecht, and Andrea Vedaldi. Splatter image: Ultra-fast single-view 3d recon + +struction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10208-10217, 2024. 1 +[74] Shitao Tang, Jiacheng Chen, Dilin Wang, Chengzhou Tang, Fuyang Zhang, Yuchen Fan, Vikas Chandra, Yasutaka Furukawa, and Rakesh Ranjan. Mvdiffusion++: A dense high-resolution multi-view diffusion model for single or sparse-view 3d object reconstruction. In European Conference on Computer Vision, pages 175-191. Springer, 2024. 2 +[75] Arash Vahdat, Francis Williams, Zan Gojcic, Or Litany, Sanja Fidler, Karsten Kreis, et al. Lion: Latent point diffusion models for 3d shape generation. Advances in Neural Information Processing Systems, 35:10021-10039, 2022. 3 +[76] Vikram Voleti, Chun-Han Yao, Mark Boss, Adam Letts, David Pankratz, Dmitry Tochilkin, Christian Laforte, Robin Rombach, and Varun Jampani. Sv3d: Novel multi-view synthesis and 3d generation from a single image using latent video diffusion. In European Conference on Computer Vision (ECCV), pages 439-457. Springer, 2024. 1, 2 +[77] Daniel Watson, William Chan, Ricardo Martin-Brualla, Jonathan Ho, Andrea Tagliasacchi, and Mohammad Norouzi. Novel view synthesis with diffusion models. arXiv preprint arXiv:2210.04628, 2022. 2 +[78] Daniel Watson, William Chan, Ricardo Martin Brulla, Jonathan Ho, Andrea Tagliasacchi, and Mohammad Norouzi. Novel view synthesis with diffusion models. In The Eleventh International Conference on Learning Representations (ICLR), 2023. 1 +[79] Christopher Wewer, Kevin Raj, Eddy Ilg, Bernt Schiele, and Jan Eric Lenssen. latentsplat: Autoencoding variational gaussians for fast generalizable 3d reconstruction. In European Conference on Computer Vision (ECCV), pages 456-473. Springer, 2024. 2 +[80] Zijie Wu, Yaonan Wang, Mingtao Feng, He Xie, and Ajmal Mian. Sketch and text guided diffusion model for colored point cloud generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8929-8939, 2023. 2 +[81] Jianfeng Xiang, Zelong Lv, Sicheng Xu, Yu Deng, Ruicheng Wang, Bowen Zhang, Dong Chen, Xin Tong, and Jiaolong Yang. Structured 3d latents for scalable and versatile 3d generation. arXiv preprint arXiv:2412.01506, 2024. 1, 2, 3, 4, 6, 7, 8, 5 +[82] Desai Xie, Jiahao Li, Hao Tan, Xin Sun, Zhixin Shu, Yi Zhou, Sai Bi, Soren Pirk, and Arie E Kaufman. Carve3d: Improving multi-view reconstruction consistency for diffusion models with rl finetuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6369-6379, 2024. 2 +[83] Katherine Xu, Lingzhi Zhang, and Jianbo Shi. Amodal completion via progressive mixed context diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9099-9109, 2024. 2 +[84] Bo Yang, Hongkai Wen, Sen Wang, Ronald Clark, Andrew Markham, and Niki Trigoni. 
3d object reconstruction from a single depth view with adversarial learning. In Proceedings of the IEEE international conference on computer vision workshops (ICCVW), pages 679-688, 2017. 2 + +[85] Guanqi Zhan, Chuanxia Zheng, Weidi Xie, and Andrew Zisserman. Amodal ground truth and completion in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 28003-28013, 2024. 2 +[86] Xiaohang Zhan, Xingang Pan, Bo Dai, Ziwei Liu, Dahua Lin, and Chen Change Loy. Self-supervised scene de- occlusion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 3784-3792, 2020. 2 +[87] Bowen Zhang, Yiji Cheng, Jiaolong Yang, Chunyu Wang, Feng Zhao, Yansong Tang, Dong Chen, and Baining Guo. Gaussiancube: Structuring gaussian splatting using optimal transport for 3d generative modeling. arXiv preprint arXiv:2403.19655, 2024. 2 +[88] Longwen Zhang, Ziyu Wang, Qixuan Zhang, Qiwei Qiu, Anqi Pang, Haoran Jiang, Wei Yang, Lan Xu, and Jingyi Yu. Clay: A controllable large-scale generative model for creating high-quality 3d assets. ACM Transactions on Graphics (TOG), 43(4):1-20, 2024. 1 +[89] Chuanxia Zheng and Andrea Vedaldi. Free3d: Consistent novel view synthesis without 3d representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9720-9731, 2024. 1, 2, 3 +[90] Chuanxia Zheng, Tat-Jen Cham, and Jianfei Cai. Pluralistic image completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1438-1447, 2019. 5 +[91] Chuanxia Zheng, Duy-Son Dao, Guoxian Song, Tat-Jen Cham, and Jianfei Cai. Visiting the invisible: Layer-by-layer completed scene decomposition. International Journal of Computer Vision (IJCV), 129:3195-3215, 2021. 2 +[92] Chuanxia Zheng, Tat-Jen Cham, Jianfei Cai, and Dinh Phung. Bridging global context interactions for high-fidelity image completion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 11512-11522, 2022. 4 +[93] Xiao Zheng, Xiaoshui Huang, Guofeng Mei, Yuenan Hou, Zhaoyang Lyu, Bo Dai, Wanli Ouyang, and Yongshun Gong. Point cloud pre-training with diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22935-22945, 2024. 2 +[94] Xin-Yang Zheng, Hao Pan, Yu-Xiao Guo, Xin Tong, and Yang Liu. Mvd^2: Efficient multiview 3d reconstruction for multiview diffusion. In ACM SIGGRAPH 2024 Conference Papers, pages 1-11, 2024. 2 +[95] Linqi Zhou, Yilun Du, and Jiajun Wu. 3d shape generation and completion through point-voxel diffusion. In Proceedings of the IEEE/CVF international conference on computer vision, pages 5826-5835, 2021. 2 +[96] Jun-Yan Zhu, Zhoutong Zhang, Chengkai Zhang, Jiajun Wu, Antonio Torralba, Josh Tenenbaum, and Bill Freeman. Visual object networks: Image generation with disentangled 3d representations. Advances in neural information processing systems (NeurIPS), 31, 2018. 2 +[97] Zi-Xin Zou, Zhipeng Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Yan-Pei Cao, and Song-Hai Zhang. Triplane + +meets gaussian splatting: Fast and generalizable single-view 3d reconstruction with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10324-10335, 2024. 2 + +# Amodal3R: Amodal 3D Reconstruction from Occluded 2D Images Supplementary Material + +# A. Implementation Details + +# A.1. 
Network Design + +We adopt the network design in TRELLIS [81] to load the pre-trained image-to-3D weights and integrate the mask-weighted cross-attention mechanism to each DiT block (24 blocks in total). And each image-conditioned cross-attention layer is immediately followed by an occlusion-aware cross-attention layer. + +(a) Patchify and weight of visibility/occlusion mask The input condition image has a resolution of $512 \times 512$ , which is resized to $518 \times 518$ to facilitate splitting into patches of size $14 \times 14$ , as required by DINOv2 [54]. The resulting condition is then flattened into a tensor $\mathbf{c}_{dino} \in \mathbb{R}^{1374 \times 1024}$ , where the sequence length corresponds to $37 \times 37$ patches plus 1 CLS token and 4 register tokens. To better align the visibility and occlusion masks with the DINOv2 features, we first split the masks into patches of the same size, then calculate the weight score for each patch using Eq. 4 and Eq. 5. The final $\mathbf{c}_{vis} \in \mathbb{R}^{1374 \times 1}$ and $\mathbf{c}_{occ} \in \mathbb{R}^{1374 \times 1}$ are obtained by flattening the weight scores, with a value of 1 assigned to the CLS and register token positions. +(b) Occlusion-aware cross-attention layer. We set the feature dimension of the occlusion-aware cross-attention layers to 1024, matching that of the image-conditioned cross-attention layers. To maintain consistent dimensions, we replicate the flattened occlusion masks to form a tensor $\mathbf{c}_{\text{occ\_stack}} \in \mathbb{R}^{1374 \times 1024}$ . +(c) Multi-head Cross-Attention. Our mask-weighted multi-head cross-attention (MHA) layer, which is implemented to encourage the model to focus its attention on the visible parts of the object, is an extension of the cross-attention described in the main paper. Specifically, $H$ heads are run in parallel, resulting in $H$ attention scores. For mask-weighted attention mechanism, we impose $c_{\mathrm{vis}}$ simultaneously to each head: + +$$ +\boldsymbol {A} _ {h} = \operatorname {s o f t m a x} \left(\boldsymbol {S} _ {h} + \log c _ {\text {v i s}}\right), \tag {A.1} +$$ + +$$ +\mathrm {M H A} = \left[ \boldsymbol {A} _ {1} \boldsymbol {v}; \boldsymbol {A} _ {2} \boldsymbol {v}; \dots ; \boldsymbol {A} _ {H} \boldsymbol {v} \right] \tag {A.2} +$$ + +# A.2. Training Details + +(a) Pre-trained model loading. While TRELLIS is split into multiple modules, in our work we only train the sparse structure flow transformer and the structured latent flow transformer (see the overview figure where the "fire" symbols indicate the parts that are fine-tuned, and "snowflake" symbols indicate that we directly use the pretrained weights). + +(b) Data Augmentation. As described in Sec. 3.3, we generate random masks during training for data augmentation. Specifically, we begin by randomly drawing 1 to 3 lines, circles, and ellipses in the mask image. Next, to ensure these regions connect — thereby better simulating real-world occlusions, where mask regions are typically not highly fragmented — we randomly add 3 to 7 rectangular regions that have undergone an expansion operation. This results in a stable masking of the objects in the training data. Example inputs are presented in Fig. A.1. 
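A minimal NumPy/OpenCV sketch of this random-mask augmentation is given below. The shape counts follow the description above, while the size ranges, thicknesses, and dilation kernel are illustrative assumptions rather than the exact training configuration.

```python
import numpy as np
import cv2

def random_occlusion_mask(size=512, rng=None):
    """Return a binary occlusion mask of shape (size, size): 1 = occluded."""
    rng = rng or np.random.default_rng()
    mask = np.zeros((size, size), dtype=np.uint8)

    def rand_pt():
        return tuple(int(v) for v in rng.integers(0, size, 2))

    # 1-3 random lines, circles, and ellipses simulating thin occluders.
    for _ in range(int(rng.integers(1, 4))):
        cv2.line(mask, rand_pt(), rand_pt(), 1, thickness=int(rng.integers(10, 40)))
        cv2.circle(mask, rand_pt(), int(rng.integers(20, 80)), 1, thickness=-1)
        cv2.ellipse(mask, rand_pt(),
                    (int(rng.integers(20, 100)), int(rng.integers(20, 100))),
                    float(rng.uniform(0, 180)), 0, 360, 1, thickness=-1)

    # 3-7 dilated rectangles that connect the regions above, so that the
    # final occlusion is not overly fragmented.
    rects = np.zeros_like(mask)
    for _ in range(int(rng.integers(3, 8))):
        x0, y0 = (int(v) for v in rng.integers(0, size, 2))
        w, h = (int(v) for v in rng.integers(30, size // 3, 2))
        cv2.rectangle(rects, (x0, y0), (x0 + w, y0 + h), 1, thickness=-1)
    kernel = np.ones((int(rng.integers(5, 25)),) * 2, dtype=np.uint8)
    rects = cv2.dilate(rects, kernel)

    return np.clip(mask + rects, 0, 1)

# The visible mask then follows as M_vis = M_obj * (1 - M_occ), per Sec. 3.3.
```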
+ +![](images/b04b0527dc65b40fe900a0bb5245fee2ca569c931b5126483f78c3e1eb5d89d7.jpg) + +![](images/e473421bda3a60dfa3a9ac5d71ce8d3d0be3247fe5c3eb8a7b8a514a7ac65a4c.jpg) + +![](images/705a3efa88479430786188109444797316a4b50d5391f193ea586c1243e89535.jpg) + +![](images/d62f7a0625431a474dc72369f8aeddce76336606025ee6c862bf83766204c772.jpg) + +![](images/c1883fbc94e049a66af85bf3efd9200fd3ffaf4eff7aa042007643987dfcbbbf.jpg) + +![](images/08394c2eb97d7c0453e361592e75fef7fdc4a1a8791d4a1d772760388c47551c.jpg) + +![](images/596129c8c6c42a54370e7e075fff5cdb848c8ad31815f9ac36baa3aa8e493904.jpg) +Original image + +![](images/ac1fecb53dd177f8397f60a2a1b0ae32e3d2d0f89ebaa3ecb8d06a193b2f9337.jpg) +Masked image + +![](images/e10d9e59eaa16897267857117a05853cf8cbb446ccfa5611419002eeb268c0bd.jpg) +Occlusion mask +Figure A.1. Examples of random mask generation. The visible areas are shown in white, occluded areas in gray and background in black. + +# A.3. Inference Details + +(a) 3D-consistent mask ratio. For the multi-view 3D-consistent masks described in Sec. 3.3, we set the mask ratio to a random number between 0.4 and 0.6 for each object, which results in a variety of reasonable mask areas. +(b) Time consumption. Despite the introduction of additional cross-attention layers, our inference time remains comparable to that of the baselines. Amodal3R can generate and render each object in under 10 seconds. + +![](images/9c2ef8895af5bc56a4e22ac31ba78caceebce58eaa6b1587f72aa517f01ca087.jpg) + +![](images/615aab167763d7d564aa2c043f298f360c03cd068519012f496cea56c402d227.jpg) +Occluded input + +![](images/92616bc57210944caf7bac8ffcd90b9a67a8a7c6dbde9f7f2aeb54b6b08db6eb.jpg) + +![](images/9f79bf6a3e693fc48d5f856d4bbe95c8b716127d1e40aa554ba0a670ae682e57.jpg) + +![](images/35269d85c0ccd1276b1a039bef54ae08104a9b17d31f6d3822c4377144211c03.jpg) +GaussianAnything [37] + +![](images/6a37b06161e088662aaa499b14ba3cbed12315720d8ca0de6d5770dfa5caa4fd.jpg) +Figure A.2. Examples using occluded images directly as the input of baseline models. + +![](images/df95bfe63946dca7058cf2d4d385bca8b0cbc56bee3a96355da3c17b7da46a7d.jpg) + +![](images/2f2a03cbbf9479eaf62690774c6f7f0ad40af3149594ddf1f89c57b4a6660468.jpg) +Real3D [28] + +![](images/876ecbf2904d5d5ba41a09e1642ef3a94b1d577e0832f68e3d8210ceb08e9465.jpg) + +![](images/a78b1617d2c8fa14a4ed066af6b7ae056aa1f5cab50f8367db695bc937d65401.jpg) +TRELLIS [81] + +![](images/78fa239609ad9b322309c6000a6d97b1fa4731ce41715a07b7d7a24d1adcdb1b.jpg) + +![](images/8c3e4a89f0670e172fadded10975eee54aebfa304b9e6815e71adc3cdc6a4b2e.jpg) + +![](images/313f3ceb6f3d0da0644f0be2a0c948874779d72b805c97e3f0e8afc23b6ca136.jpg) + +![](images/df3d6e97efca84d18606fb24d27a150c0e19b7bed6dc0235a72e0f460a05c3a2.jpg) + +![](images/117bbb669d8338533595438417c5cd2c4062a58aa591ef0dbef7be0f3f0975ca.jpg) + +![](images/12e4b12feb71553492b2bba143a90efbae32c993ab3feaddbf364d9d0b4da9f1.jpg) +Amodal3R (Ours) + +![](images/1e0fbdc5fba5f402fc4063a139413b5100bc2a80dcba73fd638f35d60287d4e7.jpg) + +![](images/4f77cadcb832bb42842ceb4bc7c97687976ee5d613d7316f2b53f0858f0814fd.jpg) + +# B. Experimental Details + +# B.1. Evaluation Protocol + +We evaluate the results using Google Scanned Objects (GSO) (1,030 objects) [16] and a randomly sampled subset of Toys4K [69] containing 1,500 objects. Here, we provide additional details regarding the computation of our evaluation metrics. 
+ +(a) Rendering quality and semantic consistency alignment To assess overall rendering quality, we compute the Fréchet Inception Distance (FID) [21] and Kernel Inception Distance (KID) [3]. Moreover, we evaluate semantic consistency using the CLIP score [59] by measuring the cosine similarity between the CLIP features of each generated image and its corresponding ground truth. For each object, we render 4 views using cameras with yaw angles of $\{0^{\circ}, 90^{\circ}, 180^{\circ}, 270^{\circ}\}$ and a pitch angle of $30^{\circ}$ . The camera is positioned with a radius of 2.0 and looks at the origin with a FoV of $40^{\circ}$ , consistent with TRELLIS [81]. While FID and KID are calculated between the ground truth and generated sets (6,000 images for Toys4K and 4,120 images for GSO), the CLIP score is calculated in a pair-wise manner, and we report the mean value to evaluate semantic consistency. + +(b) Geometry quality For 3D geometry evaluation, we adopt Point cloud FID (P-FID) [51], Coverage Score (COV), and Minimum Matching Distance (MMD) using Chamfer Distance (CD). Following previous work [37, 51, 81], we sampled 4096 points from each GT/generated point cloud, which are obtained from the unprojected multi-view depth maps using the farthest point sampling. + +# C. More Results + +In this section, we provide additional qualitative examples and comparison results to further demonstrate the performance of our Amodal3R. + +# C.1. Baselines using occluded input + +We have stated in the main paper that "occluded images will lead to incomplete or broken structures" in current + +3D generative models. Here, we provide examples where pix2gestalt is omitted and the occluded images are directly used as the input. As shown in Fig. A.2, when baseline models receive images of partially visible objects as input, they often fail to recover complete and intact 3D assets. + +# C.2. More single-view to 3D examples + +Due to the page restriction, we only provide limited examples in the main paper. Here we visualize more examples of single-view to 3D to further demonstrate the effectiveness of our method in Fig. C.4. The results show that compared with the 2D amodal completion + 3D generation baselines, our Amodal3R yields higher quality 3D assets across multiple categories. + +# C.3. More multi-view to 3D examples + +![](images/f8245dfc94ff1585afa4578515cf4ebfaec3862cbc99047d3ff96a7f875b984d.jpg) + +![](images/52930762b79c0e57ccf7fd99bf51f14ffd04d0b75f4411c8357e139aeb338c72.jpg) +Multi-view Occluded input +pix2gestalt [56] + +![](images/9f90fd4f57664dfc4099d96b880e1abfb3de5b968deaa8ca18b1f426ca59759d.jpg) +pix2gestalt [56] + Zero123++ [63] +Figure C.3. Example of "pix2gestalt" and "pix2gestalt + MV" input of multi-view to 3D evaluation. + +We first provide visualized examples to explicitly explain the difference between the "pix2gestalt" and "pix2gestalt + MV" settings in the multi-view to 3D generation in Fig. C.3. For the "pix2gestalt" setting, we directly implement pix2gestalt for the amodal completion independently under each view. For the "pix2gestalt + MV" setting, we first choose the view with the greatest visibility from the 4 occluded views, then use pix2gestalt to complete the object (which is shown in the pix2gestalt column in the qualitative result), and subsequently use Zero123++ to get the 4 consistent views as the input of LaRa and TRELLIS. 
It can be observed that pix2gestalt alone results in obvious multi-view inconsistency, while with Zero123++ the consistency is significantly improved, thus leading to better 3D generation quality.
+
+More multi-view to 3D examples are provided in Fig. C.5, where our Amodal3R again generates 3D assets with better quality than the baselines.
+
+# C.4. More diverse 3D reconstruction results
+
+In Fig. C.6, we show more examples to demonstrate that Amodal3R is able to generate diverse, reasonable results from the occluded input with multiple stochastic samplings.
+
+# C.5. More in-the-wild results
+
+We provide more examples where we compare Amodal3R with the "pix2gestalt + TRELLIS" pipeline in Fig. C.7. The results further demonstrate that the 2D amodal method lacks 3D geometric understanding, often resulting in improper completion, such as completing the armrest of the chair as the backrest. In contrast, Amodal3R generates more plausible results with reasonable geometry and rich textural details.
+
+In Fig. C.8 and Fig. C.9, we provide more visualization results on the in-the-wild images and the corresponding colored normal maps to show the geometry details. We use Segment Anything [35] to obtain the segmentation masks.
+
+# C.6. Discussion, Limitations and Future Work
+
+While Amodal3R achieves impressive 3D amodal completion, it comes with several limitations that we hope to address in the future. 1) Dataset expansion. Due to limited computational resources, Amodal3R is trained on very limited data, consisting of only 20,627 synthetic 3D assets, predominantly confined to the furniture category. Consequently, completions of some complex or out-of-distribution objects may fail or lead to unrealistic structures. We believe that training on larger datasets, e.g., Objaverse-XL [15], could mitigate these issues. 2) Real-world data adaptation. Different from pix2gestalt, Amodal3R is trained exclusively on synthetic data. As a result, it cannot leverage environmental cues and must rely solely on the visible portions of occluded objects for reconstruction. Creating real-world 3D amodal datasets would further enhance the ability to apply such models to real scenes. 3) Controllable completion. Currently, how objects are completed is entirely up to the model itself and lacks control. Therefore, extending the model to accept additional conditions, such as text, and allowing users to edit and control the style of the completion process will be an important research direction for future work.
+
+![](images/200788fa8c59010615e251b7c018191c590445a647523496f6bff0ccc66bf77e.jpg)
+Figure C.4. Additional single-view to 3D comparison examples. The occluders are shown in black and the visible regions are highlighted with red outlines.
+
+![](images/6e3e8cb3f6e682b55056f262e4f9a2f12e3f1a7a2aebcb739a7935617b01060b.jpg)
+Figure C.5. Additional multi-view to 3D comparison examples. The occluders are shown in black and the visible regions are highlighted with red outlines.
+
+![](images/bf75ddd1a0b936e11399a4404cec99a271558a5059efd0f5ea400b27d6a896dd.jpg)
+Figure C.6. Additional diverse examples. The occluders are shown in black and the visible regions are highlighted with red outlines.
+ +![](images/e870017230928572f834d4e3b7b44e9ccc89b671cc7b2dfbdeca47b3555aa063.jpg) + +![](images/6492dba26fef53a6b15fcf4f6bd0f48ff1bd3a409eed95af009133f8b85c0fd3.jpg) + +![](images/d90c74b16e5686f3fd0a043e1d39ccc701a7b9835351836631c2644d63c39a4e.jpg) + +![](images/685ad6bc4dc8ba6e901173eaab3f4f5e07a5bb2bcf50d3d5f20d6720d0395192.jpg) + +![](images/ecd950c091f52dd14a378d38d688ffb86bfaa6418ed1a46ce552bfb485beda3f.jpg) + +![](images/98e46a5dcf149f1e0f883787c69fb2eeb6d45f364250f5b58e7230c52ce6f351.jpg) + +![](images/72372d8636e2039e9e78d331bc0c155b39ac6b8cbfebfe37d6e1c282235da333.jpg) + +![](images/9407f92f2328bc77b19b079886cca8c258b58a670a9d3cb0028c09f6d6f59ea4.jpg) +Input image +Figure C.7. Additional in-the-wild examples compared with pix2gestalt + TRELLIS. The target objects and occluders are marked with the red and green outlines. + +![](images/dcf6c81676d2bfcc52799174f6ea258f27f99878bb1d8ce177ccd911a170df41.jpg) + +![](images/7221e58f8fa1621d86ddcfca117da0a9ab550e97df879da747515b50089871ab.jpg) + +![](images/ee3d4702f03df512a1dcd2f450f6229b648789fc7b2e62cd2be6ba680f612a1c.jpg) + +![](images/348936c94c048be73a84a1a35305c574d97223f7ddaecd36f5f5c956c0e0243a.jpg) + +![](images/e9235990f61e5d288eaf7d5196b8d147e27d2759c3e59bd094647efae78f5478.jpg) + +![](images/4f68e7ed21a55dc404a754e72d158812e6855ef3c19eb4fe6da655eaa44f381c.jpg) + +![](images/5778f0574d6cb26ba5afdae215feb108c626210189ec2e390e3e7bfe3abc61f5.jpg) + +![](images/d195416a2f5327884ffc98ff5a1d8dcb4ff34a0d0f4a043146c8f62254dfc341.jpg) +pix2gestalt [56] +TRELLIS [81] +Amodal3R (Ours) + +![](images/ada9408678dd16ac9a17139bc6bcffa21cef5bfdfb1d7de869883fff85659fde.jpg) + +![](images/6062e969b144b68bde5c4787caf92635af76db3c1558a9bb607b2b4c8d4b3eef.jpg) + +![](images/2d6aa5a394deca4847eb806106419625bb3c8facd8f63c970e6d2e709d187393.jpg) + +![](images/b1ee42e3f1c1b55b7c2ad148ca3beddd296d22b422314abeba1672040f573ebb.jpg) + +![](images/351f4fc242c00ce05b1a676430843ead379c1561c82870ff79d72e7ef9fad3ec.jpg) + +![](images/20456732dc746a6bebf876773ba1fe04717282542debba0a537100f6b3a79c41.jpg) + +![](images/14513e3cf22f958af8f2643bf88c8e1a382b22520cab9cbf47471ead011e29d4.jpg) + +![](images/accebdc45f78add46402aaf69516164e54dcabd80c7e75d86ee723403791d785.jpg) +Input image +Figure C.8. Additional in-the-wild examples. The target objects and occluders are marked with the red and green outlines. 
+ +![](images/7422c5ddae7e24c65457d03b1bd98f2cf53e00b23dfb4d18efada84fb0cf74e5.jpg) + +![](images/abdf454b1c319bbc3abe24bb70aea384919e5811cfb8e8caaac98e01fb1d1563.jpg) + +![](images/217a80eff1e8e6c5b9a4f61bdf9bef041254d406ea49291a148013926a2424ee.jpg) + +![](images/590eeb7939486a44b00107d94f5a70ac9d3c78fd64407589f2f325dbfc8c3856.jpg) + +![](images/af2ed36a83bcdb256e48f085f9d29306437b2c92f8a174de87bb9c2c8abaf3ca.jpg) + +![](images/57cb0acdeb0fddc57c4dc24b70a23ccfb10ceecd0a119ca07fd1fd93d810939b.jpg) + +![](images/e2ecb2a27e1f5f46cb1e1761e433b07111ed89b0cfa1d826e0401f18bb8190a5.jpg) + +![](images/3354b81a13aa69181692ebc240c5711c8fd2df331d852e05674a15348bb8042c.jpg) +Reconstruction result + +![](images/095b0a8683c9a1abe5d1fd0351f2b0bd251d363bc2faf44ef3b8c6a7aedcea05.jpg) + +![](images/c359707e04e058224e1375c5e81fbc8ccddcb047b0de7412b82c559e5f02bc21.jpg) + +![](images/769f2f75988fa8c20f7a150b28cc25793d51852498286a3fa481276629ccca9c.jpg) + +![](images/1184cd58dac7c2417f151966e2cecd6fc93b1d0e7753d2a88787190ba2b76e60.jpg) + +![](images/33f0534eb2700edc8fc3ea5301c729d4d26448af5eac394994064a31ae54f642.jpg) + +![](images/0a8e71eded34cdb29ae8a23f3edcf84087356e3b089e8f2c59718d46781c60a1.jpg) + +![](images/d0e2fc738ba13d07dee36c377c1631c2356014708deba436d5f0e69e34bdc11b.jpg) + +![](images/f4d800f760e32f85a18e0d7b49cd24ec481ab920ef0ced066be5a55ba0cca701.jpg) +Input image + +![](images/929298d615a3fb7cc447c0f98e95baf173faa9e04a7d618c8a9009c54c584308.jpg) + +![](images/e16f1142d0a59d54d6b9c2c7b7795759e11379f48803ebd164889819bec4222e.jpg) + +![](images/f7039e6b445a5be5eee0bd5040ae487c6e7aee8781d78bfa73e920a9c75420a8.jpg) + +![](images/52bff517732bce595a3fc8d13402ba1d0f1b3b7a1eb2d4299eeefdcf805fcaee.jpg) + +![](images/f01a9e0ef9e9acd6cba92a918067dc45c9b7675a402b6e0568af1eb69c21ab5a.jpg) + +![](images/a15c57d10a0d91aa88495617e816b58fca0cf35ec218ed9cdf2e5a8522be81f0.jpg) + +![](images/c740a0bd12726124d22a0278522bd429dd12f45e9318efe95ac729e78ae54dc7.jpg) + +![](images/6bf181e361af64cbd49bdff2b0cd43c6a50dc0ad36963d735832233052683d74.jpg) + +![](images/e61d8fec364efc27a470eff5648933302e713b474924644e2edd3c55bfeda54d.jpg) + +![](images/d7115e74e97401735aa2884ea8f3346d25d37ce117b630b4920fac2bc1285067.jpg) + +![](images/5cc2da83fe61a2fdc4d3e7041779c61220f5b3b073385721791245264d4a1551.jpg) + +![](images/21d725ad0989e383de4f63e90ce664105e05c73d0b034d3c5ebf71613a830643.jpg) + +![](images/fd66ab2fe02a9a52c5e590980319f647bf554f99542cd9d6afc3047a2004f208.jpg) + +![](images/a8220c93bb0bff26f3701389b85c2241b024c6820bd425f0edf30ea47e53680b.jpg) + +![](images/762a5473b8c38fc5078ebc4dbaea183295506b6315263d1760274a5aef82e4b4.jpg) + +![](images/2d36140df0c83d5417674ee1873c2c32baa4cb1e52aa906d5a68028dd586970c.jpg) + +![](images/970aed0843d92ecca9bd858e64a69a83cd4e96e405421cc40641733dd55582c0.jpg) + +![](images/89410ff08ecccd04301f629f52ea7c1ca4b6aa0a815571a2208a032b95d229ed.jpg) + +![](images/13596732bab156e6c1c51aedacbd0bc7a5776d2e2cc38344b9cc580eec754512.jpg) + +![](images/9b298e3f8c6888d371ef219db051868f274af1ba76ad87df5c2ba35ac7b9074b.jpg) + +![](images/985f93d20324281b66926cb467e7671a0f7de8875707c8b704e7bd81f0b59b9f.jpg) + +![](images/6a977f45798fb36ceed91ca7038971b88bf42b34f6281c7a7935b709d39eaec2.jpg) + +![](images/da6e2cc1a6c104125f2b358fff6df5fa7d9c602c75f5f555b7a225f5c3d82c80.jpg) + +![](images/ea02900ca8a78886a66d9fb14ad43bf1237f53854ed37e043f35a033d10bf8fa.jpg) + +![](images/1c189b1c7514283599bb7a16a392b365213526e29463166ec4895e2329bb5384.jpg) + 
+![](images/4598111e22944965925b8b6255fa8536d238e0728b54bc775ab3665a150e976b.jpg) + +![](images/f787f003adb94debc9461808ba6d8cba55f045b73a279df257fc5bc5af0106d7.jpg) + +![](images/d39f4a0e9775078f66933bd2af24c5c55e7c30c9e5950af798f575ded3dae3a4.jpg) + +![](images/79d787bb95891662bc37839bf54d0e2b3ad0d240183333d2106b7850bbc79c3b.jpg) + +![](images/4a57bcadb2ebc10c34dc6b9a62ab981bdefca9bdd0aa3b91e1e53b571cabe5ca.jpg) + +![](images/be015bfa3cdaf2b44b0e77531148136a68d7dcb50f87bba546a02fc49b7f56cc.jpg) + +![](images/3efd00852b0f34f9a3949add9bb86779268e9d0665af6f4022ec6e8f664dfcfc.jpg) + +![](images/79ce326daeeca762ef4a892284df06b42bfe31f307b79d485ade96a33fd8703b.jpg) + +![](images/7f282d478cac67c93f20c570e20b2bf512b87fa1c5c30d2ed3ea85a3cfdc8394.jpg) + +![](images/7db37af3fa865b27c84c75f3b197817c940260ea28dd6f311150bee7883c2ba3.jpg) + +![](images/3172f4ce237b0712571e6a666cdef028fafd9ada1a3e0a3c51f58a2a9e08f2cf.jpg) + +![](images/063a1b1a1dece2445c978efb79cb481cb6c2dc5b05b4b900347370a525c3330e.jpg) + +![](images/d649001cf1a64a9bccb83e00ca44bc0289da7de112ff76e9de8b7b104650577a.jpg) + +![](images/397bf7ab6c20cf152a4187b0bd8cb0fee54d69de302d7d64096b293d13739ef9.jpg) + +![](images/f3c00f4554388b763523d94ccd0be111d32b9b3979b17b25ace8f3286372199e.jpg) + +![](images/cb48875f1c399a28a2bf0bfdbd3f70c36f20a6b5f11875bbf048a93e4f984bc1.jpg) + +![](images/bc5de79c5c67ade429863cc11eee1a6b0afc78315a836bda44223f7048af9ac7.jpg) + +![](images/4bd145682614b11bac9a56dc099cac6e59e04947fbe792c6055ee4d589f19a9a.jpg) + +![](images/20de841a7493cc43c2cb949ab3629aae813eba4f26bd57745396dde8a6f55c56.jpg) + +![](images/f33003c0021a10760e608a09f1e2174e235a893c03af224878b7982e917691db.jpg) + +![](images/3c43596c3225d0d82ee1fa5757da092b782bcbb2c6d31d69f28d886f7e3c50a5.jpg) + +![](images/57b44f4b221ef212ab7d4735d105e5f5b1f68a51a901e2996938b59f4901c482.jpg) + +![](images/33279c0867af9b38f5de35cfd3cc9b07a12aacdfdd07534a9b1454796260888e.jpg) + +![](images/1324ef4f7b4588d0896385dd82eb74a26f890c8a7c2de8af1bd12a8b0492af05.jpg) + +![](images/16b11c1c1c3f695af6be023b67866171414828de5ee231504795bca73dc24e87.jpg) + +![](images/e0cc24d2030cadf890830426e0016b8aec88c55941529bcb3fc028858245f431.jpg) + +![](images/ed1c74da8409a19812447005299ab422bc021b9f4e61e61d6af4d27597b0c2ea.jpg) + +![](images/1f58f96a7868db3a8e7b4263ea01128181413837fa70a9aab1033ad5a8d881de.jpg) + +![](images/e59d1e6a2e3198192a88bec3b29fe6dec3edb5841dbe4e86bcc01287b56b0458.jpg) + +![](images/a833d704f859f81aee44b1e1c5223e6db3d47aac3a3225785bfc2af8fb4313cb.jpg) + +![](images/afc33314881650e81bb306937ec04dd25c43a4cea1368f0c20098c2f1c8c3d5d.jpg) +Figure C.9. Additional in-the-wild examples. The target objects and occluders are marked with the red and green outlines. 
+ +![](images/e24369acd600aa02074e8619cb93b3572d2da16ca27557d198a9c5fea693cab8.jpg) + +![](images/8fcfaaec7bcf00678512fe514fdd2d9b17c4675d66f7db3ac9d06b1af67bd111.jpg) +Reconstruction result + +![](images/7f78d508d64ff47702c464b05edaa8c3cd052d7b9802f8fae3f7f5d9309c1391.jpg) + +![](images/bc66ed7a19f36ca900f75954dd3b53d4decaf085a178460092343926fa3f4cd9.jpg) + +![](images/b6cb93b3533b6ba8a98c9de6eba7fefb2244b9e7342055fb2f9fdb67b03c7356.jpg) + +![](images/2396eeeede72ea8776bb1f3e698cae247f18e1eb947624d405061a86b31c50e1.jpg) + +![](images/60ad8f6ac17efd1191fdef40ba05f35cc66e4232ac53ca3680e348d92e5d3380.jpg) \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13439/images.zip b/data/2025/2503_13xxx/2503.13439/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..a64445c86d6ac9edf46d96ebb1d8a82963d2b85a --- /dev/null +++ b/data/2025/2503_13xxx/2503.13439/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2b775af6407ba04537bf8dfb4691dd6fe02bef09e6fefce03a32c525e122ce80 +size 1966175 diff --git a/data/2025/2503_13xxx/2503.13439/layout.json b/data/2025/2503_13xxx/2503.13439/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3f316f5efac0757c00ec95c231ab4cd342e5b8a7 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13439/layout.json @@ -0,0 +1,24164 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 101, + 103, + 509, + 121 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 101, + 103, + 509, + 121 + ], + "spans": [ + { + "bbox": [ + 101, + 103, + 509, + 121 + ], + "type": "text", + "content": "Amodal3R: Amodal 3D Reconstruction from Occluded 2D Images" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 142, + 501, + 158 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 142, + 501, + 158 + ], + "spans": [ + { + "bbox": [ + 107, + 142, + 501, + 158 + ], + "type": "text", + "content": "Tianhao Wu" + }, + { + "bbox": [ + 107, + 142, + 501, + 158 + ], + "type": "inline_equation", + "content": "^{1*}" + }, + { + "bbox": [ + 107, + 142, + 501, + 158 + ], + "type": "text", + "content": ", Chuanxia Zheng" + }, + { + "bbox": [ + 107, + 142, + 501, + 158 + ], + "type": "inline_equation", + "content": "^{2\\dagger}" + }, + { + "bbox": [ + 107, + 142, + 501, + 158 + ], + "type": "text", + "content": ", Frank Guan" + }, + { + "bbox": [ + 107, + 142, + 501, + 158 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 107, + 142, + 501, + 158 + ], + "type": "text", + "content": ", Andrea Vedaldi" + }, + { + "bbox": [ + 107, + 142, + 501, + 158 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 107, + 142, + 501, + 158 + ], + "type": "text", + "content": ", Tat-Jen Cham" + }, + { + "bbox": [ + 107, + 142, + 501, + 158 + ], + "type": "inline_equation", + "content": "^{1}" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 81, + 163, + 529, + 194 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 81, + 163, + 529, + 194 + ], + "spans": [ + { + "bbox": [ + 81, + 163, + 529, + 194 + ], + "type": "inline_equation", + "content": "^{*}" + }, + { + "bbox": [ + 81, + 163, + 529, + 194 + ], + "type": "text", + "content": "S-Lab, " + }, + { + "bbox": [ + 81, + 163, + 529, + 194 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 81, + 163, + 529, + 194 + ], + "type": "text", + "content": "Nanyang Technological University, " + }, + { + "bbox": [ + 81, + 163, + 529, + 194 + ], + 
"type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 81, + 163, + 529, + 194 + ], + "type": "text", + "content": "Visual Geometry Group, University of Oxford " + }, + { + "bbox": [ + 81, + 163, + 529, + 194 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 81, + 163, + 529, + 194 + ], + "type": "text", + "content": "Singapore Institute of Technology" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 124, + 198, + 485, + 221 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 198, + 485, + 221 + ], + "spans": [ + { + "bbox": [ + 124, + 198, + 485, + 221 + ], + "type": "text", + "content": "{tianhao001,astjcham}@ntu.edu.sg,{cxzheng,vedaldi} " + }, + { + "bbox": [ + 124, + 198, + 485, + 221 + ], + "type": "inline_equation", + "content": "@" + }, + { + "bbox": [ + 124, + 198, + 485, + 221 + ], + "type": "text", + "content": " robots.ox.ac.uk Frank.guan@singaporetech.edu.sg" + } + ] + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 61, + 243, + 110, + 334 + ], + "blocks": [ + { + "bbox": [ + 59, + 233, + 105, + 243 + ], + "lines": [ + { + "bbox": [ + 59, + 233, + 105, + 243 + ], + "spans": [ + { + "bbox": [ + 59, + 233, + 105, + 243 + ], + "type": "text", + "content": "3D Recast" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 61, + 243, + 110, + 334 + ], + "lines": [ + { + "bbox": [ + 61, + 243, + 110, + 334 + ], + "spans": [ + { + "bbox": [ + 61, + 243, + 110, + 334 + ], + "type": "image", + "image_path": "4143f763f0017e95d50b3bfa0df41273b2f73431d8dd3e70b30d7b6b03fd374b.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 64, + 334, + 97, + 342 + ], + "lines": [ + { + "bbox": [ + 64, + 334, + 97, + 342 + ], + "spans": [ + { + "bbox": [ + 64, + 334, + 97, + 342 + ], + "type": "text", + "content": "2D Input" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 110, + 232, + 197, + 333 + ], + "blocks": [ + { + "bbox": [ + 110, + 232, + 197, + 333 + ], + "lines": [ + { + "bbox": [ + 110, + 232, + 197, + 333 + ], + "spans": [ + { + "bbox": [ + 110, + 232, + 197, + 333 + ], + "type": "image", + "image_path": "53928c56c2fe7023f935c8ed7e190c3c99585158a99df33f98e4c617f9a52d82.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 118, + 334, + 187, + 342 + ], + "lines": [ + { + "bbox": [ + 118, + 334, + 187, + 342 + ], + "spans": [ + { + "bbox": [ + 118, + 334, + 187, + 342 + ], + "type": "text", + "content": "Stochastic sample 1" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 202, + 232, + 288, + 334 + ], + "blocks": [ + { + "bbox": [ + 202, + 232, + 288, + 334 + ], + "lines": [ + { + "bbox": [ + 202, + 232, + 288, + 334 + ], + "spans": [ + { + "bbox": [ + 202, + 232, + 288, + 334 + ], + "type": "image", + "image_path": "b2eac496a0db86dd84347dc3365cd86f4d9d9dbebf9db5f474bbce4cba6e3601.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 216, + 335, + 285, + 343 + ], + "lines": [ + { + "bbox": [ + 216, + 335, + 285, + 343 + ], + "spans": [ + { + "bbox": [ + 216, + 335, + 285, + 343 + ], + "type": "text", + "content": "Stochastic sample 2" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 293, + 
247, + 376, + 329 + ], + "blocks": [ + { + "bbox": [ + 293, + 247, + 376, + 329 + ], + "lines": [ + { + "bbox": [ + 293, + 247, + 376, + 329 + ], + "spans": [ + { + "bbox": [ + 293, + 247, + 376, + 329 + ], + "type": "image", + "image_path": "3f426092ee64dead895249bbba8131fb3b6e9802518102ab8dd167ce2d8b47aa.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 318, + 334, + 352, + 343 + ], + "lines": [ + { + "bbox": [ + 318, + 334, + 352, + 343 + ], + "spans": [ + { + "bbox": [ + 318, + 334, + 352, + 343 + ], + "type": "text", + "content": "2D Input" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_caption" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 377, + 240, + 463, + 306 + ], + "blocks": [ + { + "bbox": [ + 293, + 234, + 410, + 243 + ], + "lines": [ + { + "bbox": [ + 293, + 234, + 410, + 243 + ], + "spans": [ + { + "bbox": [ + 293, + 234, + 410, + 243 + ], + "type": "text", + "content": "In-the-wild Reconstruction" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 377, + 240, + 463, + 306 + ], + "lines": [ + { + "bbox": [ + 377, + 240, + 463, + 306 + ], + "spans": [ + { + "bbox": [ + 377, + 240, + 463, + 306 + ], + "type": "image", + "image_path": "9ead7c9cb53f7909b29e7f85b60b6bdac4b80a3a2f295f3707c7c8f057e70b5c.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 377, + 306, + 420, + 331 + ], + "blocks": [ + { + "bbox": [ + 377, + 306, + 420, + 331 + ], + "lines": [ + { + "bbox": [ + 377, + 306, + 420, + 331 + ], + "spans": [ + { + "bbox": [ + 377, + 306, + 420, + 331 + ], + "type": "image", + "image_path": "465bfb5cb8402acef9e313ef6594a15529e8bea8756855a43fef3610bc64338a.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 391, + 334, + 459, + 343 + ], + "lines": [ + { + "bbox": [ + 391, + 334, + 459, + 343 + ], + "spans": [ + { + "bbox": [ + 391, + 334, + 459, + 343 + ], + "type": "text", + "content": "Stochastic sample 1" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 467, + 239, + 553, + 306 + ], + "blocks": [ + { + "bbox": [ + 467, + 239, + 553, + 306 + ], + "lines": [ + { + "bbox": [ + 467, + 239, + 553, + 306 + ], + "spans": [ + { + "bbox": [ + 467, + 239, + 553, + 306 + ], + "type": "image", + "image_path": "11eb59d8c9fdfcd0e9dd496e456bfa47d8b2b36fe75963ece0988b80f5c87583.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 467, + 306, + 511, + 331 + ], + "blocks": [ + { + "bbox": [ + 467, + 306, + 511, + 331 + ], + "lines": [ + { + "bbox": [ + 467, + 306, + 511, + 331 + ], + "spans": [ + { + "bbox": [ + 467, + 306, + 511, + 331 + ], + "type": "image", + "image_path": "340194d0841a1f92e679b6b0daba73136cb660484090381712dd57d67deb3dfc.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 479, + 334, + 548, + 343 + ], + "lines": [ + { + "bbox": [ + 479, + 334, + 548, + 343 + ], + "spans": [ + { + "bbox": [ + 479, + 334, + 548, + 343 + ], + "type": "text", + "content": "Stochastic sample 2" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_caption" + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 65, + 357, + 135, + 428 + ], + "blocks": [ + { + "bbox": [ + 59, + 346, + 167, + 357 + ], + 
"lines": [ + { + "bbox": [ + 59, + 346, + 167, + 357 + ], + "spans": [ + { + "bbox": [ + 59, + 346, + 167, + 357 + ], + "type": "text", + "content": "3D Scene Decomposition" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 65, + 357, + 135, + 428 + ], + "lines": [ + { + "bbox": [ + 65, + 357, + 135, + 428 + ], + "spans": [ + { + "bbox": [ + 65, + 357, + 135, + 428 + ], + "type": "image", + "image_path": "a8eb7dcb0fe28ebd2f0dafc836463edc5beb06c4ddc6ccc58bd84a56089f2e40.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 80, + 429, + 112, + 437 + ], + "lines": [ + { + "bbox": [ + 80, + 429, + 112, + 437 + ], + "spans": [ + { + "bbox": [ + 80, + 429, + 112, + 437 + ], + "type": "text", + "content": "2D Input" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 55, + 449, + 555, + 494 + ], + "lines": [ + { + "bbox": [ + 55, + 449, + 555, + 494 + ], + "spans": [ + { + "bbox": [ + 55, + 449, + 555, + 494 + ], + "type": "text", + "content": "Figure 1. Example results of Amodal3R. Given partially visible objects within images (occluded regions are shown in black, visible areas in red outlines), our Amodal3R generates diverse semantically meaningful 3D assets with reasonable geometry and plausible appearance. We sample multiple times to get different results from the same occluded input. Trained on synthetic datasets, it generalizes well to real-scene and in-the-wild images, where most objects are partially visible, and reconstructs reasonable 3D assets." + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_caption" + } + ], + "index": 22 + }, + { + "type": "image", + "bbox": [ + 140, + 358, + 272, + 427 + ], + "blocks": [ + { + "bbox": [ + 140, + 358, + 272, + 427 + ], + "lines": [ + { + "bbox": [ + 140, + 358, + 272, + 427 + ], + "spans": [ + { + "bbox": [ + 140, + 358, + 272, + 427 + ], + "type": "image", + "image_path": "ac961d2f9dbd923ec7b9abaa05db1e5574bddbc55d3e3a7e5686690e26040fe0.jpg" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_body" + } + ], + "index": 24 + }, + { + "type": "image", + "bbox": [ + 277, + 357, + 410, + 428 + ], + "blocks": [ + { + "bbox": [ + 277, + 357, + 410, + 428 + ], + "lines": [ + { + "bbox": [ + 277, + 357, + 410, + 428 + ], + "spans": [ + { + "bbox": [ + 277, + 357, + 410, + 428 + ], + "type": "image", + "image_path": "57e6957b2cf630b806a6c83f6eb4350841401464b6edcd0f59c622a62f1e78b9.jpg" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 301, + 429, + 379, + 437 + ], + "lines": [ + { + "bbox": [ + 301, + 429, + 379, + 437 + ], + "spans": [ + { + "bbox": [ + 301, + 429, + 379, + 437 + ], + "type": "text", + "content": "Decomposed 3D assets" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_caption" + } + ], + "index": 25 + }, + { + "type": "image", + "bbox": [ + 415, + 357, + 547, + 428 + ], + "blocks": [ + { + "bbox": [ + 415, + 357, + 547, + 428 + ], + "lines": [ + { + "bbox": [ + 415, + 357, + 547, + 428 + ], + "spans": [ + { + "bbox": [ + 415, + 357, + 547, + 428 + ], + "type": "image", + "image_path": "47b75ad456c10ac0a22005fe83b7b75c7a51c488ce3da0a89e9c6ba62d467e22.jpg" + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_body" + } + ], + "index": 27 + }, + { + "bbox": [ + 152, + 503, + 200, + 515 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 152, + 503, + 200, + 515 + ], + "spans": [ + { + "bbox": [ + 152, + 503, + 
200, + 515 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 53, + 528, + 297, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 528, + 297, + 696 + ], + "spans": [ + { + "bbox": [ + 53, + 528, + 297, + 696 + ], + "type": "text", + "content": "Most image-based 3D object reconstructors assume that objects are fully visible, ignoring occlusions that commonly occur in real-world scenarios. In this paper, we introduce Amodal3R, a conditional 3D generative model designed to reconstruct 3D objects from partial observations. We start from a \"foundation\" 3D generative model and extend it to recover plausible 3D geometry and appearance from occluded objects. We introduce a mask-weighted multi-head cross-attention mechanism followed by an occlusion-aware attention layer that explicitly leverages occlusion priors to guide the reconstruction process. We demonstrate that, by training solely on synthetic data, Amodal3R learns to recover full 3D objects even in the presence of occlusions in real scenes. It substantially outperforms existing meth" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 313, + 505, + 555, + 554 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 505, + 555, + 554 + ], + "spans": [ + { + "bbox": [ + 313, + 505, + 555, + 554 + ], + "type": "text", + "content": "ods that independently perform 2D amodal completion followed by 3D reconstruction, thereby establishing a new benchmark for occlusion-aware 3D reconstruction. See our project page https://sm0kywu.github.io/Amodal3R/." + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 314, + 574, + 394, + 586 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 574, + 394, + 586 + ], + "spans": [ + { + "bbox": [ + 314, + 574, + 394, + 586 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 312, + 594, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 312, + 594, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 312, + 594, + 555, + 715 + ], + "type": "text", + "content": "Humans possess a remarkable ability to infer the complete 3D shape and appearance of objects from single views, even when the objects are partly hidden behind occluders. This ability, namely amodal completion, is key to operating in complex real-world environments, where objects are often partially occluded by surrounding clutter. However, existing image-based 3D reconstruction models [5, 6, 24, 42, 49, 50, 67, 72, 73, 76, 78, 81, 88, 89] fail to recover full 3D assets when the object is partially occluded. We thus consider the problem of reconstructing 3D objects" + } + ] + } + ], + "index": 33 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 14, + 206, + 35, + 559 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 14, + 206, + 35, + 559 + ], + "spans": [ + { + "bbox": [ + 14, + 206, + 35, + 559 + ], + "type": "text", + "content": "arXiv:2503.13439v1 [cs.CV] 17 Mar 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 67, + 702, + 115, + 712 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 702, + 115, + 712 + ], + "spans": [ + { + "bbox": [ + 67, + 702, + 115, + 712 + ], + "type": "text", + "content": "†Project Lead." 
+ } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 302, + 733, + 308, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 308, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 308, + 741 + ], + "type": "text", + "content": "1" + } + ] + } + ], + "index": 35 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 295, + 95 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 295, + 95 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 295, + 95 + ], + "type": "text", + "content": "from one or more partially-occluded views, a novel task that we call amodal 3D reconstruction." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 97, + 295, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 97, + 295, + 289 + ], + "spans": [ + { + "bbox": [ + 55, + 97, + 295, + 289 + ], + "type": "text", + "content": "Amodal 3D reconstruction is a challenging task that requires reconstructing an object's 3D geometry and appearance while completing its occluded parts, both of which are highly ambiguous. Previous approaches to amodal 3D reconstruction [7, 56] have decomposed the task into 2D amodal completion [85, 86, 91], followed by conventional 3D reconstruction [42, 76, 89]. While these two-stage pipelines are easy to implement, they have some drawbacks. First, 2D amodal completion methods rely predominantly on appearance-based priors rather than explicit 3D structural cues. This lack of 3D geometric reasoning is suboptimal. Second, some 3D reconstruction methods can use or require multiple views to function. In this case, 2D amodal completion may lack multi-view consistency, particularly when it is performed independently for different views, which confuses the 3D reconstruction process." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 289, + 295, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 289, + 295, + 456 + ], + "spans": [ + { + "bbox": [ + 55, + 289, + 295, + 456 + ], + "type": "text", + "content": "In this paper, we introduce Amodal3R, a novel single-stage paradigm for amodal 3D reconstruction that surpasses previous state-of-the-art approaches (see Fig. 1). Amodal3R augments the \"foundation\" 3D generator TRELLIS [81] with an additional branch to condition on occlusions. Its key advantage is performing reconstruction, completion, and occlusion reasoning directly in a 3D latent space instead of using a two-stage scheme. In this way, the model can reconstruct both visible and occluded regions of the object coherently and plausibly. To adapt TRELLIS to amodal reconstruction, we introduce mask weighting in multi-head cross-attention and a new occlusion-aware layer. These guide the model to focus more on visible parts of the object without perturbing the pre-trained model too much." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 458, + 295, + 541 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 458, + 295, + 541 + ], + "spans": [ + { + "bbox": [ + 55, + 458, + 295, + 541 + ], + "type": "text", + "content": "We evaluated the effectiveness of Amodal3R on datasets like Google Scanned Objects [16] and Toys4K [69] augmented with occlusions, on 3D scenes from Replica [70], and on real-world monocular images. 
Without relying on additional heuristics, Amodal3R achieves significantly superior performance compared to state-of-the-art models and generalizes well to different datasets, including real ones." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 67, + 543, + 274, + 554 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 67, + 543, + 274, + 554 + ], + "spans": [ + { + "bbox": [ + 67, + 543, + 274, + 554 + ], + "type": "text", + "content": "In summary, our main contributions are as follows:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 556, + 295, + 711 + ], + "type": "list", + "angle": 0, + "index": 8, + "blocks": [ + { + "bbox": [ + 56, + 556, + 295, + 602 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 556, + 295, + 602 + ], + "spans": [ + { + "bbox": [ + 56, + 556, + 295, + 602 + ], + "type": "text", + "content": "- We propose a novel occlusion-aware 3D reconstruction model that directly reconstructs complete and high-quality 3D objects from one or more partially occluded views, without relying on 2D amodal completion models." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 604, + 295, + 662 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 604, + 295, + 662 + ], + "spans": [ + { + "bbox": [ + 56, + 604, + 295, + 662 + ], + "type": "text", + "content": "- We introduce a mask-weighted cross-attention mechanism and an occlusion-aware layer to inject occlusion awareness into an existing 3D reconstruction model, improving both the geometry and appearance of the reconstructed objects when they are partially occluded." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 663, + 295, + 711 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 663, + 295, + 711 + ], + "spans": [ + { + "bbox": [ + 56, + 663, + 295, + 711 + ], + "type": "text", + "content": "- We demonstrate via experiments on the 3D scene dataset Replica and real-world images that our one-stage pipeline significantly outperforms existing two-stage ones, establishing a new benchmark for amodal 3D reconstruction." + } + ] + } + ], + "index": 7 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 314, + 71, + 392, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 71, + 392, + 85 + ], + "spans": [ + { + "bbox": [ + 314, + 71, + 392, + 85 + ], + "type": "text", + "content": "2. Background" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 91, + 553, + 341 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 91, + 553, + 341 + ], + "spans": [ + { + "bbox": [ + 313, + 91, + 553, + 341 + ], + "type": "text", + "content": "2D Amodal Completion. Recent advances in 2D amodal completion [2, 56, 83, 85, 86, 91] have achieved significant success in reconstructing occluded regions of objects in 2D images. While these methods show promise for 3D generation pipelines [2, 56], they still have limitations. First, 2D amodal completion models have limited 3D geometric understanding as they interpret images as 2D patterns. While excelling at completing textures, they may generate physically implausible geometries when handling complex occlusions. This stems from their lack of explicit 3D reasoning and reliance on 2D appearance priors, without true volumetric understanding. Second, for models that use multi-view images as input, the results of the 2D amodal completion are often inconsistent across views. 
Inconsistent views need to be corrected by the 3D reconstructor, which cause confusion (see Sec. 4). Although there has been significant work on multi-view consistent generation [7, 64, 72, 74, 82, 94], multi-view consistent completion is less explored. RenderDiffusion [1] contains an example, but the results are often blurry or lack details. This motivates our 3D-centric reconstruction framework." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 342, + 553, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 342, + 553, + 533 + ], + "spans": [ + { + "bbox": [ + 313, + 342, + 553, + 533 + ], + "type": "text", + "content": "3D Shape Completion. Several methods [12, 14, 71, 95] start from a partial 3D reconstruction, then complete it in 3D. They use encoder-decoder architectures [25] or volumetric representations [10, 30] to robustly recover 3D shape but not texture, which is necessary in many applications. They also still require recovering the partial 3D geometry from an occluded image, a challenge in itself. Furthermore, they ignore the input image when completing the object in 3D, disregarding the occlusion pattern that caused the 3D geometry to be recovered only partially. Recent work [11] utilizes the partially visible object as input specifically for 3D shape completion. However, it focuses solely on geometry reconstruction, without recovering the object's texture and appearance. In contrast, our approach is end-to-end, recovering the complete 3D shape and appearance of the object from the occluded image." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 534, + 553, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 534, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 534, + 553, + 713 + ], + "type": "text", + "content": "3D Generative Models. Early advancements in 3D generation are based on GANs [19], exploring various 3D representations such as point clouds [26, 40], voxel grids [84, 96], view sets [50, 57], NeRF [4, 5, 52, 62], SDF [18], and 3D Gaussian mixtures [79]. While GANs can capture complex 3D structures, they struggle with stability, scalability, and data diversity. The focus then shifted to diffusion models [23, 61, 68], which were more capable and versatile. They were first applied to novel view synthesis [77] in image space, before expanding to model a variety of 3D representations, including point clouds [46, 47, 51, 80, 93], voxel grids [38, 48], triplanes [66, 97], meshes [20, 44], and 3D Gaussian mixtures [9, 36, 39, 87]. Autoregressive models [8, 49, 67] have also been explored for mesh generation, focusing on improving the topology of the final 3D mesh." 
+ } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 60, + 73, + 550, + 178 + ], + "blocks": [ + { + "bbox": [ + 60, + 73, + 550, + 178 + ], + "lines": [ + { + "bbox": [ + 60, + 73, + 550, + 178 + ], + "spans": [ + { + "bbox": [ + 60, + 73, + 550, + 178 + ], + "type": "image", + "image_path": "46e0fb0ee3858047a89e9ac6e8e2762da5cdf37cc141d2d1a5005dc02c86b545.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 60, + 181, + 550, + 277 + ], + "blocks": [ + { + "bbox": [ + 60, + 181, + 550, + 277 + ], + "lines": [ + { + "bbox": [ + 60, + 181, + 550, + 277 + ], + "spans": [ + { + "bbox": [ + 60, + 181, + 550, + 277 + ], + "type": "image", + "image_path": "0dfbd6bc3d0e73132a8794d2bf334fd7a6b88689f397e20c56056d8c9a2dbaa5.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 285, + 555, + 340 + ], + "lines": [ + { + "bbox": [ + 55, + 285, + 555, + 340 + ], + "spans": [ + { + "bbox": [ + 55, + 285, + 555, + 340 + ], + "type": "text", + "content": "Figure 2. Overview of Amodal3R. Given an image as input and the regions of interest, Amodal3R first extracts the partially visible target object, along with the visibility and occlusion masks using an off-the-shelf 2D segmenter. It then applies DINOv2 [55] to extract features " + }, + { + "bbox": [ + 55, + 285, + 555, + 340 + ], + "type": "inline_equation", + "content": "\\pmb{c}_{dino}" + }, + { + "bbox": [ + 55, + 285, + 555, + 340 + ], + "type": "text", + "content": " as additional conditioning for the 3D reconstructor. To enhance occlusion reasoning, each transformer block incorporates a mask-weighted cross-attention (via " + }, + { + "bbox": [ + 55, + 285, + 555, + 340 + ], + "type": "inline_equation", + "content": "\\pmb{c}_{vis}" + }, + { + "bbox": [ + 55, + 285, + 555, + 340 + ], + "type": "text", + "content": ") and occlusion-aware attention layer (via " + }, + { + "bbox": [ + 55, + 285, + 555, + 340 + ], + "type": "inline_equation", + "content": "\\pmb{c}_{occ}" + }, + { + "bbox": [ + 55, + 285, + 555, + 340 + ], + "type": "text", + "content": "), ensuring the 3D reconstructor accurately perceives visible information while effectively inferring occluded parts. For conditioning details, see Sec. 3.2." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 54, + 361, + 295, + 493 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 361, + 295, + 493 + ], + "spans": [ + { + "bbox": [ + 54, + 361, + 295, + 493 + ], + "type": "text", + "content": "Authors have also shifted from performing diffusion in 2D image space [64, 72, 89] to 3D latent spaces [29, 53, 60, 65, 75, 81]. Such methods typically consist of two stages: the first for generating geometry and the second for generating appearance, and are capable of high-quality imaged-to-3D generation. However, they assume that input objects are fully visible, which limits their application to real-world scenes, where occlusions are frequent. 
Here, we extend such approaches to generate high-quality 3D assets from occluded input images — a challenging task that requires inferring complete 3D objects from partial information." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 502, + 111, + 514 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 502, + 111, + 514 + ], + "spans": [ + { + "bbox": [ + 55, + 502, + 111, + 514 + ], + "type": "text", + "content": "3. Method" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 54, + 521, + 295, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 521, + 295, + 605 + ], + "spans": [ + { + "bbox": [ + 54, + 521, + 295, + 605 + ], + "type": "text", + "content": "Consider an image " + }, + { + "bbox": [ + 54, + 521, + 295, + 605 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 54, + 521, + 295, + 605 + ], + "type": "text", + "content": " containing a partially occluded view of an object of interest " + }, + { + "bbox": [ + 54, + 521, + 295, + 605 + ], + "type": "inline_equation", + "content": "o_i" + }, + { + "bbox": [ + 54, + 521, + 295, + 605 + ], + "type": "text", + "content": ", we wish to reconstruct the complete 3D shape and appearance " + }, + { + "bbox": [ + 54, + 521, + 295, + 605 + ], + "type": "inline_equation", + "content": "y" + }, + { + "bbox": [ + 54, + 521, + 295, + 605 + ], + "type": "text", + "content": " of the object " + }, + { + "bbox": [ + 54, + 521, + 295, + 605 + ], + "type": "inline_equation", + "content": "o_i" + }, + { + "bbox": [ + 54, + 521, + 295, + 605 + ], + "type": "text", + "content": ". This task is thus conceptually similar to existing image-to-3D reconstruction, except that here the object is partially occluded instead of being fully visible in " + }, + { + "bbox": [ + 54, + 521, + 295, + 605 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 54, + 521, + 295, + 605 + ], + "type": "text", + "content": ". We call this problem amodal 3D reconstruction." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 54, + 605, + 295, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 605, + 295, + 714 + ], + "spans": [ + { + "bbox": [ + 54, + 605, + 295, + 714 + ], + "type": "text", + "content": "Here, we elaborate on Amodal3R (Fig. 2), a new method for amodal 3D reconstruction. Unlike recent two-stage methods [7, 56] that first perform 2D amodal completion followed by 3D reconstruction, Amodal3R is an end-to-end occlusion-aware 3D reconstruction model that generates complete shapes and multi-view geometry directly within the 3D space. 
Formally, Amodal3R is a conditional generator " + }, + { + "bbox": [ + 54, + 605, + 295, + 714 + ], + "type": "inline_equation", + "content": "v_{\\theta}(y|o_i,M_{\\mathrm{vis}},M_{\\mathrm{occ}})" + }, + { + "bbox": [ + 54, + 605, + 295, + 714 + ], + "type": "text", + "content": " that takes as input the image " + }, + { + "bbox": [ + 54, + 605, + 295, + 714 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 54, + 605, + 295, + 714 + ], + "type": "text", + "content": ", centered on the partially visible object " + }, + { + "bbox": [ + 54, + 605, + 295, + 714 + ], + "type": "inline_equation", + "content": "o_i" + }, + { + "bbox": [ + 54, + 605, + 295, + 714 + ], + "type": "text", + "content": ", as well as the" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "spans": [ + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "text", + "content": "visibility mask " + }, + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{vis}}" + }, + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "text", + "content": " and occlusion mask " + }, + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{occ}}" + }, + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "text", + "content": ". The visibility mask " + }, + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{vis}}" + }, + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "text", + "content": " marks the pixels of image " + }, + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "text", + "content": " containing the object " + }, + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "inline_equation", + "content": "o_i" + }, + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "text", + "content": ", while " + }, + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{occ}}" + }, + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "text", + "content": " marks the pixels containing the oc-clusters, i.e., all other objects that potentially obscure " + }, + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "inline_equation", + "content": "o_i" + }, + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "text", + "content": ". For real images, these masks can be efficiently obtained using pre-trained 2D segmentation models like Segment Anything [35]. By providing point coordinate prompts for the interest object " + }, + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "inline_equation", + "content": "o_i" + }, + { + "bbox": [ + 313, + 361, + 555, + 553 + ], + "type": "text", + "content": " and its occlusters respectively, the segmentation model can generate the corresponding masks. In cases where multiple occlusters are present or when occlusters fragment the visible area of the target object, a sequential process is employed. Specifically, point prompts for each visible/occluding region are provided to the 2D segmenter one at a time, with the model generating an individual mask for each region. These masks are then aggregated to form a comprehensive visibility/occlusion mask." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 553, + 556, + 625 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 553, + 556, + 625 + ], + "spans": [ + { + "bbox": [ + 313, + 553, + 556, + 625 + ], + "type": "text", + "content": "The challenges for Amodal3R are how to: (1) produce a plausible and complete 3D shape and appearance from partial observations, even in the absence of multi-view information; and (2) ensure 3D consistency in terms of geometry and photometry, maintaining seamless visual coherence between visible and generated regions." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 635, + 455, + 647 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 635, + 455, + 647 + ], + "spans": [ + { + "bbox": [ + 313, + 635, + 455, + 647 + ], + "type": "text", + "content": "3.1. Preliminaries: TRELLIS" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 653, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 653, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 653, + 555, + 715 + ], + "type": "text", + "content": "We begin by briefly reviewing the TRELLIS [81] model on which our model is based. TRELLIS is a conditional 3D diffusion model that performs denoising in a sparse 3D latent space. First, it introduces a transformer-based variational autoencoder (VAE) " + }, + { + "bbox": [ + 313, + 653, + 555, + 715 + ], + "type": "inline_equation", + "content": "(\\mathcal{E},\\mathcal{D})" + }, + { + "bbox": [ + 313, + 653, + 555, + 715 + ], + "type": "text", + "content": ", where the encoder " + }, + { + "bbox": [ + 313, + 653, + 555, + 715 + ], + "type": "inline_equation", + "content": "\\mathcal{E}" + }, + { + "bbox": [ + 313, + 653, + 555, + 715 + ], + "type": "text", + "content": " maps" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "text", + "content": "sparse voxel features to structured latents " + }, + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "inline_equation", + "content": "\\mathbf{z}" + }, + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "text", + "content": ", and the decoder " + }, + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "inline_equation", + "content": "\\mathcal{D}" + }, + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "text", + "content": " converts them into desired output representations, including 3D Gaussians [31], radiance fields, and meshes. 
In particular, a 3D object " + }, + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "inline_equation", + "content": "\\mathbf{o}_i" + }, + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "text", + "content": " is encoded using its structured latent variables (SLAT) defined as " + }, + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "inline_equation", + "content": "\\mathbf{z} = \\{(z_i, p_i)\\}_{i=1}^L" + }, + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "inline_equation", + "content": "z_i \\in \\mathbb{R}^C" + }, + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "text", + "content": " is a local latent feature attached to the voxel at position " + }, + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "inline_equation", + "content": "p_i \\in \\{0, 1, \\dots, N-1\\}^3" + }, + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "text", + "content": " is the spatial resolution of the grid, and " + }, + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "inline_equation", + "content": "L \\ll N^3" + }, + { + "bbox": [ + 55, + 72, + 296, + 203 + ], + "type": "text", + "content": " is the number of active voxels intersecting the object's surface. This representation encodes both coarse geometric structures and fine appearance details by associating local latents with active voxels." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "spans": [ + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "text", + "content": "TRELLIS comprises two diffusion models, one to predict the active voxel centers " + }, + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "inline_equation", + "content": "\\{\\pmb{p}_i\\}_{i=1}^L" + }, + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "text", + "content": " (stage 1) and the other to recover the corresponding latents " + }, + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "inline_equation", + "content": "\\{\\pmb{z}_i\\}_{i=1}^L" + }, + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "text", + "content": " (stage 2). 
Each model can be viewed as a denoising neural network " + }, + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "inline_equation", + "content": "v_\\theta" + }, + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "text", + "content": " operating in a latent space " + }, + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "inline_equation", + "content": "\\ell" + }, + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "text", + "content": ", and trained to remove Gaussian noise " + }, + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "inline_equation", + "content": "\\epsilon \\sim \\mathcal{N}(0, I)" + }, + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "text", + "content": " added to the latent code, i.e., " + }, + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "inline_equation", + "content": "\\ell^{(t)} = (1 - t)\\ell^{(0)} + t\\epsilon" + }, + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "inline_equation", + "content": "t \\in [0,1]" + }, + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "text", + "content": " is the noise level [43]. The denoising network " + }, + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "inline_equation", + "content": "v_\\theta" + }, + { + "bbox": [ + 55, + 204, + 296, + 312 + ], + "type": "text", + "content": " is trained to minimize the flow loss:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 82, + 317, + 295, + 336 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 82, + 317, + 295, + 336 + ], + "spans": [ + { + "bbox": [ + 82, + 317, + 295, + 336 + ], + "type": "interline_equation", + "content": "\\min _ {\\theta} \\mathbb {E} _ {(\\boldsymbol {\\ell} ^ {(0)}, x), t, \\epsilon} \\| v _ {\\theta} (\\boldsymbol {\\ell} ^ {(t)}, x, t) - (\\epsilon - \\boldsymbol {\\ell} ^ {(0)}) \\| _ {2} ^ {2}, \\tag {1}", + "image_path": "706e1f93f660a73f3860f9ba509b7bd8169cadcc36187998fe3fbc149c60f447.jpg" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "spans": [ + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "text", + "content": " is the image prompt. In stage 1, the latent code is a compressed version of the " + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "inline_equation", + "content": "N \\times N \\times N" + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "text", + "content": " occupancy volume, where the spatial resolution is reduced from " + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "inline_equation", + "content": "N = 64" + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "inline_equation", + "content": "r = 16" + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "text", + "content": ". 
Hence, in this case the latent vector is a matrix " + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "inline_equation", + "content": "\\ell \\in \\mathbb{R}^{L' \\times C'}" + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "text", + "content": " of " + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "inline_equation", + "content": "L' = r^3 = 4096" + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "text", + "content": " C'-dimensional tokens. In stage 2, the latent code " + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "inline_equation", + "content": "\\ell = \\{z_i\\}_{i=1}^L \\in \\mathbb{R}^{L \\times C}" + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "text", + "content": " is a matrix of " + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "inline_equation", + "content": "L" + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "text", + "content": " C-dimensional tokens, where " + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "inline_equation", + "content": "L" + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "text", + "content": " is now the number of active voxels. Similar transformers are used to implement the corresponding denoising networks " + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "inline_equation", + "content": "v_{\\theta}" + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "text", + "content": " (Fig. 3). The conditioning image " + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "text", + "content": " is read via cross-attention layers that pool information from the tokens " + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "inline_equation", + "content": "c_{\\mathrm{dino}}" + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "text", + "content": " extracted by DINO [55] from the image " + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 55, + 342, + 296, + 486 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 493, + 255, + 504 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 493, + 255, + 504 + ], + "spans": [ + { + "bbox": [ + 55, + 493, + 255, + 504 + ], + "type": "text", + "content": "3.2. 
Mask-Conditional Generative Models" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "spans": [ + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "text", + "content": "The key change needed to fine-tune the generator to work with partially occluded images is to condition the transformers " + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "inline_equation", + "content": "v_{\\theta}" + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "text", + "content": " not only on the image " + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "text", + "content": ", but also on the masks " + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{vis}}" + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{occ}}" + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "text", + "content": ". A naive approach is to downsample or embed the masks to obtain tokens " + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "inline_equation", + "content": "(c_{\\mathrm{vis}}, c_{\\mathrm{occ}})" + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "text", + "content": " that can be concatenated to the image tokens " + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "inline_equation", + "content": "c_{\\mathrm{dino}}" + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "text", + "content": ", to be processed by cross-attention by the transformer as before. However, the image " + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "text", + "content": " contains significantly more information than the binary masks " + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{vis}}" + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{occ}}" + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "text", + "content": ", so the transformer, which is initially trained to consider " + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 55, + 510, + 296, + 677 + ], + "type": "text", + "content": " only, may simply ignore this information. This is compounded by the fact that learning to use this new information, which involves a new type of tokens that are incompatible with image ones, may require aggressive fine-tuning of the transformer." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 677, + 296, + 713 + ], + "type": "text", + "content": "To sidestep this problem, inspired by masked attention approaches in 2D completion [92], we introduce mask-weighted cross-attention and an occlusion-aware attention" + } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 317, + 76, + 408, + 241 + ], + "blocks": [ + { + "bbox": [ + 317, + 76, + 408, + 241 + ], + "lines": [ + { + "bbox": [ + 317, + 76, + 408, + 241 + ], + "spans": [ + { + "bbox": [ + 317, + 76, + 408, + 241 + ], + "type": "image", + "image_path": "c28eb32c3ce9ca4c19b888186c109fcd0b6e8954a43fed553318ca76a53e098b.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 334, + 250, + 397, + 259 + ], + "lines": [ + { + "bbox": [ + 334, + 250, + 397, + 259 + ], + "spans": [ + { + "bbox": [ + 334, + 250, + 397, + 259 + ], + "type": "text", + "content": "(a) TRELLIS [81]" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 313, + 263, + 555, + 308 + ], + "lines": [ + { + "bbox": [ + 313, + 263, + 555, + 308 + ], + "spans": [ + { + "bbox": [ + 313, + 263, + 555, + 308 + ], + "type": "text", + "content": "Figure 3. The transformer structure of Amodal3R. Compared with the original TRELLIS [81] design, we further introduce the mask-weighted cross-attention and occlusion-aware layer. It applies to both sparse structure and SLAT diffusion models." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 413, + 71, + 553, + 241 + ], + "blocks": [ + { + "bbox": [ + 413, + 71, + 553, + 241 + ], + "lines": [ + { + "bbox": [ + 413, + 71, + 553, + 241 + ], + "spans": [ + { + "bbox": [ + 413, + 71, + 553, + 241 + ], + "type": "image", + "image_path": "e3ed020a61c74b01acd0ad7190934e06b7482437b4cce85de31b597a5ed49498.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 449, + 250, + 523, + 259 + ], + "lines": [ + { + "bbox": [ + 449, + 250, + 523, + 259 + ], + "spans": [ + { + "bbox": [ + 449, + 250, + 523, + 259 + ], + "type": "text", + "content": "(b) Amodal3R (Ours)" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 327, + 553, + 361 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 327, + 553, + 361 + ], + "spans": [ + { + "bbox": [ + 313, + 327, + 553, + 361 + ], + "type": "text", + "content": "layer to better utilize the visibility mask " + }, + { + "bbox": [ + 313, + 327, + 553, + 361 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{vis}}" + }, + { + "bbox": [ + 313, + 327, + 553, + 361 + ], + "type": "text", + "content": " and the occlusion mask " + }, + { + "bbox": [ + 313, + 327, + 553, + 361 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{occ}}" + }, + { + "bbox": [ + 313, + 327, + 553, + 361 + ], + "type": "text", + "content": " without disrupting the pre-trained 3D generator too much. These are described next." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 362, + 555, + 459 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 362, + 555, + 459 + ], + "spans": [ + { + "bbox": [ + 313, + 362, + 555, + 459 + ], + "type": "text", + "content": "Mask-weighted Cross-Attention. A key novel component of our model is mask-weighted cross-attention, which allows the model to focus its attention on the visible parts of the object. We implement this mechanism in each attention block in the transformers " + }, + { + "bbox": [ + 313, + 362, + 555, + 459 + ], + "type": "inline_equation", + "content": "v_{\\theta}" + }, + { + "bbox": [ + 313, + 362, + 555, + 459 + ], + "type": "text", + "content": " of Sec. 3.1. Given the latent tokens " + }, + { + "bbox": [ + 313, + 362, + 555, + 459 + ], + "type": "inline_equation", + "content": "\\ell \\in \\mathbb{R}^{L \\times C}" + }, + { + "bbox": [ + 313, + 362, + 555, + 459 + ], + "type": "text", + "content": " input to a cross-attention layer as well as the image features " + }, + { + "bbox": [ + 313, + 362, + 555, + 459 + ], + "type": "inline_equation", + "content": "c_{\\mathrm{dino}} \\in \\mathbb{R}^{K \\times C'}" + }, + { + "bbox": [ + 313, + 362, + 555, + 459 + ], + "type": "text", + "content": " of the partially visible object, cross-attention computes the similarity score matrix" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 325, + 464, + 553, + 480 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 464, + 553, + 480 + ], + "spans": [ + { + "bbox": [ + 325, + 464, + 553, + 480 + ], + "type": "interline_equation", + "content": "\\boldsymbol {q} = W _ {q} \\boldsymbol {\\ell}, \\quad [ \\boldsymbol {k}, \\boldsymbol {v} ] = W _ {k v} \\boldsymbol {c} _ {\\text {d i n o}}, \\quad S = \\boldsymbol {q} \\boldsymbol {k} ^ {\\top} / \\sqrt {D}, \\tag {2}", + "image_path": "4ab5972ac9846308f5694ddd2a4176e0bb00bfd17195005d56b1d89d6233643d.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "spans": [ + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "type": "inline_equation", + "content": "W_{q}" + }, + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "type": "inline_equation", + "content": "W_{kv}" + }, + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "type": "text", + "content": " are learnable projections that map the latents " + }, + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "type": "inline_equation", + "content": "\\ell" + }, + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "type": "text", + "content": " to the query " + }, + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "type": "text", + "content": " and the conditioning image feature " + }, + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "type": "inline_equation", + "content": "c_{\\mathrm{dino}}" + }, + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "type": "text", + "content": " to the key " + }, + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "type": "text", + "content": " and the value " + }, + { + "bbox": [ + 
313, + 485, + 553, + 533 + ], + "type": "inline_equation", + "content": "v" + }, + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "type": "text", + "content": ", respectively, and " + }, + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 313, + 485, + 553, + 533 + ], + "type": "text", + "content": " is the dimension of the query and key vectors." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "spans": [ + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "text", + "content": "We wish to bias the token similarity matrix " + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "inline_equation", + "content": "S \\in \\mathbb{R}^{L \\times K}" + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "text", + "content": " towards the visible parts of the object. Recall that " + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "text", + "content": " is the number of tokens " + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "inline_equation", + "content": "c_{\\mathrm{dino}}" + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "text", + "content": " extracted by DINOv2 [54] from the occluded image " + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "inline_equation", + "content": "x \\cdot M_{\\mathrm{vis}}" + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "text", + "content": ". Each token thus corresponds to a " + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "inline_equation", + "content": "P \\times P" + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "text", + "content": " patch in the input image (where " + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "inline_equation", + "content": "P = 14" + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "text", + "content": "). We extract analogous flatten 1-D tokens " + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "inline_equation", + "content": "c_{\\mathrm{vis}} = [\\rho(M_{\\mathrm{vis},1}), \\dots, \\rho(M_{\\mathrm{vis},K})]" + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "text", + "content": " from the visibility mask by calculating the fraction of " + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "inline_equation", + "content": "P \\times P" + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "text", + "content": " pixels that are visible in the " + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "inline_equation", + "content": "j" + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "text", + "content": "-th image patch " + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{vis},j}" + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "text", + "content": ". " + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "inline_equation", + "content": "c_{\\mathrm{vis},j} = \\rho(M_{\\mathrm{vis},j}) \\in [0,1]" + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "text", + "content": ". 
We then use these quantities to bias the computation of the attention matrix " + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "inline_equation", + "content": "A = \\text{softmax}(S + \\log c_{\\mathrm{vis}}) \\in [0,1]^{L \\times M}" + }, + { + "bbox": [ + 313, + 533, + 555, + 677 + ], + "type": "text", + "content": " via broadcasting. Hence:" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 372, + 683, + 553, + 712 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 372, + 683, + 553, + 712 + ], + "spans": [ + { + "bbox": [ + 372, + 683, + 553, + 712 + ], + "type": "interline_equation", + "content": "A _ {i j} = \\frac {\\boldsymbol {c} _ {\\text {v i s} , j} \\exp \\left(S _ {i j}\\right)}{\\sum_ {k = 1} ^ {K} \\boldsymbol {c} _ {\\text {v i s} , k} \\exp \\left(S _ {i k}\\right)}. \\tag {3}", + "image_path": "5e6013154a211e77c566a3cf8038ea8cdd2eba934b329f3bca48f675fff163f3.jpg" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 67, + 70, + 280, + 122 + ], + "blocks": [ + { + "bbox": [ + 67, + 70, + 280, + 122 + ], + "lines": [ + { + "bbox": [ + 67, + 70, + 280, + 122 + ], + "spans": [ + { + "bbox": [ + 67, + 70, + 280, + 122 + ], + "type": "image", + "image_path": "5965ea718b9d135af93c8a920e517f6aac40214a9ac9d61495389ea154071baa.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 129, + 297, + 185 + ], + "lines": [ + { + "bbox": [ + 55, + 129, + 297, + 185 + ], + "spans": [ + { + "bbox": [ + 55, + 129, + 297, + 185 + ], + "type": "text", + "content": "Figure 4. 3D-consistent mask example. Given a 3D mesh, we render selected triangles in a distinct color from the others to generate multi-view consistent masks. It allows the evaluation of multi-view methods in handling contact occlusion. (The occluded regions are shown in red.)" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 206, + 296, + 290 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 206, + 296, + 290 + ], + "spans": [ + { + "bbox": [ + 55, + 206, + 296, + 290 + ], + "type": "text", + "content": "In this manner, the visibility flag modulates the attention matrix " + }, + { + "bbox": [ + 55, + 206, + 296, + 290 + ], + "type": "inline_equation", + "content": "A" + }, + { + "bbox": [ + 55, + 206, + 296, + 290 + ], + "type": "text", + "content": " smoothly. If there are no visible pixels in a patch " + }, + { + "bbox": [ + 55, + 206, + 296, + 290 + ], + "type": "inline_equation", + "content": "j" + }, + { + "bbox": [ + 55, + 206, + 296, + 290 + ], + "type": "text", + "content": ", then " + }, + { + "bbox": [ + 55, + 206, + 296, + 290 + ], + "type": "inline_equation", + "content": "A_{ij} = 0" + }, + { + "bbox": [ + 55, + 206, + 296, + 290 + ], + "type": "text", + "content": ", so the corresponding image tokens are skipped in cross attention. While we have illustrated how this works for a single head, in practice Amodal3R uses a multiple-head transformer, to which Eq. (3) extends trivially. 
Please see the supplementary materials for details." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 291, + 296, + 434 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 291, + 296, + 434 + ], + "spans": [ + { + "bbox": [ + 55, + 291, + 296, + 434 + ], + "type": "text", + "content": "Occlusion-aware Attention Layer. In addition to encouraging the network to focus its attention on the visible part of the object, we also introduce an occlusion-aware attention layer. For amodal completion, it is not enough to specify the visible information; instead, we must also differentiate foreground occluders from the background, as this explicitly identifies the potential regions requiring completion. Namely, if a pixel is denoted as invisible in the mask " + }, + { + "bbox": [ + 55, + 291, + 296, + 434 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{vis}}" + }, + { + "bbox": [ + 55, + 291, + 296, + 434 + ], + "type": "text", + "content": ", this might be because there is an occluder in front of that pixel (so the pixel could have contained the object had it not been occluded), or because the pixel is entirely off the object. This information is encoded by the mask " + }, + { + "bbox": [ + 55, + 291, + 296, + 434 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{occ}}" + }, + { + "bbox": [ + 55, + 291, + 296, + 434 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 435, + 296, + 531 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 435, + 296, + 531 + ], + "spans": [ + { + "bbox": [ + 55, + 435, + 296, + 531 + ], + "type": "text", + "content": "To allow the model to distinguish between visible, occluded and background areas, we add one more cross-attention layer, placed immediately after the mask-weighted cross-attention layer, that pools solely the occlusion mask " + }, + { + "bbox": [ + 55, + 435, + 296, + 531 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{occ}}" + }, + { + "bbox": [ + 55, + 435, + 296, + 531 + ], + "type": "text", + "content": ". In order to do so, we encode the occlusion mask " + }, + { + "bbox": [ + 55, + 435, + 296, + 531 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{occ}}" + }, + { + "bbox": [ + 55, + 435, + 296, + 531 + ], + "type": "text", + "content": " as a set of flattened 1-D tokens " + }, + { + "bbox": [ + 55, + 435, + 296, + 531 + ], + "type": "inline_equation", + "content": "c_{\\mathrm{occ}} = [\\rho(M_{\\mathrm{occ},1}), \\ldots, \\rho(M_{\\mathrm{occ},M})]" + }, + { + "bbox": [ + 55, + 435, + 296, + 531 + ], + "type": "text", + "content": ", as before, and then simply pool " + }, + { + "bbox": [ + 55, + 435, + 296, + 531 + ], + "type": "inline_equation", + "content": "c_{\\mathrm{occ}}" + }, + { + "bbox": [ + 55, + 435, + 296, + 531 + ], + "type": "text", + "content": " using a cross-attention layer." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 540, + 218, + 552 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 540, + 218, + 552 + ], + "spans": [ + { + "bbox": [ + 55, + 540, + 218, + 552 + ], + "type": "text", + "content": "3.3. 
Simulating Occluded 3D Data" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 557, + 295, + 604 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 557, + 295, + 604 + ], + "spans": [ + { + "bbox": [ + 55, + 557, + 295, + 604 + ], + "type": "text", + "content": "To train and evaluate our model, we require a dataset of 3D assets imaged in scenarios with clutter and occlusions. It is challenging to obtain such data in the real world, so we resorted to synthetic data and simulation to train our model." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "spans": [ + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "text", + "content": "Random 2D Occlusions. In order to train our model, each training sample " + }, + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "inline_equation", + "content": "(o,x,M_{\\mathrm{vis}},M_{\\mathrm{occ}})" + }, + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "text", + "content": " consists of a 3D object " + }, + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "inline_equation", + "content": "o" + }, + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "text", + "content": " (from which ground truth latents can be obtained by using the encoders of Sec. 3.1), an image " + }, + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "text", + "content": " with partial occlusions, and corresponding visibility and occlusion masks " + }, + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{vis}}" + }, + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{occ}}" + }, + { + "bbox": [ + 55, + 605, + 296, + 714 + ], + "type": "text", + "content": ". In a real scenario, occlusions arise from other objects in the scene. In a multi-view setting, like the one discussed below, these occlusions need to be consistent across views, reflecting the underlying scene geometry." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 72, + 553, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 133 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 133 + ], + "type": "text", + "content": "However, because our model is trained for single-view reconstruction, we can simulate occlusions by randomly masking parts of the object after rendering it in 2D. This is simpler and allows us to generate fresh occlusion patterns each time a view is sampled for training."
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 133, + 554, + 240 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 133, + 554, + 240 + ], + "spans": [ + { + "bbox": [ + 313, + 133, + 554, + 240 + ], + "type": "text", + "content": "Thus, given " + }, + { + "bbox": [ + 313, + 133, + 554, + 240 + ], + "type": "inline_equation", + "content": "o" + }, + { + "bbox": [ + 313, + 133, + 554, + 240 + ], + "type": "text", + "content": " and an image " + }, + { + "bbox": [ + 313, + 133, + 554, + 240 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 313, + 133, + 554, + 240 + ], + "type": "text", + "content": " rendered from a random viewpoint, we generate random occlusion masks as follows. Inspired by work on 2D completion [27, 41, 58, 90], we randomly place lines, ellipses, and rectangles, simulating diverse masking patterns. The union of these random shapes gives us the occlusion mask " + }, + { + "bbox": [ + 313, + 133, + 554, + 240 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{occ}}" + }, + { + "bbox": [ + 313, + 133, + 554, + 240 + ], + "type": "text", + "content": ". Then, if " + }, + { + "bbox": [ + 313, + 133, + 554, + 240 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{obj}}" + }, + { + "bbox": [ + 313, + 133, + 554, + 240 + ], + "type": "text", + "content": " is the mask of the unoccluded object, the visible mask is given by " + }, + { + "bbox": [ + 313, + 133, + 554, + 240 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{vis}} = M_{\\mathrm{obj}} \\odot (1 - M_{\\mathrm{occ}})" + }, + { + "bbox": [ + 313, + 133, + 554, + 240 + ], + "type": "text", + "content": ". Examples of such patterns are given in the supplementary material." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 313, + 241, + 553, + 324 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 241, + 553, + 324 + ], + "spans": [ + { + "bbox": [ + 313, + 241, + 553, + 324 + ], + "type": "text", + "content": "3D-consistent occlusions. In a real scene imaged from multiple viewpoints, occlusions are not random but consistent across views as they are caused by other objects. This is particularly true for contact occlusions, where part of an object remains occluded by another from all viewpoints. To evaluate the model's performance under such challenging conditions, 3D-consistent masks are required." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 313, + 325, + 554, + 421 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 325, + 554, + 421 + ], + "spans": [ + { + "bbox": [ + 313, + 325, + 554, + 421 + ], + "type": "text", + "content": "To efficiently generate such contact occlusion masks, we leverage 3D meshes during rendering. Starting from a randomly chosen triangle, we apply a random-walk strategy to iteratively select neighboring triangles, forming continuous occluded regions until the predefined mask ratio is met. By rendering these masked meshes using the same camera parameters as the RGB images, we ensure multi-view consistency in the generated occlusion masks (see Fig. 4)." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 430, + 515, + 443 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 430, + 515, + 443 + ], + "spans": [ + { + "bbox": [ + 313, + 430, + 515, + 443 + ], + "type": "text", + "content": "3.4. 
Reconstruction with Multi-view Input" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 313, + 448, + 555, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 448, + 555, + 628 + ], + "spans": [ + { + "bbox": [ + 313, + 448, + 555, + 628 + ], + "type": "text", + "content": "Since our flow-based model performs multiple denoising steps and does not require known camera poses for input views, Amodal3R can accept multi-view reference images " + }, + { + "bbox": [ + 313, + 448, + 555, + 628 + ], + "type": "inline_equation", + "content": "\\mathcal{X} = \\{x_i\\}_{i=1}^N" + }, + { + "bbox": [ + 313, + 448, + 555, + 628 + ], + "type": "text", + "content": " as conditioning inputs at different steps of the denoising process. While multi-view conditioning naturally improves reconstruction performance, a potential issue with such multi-view conditioning is that an image used earlier in the denoising process has a stronger influence on the final 3D geometry. This is because early denoising steps establish the coarse geometry, whereas later steps refine higher-frequency details [33, 34]. Therefore, we prioritize input images based on their visibility. Specifically, when experimenting with multi-view inputs, we sort the images in proportion to the object visibility " + }, + { + "bbox": [ + 313, + 448, + 555, + 628 + ], + "type": "inline_equation", + "content": "|M_{\\mathrm{vis}}|" + }, + { + "bbox": [ + 313, + 448, + 555, + 628 + ], + "type": "text", + "content": ", ensuring that images with higher visibility are used as primary inputs." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 313, + 639, + 395, + 653 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 639, + 395, + 653 + ], + "spans": [ + { + "bbox": [ + 313, + 639, + 395, + 653 + ], + "type": "text", + "content": "4. Experiments" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 313, + 659, + 432, + 672 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 659, + 432, + 672 + ], + "spans": [ + { + "bbox": [ + 313, + 659, + 432, + 672 + ], + "type": "text", + "content": "4.1. Experiment Settings" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 313, + 677, + 554, + 714 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 677, + 554, + 714 + ], + "spans": [ + { + "bbox": [ + 313, + 677, + 554, + 714 + ], + "type": "text", + "content": "Datasets. Amodal3R is trained on a combination of 3D synthetic dataset: 3D-FUTURE (9,472 objects [17]), ABO (4,485 objects [13]), and HSSD (6,670 objects [32]). Once" + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 57, + 70, + 555, + 198 + ], + "blocks": [ + { + "bbox": [ + 57, + 70, + 555, + 198 + ], + "lines": [ + { + "bbox": [ + 57, + 70, + 555, + 198 + ], + "spans": [ + { + "bbox": [ + 57, + 70, + 555, + 198 + ], + "type": "table", + "html": "
MethodV-num2D-CompFID ↓KID(%) ↓CLIP ↑P-FID ↓COV(%) ↑MMD(‰) ↓
GaussianAnything [37]1pix2gestalt [56]92.261.300.7434.6935.925.03
Real3D [28]1pix2gestalt [56]91.212.020.7523.9219.619.21
TRELLIS [81]1pix2gestalt [56]58.825.870.7626.4331.654.17
Amodal3R (Ours)1-30.640.350.817.6939.613.62
LaRa [6]4pix2gestalt [56]172.844.540.7066.3424.568.11
LaRa [6]4pix2gestalt [56]+MV97.532.630.7521.8026.218.61
TRELLIS [81]4pix2gestalt [56]65.696.920.7824.6432.334.26
TRELLIS [81]4pix2gestalt [56]+MV60.371.850.8319.6831.754.21
Amodal3R (Ours)4-26.270.220.845.0338.743.61
", + "image_path": "b29270b02390ef686f91f13023c3fe889c64b63a45444e98460ec0d9fa2347a8.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 57, + 267, + 555, + 395 + ], + "blocks": [ + { + "bbox": [ + 55, + 201, + 555, + 257 + ], + "lines": [ + { + "bbox": [ + 55, + 201, + 555, + 257 + ], + "spans": [ + { + "bbox": [ + 55, + 201, + 555, + 257 + ], + "type": "text", + "content": "Table 1. Amodal 3D reconstruction results on GSO [16]. Here, V-num denotes the number of input views, and 2D-Comp refers to the 2D completion method. For single-view image-to-3D, we first complete occluded objects using pix2gestalt [56] before passing them to various 3D models. For multi-view image-to-3D, we adopt two variants: 1) pix2gestalt [56] is applied independently on each view; 2) pix2gestalt " + }, + { + "bbox": [ + 55, + 201, + 555, + 257 + ], + "type": "inline_equation", + "content": "[56] + \\mathrm{MV}" + }, + { + "bbox": [ + 55, + 201, + 555, + 257 + ], + "type": "text", + "content": ": a single-view completion is generated first, followed by multi-view diffusion [63] to synthesize 4 views as inputs. Without bells and whistles, Amodal3R consistently outperforms state-of-the-art models across all evaluation metrics." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 57, + 267, + 555, + 395 + ], + "lines": [ + { + "bbox": [ + 57, + 267, + 555, + 395 + ], + "spans": [ + { + "bbox": [ + 57, + 267, + 555, + 395 + ], + "type": "table", + "html": "
MethodV-num2D-CompFID ↓KID(%) ↓CLIP ↑P-FID ↓COV(%) ↑MMD(‰) ↓
GaussianAnything [37]1pix2gestalt [56]57.171.220.8021.9733.567.23
Real3D [28]1pix2gestalt [56]59.921.630.7923.3124.359.53
TRELLIS [81]1pix2gestalt [56]43.056.830.8026.0426.286.87
Amodal3R (Ours)1-23.450.420.835.0037.095.89
LaRa [6]4pix2gestalt [56]123.523.610.7545.9127.899.67
LaRa [6]4pix2gestalt [56]+MV75.334.140.8013.0024.8210.93
TRELLIS [81]4pix2gestalt [56]46.348.770.8128.7625.357.13
TRELLIS [81]4pix2gestalt [56]+MV43.007.530.8124.4126.557.05
Amodal3R (Ours)4-20.930.500.853.7839.035.75
", + "image_path": "7785c1f05f18d792d32248c31ceb5b4f2262fd60cff1f3bb586b7cc7a502d695.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 66, + 398, + 542, + 410 + ], + "lines": [ + { + "bbox": [ + 66, + 398, + 542, + 410 + ], + "spans": [ + { + "bbox": [ + 66, + 398, + 542, + 410 + ], + "type": "text", + "content": "Table 2. Amodal 3D reconstruction results on Toys4K [69]. The experimental setting is the same to Tab. 1, except for the dataset." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 54, + 430, + 295, + 587 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 430, + 295, + 587 + ], + "spans": [ + { + "bbox": [ + 54, + 430, + 295, + 587 + ], + "type": "text", + "content": "Amodal3R is trained, we first assess its effectiveness on Toys4K (randomly sampling 1,500 objects [69]) and Google Scanned Objects (GSO) (1,030 objects [16]), which are excluded from our training data for our and the baseline model. During inference, a 3D-consistent mask occludes each input object, and each view is augmented with additional randomly generated occlusion areas. This ensures that the model cannot directly extract the region required to be completed from the occlusion regions. To further assess the out-of-distribution generalization ability in practical applications, we also evaluate all models on the 3D scene dataset Replica [70] as well as on in-the-wild images. The training and evaluation dataset will be released." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 54, + 594, + 295, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 54, + 594, + 295, + 715 + ], + "spans": [ + { + "bbox": [ + 54, + 594, + 295, + 715 + ], + "type": "text", + "content": "Metrics. To measure the quality of the rendered images, we use the Fréchet Inception Distance (FID) [21] and the Kernel Inception Distance (KID) [3]. To measure the quality of the reconstructed 3D geometry, we use the Coverage Score (COV), the Point cloud FID (P-FID) [51], and the Minimum Matching Distance (MMD) using the Chamfer Distance (CD). COV measures the diversity of the results and P-FID and MMD the quality of the 3D reconstruction. We also use the CLIP score [59] to evaluate the consistency between each pair generated and ground-truth objects." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 313, + 430, + 555, + 528 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 430, + 555, + 528 + ], + "spans": [ + { + "bbox": [ + 313, + 430, + 555, + 528 + ], + "type": "text", + "content": "Baselines. Most 3D generative models are trained on complete object inputs. To ensure fair comparisons, we complete the partially visible 2D objects before passing them to 3D generators. Sepecifically, we use pix2gestalt [56], a state-of-the-art 2D amodal completion network. Using this protocol, we compare Amodal3R to state-of-the-art methods such as TRELLIS [81], Real3D [28], GaussianAnything [37]ICLR'25, and LaRa [6]ECCV'24." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 313, + 528, + 556, + 601 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 528, + 556, + 601 + ], + "spans": [ + { + "bbox": [ + 313, + 528, + 556, + 601 + ], + "type": "text", + "content": "Implementation Details. Amodal3R is trained on 4 A100 GPUs (40G) for 20K steps with a batch size of 16, taking approximately a day. 
Following TRELLIS [81], we implement classifier-free guidance (CFG [22]) with a drop rate of 0.1 and AdamW [45] optimizer with a learning rate of 1e-4. More details are provided in the supplementary material." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 313, + 610, + 400, + 623 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 610, + 400, + 623 + ], + "spans": [ + { + "bbox": [ + 313, + 610, + 400, + 623 + ], + "type": "text", + "content": "4.2. Main Results" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 629, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 629, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 629, + 555, + 715 + ], + "type": "text", + "content": "Quantitative Results. We compare Amodal3R to the state-of-the-art for amodal 3D reconstruction in Tabs. 1 and 2. Amodal3R significantly outperforms the baselines across all metrics in both single- and multi-view 3D reconstruction, demonstrating its effectiveness. Interestingly, baselines that use multiple but potentially inconsistently-completed views (\"4 V-num + pix2gestalt\") are worse than" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 733, + 309, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 309, + 742 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 309, + 742 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 61, + 70, + 108, + 150 + ], + "blocks": [ + { + "bbox": [ + 61, + 70, + 108, + 150 + ], + "lines": [ + { + "bbox": [ + 61, + 70, + 108, + 150 + ], + "spans": [ + { + "bbox": [ + 61, + 70, + 108, + 150 + ], + "type": "image", + "image_path": "4b8d63cfeb365c168c06a219dc90ea157ceb77c53fd03485af894f8574f4b1fa.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 79, + 152, + 97, + 160 + ], + "lines": [ + { + "bbox": [ + 79, + 152, + 97, + 160 + ], + "spans": [ + { + "bbox": [ + 79, + 152, + 97, + 160 + ], + "type": "text", + "content": "Input" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 115, + 72, + 148, + 149 + ], + "blocks": [ + { + "bbox": [ + 115, + 72, + 148, + 149 + ], + "lines": [ + { + "bbox": [ + 115, + 72, + 148, + 149 + ], + "spans": [ + { + "bbox": [ + 115, + 72, + 148, + 149 + ], + "type": "image", + "image_path": "344c6fd2fe665ca0ce9a8f9179dbb7878728c433a2dd43bd58d637e4948f50cf.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 109, + 152, + 160, + 161 + ], + "lines": [ + { + "bbox": [ + 109, + 152, + 160, + 161 + ], + "spans": [ + { + "bbox": [ + 109, + 152, + 160, + 161 + ], + "type": "text", + "content": "pix2gestalt [56]" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 161, + 72, + 202, + 148 + ], + "blocks": [ + { + "bbox": [ + 161, + 72, + 202, + 148 + ], + "lines": [ + { + "bbox": [ + 161, + 72, + 202, + 148 + ], + "spans": [ + { + "bbox": [ + 161, + 72, + 202, + 148 + ], + "type": "image", + "image_path": "16e94e10bf7663bc001757c6ebe65f1b7b94567cdffeb0398ac46bf6e56d3e57.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 168, + 152, + 244, + 161 + ], + "lines": [ + { + 
"bbox": [ + 168, + 152, + 244, + 161 + ], + "spans": [ + { + "bbox": [ + 168, + 152, + 244, + 161 + ], + "type": "text", + "content": "GaussianAnything [37]" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 205, + 72, + 246, + 148 + ], + "blocks": [ + { + "bbox": [ + 205, + 72, + 246, + 148 + ], + "lines": [ + { + "bbox": [ + 205, + 72, + 246, + 148 + ], + "spans": [ + { + "bbox": [ + 205, + 72, + 246, + 148 + ], + "type": "image", + "image_path": "5a1da316ffe4ed72d7be02956ca4e03b0d234d5db15a8162959d54e1d1e6dfe8.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "bbox": [ + 205, + 152, + 244, + 161 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 205, + 152, + 244, + 161 + ], + "spans": [ + { + "bbox": [ + 205, + 152, + 244, + 161 + ], + "type": "text", + "content": "nything [37]" + } + ] + } + ], + "index": 7, + "type": "text" + }, + { + "type": "image", + "bbox": [ + 258, + 72, + 294, + 148 + ], + "blocks": [ + { + "bbox": [ + 258, + 72, + 294, + 148 + ], + "lines": [ + { + "bbox": [ + 258, + 72, + 294, + 148 + ], + "spans": [ + { + "bbox": [ + 258, + 72, + 294, + 148 + ], + "type": "image", + "image_path": "ddc7a6df708c27a38eaf717a8aec85c34b164d6a97f8e0d778beae5ec5f9f310.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 281, + 152, + 321, + 160 + ], + "lines": [ + { + "bbox": [ + 281, + 152, + 321, + 160 + ], + "spans": [ + { + "bbox": [ + 281, + 152, + 321, + 160 + ], + "type": "text", + "content": "Real3D [28]" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 302, + 72, + 340, + 148 + ], + "blocks": [ + { + "bbox": [ + 302, + 72, + 340, + 148 + ], + "lines": [ + { + "bbox": [ + 302, + 72, + 340, + 148 + ], + "spans": [ + { + "bbox": [ + 302, + 72, + 340, + 148 + ], + "type": "image", + "image_path": "53065c4743828b25fc578e7b842aa49005b69b45655eb966cf5d59c062cdae21.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 359, + 72, + 388, + 147 + ], + "blocks": [ + { + "bbox": [ + 359, + 72, + 388, + 147 + ], + "lines": [ + { + "bbox": [ + 359, + 72, + 388, + 147 + ], + "spans": [ + { + "bbox": [ + 359, + 72, + 388, + 147 + ], + "type": "image", + "image_path": "3bca9cdae6c37dd6588bf565d6e9080fbded9ce3398cf90ae1e0e62214b9d844.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 375, + 152, + 423, + 160 + ], + "lines": [ + { + "bbox": [ + 375, + 152, + 423, + 160 + ], + "spans": [ + { + "bbox": [ + 375, + 152, + 423, + 160 + ], + "type": "text", + "content": "TRELLIS [81]" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 394, + 72, + 434, + 147 + ], + "blocks": [ + { + "bbox": [ + 394, + 72, + 434, + 147 + ], + "lines": [ + { + "bbox": [ + 394, + 72, + 434, + 147 + ], + "spans": [ + { + "bbox": [ + 394, + 72, + 434, + 147 + ], + "type": "image", + "image_path": "4aae53f2a0e5edd86fb299ae111186e8b48fbc9ea078f0c202ed0c0010334b89.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 453, + 72, + 492, + 148 + ], + "blocks": [ + { + "bbox": [ + 453, + 72, + 492, + 148 + ], + "lines": [ + { + "bbox": [ + 453, + 72, + 492, + 148 + 
], + "spans": [ + { + "bbox": [ + 453, + 72, + 492, + 148 + ], + "type": "image", + "image_path": "ff797a21c781d899dc09562ba90d6384c222e9ad2c7f5bfd1d718e730abed118.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 462, + 152, + 522, + 160 + ], + "lines": [ + { + "bbox": [ + 462, + 152, + 522, + 160 + ], + "spans": [ + { + "bbox": [ + 462, + 152, + 522, + 160 + ], + "type": "text", + "content": "Amodal3R (Ours)" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_caption" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 501, + 72, + 539, + 148 + ], + "blocks": [ + { + "bbox": [ + 501, + 72, + 539, + 148 + ], + "lines": [ + { + "bbox": [ + 501, + 72, + 539, + 148 + ], + "spans": [ + { + "bbox": [ + 501, + 72, + 539, + 148 + ], + "type": "image", + "image_path": "24633556114d5a1d318f35be6d2097f51e2e72d27af2b872c7713a407b004ee8.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 56, + 196, + 119, + 278 + ], + "blocks": [ + { + "bbox": [ + 55, + 164, + 555, + 187 + ], + "lines": [ + { + "bbox": [ + 55, + 164, + 555, + 187 + ], + "spans": [ + { + "bbox": [ + 55, + 164, + 555, + 187 + ], + "type": "text", + "content": "Figure 5. Single-view amodal 3D reconstruction. The occlusion regions are shown in black and the visible regions are highlighted with red outlines. More examples are provided in supplementary material Fig. C.4." + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 56, + 196, + 119, + 278 + ], + "lines": [ + { + "bbox": [ + 56, + 196, + 119, + 278 + ], + "spans": [ + { + "bbox": [ + 56, + 196, + 119, + 278 + ], + "type": "image", + "image_path": "fc8f85350f0e48cf02df3b0419da21402ace9dd34d2f514b8e2615466d2415fd.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 79, + 279, + 97, + 288 + ], + "lines": [ + { + "bbox": [ + 79, + 279, + 97, + 288 + ], + "spans": [ + { + "bbox": [ + 79, + 279, + 97, + 288 + ], + "type": "text", + "content": "Input" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 55, + 292, + 555, + 326 + ], + "lines": [ + { + "bbox": [ + 55, + 292, + 555, + 326 + ], + "spans": [ + { + "bbox": [ + 55, + 292, + 555, + 326 + ], + "type": "text", + "content": "Figure 6. Multi-view amodal 3D reconstruction. The above results are reconstructed using 4 occluded input views. Due to limited space, we present only the best results for LaRa and TRELLIS under the \"pix2gestalt+MV\" setting. We apply 3d-consistent mask and random extended occlusion areas. More examples are provided in supplementary material Fig. C.5." 
+ } + ] + } + ], + "index": 38, + "angle": 0, + "type": "image_caption" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 130, + 199, + 171, + 230 + ], + "blocks": [ + { + "bbox": [ + 130, + 199, + 171, + 230 + ], + "lines": [ + { + "bbox": [ + 130, + 199, + 171, + 230 + ], + "spans": [ + { + "bbox": [ + 130, + 199, + 171, + 230 + ], + "type": "image", + "image_path": "f7805e3946002aa68a40022a399e5caa06e504fa500c83933080db55dfd6dab4.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 136, + 240, + 156, + 273 + ], + "blocks": [ + { + "bbox": [ + 136, + 240, + 156, + 273 + ], + "lines": [ + { + "bbox": [ + 136, + 240, + 156, + 273 + ], + "spans": [ + { + "bbox": [ + 136, + 240, + 156, + 273 + ], + "type": "image", + "image_path": "5339ec554482daf2e2b186da5f98bd2ba1dbad05ab5f4ea3186b550e61a5e906.jpg" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 124, + 279, + 175, + 289 + ], + "lines": [ + { + "bbox": [ + 124, + 279, + 175, + 289 + ], + "spans": [ + { + "bbox": [ + 124, + 279, + 175, + 289 + ], + "type": "text", + "content": "pix2gestalt [56]" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_caption" + } + ], + "index": 21 + }, + { + "type": "image", + "bbox": [ + 192, + 200, + 236, + 229 + ], + "blocks": [ + { + "bbox": [ + 192, + 200, + 236, + 229 + ], + "lines": [ + { + "bbox": [ + 192, + 200, + 236, + 229 + ], + "spans": [ + { + "bbox": [ + 192, + 200, + 236, + 229 + ], + "type": "image", + "image_path": "a50ea849adc47903143abd5e8b28edbcaeb90432984776e24f975dad521fb428.jpg" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_body" + } + ], + "index": 23 + }, + { + "type": "image", + "bbox": [ + 191, + 238, + 233, + 270 + ], + "blocks": [ + { + "bbox": [ + 191, + 238, + 233, + 270 + ], + "lines": [ + { + "bbox": [ + 191, + 238, + 233, + 270 + ], + "spans": [ + { + "bbox": [ + 191, + 238, + 233, + 270 + ], + "type": "image", + "image_path": "b5e9f37b71f7f904705327927ff5b42973efda9d537d57fd5fc2f8b77fdd81d0.jpg" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 228, + 279, + 258, + 288 + ], + "lines": [ + { + "bbox": [ + 228, + 279, + 258, + 288 + ], + "spans": [ + { + "bbox": [ + 228, + 279, + 258, + 288 + ], + "type": "text", + "content": "LaRa [6]" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_caption" + } + ], + "index": 24 + }, + { + "type": "image", + "bbox": [ + 245, + 200, + 288, + 227 + ], + "blocks": [ + { + "bbox": [ + 245, + 200, + 288, + 227 + ], + "lines": [ + { + "bbox": [ + 245, + 200, + 288, + 227 + ], + "spans": [ + { + "bbox": [ + 245, + 200, + 288, + 227 + ], + "type": "image", + "image_path": "eb1459a55bc21480741609bfe8d472593a3382ef284c65f29b396b9a855330cc.jpg" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_body" + } + ], + "index": 26 + }, + { + "type": "image", + "bbox": [ + 245, + 240, + 288, + 269 + ], + "blocks": [ + { + "bbox": [ + 245, + 240, + 288, + 269 + ], + "lines": [ + { + "bbox": [ + 245, + 240, + 288, + 269 + ], + "spans": [ + { + "bbox": [ + 245, + 240, + 288, + 269 + ], + "type": "image", + "image_path": "08c973d3912e5936e18998074a07045d201ec77a25069277dd7ae43a3791ac01.jpg" + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_body" + } + ], + "index": 27 + }, + { + "type": "image", + "bbox": [ + 315, + 201, + 359, + 234 + ], + "blocks": [ + { + "bbox": [ + 315, + 201, + 359, + 234 + ], + "lines": [ + 
{ + "bbox": [ + 315, + 201, + 359, + 234 + ], + "spans": [ + { + "bbox": [ + 315, + 201, + 359, + 234 + ], + "type": "image", + "image_path": "c3ed58be125412bbae5a4b13f4510c85405227759cdedbec02d084f4d8e353fb.jpg" + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_body" + } + ], + "index": 28 + }, + { + "type": "image", + "bbox": [ + 315, + 237, + 358, + 270 + ], + "blocks": [ + { + "bbox": [ + 315, + 237, + 358, + 270 + ], + "lines": [ + { + "bbox": [ + 315, + 237, + 358, + 270 + ], + "spans": [ + { + "bbox": [ + 315, + 237, + 358, + 270 + ], + "type": "image", + "image_path": "0ae607b0ed818e015cd57d7087130d6ce02033f15d780990f3987937fd9ce03c.jpg" + } + ] + } + ], + "index": 29, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 343, + 279, + 391, + 288 + ], + "lines": [ + { + "bbox": [ + 343, + 279, + 391, + 288 + ], + "spans": [ + { + "bbox": [ + 343, + 279, + 391, + 288 + ], + "type": "text", + "content": "TRELLIS [81]" + } + ] + } + ], + "index": 32, + "angle": 0, + "type": "image_caption" + } + ], + "index": 29 + }, + { + "type": "image", + "bbox": [ + 372, + 201, + 414, + 232 + ], + "blocks": [ + { + "bbox": [ + 372, + 201, + 414, + 232 + ], + "lines": [ + { + "bbox": [ + 372, + 201, + 414, + 232 + ], + "spans": [ + { + "bbox": [ + 372, + 201, + 414, + 232 + ], + "type": "image", + "image_path": "41d1151c236a7cec6d230fea72a0f2caa1d9a5274fe82667022ece097f847a36.jpg" + } + ] + } + ], + "index": 30, + "angle": 0, + "type": "image_body" + } + ], + "index": 30 + }, + { + "type": "image", + "bbox": [ + 372, + 240, + 415, + 269 + ], + "blocks": [ + { + "bbox": [ + 372, + 240, + 415, + 269 + ], + "lines": [ + { + "bbox": [ + 372, + 240, + 415, + 269 + ], + "spans": [ + { + "bbox": [ + 372, + 240, + 415, + 269 + ], + "type": "image", + "image_path": "76f887eb88ebf2c6160d9c327f4bf15d0edfd97f184077642698e01d28a2a6d7.jpg" + } + ] + } + ], + "index": 31, + "angle": 0, + "type": "image_body" + } + ], + "index": 31 + }, + { + "type": "image", + "bbox": [ + 438, + 200, + 484, + 234 + ], + "blocks": [ + { + "bbox": [ + 438, + 200, + 484, + 234 + ], + "lines": [ + { + "bbox": [ + 438, + 200, + 484, + 234 + ], + "spans": [ + { + "bbox": [ + 438, + 200, + 484, + 234 + ], + "type": "image", + "image_path": "8b6750b249027312a8ede14ba56685d18050f471e8c5276fb89b63ce7f37a3ff.jpg" + } + ] + } + ], + "index": 33, + "angle": 0, + "type": "image_body" + } + ], + "index": 33 + }, + { + "type": "image", + "bbox": [ + 438, + 235, + 485, + 274 + ], + "blocks": [ + { + "bbox": [ + 438, + 235, + 485, + 274 + ], + "lines": [ + { + "bbox": [ + 438, + 235, + 485, + 274 + ], + "spans": [ + { + "bbox": [ + 438, + 235, + 485, + 274 + ], + "type": "image", + "image_path": "2214f9535709da7a66e819a76fed68d43724677f6fb237c4c0f16ae40cce9c3b.jpg" + } + ] + } + ], + "index": 34, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 463, + 279, + 522, + 288 + ], + "lines": [ + { + "bbox": [ + 463, + 279, + 522, + 288 + ], + "spans": [ + { + "bbox": [ + 463, + 279, + 522, + 288 + ], + "type": "text", + "content": "Amodal3R (Ours)" + } + ] + } + ], + "index": 37, + "angle": 0, + "type": "image_caption" + } + ], + "index": 34 + }, + { + "type": "image", + "bbox": [ + 500, + 201, + 541, + 234 + ], + "blocks": [ + { + "bbox": [ + 500, + 201, + 541, + 234 + ], + "lines": [ + { + "bbox": [ + 500, + 201, + 541, + 234 + ], + "spans": [ + { + "bbox": [ + 500, + 201, + 541, + 234 + ], + "type": "image", + "image_path": "af774c2718c4ddc1604bb1329c96d6898f06b93c429021097ad4f2c16fd21cee.jpg" + } + ] + } + ], + "index": 
35, + "angle": 0, + "type": "image_body" + } + ], + "index": 35 + }, + { + "type": "image", + "bbox": [ + 498, + 237, + 544, + 271 + ], + "blocks": [ + { + "bbox": [ + 498, + 237, + 544, + 271 + ], + "lines": [ + { + "bbox": [ + 498, + 237, + 544, + 271 + ], + "spans": [ + { + "bbox": [ + 498, + 237, + 544, + 271 + ], + "type": "image", + "image_path": "a42058f2e19492d3aa48221c2c95e3d28f0636c89d280016eca2cd55950ba8ec.jpg" + } + ] + } + ], + "index": 36, + "angle": 0, + "type": "image_body" + } + ], + "index": 36 + }, + { + "type": "image", + "bbox": [ + 56, + 344, + 98, + 381 + ], + "blocks": [ + { + "bbox": [ + 56, + 344, + 98, + 381 + ], + "lines": [ + { + "bbox": [ + 56, + 344, + 98, + 381 + ], + "spans": [ + { + "bbox": [ + 56, + 344, + 98, + 381 + ], + "type": "image", + "image_path": "dbff7913cd8a313d4d155aa78cbab97d8e0023cfcee1bef29ec045ca5ba7f12b.jpg" + } + ] + } + ], + "index": 39, + "angle": 0, + "type": "image_body" + } + ], + "index": 39 + }, + { + "type": "image", + "bbox": [ + 57, + 381, + 96, + 422 + ], + "blocks": [ + { + "bbox": [ + 57, + 381, + 96, + 422 + ], + "lines": [ + { + "bbox": [ + 57, + 381, + 96, + 422 + ], + "spans": [ + { + "bbox": [ + 57, + 381, + 96, + 422 + ], + "type": "image", + "image_path": "170a8c5842860fb67710a18f8d7e53ca8d1ee22168712426fc88da4a3c3f7e2d.jpg" + } + ] + } + ], + "index": 40, + "angle": 0, + "type": "image_body" + } + ], + "index": 40 + }, + { + "type": "image", + "bbox": [ + 58, + 422, + 97, + 460 + ], + "blocks": [ + { + "bbox": [ + 58, + 422, + 97, + 460 + ], + "lines": [ + { + "bbox": [ + 58, + 422, + 97, + 460 + ], + "spans": [ + { + "bbox": [ + 58, + 422, + 97, + 460 + ], + "type": "image", + "image_path": "35a7947758f0102a26fee60a36285b0655e2f366cddf71340a6b72a4c2391df8.jpg" + } + ] + } + ], + "index": 41, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 65, + 463, + 83, + 472 + ], + "lines": [ + { + "bbox": [ + 65, + 463, + 83, + 472 + ], + "spans": [ + { + "bbox": [ + 65, + 463, + 83, + 472 + ], + "type": "text", + "content": "Input" + } + ] + } + ], + "index": 42, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 97, + 458, + 142, + 474 + ], + "lines": [ + { + "bbox": [ + 97, + 458, + 142, + 474 + ], + "spans": [ + { + "bbox": [ + 97, + 458, + 142, + 474 + ], + "type": "text", + "content": "' Naive concatenation" + } + ] + } + ], + "index": 46, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 55, + 482, + 296, + 526 + ], + "lines": [ + { + "bbox": [ + 55, + 482, + 296, + 526 + ], + "spans": [ + { + "bbox": [ + 55, + 482, + 296, + 526 + ], + "type": "text", + "content": "Figure 7. Ablation examples. Naive concatenation fails to reconstruct complete shape and appearance. Mask-weighted attention alone extends geometry into background regions, while occlusion-aware attention alone cannot guarantee photorealistic appearance." 
+ } + ] + } + ], + "index": 59, + "angle": 0, + "type": "image_caption" + } + ], + "index": 41 + }, + { + "type": "image", + "bbox": [ + 106, + 350, + 138, + 382 + ], + "blocks": [ + { + "bbox": [ + 106, + 350, + 138, + 382 + ], + "lines": [ + { + "bbox": [ + 106, + 350, + 138, + 382 + ], + "spans": [ + { + "bbox": [ + 106, + 350, + 138, + 382 + ], + "type": "image", + "image_path": "7c81d0cddabd718eb9edf4e8e3b2a8a16407800d75d548cd2907a4c38bd9bea0.jpg" + } + ] + } + ], + "index": 43, + "angle": 0, + "type": "image_body" + } + ], + "index": 43 + }, + { + "type": "image", + "bbox": [ + 101, + 393, + 139, + 415 + ], + "blocks": [ + { + "bbox": [ + 101, + 393, + 139, + 415 + ], + "lines": [ + { + "bbox": [ + 101, + 393, + 139, + 415 + ], + "spans": [ + { + "bbox": [ + 101, + 393, + 139, + 415 + ], + "type": "image", + "image_path": "4ca588e11862304c6a919e4a6f6fa6e8bbaae77b1fe6d06e08bd164094c6c0e9.jpg" + } + ] + } + ], + "index": 44, + "angle": 0, + "type": "image_body" + } + ], + "index": 44 + }, + { + "type": "image", + "bbox": [ + 106, + 426, + 134, + 453 + ], + "blocks": [ + { + "bbox": [ + 106, + 426, + 134, + 453 + ], + "lines": [ + { + "bbox": [ + 106, + 426, + 134, + 453 + ], + "spans": [ + { + "bbox": [ + 106, + 426, + 134, + 453 + ], + "type": "image", + "image_path": "28ce4b6da60edc15f838a61563cd751ed3180e19828fe1f64678481295a8197b.jpg" + } + ] + } + ], + "index": 45, + "angle": 0, + "type": "image_body" + } + ], + "index": 45 + }, + { + "type": "image", + "bbox": [ + 146, + 344, + 187, + 388 + ], + "blocks": [ + { + "bbox": [ + 146, + 344, + 187, + 388 + ], + "lines": [ + { + "bbox": [ + 146, + 344, + 187, + 388 + ], + "spans": [ + { + "bbox": [ + 146, + 344, + 187, + 388 + ], + "type": "image", + "image_path": "e87e5b38346fb07edb29b77cd1f21a4dd50ca17335e4996826642195b95ff208.jpg" + } + ] + } + ], + "index": 47, + "angle": 0, + "type": "image_body" + } + ], + "index": 47 + }, + { + "type": "image", + "bbox": [ + 146, + 390, + 192, + 417 + ], + "blocks": [ + { + "bbox": [ + 146, + 390, + 192, + 417 + ], + "lines": [ + { + "bbox": [ + 146, + 390, + 192, + 417 + ], + "spans": [ + { + "bbox": [ + 146, + 390, + 192, + 417 + ], + "type": "image", + "image_path": "33fe9e4da816b749ece4560cd589d4bea0c510d1f66b97c80af08c91a5959dc2.jpg" + } + ] + } + ], + "index": 48, + "angle": 0, + "type": "image_body" + } + ], + "index": 48 + }, + { + "type": "image", + "bbox": [ + 156, + 426, + 183, + 453 + ], + "blocks": [ + { + "bbox": [ + 156, + 426, + 183, + 453 + ], + "lines": [ + { + "bbox": [ + 156, + 426, + 183, + 453 + ], + "spans": [ + { + "bbox": [ + 156, + 426, + 183, + 453 + ], + "type": "image", + "image_path": "282a12ceadacfc33dc7090398b24cbde1b529ccb2a9de736439c241daada2ff2.jpg" + } + ] + } + ], + "index": 49, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 146, + 456, + 190, + 474 + ], + "lines": [ + { + "bbox": [ + 146, + 456, + 190, + 474 + ], + "spans": [ + { + "bbox": [ + 146, + 456, + 190, + 474 + ], + "type": "text", + "content": "w/only " + }, + { + "bbox": [ + 146, + 456, + 190, + 474 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{vis}}" + }, + { + "bbox": [ + 146, + 456, + 190, + 474 + ], + "type": "text", + "content": " attention" + } + ] + } + ], + "index": 50, + "angle": 0, + "type": "image_caption" + } + ], + "index": 49 + }, + { + "type": "image", + "bbox": [ + 204, + 350, + 236, + 384 + ], + "blocks": [ + { + "bbox": [ + 204, + 350, + 236, + 384 + ], + "lines": [ + { + "bbox": [ + 204, + 350, + 236, + 384 + ], + "spans": [ + { + "bbox": [ + 204, + 
350, + 236, + 384 + ], + "type": "image", + "image_path": "a475ad7bf12abea80cb0ba9edd5f93c6aa0826811af10dc901c047e0cd304868.jpg" + } + ] + } + ], + "index": 51, + "angle": 0, + "type": "image_body" + } + ], + "index": 51 + }, + { + "type": "image", + "bbox": [ + 202, + 394, + 237, + 415 + ], + "blocks": [ + { + "bbox": [ + 202, + 394, + 237, + 415 + ], + "lines": [ + { + "bbox": [ + 202, + 394, + 237, + 415 + ], + "spans": [ + { + "bbox": [ + 202, + 394, + 237, + 415 + ], + "type": "image", + "image_path": "45e13209ef78b48205eadcb06c0615e6bc18a41dd4a749ac5ded2f055c6e65f0.jpg" + } + ] + } + ], + "index": 52, + "angle": 0, + "type": "image_body" + } + ], + "index": 52 + }, + { + "type": "image", + "bbox": [ + 207, + 426, + 236, + 456 + ], + "blocks": [ + { + "bbox": [ + 207, + 426, + 236, + 456 + ], + "lines": [ + { + "bbox": [ + 207, + 426, + 236, + 456 + ], + "spans": [ + { + "bbox": [ + 207, + 426, + 236, + 456 + ], + "type": "image", + "image_path": "e3167b745021840cfe0c3aeba487e726a61f5d1ee451d65b929016dff2e6ffc9.jpg" + } + ] + } + ], + "index": 53, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 197, + 457, + 242, + 474 + ], + "lines": [ + { + "bbox": [ + 197, + 457, + 242, + 474 + ], + "spans": [ + { + "bbox": [ + 197, + 457, + 242, + 474 + ], + "type": "text", + "content": "w/only " + }, + { + "bbox": [ + 197, + 457, + 242, + 474 + ], + "type": "inline_equation", + "content": "M_{\\mathrm{occ}}" + }, + { + "bbox": [ + 197, + 457, + 242, + 474 + ], + "type": "text", + "content": " attention" + } + ] + } + ], + "index": 54, + "angle": 0, + "type": "image_caption" + } + ], + "index": 53 + }, + { + "type": "image", + "bbox": [ + 253, + 349, + 286, + 384 + ], + "blocks": [ + { + "bbox": [ + 253, + 349, + 286, + 384 + ], + "lines": [ + { + "bbox": [ + 253, + 349, + 286, + 384 + ], + "spans": [ + { + "bbox": [ + 253, + 349, + 286, + 384 + ], + "type": "image", + "image_path": "77d9f6728e5665e1306318e198443c8a1031444766ad537fd7d9d609d84da3dd.jpg" + } + ] + } + ], + "index": 55, + "angle": 0, + "type": "image_body" + } + ], + "index": 55 + }, + { + "type": "image", + "bbox": [ + 253, + 392, + 288, + 415 + ], + "blocks": [ + { + "bbox": [ + 253, + 392, + 288, + 415 + ], + "lines": [ + { + "bbox": [ + 253, + 392, + 288, + 415 + ], + "spans": [ + { + "bbox": [ + 253, + 392, + 288, + 415 + ], + "type": "image", + "image_path": "0292327dff33db77e2f43cfdcf0aca252b42b48e115c1bbff578282a1e557d98.jpg" + } + ] + } + ], + "index": 56, + "angle": 0, + "type": "image_body" + } + ], + "index": 56 + }, + { + "type": "image", + "bbox": [ + 258, + 426, + 283, + 454 + ], + "blocks": [ + { + "bbox": [ + 258, + 426, + 283, + 454 + ], + "lines": [ + { + "bbox": [ + 258, + 426, + 283, + 454 + ], + "spans": [ + { + "bbox": [ + 258, + 426, + 283, + 454 + ], + "type": "image", + "image_path": "40ac8e5cd597b91c59eb1ea0902c1c0bc39caba18c756f0f933280933cf02a9c.jpg" + } + ] + } + ], + "index": 57, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 258, + 457, + 279, + 474 + ], + "lines": [ + { + "bbox": [ + 258, + 457, + 279, + 474 + ], + "spans": [ + { + "bbox": [ + 258, + 457, + 279, + 474 + ], + "type": "text", + "content": "Full model" + } + ] + } + ], + "index": 58, + "angle": 0, + "type": "image_caption" + } + ], + "index": 57 + }, + { + "bbox": [ + 55, + 554, + 295, + 626 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 554, + 295, + 626 + ], + "spans": [ + { + "bbox": [ + 55, + 554, + 295, + 626 + ], + "type": "text", + "content": "using a single completed view (\"1 
V-num + pix2gestalt\"). This shows that inconsistent 2D completion does confuse reconstruction models to the point that using a single view is preferable (Sec. 2). This issue does not affect Amodal3R as it does not rely on 2D completion; in our case, utilizing more occluded views does improve the results as expected." + } + ] + } + ], + "index": 60 + }, + { + "bbox": [ + 55, + 629, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 629, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 629, + 296, + 713 + ], + "type": "text", + "content": "Qualitative Results. The qualitative results are shown in Figs. 5 and 6 and in supplementary material Figs. C.4 and C.5. For all baselines, pix2gestalt is first applied for 2D amodal completion (second column), and the completed images are passed to each baseline image-to-3D model. Amodal3R produces high-quality 3D assets even under challenging conditions in both single-view and multi-view" + } + ] + } + ], + "index": 61 + }, + { + "type": "table", + "bbox": [ + 316, + 345, + 554, + 399 + ], + "blocks": [ + { + "bbox": [ + 316, + 345, + 554, + 399 + ], + "lines": [ + { + "bbox": [ + 316, + 345, + 554, + 399 + ], + "spans": [ + { + "bbox": [ + 316, + 345, + 554, + 399 + ], + "type": "table", + "html": "
MethodFID ↓KID(%) ↓COV(%) ↑MMD(‰) ↓
naive conditioning31.960.4937.963.61
w/ only mask-weighted attention30.530.3836.903.69
w/ only occlusion-aware layer31.770.5740.193.51
full model (Ours)30.640.3539.613.62
", + "image_path": "c24c5b76fb282ec366c012952b591f859bcb5ba2288c1d1aee8b47f6b51fed93.jpg" + } + ] + } + ], + "index": 62, + "angle": 0, + "type": "table_body" + } + ], + "index": 62 + }, + { + "bbox": [ + 314, + 403, + 554, + 425 + ], + "lines": [ + { + "bbox": [ + 314, + 403, + 554, + 425 + ], + "spans": [ + { + "bbox": [ + 314, + 403, + 554, + 425 + ], + "type": "text", + "content": "Table 3. Ablations on different mask conditioning designs. The Best and the second best results are highlighted." + } + ] + } + ], + "index": 63, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 313, + 447, + 555, + 613 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 447, + 555, + 613 + ], + "spans": [ + { + "bbox": [ + 313, + 447, + 555, + 613 + ], + "type": "text", + "content": "image-to-3D tasks. In contrast, 2D amodal completions inconsistencies accumulate as more views are added, particularly when pix2gestalt is more uncertain, confusing the reconstructor models downstream. For instance, in the giraffe example in Fig. 5, the pix2gestalt completion fails to capture the overall structure of the 3D object correctly, which in turn leads the 3D generator models to misinterpret it as a woodstick-like shape. In contrast, Amodal3R reconstructs the 3D geometry and appearance well, with good alignment to the occluded inputs. These findings highlight not only the effectiveness of Amodal3R but also the advantage of completing objects while reconstructing them, which avoids relying on monocular 2D completion models that may introduce inconsistencies." + } + ] + } + ], + "index": 64 + }, + { + "bbox": [ + 313, + 623, + 408, + 635 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 623, + 408, + 635 + ], + "spans": [ + { + "bbox": [ + 313, + 623, + 408, + 635 + ], + "type": "text", + "content": "4.3. Ablation Study" + } + ] + } + ], + "index": 65 + }, + { + "bbox": [ + 313, + 641, + 554, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 641, + 554, + 689 + ], + "spans": [ + { + "bbox": [ + 313, + 641, + 554, + 689 + ], + "type": "text", + "content": "We conducted several ablations to study the impact of the different components of Amodal3R and report the results in Tab. 3 and Fig. 7. For these experiments, we test single-view image-to-3D reconstruction on the GSO dataset." + } + ] + } + ], + "index": 66 + }, + { + "bbox": [ + 313, + 689, + 554, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 689, + 554, + 712 + ], + "spans": [ + { + "bbox": [ + 313, + 689, + 554, + 712 + ], + "type": "text", + "content": "Naive Conditioning. 
We first evaluated a version of the model that still conditions the reconstruction on the visi" + } + ] + } + ], + "index": 67 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 68 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 62, + 69, + 116, + 227 + ], + "blocks": [ + { + "bbox": [ + 62, + 69, + 116, + 227 + ], + "lines": [ + { + "bbox": [ + 62, + 69, + 116, + 227 + ], + "spans": [ + { + "bbox": [ + 62, + 69, + 116, + 227 + ], + "type": "image", + "image_path": "200397b453ee819ae7dd4f2ea646793e770a7310034f8a2f32d950b559d1aaad.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 88, + 227, + 143, + 236 + ], + "lines": [ + { + "bbox": [ + 88, + 227, + 143, + 236 + ], + "spans": [ + { + "bbox": [ + 88, + 227, + 143, + 236 + ], + "type": "text", + "content": "Occluded input" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 126, + 90, + 156, + 110 + ], + "blocks": [ + { + "bbox": [ + 126, + 90, + 156, + 110 + ], + "lines": [ + { + "bbox": [ + 126, + 90, + 156, + 110 + ], + "spans": [ + { + "bbox": [ + 126, + 90, + 156, + 110 + ], + "type": "image", + "image_path": "7063db8c5a5bd2b6301c906bc345c66adcbd53290a21cbd3959582d1ae385ba6.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 126, + 143, + 156, + 154 + ], + "blocks": [ + { + "bbox": [ + 126, + 143, + 156, + 154 + ], + "lines": [ + { + "bbox": [ + 126, + 143, + 156, + 154 + ], + "spans": [ + { + "bbox": [ + 126, + 143, + 156, + 154 + ], + "type": "image", + "image_path": "0b5a2809eb2eb70a683759fa3294f926e8118b6f3f5bc2e8298be8c0dda5acf6.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 126, + 186, + 153, + 219 + ], + "blocks": [ + { + "bbox": [ + 126, + 186, + 153, + 219 + ], + "lines": [ + { + "bbox": [ + 126, + 186, + 153, + 219 + ], + "spans": [ + { + "bbox": [ + 126, + 186, + 153, + 219 + ], + "type": "image", + "image_path": "10bb96547372de9b4f65d4eaf0c6141fbc7f0c222a00a2e942f4714b00276065.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 181, + 89, + 210, + 110 + ], + "blocks": [ + { + "bbox": [ + 181, + 89, + 210, + 110 + ], + "lines": [ + { + "bbox": [ + 181, + 89, + 210, + 110 + ], + "spans": [ + { + "bbox": [ + 181, + 89, + 210, + 110 + ], + "type": "image", + "image_path": "ba5eb638fabfa1d44f967225d2a2705ce9672b003cd7ee8dbbffca2815b08da6.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 179, + 143, + 210, + 157 + ], + "blocks": [ + { + "bbox": [ + 179, + 143, + 210, + 157 + ], + "lines": [ + { + "bbox": [ + 179, + 143, + 210, + 157 + ], + "spans": [ + { + "bbox": [ + 179, + 143, + 210, + 157 + ], + "type": "image", + "image_path": "350c99a7dd911ccf593f469842074ab9d5ce521a997ceaef24354976106ec05f.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 184, + 
182, + 206, + 219 + ], + "blocks": [ + { + "bbox": [ + 184, + 182, + 206, + 219 + ], + "lines": [ + { + "bbox": [ + 184, + 182, + 206, + 219 + ], + "spans": [ + { + "bbox": [ + 184, + 182, + 206, + 219 + ], + "type": "image", + "image_path": "76ab33323ee953b7484b82fd71034ba1c0694d70c026740e58f0e37071d04d7b.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 226, + 78, + 297, + 113 + ], + "blocks": [ + { + "bbox": [ + 226, + 78, + 297, + 113 + ], + "lines": [ + { + "bbox": [ + 226, + 78, + 297, + 113 + ], + "spans": [ + { + "bbox": [ + 226, + 78, + 297, + 113 + ], + "type": "image", + "image_path": "e9fc53c83e64d69d56473b13d9cf025038ffb0a22ca274e9c7cc9d215834edbd.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 227, + 138, + 296, + 166 + ], + "blocks": [ + { + "bbox": [ + 227, + 138, + 296, + 166 + ], + "lines": [ + { + "bbox": [ + 227, + 138, + 296, + 166 + ], + "spans": [ + { + "bbox": [ + 227, + 138, + 296, + 166 + ], + "type": "image", + "image_path": "be3264b8b941c2ab0f81fe0ad141d6ac5078eec8d0feb03fdac2e59db115630d.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 233, + 176, + 291, + 219 + ], + "blocks": [ + { + "bbox": [ + 233, + 176, + 291, + 219 + ], + "lines": [ + { + "bbox": [ + 233, + 176, + 291, + 219 + ], + "spans": [ + { + "bbox": [ + 233, + 176, + 291, + 219 + ], + "type": "image", + "image_path": "76822cbfe9df726693fb6a192713b4c26a42092d8f023015bcff394b26c0e8ba.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 226, + 236, + 286, + 293 + ], + "blocks": [ + { + "bbox": [ + 165, + 226, + 220, + 236 + ], + "lines": [ + { + "bbox": [ + 165, + 226, + 220, + 236 + ], + "spans": [ + { + "bbox": [ + 165, + 226, + 220, + 236 + ], + "type": "text", + "content": "pix2gestalt [56]" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 226, + 236, + 286, + 293 + ], + "lines": [ + { + "bbox": [ + 226, + 236, + 286, + 293 + ], + "spans": [ + { + "bbox": [ + 226, + 236, + 286, + 293 + ], + "type": "image", + "image_path": "be50f897dc7b5833fd3799b532973036dc6f43eca836308d1eae730060cf5386.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 241, + 289, + 286, + 330 + ], + "blocks": [ + { + "bbox": [ + 241, + 289, + 286, + 330 + ], + "lines": [ + { + "bbox": [ + 241, + 289, + 286, + 330 + ], + "spans": [ + { + "bbox": [ + 241, + 289, + 286, + 330 + ], + "type": "image", + "image_path": "bc2fc0f147d342da6fc5086cdbc462ef9a6133e193a57d59e3342584957f4600.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + } + ], + "index": 13 + }, + { + "bbox": [ + 213, + 335, + 255, + 344 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 213, + 335, + 255, + 344 + ], + "spans": [ + { + "bbox": [ + 213, + 335, + 255, + 344 + ], + "type": "text", + "content": "Novel views" + } + ] + } + ], + "index": 14, + "type": "text" + }, + { + "type": "image", + "bbox": [ + 288, + 236, + 334, + 290 + ], + "blocks": [ + { + "bbox": [ + 288, + 236, + 334, + 290 + ], + "lines": [ + { + "bbox": [ + 288, + 236, + 334, + 290 + ], + "spans": [ + { + "bbox": [ + 288, + 236, + 334, + 290 + ], + "type": "image", + "image_path": 
"3f0690db07f3e2de1dcb4539f91f6fd2da780fcf0b23ecdd19b09f10c4a2b482.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 320, + 226, + 362, + 236 + ], + "lines": [ + { + "bbox": [ + 320, + 226, + 362, + 236 + ], + "spans": [ + { + "bbox": [ + 320, + 226, + 362, + 236 + ], + "type": "text", + "content": "Real3D [28]" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_caption" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 289, + 289, + 334, + 330 + ], + "blocks": [ + { + "bbox": [ + 289, + 289, + 334, + 330 + ], + "lines": [ + { + "bbox": [ + 289, + 289, + 334, + 330 + ], + "spans": [ + { + "bbox": [ + 289, + 289, + 334, + 330 + ], + "type": "image", + "image_path": "6be0ab649a90c7e78d3f937affcceda47d57a38310d041186c17d451bbd7fc15.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 304, + 78, + 374, + 116 + ], + "blocks": [ + { + "bbox": [ + 304, + 78, + 374, + 116 + ], + "lines": [ + { + "bbox": [ + 304, + 78, + 374, + 116 + ], + "spans": [ + { + "bbox": [ + 304, + 78, + 374, + 116 + ], + "type": "image", + "image_path": "8c88269e51c66fd1d7a2f8e4eee5496f8dd65a9c22e0427e65b3071ee782d987.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 304, + 138, + 372, + 161 + ], + "blocks": [ + { + "bbox": [ + 304, + 138, + 372, + 161 + ], + "lines": [ + { + "bbox": [ + 304, + 138, + 372, + 161 + ], + "spans": [ + { + "bbox": [ + 304, + 138, + 372, + 161 + ], + "type": "image", + "image_path": "e9461051aa711aa2a02f6151e859e9065c3cc6830a48ca9819d07d694df23137.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 315, + 180, + 368, + 220 + ], + "blocks": [ + { + "bbox": [ + 315, + 180, + 368, + 220 + ], + "lines": [ + { + "bbox": [ + 315, + 180, + 368, + 220 + ], + "spans": [ + { + "bbox": [ + 315, + 180, + 368, + 220 + ], + "type": "image", + "image_path": "9887c95bebc5c836fb6365b6d00fa9131761303e1f18801193509e0736ae6d6d.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 383, + 78, + 447, + 119 + ], + "blocks": [ + { + "bbox": [ + 383, + 78, + 447, + 119 + ], + "lines": [ + { + "bbox": [ + 383, + 78, + 447, + 119 + ], + "spans": [ + { + "bbox": [ + 383, + 78, + 447, + 119 + ], + "type": "image", + "image_path": "5b3325bbcf41da6807dfe91632d5b72309bb8f548f7117299add63bfb36f0dd5.jpg" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_body" + } + ], + "index": 21 + }, + { + "type": "image", + "bbox": [ + 381, + 138, + 452, + 165 + ], + "blocks": [ + { + "bbox": [ + 381, + 138, + 452, + 165 + ], + "lines": [ + { + "bbox": [ + 381, + 138, + 452, + 165 + ], + "spans": [ + { + "bbox": [ + 381, + 138, + 452, + 165 + ], + "type": "image", + "image_path": "0436f7b4f917ac40bc1178bcecd4d2507c73de79c39d3247c622f3b88ec8dab8.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + } + ], + "index": 22 + }, + { + "type": "image", + "bbox": [ + 382, + 186, + 452, + 211 + ], + "blocks": [ + { + "bbox": [ + 382, + 186, + 452, + 211 + ], + "lines": [ + { + "bbox": [ + 382, + 186, + 452, + 211 + ], + "spans": [ + { + "bbox": [ + 382, + 186, + 452, + 211 + ], + "type": "image", + "image_path": "de8e3973fc101975ccc4d55e99daa0e75ecdf0ffe76d175c3a4eb8f66839420b.jpg" + } + ] + } + ], + 
"index": 23, + "angle": 0, + "type": "image_body" + } + ], + "index": 23 + }, + { + "type": "image", + "bbox": [ + 465, + 76, + 541, + 125 + ], + "blocks": [ + { + "bbox": [ + 465, + 76, + 541, + 125 + ], + "lines": [ + { + "bbox": [ + 465, + 76, + 541, + 125 + ], + "spans": [ + { + "bbox": [ + 465, + 76, + 541, + 125 + ], + "type": "image", + "image_path": "16074b541591a6c0599039739318680c096fc55bc1bd6d6c758d7914adccda39.jpg" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_body" + } + ], + "index": 25 + }, + { + "type": "image", + "bbox": [ + 465, + 138, + 546, + 165 + ], + "blocks": [ + { + "bbox": [ + 465, + 138, + 546, + 165 + ], + "lines": [ + { + "bbox": [ + 465, + 138, + 546, + 165 + ], + "spans": [ + { + "bbox": [ + 465, + 138, + 546, + 165 + ], + "type": "image", + "image_path": "736e8cf2151488d8fa503ff4669c5f471d9197d18f871258fe25745f54af420e.jpg" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_body" + } + ], + "index": 26 + }, + { + "type": "image", + "bbox": [ + 465, + 185, + 550, + 213 + ], + "blocks": [ + { + "bbox": [ + 465, + 185, + 550, + 213 + ], + "lines": [ + { + "bbox": [ + 465, + 185, + 550, + 213 + ], + "spans": [ + { + "bbox": [ + 465, + 185, + 550, + 213 + ], + "type": "image", + "image_path": "fc1c7b906106d3c2535af2b3ff68f48d36973bf0713210e9b5530f249970eedb.jpg" + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_body" + } + ], + "index": 27 + }, + { + "type": "image", + "bbox": [ + 63, + 236, + 115, + 285 + ], + "blocks": [ + { + "bbox": [ + 63, + 236, + 115, + 285 + ], + "lines": [ + { + "bbox": [ + 63, + 236, + 115, + 285 + ], + "spans": [ + { + "bbox": [ + 63, + 236, + 115, + 285 + ], + "type": "image", + "image_path": "958925104b6a6de5fb02fda3c3d1e28e93fccfbdbb89d9f987c768b2ff365f3a.jpg" + } + ] + } + ], + "index": 29, + "angle": 0, + "type": "image_body" + } + ], + "index": 29 + }, + { + "type": "image", + "bbox": [ + 63, + 287, + 115, + 335 + ], + "blocks": [ + { + "bbox": [ + 63, + 287, + 115, + 335 + ], + "lines": [ + { + "bbox": [ + 63, + 287, + 115, + 335 + ], + "spans": [ + { + "bbox": [ + 63, + 287, + 115, + 335 + ], + "type": "image", + "image_path": "a50c8ecd263a6364cf47396dc2a5a0e1535a8c6165bebb4a6e03063e8a105a8c.jpg" + } + ] + } + ], + "index": 30, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 64, + 335, + 118, + 344 + ], + "lines": [ + { + "bbox": [ + 64, + 335, + 118, + 344 + ], + "spans": [ + { + "bbox": [ + 64, + 335, + 118, + 344 + ], + "type": "text", + "content": "Occluded input" + } + ] + } + ], + "index": 31, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 59, + 349, + 550, + 361 + ], + "lines": [ + { + "bbox": [ + 59, + 349, + 550, + 361 + ], + "spans": [ + { + "bbox": [ + 59, + 349, + 550, + 361 + ], + "type": "text", + "content": "Figure 8. Examples on Replica [70] and in-the-wild images. The target objects and the occluders are shown in red and green outlines." 
+ } + ] + } + ], + "index": 46, + "angle": 0, + "type": "image_caption" + } + ], + "index": 30 + }, + { + "type": "image", + "bbox": [ + 137, + 246, + 180, + 286 + ], + "blocks": [ + { + "bbox": [ + 137, + 246, + 180, + 286 + ], + "lines": [ + { + "bbox": [ + 137, + 246, + 180, + 286 + ], + "spans": [ + { + "bbox": [ + 137, + 246, + 180, + 286 + ], + "type": "image", + "image_path": "45e6b6daacb0e381376c64d91c368b9d73b9f823fb4c0240369ddaa998806e7c.jpg" + } + ] + } + ], + "index": 32, + "angle": 0, + "type": "image_body" + } + ], + "index": 32 + }, + { + "type": "image", + "bbox": [ + 138, + 292, + 179, + 330 + ], + "blocks": [ + { + "bbox": [ + 138, + 292, + 179, + 330 + ], + "lines": [ + { + "bbox": [ + 138, + 292, + 179, + 330 + ], + "spans": [ + { + "bbox": [ + 138, + 292, + 179, + 330 + ], + "type": "image", + "image_path": "17048f440df24527f3766f08317540eebaa019d762cb04164ad02bdc06cb028d.jpg" + } + ] + } + ], + "index": 33, + "angle": 0, + "type": "image_body" + } + ], + "index": 33 + }, + { + "type": "image", + "bbox": [ + 187, + 247, + 230, + 287 + ], + "blocks": [ + { + "bbox": [ + 187, + 247, + 230, + 287 + ], + "lines": [ + { + "bbox": [ + 187, + 247, + 230, + 287 + ], + "spans": [ + { + "bbox": [ + 187, + 247, + 230, + 287 + ], + "type": "image", + "image_path": "800958d49af741babbe642e9558420b3494187539a3e11efe91645f4be6f77ef.jpg" + } + ] + } + ], + "index": 34, + "angle": 0, + "type": "image_body" + } + ], + "index": 34 + }, + { + "type": "image", + "bbox": [ + 186, + 291, + 228, + 330 + ], + "blocks": [ + { + "bbox": [ + 186, + 291, + 228, + 330 + ], + "lines": [ + { + "bbox": [ + 186, + 291, + 228, + 330 + ], + "spans": [ + { + "bbox": [ + 186, + 291, + 228, + 330 + ], + "type": "image", + "image_path": "f7b7f66e83d2bf216ea9dbe2aa9de530f935d25f68b932e37f802f1b1eaa07a4.jpg" + } + ] + } + ], + "index": 35, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 212, + 335, + 255, + 344 + ], + "lines": [ + { + "bbox": [ + 212, + 335, + 255, + 344 + ], + "spans": [ + { + "bbox": [ + 212, + 335, + 255, + 344 + ], + "type": "text", + "content": "Novel views" + } + ] + } + ], + "index": 36, + "angle": 0, + "type": "image_caption" + } + ], + "index": 35 + }, + { + "bbox": [ + 241, + 237, + 285, + 293 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 241, + 237, + 285, + 293 + ], + "spans": [ + { + "bbox": [ + 241, + 237, + 285, + 293 + ], + "type": "image", + "image_path": "b4ceec4bd731d38f71d42deb970e9b0c5848de0469f315d19ea9701c00561329.jpg" + } + ] + } + ], + "index": 37, + "type": "text" + }, + { + "bbox": [ + 242, + 289, + 286, + 330 + ], + "angle": 0, + "lines": [ + { + "bbox": [ + 242, + 289, + 286, + 330 + ], + "spans": [ + { + "bbox": [ + 242, + 289, + 286, + 330 + ], + "type": "image", + "image_path": "59e88732da48f90c6a5ab40790481d8ce6640879122148f1e2e16e7497cefa26.jpg" + } + ] + } + ], + "index": 38, + "type": "text" + }, + { + "type": "image", + "bbox": [ + 353, + 246, + 395, + 285 + ], + "blocks": [ + { + "bbox": [ + 353, + 246, + 395, + 285 + ], + "lines": [ + { + "bbox": [ + 353, + 246, + 395, + 285 + ], + "spans": [ + { + "bbox": [ + 353, + 246, + 395, + 285 + ], + "type": "image", + "image_path": "3ca9dcb41744fa7472a5dc51f67271f0714f4d2f4c1467b5e01052836b1d0e0e.jpg" + } + ] + } + ], + "index": 39, + "angle": 0, + "type": "image_body" + } + ], + "index": 39 + }, + { + "type": "image", + "bbox": [ + 354, + 290, + 395, + 330 + ], + "blocks": [ + { + "bbox": [ + 354, + 290, + 395, + 330 + ], + "lines": [ + { + "bbox": [ + 354, + 290, + 395, + 330 + ], + "spans": [ + 
{ + "bbox": [ + 354, + 290, + 395, + 330 + ], + "type": "image", + "image_path": "3a243e98dffc9f87e7ad296a48b11c97595783f58d9893880df71e77bb952838.jpg" + } + ] + } + ], + "index": 40, + "angle": 0, + "type": "image_body" + } + ], + "index": 40 + }, + { + "type": "image", + "bbox": [ + 402, + 246, + 444, + 286 + ], + "blocks": [ + { + "bbox": [ + 402, + 246, + 444, + 286 + ], + "lines": [ + { + "bbox": [ + 402, + 246, + 444, + 286 + ], + "spans": [ + { + "bbox": [ + 402, + 246, + 444, + 286 + ], + "type": "image", + "image_path": "f1d752d3d5e77e56be8c66c9c56e72dffb7bfdcbfe2586e72410d9c2b8500fe0.jpg" + } + ] + } + ], + "index": 41, + "angle": 0, + "type": "image_body" + } + ], + "index": 41 + }, + { + "type": "image", + "bbox": [ + 399, + 290, + 441, + 330 + ], + "blocks": [ + { + "bbox": [ + 399, + 290, + 441, + 330 + ], + "lines": [ + { + "bbox": [ + 399, + 290, + 441, + 330 + ], + "spans": [ + { + "bbox": [ + 399, + 290, + 441, + 330 + ], + "type": "image", + "image_path": "2dbda0b4fbc6011b286ddeb6f2034aeffe7b9797fc27cc220c8d72b5abde4352.jpg" + } + ] + } + ], + "index": 42, + "angle": 0, + "type": "image_body" + } + ], + "index": 42 + }, + { + "type": "image", + "bbox": [ + 450, + 237, + 541, + 293 + ], + "blocks": [ + { + "bbox": [ + 392, + 226, + 444, + 236 + ], + "lines": [ + { + "bbox": [ + 392, + 226, + 444, + 236 + ], + "spans": [ + { + "bbox": [ + 392, + 226, + 444, + 236 + ], + "type": "text", + "content": "TRELLIS [81]" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 473, + 226, + 535, + 236 + ], + "lines": [ + { + "bbox": [ + 473, + 226, + 535, + 236 + ], + "spans": [ + { + "bbox": [ + 473, + 226, + 535, + 236 + ], + "type": "text", + "content": "Amodal3R (Ours)" + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 450, + 237, + 541, + 293 + ], + "lines": [ + { + "bbox": [ + 450, + 237, + 541, + 293 + ], + "spans": [ + { + "bbox": [ + 450, + 237, + 541, + 293 + ], + "type": "image", + "image_path": "4ef4fb4a6a9188008e9ff35804e9704402f4e8a130e42e4a5758ef9c1d7656a5.jpg" + } + ] + } + ], + "index": 43, + "angle": 0, + "type": "image_body" + } + ], + "index": 43 + }, + { + "type": "image", + "bbox": [ + 450, + 293, + 541, + 331 + ], + "blocks": [ + { + "bbox": [ + 450, + 293, + 541, + 331 + ], + "lines": [ + { + "bbox": [ + 450, + 293, + 541, + 331 + ], + "spans": [ + { + "bbox": [ + 450, + 293, + 541, + 331 + ], + "type": "image", + "image_path": "789951e6b632a16df3aa69afad044cfe8b02196fc2f0ae28e80fc511b5fcdc94.jpg" + } + ] + } + ], + "index": 44, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 422, + 335, + 471, + 345 + ], + "lines": [ + { + "bbox": [ + 422, + 335, + 471, + 345 + ], + "spans": [ + { + "bbox": [ + 422, + 335, + 471, + 345 + ], + "type": "text", + "content": "Normal maps" + } + ] + } + ], + "index": 45, + "angle": 0, + "type": "image_caption" + } + ], + "index": 44 + }, + { + "bbox": [ + 55, + 381, + 295, + 477 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 381, + 295, + 477 + ], + "spans": [ + { + "bbox": [ + 55, + 381, + 295, + 477 + ], + "type": "text", + "content": "bility and occlusion masks, but without using the modules of Sec. 3.2. 
Instead, we directly concatenate the tokens " + }, + { + "bbox": [ + 55, + 381, + 295, + 477 + ], + "type": "inline_equation", + "content": "c_{\\mathrm{vis}}" + }, + { + "bbox": [ + 55, + 381, + 295, + 477 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 55, + 381, + 295, + 477 + ], + "type": "inline_equation", + "content": "c_{\\mathrm{dino}}" + }, + { + "bbox": [ + 55, + 381, + 295, + 477 + ], + "type": "text", + "content": " to form the condition for the cross-attention layer. The results (first row in Tab. 3 and second column in Fig. 7) show that, while the resulting model can still perform basic completion, the textures in the occluded regions are inconsistent with those in the visible ones, and the reconstructed geometry is inaccurate, e.g. the hole in the shoe." + } + ] + } + ], + "index": 47 + }, + { + "bbox": [ + 55, + 487, + 296, + 595 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 487, + 296, + 595 + ], + "spans": [ + { + "bbox": [ + 55, + 487, + 296, + 595 + ], + "type": "text", + "content": "Mask-weighted Attention. To evaluate the effectiveness of our proposed mask-weighted attention mechanism, we integrate it into the training while omitting the occlusion-aware layer. The results demonstrate improved rendering quality — especially in capturing texture details in the visible areas — and significantly enhanced appearance consistency. However, the geometries exhibit deficiencies, as seen in the problematic shoe in Fig. 7 (first row), and the toy monster with a broken back (third row)." + } + ] + } + ], + "index": 48 + }, + { + "bbox": [ + 55, + 605, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 605, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 605, + 296, + 713 + ], + "type": "text", + "content": "Occlusion-aware Layer. The integration of the occlusion-aware layer improves the geometry both quantitatively and qualitatively. This aligns with our motivation for introducing an additional cross-attention layer, aimed at reconstructing the visible areas via the image-conditioned layer and occluded areas via the subsequent layer. However, occlusion-aware layer alone still results in unsatisfactory appearances, which again indirectly attests to the effectiveness of mask-weighted attention mechanism." + } + ] + } + ], + "index": 49 + }, + { + "bbox": [ + 313, + 381, + 554, + 418 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 381, + 554, + 418 + ], + "spans": [ + { + "bbox": [ + 313, + 381, + 554, + 418 + ], + "type": "text", + "content": "Full Model. Consequently, the full model integrating both modules achieves optimal 3D generation results characterized by complete geometry and consistent textures." + } + ] + } + ], + "index": 50 + }, + { + "bbox": [ + 313, + 425, + 511, + 438 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 425, + 511, + 438 + ], + "spans": [ + { + "bbox": [ + 313, + 425, + 511, + 438 + ], + "type": "text", + "content": "4.4. 
Real-World Generation / Completion" + } + ] + } + ], + "index": 51 + }, + { + "bbox": [ + 313, + 442, + 555, + 611 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 442, + 555, + 611 + ], + "spans": [ + { + "bbox": [ + 313, + 442, + 555, + 611 + ], + "type": "text", + "content": "Amodal3R is inherently superior in generalizing to out-of-distribution amodal 3D reconstruction, primarily due to the fact that we build upon the model on a \"foundation\" 3D generator, and fine-tune it with diverse categories. To demonstrate this advantage, we conduct scene-level dataset evaluations: Replica [70] (first two rows) and on in-the-wild images (3rd-5th rows) in Fig. 8. Here we adopt Segment Anything [35] to get the visibility and occlusion masks. The results show that Amodal3R generates reasonable 3D assets, whereas pix2gestalt fails to infer complete shapes from the same inputs, leading to unsatisfactory results from subsequent image-to-3D models. We also visualize the colored normal maps, which show that the results of Amodal3R are simultaneously rich in geometric and textural detail." + } + ] + } + ], + "index": 52 + }, + { + "bbox": [ + 313, + 621, + 388, + 634 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 621, + 388, + 634 + ], + "spans": [ + { + "bbox": [ + 313, + 621, + 388, + 634 + ], + "type": "text", + "content": "5. Conclusion" + } + ] + } + ], + "index": 53 + }, + { + "bbox": [ + 313, + 641, + 555, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 641, + 555, + 715 + ], + "spans": [ + { + "bbox": [ + 313, + 641, + 555, + 715 + ], + "type": "text", + "content": "We have introduced Amodal3R, a novel approach to reconstruct complete 3D shape and appearance from partially visible 2D images. The key to our success is that we construct mask-weighted cross-attention mechanism and occlusion-aware layer to effectively exploit visible and occluded information. Compared to the state-of-the-art methods that rely" + } + ] + } + ], + "index": 54 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 55 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 297, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 297, + 156 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 297, + 156 + ], + "type": "text", + "content": "on sequential 2D completion followed by 3D generation, our Amodal3R achieves remarkably better performance by operating directly in 3D space. Furthermore, results on inthe-wild images indicate its potential for subsequent applications in 3D decomposition and scene understanding, marking a step towards robust 3D asset reconstruction in real-world environments with complex occlusion." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 165, + 115, + 178 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 165, + 115, + 178 + ], + "spans": [ + { + "bbox": [ + 56, + 165, + 115, + 178 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 57, + 185, + 296, + 713 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 61, + 185, + 296, + 250 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 61, + 185, + 296, + 250 + ], + "spans": [ + { + "bbox": [ + 61, + 185, + 296, + 250 + ], + "type": "text", + "content": "[1] Titas Anciukevicius, Zexiang Xu, Matthew Fisher, Paul Henderson, Hakan Bilen, Niloy J Mitra, and Paul Guerrero. Renderdiffusion: Image diffusion for 3d reconstruction, inpainting and generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12608-12618, 2023. 2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 62, + 251, + 296, + 283 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 251, + 296, + 283 + ], + "spans": [ + { + "bbox": [ + 62, + 251, + 296, + 283 + ], + "type": "text", + "content": "[2] Jiayang Ao, Yanbei Jiang, Qiuhong Ke, and Krista A Ehinger. Open-world amodal appearance completion. arXiv preprint arXiv:2411.13019, 2024. 2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 62, + 285, + 295, + 316 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 285, + 295, + 316 + ], + "spans": [ + { + "bbox": [ + 62, + 285, + 295, + 316 + ], + "type": "text", + "content": "[3] Mikołaj Binkowski, Danica J Sutherland, Michael Arbel, and Arthur Gretton. Demystifying mmd gans. arXiv preprint arXiv:1801.01401, 2018. 6, 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 62, + 318, + 294, + 372 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 318, + 294, + 372 + ], + "spans": [ + { + "bbox": [ + 62, + 318, + 294, + 372 + ], + "type": "text", + "content": "[4] Eric R Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 5799-5809, 2021. 2" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 62, + 373, + 294, + 426 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 373, + 294, + 426 + ], + "spans": [ + { + "bbox": [ + 62, + 373, + 294, + 426 + ], + "type": "text", + "content": "[5] Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J Guibas, Jonathan Tremblay, Sameh Khamis, et al. Efficient geometry-aware 3d generative adversarial networks. In CVPR, 2022. 1, 2" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 62, + 428, + 294, + 471 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 428, + 294, + 471 + ], + "spans": [ + { + "bbox": [ + 62, + 428, + 294, + 471 + ], + "type": "text", + "content": "[6] Anpei Chen, Haofei Xu, Stefano Esposito, Siyu Tang, and Andreas Geiger. Lara: Efficient large-baseline radiance fields. In European Conference on Computer Vision, pages 338-355. Springer, 2024. 
1, 6, 7, 5" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 62, + 472, + 294, + 536 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 472, + 294, + 536 + ], + "spans": [ + { + "bbox": [ + 62, + 472, + 294, + 536 + ], + "type": "text", + "content": "[7] Minghao Chen, Roman Shapovalov, Iro Laina, Tom Monnier, Jianyuan Wang, David Novotny, and Andrea Vedaldi. Partgen: Part-level 3d generation and reconstruction with multi-view diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. 2, 3" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 62, + 538, + 294, + 590 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 538, + 294, + 590 + ], + "spans": [ + { + "bbox": [ + 62, + 538, + 294, + 590 + ], + "type": "text", + "content": "[8] Yiwen Chen, Tong He, Di Huang, Weicai Ye, Sijin Chen, Ji-axiang Tang, Xin Chen, Zhongang Cai, Lei Yang, Gang Yu, et al. Meshanything: Artist-created mesh generation with autoregressive transformers. arXiv preprint arXiv:2406.10163, 2024. 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 62, + 593, + 294, + 635 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 62, + 593, + 294, + 635 + ], + "spans": [ + { + "bbox": [ + 62, + 593, + 294, + 635 + ], + "type": "text", + "content": "[9] Yuedong Chen, Chuanxia Zheng, Haofei Xu, Bohan Zhuang, Andrea Vedaldi, Tat-Jen Cham, and Jianfei Cai. Mvsplat360: Feed-forward 360 scene synthesis from sparse views. In Neural Information Processing Systems (NeurIPS), 2024. 2" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 57, + 637, + 294, + 691 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 637, + 294, + 691 + ], + "spans": [ + { + "bbox": [ + 57, + 637, + 294, + 691 + ], + "type": "text", + "content": "[10] Yen-Chi Cheng, Hsin-Ying Lee, Sergey Tulyakov, Alexander G Schwing, and Liang-Yan Gui. Sdfusion: Multimodal 3d shape completion, reconstruction, and generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4456-4465, 2023. 2" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 693, + 294, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 693, + 294, + 713 + ], + "spans": [ + { + "bbox": [ + 57, + 693, + 294, + 713 + ], + "type": "text", + "content": "[11] Junhyeong Cho, Kim Youwang, Hunmin Yang, and TaeHyun Oh. Robust 3d shape reconstruction in zero-shot from" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 553, + 712 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 333, + 73, + 553, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 73, + 553, + 106 + ], + "spans": [ + { + "bbox": [ + 333, + 73, + 553, + 106 + ], + "type": "text", + "content": "a single image in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. 2" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 317, + 107, + 553, + 150 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 107, + 553, + 150 + ], + "spans": [ + { + "bbox": [ + 317, + 107, + 553, + 150 + ], + "type": "text", + "content": "[12] Ruihang Chu, Enze Xie, Shentong Mo, Zhenguo Li, Matthias Nießner, Chi-Wing Fu, and Jiaya Jia. Diffcomplete: Diffusion-based generative 3d shape completion. 
Advances in Neural Information Processing Systems, 36, 2024. 2" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 317, + 152, + 553, + 226 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 152, + 553, + 226 + ], + "spans": [ + { + "bbox": [ + 317, + 152, + 553, + 226 + ], + "type": "text", + "content": "[13] Jasmine Collins, Shubham Goel, Kenan Deng, Achleshwar Luthra, Leon Xu, Erhan Gundogdu, Xi Zhang, Tomas F Yago Vicente, Thomas Dideriksen, Himanshu Arora, et al. Abo: Dataset and benchmarks for real-world 3d object understanding. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 21126-21136, 2022. 5" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 317, + 228, + 553, + 293 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 228, + 553, + 293 + ], + "spans": [ + { + "bbox": [ + 317, + 228, + 553, + 293 + ], + "type": "text", + "content": "[14] Ruikai Cui, Weizhe Liu, Weixuan Sun, Senbo Wang, Taizhang Shang, Yang Li, Xibin Song, Han Yan, Zhennan Wu, Shenzhen Chen, et al. Neusdfusion: A spatial-aware generative model for 3d shape completion, reconstruction, and generation. In European Conference on Computer Vision, pages 1-18. Springer, 2024. 2" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 317, + 294, + 553, + 348 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 294, + 553, + 348 + ], + "spans": [ + { + "bbox": [ + 317, + 294, + 553, + 348 + ], + "type": "text", + "content": "[15] Matt Deitke, Ruoshi Liu, Matthew Wallingford, Huong Ngo, Oscar Michel, Aditya Kusupati, Alan Fan, Christian Laforte, Vikram Voleti, Samir Yitzhak Gadre, et al. Objverse-xl: A universe of " + }, + { + "bbox": [ + 317, + 294, + 553, + 348 + ], + "type": "inline_equation", + "content": "10\\mathrm{m} + 3\\mathrm{d}" + }, + { + "bbox": [ + 317, + 294, + 553, + 348 + ], + "type": "text", + "content": " objects. Advances in Neural Information Processing Systems, 36, 2024. 3" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 317, + 350, + 553, + 415 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 350, + 553, + 415 + ], + "spans": [ + { + "bbox": [ + 317, + 350, + 553, + 415 + ], + "type": "text", + "content": "[16] Laura Downs, Anthony Francis, Nate Koenig, Brandon Kinman, Ryan Hickman, Krista Reymann, Thomas B McHugh, and Vincent Vanhoucke. Google scanned objects: A high-quality dataset of 3d scanned household items. In 2022 International Conference on Robotics and Automation (ICRA), pages 2553-2560. IEEE, 2022. 2, 6" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 317, + 416, + 553, + 458 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 416, + 553, + 458 + ], + "spans": [ + { + "bbox": [ + 317, + 416, + 553, + 458 + ], + "type": "text", + "content": "[17] Huan Fu, Rongfei Jia, Lin Gao, Mingming Gong, Binqiang Zhao, Steve Maybank, and Dacheng Tao. 3d-future: 3d furniture shape with texture. International Journal of Computer Vision, 129:3313-3337, 2021. 5" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 317, + 460, + 553, + 514 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 460, + 553, + 514 + ], + "spans": [ + { + "bbox": [ + 317, + 460, + 553, + 514 + ], + "type": "text", + "content": "[18] Jun Gao, Tianchang Shen, Zian Wang, Wenzheng Chen, Kangxue Yin, Daiqing Li, Or Litany, Zan Gojcic, and Sanja Fidler. 
Get3d: A generative model of high quality 3d textured shapes learned from images. Advances In Neural Information Processing Systems, 35:31841-31854, 2022. 2" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 317, + 515, + 553, + 557 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 515, + 553, + 557 + ], + "spans": [ + { + "bbox": [ + 317, + 515, + 553, + 557 + ], + "type": "text", + "content": "[19] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139-144, 2020. 2" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 317, + 559, + 553, + 591 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 559, + 553, + 591 + ], + "spans": [ + { + "bbox": [ + 317, + 559, + 553, + 591 + ], + "type": "text", + "content": "[20] Anchit Gupta, Wenhan Xiong, Yixin Nie, Ian Jones, and Barlas Oğuz. 3dgen: Triplane latent diffusion for textured mesh generation. arXiv preprint arXiv:2303.05371, 2023. 2" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 317, + 593, + 553, + 645 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 317, + 593, + 553, + 645 + ], + "spans": [ + { + "bbox": [ + 317, + 593, + 553, + 645 + ], + "type": "text", + "content": "[21] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. 6, 2" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 647, + 553, + 669 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 647, + 553, + 669 + ], + "spans": [ + { + "bbox": [ + 316, + 647, + 553, + 669 + ], + "type": "text", + "content": "[22] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. 6" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 670, + 553, + 712 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 670, + 553, + 712 + ], + "spans": [ + { + "bbox": [ + 316, + 670, + 553, + 712 + ], + "type": "text", + "content": "[23] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems (NeurIPS), pages 6840-6851, 2020. 2" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 309, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 309, + 742 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 309, + 742 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 294, + 713 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 56, + 72, + 294, + 127 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 72, + 294, + 127 + ], + "spans": [ + { + "bbox": [ + 56, + 72, + 294, + 127 + ], + "type": "text", + "content": "[24] Yicong Hong, Kai Zhang, Juixiang Gu, Sai Bi, Yang Zhou, Difan Liu, Feng Liu, Kalyan Sunkavalli, Trung Bui, and Hao Tan. Lrm: Large reconstruction model for single image to 3d. In International Conference on Learning Representations (ICLR), 2024. 
1" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 129, + 294, + 173 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 129, + 294, + 173 + ], + "spans": [ + { + "bbox": [ + 56, + 129, + 294, + 173 + ], + "type": "text", + "content": "[25] Tao Hu, Zhizhong Han, and Matthias Zwicker. 3d shape completion with multi-view consistent inference. In Proceedings of the AAAI conference on artificial intelligence, pages 10997-11004, 2020. 2" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 175, + 294, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 175, + 294, + 228 + ], + "spans": [ + { + "bbox": [ + 56, + 175, + 294, + 228 + ], + "type": "text", + "content": "[26] Zitian Huang, Yikuan Yu, Jiawen Xu, Feng Ni, and Xinyi Le. Pf-net: Point fractal network for 3d point cloud completion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 7662-7670, 2020. 2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 230, + 294, + 263 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 230, + 294, + 263 + ], + "spans": [ + { + "bbox": [ + 56, + 230, + 294, + 263 + ], + "type": "text", + "content": "[27] Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and locally consistent image completion. ACM Transactions on Graphics (TOG), 36(4):107, 2017. 5" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 264, + 294, + 308 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 264, + 294, + 308 + ], + "spans": [ + { + "bbox": [ + 56, + 264, + 294, + 308 + ], + "type": "text", + "content": "[28] Hanwen Jiang, Qixing Huang, and Georgios Pavlakos. Real3d: Scaling up large reconstruction models with real-world images. arXiv preprint arXiv:2406.08479, 2024. 6, 7, 8, 2, 4" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 309, + 294, + 342 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 309, + 294, + 342 + ], + "spans": [ + { + "bbox": [ + 56, + 309, + 294, + 342 + ], + "type": "text", + "content": "[29] Heewoo Jun and Alex Nichol. Shap-e: Generating conditional 3d implicit functions. arXiv preprint arXiv:2305.02463, 2023. 3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 344, + 294, + 387 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 344, + 294, + 387 + ], + "spans": [ + { + "bbox": [ + 56, + 344, + 294, + 387 + ], + "type": "text", + "content": "[30] Yoni Kasten, Ohad Rahamim, and Gal Chechik. Point cloud completion with pretrained text-to-image diffusion models. Advances in Neural Information Processing Systems, 36, 2024. 2" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 388, + 294, + 431 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 388, + 294, + 431 + ], + "spans": [ + { + "bbox": [ + 56, + 388, + 294, + 431 + ], + "type": "text", + "content": "[31] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkuhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139-1, 2023. 
4" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 434, + 294, + 520 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 434, + 294, + 520 + ], + "spans": [ + { + "bbox": [ + 56, + 434, + 294, + 520 + ], + "type": "text", + "content": "[32] Mukul Khanna, Yongsen Mao, Hanxiao Jiang, Sanjay Haresh, Brennan Shacklett, Dhruv Batra, Alexander Clegg, Eric Undersander, Angel X Chang, and Manolis Savva. Habitat synthetic scenes dataset (hssd-200): An analysis of 3d scene scale and realism tradeoffs for objectgoal navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16384–16393, 2024. 5" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 523, + 294, + 577 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 523, + 294, + 577 + ], + "spans": [ + { + "bbox": [ + 56, + 523, + 294, + 577 + ], + "type": "text", + "content": "[33] Gwanghyun Kim, Taesung Kwon, and Jong Chul Ye. Diffus ionclip: Text-guided diffusion models for robust image man i plication. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2426-2435, 2022. 5" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 579, + 294, + 634 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 579, + 294, + 634 + ], + "spans": [ + { + "bbox": [ + 56, + 579, + 294, + 634 + ], + "type": "text", + "content": "[34] Yulhwa Kim, Dongwon Jo, Hyesung Jeon, Taesu Kim, Daehyun Ahn, Hyungjun Kim, et al. Leveraging early-stage robustness in diffusion models for efficient and high-quality image synthesis. Advances in Neural Information Processing Systems, 36:1229-1244, 2023. 5" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 635, + 294, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 635, + 294, + 689 + ], + "spans": [ + { + "bbox": [ + 56, + 635, + 294, + 689 + ], + "type": "text", + "content": "[35] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4015-4026, 2023. 3, 8" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 56, + 691, + 294, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 691, + 294, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 691, + 294, + 713 + ], + "type": "text", + "content": "[36] Yushi Lan, Feitong Tan, Di Qiu, Qiangeng Xu, Kyle Genova, Zeng Huang, Sean Fanello, Rohit Pandey, Thomas" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 72, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 28, + "blocks": [ + { + "bbox": [ + 333, + 72, + 553, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 333, + 72, + 553, + 106 + ], + "spans": [ + { + "bbox": [ + 333, + 72, + 553, + 106 + ], + "type": "text", + "content": "Funkhouser, Chen Change Loy, et al. Gaussian3diff: 3d gaussian diffusion for 3d full head synthesis and editing. arXiv preprint arXiv:2312.03763, 2023. 
2" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 107, + 553, + 160 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 107, + 553, + 160 + ], + "spans": [ + { + "bbox": [ + 316, + 107, + 553, + 160 + ], + "type": "text", + "content": "[37] Yushi Lan, Shangchen Zhou, Zhaoyang Lyu, Fangzhou Hong, Shuai Yang, Bo Dai, Xingang Pan, and Chen Change Loy. Gaussiananything: Interactive point cloud latent diffusion for 3d generation. arXiv preprint arXiv:2411.08033, 2024. 6, 7, 8, 2, 4" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 162, + 553, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 162, + 553, + 205 + ], + "spans": [ + { + "bbox": [ + 316, + 162, + 553, + 205 + ], + "type": "text", + "content": "[38] Muheng Li, Yueqi Duan, Jie Zhou, and Jiwen Lu. Diffusionsdf: Text-to-shape via voxelized diffusion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12642-12651, 2023. 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 206, + 553, + 249 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 206, + 553, + 249 + ], + "spans": [ + { + "bbox": [ + 316, + 206, + 553, + 249 + ], + "type": "text", + "content": "[39] Xinhai Li, Huaibin Wang, and Kuo-Kun Tseng. Gaussian diffusion: 3d gaussian splatting for denoising diffusion probabilistic models with structured noise. arXiv preprint arXiv:2311.11221, 2023. 2" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 250, + 553, + 293 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 250, + 553, + 293 + ], + "spans": [ + { + "bbox": [ + 316, + 250, + 553, + 293 + ], + "type": "text", + "content": "[40] Chen-Hsuan Lin, Chen Kong, and Simon Lucey. Learning efficient point cloud generation for dense 3d object reconstruction. In proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2018. 2" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 294, + 553, + 347 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 294, + 553, + 347 + ], + "spans": [ + { + "bbox": [ + 316, + 294, + 553, + 347 + ], + "type": "text", + "content": "[41] Guilin Liu, Fitsum A. Reda, Kevin J. Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), 2018. 5" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 349, + 555, + 403 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 349, + 555, + 403 + ], + "spans": [ + { + "bbox": [ + 316, + 349, + 555, + 403 + ], + "type": "text", + "content": "[42] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3d object. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV), pages 9298–9309, 2023. 1, 2" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 404, + 553, + 437 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 404, + 553, + 437 + ], + "spans": [ + { + "bbox": [ + 316, + 404, + 553, + 437 + ], + "type": "text", + "content": "[43] Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv preprint arXiv:2209.03003, 2022. 
4" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 438, + 553, + 503 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 438, + 553, + 503 + ], + "spans": [ + { + "bbox": [ + 316, + 438, + 553, + 503 + ], + "type": "text", + "content": "[44] Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, et al. Wonder3d: Single image to 3d using cross-domain diffusion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 9970-9980, 2024. 2" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 504, + 553, + 525 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 504, + 553, + 525 + ], + "spans": [ + { + "bbox": [ + 316, + 504, + 553, + 525 + ], + "type": "text", + "content": "[45] I Loshchilov. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. 6" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 526, + 555, + 569 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 526, + 555, + 569 + ], + "spans": [ + { + "bbox": [ + 316, + 526, + 555, + 569 + ], + "type": "text", + "content": "[46] Shitong Luo and Wei Hu. Diffusion probabilistic models for 3d point cloud generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 2837-2845, 2021. 2" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 571, + 555, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 571, + 555, + 624 + ], + "spans": [ + { + "bbox": [ + 316, + 571, + 555, + 624 + ], + "type": "text", + "content": "[47] Luke Melas-Kyriazi, Christian Rupprecht, and Andrea Vedaldi. Pc2: Projection-conditioned point cloud diffusion for single-image 3d reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12923-12932, 2023. 2" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 316, + 625, + 553, + 689 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 625, + 553, + 689 + ], + "spans": [ + { + "bbox": [ + 316, + 625, + 553, + 689 + ], + "type": "text", + "content": "[48] Norman Müller, Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Bulo, Peter Kontschieder, and Matthias Nießner. Diffrf: Rendering-guided 3d radiance field diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4328-4338, 2023. 2" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 316, + 691, + 553, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 691, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 691, + 553, + 713 + ], + "type": "text", + "content": "[49] Charlie Nash, Yaroslav Ganin, SM Ali Eslami, and Peter Battaglia. 
*Polygon: An autoregressive generative model of" + } + ] + } + ], + "index": 27 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 733, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 733, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 300, + 733, + 311, + 742 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 29 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 295, + 713 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 76, + 72, + 294, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 76, + 72, + 294, + 95 + ], + "spans": [ + { + "bbox": [ + 76, + 72, + 294, + 95 + ], + "type": "text", + "content": "3d meshes. In International Conference on Machine Learning (ICML), pages 7220-7229. PMLR, 2020. 1, 2" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 96, + 295, + 151 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 96, + 295, + 151 + ], + "spans": [ + { + "bbox": [ + 56, + 96, + 295, + 151 + ], + "type": "text", + "content": "[50] Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. Hologan: Unsupervised learning of 3d representations from natural images. In Proceedings of the IEEE/CVF international conference on computer vision (ICCV), pages 7588–7597, 2019. 1, 2" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 152, + 295, + 196 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 152, + 295, + 196 + ], + "spans": [ + { + "bbox": [ + 56, + 152, + 295, + 196 + ], + "type": "text", + "content": "[51] Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, and Mark Chen. Point-e: A system for generating 3d point clouds from complex prompts. arXiv preprint arXiv:2212.08751, 2022. 2, 6" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 198, + 294, + 251 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 198, + 294, + 251 + ], + "spans": [ + { + "bbox": [ + 56, + 198, + 294, + 251 + ], + "type": "text", + "content": "[52] Michael Niemeyer and Andreas Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 11453-11464, 2021. 2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 254, + 295, + 298 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 254, + 295, + 298 + ], + "spans": [ + { + "bbox": [ + 56, + 254, + 295, + 298 + ], + "type": "text", + "content": "[53] Evangelos Ntavelis, Aliaksandr Siarohin, Kyle Olszewski, Chaoyang Wang, Luc V Gool, and Sergey Tulyakov. Autodecoding latent 3d diffusion models. Advances in Neural Information Processing Systems, 36:67021-67047, 2023. 3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 300, + 294, + 354 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 300, + 294, + 354 + ], + "spans": [ + { + "bbox": [ + 56, + 300, + 294, + 354 + ], + "type": "text", + "content": "[54] Maxime Oquab, Timothee Darcet, Theo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023. 
4, 1" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 355, + 295, + 420 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 355, + 295, + 420 + ], + "spans": [ + { + "bbox": [ + 56, + 355, + 295, + 420 + ], + "type": "text", + "content": "[55] Maxime Aquab, Timothee Darcet, Théo Moutakanni, Huy V Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel HAZIZA, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. Transactions on Machine Learning Research, 2023. 3, 4" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 422, + 295, + 488 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 422, + 295, + 488 + ], + "spans": [ + { + "bbox": [ + 56, + 422, + 295, + 488 + ], + "type": "text", + "content": "[56] Ege Ozguroglu, Ruoshi Liu, Dídac Surís, Dian Chen, Achal Dave, Pavel Tokmakov, and Carl Vondrick. pix2gestalt: Amodal segmentation by synthesizing wholes. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3931-3940. IEEE Computer Society, 2024. 2, 3, 6, 7, 8, 4, 5" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 490, + 295, + 544 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 490, + 295, + 544 + ], + "spans": [ + { + "bbox": [ + 56, + 490, + 295, + 544 + ], + "type": "text", + "content": "[57] Eunbyung Park, Jimei Yang, Ersin Yumer, Duygu Ceylan, and Alexander C Berg. Transformation-grounded image generation network for novel 3d view synthesis. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pages 3500-3509, 2017. 2" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 545, + 295, + 601 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 545, + 295, + 601 + ], + "spans": [ + { + "bbox": [ + 56, + 545, + 295, + 601 + ], + "type": "text", + "content": "[58] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2536-2544. IEEE, 2016. 5" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 602, + 295, + 667 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 602, + 295, + 667 + ], + "spans": [ + { + "bbox": [ + 56, + 602, + 295, + 667 + ], + "type": "text", + "content": "[59] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning (ICML), pages 8748-8763. PmLR, 2021. 6, 2" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 669, + 295, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 669, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 669, + 295, + 713 + ], + "type": "text", + "content": "[60] Barbara Roessle, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Angela Dai, and Matthias Nießner. L3dg: Latent 3d gaussian diffusion. In SIGGRAPH Asia 2024 Conference Papers, pages 1-11, 2024. 
3" + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 73, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 316, + 73, + 555, + 128 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 73, + 555, + 128 + ], + "spans": [ + { + "bbox": [ + 316, + 73, + 555, + 128 + ], + "type": "text", + "content": "[61] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 10684-10695, 2022. 2" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 316, + 129, + 554, + 171 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 129, + 554, + 171 + ], + "spans": [ + { + "bbox": [ + 316, + 129, + 554, + 171 + ], + "type": "text", + "content": "[62] Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. Graf: Generative radiance fields for 3d-aware image synthesis. Advances in Neural Information Processing Systems (NeurIPS), 33:20154-20166, 2020. 2" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 173, + 554, + 226 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 173, + 554, + 226 + ], + "spans": [ + { + "bbox": [ + 316, + 173, + 554, + 226 + ], + "type": "text", + "content": "[63] Ruoxi Shi, Hansheng Chen, Zhuoyang Zhang, Minghua Liu, Chao Xu, Xinyue Wei, Linghao Chen, Chong Zeng, and Hao Su. Zero123++: a single image to consistent multi-view diffusion base model. arXiv preprint arXiv:2310.15110, 2023. 6, 2" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 228, + 554, + 260 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 228, + 554, + 260 + ], + "spans": [ + { + "bbox": [ + 316, + 228, + 554, + 260 + ], + "type": "text", + "content": "[64] Yichun Shi, Peng Wang, Jianglong Ye, Mai Long, Kejie Li, and Xiao Yang. Mvdream: Multi-view diffusion for 3d generation. arXiv preprint arXiv:2308.16512, 2023. 2, 3" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 261, + 554, + 315 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 261, + 554, + 315 + ], + "spans": [ + { + "bbox": [ + 316, + 261, + 554, + 315 + ], + "type": "text", + "content": "[65] Wu Shuang, Youtian Lin, Yifei Zeng, Feihu Zhang, Jingxi Xu, Philip Torr, Xun Cao, and Yao Yao. Direct3d: Scalable image-to-3d generation via 3d latent diffusion transformer. Advances in Neural Information Processing Systems, 37:121859–121881, 2025. 3" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 316, + 554, + 371 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 316, + 554, + 371 + ], + "spans": [ + { + "bbox": [ + 316, + 316, + 554, + 371 + ], + "type": "text", + "content": "[66] J Ryan Shue, Eric Ryan Chan, Ryan Po, Zachary Ankner, Jiajun Wu, and Gordon Wetzstein. 3d neural field generation using triplane diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20875-20886, 2023. 
2" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 372, + 555, + 437 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 372, + 555, + 437 + ], + "spans": [ + { + "bbox": [ + 316, + 372, + 555, + 437 + ], + "type": "text", + "content": "[67] Yawar Siddiqui, Antonio Alliegro, Alexey Artemov, Tatiana Tommasi, Daniele Sirigatti, Vladislav Rosov, Angela Dai, and Matthias Nießner. Meshgpt: Generating triangle meshes with decoder-only transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19615-19625, 2024. 1, 2" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 437, + 554, + 491 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 437, + 554, + 491 + ], + "spans": [ + { + "bbox": [ + 316, + 437, + 554, + 491 + ], + "type": "text", + "content": "[68] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning (ICML), pages 2256-2265. pmlr, 2015. 2" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 493, + 554, + 536 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 493, + 554, + 536 + ], + "spans": [ + { + "bbox": [ + 316, + 493, + 554, + 536 + ], + "type": "text", + "content": "[69] Stefan Stojanov, Anh Thai, and James M Rehg. Using shape to categorize: Low-shot learning with an explicit shape bias. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1798-1808, 2021. 2, 6" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 537, + 554, + 591 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 537, + 554, + 591 + ], + "spans": [ + { + "bbox": [ + 316, + 537, + 554, + 591 + ], + "type": "text", + "content": "[70] Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, et al. The replica dataset: A digital replica of indoor spaces. arXiv preprint arXiv:1906.05797, 2019. 2, 6, 8" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 592, + 554, + 635 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 592, + 554, + 635 + ], + "spans": [ + { + "bbox": [ + 316, + 592, + 554, + 635 + ], + "type": "text", + "content": "[71] David Stutz and Andreas Geiger. Learning 3d shape completion from laser scan data with weak supervision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1955-1964, 2018. 2" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 636, + 554, + 690 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 636, + 554, + 690 + ], + "spans": [ + { + "bbox": [ + 316, + 636, + 554, + 690 + ], + "type": "text", + "content": "[72] Stanislaw Szymanowicz, Christian Rupprecht, and Andrea Vedaldi. Viewset diffusion: (0-)image-conditioned 3d generative models from 2d data. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 8863-8873, 2023. 1, 2, 3" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 691, + 554, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 691, + 554, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 691, + 554, + 713 + ], + "type": "text", + "content": "[73] Stanislaw Szymanowicz, Chrisitian Rupprecht, and Andrea Vedaldi. 
Splatter image: Ultra-fast single-view 3d recon" + } + ] + } + ], + "index": 25 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 732, + 310, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 732, + 310, + 742 + ], + "spans": [ + { + "bbox": [ + 300, + 732, + 310, + 742 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 72, + 295, + 713 + ], + "type": "list", + "angle": 0, + "index": 12, + "blocks": [ + { + "bbox": [ + 77, + 72, + 294, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 77, + 72, + 294, + 106 + ], + "spans": [ + { + "bbox": [ + 77, + 72, + 294, + 106 + ], + "type": "text", + "content": "struction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10208-10217, 2024. 1" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 106, + 295, + 172 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 106, + 295, + 172 + ], + "spans": [ + { + "bbox": [ + 56, + 106, + 295, + 172 + ], + "type": "text", + "content": "[74] Shitao Tang, Jiacheng Chen, Dilin Wang, Chengzhou Tang, Fuyang Zhang, Yuchen Fan, Vikas Chandra, Yasutaka Furukawa, and Rakesh Ranjan. Mvdiffusion++: A dense high-resolution multi-view diffusion model for single or sparse-view 3d object reconstruction. In European Conference on Computer Vision, pages 175-191. Springer, 2024. 2" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 173, + 295, + 216 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 173, + 295, + 216 + ], + "spans": [ + { + "bbox": [ + 56, + 173, + 295, + 216 + ], + "type": "text", + "content": "[75] Arash Vahdat, Francis Williams, Zan Gojcic, Or Litany, Sanja Fidler, Karsten Kreis, et al. Lion: Latent point diffusion models for 3d shape generation. Advances in Neural Information Processing Systems, 35:10021-10039, 2022. 3" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 217, + 294, + 282 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 217, + 294, + 282 + ], + "spans": [ + { + "bbox": [ + 56, + 217, + 294, + 282 + ], + "type": "text", + "content": "[76] Vikram Voleti, Chun-Han Yao, Mark Boss, Adam Letts, David Pankratz, Dmitry Tochilkin, Christian Laforte, Robin Rombach, and Varun Jampani. Sv3d: Novel multi-view synthesis and 3d generation from a single image using latent video diffusion. In European Conference on Computer Vision (ECCV), pages 439-457. Springer, 2024. 1, 2" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 283, + 294, + 327 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 283, + 294, + 327 + ], + "spans": [ + { + "bbox": [ + 56, + 283, + 294, + 327 + ], + "type": "text", + "content": "[77] Daniel Watson, William Chan, Ricardo Martin-Brualla, Jonathan Ho, Andrea Tagliasacchi, and Mohammad Norouzi. Novel view synthesis with diffusion models. arXiv preprint arXiv:2210.04628, 2022. 2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 327, + 294, + 381 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 327, + 294, + 381 + ], + "spans": [ + { + "bbox": [ + 56, + 327, + 294, + 381 + ], + "type": "text", + "content": "[78] Daniel Watson, William Chan, Ricardo Martin Brulla, Jonathan Ho, Andrea Tagliasacchi, and Mohammad Norouzi. 
Novel view synthesis with diffusion models. In The Eleventh International Conference on Learning Representations (ICLR), 2023. 1" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 56, + 383, + 294, + 437 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 383, + 294, + 437 + ], + "spans": [ + { + "bbox": [ + 56, + 383, + 294, + 437 + ], + "type": "text", + "content": "[79] Christopher Wewer, Kevin Raj, Eddy Ilg, Bernt Schiele, and Jan Eric Lenssen. latentsplat: Autoencoding variational gaussians for fast generalizable 3d reconstruction. In European Conference on Computer Vision (ECCV), pages 456-473. Springer, 2024. 2" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 437, + 294, + 491 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 437, + 294, + 491 + ], + "spans": [ + { + "bbox": [ + 56, + 437, + 294, + 491 + ], + "type": "text", + "content": "[80] Zijie Wu, Yaonan Wang, Mingtao Feng, He Xie, and Ajmal Mian. Sketch and text guided diffusion model for colored point cloud generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8929-8939, 2023. 2" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 492, + 294, + 547 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 492, + 294, + 547 + ], + "spans": [ + { + "bbox": [ + 56, + 492, + 294, + 547 + ], + "type": "text", + "content": "[81] Jianfeng Xiang, Zelong Lv, Sicheng Xu, Yu Deng, Ruicheng Wang, Bowen Zhang, Dong Chen, Xin Tong, and Jiaolong Yang. Structured 3d latents for scalable and versatile 3d generation. arXiv preprint arXiv:2412.01506, 2024. 1, 2, 3, 4, 6, 7, 8, 5" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 56, + 548, + 294, + 613 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 548, + 294, + 613 + ], + "spans": [ + { + "bbox": [ + 56, + 548, + 294, + 613 + ], + "type": "text", + "content": "[82] Desai Xie, Jiahao Li, Hao Tan, Xin Sun, Zhixin Shu, Yi Zhou, Sai Bi, Soren Pirk, and Arie E Kaufman. Carve3d: Improving multi-view reconstruction consistency for diffusion models with rl finetuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6369-6379, 2024. 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 56, + 614, + 294, + 658 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 614, + 294, + 658 + ], + "spans": [ + { + "bbox": [ + 56, + 614, + 294, + 658 + ], + "type": "text", + "content": "[83] Katherine Xu, Lingzhi Zhang, and Jianbo Shi. Amodal completion via progressive mixed context diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9099-9109, 2024. 2" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 56, + 658, + 294, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 658, + 294, + 713 + ], + "spans": [ + { + "bbox": [ + 56, + 658, + 294, + 713 + ], + "type": "text", + "content": "[84] Bo Yang, Hongkai Wen, Sen Wang, Ronald Clark, Andrew Markham, and Niki Trigoni. 3d object reconstruction from a single depth view with adversarial learning. In Proceedings of the IEEE international conference on computer vision workshops (ICCVW), pages 679-688, 2017. 
2" + } + ] + } + ], + "index": 11 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 316, + 72, + 553, + 713 + ], + "type": "list", + "angle": 0, + "index": 26, + "blocks": [ + { + "bbox": [ + 316, + 72, + 553, + 126 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 72, + 553, + 126 + ], + "spans": [ + { + "bbox": [ + 316, + 72, + 553, + 126 + ], + "type": "text", + "content": "[85] Guanqi Zhan, Chuanxia Zheng, Weidi Xie, and Andrew Zisserman. Amodal ground truth and completion in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 28003-28013, 2024. 2" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 316, + 128, + 553, + 182 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 128, + 553, + 182 + ], + "spans": [ + { + "bbox": [ + 316, + 128, + 553, + 182 + ], + "type": "text", + "content": "[86] Xiaohang Zhan, Xingang Pan, Bo Dai, Ziwei Liu, Dahua Lin, and Chen Change Loy. Self-supervised scene de- occlusion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 3784-3792, 2020. 2" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 316, + 183, + 553, + 237 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 183, + 553, + 237 + ], + "spans": [ + { + "bbox": [ + 316, + 183, + 553, + 237 + ], + "type": "text", + "content": "[87] Bowen Zhang, Yiji Cheng, Jiaolong Yang, Chunyu Wang, Feng Zhao, Yansong Tang, Dong Chen, and Baining Guo. Gaussiancube: Structuring gaussian splatting using optimal transport for 3d generative modeling. arXiv preprint arXiv:2403.19655, 2024. 2" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 316, + 238, + 553, + 293 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 238, + 553, + 293 + ], + "spans": [ + { + "bbox": [ + 316, + 238, + 553, + 293 + ], + "type": "text", + "content": "[88] Longwen Zhang, Ziyu Wang, Qixuan Zhang, Qiwei Qiu, Anqi Pang, Haoran Jiang, Wei Yang, Lan Xu, and Jingyi Yu. Clay: A controllable large-scale generative model for creating high-quality 3d assets. ACM Transactions on Graphics (TOG), 43(4):1-20, 2024. 1" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 316, + 293, + 553, + 347 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 293, + 553, + 347 + ], + "spans": [ + { + "bbox": [ + 316, + 293, + 553, + 347 + ], + "type": "text", + "content": "[89] Chuanxia Zheng and Andrea Vedaldi. Free3d: Consistent novel view synthesis without 3d representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9720-9731, 2024. 1, 2, 3" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 316, + 349, + 553, + 392 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 349, + 553, + 392 + ], + "spans": [ + { + "bbox": [ + 316, + 349, + 553, + 392 + ], + "type": "text", + "content": "[90] Chuanxia Zheng, Tat-Jen Cham, and Jianfei Cai. Pluralistic image completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1438-1447, 2019. 
5" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 316, + 393, + 553, + 437 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 393, + 553, + 437 + ], + "spans": [ + { + "bbox": [ + 316, + 393, + 553, + 437 + ], + "type": "text", + "content": "[91] Chuanxia Zheng, Duy-Son Dao, Guoxian Song, Tat-Jen Cham, and Jianfei Cai. Visiting the invisible: Layer-by-layer completed scene decomposition. International Journal of Computer Vision (IJCV), 129:3195-3215, 2021. 2" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 437, + 553, + 491 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 437, + 553, + 491 + ], + "spans": [ + { + "bbox": [ + 316, + 437, + 553, + 491 + ], + "type": "text", + "content": "[92] Chuanxia Zheng, Tat-Jen Cham, Jianfei Cai, and Dinh Phung. Bridging global context interactions for high-fidelity image completion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pages 11512-11522, 2022. 4" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 316, + 492, + 553, + 547 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 492, + 553, + 547 + ], + "spans": [ + { + "bbox": [ + 316, + 492, + 553, + 547 + ], + "type": "text", + "content": "[93] Xiao Zheng, Xiaoshui Huang, Guofeng Mei, Yuenan Hou, Zhaoyang Lyu, Bo Dai, Wanli Ouyang, and Yongshun Gong. Point cloud pre-training with diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22935-22945, 2024. 2" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 548, + 553, + 592 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 548, + 553, + 592 + ], + "spans": [ + { + "bbox": [ + 316, + 548, + 553, + 592 + ], + "type": "text", + "content": "[94] Xin-Yang Zheng, Hao Pan, Yu-Xiao Guo, Xin Tong, and Yang Liu. Mvd^2: Efficient multiview 3d reconstruction for multiview diffusion. In ACM SIGGRAPH 2024 Conference Papers, pages 1-11, 2024. 2" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 316, + 592, + 553, + 635 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 592, + 553, + 635 + ], + "spans": [ + { + "bbox": [ + 316, + 592, + 553, + 635 + ], + "type": "text", + "content": "[95] Linqi Zhou, Yilun Du, and Jiajun Wu. 3d shape generation and completion through point-voxel diffusion. In Proceedings of the IEEE/CVF international conference on computer vision, pages 5826-5835, 2021. 2" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 636, + 553, + 690 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 636, + 553, + 690 + ], + "spans": [ + { + "bbox": [ + 316, + 636, + 553, + 690 + ], + "type": "text", + "content": "[96] Jun-Yan Zhu, Zhoutong Zhang, Chengkai Zhang, Jiajun Wu, Antonio Torralba, Josh Tenenbaum, and Bill Freeman. Visual object networks: Image generation with disentangled 3d representations. Advances in neural information processing systems (NeurIPS), 31, 2018. 2" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 316, + 691, + 553, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 691, + 553, + 713 + ], + "spans": [ + { + "bbox": [ + 316, + 691, + 553, + 713 + ], + "type": "text", + "content": "[97] Zi-Xin Zou, Zhipeng Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Yan-Pei Cao, and Song-Hai Zhang. 
Triplane" + } + ] + } + ], + "index": 25 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 27 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 75, + 72, + 297, + 118 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 75, + 72, + 297, + 118 + ], + "spans": [ + { + "bbox": [ + 75, + 72, + 297, + 118 + ], + "type": "text", + "content": "meets gaussian splatting: Fast and generalizable single-view 3d reconstruction with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10324-10335, 2024. 2" + } + ] + } + ], + "index": 0 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "spans": [ + { + "bbox": [ + 300, + 732, + 311, + 742 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 1 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 101, + 68, + 509, + 110 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 101, + 68, + 509, + 110 + ], + "spans": [ + { + "bbox": [ + 101, + 68, + 509, + 110 + ], + "type": "text", + "content": "Amodal3R: Amodal 3D Reconstruction from Occluded 2D Images Supplementary Material" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 123, + 194, + 136 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 123, + 194, + 136 + ], + "spans": [ + { + "bbox": [ + 55, + 123, + 194, + 136 + ], + "type": "text", + "content": "A. Implementation Details" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 143, + 155, + 156 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 143, + 155, + 156 + ], + "spans": [ + { + "bbox": [ + 55, + 143, + 155, + 156 + ], + "type": "text", + "content": "A.1. Network Design" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 159, + 296, + 232 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 159, + 296, + 232 + ], + "spans": [ + { + "bbox": [ + 55, + 159, + 296, + 232 + ], + "type": "text", + "content": "We adopt the network design in TRELLIS [81] to load the pre-trained image-to-3D weights and integrate the mask-weighted cross-attention mechanism to each DiT block (24 blocks in total). And each image-conditioned cross-attention layer is immediately followed by an occlusion-aware cross-attention layer." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 232, + 296, + 555 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 55, + 232, + 296, + 388 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 232, + 296, + 388 + ], + "spans": [ + { + "bbox": [ + 55, + 232, + 296, + 388 + ], + "type": "text", + "content": "(a) Patchify and weight of visibility/occlusion mask The input condition image has a resolution of " + }, + { + "bbox": [ + 55, + 232, + 296, + 388 + ], + "type": "inline_equation", + "content": "512 \\times 512" + }, + { + "bbox": [ + 55, + 232, + 296, + 388 + ], + "type": "text", + "content": ", which is resized to " + }, + { + "bbox": [ + 55, + 232, + 296, + 388 + ], + "type": "inline_equation", + "content": "518 \\times 518" + }, + { + "bbox": [ + 55, + 232, + 296, + 388 + ], + "type": "text", + "content": " to facilitate splitting into patches of size " + }, + { + "bbox": [ + 55, + 232, + 296, + 388 + ], + "type": "inline_equation", + "content": "14 \\times 14" + }, + { + "bbox": [ + 55, + 232, + 296, + 388 + ], + "type": "text", + "content": ", as required by DINOv2 [54]. The resulting condition is then flattened into a tensor " + }, + { + "bbox": [ + 55, + 232, + 296, + 388 + ], + "type": "inline_equation", + "content": "\\mathbf{c}_{dino} \\in \\mathbb{R}^{1374 \\times 1024}" + }, + { + "bbox": [ + 55, + 232, + 296, + 388 + ], + "type": "text", + "content": ", where the sequence length corresponds to " + }, + { + "bbox": [ + 55, + 232, + 296, + 388 + ], + "type": "inline_equation", + "content": "37 \\times 37" + }, + { + "bbox": [ + 55, + 232, + 296, + 388 + ], + "type": "text", + "content": " patches plus 1 CLS token and 4 register tokens. To better align the visibility and occlusion masks with the DINOv2 features, we first split the masks into patches of the same size, then calculate the weight score for each patch using Eq. 4 and Eq. 5. The final " + }, + { + "bbox": [ + 55, + 232, + 296, + 388 + ], + "type": "inline_equation", + "content": "\\mathbf{c}_{vis} \\in \\mathbb{R}^{1374 \\times 1}" + }, + { + "bbox": [ + 55, + 232, + 296, + 388 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 55, + 232, + 296, + 388 + ], + "type": "inline_equation", + "content": "\\mathbf{c}_{occ} \\in \\mathbb{R}^{1374 \\times 1}" + }, + { + "bbox": [ + 55, + 232, + 296, + 388 + ], + "type": "text", + "content": " are obtained by flattening the weight scores, with a value of 1 assigned to the CLS and register token positions." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 388, + 296, + 459 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 388, + 296, + 459 + ], + "spans": [ + { + "bbox": [ + 55, + 388, + 296, + 459 + ], + "type": "text", + "content": "(b) Occlusion-aware cross-attention layer. We set the feature dimension of the occlusion-aware cross-attention layers to 1024, matching that of the image-conditioned cross-attention layers. To maintain consistent dimensions, we replicate the flattened occlusion masks to form a tensor " + }, + { + "bbox": [ + 55, + 388, + 296, + 459 + ], + "type": "inline_equation", + "content": "\\mathbf{c}_{\\text{occ\\_stack}} \\in \\mathbb{R}^{1374 \\times 1024}" + }, + { + "bbox": [ + 55, + 388, + 296, + 459 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 460, + 296, + 555 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 460, + 296, + 555 + ], + "spans": [ + { + "bbox": [ + 55, + 460, + 296, + 555 + ], + "type": "text", + "content": "(c) Multi-head Cross-Attention. Our mask-weighted multi-head cross-attention (MHA) layer, which is implemented to encourage the model to focus its attention on the visible parts of the object, is an extension of the cross-attention described in the main paper. Specifically, " + }, + { + "bbox": [ + 55, + 460, + 296, + 555 + ], + "type": "inline_equation", + "content": "H" + }, + { + "bbox": [ + 55, + 460, + 296, + 555 + ], + "type": "text", + "content": " heads are run in parallel, resulting in " + }, + { + "bbox": [ + 55, + 460, + 296, + 555 + ], + "type": "inline_equation", + "content": "H" + }, + { + "bbox": [ + 55, + 460, + 296, + 555 + ], + "type": "text", + "content": " attention scores. For mask-weighted attention mechanism, we impose " + }, + { + "bbox": [ + 55, + 460, + 296, + 555 + ], + "type": "inline_equation", + "content": "c_{\\mathrm{vis}}" + }, + { + "bbox": [ + 55, + 460, + 296, + 555 + ], + "type": "text", + "content": " simultaneously to each head:" + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 109, + 576, + 294, + 589 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 109, + 576, + 294, + 589 + ], + "spans": [ + { + "bbox": [ + 109, + 576, + 294, + 589 + ], + "type": "interline_equation", + "content": "\\boldsymbol {A} _ {h} = \\operatorname {s o f t m a x} \\left(\\boldsymbol {S} _ {h} + \\log c _ {\\text {v i s}}\\right), \\tag {A.1}", + "image_path": "376f0f23a57fbbc98b1b8804f0233127bae5c6dec61dd3877280fae0d561b452.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 109, + 591, + 295, + 604 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 109, + 591, + 295, + 604 + ], + "spans": [ + { + "bbox": [ + 109, + 591, + 295, + 604 + ], + "type": "interline_equation", + "content": "\\mathrm {M H A} = \\left[ \\boldsymbol {A} _ {1} \\boldsymbol {v}; \\boldsymbol {A} _ {2} \\boldsymbol {v}; \\dots ; \\boldsymbol {A} _ {H} \\boldsymbol {v} \\right] \\tag {A.2}", + "image_path": "bec95e9d5492096fcc6673c34cc7389d626caf677639f6d0de0401d3d1b09250.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 55, + 612, + 157, + 624 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 612, + 157, + 624 + ], + "spans": [ + { + "bbox": [ + 55, + 612, + 157, + 624 + ], + "type": "text", + "content": "A.2. Training Details" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 55, + 629, + 296, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 629, + 296, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 629, + 296, + 713 + ], + "type": "text", + "content": "(a) Pre-trained model loading. While TRELLIS is split into multiple modules, in our work we only train the sparse structure flow transformer and the structured latent flow transformer (see the overview figure where the \"fire\" symbols indicate the parts that are fine-tuned, and \"snowflake\" symbols indicate that we directly use the pretrained weights)." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 313, + 124, + 555, + 245 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 124, + 555, + 245 + ], + "spans": [ + { + "bbox": [ + 313, + 124, + 555, + 245 + ], + "type": "text", + "content": "(b) Data Augmentation. As described in Sec. 3.3, we generate random masks during training for data augmentation. Specifically, we begin by randomly drawing 1 to 3 lines, circles, and ellipses in the mask image. Next, to ensure these regions connect — thereby better simulating real-world occlusions, where mask regions are typically not highly fragmented — we randomly add 3 to 7 rectangular regions that have undergone an expansion operation. This results in a stable masking of the objects in the training data. Example inputs are presented in Fig. A.1." + } + ] + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 319, + 268, + 395, + 343 + ], + "blocks": [ + { + "bbox": [ + 319, + 268, + 395, + 343 + ], + "lines": [ + { + "bbox": [ + 319, + 268, + 395, + 343 + ], + "spans": [ + { + "bbox": [ + 319, + 268, + 395, + 343 + ], + "type": "image", + "image_path": "b04b0527dc65b40fe900a0bb5245fee2ca569c931b5126483f78c3e1eb5d89d7.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 398, + 268, + 471, + 343 + ], + "blocks": [ + { + "bbox": [ + 398, + 268, + 471, + 343 + ], + "lines": [ + { + "bbox": [ + 398, + 268, + 471, + 343 + ], + "spans": [ + { + "bbox": [ + 398, + 268, + 471, + 343 + ], + "type": "image", + "image_path": "e473421bda3a60dfa3a9ac5d71ce8d3d0be3247fe5c3eb8a7b8a514a7ac65a4c.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 475, + 268, + 549, + 343 + ], + "blocks": [ + { + "bbox": [ + 475, + 268, + 549, + 343 + ], + "lines": [ + { + "bbox": [ + 475, + 268, + 549, + 343 + ], + "spans": [ + { + "bbox": [ + 475, + 268, + 549, + 343 + ], + "type": "image", + "image_path": "705a3efa88479430786188109444797316a4b50d5391f193ea586c1243e89535.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 320, + 345, + 394, + 420 + ], + "blocks": [ + { + "bbox": [ + 320, + 345, + 394, + 420 + ], + "lines": [ + { + "bbox": [ + 320, + 345, + 394, + 420 + ], + "spans": [ + { + "bbox": [ + 320, + 345, + 394, + 420 + ], + "type": "image", + "image_path": "d62f7a0625431a474dc72369f8aeddce76336606025ee6c862bf83766204c772.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 398, + 345, + 471, + 420 + ], + "blocks": [ + { + "bbox": [ + 398, + 345, + 471, + 420 + ], + "lines": [ + { + "bbox": [ + 398, + 345, + 471, + 420 + ], + "spans": [ + { + "bbox": [ + 398, + 345, + 471, + 420 + ], + "type": "image", + "image_path": "c1883fbc94e049a66af85bf3efd9200fd3ffaf4eff7aa042007643987dfcbbbf.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 475, + 345, + 549, + 420 + ], + "blocks": [ + { + "bbox": [ + 475, + 345, + 549, + 420 + ], + "lines": [ + { + "bbox": [ + 475, + 345, + 549, + 420 + ], + "spans": [ + { + "bbox": [ + 475, + 345, + 549, + 420 + ], + "type": "image", + "image_path": "08394c2eb97d7c0453e361592e75fef7fdc4a1a8791d4a1d772760388c47551c.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": 
"image_body" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 320, + 421, + 394, + 496 + ], + "blocks": [ + { + "bbox": [ + 320, + 421, + 394, + 496 + ], + "lines": [ + { + "bbox": [ + 320, + 421, + 394, + 496 + ], + "spans": [ + { + "bbox": [ + 320, + 421, + 394, + 496 + ], + "type": "image", + "image_path": "596129c8c6c42a54370e7e075fff5cdb848c8ad31815f9ac36baa3aa8e493904.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 331, + 503, + 384, + 514 + ], + "lines": [ + { + "bbox": [ + 331, + 503, + 384, + 514 + ], + "spans": [ + { + "bbox": [ + 331, + 503, + 384, + 514 + ], + "type": "text", + "content": "Original image" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_caption" + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 398, + 421, + 471, + 496 + ], + "blocks": [ + { + "bbox": [ + 398, + 421, + 471, + 496 + ], + "lines": [ + { + "bbox": [ + 398, + 421, + 471, + 496 + ], + "spans": [ + { + "bbox": [ + 398, + 421, + 471, + 496 + ], + "type": "image", + "image_path": "ac1fecb53dd177f8397f60a2a1b0ae32e3d2d0f89ebaa3ecb8d06a193b2f9337.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 408, + 503, + 460, + 514 + ], + "lines": [ + { + "bbox": [ + 408, + 503, + 460, + 514 + ], + "spans": [ + { + "bbox": [ + 408, + 503, + 460, + 514 + ], + "type": "text", + "content": "Masked image" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_caption" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 475, + 421, + 549, + 496 + ], + "blocks": [ + { + "bbox": [ + 475, + 421, + 549, + 496 + ], + "lines": [ + { + "bbox": [ + 475, + 421, + 549, + 496 + ], + "spans": [ + { + "bbox": [ + 475, + 421, + 549, + 496 + ], + "type": "image", + "image_path": "e10d9e59eaa16897267857117a05853cf8cbb446ccfa5611419002eeb268c0bd.jpg" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 485, + 503, + 541, + 513 + ], + "lines": [ + { + "bbox": [ + 485, + 503, + 541, + 513 + ], + "spans": [ + { + "bbox": [ + 485, + 503, + 541, + 513 + ], + "type": "text", + "content": "Occlusion mask" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 313, + 522, + 555, + 556 + ], + "lines": [ + { + "bbox": [ + 313, + 522, + 555, + 556 + ], + "spans": [ + { + "bbox": [ + 313, + 522, + 555, + 556 + ], + "type": "text", + "content": "Figure A.1. Examples of random mask generation. The visible areas are shown in white, occluded areas in gray and background in black." + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_caption" + } + ], + "index": 21 + }, + { + "bbox": [ + 314, + 592, + 419, + 604 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 592, + 419, + 604 + ], + "spans": [ + { + "bbox": [ + 314, + 592, + 419, + 604 + ], + "type": "text", + "content": "A.3. Inference Details" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 313, + 613, + 555, + 713 + ], + "type": "list", + "angle": 0, + "index": 29, + "blocks": [ + { + "bbox": [ + 313, + 613, + 555, + 662 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 613, + 555, + 662 + ], + "spans": [ + { + "bbox": [ + 313, + 613, + 555, + 662 + ], + "type": "text", + "content": "(a) 3D-consistent mask ratio. For the multi-view 3D-consistent masks described in Sec. 
3.3, we set the mask ratio to a random number between 0.4 and 0.6 for each object, which results in a variety of reasonable mask areas." + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 313, + 665, + 555, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 665, + 555, + 713 + ], + "spans": [ + { + "bbox": [ + 313, + 665, + 555, + 713 + ], + "type": "text", + "content": "(b) Time consumption. Despite the introduction of additional cross-attention layers, our inference time remains comparable to that of the baselines. Amodal3R can generate and render each object in under 10 seconds." + } + ] + } + ], + "index": 28 + } + ], + "sub_type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "text", + "content": "1" + } + ] + } + ], + "index": 30 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 67, + 76, + 116, + 126 + ], + "blocks": [ + { + "bbox": [ + 67, + 76, + 116, + 126 + ], + "lines": [ + { + "bbox": [ + 67, + 76, + 116, + 126 + ], + "spans": [ + { + "bbox": [ + 67, + 76, + 116, + 126 + ], + "type": "image", + "image_path": "9c2ef8895af5bc56a4e22ac31ba78caceebce58eaa6b1587f72aa517f01ca087.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 70, + 129, + 119, + 187 + ], + "blocks": [ + { + "bbox": [ + 70, + 129, + 119, + 187 + ], + "lines": [ + { + "bbox": [ + 70, + 129, + 119, + 187 + ], + "spans": [ + { + "bbox": [ + 70, + 129, + 119, + 187 + ], + "type": "image", + "image_path": "615aab167763d7d564aa2c043f298f360c03cd068519012f496cea56c402d227.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 67, + 190, + 122, + 201 + ], + "lines": [ + { + "bbox": [ + 67, + 190, + 122, + 201 + ], + "spans": [ + { + "bbox": [ + 67, + 190, + 122, + 201 + ], + "type": "text", + "content": "Occluded input" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 135, + 77, + 175, + 126 + ], + "blocks": [ + { + "bbox": [ + 135, + 77, + 175, + 126 + ], + "lines": [ + { + "bbox": [ + 135, + 77, + 175, + 126 + ], + "spans": [ + { + "bbox": [ + 135, + 77, + 175, + 126 + ], + "type": "image", + "image_path": "92616bc57210944caf7bac8ffcd90b9a67a8a7c6dbde9f7f2aeb54b6b08db6eb.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 175, + 78, + 219, + 126 + ], + "blocks": [ + { + "bbox": [ + 175, + 78, + 219, + 126 + ], + "lines": [ + { + "bbox": [ + 175, + 78, + 219, + 126 + ], + "spans": [ + { + "bbox": [ + 175, + 78, + 219, + 126 + ], + "type": "image", + "image_path": "9f79bf6a3e693fc48d5f856d4bbe95c8b716127d1e40aa554ba0a670ae682e57.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 128, + 131, + 175, + 180 + ], + "blocks": [ + { + "bbox": [ + 128, + 131, + 175, + 180 + ], + "lines": [ + { + "bbox": [ + 128, + 131, + 175, + 180 + ], + "spans": [ + { + "bbox": [ + 128, + 131, + 175, + 180 + ], + "type": "image", + "image_path": "35269d85c0ccd1276b1a039bef54ae08104a9b17d31f6d3822c4377144211c03.jpg" + } + ] + } + ], + "index": 5, + 
"angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 137, + 190, + 217, + 201 + ], + "lines": [ + { + "bbox": [ + 137, + 190, + 217, + 201 + ], + "spans": [ + { + "bbox": [ + 137, + 190, + 217, + 201 + ], + "type": "text", + "content": "GaussianAnything [37]" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 175, + 133, + 225, + 181 + ], + "blocks": [ + { + "bbox": [ + 175, + 133, + 225, + 181 + ], + "lines": [ + { + "bbox": [ + 175, + 133, + 225, + 181 + ], + "spans": [ + { + "bbox": [ + 175, + 133, + 225, + 181 + ], + "type": "image", + "image_path": "6a37b06161e088662aaa499b14ba3cbed12315720d8ca0de6d5770dfa5caa4fd.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 141, + 210, + 467, + 221 + ], + "lines": [ + { + "bbox": [ + 141, + 210, + 467, + 221 + ], + "spans": [ + { + "bbox": [ + 141, + 210, + 467, + 221 + ], + "type": "text", + "content": "Figure A.2. Examples using occluded images directly as the input of baseline models." + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 233, + 77, + 276, + 126 + ], + "blocks": [ + { + "bbox": [ + 233, + 77, + 276, + 126 + ], + "lines": [ + { + "bbox": [ + 233, + 77, + 276, + 126 + ], + "spans": [ + { + "bbox": [ + 233, + 77, + 276, + 126 + ], + "type": "image", + "image_path": "df95bfe63946dca7058cf2d4d385bca8b0cbc56bee3a96355da3c17b7da46a7d.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 240, + 130, + 278, + 176 + ], + "blocks": [ + { + "bbox": [ + 240, + 130, + 278, + 176 + ], + "lines": [ + { + "bbox": [ + 240, + 130, + 278, + 176 + ], + "spans": [ + { + "bbox": [ + 240, + 130, + 278, + 176 + ], + "type": "image", + "image_path": "2f2a03cbbf9479eaf62690774c6f7f0ad40af3149594ddf1f89c57b4a6660468.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 255, + 190, + 297, + 201 + ], + "lines": [ + { + "bbox": [ + 255, + 190, + 297, + 201 + ], + "spans": [ + { + "bbox": [ + 255, + 190, + 297, + 201 + ], + "type": "text", + "content": "Real3D [28]" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 284, + 76, + 323, + 125 + ], + "blocks": [ + { + "bbox": [ + 284, + 76, + 323, + 125 + ], + "lines": [ + { + "bbox": [ + 284, + 76, + 323, + 125 + ], + "spans": [ + { + "bbox": [ + 284, + 76, + 323, + 125 + ], + "type": "image", + "image_path": "876ecbf2904d5d5ba41a09e1642ef3a94b1d577e0832f68e3d8210ceb08e9465.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 287, + 128, + 324, + 177 + ], + "blocks": [ + { + "bbox": [ + 287, + 128, + 324, + 177 + ], + "lines": [ + { + "bbox": [ + 287, + 128, + 324, + 177 + ], + "spans": [ + { + "bbox": [ + 287, + 128, + 324, + 177 + ], + "type": "image", + "image_path": "a78b1617d2c8fa14a4ed066af6b7ae056aa1f5cab50f8367db695bc937d65401.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 359, + 190, + 411, + 201 + ], + "lines": [ + { + "bbox": [ + 359, + 190, + 411, + 201 + ], + "spans": [ + { + "bbox": [ + 359, + 190, + 411, + 201 + ], + "type": "text", + "content": "TRELLIS [81]" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + } + ], + 
"index": 11 + }, + { + "type": "image", + "bbox": [ + 340, + 75, + 384, + 127 + ], + "blocks": [ + { + "bbox": [ + 340, + 75, + 384, + 127 + ], + "lines": [ + { + "bbox": [ + 340, + 75, + 384, + 127 + ], + "spans": [ + { + "bbox": [ + 340, + 75, + 384, + 127 + ], + "type": "image", + "image_path": "78fa239609ad9b322309c6000a6d97b1fa4731ce41715a07b7d7a24d1adcdb1b.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 341, + 133, + 378, + 177 + ], + "blocks": [ + { + "bbox": [ + 341, + 133, + 378, + 177 + ], + "lines": [ + { + "bbox": [ + 341, + 133, + 378, + 177 + ], + "spans": [ + { + "bbox": [ + 341, + 133, + 378, + 177 + ], + "type": "image", + "image_path": "8c3e4a89f0670e172fadded10975eee54aebfa304b9e6815e71adc3cdc6a4b2e.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 390, + 76, + 432, + 126 + ], + "blocks": [ + { + "bbox": [ + 390, + 76, + 432, + 126 + ], + "lines": [ + { + "bbox": [ + 390, + 76, + 432, + 126 + ], + "spans": [ + { + "bbox": [ + 390, + 76, + 432, + 126 + ], + "type": "image", + "image_path": "313f3ceb6f3d0da0644f0be2a0c948874779d72b805c97e3f0e8afc23b6ca136.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 390, + 134, + 426, + 177 + ], + "blocks": [ + { + "bbox": [ + 390, + 134, + 426, + 177 + ], + "lines": [ + { + "bbox": [ + 390, + 134, + 426, + 177 + ], + "spans": [ + { + "bbox": [ + 390, + 134, + 426, + 177 + ], + "type": "image", + "image_path": "df3d6e97efca84d18606fb24d27a150c0e19b7bed6dc0235a72e0f460a05c3a2.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 449, + 76, + 492, + 128 + ], + "blocks": [ + { + "bbox": [ + 449, + 76, + 492, + 128 + ], + "lines": [ + { + "bbox": [ + 449, + 76, + 492, + 128 + ], + "spans": [ + { + "bbox": [ + 449, + 76, + 492, + 128 + ], + "type": "image", + "image_path": "117bbb669d8338533595438417c5cd2c4062a58aa591ef0dbef7be0f3f0975ca.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 446, + 134, + 493, + 185 + ], + "blocks": [ + { + "bbox": [ + 446, + 134, + 493, + 185 + ], + "lines": [ + { + "bbox": [ + 446, + 134, + 493, + 185 + ], + "spans": [ + { + "bbox": [ + 446, + 134, + 493, + 185 + ], + "type": "image", + "image_path": "12e4b12feb71553492b2bba143a90efbae32c993ab3feaddbf364d9d0b4da9f1.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 469, + 190, + 531, + 200 + ], + "lines": [ + { + "bbox": [ + 469, + 190, + 531, + 200 + ], + "spans": [ + { + "bbox": [ + 469, + 190, + 531, + 200 + ], + "type": "text", + "content": "Amodal3R (Ours)" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_caption" + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 497, + 76, + 540, + 128 + ], + "blocks": [ + { + "bbox": [ + 497, + 76, + 540, + 128 + ], + "lines": [ + { + "bbox": [ + 497, + 76, + 540, + 128 + ], + "spans": [ + { + "bbox": [ + 497, + 76, + 540, + 128 + ], + "type": "image", + "image_path": "1e0fbdc5fba5f402fc4063a139413b5100bc2a80dcba73fd638f35d60287d4e7.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 498, + 136, + 548, + 186 + ], + "blocks": 
[ + { + "bbox": [ + 498, + 136, + 548, + 186 + ], + "lines": [ + { + "bbox": [ + 498, + 136, + 548, + 186 + ], + "spans": [ + { + "bbox": [ + 498, + 136, + 548, + 186 + ], + "type": "image", + "image_path": "4f77cadcb832bb42842ceb4bc7c97687976ee5d613d7316f2b53f0858f0814fd.jpg" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_body" + } + ], + "index": 21 + }, + { + "bbox": [ + 55, + 242, + 181, + 255 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 242, + 181, + 255 + ], + "spans": [ + { + "bbox": [ + 55, + 242, + 181, + 255 + ], + "type": "text", + "content": "B. Experimental Details" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 55, + 261, + 173, + 273 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 261, + 173, + 273 + ], + "spans": [ + { + "bbox": [ + 55, + 261, + 173, + 273 + ], + "type": "text", + "content": "B.1. Evaluation Protocol" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 55, + 278, + 295, + 337 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 278, + 295, + 337 + ], + "spans": [ + { + "bbox": [ + 55, + 278, + 295, + 337 + ], + "type": "text", + "content": "We evaluate the results using Google Scanned Objects (GSO) (1,030 objects) [16] and a randomly sampled subset of Toys4K [69] containing 1,500 objects. Here, we provide additional details regarding the computation of our evaluation metrics." + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 55, + 338, + 295, + 517 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 338, + 295, + 517 + ], + "spans": [ + { + "bbox": [ + 55, + 338, + 295, + 517 + ], + "type": "text", + "content": "(a) Rendering quality and semantic consistency alignment To assess overall rendering quality, we compute the Fréchet Inception Distance (FID) [21] and Kernel Inception Distance (KID) [3]. Moreover, we evaluate semantic consistency using the CLIP score [59] by measuring the cosine similarity between the CLIP features of each generated image and its corresponding ground truth. For each object, we render 4 views using cameras with yaw angles of " + }, + { + "bbox": [ + 55, + 338, + 295, + 517 + ], + "type": "inline_equation", + "content": "\\{0^{\\circ}, 90^{\\circ}, 180^{\\circ}, 270^{\\circ}\\}" + }, + { + "bbox": [ + 55, + 338, + 295, + 517 + ], + "type": "text", + "content": " and a pitch angle of " + }, + { + "bbox": [ + 55, + 338, + 295, + 517 + ], + "type": "inline_equation", + "content": "30^{\\circ}" + }, + { + "bbox": [ + 55, + 338, + 295, + 517 + ], + "type": "text", + "content": ". The camera is positioned with a radius of 2.0 and looks at the origin with a FoV of " + }, + { + "bbox": [ + 55, + 338, + 295, + 517 + ], + "type": "inline_equation", + "content": "40^{\\circ}" + }, + { + "bbox": [ + 55, + 338, + 295, + 517 + ], + "type": "text", + "content": ", consistent with TRELLIS [81]. While FID and KID are calculated between the ground truth and generated sets (6,000 images for Toys4K and 4,120 images for GSO), the CLIP score is calculated in a pair-wise manner, and we report the mean value to evaluate semantic consistency." 
+ } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 55, + 517, + 295, + 601 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 517, + 295, + 601 + ], + "spans": [ + { + "bbox": [ + 55, + 517, + 295, + 601 + ], + "type": "text", + "content": "(b) Geometry quality For 3D geometry evaluation, we adopt Point cloud FID (P-FID) [51], Coverage Score (COV), and Minimum Matching Distance (MMD) using Chamfer Distance (CD). Following previous work [37, 51, 81], we sampled 4096 points from each GT/generated point cloud, which are obtained from the unprojected multi-view depth maps using the farthest point sampling." + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 55, + 610, + 141, + 623 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 610, + 141, + 623 + ], + "spans": [ + { + "bbox": [ + 55, + 610, + 141, + 623 + ], + "type": "text", + "content": "C. More Results" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 55, + 630, + 295, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 630, + 295, + 666 + ], + "spans": [ + { + "bbox": [ + 55, + 630, + 295, + 666 + ], + "type": "text", + "content": "In this section, we provide additional qualitative examples and comparison results to further demonstrate the performance of our Amodal3R." + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 55, + 672, + 224, + 685 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 672, + 224, + 685 + ], + "spans": [ + { + "bbox": [ + 55, + 672, + 224, + 685 + ], + "type": "text", + "content": "C.1. Baselines using occluded input" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 55, + 689, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 689, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 689, + 295, + 713 + ], + "type": "text", + "content": "We have stated in the main paper that \"occluded images will lead to incomplete or broken structures\" in current" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 313, + 243, + 554, + 302 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 243, + 554, + 302 + ], + "spans": [ + { + "bbox": [ + 313, + 243, + 554, + 302 + ], + "type": "text", + "content": "3D generative models. Here, we provide examples where pix2gestalt is omitted and the occluded images are directly used as the input. As shown in Fig. A.2, when baseline models receive images of partially visible objects as input, they often fail to recover complete and intact 3D assets." + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 313, + 312, + 493, + 325 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 312, + 493, + 325 + ], + "spans": [ + { + "bbox": [ + 313, + 312, + 493, + 325 + ], + "type": "text", + "content": "C.2. More single-view to 3D examples" + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 313, + 331, + 554, + 415 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 331, + 554, + 415 + ], + "spans": [ + { + "bbox": [ + 313, + 331, + 554, + 415 + ], + "type": "text", + "content": "Due to the page restriction, we only provide limited examples in the main paper. Here we visualize more examples of single-view to 3D to further demonstrate the effectiveness of our method in Fig. C.4. The results show that compared with the 2D amodal completion + 3D generation baselines, our Amodal3R yields higher quality 3D assets across multiple categories." 
+ } + ] + } + ], + "index": 35 + }, + { + "bbox": [ + 314, + 423, + 492, + 437 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 314, + 423, + 492, + 437 + ], + "spans": [ + { + "bbox": [ + 314, + 423, + 492, + 437 + ], + "type": "text", + "content": "C.3. More multi-view to 3D examples" + } + ] + } + ], + "index": 36 + }, + { + "type": "image", + "bbox": [ + 348, + 455, + 517, + 519 + ], + "blocks": [ + { + "bbox": [ + 348, + 455, + 517, + 519 + ], + "lines": [ + { + "bbox": [ + 348, + 455, + 517, + 519 + ], + "spans": [ + { + "bbox": [ + 348, + 455, + 517, + 519 + ], + "type": "image", + "image_path": "f8245dfc94ff1585afa4578515cf4ebfaec3862cbc99047d3ff96a7f875b984d.jpg" + } + ] + } + ], + "index": 37, + "angle": 0, + "type": "image_body" + } + ], + "index": 37 + }, + { + "type": "image", + "bbox": [ + 348, + 528, + 470, + 589 + ], + "blocks": [ + { + "bbox": [ + 384, + 520, + 477, + 529 + ], + "lines": [ + { + "bbox": [ + 384, + 520, + 477, + 529 + ], + "spans": [ + { + "bbox": [ + 384, + 520, + 477, + 529 + ], + "type": "text", + "content": "Multi-view Occluded input" + } + ] + } + ], + "index": 38, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 348, + 528, + 470, + 589 + ], + "lines": [ + { + "bbox": [ + 348, + 528, + 470, + 589 + ], + "spans": [ + { + "bbox": [ + 348, + 528, + 470, + 589 + ], + "type": "image", + "image_path": "52930762b79c0e57ccf7fd99bf51f14ffd04d0b75f4411c8357e139aeb338c72.jpg" + } + ] + } + ], + "index": 39, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 409, + 590, + 462, + 600 + ], + "lines": [ + { + "bbox": [ + 409, + 590, + 462, + 600 + ], + "spans": [ + { + "bbox": [ + 409, + 590, + 462, + 600 + ], + "type": "text", + "content": "pix2gestalt [56]" + } + ] + } + ], + "index": 40, + "angle": 0, + "type": "image_caption" + } + ], + "index": 39 + }, + { + "type": "image", + "bbox": [ + 348, + 601, + 515, + 664 + ], + "blocks": [ + { + "bbox": [ + 348, + 601, + 515, + 664 + ], + "lines": [ + { + "bbox": [ + 348, + 601, + 515, + 664 + ], + "spans": [ + { + "bbox": [ + 348, + 601, + 515, + 664 + ], + "type": "image", + "image_path": "9f90fd4f57664dfc4099d96b880e1abfb3de5b968deaa8ca18b1f426ca59759d.jpg" + } + ] + } + ], + "index": 41, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 378, + 666, + 493, + 675 + ], + "lines": [ + { + "bbox": [ + 378, + 666, + 493, + 675 + ], + "spans": [ + { + "bbox": [ + 378, + 666, + 493, + 675 + ], + "type": "text", + "content": "pix2gestalt [56] + Zero123++ [63]" + } + ] + } + ], + "index": 42, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 313, + 688, + 554, + 710 + ], + "lines": [ + { + "bbox": [ + 313, + 688, + 554, + 710 + ], + "spans": [ + { + "bbox": [ + 313, + 688, + 554, + 710 + ], + "type": "text", + "content": "Figure C.3. Example of \"pix2gestalt\" and \"pix2gestalt + MV\" input of multi-view to 3D evaluation." 
+ } + ] + } + ], + "index": 43, + "angle": 0, + "type": "image_caption" + } + ], + "index": 41 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 44 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 55, + 72, + 294, + 251 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 72, + 294, + 251 + ], + "spans": [ + { + "bbox": [ + 55, + 72, + 294, + 251 + ], + "type": "text", + "content": "We first provide visualized examples to explicitly explain the difference between the \"pix2gestalt\" and \"pix2gestalt + MV\" settings in the multi-view to 3D generation in Fig. C.3. For the \"pix2gestalt\" setting, we directly implement pix2gestalt for the amodal completion independently under each view. For the \"pix2gestalt + MV\" setting, we first choose the view with the greatest visibility from the 4 occluded views, then use pix2gestalt to complete the object (which is shown in the pix2gestalt column in the qualitative result), and subsequently use Zero123++ to get the 4 consistent views as the input of LaRa and TRELLIS. It can be observed that pix2gestalt alone results in obvious multiview inconsistency, while with Zero123++ the consistency is significantly improved, thus leading to better 3D generation quality." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 55, + 251, + 294, + 288 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 251, + 294, + 288 + ], + "spans": [ + { + "bbox": [ + 55, + 251, + 294, + 288 + ], + "type": "text", + "content": "More multi-view to 3D examples are provided in Fig. C.5, where our Amodal3R again generates 3D assets with better quality than the baselines." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 55, + 296, + 263, + 308 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 296, + 263, + 308 + ], + "spans": [ + { + "bbox": [ + 55, + 296, + 263, + 308 + ], + "type": "text", + "content": "C.4. More diverse 3D reconstruction results" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 55, + 314, + 294, + 350 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 314, + 294, + 350 + ], + "spans": [ + { + "bbox": [ + 55, + 314, + 294, + 350 + ], + "type": "text", + "content": "In Fig. C.6, we show more examples to demonstrate that Amodal3R is able to generate diverse reasonable results from the occluded input with multiple stochastic samplings." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 55, + 358, + 194, + 370 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 358, + 194, + 370 + ], + "spans": [ + { + "bbox": [ + 55, + 358, + 194, + 370 + ], + "type": "text", + "content": "C.5. More in-the-wild results" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 55, + 376, + 294, + 460 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 376, + 294, + 460 + ], + "spans": [ + { + "bbox": [ + 55, + 376, + 294, + 460 + ], + "type": "text", + "content": "We provide more examples where we compare Amodal3R with \"pix2gestalt + TRELLIS\" pipeline in Fig. C.7. 
The results further demonstrate that 2D amodal method lacks 3D geometric understanding, often resulting in improper completion, such as completing the armrest of the chair as the backrest. In contrast, Amodal3R generates more plausible results with reasonable geometry and rich textural details." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 55, + 460, + 294, + 519 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 460, + 294, + 519 + ], + "spans": [ + { + "bbox": [ + 55, + 460, + 294, + 519 + ], + "type": "text", + "content": "In Fig. C.8 and Fig. C.9 we provide more visualization results on the in-the-wild images and the corresponding colored normal maps to show the geometry details. We implement Segment Anything [35] to obtain the segmentation masks." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 55, + 528, + 270, + 540 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 528, + 270, + 540 + ], + "spans": [ + { + "bbox": [ + 55, + 528, + 270, + 540 + ], + "type": "text", + "content": "C.6. Discussion, Limitation and Future Work" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 55, + 545, + 295, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 55, + 545, + 295, + 713 + ], + "spans": [ + { + "bbox": [ + 55, + 545, + 295, + 713 + ], + "type": "text", + "content": "While Amodal3R achieves impressive 3D amodal completion, it comes with several limitations we hope to solve in the future. 1) Dataset expansion. Due to the computational resources limitation, Amodal3R is trained on very limited data, consisting of only 20,627 synthetic 3D assets, predominantly confined to the furniture category. Consequently, completions on some complex or out-of-distribution objects may fail or lead to unrealistic structures. We believe that training on larger datasets, e.g. Objaverse-XL [15], could mitigate these issues. 2) Real-World data adaptation. Different from pix2gestalt, Amodal3R is trained exclusively on synthetic data. As a result, it cannot leverage environmental cues and must rely solely on the visible portions of occluded objects for reconstruction. Creating real" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 313, + 72, + 553, + 167 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 313, + 72, + 553, + 167 + ], + "spans": [ + { + "bbox": [ + 313, + 72, + 553, + 167 + ], + "type": "text", + "content": "world 3D modal datasets will further enhance the ability to apply models to real scenes. 3) Controllable completion. Currently, how objects are completed is entirely up to the model itself and lacks control. Therefore, to further enhance the model to accept additional conditions, such as text, and allow users to edit and control the style of the completion process will be an important research direction for us in the future." 
+ } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 308, + 741 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 55, + 121, + 550, + 639 + ], + "blocks": [ + { + "bbox": [ + 55, + 121, + 550, + 639 + ], + "lines": [ + { + "bbox": [ + 55, + 121, + 550, + 639 + ], + "spans": [ + { + "bbox": [ + 55, + 121, + 550, + 639 + ], + "type": "image", + "image_path": "200788fa8c59010615e251b7c018191c590445a647523496f6bff0ccc66bf77e.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 644, + 555, + 666 + ], + "lines": [ + { + "bbox": [ + 55, + 644, + 555, + 666 + ], + "spans": [ + { + "bbox": [ + 55, + 644, + 555, + 666 + ], + "type": "text", + "content": "Figure C.4. Additional single-view to 3D comparison examples. The occluders are shown in black and the visible regions are highlighted with red outlines." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 2 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 52, + 118, + 552, + 635 + ], + "blocks": [ + { + "bbox": [ + 52, + 118, + 552, + 635 + ], + "lines": [ + { + "bbox": [ + 52, + 118, + 552, + 635 + ], + "spans": [ + { + "bbox": [ + 52, + 118, + 552, + 635 + ], + "type": "image", + "image_path": "6e3e8cb3f6e682b55056f262e4f9a2f12e3f1a7a2aebcb739a7935617b01060b.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 55, + 642, + 555, + 664 + ], + "lines": [ + { + "bbox": [ + 55, + 642, + 555, + 664 + ], + "spans": [ + { + "bbox": [ + 55, + 642, + 555, + 664 + ], + "type": "text", + "content": "Figure C.5. Additional multi-view to 3D comparison examples. The occluders are shown in black and the visible regions are highlighted with red outlines." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 309, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 309, + 742 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 309, + 742 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 2 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 58, + 125, + 545, + 632 + ], + "blocks": [ + { + "bbox": [ + 58, + 125, + 545, + 632 + ], + "lines": [ + { + "bbox": [ + 58, + 125, + 545, + 632 + ], + "spans": [ + { + "bbox": [ + 58, + 125, + 545, + 632 + ], + "type": "image", + "image_path": "bf75ddd1a0b936e11399a4404cec99a271558a5059efd0f5ea400b27d6a896dd.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 62, + 643, + 546, + 656 + ], + "lines": [ + { + "bbox": [ + 62, + 643, + 546, + 656 + ], + "spans": [ + { + "bbox": [ + 62, + 643, + 546, + 656 + ], + "type": "text", + "content": "Figure C.6. Additional diverse examples. The occluders are shown in black and the visible regions are highlighted with red outlines." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 733, + 309, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 309, + 742 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 309, + 742 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 2 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 62, + 111, + 128, + 175 + ], + "blocks": [ + { + "bbox": [ + 62, + 111, + 128, + 175 + ], + "lines": [ + { + "bbox": [ + 62, + 111, + 128, + 175 + ], + "spans": [ + { + "bbox": [ + 62, + 111, + 128, + 175 + ], + "type": "image", + "image_path": "e870017230928572f834d4e3b7b44e9ccc89b671cc7b2dfbdeca47b3555aa063.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 62, + 175, + 127, + 239 + ], + "blocks": [ + { + "bbox": [ + 62, + 175, + 127, + 239 + ], + "lines": [ + { + "bbox": [ + 62, + 175, + 127, + 239 + ], + "spans": [ + { + "bbox": [ + 62, + 175, + 127, + 239 + ], + "type": "image", + "image_path": "6492dba26fef53a6b15fcf4f6bd0f48ff1bd3a409eed95af009133f8b85c0fd3.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 62, + 242, + 127, + 305 + ], + "blocks": [ + { + "bbox": [ + 62, + 242, + 127, + 305 + ], + "lines": [ + { + "bbox": [ + 62, + 242, + 127, + 305 + ], + "spans": [ + { + "bbox": [ + 62, + 242, + 127, + 305 + ], + "type": "image", + "image_path": "d90c74b16e5686f3fd0a043e1d39ccc701a7b9835351836631c2644d63c39a4e.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 62, + 308, + 127, + 369 + ], + "blocks": [ + { + "bbox": [ + 62, + 308, + 127, + 369 + ], + "lines": [ + { + "bbox": [ + 62, + 308, + 127, + 369 + ], + "spans": [ + { + "bbox": [ + 62, + 308, + 127, + 369 + ], + "type": "image", + "image_path": "685ad6bc4dc8ba6e901173eaab3f4f5e07a5bb2bcf50d3d5f20d6720d0395192.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + 
"bbox": [ + 62, + 370, + 127, + 430 + ], + "blocks": [ + { + "bbox": [ + 62, + 370, + 127, + 430 + ], + "lines": [ + { + "bbox": [ + 62, + 370, + 127, + 430 + ], + "spans": [ + { + "bbox": [ + 62, + 370, + 127, + 430 + ], + "type": "image", + "image_path": "ecd950c091f52dd14a378d38d688ffb86bfaa6418ed1a46ce552bfb485beda3f.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 62, + 432, + 127, + 492 + ], + "blocks": [ + { + "bbox": [ + 62, + 432, + 127, + 492 + ], + "lines": [ + { + "bbox": [ + 62, + 432, + 127, + 492 + ], + "spans": [ + { + "bbox": [ + 62, + 432, + 127, + 492 + ], + "type": "image", + "image_path": "98e46a5dcf149f1e0f883787c69fb2eeb6d45f364250f5b58e7230c52ce6f351.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 62, + 494, + 127, + 553 + ], + "blocks": [ + { + "bbox": [ + 62, + 494, + 127, + 553 + ], + "lines": [ + { + "bbox": [ + 62, + 494, + 127, + 553 + ], + "spans": [ + { + "bbox": [ + 62, + 494, + 127, + 553 + ], + "type": "image", + "image_path": "72372d8636e2039e9e78d331bc0c155b39ac6b8cbfebfe37d6e1c282235da333.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 62, + 556, + 127, + 619 + ], + "blocks": [ + { + "bbox": [ + 62, + 556, + 127, + 619 + ], + "lines": [ + { + "bbox": [ + 62, + 556, + 127, + 619 + ], + "spans": [ + { + "bbox": [ + 62, + 556, + 127, + 619 + ], + "type": "image", + "image_path": "9407f92f2328bc77b19b079886cca8c258b58a670a9d3cb0028c09f6d6f59ea4.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 75, + 623, + 119, + 633 + ], + "lines": [ + { + "bbox": [ + 75, + 623, + 119, + 633 + ], + "spans": [ + { + "bbox": [ + 75, + 623, + 119, + 633 + ], + "type": "text", + "content": "Input image" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 55, + 650, + 555, + 673 + ], + "lines": [ + { + "bbox": [ + 55, + 650, + 555, + 673 + ], + "spans": [ + { + "bbox": [ + 55, + 650, + 555, + 673 + ], + "type": "text", + "content": "Figure C.7. Additional in-the-wild examples compared with pix2gestalt + TRELLIS. The target objects and occluders are marked with the red and green outlines." 
+ } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 127, + 125, + 547, + 173 + ], + "blocks": [ + { + "bbox": [ + 127, + 125, + 547, + 173 + ], + "lines": [ + { + "bbox": [ + 127, + 125, + 547, + 173 + ], + "spans": [ + { + "bbox": [ + 127, + 125, + 547, + 173 + ], + "type": "image", + "image_path": "dcf6c81676d2bfcc52799174f6ea258f27f99878bb1d8ce177ccd911a170df41.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 128, + 190, + 547, + 238 + ], + "blocks": [ + { + "bbox": [ + 128, + 190, + 547, + 238 + ], + "lines": [ + { + "bbox": [ + 128, + 190, + 547, + 238 + ], + "spans": [ + { + "bbox": [ + 128, + 190, + 547, + 238 + ], + "type": "image", + "image_path": "7221e58f8fa1621d86ddcfca117da0a9ab550e97df879da747515b50089871ab.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 128, + 242, + 547, + 297 + ], + "blocks": [ + { + "bbox": [ + 128, + 242, + 547, + 297 + ], + "lines": [ + { + "bbox": [ + 128, + 242, + 547, + 297 + ], + "spans": [ + { + "bbox": [ + 128, + 242, + 547, + 297 + ], + "type": "image", + "image_path": "ee3d4702f03df512a1dcd2f450f6229b648789fc7b2e62cd2be6ba680f612a1c.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 128, + 308, + 544, + 358 + ], + "blocks": [ + { + "bbox": [ + 128, + 308, + 544, + 358 + ], + "lines": [ + { + "bbox": [ + 128, + 308, + 544, + 358 + ], + "spans": [ + { + "bbox": [ + 128, + 308, + 544, + 358 + ], + "type": "image", + "image_path": "348936c94c048be73a84a1a35305c574d97223f7ddaecd36f5f5c956c0e0243a.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 128, + 376, + 544, + 432 + ], + "blocks": [ + { + "bbox": [ + 128, + 376, + 544, + 432 + ], + "lines": [ + { + "bbox": [ + 128, + 376, + 544, + 432 + ], + "spans": [ + { + "bbox": [ + 128, + 376, + 544, + 432 + ], + "type": "image", + "image_path": "e9235990f61e5d288eaf7d5196b8d147e27d2759c3e59bd094647efae78f5478.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 128, + 434, + 544, + 491 + ], + "blocks": [ + { + "bbox": [ + 128, + 434, + 544, + 491 + ], + "lines": [ + { + "bbox": [ + 128, + 434, + 544, + 491 + ], + "spans": [ + { + "bbox": [ + 128, + 434, + 544, + 491 + ], + "type": "image", + "image_path": "4f68e7ed21a55dc404a754e72d158812e6855ef3c19eb4fe6da655eaa44f381c.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 128, + 494, + 547, + 548 + ], + "blocks": [ + { + "bbox": [ + 128, + 494, + 547, + 548 + ], + "lines": [ + { + "bbox": [ + 128, + 494, + 547, + 548 + ], + "spans": [ + { + "bbox": [ + 128, + 494, + 547, + 548 + ], + "type": "image", + "image_path": "5778f0574d6cb26ba5afdae215feb108c626210189ec2e390e3e7bfe3abc61f5.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 128, + 556, + 552, + 619 + ], + "blocks": [ + { + "bbox": [ + 128, + 556, + 552, + 619 + ], + "lines": [ + { + "bbox": [ + 128, + 556, + 552, + 619 + ], + "spans": [ + { + "bbox": [ + 128, + 556, + 552, + 619 + ], + "type": "image", + 
"image_path": "d195416a2f5327884ffc98ff5a1d8dcb4ff34a0d0f4a043146c8f62254dfc341.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 129, + 623, + 184, + 633 + ], + "lines": [ + { + "bbox": [ + 129, + 623, + 184, + 633 + ], + "spans": [ + { + "bbox": [ + 129, + 623, + 184, + 633 + ], + "type": "text", + "content": "pix2gestalt [56]" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 239, + 623, + 291, + 633 + ], + "lines": [ + { + "bbox": [ + 239, + 623, + 291, + 633 + ], + "spans": [ + { + "bbox": [ + 239, + 623, + 291, + 633 + ], + "type": "text", + "content": "TRELLIS [81]" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 423, + 623, + 487, + 633 + ], + "lines": [ + { + "bbox": [ + 423, + 623, + 487, + 633 + ], + "spans": [ + { + "bbox": [ + 423, + 623, + 487, + 633 + ], + "type": "text", + "content": "Amodal3R (Ours)" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 732, + 309, + 741 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 65, + 83, + 139, + 149 + ], + "blocks": [ + { + "bbox": [ + 65, + 83, + 139, + 149 + ], + "lines": [ + { + "bbox": [ + 65, + 83, + 139, + 149 + ], + "spans": [ + { + "bbox": [ + 65, + 83, + 139, + 149 + ], + "type": "image", + "image_path": "ada9408678dd16ac9a17139bc6bcffa21cef5bfdfb1d7de869883fff85659fde.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 65, + 152, + 139, + 225 + ], + "blocks": [ + { + "bbox": [ + 65, + 152, + 139, + 225 + ], + "lines": [ + { + "bbox": [ + 65, + 152, + 139, + 225 + ], + "spans": [ + { + "bbox": [ + 65, + 152, + 139, + 225 + ], + "type": "image", + "image_path": "6062e969b144b68bde5c4787caf92635af76db3c1558a9bb607b2b4c8d4b3eef.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 65, + 228, + 139, + 299 + ], + "blocks": [ + { + "bbox": [ + 65, + 228, + 139, + 299 + ], + "lines": [ + { + "bbox": [ + 65, + 228, + 139, + 299 + ], + "spans": [ + { + "bbox": [ + 65, + 228, + 139, + 299 + ], + "type": "image", + "image_path": "2d6aa5a394deca4847eb806106419625bb3c8facd8f63c970e6d2e709d187393.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 65, + 301, + 139, + 374 + ], + "blocks": [ + { + "bbox": [ + 65, + 301, + 139, + 374 + ], + "lines": [ + { + "bbox": [ + 65, + 301, + 139, + 374 + ], + "spans": [ + { + "bbox": [ + 65, + 301, + 139, + 374 + ], + "type": "image", + "image_path": "b1ee42e3f1c1b55b7c2ad148ca3beddd296d22b422314abeba1672040f573ebb.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 65, + 378, + 139, + 448 + ], + "blocks": [ + { + "bbox": [ + 65, + 378, + 139, + 448 + ], + "lines": [ + { + "bbox": [ + 65, + 378, + 139, + 448 + ], + "spans": [ + { + "bbox": [ + 65, + 378, + 139, + 448 + ], + "type": "image", + "image_path": 
"351f4fc242c00ce05b1a676430843ead379c1561c82870ff79d72e7ef9fad3ec.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 65, + 451, + 139, + 518 + ], + "blocks": [ + { + "bbox": [ + 65, + 451, + 139, + 518 + ], + "lines": [ + { + "bbox": [ + 65, + 451, + 139, + 518 + ], + "spans": [ + { + "bbox": [ + 65, + 451, + 139, + 518 + ], + "type": "image", + "image_path": "20456732dc746a6bebf876773ba1fe04717282542debba0a537100f6b3a79c41.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 65, + 521, + 139, + 586 + ], + "blocks": [ + { + "bbox": [ + 65, + 521, + 139, + 586 + ], + "lines": [ + { + "bbox": [ + 65, + 521, + 139, + 586 + ], + "spans": [ + { + "bbox": [ + 65, + 521, + 139, + 586 + ], + "type": "image", + "image_path": "14513e3cf22f958af8f2643bf88c8e1a382b22520cab9cbf47471ead011e29d4.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 65, + 590, + 139, + 658 + ], + "blocks": [ + { + "bbox": [ + 65, + 590, + 139, + 658 + ], + "lines": [ + { + "bbox": [ + 65, + 590, + 139, + 658 + ], + "spans": [ + { + "bbox": [ + 65, + 590, + 139, + 658 + ], + "type": "image", + "image_path": "accebdc45f78add46402aaf69516164e54dcabd80c7e75d86ee723403791d785.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 80, + 666, + 124, + 676 + ], + "lines": [ + { + "bbox": [ + 80, + 666, + 124, + 676 + ], + "spans": [ + { + "bbox": [ + 80, + 666, + 124, + 676 + ], + "type": "text", + "content": "Input image" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 80, + 693, + 528, + 704 + ], + "lines": [ + { + "bbox": [ + 80, + 693, + 528, + 704 + ], + "spans": [ + { + "bbox": [ + 80, + 693, + 528, + 704 + ], + "type": "text", + "content": "Figure C.8. Additional in-the-wild examples. The target objects and occluders are marked with the red and green outlines." 
+ } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 149, + 91, + 552, + 143 + ], + "blocks": [ + { + "bbox": [ + 149, + 91, + 552, + 143 + ], + "lines": [ + { + "bbox": [ + 149, + 91, + 552, + 143 + ], + "spans": [ + { + "bbox": [ + 149, + 91, + 552, + 143 + ], + "type": "image", + "image_path": "7422c5ddae7e24c65457d03b1bd98f2cf53e00b23dfb4d18efada84fb0cf74e5.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 156, + 168, + 541, + 215 + ], + "blocks": [ + { + "bbox": [ + 156, + 168, + 541, + 215 + ], + "lines": [ + { + "bbox": [ + 156, + 168, + 541, + 215 + ], + "spans": [ + { + "bbox": [ + 156, + 168, + 541, + 215 + ], + "type": "image", + "image_path": "abdf454b1c319bbc3abe24bb70aea384919e5811cfb8e8caaac98e01fb1d1563.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 157, + 246, + 533, + 285 + ], + "blocks": [ + { + "bbox": [ + 157, + 246, + 533, + 285 + ], + "lines": [ + { + "bbox": [ + 157, + 246, + 533, + 285 + ], + "spans": [ + { + "bbox": [ + 157, + 246, + 533, + 285 + ], + "type": "image", + "image_path": "217a80eff1e8e6c5b9a4f61bdf9bef041254d406ea49291a148013926a2424ee.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 143, + 323, + 553, + 358 + ], + "blocks": [ + { + "bbox": [ + 143, + 323, + 553, + 358 + ], + "lines": [ + { + "bbox": [ + 143, + 323, + 553, + 358 + ], + "spans": [ + { + "bbox": [ + 143, + 323, + 553, + 358 + ], + "type": "image", + "image_path": "590eeb7939486a44b00107d94f5a70ac9d3c78fd64407589f2f325dbfc8c3856.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 145, + 396, + 551, + 433 + ], + "blocks": [ + { + "bbox": [ + 145, + 396, + 551, + 433 + ], + "lines": [ + { + "bbox": [ + 145, + 396, + 551, + 433 + ], + "spans": [ + { + "bbox": [ + 145, + 396, + 551, + 433 + ], + "type": "image", + "image_path": "af2ed36a83bcdb256e48f085f9d29306437b2c92f8a174de87bb9c2c8abaf3ca.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 148, + 470, + 547, + 499 + ], + "blocks": [ + { + "bbox": [ + 148, + 470, + 547, + 499 + ], + "lines": [ + { + "bbox": [ + 148, + 470, + 547, + 499 + ], + "spans": [ + { + "bbox": [ + 148, + 470, + 547, + 499 + ], + "type": "image", + "image_path": "57cb0acdeb0fddc57c4dc24b70a23ccfb10ceecd0a119ca07fd1fd93d810939b.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 160, + 525, + 532, + 578 + ], + "blocks": [ + { + "bbox": [ + 160, + 525, + 532, + 578 + ], + "lines": [ + { + "bbox": [ + 160, + 525, + 532, + 578 + ], + "spans": [ + { + "bbox": [ + 160, + 525, + 532, + 578 + ], + "type": "image", + "image_path": "e2ecb2a27e1f5f46cb1e1761e433b07111ed89b0cfa1d826e0401f18bb8190a5.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 150, + 598, + 544, + 652 + ], + "blocks": [ + { + "bbox": [ + 150, + 598, + 544, + 652 + ], + "lines": [ + { + "bbox": [ + 150, + 598, + 544, + 652 + ], + "spans": [ + { + "bbox": [ + 150, + 598, + 544, + 652 + ], + "type": "image", + 
"image_path": "3354b81a13aa69181692ebc240c5711c8fd2df331d852e05674a15348bb8042c.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 314, + 666, + 389, + 675 + ], + "lines": [ + { + "bbox": [ + 314, + 666, + 389, + 675 + ], + "spans": [ + { + "bbox": [ + 314, + 666, + 389, + 675 + ], + "type": "text", + "content": "Reconstruction result" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_caption" + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 733, + 308, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 308, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 308, + 741 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 20 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 65, + 79, + 136, + 153 + ], + "blocks": [ + { + "bbox": [ + 65, + 79, + 136, + 153 + ], + "lines": [ + { + "bbox": [ + 65, + 79, + 136, + 153 + ], + "spans": [ + { + "bbox": [ + 65, + 79, + 136, + 153 + ], + "type": "image", + "image_path": "095b0a8683c9a1abe5d1fd0351f2b0bd251d363bc2faf44ef3b8c6a7aedcea05.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 64, + 156, + 136, + 225 + ], + "blocks": [ + { + "bbox": [ + 64, + 156, + 136, + 225 + ], + "lines": [ + { + "bbox": [ + 64, + 156, + 136, + 225 + ], + "spans": [ + { + "bbox": [ + 64, + 156, + 136, + 225 + ], + "type": "image", + "image_path": "c359707e04e058224e1375c5e81fbc8ccddcb047b0de7412b82c559e5f02bc21.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 65, + 228, + 136, + 296 + ], + "blocks": [ + { + "bbox": [ + 65, + 228, + 136, + 296 + ], + "lines": [ + { + "bbox": [ + 65, + 228, + 136, + 296 + ], + "spans": [ + { + "bbox": [ + 65, + 228, + 136, + 296 + ], + "type": "image", + "image_path": "769f2f75988fa8c20f7a150b28cc25793d51852498286a3fa481276629ccca9c.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 66, + 301, + 136, + 375 + ], + "blocks": [ + { + "bbox": [ + 66, + 301, + 136, + 375 + ], + "lines": [ + { + "bbox": [ + 66, + 301, + 136, + 375 + ], + "spans": [ + { + "bbox": [ + 66, + 301, + 136, + 375 + ], + "type": "image", + "image_path": "1184cd58dac7c2417f151966e2cecd6fc93b1d0e7753d2a88787190ba2b76e60.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 66, + 378, + 135, + 448 + ], + "blocks": [ + { + "bbox": [ + 66, + 378, + 135, + 448 + ], + "lines": [ + { + "bbox": [ + 66, + 378, + 135, + 448 + ], + "spans": [ + { + "bbox": [ + 66, + 378, + 135, + 448 + ], + "type": "image", + "image_path": "33f0534eb2700edc8fc3ea5301c729d4d26448af5eac394994064a31ae54f642.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 66, + 450, + 136, + 520 + ], + "blocks": [ + { + "bbox": [ + 66, + 450, + 136, + 520 + ], + "lines": [ + { + "bbox": [ + 66, + 450, + 136, + 520 + ], + "spans": [ + { + "bbox": [ + 66, + 450, + 136, + 520 + ], + "type": "image", + "image_path": "0a8e71eded34cdb29ae8a23f3edcf84087356e3b089e8f2c59718d46781c60a1.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + } + ], + "index": 5 
+ }, + { + "type": "image", + "bbox": [ + 66, + 522, + 136, + 589 + ], + "blocks": [ + { + "bbox": [ + 66, + 522, + 136, + 589 + ], + "lines": [ + { + "bbox": [ + 66, + 522, + 136, + 589 + ], + "spans": [ + { + "bbox": [ + 66, + 522, + 136, + 589 + ], + "type": "image", + "image_path": "d0e2fc738ba13d07dee36c377c1631c2356014708deba436d5f0e69e34bdc11b.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 66, + 591, + 136, + 658 + ], + "blocks": [ + { + "bbox": [ + 66, + 591, + 136, + 658 + ], + "lines": [ + { + "bbox": [ + 66, + 591, + 136, + 658 + ], + "spans": [ + { + "bbox": [ + 66, + 591, + 136, + 658 + ], + "type": "image", + "image_path": "f4d800f760e32f85a18e0d7b49cd24ec481ab920ef0ced066be5a55ba0cca701.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 81, + 666, + 123, + 676 + ], + "lines": [ + { + "bbox": [ + 81, + 666, + 123, + 676 + ], + "spans": [ + { + "bbox": [ + 81, + 666, + 123, + 676 + ], + "type": "text", + "content": "Input image" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 145, + 85, + 233, + 140 + ], + "blocks": [ + { + "bbox": [ + 145, + 85, + 233, + 140 + ], + "lines": [ + { + "bbox": [ + 145, + 85, + 233, + 140 + ], + "spans": [ + { + "bbox": [ + 145, + 85, + 233, + 140 + ], + "type": "image", + "image_path": "929298d615a3fb7cc447c0f98e95baf173faa9e04a7d618c8a9009c54c584308.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 257, + 86, + 286, + 140 + ], + "blocks": [ + { + "bbox": [ + 257, + 86, + 286, + 140 + ], + "lines": [ + { + "bbox": [ + 257, + 86, + 286, + 140 + ], + "spans": [ + { + "bbox": [ + 257, + 86, + 286, + 140 + ], + "type": "image", + "image_path": "e16f1142d0a59d54d6b9c2c7b7795759e11379f48803ebd164889819bec4222e.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 306, + 86, + 334, + 140 + ], + "blocks": [ + { + "bbox": [ + 306, + 86, + 334, + 140 + ], + "lines": [ + { + "bbox": [ + 306, + 86, + 334, + 140 + ], + "spans": [ + { + "bbox": [ + 306, + 86, + 334, + 140 + ], + "type": "image", + "image_path": "f7039e6b445a5be5eee0bd5040ae487c6e7aee8781d78bfa73e920a9c75420a8.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 366, + 87, + 394, + 140 + ], + "blocks": [ + { + "bbox": [ + 366, + 87, + 394, + 140 + ], + "lines": [ + { + "bbox": [ + 366, + 87, + 394, + 140 + ], + "spans": [ + { + "bbox": [ + 366, + 87, + 394, + 140 + ], + "type": "image", + "image_path": "52bff517732bce595a3fc8d13402ba1d0f1b3b7a1eb2d4299eeefdcf805fcaee.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 413, + 87, + 441, + 140 + ], + "blocks": [ + { + "bbox": [ + 413, + 87, + 441, + 140 + ], + "lines": [ + { + "bbox": [ + 413, + 87, + 441, + 140 + ], + "spans": [ + { + "bbox": [ + 413, + 87, + 441, + 140 + ], + "type": "image", + "image_path": "f01a9e0ef9e9acd6cba92a918067dc45c9b7675a402b6e0568af1eb69c21ab5a.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 464, + 87, + 491, + 140 + ], + "blocks": [ + { + "bbox": [ + 464, + 87, + 491, + 
140 + ], + "lines": [ + { + "bbox": [ + 464, + 87, + 491, + 140 + ], + "spans": [ + { + "bbox": [ + 464, + 87, + 491, + 140 + ], + "type": "image", + "image_path": "a15c57d10a0d91aa88495617e816b58fca0cf35ec218ed9cdf2e5a8522be81f0.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_body" + } + ], + "index": 14 + }, + { + "type": "image", + "bbox": [ + 511, + 87, + 537, + 140 + ], + "blocks": [ + { + "bbox": [ + 511, + 87, + 537, + 140 + ], + "lines": [ + { + "bbox": [ + 511, + 87, + 537, + 140 + ], + "spans": [ + { + "bbox": [ + 511, + 87, + 537, + 140 + ], + "type": "image", + "image_path": "c740a0bd12726124d22a0278522bd429dd12f45e9318efe95ac729e78ae54dc7.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 155, + 183, + 184, + 201 + ], + "blocks": [ + { + "bbox": [ + 155, + 183, + 184, + 201 + ], + "lines": [ + { + "bbox": [ + 155, + 183, + 184, + 201 + ], + "spans": [ + { + "bbox": [ + 155, + 183, + 184, + 201 + ], + "type": "image", + "image_path": "6bf181e361af64cbd49bdff2b0cd43c6a50dc0ad36963d735832233052683d74.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 202, + 185, + 231, + 201 + ], + "blocks": [ + { + "bbox": [ + 202, + 185, + 231, + 201 + ], + "lines": [ + { + "bbox": [ + 202, + 185, + 231, + 201 + ], + "spans": [ + { + "bbox": [ + 202, + 185, + 231, + 201 + ], + "type": "image", + "image_path": "e61d8fec364efc27a470eff5648933302e713b474924644e2edd3c55bfeda54d.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 261, + 184, + 290, + 201 + ], + "blocks": [ + { + "bbox": [ + 261, + 184, + 290, + 201 + ], + "lines": [ + { + "bbox": [ + 261, + 184, + 290, + 201 + ], + "spans": [ + { + "bbox": [ + 261, + 184, + 290, + 201 + ], + "type": "image", + "image_path": "d7115e74e97401735aa2884ea8f3346d25d37ce117b630b4920fac2bc1285067.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 308, + 184, + 337, + 201 + ], + "blocks": [ + { + "bbox": [ + 308, + 184, + 337, + 201 + ], + "lines": [ + { + "bbox": [ + 308, + 184, + 337, + 201 + ], + "spans": [ + { + "bbox": [ + 308, + 184, + 337, + 201 + ], + "type": "image", + "image_path": "5cc2da83fe61a2fdc4d3e7041779c61220f5b3b073385721791245264d4a1551.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 363, + 182, + 392, + 200 + ], + "blocks": [ + { + "bbox": [ + 363, + 182, + 392, + 200 + ], + "lines": [ + { + "bbox": [ + 363, + 182, + 392, + 200 + ], + "spans": [ + { + "bbox": [ + 363, + 182, + 392, + 200 + ], + "type": "image", + "image_path": "21d725ad0989e383de4f63e90ce664105e05c73d0b034d3c5ebf71613a830643.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 410, + 182, + 440, + 200 + ], + "blocks": [ + { + "bbox": [ + 410, + 182, + 440, + 200 + ], + "lines": [ + { + "bbox": [ + 410, + 182, + 440, + 200 + ], + "spans": [ + { + "bbox": [ + 410, + 182, + 440, + 200 + ], + "type": "image", + "image_path": "fd66ab2fe02a9a52c5e590980319f647bf554f99542cd9d6afc3047a2004f208.jpg" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_body" + } + ], + "index": 21 + }, + { + "type": "image", + "bbox": [ + 463, + 182, 
+ 492, + 201 + ], + "blocks": [ + { + "bbox": [ + 463, + 182, + 492, + 201 + ], + "lines": [ + { + "bbox": [ + 463, + 182, + 492, + 201 + ], + "spans": [ + { + "bbox": [ + 463, + 182, + 492, + 201 + ], + "type": "image", + "image_path": "a8220c93bb0bff26f3701389b85c2241b024c6820bd425f0edf30ea47e53680b.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + } + ], + "index": 22 + }, + { + "type": "image", + "bbox": [ + 509, + 182, + 539, + 201 + ], + "blocks": [ + { + "bbox": [ + 509, + 182, + 539, + 201 + ], + "lines": [ + { + "bbox": [ + 509, + 182, + 539, + 201 + ], + "spans": [ + { + "bbox": [ + 509, + 182, + 539, + 201 + ], + "type": "image", + "image_path": "762a5473b8c38fc5078ebc4dbaea183295506b6315263d1760274a5aef82e4b4.jpg" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_body" + } + ], + "index": 23 + }, + { + "type": "image", + "bbox": [ + 148, + 251, + 189, + 285 + ], + "blocks": [ + { + "bbox": [ + 148, + 251, + 189, + 285 + ], + "lines": [ + { + "bbox": [ + 148, + 251, + 189, + 285 + ], + "spans": [ + { + "bbox": [ + 148, + 251, + 189, + 285 + ], + "type": "image", + "image_path": "2d36140df0c83d5417674ee1873c2c32baa4cb1e52aa906d5a68028dd586970c.jpg" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_body" + } + ], + "index": 24 + }, + { + "type": "image", + "bbox": [ + 195, + 251, + 236, + 285 + ], + "blocks": [ + { + "bbox": [ + 195, + 251, + 236, + 285 + ], + "lines": [ + { + "bbox": [ + 195, + 251, + 236, + 285 + ], + "spans": [ + { + "bbox": [ + 195, + 251, + 236, + 285 + ], + "type": "image", + "image_path": "970aed0843d92ecca9bd858e64a69a83cd4e96e405421cc40641733dd55582c0.jpg" + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_body" + } + ], + "index": 25 + }, + { + "type": "image", + "bbox": [ + 258, + 251, + 297, + 285 + ], + "blocks": [ + { + "bbox": [ + 258, + 251, + 297, + 285 + ], + "lines": [ + { + "bbox": [ + 258, + 251, + 297, + 285 + ], + "spans": [ + { + "bbox": [ + 258, + 251, + 297, + 285 + ], + "type": "image", + "image_path": "89410ff08ecccd04301f629f52ea7c1ca4b6aa0a815571a2208a032b95d229ed.jpg" + } + ] + } + ], + "index": 26, + "angle": 0, + "type": "image_body" + } + ], + "index": 26 + }, + { + "type": "image", + "bbox": [ + 306, + 251, + 345, + 285 + ], + "blocks": [ + { + "bbox": [ + 306, + 251, + 345, + 285 + ], + "lines": [ + { + "bbox": [ + 306, + 251, + 345, + 285 + ], + "spans": [ + { + "bbox": [ + 306, + 251, + 345, + 285 + ], + "type": "image", + "image_path": "13596732bab156e6c1c51aedacbd0bc7a5776d2e2cc38344b9cc580eec754512.jpg" + } + ] + } + ], + "index": 27, + "angle": 0, + "type": "image_body" + } + ], + "index": 27 + }, + { + "type": "image", + "bbox": [ + 358, + 247, + 399, + 288 + ], + "blocks": [ + { + "bbox": [ + 358, + 247, + 399, + 288 + ], + "lines": [ + { + "bbox": [ + 358, + 247, + 399, + 288 + ], + "spans": [ + { + "bbox": [ + 358, + 247, + 399, + 288 + ], + "type": "image", + "image_path": "9b298e3f8c6888d371ef219db051868f274af1ba76ad87df5c2ba35ac7b9074b.jpg" + } + ] + } + ], + "index": 28, + "angle": 0, + "type": "image_body" + } + ], + "index": 28 + }, + { + "type": "image", + "bbox": [ + 405, + 247, + 446, + 288 + ], + "blocks": [ + { + "bbox": [ + 405, + 247, + 446, + 288 + ], + "lines": [ + { + "bbox": [ + 405, + 247, + 446, + 288 + ], + "spans": [ + { + "bbox": [ + 405, + 247, + 446, + 288 + ], + "type": "image", + "image_path": "985f93d20324281b66926cb467e7671a0f7de8875707c8b704e7bd81f0b59b9f.jpg" + } + ] + } + ], + "index": 29, + "angle": 0, + "type": "image_body" 
+ } + ], + "index": 29 + }, + { + "type": "image", + "bbox": [ + 457, + 247, + 496, + 289 + ], + "blocks": [ + { + "bbox": [ + 457, + 247, + 496, + 289 + ], + "lines": [ + { + "bbox": [ + 457, + 247, + 496, + 289 + ], + "spans": [ + { + "bbox": [ + 457, + 247, + 496, + 289 + ], + "type": "image", + "image_path": "6a977f45798fb36ceed91ca7038971b88bf42b34f6281c7a7935b709d39eaec2.jpg" + } + ] + } + ], + "index": 30, + "angle": 0, + "type": "image_body" + } + ], + "index": 30 + }, + { + "type": "image", + "bbox": [ + 504, + 247, + 544, + 289 + ], + "blocks": [ + { + "bbox": [ + 504, + 247, + 544, + 289 + ], + "lines": [ + { + "bbox": [ + 504, + 247, + 544, + 289 + ], + "spans": [ + { + "bbox": [ + 504, + 247, + 544, + 289 + ], + "type": "image", + "image_path": "da6e2cc1a6c104125f2b358fff6df5fa7d9c602c75f5f555b7a225f5c3d82c80.jpg" + } + ] + } + ], + "index": 31, + "angle": 0, + "type": "image_body" + } + ], + "index": 31 + }, + { + "type": "image", + "bbox": [ + 158, + 309, + 186, + 361 + ], + "blocks": [ + { + "bbox": [ + 158, + 309, + 186, + 361 + ], + "lines": [ + { + "bbox": [ + 158, + 309, + 186, + 361 + ], + "spans": [ + { + "bbox": [ + 158, + 309, + 186, + 361 + ], + "type": "image", + "image_path": "ea02900ca8a78886a66d9fb14ad43bf1237f53854ed37e043f35a033d10bf8fa.jpg" + } + ] + } + ], + "index": 32, + "angle": 0, + "type": "image_body" + } + ], + "index": 32 + }, + { + "type": "image", + "bbox": [ + 205, + 310, + 234, + 361 + ], + "blocks": [ + { + "bbox": [ + 205, + 310, + 234, + 361 + ], + "lines": [ + { + "bbox": [ + 205, + 310, + 234, + 361 + ], + "spans": [ + { + "bbox": [ + 205, + 310, + 234, + 361 + ], + "type": "image", + "image_path": "1c189b1c7514283599bb7a16a392b365213526e29463166ec4895e2329bb5384.jpg" + } + ] + } + ], + "index": 33, + "angle": 0, + "type": "image_body" + } + ], + "index": 33 + }, + { + "type": "image", + "bbox": [ + 260, + 310, + 287, + 361 + ], + "blocks": [ + { + "bbox": [ + 260, + 310, + 287, + 361 + ], + "lines": [ + { + "bbox": [ + 260, + 310, + 287, + 361 + ], + "spans": [ + { + "bbox": [ + 260, + 310, + 287, + 361 + ], + "type": "image", + "image_path": "4598111e22944965925b8b6255fa8536d238e0728b54bc775ab3665a150e976b.jpg" + } + ] + } + ], + "index": 34, + "angle": 0, + "type": "image_body" + } + ], + "index": 34 + }, + { + "type": "image", + "bbox": [ + 307, + 309, + 335, + 361 + ], + "blocks": [ + { + "bbox": [ + 307, + 309, + 335, + 361 + ], + "lines": [ + { + "bbox": [ + 307, + 309, + 335, + 361 + ], + "spans": [ + { + "bbox": [ + 307, + 309, + 335, + 361 + ], + "type": "image", + "image_path": "f787f003adb94debc9461808ba6d8cba55f045b73a279df257fc5bc5af0106d7.jpg" + } + ] + } + ], + "index": 35, + "angle": 0, + "type": "image_body" + } + ], + "index": 35 + }, + { + "type": "image", + "bbox": [ + 365, + 309, + 394, + 361 + ], + "blocks": [ + { + "bbox": [ + 365, + 309, + 394, + 361 + ], + "lines": [ + { + "bbox": [ + 365, + 309, + 394, + 361 + ], + "spans": [ + { + "bbox": [ + 365, + 309, + 394, + 361 + ], + "type": "image", + "image_path": "d39f4a0e9775078f66933bd2af24c5c55e7c30c9e5950af798f575ded3dae3a4.jpg" + } + ] + } + ], + "index": 36, + "angle": 0, + "type": "image_body" + } + ], + "index": 36 + }, + { + "type": "image", + "bbox": [ + 412, + 309, + 440, + 361 + ], + "blocks": [ + { + "bbox": [ + 412, + 309, + 440, + 361 + ], + "lines": [ + { + "bbox": [ + 412, + 309, + 440, + 361 + ], + "spans": [ + { + "bbox": [ + 412, + 309, + 440, + 361 + ], + "type": "image", + "image_path": 
"79d787bb95891662bc37839bf54d0e2b3ad0d240183333d2106b7850bbc79c3b.jpg" + } + ] + } + ], + "index": 37, + "angle": 0, + "type": "image_body" + } + ], + "index": 37 + }, + { + "type": "image", + "bbox": [ + 463, + 309, + 491, + 361 + ], + "blocks": [ + { + "bbox": [ + 463, + 309, + 491, + 361 + ], + "lines": [ + { + "bbox": [ + 463, + 309, + 491, + 361 + ], + "spans": [ + { + "bbox": [ + 463, + 309, + 491, + 361 + ], + "type": "image", + "image_path": "4a57bcadb2ebc10c34dc6b9a62ab981bdefca9bdd0aa3b91e1e53b571cabe5ca.jpg" + } + ] + } + ], + "index": 38, + "angle": 0, + "type": "image_body" + } + ], + "index": 38 + }, + { + "type": "image", + "bbox": [ + 507, + 310, + 535, + 361 + ], + "blocks": [ + { + "bbox": [ + 507, + 310, + 535, + 361 + ], + "lines": [ + { + "bbox": [ + 507, + 310, + 535, + 361 + ], + "spans": [ + { + "bbox": [ + 507, + 310, + 535, + 361 + ], + "type": "image", + "image_path": "be015bfa3cdaf2b44b0e77531148136a68d7dcb50f87bba546a02fc49b7f56cc.jpg" + } + ] + } + ], + "index": 39, + "angle": 0, + "type": "image_body" + } + ], + "index": 39 + }, + { + "type": "image", + "bbox": [ + 156, + 392, + 187, + 436 + ], + "blocks": [ + { + "bbox": [ + 156, + 392, + 187, + 436 + ], + "lines": [ + { + "bbox": [ + 156, + 392, + 187, + 436 + ], + "spans": [ + { + "bbox": [ + 156, + 392, + 187, + 436 + ], + "type": "image", + "image_path": "3efd00852b0f34f9a3949add9bb86779268e9d0665af6f4022ec6e8f664dfcfc.jpg" + } + ] + } + ], + "index": 40, + "angle": 0, + "type": "image_body" + } + ], + "index": 40 + }, + { + "type": "image", + "bbox": [ + 204, + 392, + 235, + 436 + ], + "blocks": [ + { + "bbox": [ + 204, + 392, + 235, + 436 + ], + "lines": [ + { + "bbox": [ + 204, + 392, + 235, + 436 + ], + "spans": [ + { + "bbox": [ + 204, + 392, + 235, + 436 + ], + "type": "image", + "image_path": "79ce326daeeca762ef4a892284df06b42bfe31f307b79d485ade96a33fd8703b.jpg" + } + ] + } + ], + "index": 41, + "angle": 0, + "type": "image_body" + } + ], + "index": 41 + }, + { + "type": "image", + "bbox": [ + 257, + 392, + 291, + 436 + ], + "blocks": [ + { + "bbox": [ + 257, + 392, + 291, + 436 + ], + "lines": [ + { + "bbox": [ + 257, + 392, + 291, + 436 + ], + "spans": [ + { + "bbox": [ + 257, + 392, + 291, + 436 + ], + "type": "image", + "image_path": "7f282d478cac67c93f20c570e20b2bf512b87fa1c5c30d2ed3ea85a3cfdc8394.jpg" + } + ] + } + ], + "index": 42, + "angle": 0, + "type": "image_body" + } + ], + "index": 42 + }, + { + "type": "image", + "bbox": [ + 304, + 392, + 338, + 436 + ], + "blocks": [ + { + "bbox": [ + 304, + 392, + 338, + 436 + ], + "lines": [ + { + "bbox": [ + 304, + 392, + 338, + 436 + ], + "spans": [ + { + "bbox": [ + 304, + 392, + 338, + 436 + ], + "type": "image", + "image_path": "7db37af3fa865b27c84c75f3b197817c940260ea28dd6f311150bee7883c2ba3.jpg" + } + ] + } + ], + "index": 43, + "angle": 0, + "type": "image_body" + } + ], + "index": 43 + }, + { + "type": "image", + "bbox": [ + 364, + 392, + 396, + 437 + ], + "blocks": [ + { + "bbox": [ + 364, + 392, + 396, + 437 + ], + "lines": [ + { + "bbox": [ + 364, + 392, + 396, + 437 + ], + "spans": [ + { + "bbox": [ + 364, + 392, + 396, + 437 + ], + "type": "image", + "image_path": "3172f4ce237b0712571e6a666cdef028fafd9ada1a3e0a3c51f58a2a9e08f2cf.jpg" + } + ] + } + ], + "index": 44, + "angle": 0, + "type": "image_body" + } + ], + "index": 44 + }, + { + "type": "image", + "bbox": [ + 410, + 391, + 443, + 437 + ], + "blocks": [ + { + "bbox": [ + 410, + 391, + 443, + 437 + ], + "lines": [ + { + "bbox": [ + 410, + 391, + 443, + 437 + ], + "spans": [ + { + 
"bbox": [ + 410, + 391, + 443, + 437 + ], + "type": "image", + "image_path": "063a1b1a1dece2445c978efb79cb481cb6c2dc5b05b4b900347370a525c3330e.jpg" + } + ] + } + ], + "index": 45, + "angle": 0, + "type": "image_body" + } + ], + "index": 45 + }, + { + "type": "image", + "bbox": [ + 460, + 391, + 493, + 437 + ], + "blocks": [ + { + "bbox": [ + 460, + 391, + 493, + 437 + ], + "lines": [ + { + "bbox": [ + 460, + 391, + 493, + 437 + ], + "spans": [ + { + "bbox": [ + 460, + 391, + 493, + 437 + ], + "type": "image", + "image_path": "d649001cf1a64a9bccb83e00ca44bc0289da7de112ff76e9de8b7b104650577a.jpg" + } + ] + } + ], + "index": 46, + "angle": 0, + "type": "image_body" + } + ], + "index": 46 + }, + { + "type": "image", + "bbox": [ + 506, + 391, + 539, + 438 + ], + "blocks": [ + { + "bbox": [ + 506, + 391, + 539, + 438 + ], + "lines": [ + { + "bbox": [ + 506, + 391, + 539, + 438 + ], + "spans": [ + { + "bbox": [ + 506, + 391, + 539, + 438 + ], + "type": "image", + "image_path": "397bf7ab6c20cf152a4187b0bd8cb0fee54d69de302d7d64096b293d13739ef9.jpg" + } + ] + } + ], + "index": 47, + "angle": 0, + "type": "image_body" + } + ], + "index": 47 + }, + { + "type": "image", + "bbox": [ + 153, + 466, + 190, + 510 + ], + "blocks": [ + { + "bbox": [ + 153, + 466, + 190, + 510 + ], + "lines": [ + { + "bbox": [ + 153, + 466, + 190, + 510 + ], + "spans": [ + { + "bbox": [ + 153, + 466, + 190, + 510 + ], + "type": "image", + "image_path": "f3c00f4554388b763523d94ccd0be111d32b9b3979b17b25ace8f3286372199e.jpg" + } + ] + } + ], + "index": 48, + "angle": 0, + "type": "image_body" + } + ], + "index": 48 + }, + { + "type": "image", + "bbox": [ + 200, + 466, + 237, + 510 + ], + "blocks": [ + { + "bbox": [ + 200, + 466, + 237, + 510 + ], + "lines": [ + { + "bbox": [ + 200, + 466, + 237, + 510 + ], + "spans": [ + { + "bbox": [ + 200, + 466, + 237, + 510 + ], + "type": "image", + "image_path": "cb48875f1c399a28a2bf0bfdbd3f70c36f20a6b5f11875bbf048a93e4f984bc1.jpg" + } + ] + } + ], + "index": 49, + "angle": 0, + "type": "image_body" + } + ], + "index": 49 + }, + { + "type": "image", + "bbox": [ + 256, + 465, + 293, + 509 + ], + "blocks": [ + { + "bbox": [ + 256, + 465, + 293, + 509 + ], + "lines": [ + { + "bbox": [ + 256, + 465, + 293, + 509 + ], + "spans": [ + { + "bbox": [ + 256, + 465, + 293, + 509 + ], + "type": "image", + "image_path": "bc5de79c5c67ade429863cc11eee1a6b0afc78315a836bda44223f7048af9ac7.jpg" + } + ] + } + ], + "index": 50, + "angle": 0, + "type": "image_body" + } + ], + "index": 50 + }, + { + "type": "image", + "bbox": [ + 304, + 464, + 340, + 509 + ], + "blocks": [ + { + "bbox": [ + 304, + 464, + 340, + 509 + ], + "lines": [ + { + "bbox": [ + 304, + 464, + 340, + 509 + ], + "spans": [ + { + "bbox": [ + 304, + 464, + 340, + 509 + ], + "type": "image", + "image_path": "4bd145682614b11bac9a56dc099cac6e59e04947fbe792c6055ee4d589f19a9a.jpg" + } + ] + } + ], + "index": 51, + "angle": 0, + "type": "image_body" + } + ], + "index": 51 + }, + { + "type": "image", + "bbox": [ + 363, + 463, + 398, + 509 + ], + "blocks": [ + { + "bbox": [ + 363, + 463, + 398, + 509 + ], + "lines": [ + { + "bbox": [ + 363, + 463, + 398, + 509 + ], + "spans": [ + { + "bbox": [ + 363, + 463, + 398, + 509 + ], + "type": "image", + "image_path": "20de841a7493cc43c2cb949ab3629aae813eba4f26bd57745396dde8a6f55c56.jpg" + } + ] + } + ], + "index": 52, + "angle": 0, + "type": "image_body" + } + ], + "index": 52 + }, + { + "type": "image", + "bbox": [ + 410, + 463, + 444, + 509 + ], + "blocks": [ + { + "bbox": [ + 410, + 463, + 444, + 509 + ], + 
"lines": [ + { + "bbox": [ + 410, + 463, + 444, + 509 + ], + "spans": [ + { + "bbox": [ + 410, + 463, + 444, + 509 + ], + "type": "image", + "image_path": "f33003c0021a10760e608a09f1e2174e235a893c03af224878b7982e917691db.jpg" + } + ] + } + ], + "index": 53, + "angle": 0, + "type": "image_body" + } + ], + "index": 53 + }, + { + "type": "image", + "bbox": [ + 459, + 463, + 493, + 509 + ], + "blocks": [ + { + "bbox": [ + 459, + 463, + 493, + 509 + ], + "lines": [ + { + "bbox": [ + 459, + 463, + 493, + 509 + ], + "spans": [ + { + "bbox": [ + 459, + 463, + 493, + 509 + ], + "type": "image", + "image_path": "3c43596c3225d0d82ee1fa5757da092b782bcbb2c6d31d69f28d886f7e3c50a5.jpg" + } + ] + } + ], + "index": 54, + "angle": 0, + "type": "image_body" + } + ], + "index": 54 + }, + { + "type": "image", + "bbox": [ + 504, + 463, + 540, + 510 + ], + "blocks": [ + { + "bbox": [ + 504, + 463, + 540, + 510 + ], + "lines": [ + { + "bbox": [ + 504, + 463, + 540, + 510 + ], + "spans": [ + { + "bbox": [ + 504, + 463, + 540, + 510 + ], + "type": "image", + "image_path": "57b44f4b221ef212ab7d4735d105e5f5b1f68a51a901e2996938b59f4901c482.jpg" + } + ] + } + ], + "index": 55, + "angle": 0, + "type": "image_body" + } + ], + "index": 55 + }, + { + "type": "image", + "bbox": [ + 162, + 525, + 181, + 576 + ], + "blocks": [ + { + "bbox": [ + 162, + 525, + 181, + 576 + ], + "lines": [ + { + "bbox": [ + 162, + 525, + 181, + 576 + ], + "spans": [ + { + "bbox": [ + 162, + 525, + 181, + 576 + ], + "type": "image", + "image_path": "33279c0867af9b38f5de35cfd3cc9b07a12aacdfdd07534a9b1454796260888e.jpg" + } + ] + } + ], + "index": 56, + "angle": 0, + "type": "image_body" + } + ], + "index": 56 + }, + { + "type": "image", + "bbox": [ + 210, + 525, + 228, + 576 + ], + "blocks": [ + { + "bbox": [ + 210, + 525, + 228, + 576 + ], + "lines": [ + { + "bbox": [ + 210, + 525, + 228, + 576 + ], + "spans": [ + { + "bbox": [ + 210, + 525, + 228, + 576 + ], + "type": "image", + "image_path": "1324ef4f7b4588d0896385dd82eb74a26f890c8a7c2de8af1bd12a8b0492af05.jpg" + } + ] + } + ], + "index": 57, + "angle": 0, + "type": "image_body" + } + ], + "index": 57 + }, + { + "type": "image", + "bbox": [ + 264, + 525, + 282, + 575 + ], + "blocks": [ + { + "bbox": [ + 264, + 525, + 282, + 575 + ], + "lines": [ + { + "bbox": [ + 264, + 525, + 282, + 575 + ], + "spans": [ + { + "bbox": [ + 264, + 525, + 282, + 575 + ], + "type": "image", + "image_path": "16b11c1c1c3f695af6be023b67866171414828de5ee231504795bca73dc24e87.jpg" + } + ] + } + ], + "index": 58, + "angle": 0, + "type": "image_body" + } + ], + "index": 58 + }, + { + "type": "image", + "bbox": [ + 312, + 525, + 331, + 576 + ], + "blocks": [ + { + "bbox": [ + 312, + 525, + 331, + 576 + ], + "lines": [ + { + "bbox": [ + 312, + 525, + 331, + 576 + ], + "spans": [ + { + "bbox": [ + 312, + 525, + 331, + 576 + ], + "type": "image", + "image_path": "e0cc24d2030cadf890830426e0016b8aec88c55941529bcb3fc028858245f431.jpg" + } + ] + } + ], + "index": 59, + "angle": 0, + "type": "image_body" + } + ], + "index": 59 + }, + { + "type": "image", + "bbox": [ + 371, + 525, + 389, + 576 + ], + "blocks": [ + { + "bbox": [ + 371, + 525, + 389, + 576 + ], + "lines": [ + { + "bbox": [ + 371, + 525, + 389, + 576 + ], + "spans": [ + { + "bbox": [ + 371, + 525, + 389, + 576 + ], + "type": "image", + "image_path": "ed1c74da8409a19812447005299ab422bc021b9f4e61e61d6af4d27597b0c2ea.jpg" + } + ] + } + ], + "index": 60, + "angle": 0, + "type": "image_body" + } + ], + "index": 60 + }, + { + "type": "image", + "bbox": [ + 417, + 525, + 
435, + 576 + ], + "blocks": [ + { + "bbox": [ + 417, + 525, + 435, + 576 + ], + "lines": [ + { + "bbox": [ + 417, + 525, + 435, + 576 + ], + "spans": [ + { + "bbox": [ + 417, + 525, + 435, + 576 + ], + "type": "image", + "image_path": "1f58f96a7868db3a8e7b4263ea01128181413837fa70a9aab1033ad5a8d881de.jpg" + } + ] + } + ], + "index": 61, + "angle": 0, + "type": "image_body" + } + ], + "index": 61 + }, + { + "type": "image", + "bbox": [ + 468, + 525, + 486, + 576 + ], + "blocks": [ + { + "bbox": [ + 468, + 525, + 486, + 576 + ], + "lines": [ + { + "bbox": [ + 468, + 525, + 486, + 576 + ], + "spans": [ + { + "bbox": [ + 468, + 525, + 486, + 576 + ], + "type": "image", + "image_path": "e59d1e6a2e3198192a88bec3b29fe6dec3edb5841dbe4e86bcc01287b56b0458.jpg" + } + ] + } + ], + "index": 62, + "angle": 0, + "type": "image_body" + } + ], + "index": 62 + }, + { + "type": "image", + "bbox": [ + 511, + 525, + 530, + 576 + ], + "blocks": [ + { + "bbox": [ + 511, + 525, + 530, + 576 + ], + "lines": [ + { + "bbox": [ + 511, + 525, + 530, + 576 + ], + "spans": [ + { + "bbox": [ + 511, + 525, + 530, + 576 + ], + "type": "image", + "image_path": "a833d704f859f81aee44b1e1c5223e6db3d47aac3a3225785bfc2af8fb4313cb.jpg" + } + ] + } + ], + "index": 63, + "angle": 0, + "type": "image_body" + } + ], + "index": 63 + }, + { + "type": "image", + "bbox": [ + 149, + 608, + 192, + 646 + ], + "blocks": [ + { + "bbox": [ + 149, + 608, + 192, + 646 + ], + "lines": [ + { + "bbox": [ + 149, + 608, + 192, + 646 + ], + "spans": [ + { + "bbox": [ + 149, + 608, + 192, + 646 + ], + "type": "image", + "image_path": "afc33314881650e81bb306937ec04dd25c43a4cea1368f0c20098c2f1c8c3d5d.jpg" + } + ] + } + ], + "index": 64, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 81, + 693, + 528, + 704 + ], + "lines": [ + { + "bbox": [ + 81, + 693, + 528, + 704 + ], + "spans": [ + { + "bbox": [ + 81, + 693, + 528, + 704 + ], + "type": "text", + "content": "Figure C.9. Additional in-the-wild examples. The target objects and occluders are marked with the red and green outlines." 
+ } + ] + } + ], + "index": 73, + "angle": 0, + "type": "image_caption" + } + ], + "index": 64 + }, + { + "type": "image", + "bbox": [ + 196, + 609, + 241, + 646 + ], + "blocks": [ + { + "bbox": [ + 196, + 609, + 241, + 646 + ], + "lines": [ + { + "bbox": [ + 196, + 609, + 241, + 646 + ], + "spans": [ + { + "bbox": [ + 196, + 609, + 241, + 646 + ], + "type": "image", + "image_path": "e24369acd600aa02074e8619cb93b3572d2da16ca27557d198a9c5fea693cab8.jpg" + } + ] + } + ], + "index": 65, + "angle": 0, + "type": "image_body" + } + ], + "index": 65 + }, + { + "type": "image", + "bbox": [ + 253, + 608, + 297, + 646 + ], + "blocks": [ + { + "bbox": [ + 253, + 608, + 297, + 646 + ], + "lines": [ + { + "bbox": [ + 253, + 608, + 297, + 646 + ], + "spans": [ + { + "bbox": [ + 253, + 608, + 297, + 646 + ], + "type": "image", + "image_path": "8fcfaaec7bcf00678512fe514fdd2d9b17c4675d66f7db3ac9d06b1af67bd111.jpg" + } + ] + } + ], + "index": 66, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 315, + 666, + 389, + 675 + ], + "lines": [ + { + "bbox": [ + 315, + 666, + 389, + 675 + ], + "spans": [ + { + "bbox": [ + 315, + 666, + 389, + 675 + ], + "type": "text", + "content": "Reconstruction result" + } + ] + } + ], + "index": 72, + "angle": 0, + "type": "image_caption" + } + ], + "index": 66 + }, + { + "type": "image", + "bbox": [ + 300, + 608, + 345, + 646 + ], + "blocks": [ + { + "bbox": [ + 300, + 608, + 345, + 646 + ], + "lines": [ + { + "bbox": [ + 300, + 608, + 345, + 646 + ], + "spans": [ + { + "bbox": [ + 300, + 608, + 345, + 646 + ], + "type": "image", + "image_path": "7f78d508d64ff47702c464b05edaa8c3cd052d7b9802f8fae3f7f5d9309c1391.jpg" + } + ] + } + ], + "index": 67, + "angle": 0, + "type": "image_body" + } + ], + "index": 67 + }, + { + "type": "image", + "bbox": [ + 356, + 608, + 400, + 646 + ], + "blocks": [ + { + "bbox": [ + 356, + 608, + 400, + 646 + ], + "lines": [ + { + "bbox": [ + 356, + 608, + 400, + 646 + ], + "spans": [ + { + "bbox": [ + 356, + 608, + 400, + 646 + ], + "type": "image", + "image_path": "bc66ed7a19f36ca900f75954dd3b53d4decaf085a178460092343926fa3f4cd9.jpg" + } + ] + } + ], + "index": 68, + "angle": 0, + "type": "image_body" + } + ], + "index": 68 + }, + { + "type": "image", + "bbox": [ + 402, + 608, + 446, + 646 + ], + "blocks": [ + { + "bbox": [ + 402, + 608, + 446, + 646 + ], + "lines": [ + { + "bbox": [ + 402, + 608, + 446, + 646 + ], + "spans": [ + { + "bbox": [ + 402, + 608, + 446, + 646 + ], + "type": "image", + "image_path": "b6cb93b3533b6ba8a98c9de6eba7fefb2244b9e7342055fb2f9fdb67b03c7356.jpg" + } + ] + } + ], + "index": 69, + "angle": 0, + "type": "image_body" + } + ], + "index": 69 + }, + { + "type": "image", + "bbox": [ + 458, + 608, + 502, + 646 + ], + "blocks": [ + { + "bbox": [ + 458, + 608, + 502, + 646 + ], + "lines": [ + { + "bbox": [ + 458, + 608, + 502, + 646 + ], + "spans": [ + { + "bbox": [ + 458, + 608, + 502, + 646 + ], + "type": "image", + "image_path": "2396eeeede72ea8776bb1f3e698cae247f18e1eb947624d405061a86b31c50e1.jpg" + } + ] + } + ], + "index": 70, + "angle": 0, + "type": "image_body" + } + ], + "index": 70 + }, + { + "type": "image", + "bbox": [ + 504, + 608, + 548, + 646 + ], + "blocks": [ + { + "bbox": [ + 504, + 608, + 548, + 646 + ], + "lines": [ + { + "bbox": [ + 504, + 608, + 548, + 646 + ], + "spans": [ + { + "bbox": [ + 504, + 608, + 548, + 646 + ], + "type": "image", + "image_path": "60ad8f6ac17efd1191fdef40ba05f35cc66e4232ac53ca3680e348d92e5d3380.jpg" + } + ] + } + ], + "index": 71, + "angle": 0, + "type": "image_body" + 
} + ], + "index": 71 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 733, + 308, + 741 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 733, + 308, + 741 + ], + "spans": [ + { + "bbox": [ + 302, + 733, + 308, + 741 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 74 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 21 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13441/7dee0cb2-adf2-4076-a132-e57fc86ae441_content_list.json b/data/2025/2503_13xxx/2503.13441/7dee0cb2-adf2-4076-a132-e57fc86ae441_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..885ea0e218a2897fc81eb79a50cef113a7274de8 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13441/7dee0cb2-adf2-4076-a132-e57fc86ae441_content_list.json @@ -0,0 +1,1915 @@ +[ + { + "type": "text", + "text": "Humanoid Policy $\\sim$ Human Policy", + "text_level": 1, + "bbox": [ + 289, + 102, + 709, + 127 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Ri-Zhao Qiu*,1 Shiqi Yang*,1 Xuxin Cheng*,1 Chaitanya Chawla*,2 Jialong Li", + "bbox": [ + 192, + 148, + 802, + 166 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Tairan He² Ge Yan⁴ David Yoon³ Ryan Hoque³ Lars Paulsen¹", + "bbox": [ + 248, + 167, + 743, + 181 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Ge Yang $^{5}$ Jian Zhang $^{3}$ Sha Yi $^{1}$ Guanya Shi $^{2}$ Xiaolong Wang $^{1}$", + "bbox": [ + 250, + 181, + 738, + 196 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 UC San Diego, 2 CMU, 3 Apple, 4 University of Washington, 5 MIT", + "bbox": [ + 272, + 196, + 727, + 210 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "https://human-as-robot.github.io/", + "bbox": [ + 354, + 212, + 640, + 224 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/4405aa2b835aba6a277e6c9198aa9977469d97508f3fdf4c76aee82216576500.jpg", + "image_caption": [ + "Egocentric Vision", + "Small-scale Humanoid Data", + "1.5k demos" + ], + "image_footnote": [], + "bbox": [ + 285, + 287, + 400, + 342 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/aea774d82dbe5bd4fbe061109175f75ddddda90ba7ff4fdaa5626b9bd22ef7f5.jpg", + "image_caption": [ + "Unified State-Action Space", + "Fingers/Wrist" + ], + "image_footnote": [], + "bbox": [ + 429, + 275, + 568, + 323 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/6de31e2c3f7ef7882133985700e8e04374897bf36bf16ee566165c8b694a3018.jpg", + "image_caption": [ + "Large-scale Human Data", + "27k demos" + ], + "image_footnote": [], + "bbox": [ + 269, + 369, + 401, + 436 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/346158b88ba04c6e7ab081daf4c31661e0295710b7ae42f838fcf37309e1f6c9.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 431, + 397, + 563, + 436 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/d758fb486b915cb7b8bba76f25c64677ff6383244dd1b2d11810084c616ff526.jpg", + "image_caption": [ + "Robot Policies" + ], + "image_footnote": [], + "bbox": [ + 607, + 273, + 707, + 330 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/155fe39651ff5c1dc0794715d35ba201201e8503baae6301d1a5408214e69852.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 704, + 273, + 797, + 330 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": 
"images/9ba220379b07dcbb35a75d6613b9038f779a2745c67964f1ea4c2fcc4b21bac8.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 607, + 330, + 704, + 388 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/d09737e448a71aaf9de7992f65ae4c12d439120d577b56996d26b2f49dd055e1.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 704, + 330, + 795, + 388 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/e60053de8c621fdd2935ccfd57a062430af67270a90436afec5f8a43a6dfa737.jpg", + "image_caption": [ + "Figure 1: This paper advocates high-quality human data as a data source for cross-embodiment learning - task-oriented egocentric human data. We collect a large-scale dataset, Physical Human-Humanoid Data $(\\mathbf{PH}^2\\mathbf{D})$ , with hand-finger 3D poses from consumer-grade VR devices on well-defined manipulation tasks directly aligned with robots. Without relying on modular perception, we train a Human Action Transformer (HAT) manipulation policy by directly modeling humans as a different humanoid embodiment in an end-to-end manner." + ], + "image_footnote": [], + "bbox": [ + 607, + 388, + 704, + 446 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/932b5667c208befbac683cbcd8b77d681fb74f6dc2da9b82aa158bd701300670.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 704, + 388, + 797, + 446 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract: Training manipulation policies for humanoid robots with diverse data enhances their robustness and generalization across tasks and platforms. However, learning solely from robot demonstrations is labor-intensive, requiring expensive tele-operated data collection, which is difficult to scale. This paper investigates a more scalable data source, egocentric human demonstrations, to serve as cross-embediment training data for robot learning. We mitigate the embodiment gap between humanoids and humans from both the data and modeling perspectives. We collect an egocentric task-oriented dataset $(\\mathbf{PH}^2\\mathbf{D})$ that is directly aligned with humanoid manipulation demonstrations. We then train a human-humanoid behavior policy, which we term Human Action Transformer (HAT). The state-action space of HAT is unified for both humans and humanoid robots and can be differentiably retargeted to robot actions. Co-trained with smaller-scale robot data, HAT directly models humanoid robots and humans as different embodiments without additional supervision. We show that human data improve both generalization and robustness of HAT with significantly better data collection efficiency.", + "bbox": [ + 228, + 551, + 766, + 779 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Keywords: Robot Manipulation, Cross-Embodiment, Humanoid", + "bbox": [ + 230, + 791, + 658, + 806 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1 Introduction", + "text_level": 1, + "bbox": [ + 171, + 825, + 313, + 840 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Learning from real robot demonstrations has led to great progress in robotic manipulation recently [1, 2, 3, 4]. 
One key advancement to enable such progress was hardware / software codesigns to scale up data collection using teleoperation [5, 6, 7, 8, 9, 10] and directly controlling", + "bbox": [ + 169, + 858, + 823, + 904 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2503.13441v3 [cs.RO] 5 Oct 2025", + "bbox": [ + 27, + 282, + 60, + 710 + ], + "page_idx": 0 + }, + { + "type": "footer", + "text": "9th Conference on Robot Learning (CoRL 2025), Seoul, Korea.", + "bbox": [ + 171, + 925, + 550, + 939 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "the robot end effector [11, 12, 5, 6, 13, 7]. Instead of gathering data on a single robot, collective efforts have been made to merge diverse robot data and train foundational policies across embodiments [11, 14, 2, 1, 3, 4], which have shown to improve cross-embodiment and cross-task generalizability.", + "bbox": [ + 169, + 90, + 823, + 154 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "However, collecting structured real-robot data is expensive and time-consuming. We are still far away from building a robust and generalizable model as what has been achieved in Computer Vision [17] and NLP [18]. If we examine humanoid robot teleoperation more closely, it involves robots mimicking human actions", + "bbox": [ + 169, + 157, + 364, + 354 + ], + "page_idx": 1 + }, + { + "type": "table", + "img_path": "images/491be4059e6073cde8add8b32e34a6cc496333027f66d01b6d5e1e6df6ff1814.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
DatasetHumanRobot
# Frames# Demos# Frames# Demos
DexCap [15]~378k787NANA
EgoMimic [16]~432k†2,1501.29M†1,000
PH2D (Ours)~3.02M26,824~668k1,552
", + "bbox": [ + 377, + 169, + 821, + 263 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Table 1: Comparisons of task-oriented egocentric human datasets. Besides having the most demonstrations, $\\mathrm{PH}^2\\mathrm{D}$ is collected on various manipulation tasks, diverse objects and scenes, with accurate 3D hand-finger poses and language annotations. †: estimated based on reported data collection time with $30\\mathrm{Hz}$ ; whereas DexCap [15] and $\\mathrm{PH}^2\\mathrm{D}$ report processed frames for training.", + "bbox": [ + 374, + 265, + 823, + 349 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "using geometric transforms or retargeting to control robot joints and end-effectors. From this perspective, we propose to model robots in a human-centric representation, and the robot action is just a transformation away from the human action. If we can accurately capture the end-effector and head poses of humans, egocentric human demonstrations will be a more scalable source of training data, as we can collect them efficiently, in any place, and without a robot.", + "bbox": [ + 169, + 354, + 826, + 431 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this paper, we perform cross-human and humanoid embodiment training for robotic manipulation. Our key insight is to model bimanual humanoid behaviors by directly imitating human behaviors without using learning surrogates such as affordances [19, 20]. To realize this, we first collect an egocentric task-oriented dataset of Physical Humanoid-Human Data, dubbed $\\mathrm{PH^2D}$ . We adapt consumer-grade VR devices to collect egocentric videos with automatic but accurate hand pose and end effector (i.e., hand) annotations. Compared to existing human daily behavior datasets [21, 22], $\\mathrm{PH^2D}$ is task-oriented so that it can be directly used for co-training. The same VR hardwares are then used to perform teleoperation to collect smaller-scale humanoid data for better alignment. We then train a Human-humanoid Action Transformer (HAT), which predicts future hand-finger trajectories in a unified human-centric state-action representation space. To obtain robot actions, we simply apply inverse kinematics and hand retargeting to differentiably convert human actions to robot actions for deployment.", + "bbox": [ + 169, + 436, + 826, + 619 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We conduct real-robot evaluations on different manipulation tasks with extensive ablation studies to investigate how to best align human and humanoid demonstrations. In particular, we found that co-training with diverse human data improves robustness against spatial variance and background perturbation, generalizing in settings unseen in robot data but seen in human data. We believe that these findings highlight the potential of using human data for large-scale cross-embodiment learning.", + "bbox": [ + 169, + 625, + 823, + 702 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In summary, our contributions are:", + "bbox": [ + 171, + 708, + 403, + 722 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- A dataset, $\\mathrm{PH}^2\\mathrm{D}$ , which is a large egocentric, task-oriented human-humanoid dataset with accurate hand and wrist poses for modeling human behavior (see Tab. 
1).", + "- A cross human-humanoid manipulation policy, HAT, that introduces a unified state-action space and other alignment techniques for humanoid manipulation.", + "- Improved policy robustness and generalization validated by extensive experiments and ablation studies to show the benefits of co-training with human data." + ], + "bbox": [ + 215, + 732, + 823, + 830 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2 Related Work", + "text_level": 1, + "bbox": [ + 171, + 849, + 321, + 866 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Imitation Learning for Robot Manipulation. Recently, learning robot policy with data gathered directly from the multiple and target robot embodiment has shown impressive robustness and dex-", + "bbox": [ + 169, + 881, + 823, + 912 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 493, + 935, + 504, + 946 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "terity [23, 2, 24, 1, 25, 26, 9, 27, 28]. The scale of data for imitation learning has grown substantially with recent advancements in data collection [29, 9, 7, 8], where human operators can efficiently collect large amounts of high-quality, task-oriented data. Despite these advances, achieving open-world generalization still remains a significant challenge due to lack of internet-scale training data.", + "bbox": [ + 169, + 90, + 823, + 151 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Learning from Human Videos. Learning policies from human videos is a long-standing topic in both computer vision and robotics due to the vast existence of human data. Existing works can be approximately divided into two categories: aligning observations or actions.", + "bbox": [ + 169, + 159, + 823, + 204 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Learn from Human - Aligning Observations. While teleoperating the actual robot platform allows learning policy with great dexterity, there is still a long way to go to achieve higher levels of generalization across diverse tasks, environments, and platforms. Unlike fields such as computer vision [17] and natural language processing [18] benefiting from internet-scale data, robot data collection in the real world is far more constrained. Various approaches have attempted to use internet-scale human videos to train robot policies [30, 31, 32, 33, 34, 35]. Due to various discrepancies (e.g., supervision and viewpoints) between egocentric robot views and internet videos, most existing work [19, 20] use modular approaches with intermediate representations as surrogates for training. The most representative ones are affordances [19, 20] for object interaction, object keypoints predictions [36, 37, 38, 39, 40], or other types of object representations [41, 42, 43].", + "bbox": [ + 169, + 210, + 826, + 361 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Learn from Human - Aligning Actions. Beyond observation alignment, transferring human demonstrations to robotic platforms introduces additional challenges due to differences in embodiment, actuation, and control dynamics. Specific alignment of human and robot actions is required to overcome these disparities. Approaches have employed masking in egocentric views [16], aligning motion trajectories or flow [44, 45], object-centric actions [46, 47], or hand tracking with specialized hardware [15]. Most closely related to our work, HumanPlus [48] designs a remapping method from 3D human pose estimation to tele-operate humanoid robots. 
Compared to HumanPlus, the insight of our method is to waive the requirement for robot hardware in collecting human data and collect diverse human data directly for co-training. In contrast to HumanPlus, we intentionally avoid performing retargeting on human demonstrations and designed the policy to directly use human hand poses as states/actions. On the other hand, the 'human shadowing' retargeting in HumanPlus is a teleoperation method that still requires robots, leading to lower collection efficiency than ours.", + "bbox": [ + 169, + 369, + 826, + 549 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Cross-Embodiment. Cross-embodiment pre-training has been shown to improve adaptability and generalization over different embodiments [49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61]. When utilizing human videos, introducing intermediate representations can be prone to composite errors. Recent works investigate end-to-end approaches [2, 24, 1, 3] using cross-embodied robot data to reduce such compounding perceptive errors. Noticeably, these works have found that such end-to-end learning leads to desired behaviors such as retrying [3]. Some other work [62, 38] enforces viewpoint constraints between training human demonstrations and test-time robot deployment to allow learning on human data but it trades off the scalability of the data collection process.", + "bbox": [ + 169, + 556, + 826, + 676 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Concurrent Work. Some concurrent work [15, 16, 63] also attempts to use egocentric human demonstrations for end-to-end cross-embedment policy learning. DexCap [15] uses gloves to track 3D hand poses with a chest-mounted RGBD camera to capture egocentric human videos. However, DexCap relies on 3D inputs, whereas some recent works [3, 1] have shown the scalability of 2D visual inputs. Most related to our work, EgoMimic [16] also proposes to collect data using wearable device [64] with 2D visual inputs. However, EgoMimic requires strict visual sensor alignments; whereas we show that scaling up diverse observations with different cameras makes the policy more robust. In addition, $\\mathrm{PH}^2\\mathrm{D}$ is also greater in dataset scale and object diversity. We also show our policy can be deployed on real robots without strict requirements of visual sensors and heuristics, which paves the way for scalable data collection.", + "bbox": [ + 169, + 684, + 826, + 834 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 493, + 935, + 503, + 946 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3 Method", + "text_level": 1, + "bbox": [ + 171, + 89, + 272, + 104 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To collect more data to train generalizable robot policies, recent research has explored cross-embodiment learning, enabling policies to generalize across diverse physical forms [3, 1, 4, 2, 65, 14]. This paper proposes egocentric human manipulation demonstrations as a scalable source of cross-embodiment training data. Sec. 3.1 describes our approach to adapt consumer-grade VR devices to scale up human data collection conveniently for a dataset of task-oriented egocentric human demonstrations. Sec. 
3.2 describes various techniques to handle domain gaps to align human data and robot data for learning humanoid manipulation policy.", + "bbox": [ + 169, + 123, + 826, + 229 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1 $\\mathbf{PH}^2\\mathbf{D}$ : Task-oriented Physical Humanoid-Human Data", + "text_level": 1, + "bbox": [ + 171, + 244, + 596, + 260 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Though there has been existing work that collects egocentric human videos [16, 22, 21, 15], they either (1) provide demonstrations mostly for non-task-oriented skills (e.g., dancing) and do not provide world-frame 3D head and hand poses estimations for imitation learning supervision [21, 22] or (2) require specialized hardware or robot setups [15, 16].", + "bbox": [ + 169, + 272, + 823, + 333 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "To address these issues, we propose $\\mathrm{PH^2D}$ . $\\mathrm{PH^2D}$ address these two issues by (1) collecting task-oriented human demonstrations that are directly related to robot execution, (2) adapting well-engineered SDKs of VR devices (illustrated in Fig. 2) to provide supervision, and (3) diversifying tasks, camera sensors, and reducing whole-body movement to reduce domain gaps in both vision and behaviors.", + "bbox": [ + 169, + 339, + 485, + 474 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/01ad4450db2b81f63c3f2521e32a09dfef5f050e7dd08149de218571ffa6fb2a.jpg", + "image_caption": [ + "Figure 2: Consumer-grade Devices for Data Collection. To avoid relying on specialized hardware for data collection to make our method scalable, we design our data collection process using consumer-grade VR devices." + ], + "image_footnote": [], + "bbox": [ + 496, + 335, + 823, + 431 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Adapting Low-cost Commercial Devices With development in pose estimation [66] and", + "text_level": 1, + "bbox": [ + 171, + 492, + 483, + 521 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "system engineering, modern mobile devices are capable of providing accurate on-device world frame 3D head pose tracking and 3D hand keypoint tracking [9], which has proved to be stable enough to teleoperate robot in real-time [9, 13]. We design software and hardware to support convenient data collection across different devices. Different cameras provide better visual diversity.", + "bbox": [ + 169, + 522, + 823, + 583 + ], + "page_idx": 3 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "- Apple Vision Pro + Built-in Camera. We developed a Vision OS App that uses the built-in camera for visual observation and uses the Apple ARKit for 3D head and hand poses.", + "- Meta Quest 3 / Apple Vision Pro + ZED Camera. We developed a web-based application based on OpenTelevision [9] to gather 3D head and hand poses. We also designed a 3D-printed holder to mount ZED Mini Stereo cameras on these devices. This configuration is both low-cost ( $<700$ ) and introduces more diversity with stereo cameras." + ], + "bbox": [ + 215, + 595, + 823, + 691 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Data Collection Pipeline We collect task-oriented egocentric human demonstrations by asking human operators to perform tasks overlapping with robot execution (e.g., grasping and pouring) when wearing the VR devices. 
For every demonstration, we provide language instructions (e.g., grasp a can of coke zero with right hand), and synchronize proprioception inputs and visual inputs by closest timestamps.", + "bbox": [ + 169, + 708, + 823, + 784 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Action Domain Gap. Human actions and tele-operated robot actions exhibit two distinct characteristics: (1) human manipulation usually involves involuntary whole-body movement, and (2) humans are more dexterous than robots and have significantly faster task completion time than robots. We mitigate the first gap by requesting the human data collectors to sit in an upright position. For the second speed gap, we interpolate translation and rotations of human data during training (effectively 'slowing down' actions). The slow-down factors $\\alpha_{\\mathrm{slow}}$ are obtained by normalizing the average task completion time of humans and humanoids, which is empirically distributed around 4. For consistency, we use $\\alpha_{\\mathrm{slow}} = 4$ in all tasks.", + "bbox": [ + 169, + 791, + 826, + 912 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "4", + "bbox": [ + 493, + 935, + 504, + 946 + ], + "page_idx": 3 + }, + { + "type": "image", + "img_path": "images/59c67885ce45ec56b97cbfea1a92fcbe69c8e4d04c354db3d05a4da260c37a2a.jpg", + "image_caption": [ + "Figure 3: Overview of HAT. Human Action Transformer (HAT) learns a robot policy by modeling humans. During training, we sample a state-action pair from either human data or robot data. The images are encoded by a frozen DinoV2 encoder [67]. The HAT model makes predictions in a human-centric action space using wrist 6 DoF poses and finger tips, which is retargeted to robot poses during real-robot deployment." + ], + "image_footnote": [], + "bbox": [ + 179, + 88, + 823, + 315 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.2 HAT: Human Action Transformer", + "text_level": 1, + "bbox": [ + 171, + 417, + 454, + 431 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "HAT learns cross-embodied robot policy by modeling humans. We demonstrate that treating bi-manual humanoid robots and humans as different robot embodiments via retargeting improves both generalizability and robustness of HAT.", + "bbox": [ + 169, + 446, + 823, + 493 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "More concretely, let $\\mathcal{D}_{\\text{robot}} = \\{(\\mathbf{S}_i, \\mathbf{A}_i)\\}_{i=1}^N$ be the set of data collected from real bimanual humanoid robots using teleoperation [9], where $\\mathbf{S}_i$ is the states including proprioceptive and visual observations of $i$ -th demonstration and $\\mathbf{A}_i$ be the actions. The collected $\\mathrm{PH}^2\\mathrm{D}$ dataset, $\\mathcal{D}_{\\text{human}} = \\{(\\tilde{\\mathbf{S}}_i, \\tilde{\\mathbf{A}}_i)\\}_{i=1}^M$ is used to augment the training process. Note that it is reasonable to assume $M \\gg N$ due to the significantly better human data collection efficiency.", + "bbox": [ + 169, + 497, + 823, + 575 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "The goal is to design a policy $\\pi : \\mathbf{S} \\to \\mathbf{A}$ that predicts future robot actions $\\mathbf{a}_t$ given current robot observation $\\mathbf{s}_t$ at time $t$ , where the future actions $\\mathbf{a}_{t+1}$ is usually a chunk of actions for multi-step execution (with slight abuse of notation). We model $\\pi$ using HAT, which is a transformer-based architecture predicting action chunks [5]. 
The overview of the model is illustrated in Fig. 3. We discuss key design choices of HAT with experimental ablations.", + "bbox": [ + 169, + 580, + 826, + 657 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Unified State-Action Space. Both bimanual robots and humans have two end effectors. In our case, our robots are also equipped with an actuated 2DoF neck that can rotate, which resembles the autonomous head movement when humans perform manipulation. Therefore, we design a unified state-action space (i.e., $(\\mathbf{S},\\mathbf{A})\\equiv (\\tilde{\\mathbf{S}},\\tilde{\\mathbf{A}}))$ for both bimanual robots and humans. More concretely, the proprioceptive observation is a 54-dimensional vector (6D rotations [68] of the head, left wrist, and right wrist; $x / y / z$ of left and right wrists and 10 finger tips). In this work, since we deploy our policy on robots with 5-fingered dexterous hands (shown in Fig. 4), there exists a bijective mapping between the finger tips of robot hands and human hands. Note that injective mapping is also possible (e.g., mapping distance between the thumb finger and other fingers to parallel gripper distance).", + "bbox": [ + 169, + 662, + 826, + 800 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Visual Domain Gap. Two types of domain gaps exist for co-training on human/humanoid data: camera sensors and end effector appearance. Since our human data collection process includes cameras different from robot deployment, this leads to camera domain gaps such as tones. Also, the appearances of human and humanoid end effectors are different. However, with sufficiently large and diverse data, we find it not a strict necessity to apply heuristic processing such as visual artifacts [16] or generative methods [69] to train human-robot policies - basic image augmentations such as color jittering and Gaussian blurring are effective regularization.", + "bbox": [ + 169, + 806, + 823, + 912 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "5", + "bbox": [ + 493, + 935, + 504, + 946 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/907faeb6fd6e2c7136323eb78a719b367cbed6eb8a5156d731ff41c131e3f716.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Meth.H. DataD. NormPassingHorizontal GraspVertical GraspPouringOvr. Succ.
I.D.O.O.D.I.D.O.O.D.I.D.O.O.D.I.D.O.O.D.I.D.O.O.D.
ACTXNA19/2036/608/107/307/2015/708/101/1042/6059/170
HATX17/2051/609/1011/3014/2030/705/105/1045/6097/170
HAT20/2052/608/1012/3013/2029/708/108/1049/60101/170
Type of GeneralizationBackgroundTextureObj. Placement
", + "bbox": [ + 176, + 88, + 823, + 175 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 2: Success rate of autonomous skill execution. Co-training with human data (H. Data) significantly improves the Out-Of-Distribution (O.O.D.) performance with nearly $100\\%$ relative improvement on all tasks on Humanoid A. We also ablate the design choice of using different normalizations (D. Norm) for different embodiments. We designate each task setting to investigate a single type of generalization. Detailed analysis of each type of generalization is presented in Sec. C.", + "bbox": [ + 169, + 181, + 823, + 252 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Training. The final policy is denoted as $\\pi : f_{\\theta}(\\cdot) \\to \\mathbf{A}$ for both human and robot policy, where $f_{\\theta}$ is a transformer-based neural network parametrized by $\\theta$ . The final loss is given by,", + "bbox": [ + 169, + 268, + 823, + 299 + ], + "page_idx": 5 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {L} = \\ell_ {1} \\left(\\pi \\left(s _ {i}\\right), a _ {i}\\right) + \\lambda \\cdot \\ell_ {1} \\left(\\pi \\left(s _ {i}\\right) _ {\\mathrm {E E F}}, a _ {i, \\mathrm {E E F}}\\right), \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 346, + 305, + 823, + 321 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "where EEF are the indices of the translation vectors of the left and right wrists, and $\\lambda = 2$ is an (insensitive) hyperparameter used to balance loss to emphasize the importance of end effector positions over learning unnecessarily precise finger tip keypoints.", + "bbox": [ + 169, + 325, + 826, + 372 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "4 Experiments", + "text_level": 1, + "bbox": [ + 171, + 390, + 313, + 407 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Hardware Platforms. We run our experiments on two humanoid robots (Humanoid A and Humanoid B shown in Fig. 4) equipped with 6-DOF Inspire dexterous hands. Humanoid A is a Unitree H1 robot and Humanoid B is a Unitree H1_2 robot with different arm configurations. Similar to humans, both robots (1) are equipped with actuated necks [9] to get make use of egocentric views and (2) do not have wrist cameras. Unless otherwise noted, most humanoid data collection is done with Humanoid A. We use Humanoid B mainly for testing cross-humanoid generalization.", + "bbox": [ + 169, + 421, + 485, + 619 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/4c584c96607aaeb29e9b43e2c7ed41bb0e70f76fd5bedf0f003b16fa1b9660c9.jpg", + "image_caption": [ + "(a) Humanoid A", + "Figure 4: Hardware Illustration. Most robot data attributes to Humanoid A, a Unitree H1 robot. Humanoid B, a Unitree H1-2 robot with different arm motor configurations, is used to evaluate few-shot cross-humanoid transfer. Detailed comparisons in Sec. D" + ], + "image_footnote": [], + "bbox": [ + 496, + 414, + 656, + 530 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/c658da49d15f8271384f414e1d227a0a00368b23fa0f816d5b26eb80997e7154.jpg", + "image_caption": [ + "(b) Humanoid B" + ], + "image_footnote": [], + "bbox": [ + 665, + 414, + 823, + 529 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Implementation Details. We implement policy architecture by adopting an transformer-based architecture predicting future action chunks [5]. We use a frozen DinoV2 ViT-S [67] as the visual backbone. 
We implement two variants: (1) ACT: baseline implementation using the Action Chunk Transformer [5], trained using only robot data. Robot states are represented as joint positions. (2) HAT: same architecture as ACT, but the state encoder operates in the unified state-action space. Unless otherwise stated, HAT is co-trained on robot and human data. A checkpoint is trained for each task with approximately 250-400 robot demonstrations.", + "bbox": [ + 169, + 633, + 826, + 739 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Experimental Protocol. We collect robot and human demonstrations in different object sets. Since human demonstrations are easier to collect, the settings in human demonstrations are generally more diverse, which include background, object types, object positions, and the relative position of the human to the table.", + "bbox": [ + 169, + 753, + 823, + 814 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We experimented with four different dexterous manipulation tasks and investigated in-distribution and out-of-distribution setups. The in-distribution (I.D.) setting tests the learned skills with backgrounds and object arrangements approximately similar to the training demonstrations presented in the real-robot data. In the Out-Of-Distribution (O.O.D.) setting, we test generalizability and robustness by introducing novel setups that were presented in human data but not in robot data. Fig. 7 visualizes different manipulation tasks and how we define out-of-distribution settings for each task.", + "bbox": [ + 169, + 821, + 825, + 912 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "6", + "bbox": [ + 493, + 936, + 504, + 946 + ], + "page_idx": 5 + }, + { + "type": "image", + "img_path": "images/9b89a15df58365bcccfb9db4b6d750eab29560b650833938df03595db378a427.jpg", + "image_caption": [ + "Figure 5: Few-Shot Adaptation. Co-training consistently outperforms isolated training as Humanoid B demonstrations increase, achieving robust success rates even in low-data regimes." + ], + "image_footnote": [ + "(a) Performance of Humanoid B co-trained with $\\mathrm{PH}^2\\mathrm{D}$ on horizontal grasping. o1 is seen by Humanoid B. o2 and o3 seen in human data. o4 is unseen in all data." + ], + "bbox": [ + 176, + 95, + 493, + 291 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/e4fd12fd6c055191040209ed8b5dd29a02bd388cfb49a7541bb3832b5932007c.jpg", + "image_caption": [], + "image_footnote": [ + "(b) Co-training consistently outperforms isolated training as Humanoid B demonstrations increase, achieving good success rates even in low-data regimes." + ], + "bbox": [ + 506, + 92, + 821, + 287 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.1 Main Evaluation", + "text_level": 1, + "bbox": [ + 171, + 381, + 330, + 395 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Human data has minor effects on I.D. testing. From Tab. 2, we can see that I.D. performance with or without co-training with human data gives similar results. In the I.D. setting, we closely match the scene setups as training demonstrations, including both background, object types, and object placements. Thus, policies trained with only a small amount of Humanoid A data performed well in this setting. 
This finding is consistent with recent work [9, 7] that frozen visual foundation models [17, 67] improve robustness against certain external perturbations such as lighting.", + "bbox": [ + 169, + 407, + 823, + 500 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Human data improves the O.O.D. settings with many generalizations. One common challenge in imitation learning is overfitting to only in-distribution task settings. Hence, it is crucial for a robot policy to generalize beyond the scene setups seen in a limited set of single-embodiment data. To demonstrate how co-training with human data reduces such overfitting, we introduce O.O.D. task settings to evaluate such generalization. From Tab. 2, we can see that co-training drastically improves O.O.D. settings, achieving nearly $100\\%$ relative improvement in settings unseen by the robot data. In particular, we find that human data improves three types of generalization: background, object placement, and appearance. To isolate the effect of each variable, each task focuses on a specific type of generalization as listed in Tab. 2, with in-depth analyses in Sec. C.", + "bbox": [ + 169, + 506, + 826, + 642 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "4.2 Few-Shot Transfer across Heterogenous Embodiments", + "text_level": 1, + "bbox": [ + 169, + 659, + 591, + 674 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We conducted few-shot generalization experiments on a distinct humanoid platform (Humanoid B), contrasting it with our primary platform, Humanoid A. Notably, Humanoid B's demonstration data were collected in an entirely separate environment, introducing both embodiment and environmental shifts. We highlight two key advantages of our approach: (1) the ability to unify heterogeneous human-centric data sources (humanoids and humans) into a generalizable policy framework, and (2) the capacity to rapidly adapt to new embodiments with drastically reduced data requirements.", + "bbox": [ + 169, + 686, + 823, + 777 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Experiment 1: Cross-embodiment co-training efficacy Using only 20 demonstrations from Humanoid B, we trained 3 policies - respectively on data from (i) Humanoid B only, (ii) Humanoid B + Humanoid A (cross-embodiment), and (iii) Humanoid B + Humanoid A + Human (cross-embodiment and human priors). As shown in Fig. 5a, co-trained policies (ii) and (iii) substantially outperformed the Humanoid B-only baselines on all task settings, underscoring the method's ability to transfer latent task structure across embodiments.", + "bbox": [ + 169, + 784, + 823, + 873 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Experiment 2: Scaling Demonstrations for Few-Shot Adaptation We further quantified the relationship between required for few-shot generalization. We hold Humanoid A and human datasets fixed", + "bbox": [ + 169, + 881, + 823, + 912 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "7", + "bbox": [ + 493, + 935, + 504, + 946 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/d1abd18a54dcf019aac825156a4dd6ef52c0918ea26fa6a585afc6d147e83292.jpg", + "image_caption": [ + "Figure 6: Human data has better sampling efficiency. Per-grid vertical grasping successes out of 10 trials with models trained with robot-only data and mixed data. Red boxes indicate where training data is collected." 
+ ], + "image_footnote": [], + "bbox": [ + 173, + 90, + 433, + 200 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/d81ce5430621f43196bfb4090fd92ce98453108b4e753cfa0147aac2867c12ea.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Task | State Space | Action Speed | Success
Vertical Grasping | X | 1/10
X | 0/10
4/10
", + "bbox": [ + 452, + 119, + 834, + 181 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 3: Importance of unifying policy inputs and outputs. We report the number of successes of vertical grasping objects in the upper-left block as illustrated in Fig. 8. Baselines use joint positions as state input or do not interpolate human motions.", + "bbox": [ + 449, + 183, + 836, + 252 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "for the horizontal grasping task and ablate number of demonstrations required for Humanoid B in Fig. 5. Co-training (Humanoid $\\mathrm{B} + \\mathrm{A} + \\mathrm{Human}$ ) consistently outperformed isolated training on Humanoid B across all settings, especially in the few-data regime.", + "bbox": [ + 169, + 300, + 823, + 347 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.3 Ablation Study", + "text_level": 1, + "bbox": [ + 171, + 359, + 318, + 376 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Sampling Efficiency of Human and Humanoid Data. Conceptually, collecting human data is less expensive, not just because it can be done faster, but also because it can be done in in-the-wild scenes; reduces setup cost before every data collection; and avoids the hardware cost to equip every operator with robots.", + "bbox": [ + 169, + 387, + 823, + 446 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We perform additional experiments to show that even in the lab setting, human data can have better sampling efficiency in unit time. In particular, we provide a small-scale experiment on the vertical grasping task. Allocating 20 minutes for two settings, we collected (1) 60 Humanoid A demonstrations, (2) 30 Humanoid A demonstrations, and 120 human demonstrations. To avoid confating diversity and data size, the object placements in all demonstrations are evenly distributed at the bottom 6 cells. The results are given in Fig. 6. The policy trained with mixed robot and human data performs significantly better, which validates the sampling efficiency of human data over robot data. Each cell represents a $10\\mathrm{cm} \\times 10\\mathrm{cm}$ region where the robot attempts to pick up a box.", + "bbox": [ + 169, + 454, + 825, + 575 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "State-Action Design. In Tab. 3, we ablate the design choices of the proprioception state space and the speed of output actions. In particular, using the same set of robot and human data, we implement two baselines: 1) a unified state-action space, but does not interpolate (i.e., slow down) the human actions; and 2) a baseline that interpolates human actions but uses separate state representation for humanoid (joint positions) and humans (EEF representation). The policies exhibit different failure patterns during the rollout of these two baselines. Without interpolating human actions, the speed of the predicted actions fluctuates between fast (resembling humans) and slow (resembling teleoperation), which leads to instability. Without a unified state space, the policy is given a 'short-cut' to distinguish between embodiments, which leads to on-par in-distribution performance and significantly worse OOD performance.", + "bbox": [ + 169, + 582, + 823, + 733 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "More Ablation Study. 
Due to the space limit, please refer to the appendix and the supplementary material for more qualitative visualizations and quantitative ablation studies.",
$\\pi_0$: A vision-language-action flow model for general robot control. arXiv preprint arXiv:2410.24164, 2024.",
Affordances from human videos as a versatile representation for robotics. In CVPR, 2023.", + "[21] K. Grauman, A. Westbury, E. Byrne, Z. Chavis, A. Furnari, R. Girdhar, J. Hamburger, H. Jiang, M. Liu, X. Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In CVPR, 2022.", + "[22] D. Damen, H. Doughty, G. M. Farinella, S. Fidler, A. Furnari, E. Kazakos, D. Moltisanti, J. Munro, T. Perrett, W. Price, and M. Wray. Scaling egocentric vision: The epic-kitchens dataset. In ECCV, 2018.", + "[23] T. Z. Zhao, J. Tompson, D. Driess, P. Florence, K. Ghasemipour, C. Finn, and A. Wahid. Aloha unleashed: A simple recipe for robot dexterity. arXiv preprint arXiv:2410.13126, 2024.", + "[24] L. Wang, X. Chen, J. Zhao, and K. He. Scaling proprioceptive-visual learning with heterogeneous pre-trained transformers. arXiv preprint arXiv:2409.20537, 2024.", + "[25] C. Chi, Z. Xu, S. Feng, E. Cousineau, Y. Du, B. Burchfiel, R. Tedrake, and S. Song. Diffusion policy: Visuomotor policy learning via action diffusion. The International Journal of Robotics Research, page 02783649241273668, 2023.", + "[26] R.-Z. Qiu, Y. Song, X. Peng, S. A. Suryadevara, G. Yang, M. Liu, M. Ji, C. Jia, R. Yang, X. Zou, et al. Wildma: Long horizon loco-manipulation in the wild. arXiv preprint arXiv:2411.15131, 2024.", + "[27] C. Lu, X. Cheng, J. Li, S. Yang, M. Ji, C. Yuan, G. Yang, S. Yi, and X. Wang. Mobile- television: Predictive motion priors for humanoid whole-body control. In ICRA, 2025.", + "[28] Y. Ze, Z. Chen, W. Wang, T. Chen, X. He, Y. Yuan, X. B. Peng, and J. Wu. Generalizable humanoid manipulation with improved 3d diffusion policies. arXiv preprint arXiv:2410.10803, 2024.", + "[29] S. P. Arunachalam, S. Silwal, B. Evans, and L. Pinto. Dexterous imitation made easy: A learning-based framework for efficient dexterous manipulation. In 2023 IEEE international conference on robotics and automation (icra), pages 5954-5961. IEEE, 2023.", + "[30] A. S. Chen, S. Nair, and C. Finn. Learning generalizable robotic reward functions from \"in-the-wild\" human videos. arXiv preprint arXiv:2103.16817, 2021.", + "[31] J. Lee and M. S. Ryoo. Learning robot activities from first-person human videos using convolutional future regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1-2, 2017.", + "[32] K. Lee, Y. Su, T.-K. Kim, and Y. Demiris. A syntactic approach to robot imitation learning using probabilistic activity grammars. Robotics and Autonomous Systems, 61(12):1323-1334, 2013.", + "[33] A. Nguyen, D. Kanoulas, L. Muratore, D. G. Caldwell, and N. G. Tsagarakis. Translating videos to commands for robotic manipulation with deep recurrent neural networks. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 3782-3788. IEEE, 2018." + ], + "bbox": [ + 171, + 90, + 825, + 910 + ], + "page_idx": 10 + }, + { + "type": "page_number", + "text": "11", + "bbox": [ + 490, + 935, + 506, + 946 + ], + "page_idx": 10 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[34] J. Rothfuss, F. Ferreira, E. E. Aksoy, Y. Zhou, and T. Asfour. Deep episodic memory: Encoding, recalling, and predicting episodic experiences for robot action execution. IEEE Robotics and Automation Letters, 3(4):4007-4014, 2018.", + "[35] Y. Yang, Y. Li, C. Fermuller, and Y. Aloimonos. Robot learning manipulation action plans by \"watching\" unconstrained videos from the world wide web. 
In Proceedings of the AAAI conference on artificial intelligence, volume 29, 2015.", + "[36] H. Bharadhwaj, R. Mottaghi, A. Gupta, and S. Tulsiani. Track2act: Predicting point tracks from internet videos enables diverse zero-shot robot manipulation. In ECCV, 2024.", + "[37] C. Wen, X. Lin, J. So, K. Chen, Q. Dou, Y. Gao, and P. Abbeel. Any-point trajectory modeling for policy learning. arXiv preprint arXiv:2401.00025, 2023.", + "[38] J. Li, Y. Zhu, Y. Xie, Z. Jiang, M. Seo, G. Pavlakos, and Y. Zhu. Okami: Teaching humanoid robots manipulation skills through single video imitation. arXiv preprint arXiv:2410.11792, 2024.", + "[39] N. Das, S. Bechtle, T. Davchev, D. Jayaraman, A. Rai, and F. Meier. Model-based inverse reinforcement learning from visual demonstrations. In Conference on Robot Learning, pages 1930–1942. PMLR, 2021.", + "[40] H. Xiong, Q. Li, Y.-C. Chen, H. Bharadhwaj, S. Sinha, and A. Garg. Learning by watching: Physical imitation of manipulation skills from human videos. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 7827-7834. IEEE, 2021.", + "[41] S. Pirk, M. Khansari, Y. Bai, C. Lynch, and P. Sermanet. Online object representations with contrastive learning. arXiv preprint arXiv:1906.04312, 2019.", + "[42] S. Nair, A. Rajeswaran, V. Kumar, C. Finn, and A. Gupta. R3m: A universal visual representation for robot manipulation. arXiv preprint arXiv:2203.12601, 2022.", + "[43] Y. J. Ma, S. Sodhani, D. Jayaraman, O. Bastani, V. Kumar, and A. Zhang. Vip: Towards universal visual reward and representation via value-implicit pre-training. arXiv preprint arXiv:2210.00030, 2022.", + "[44] L.-H. Lin, Y. Cui, A. Xie, T. Hua, and D. Sadigh. Flowretrieval: Flow-guided data retrieval for few-shot imitation learning. arXiv preprint arXiv:2408.16944, 2024.", + "[45] J. Ren, P. Sundaresan, D. Sadigh, S. Choudhury, and J. Bohg. Motion tracks: A unified representation for human-robot transfer in few-shot imitation learning. arXiv preprint arXiv:2501.06994, 2025.", + "[46] Y. Zhu, A. Lim, P. Stone, and Y. Zhu. Vision-based manipulation from single human video with open-world object graphs. arXiv preprint arXiv:2405.20321, 2024.", + "[47] C.-C. Hsu, B. Wen, J. Xu, Y. Narang, X. Wang, Y. Zhu, J. Biswas, and S. Birchfield. Spot: Se (3) pose trajectory diffusion for object-centric manipulation. arXiv preprint arXiv:2411.00965, 2024.", + "[48] Z. Fu, Q. Zhao, Q. Wu, G. Wetzstein, and C. Finn. Humanplus: Humanoid shadowing and imitation from humans. In CoRL, 2024.", + "[49] W. Huang, I. Mordatch, and D. Pathak. One policy to control them all: Shared modular policies for agent-agnostic control. In International Conference on Machine Learning, pages 4455–4464. PMLR, 2020.", + "[50] L. Y. Chen, K. Hari, K. Dharmarajan, C. Xu, Q. Vuong, and K. Goldberg. Mirage: Cross-embodiment zero-shot policy transfer with cross-painting. arXiv preprint arXiv:2402.19249, 2024." + ], + "bbox": [ + 171, + 90, + 825, + 910 + ], + "page_idx": 11 + }, + { + "type": "page_number", + "text": "12", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 11 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[51] J. Yang, C. Glossop, A. Bhorkar, D. Shah, Q. Vuong, C. Finn, D. Sadigh, and S. Levine. Pushing the limits of cross-embodiment learning for manipulation and navigation. arXiv preprint arXiv:2402.19432, 2024.", + "[52] J. Yang, D. Sadigh, and C. Finn. Polybot: Training one policy across robots while embracing variability. 
arXiv preprint arXiv:2307.03719, 2023.", + "[53] F. Ebert, Y. Yang, K. Schmeckpeper, B. Bucher, G. Georgakis, K. Daniilidis, C. Finn, and S. Levine. Bridge data: Boosting generalization of robotic skills with cross-domain datasets. arXiv preprint arXiv:2109.13396, 2021.", + "[54] T. Franzmeyer, P. Torr, and J. F. Henriques. Learn what matters: cross-domain imitation learning with task-relevant embeddings. Advances in Neural Information Processing Systems, 35: 26283-26294, 2022.", + "[55] A. Ghadirzadeh, X. Chen, P. Poklukar, C. Finn, M. Björkman, and D. Kragic. Bayesian meta-learning for few-shot policy adaptation across robotic platforms. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1274–1280. IEEE, 2021.", + "[56] T. Shankar, Y. Lin, A. Rajeswaran, V. Kumar, S. Anderson, and J. Oh. Translating robot skills: Learning unsupervised skill correspondences across robots. In International Conference on Machine Learning, pages 19626-19644. PMLR, 2022.", + "[57] M. Xu, Z. Xu, C. Chi, M. Veloso, and S. Song. Xskill: Cross embodiment skill discovery. In Conference on Robot Learning, pages 3536-3555. PMLR, 2023.", + "[58] Z.-H. Yin, L. Sun, H. Ma, M. Tomizuka, and W.-J. Li. Cross domain robot imitation with invariant representation. In 2022 International Conference on Robotics and Automation (ICRA), pages 455-461. IEEE, 2022.", + "[59] K. Zakka, A. Zeng, P. Florence, J. Tompson, J. Bohg, and D. Dwibedi. Xirl: Cross-embodiment inverse reinforcement learning. In Conference on Robot Learning, pages 537–546. PMLR, 2022.", + "[60] G. Zhang, L. Zhong, Y. Lee, and J. J. Lim. Policy transfer across visual and dynamics domain gaps via iterative grounding. arXiv preprint arXiv:2107.00339, 2021.", + "[61] Q. Zhang, T. Xiao, A. A. Efros, L. Pinto, and X. Wang. Learning cross-domain correspondence for control with dynamics cycle-consistency. arXiv preprint arXiv:2012.09811, 2020.", + "[62] S. Bahl, A. Gupta, and D. Pathak. Human-to-robot imitation in the wild. In RSS, 2022.", + "[63] C. Wang, L. Fan, J. Sun, R. Zhang, L. Fei-Fei, D. Xu, Y. Zhu, and A. Anandkumar. Mimicplay: Long-horizon imitation learning by watching human play. arXiv preprint arXiv:2302.12422, 2023.", + "[64] J. Engel, K. Somasundaram, M. Goesele, A. Sun, A. Gamino, A. Turner, A. Talattof, A. Yuan, B. Souti, B. Meredith, et al. Project aria: A new tool for egocentric multi-modal ai research. arXiv preprint arXiv:2308.13561, 2023.", + "[65] A. Khazatsky, K. Pertsch, S. Nair, A. Balakrishna, S. Dasari, S. Karamcheti, S. Nasiriany, M. K. Srirama, L. Y. Chen, K. Ellis, et al. Droid: A large-scale in-the-wild robot manipulation dataset. arXiv preprint arXiv:2403.12945, 2024.", + "[66] W. Zhu, X. Ma, Z. Liu, L. Liu, W. Wu, and Y. Wang. Motionbert: A unified perspective on learning human motion representations. In ICCV, 2023.", + "[67] M. Oquab, T. Darcet, T. Moutakanni, H. Vo, M. Szafraniec, V. Khalidov, P. Fernandez, D. Haziza, F. Massa, A. El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023." + ], + "bbox": [ + 173, + 90, + 825, + 911 + ], + "page_idx": 12 + }, + { + "type": "page_number", + "text": "13", + "bbox": [ + 490, + 935, + 506, + 946 + ], + "page_idx": 12 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[68] Y. Zhou, C. Barnes, J. Lu, J. Yang, and H. Li. On the continuity of rotation representations in neural networks. In CVPR, 2019.", + "[69] T. Yu, T. Xiao, A. Stone, J. Tompson, A. Brohan, S. Wang, J. 
Singh, C. Tan, J. Peralta, B. Ichter, et al. Scaling robot learning with semantically imagined experience. arXiv preprint arXiv:2302.11550, 2023." + ], + "bbox": [ + 171, + 90, + 825, + 176 + ], + "page_idx": 13 + }, + { + "type": "page_number", + "text": "14", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 13 + }, + { + "type": "image", + "img_path": "images/9f523a9fee12629f433e77be513b1017d58395d4ce05221409b633c594646209.jpg", + "image_caption": [], + "image_footnote": [ + "(a) The robot performs the cup passing task across four different backgrounds. The left side shows the four background variations, while the right side illustrates the two passing directions: (#1 - Right hand passes the cup to the left hand, #2 - Left hand passes the cup to the right hand)." + ], + "bbox": [ + 176, + 89, + 823, + 161 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/866552a849bd596593904a24e5c38eb7ae77607f532fc18e081a730a1cc169b6.jpg", + "image_caption": [], + "image_footnote": [ + "(b) The robot performs the horizontal grasping task with four different items: bottle, box_1, box_2, and can, as shown on the left. The right side illustrates the process: (#1-#3 - The robot grasps the bottle, #4-#5 - The robot places it into the plastic bin)." + ], + "bbox": [ + 176, + 213, + 821, + 285 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/2e761ca98fefa9973609a6e2dbaa3424910a39a79dc84bf885adf3d0f1436ddb.jpg", + "image_caption": [], + "image_footnote": [ + "(c) The robot performs the vertical grasping task. As shown on the left, the Dynamixel box is placed in nine different positions for grasping. The right side illustrates the process: (#1-#3 - The robot grasps the box, #4-#5 - The robot places the box into the plastic bin)." + ], + "bbox": [ + 176, + 338, + 821, + 409 + ], + "page_idx": 14 + }, + { + "type": "image", + "img_path": "images/f6f6620b6c543c6d3d819bc17471c003a9fd4c539b262052407a900dc69d0ac7.jpg", + "image_caption": [], + "image_footnote": [ + "(d) The robot performs the pouring task. The left side shows different settings achieved by varying the robot's rotation and the table's position. The right side illustrates the pouring process: (#1 - Right hand grasps the bottle, #2 - Left hand grasps the cup, #3 - Pouring the drink, #4 - Left hand places the cup down, #5 - Right hand places the bottle down).", + "Figure 7: Illustrations of tasks used in quantitative evaluations. From top to bottom: cup passing, horizontal grasping, vertical grasping, and pouring.", + "Table 4: Ablation of how human whole-body movement in training demonstrations affects policy rollout. We collect the same number of demonstrations on the same set of objects for the grasping task with or without whole-body movement. Since the robot does not have a natural whole-body movement like humans, it negatively influences the manipulation success rate." + ], + "bbox": [ + 176, + 462, + 821, + 534 + ], + "page_idx": 14 + }, + { + "type": "table", + "img_path": "images/f53fe4b5a54f07a6b85dba56c4181f3865ea2d2b8b764a6c5fa784393f1a626b.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Method | Bottle (I.D.) | Box1 (H.D.) | Box2 (H.D.) | Can (H.D.) | Ovr. Succ.
Without whole-body | 8/10 | 6/10 | 0/10 | 7/10 | 21/40
With whole-body | 9/10 | 3/10 | 3/10 | 3/10 | 18/40
", + "bbox": [ + 264, + 635, + 733, + 712 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "A More Ablation Study - Data Collection", + "text_level": 1, + "bbox": [ + 171, + 787, + 532, + 806 + ], + "page_idx": 14 + }, + { + "type": "text", + "text": "Autonomous Whole-body Movement. In Tab. 4, we justify the necessity to minimize body movement in human data collection. Humans tend to move their upper body unconsciously during manipulation (including shoulder and waist movement). However, existing humanoid robots have yet to reach such a level of dexterity. Thus, having these difficult-to-replicate actions in the human demonstrations leads to degraded performance. We hypothesize that such a necessity would be greatly reduced with the development of both whole-body locomotion methods and mechanical designs,", + "bbox": [ + 169, + 821, + 823, + 912 + ], + "page_idx": 14 + }, + { + "type": "page_number", + "text": "15", + "bbox": [ + 490, + 935, + 506, + 946 + ], + "page_idx": 14 + }, + { + "type": "table", + "img_path": "images/3db029e94cea3545b2bc9aece4ac03bbc7f164999355eafe4a0caf35755b81d5.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Method | Grasping (secs) | Pouring (secs)
Human Demo | 3.79±0.27 | 4.81±0.35
Human Demo with VR | 4.09±0.30 | 4.90±0.26
Humanoid Demo (VR Teleop) | 19.72±1.65 | 37.31±6.25
", + "bbox": [ + 269, + 88, + 727, + 157 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Table 5: Amortized mean and standard deviation of the time required to collect a single demonstration, including scene resets. The first row shows the time for regular human to complete corresponding tasks in real world. The second row represents our human data when wearing VR for data collection, demonstrating that egocentric human demonstrations provide a more scalable data source compared to robot teleoperation.", + "bbox": [ + 169, + 157, + 823, + 227 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "but for the currently available platforms, we instruct operators to minimize body movement as much as possible in our dataset.", + "bbox": [ + 169, + 255, + 823, + 285 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Efficiency of Data Collection. In Tab. 5, we compare task completion times across different setups, including standard human manipulation, human demonstrations performed while wearing a VR device, and robot teleoperation. This analysis highlights how task-oriented human demonstrations can be a scalable data source for cross-embodiment learning. Notably, wearing a VR device does not significantly impact human manipulation speed, as the completion time remains nearly the same as in standard human demonstrations.", + "bbox": [ + 169, + 292, + 823, + 381 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Among different data collection schemes, we find that most overhead arises during the retargeting process from human actions to robot actions. This is primarily due to latency and the constrained workspace of 7-DoF robotic arms, which are inherent challenges in existing data collection methods such as VR teleoperation [9], motion tracking [48, 10], and puppeting [8, 5].", + "bbox": [ + 169, + 388, + 823, + 450 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Beyond data collection speed, human demonstrations offer several additional advantages over teleoperation. They provide a safer alternative, reducing risks associated with real-robot execution. They are also more labor-efficient, as they do not require additional personnel for supervision. Furthermore, human demonstrations allow for greater flexibility in settings, enabling a diverse range of environments without requiring robot-specific adaptations. Additionally, human demonstrations achieve a higher demonstration success rate, and the required hardware (such as motion capture or VR devices) is more accessible and cost-effective compared to full robotic setups. These factors collectively make human data a more scalable solution for large-scale data collection.", + "bbox": [ + 169, + 455, + 825, + 578 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "B Normalization of different embodiments.", + "text_level": 1, + "bbox": [ + 171, + 595, + 549, + 613 + ], + "page_idx": 15 + }, + { + "type": "text", + "text": "Tab. 2 suggests minor differences between using different normalization coefficients for the states and actions vectors of humans and humanoids. We take a closer look in Fig. 8, where we investigate the impact of different normalization strategies in the vertical grasping (picking) task. 
Notably, sharing the same normalization coefficients across embodiments achieved the highest overall success rate, but its success distribution is biased towards the upper-right region of the grid.",
Method | Bottle (I.D.) | Box1 (H.D.) | Box2 (O.O.D.) | Can (O.O.D.) | Ovr. Succ.
ACT | 8/10 | 5/10 | 1/10 | 1/10 | 16/40
HAT | 8/10 | 7/10 | 1/10 | 4/10 | 21/40
", + "bbox": [ + 292, + 88, + 704, + 161 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Table 6: Object Appearance Generalization: In the horizontal grasping task, we evaluated the grasping performance by attempting to grasp each object 10 times and recorded the success rate.", + "bbox": [ + 169, + 164, + 823, + 193 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "success rate increases by nearly $50\\%$ compared to training without human data, highlighting the advantage of utilizing diverse human demonstrations.", + "bbox": [ + 169, + 258, + 823, + 291 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Human data improves appearance generalization. To test how co-training improves robustness to perturbations in object textures, we evaluate the horizontal grasping policy on novel objects, as shown in Fig. 7b. Specifically, we compare the policy's performance on the bottle, box_1, box_2, and can, as shown left to right in the first image in Fig. 7b. These objects differ significantly in both color and shape from the bottle used in the teleoperation data distribution.", + "bbox": [ + 169, + 297, + 823, + 372 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Since grasping is a relatively simple task, our adjusted policy demonstrates strong learning capabilities even with only 50 teleoperation data samples. The policy can successfully grasp most bottles despite the limited training set. To better highlight the impact of human data, we selected more challenging objects for evaluation. As shown in Tab. 6, human data significantly enhances the policy's ability to grasp these more difficult objects.", + "bbox": [ + 169, + 378, + 823, + 455 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Notably, box_1 appears in the human data, while box_2 does not. Despite this, we observe that constraining with human data still improves overall performance, even on box_2, though its success rate does not increase. This suggests that, beyond direct experience with specific objects, the human data helps the policy learn broader visual priors that enable more proactive and stable grasping behaviors. For box_2, while the success rate remains low—partially due to its low height and color similarity to the table—the co-trained HAT policy demonstrates fewer out-of-distribution (OOD) failures and more actively searches for graspable regions. The failures on box_2 are primarily due to unstable grasping and the small box slipping from the hand, rather than the inability to perceive or locate the object.", + "bbox": [ + 169, + 460, + 825, + 597 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Furthermore, adding more human data not only improves performance on objects seen in human training demonstrations (e.g., box_1) but also enhances generalization to completely novel objects (e.g., box_2 and can). We hypothesize that, as the number of objects grows, HAT starts to learn inter-category visual priors that guide it to grasp objects more effectively, even when they were not explicitly present in the training set.", + "bbox": [ + 169, + 604, + 823, + 679 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "Human data improves object placement generalization. Finally, we introduce variations in object placements that are not present in the real-robot training demonstrations and specifically investigate this in the vertical grasping (picking) task. 
In this task, we intentionally constrain the robot data collection to object placements within a subset of cells, while human vertical grasping data covers a much more diverse range of settings.", + "bbox": [ + 169, + 686, + 823, + 762 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "To systematically analyze the impact of human data, we evaluate model performance on a structured $3 \\times 3$ grid, where each cell represents a $10\\mathrm{cm} \\times 10\\mathrm{cm}$ region for grasping attempts. The numbers in each cell indicate the number of successful picks out of 10 trials. Real-robot training data is collected from only two specific cells, highlighted with dashed lines.", + "bbox": [ + 169, + 768, + 823, + 829 + ], + "page_idx": 16 + }, + { + "type": "text", + "text": "A key detail in our teleoperation data distribution is that 50 picking attempts are collected from the right-hand side grid and only 10 from the left-hand side grid. This imbalance explains why policies trained purely on teleoperation data struggle to grasp objects in the left-side grid. We observe that models trained solely on robot data fail to generalize to unseen cells, whereas cross-embodiment learning with human data significantly improves generalization, doubling the overall success rate.", + "bbox": [ + 169, + 835, + 823, + 912 + ], + "page_idx": 16 + }, + { + "type": "page_number", + "text": "17", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 16 + }, + { + "type": "image", + "img_path": "images/1f6e163acefae2ad12158fbc4b7a7b2831a66bf5aa2f92de897b28a3f85082ae.jpg", + "image_caption": [ + "Figure 8: Object Placement Generalization. Performance comparisons of models trained with and without human data on vertical grasping (picking). Each cell in the $3 \\times 3$ grid represents a $10\\mathrm{cm} \\times 10\\mathrm{cm}$ region where the robot attempts to pick up a box, with numbers indicating successful attempts out of 10. The real-robot data is collected in two cells inside the dashed lines. Notably, our teleoperation data is intentionally imbalanced." + ], + "image_footnote": [], + "bbox": [ + 308, + 90, + 689, + 258 + ], + "page_idx": 17 + }, + { + "type": "table", + "img_path": "images/9343979d54558a796b3bdf65a7f11a9152c2734e0eeae43ff4f84b1a962e2643.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Method | Paper (I.D.) | Wooden (H.D.) | Red (O.O.D.) | Green (O.O.D.) | Ovr. Succ.
ACT | 19/20 | 14/20 | 12/20 | 10/20 | 55/80
HAT | 20/20 | 16/20 | 18/20 | 18/20 | 72/80
", + "bbox": [ + 285, + 349, + 712, + 425 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "Table 7: Background Generalization: In the cup passing task, we evaluate the passing performance by recording the number of failures or retries needed to complete 20 cup-passing trials.", + "bbox": [ + 169, + 425, + 823, + 455 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "D In-Depth Comparison between Humanoid A and Humanoid B configurations", + "text_level": 1, + "bbox": [ + 171, + 487, + 728, + 523 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "This section presents a detailed comparison of the two humanoid platforms, referred to as Humanoid A and Humanoid B, with a focus on joint structure and implications for manipulation capabilities. We restrict our analysis to the arm configurations, as other parts of the body were not exclusively explored in this work.", + "bbox": [ + 169, + 537, + 823, + 598 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "While morphologically similar, these two humanoids have drastically different arm configurations that create hurdles in direct policy transfer. Besides differences in motor technical specs such as torque and types of encoder (Humanoid B has absolute motor position encoders), they also have different mechanical limits. The range of motion (ROM) for the first four proximal joints—shoulder-pitch, shoulder-roll, shoulder_yaw, and elbow—differs across the two platforms. Humanoid B exhibits a consistently wider ROM, which allows a wider set of reachable configurations and increases the manipulability of the arm in constrained environments. Table 8 summarizes the ROM values for these shared joints.", + "bbox": [ + 169, + 604, + 823, + 726 + ], + "page_idx": 17 + }, + { + "type": "text", + "text": "A more significant architectural divergence is observed at the wrist. Humanoid A includes a single distal joint—wrist-roll—providing limited wrist articulation. This restricts end-effector dexterity and constrains in-hand manipulation strategies to a single rotational degree of freedom. In contrast, Humanoid B is equipped with a complete wrist mechanism composed of three independently actuated joints: wrist_pitch, wrist-roll, and wrist_yaw. These additional degrees of freedom allow for full orientation control of the end-effector, enabling tasks that require precise alignment, rotation, and fine adjustment of object poses.", + "bbox": [ + 169, + 732, + 826, + 839 + ], + "page_idx": 17 + }, + { + "type": "page_number", + "text": "18", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 17 + }, + { + "type": "table", + "img_path": "images/36db00f148a7c1259dadae00242a99645c3dae4013bbde9ba35fdbdb2e48afcc.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Joint | Humanoid A | Humanoid B
shoulder_pitch | -164° to +164° | -180° to +90°
shoulder_roll | -19° to +178° | -21° to +194°
shoulder_yaw | -74° to +255° | -152° to +172°
elbow | -71° to 150° | -54° to 182°
wrist_roll | -175° to 175° | -172° to 157°
", + "bbox": [ + 308, + 450, + 687, + 535 + ], + "page_idx": 18 + }, + { + "type": "text", + "text": "Table 8: Joint Range of Motion Comparison between Humanoid A and B (in degrees)", + "bbox": [ + 217, + 536, + 779, + 550 + ], + "page_idx": 18 + }, + { + "type": "page_number", + "text": "19", + "bbox": [ + 490, + 935, + 508, + 946 + ], + "page_idx": 18 + } +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13441/7dee0cb2-adf2-4076-a132-e57fc86ae441_model.json b/data/2025/2503_13xxx/2503.13441/7dee0cb2-adf2-4076-a132-e57fc86ae441_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c2479f2e2df4a25363731b06fef1d3e4fcceeca2 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13441/7dee0cb2-adf2-4076-a132-e57fc86ae441_model.json @@ -0,0 +1,2823 @@ +[ + [ + { + "type": "aside_text", + "bbox": [ + 0.029, + 0.284, + 0.061, + 0.712 + ], + "angle": 270, + "content": "arXiv:2503.13441v3 [cs.RO] 5 Oct 2025" + }, + { + "type": "title", + "bbox": [ + 0.29, + 0.103, + 0.71, + 0.128 + ], + "angle": 0, + "content": "Humanoid Policy \\(\\sim\\) Human Policy" + }, + { + "type": "text", + "bbox": [ + 0.194, + 0.15, + 0.803, + 0.167 + ], + "angle": 0, + "content": "Ri-Zhao Qiu*,1 Shiqi Yang*,1 Xuxin Cheng*,1 Chaitanya Chawla*,2 Jialong Li" + }, + { + "type": "text", + "bbox": [ + 0.249, + 0.168, + 0.744, + 0.182 + ], + "angle": 0, + "content": "Tairan He² Ge Yan⁴ David Yoon³ Ryan Hoque³ Lars Paulsen¹" + }, + { + "type": "text", + "bbox": [ + 0.251, + 0.182, + 0.74, + 0.197 + ], + "angle": 0, + "content": "Ge Yang\\(^{5}\\) Jian Zhang\\(^{3}\\) Sha Yi\\(^{1}\\) Guanya Shi\\(^{2}\\) Xiaolong Wang\\(^{1}\\)" + }, + { + "type": "text", + "bbox": [ + 0.273, + 0.197, + 0.728, + 0.212 + ], + "angle": 0, + "content": "1 UC San Diego, 2 CMU, 3 Apple, 4 University of Washington, 5 MIT" + }, + { + "type": "text", + "bbox": [ + 0.356, + 0.213, + 0.642, + 0.226 + ], + "angle": 0, + "content": "https://human-as-robot.github.io/" + }, + { + "type": "image_caption", + "bbox": [ + 0.252, + 0.258, + 0.343, + 0.27 + ], + "angle": 0, + "content": "Egocentric Vision" + }, + { + "type": "image_caption", + "bbox": [ + 0.208, + 0.301, + 0.28, + 0.32 + ], + "angle": 0, + "content": "Small-scale Humanoid Data" + }, + { + "type": "image_caption", + "bbox": [ + 0.211, + 0.331, + 0.266, + 0.341 + ], + "angle": 0, + "content": "1.5k demos" + }, + { + "type": "image", + "bbox": [ + 0.286, + 0.288, + 0.401, + 0.343 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.43, + 0.259, + 0.571, + 0.271 + ], + "angle": 0, + "content": "Unified State-Action Space" + }, + { + "type": "image", + "bbox": [ + 0.43, + 0.276, + 0.57, + 0.324 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.457, + 0.353, + 0.544, + 0.366 + ], + "angle": 0, + "content": "Fingers/Wrist" + }, + { + "type": "image_caption", + "bbox": [ + 0.207, + 0.39, + 0.264, + 0.409 + ], + "angle": 0, + "content": "Large-scale Human Data" + }, + { + "type": "image_caption", + "bbox": [ + 0.212, + 0.422, + 0.262, + 0.431 + ], + "angle": 0, + "content": "27k demos" + }, + { + "type": "image", + "bbox": [ + 0.271, + 0.371, + 0.403, + 0.438 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.433, + 0.398, + 0.565, + 0.438 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.665, + 0.259, + 0.741, + 0.269 + ], + "angle": 0, + "content": "Robot Policies" + }, + { + "type": "image", + "bbox": [ + 0.609, + 0.274, + 
0.708, + 0.331 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.705, + 0.274, + 0.798, + 0.331 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.609, + 0.332, + 0.705, + 0.389 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.705, + 0.332, + 0.797, + 0.389 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.609, + 0.39, + 0.705, + 0.448 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.705, + 0.39, + 0.798, + 0.448 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.17, + 0.455, + 0.825, + 0.538 + ], + "angle": 0, + "content": "Figure 1: This paper advocates high-quality human data as a data source for cross-embodiment learning - task-oriented egocentric human data. We collect a large-scale dataset, Physical Human-Humanoid Data \\((\\mathbf{PH}^2\\mathbf{D})\\), with hand-finger 3D poses from consumer-grade VR devices on well-defined manipulation tasks directly aligned with robots. Without relying on modular perception, we train a Human Action Transformer (HAT) manipulation policy by directly modeling humans as a different humanoid embodiment in an end-to-end manner." + }, + { + "type": "text", + "bbox": [ + 0.23, + 0.553, + 0.767, + 0.78 + ], + "angle": 0, + "content": "Abstract: Training manipulation policies for humanoid robots with diverse data enhances their robustness and generalization across tasks and platforms. However, learning solely from robot demonstrations is labor-intensive, requiring expensive tele-operated data collection, which is difficult to scale. This paper investigates a more scalable data source, egocentric human demonstrations, to serve as cross-embediment training data for robot learning. We mitigate the embodiment gap between humanoids and humans from both the data and modeling perspectives. We collect an egocentric task-oriented dataset \\((\\mathbf{PH}^2\\mathbf{D})\\) that is directly aligned with humanoid manipulation demonstrations. We then train a human-humanoid behavior policy, which we term Human Action Transformer (HAT). The state-action space of HAT is unified for both humans and humanoid robots and can be differentiably retargeted to robot actions. Co-trained with smaller-scale robot data, HAT directly models humanoid robots and humans as different embodiments without additional supervision. We show that human data improve both generalization and robustness of HAT with significantly better data collection efficiency." + }, + { + "type": "text", + "bbox": [ + 0.231, + 0.792, + 0.66, + 0.807 + ], + "angle": 0, + "content": "Keywords: Robot Manipulation, Cross-Embodiment, Humanoid" + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.827, + 0.314, + 0.842 + ], + "angle": 0, + "content": "1 Introduction" + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.859, + 0.825, + 0.905 + ], + "angle": 0, + "content": "Learning from real robot demonstrations has led to great progress in robotic manipulation recently [1, 2, 3, 4]. One key advancement to enable such progress was hardware / software codesigns to scale up data collection using teleoperation [5, 6, 7, 8, 9, 10] and directly controlling" + }, + { + "type": "footer", + "bbox": [ + 0.172, + 0.926, + 0.551, + 0.94 + ], + "angle": 0, + "content": "9th Conference on Robot Learning (CoRL 2025), Seoul, Korea." 
+ } + ], + [ + { + "type": "text", + "bbox": [ + 0.17, + 0.092, + 0.825, + 0.155 + ], + "angle": 0, + "content": "the robot end effector [11, 12, 5, 6, 13, 7]. Instead of gathering data on a single robot, collective efforts have been made to merge diverse robot data and train foundational policies across embodiments [11, 14, 2, 1, 3, 4], which have shown to improve cross-embodiment and cross-task generalizability." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.159, + 0.365, + 0.355 + ], + "angle": 0, + "content": "However, collecting structured real-robot data is expensive and time-consuming. We are still far away from building a robust and generalizable model as what has been achieved in Computer Vision [17] and NLP [18]. If we examine humanoid robot teleoperation more closely, it involves robots mimicking human actions" + }, + { + "type": "table", + "bbox": [ + 0.379, + 0.17, + 0.822, + 0.264 + ], + "angle": 0, + "content": "
Dataset | Human # Frames | Human # Demos | Robot # Frames | Robot # Demos
DexCap [15] | ~378k | 787 | NA | NA
EgoMimic [16] | ~432k† | 2,150 | 1.29M† | 1,000
PH2D (Ours) | ~3.02M | 26,824 | ~668k | 1,552
" + }, + { + "type": "table_caption", + "bbox": [ + 0.375, + 0.266, + 0.825, + 0.351 + ], + "angle": 0, + "content": "Table 1: Comparisons of task-oriented egocentric human datasets. Besides having the most demonstrations, \\(\\mathrm{PH}^2\\mathrm{D}\\) is collected on various manipulation tasks, diverse objects and scenes, with accurate 3D hand-finger poses and language annotations. †: estimated based on reported data collection time with \\(30\\mathrm{Hz}\\); whereas DexCap [15] and \\(\\mathrm{PH}^2\\mathrm{D}\\) report processed frames for training." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.355, + 0.827, + 0.432 + ], + "angle": 0, + "content": "using geometric transforms or retargeting to control robot joints and end-effectors. From this perspective, we propose to model robots in a human-centric representation, and the robot action is just a transformation away from the human action. If we can accurately capture the end-effector and head poses of humans, egocentric human demonstrations will be a more scalable source of training data, as we can collect them efficiently, in any place, and without a robot." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.438, + 0.827, + 0.62 + ], + "angle": 0, + "content": "In this paper, we perform cross-human and humanoid embodiment training for robotic manipulation. Our key insight is to model bimanual humanoid behaviors by directly imitating human behaviors without using learning surrogates such as affordances [19, 20]. To realize this, we first collect an egocentric task-oriented dataset of Physical Humanoid-Human Data, dubbed \\(\\mathrm{PH^2D}\\). We adapt consumer-grade VR devices to collect egocentric videos with automatic but accurate hand pose and end effector (i.e., hand) annotations. Compared to existing human daily behavior datasets [21, 22], \\(\\mathrm{PH^2D}\\) is task-oriented so that it can be directly used for co-training. The same VR hardwares are then used to perform teleoperation to collect smaller-scale humanoid data for better alignment. We then train a Human-humanoid Action Transformer (HAT), which predicts future hand-finger trajectories in a unified human-centric state-action representation space. To obtain robot actions, we simply apply inverse kinematics and hand retargeting to differentiably convert human actions to robot actions for deployment." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.625, + 0.825, + 0.703 + ], + "angle": 0, + "content": "We conduct real-robot evaluations on different manipulation tasks with extensive ablation studies to investigate how to best align human and humanoid demonstrations. In particular, we found that co-training with diverse human data improves robustness against spatial variance and background perturbation, generalizing in settings unseen in robot data but seen in human data. We believe that these findings highlight the potential of using human data for large-scale cross-embodiment learning." + }, + { + "type": "text", + "bbox": [ + 0.172, + 0.709, + 0.405, + 0.723 + ], + "angle": 0, + "content": "In summary, our contributions are:" + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.733, + 0.825, + 0.763 + ], + "angle": 0, + "content": "- A dataset, \\(\\mathrm{PH}^2\\mathrm{D}\\), which is a large egocentric, task-oriented human-humanoid dataset with accurate hand and wrist poses for modeling human behavior (see Tab. 1)." 
+ }, + { + "type": "text", + "bbox": [ + 0.217, + 0.768, + 0.825, + 0.797 + ], + "angle": 0, + "content": "- A cross human-humanoid manipulation policy, HAT, that introduces a unified state-action space and other alignment techniques for humanoid manipulation." + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.802, + 0.825, + 0.832 + ], + "angle": 0, + "content": "- Improved policy robustness and generalization validated by extensive experiments and ablation studies to show the benefits of co-training with human data." + }, + { + "type": "list", + "bbox": [ + 0.217, + 0.733, + 0.825, + 0.832 + ], + "angle": 0, + "content": null + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.851, + 0.323, + 0.867 + ], + "angle": 0, + "content": "2 Related Work" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.882, + 0.825, + 0.913 + ], + "angle": 0, + "content": "Imitation Learning for Robot Manipulation. Recently, learning robot policy with data gathered directly from the multiple and target robot embodiment has shown impressive robustness and dex-" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.505, + 0.948 + ], + "angle": 0, + "content": "2" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.171, + 0.092, + 0.825, + 0.152 + ], + "angle": 0, + "content": "terity [23, 2, 24, 1, 25, 26, 9, 27, 28]. The scale of data for imitation learning has grown substantially with recent advancements in data collection [29, 9, 7, 8], where human operators can efficiently collect large amounts of high-quality, task-oriented data. Despite these advances, achieving open-world generalization still remains a significant challenge due to lack of internet-scale training data." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.16, + 0.825, + 0.205 + ], + "angle": 0, + "content": "Learning from Human Videos. Learning policies from human videos is a long-standing topic in both computer vision and robotics due to the vast existence of human data. Existing works can be approximately divided into two categories: aligning observations or actions." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.212, + 0.827, + 0.362 + ], + "angle": 0, + "content": "Learn from Human - Aligning Observations. While teleoperating the actual robot platform allows learning policy with great dexterity, there is still a long way to go to achieve higher levels of generalization across diverse tasks, environments, and platforms. Unlike fields such as computer vision [17] and natural language processing [18] benefiting from internet-scale data, robot data collection in the real world is far more constrained. Various approaches have attempted to use internet-scale human videos to train robot policies [30, 31, 32, 33, 34, 35]. Due to various discrepancies (e.g., supervision and viewpoints) between egocentric robot views and internet videos, most existing work [19, 20] use modular approaches with intermediate representations as surrogates for training. The most representative ones are affordances [19, 20] for object interaction, object keypoints predictions [36, 37, 38, 39, 40], or other types of object representations [41, 42, 43]." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.37, + 0.827, + 0.55 + ], + "angle": 0, + "content": "Learn from Human - Aligning Actions. Beyond observation alignment, transferring human demonstrations to robotic platforms introduces additional challenges due to differences in embodiment, actuation, and control dynamics. 
Specific alignment of human and robot actions is required to overcome these disparities. Approaches have employed masking in egocentric views [16], aligning motion trajectories or flow [44, 45], object-centric actions [46, 47], or hand tracking with specialized hardware [15]. Most closely related to our work, HumanPlus [48] designs a remapping method from 3D human pose estimation to tele-operate humanoid robots. Compared to HumanPlus, the insight of our method is to waive the requirement for robot hardware in collecting human data and collect diverse human data directly for co-training. In contrast to HumanPlus, we intentionally avoid performing retargeting on human demonstrations and designed the policy to directly use human hand poses as states/actions. On the other hand, the 'human shadowing' retargeting in HumanPlus is a teleoperation method that still requires robots, leading to lower collection efficiency than ours." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.558, + 0.827, + 0.678 + ], + "angle": 0, + "content": "Cross-Embodiment. Cross-embodiment pre-training has been shown to improve adaptability and generalization over different embodiments [49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61]. When utilizing human videos, introducing intermediate representations can be prone to composite errors. Recent works investigate end-to-end approaches [2, 24, 1, 3] using cross-embodied robot data to reduce such compounding perceptive errors. Noticeably, these works have found that such end-to-end learning leads to desired behaviors such as retrying [3]. Some other work [62, 38] enforces viewpoint constraints between training human demonstrations and test-time robot deployment to allow learning on human data but it trades off the scalability of the data collection process." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.685, + 0.827, + 0.835 + ], + "angle": 0, + "content": "Concurrent Work. Some concurrent work [15, 16, 63] also attempts to use egocentric human demonstrations for end-to-end cross-embedment policy learning. DexCap [15] uses gloves to track 3D hand poses with a chest-mounted RGBD camera to capture egocentric human videos. However, DexCap relies on 3D inputs, whereas some recent works [3, 1] have shown the scalability of 2D visual inputs. Most related to our work, EgoMimic [16] also proposes to collect data using wearable device [64] with 2D visual inputs. However, EgoMimic requires strict visual sensor alignments; whereas we show that scaling up diverse observations with different cameras makes the policy more robust. In addition, \\(\\mathrm{PH}^2\\mathrm{D}\\) is also greater in dataset scale and object diversity. We also show our policy can be deployed on real robots without strict requirements of visual sensors and heuristics, which paves the way for scalable data collection." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.504, + 0.948 + ], + "angle": 0, + "content": "3" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.173, + 0.09, + 0.273, + 0.106 + ], + "angle": 0, + "content": "3 Method" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.124, + 0.827, + 0.23 + ], + "angle": 0, + "content": "To collect more data to train generalizable robot policies, recent research has explored cross-embodiment learning, enabling policies to generalize across diverse physical forms [3, 1, 4, 2, 65, 14]. This paper proposes egocentric human manipulation demonstrations as a scalable source of cross-embodiment training data. Sec. 
3.1 describes how we adapt consumer-grade VR devices to conveniently scale up human data collection into a dataset of task-oriented egocentric human demonstrations. Sec. 3.2 describes various techniques to handle domain gaps and align human data with robot data for learning a humanoid manipulation policy." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.245, + 0.598, + 0.261 + ], + "angle": 0, + "content": "3.1 \\(\\mathbf{PH}^2\\mathbf{D}\\): Task-oriented Physical Humanoid-Human Data" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.273, + 0.825, + 0.334 + ], + "angle": 0, + "content": "Though existing works collect egocentric human videos [16, 22, 21, 15], they either (1) provide demonstrations mostly for non-task-oriented skills (e.g., dancing) and do not provide world-frame 3D head and hand pose estimates for imitation learning supervision [21, 22] or (2) require specialized hardware or robot setups [15, 16]." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.34, + 0.486, + 0.476 + ], + "angle": 0, + "content": "To address these issues, we propose \\(\\mathrm{PH^2D}\\), which (1) collects task-oriented human demonstrations that are directly related to robot execution, (2) adapts well-engineered SDKs of VR devices (illustrated in Fig. 2) to provide supervision, and (3) diversifies tasks and camera sensors while reducing whole-body movement, which narrows domain gaps in both vision and behaviors." + }, + { + "type": "image", + "bbox": [ + 0.498, + 0.337, + 0.825, + 0.432 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.495, + 0.438, + 0.825, + 0.502 + ], + "angle": 0, + "content": "Figure 2: Consumer-grade Devices for Data Collection. To keep our method scalable and avoid relying on specialized hardware, we design our data collection process around consumer-grade VR devices." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.493, + 0.485, + 0.522 + ], + "angle": 0, + "content": "Adapting Low-cost Commercial Devices With developments in pose estimation [66] and" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.523, + 0.825, + 0.584 + ], + "angle": 0, + "content": "system engineering, modern mobile devices are capable of providing accurate on-device world-frame 3D head pose tracking and 3D hand keypoint tracking [9], which has proved stable enough to teleoperate robots in real time [9, 13]. We design software and hardware to support convenient data collection across different devices. Different cameras provide better visual diversity." + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.596, + 0.825, + 0.626 + ], + "angle": 0, + "content": "- Apple Vision Pro + Built-in Camera. We developed a Vision OS App that uses the built-in camera for visual observation and uses Apple ARKit for 3D head and hand poses." + }, + { + "type": "text", + "bbox": [ + 0.217, + 0.632, + 0.825, + 0.692 + ], + "angle": 0, + "content": "- Meta Quest 3 / Apple Vision Pro + ZED Camera. We developed a web-based application based on OpenTelevision [9] to gather 3D head and hand poses. We also designed a 3D-printed holder to mount ZED Mini Stereo cameras on these devices. This configuration is both low-cost (\\(<700\\)) and introduces more diversity with stereo cameras."
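For concreteness, the minimal sketch below shows the kind of per-frame record these consumer VR SDKs can expose: a world-frame head pose, wrist poses, and fingertip keypoints next to the egocentric image. All field names and shapes are illustrative assumptions, not the released PH^2D format.

```python
# Hypothetical per-frame record of an egocentric human demonstration, roughly
# matching what the consumer-grade VR SDKs above expose (world-frame head and
# wrist poses plus fingertip keypoints). Names and shapes are illustrative.
from dataclasses import dataclass
import numpy as np


@dataclass
class EgoFrame:
    timestamp: float              # device clock, in seconds
    rgb: np.ndarray               # egocentric image, e.g. (H, W, 3) uint8
    head_pose: np.ndarray         # (4, 4) world-frame SE(3) transform
    left_wrist_pose: np.ndarray   # (4, 4) world-frame SE(3) transform
    right_wrist_pose: np.ndarray  # (4, 4) world-frame SE(3) transform
    left_fingertips: np.ndarray   # (5, 3) fingertip positions, world frame
    right_fingertips: np.ndarray  # (5, 3) fingertip positions, world frame
```

A stereo setup such as the ZED configuration would simply carry a second image per record.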
+ }, + { + "type": "list", + "bbox": [ + 0.217, + 0.596, + 0.825, + 0.692 + ], + "angle": 0, + "content": null + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.709, + 0.825, + 0.785 + ], + "angle": 0, + "content": "Data Collection Pipeline We collect task-oriented egocentric human demonstrations by asking human operators to perform tasks overlapping with robot execution (e.g., grasping and pouring) when wearing the VR devices. For every demonstration, we provide language instructions (e.g., grasp a can of coke zero with right hand), and synchronize proprioception inputs and visual inputs by closest timestamps." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.792, + 0.827, + 0.913 + ], + "angle": 0, + "content": "Action Domain Gap. Human actions and tele-operated robot actions exhibit two distinct characteristics: (1) human manipulation usually involves involuntary whole-body movement, and (2) humans are more dexterous than robots and have significantly faster task completion time than robots. We mitigate the first gap by requesting the human data collectors to sit in an upright position. For the second speed gap, we interpolate translation and rotations of human data during training (effectively 'slowing down' actions). The slow-down factors \\(\\alpha_{\\mathrm{slow}}\\) are obtained by normalizing the average task completion time of humans and humanoids, which is empirically distributed around 4. For consistency, we use \\(\\alpha_{\\mathrm{slow}} = 4\\) in all tasks." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.505, + 0.947 + ], + "angle": 0, + "content": "4" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.18, + 0.089, + 0.825, + 0.316 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.17, + 0.327, + 0.828, + 0.398 + ], + "angle": 0, + "content": "Figure 3: Overview of HAT. Human Action Transformer (HAT) learns a robot policy by modeling humans. During training, we sample a state-action pair from either human data or robot data. The images are encoded by a frozen DinoV2 encoder [67]. The HAT model makes predictions in a human-centric action space using wrist 6 DoF poses and finger tips, which is retargeted to robot poses during real-robot deployment." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.418, + 0.455, + 0.433 + ], + "angle": 0, + "content": "3.2 HAT: Human Action Transformer" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.447, + 0.825, + 0.494 + ], + "angle": 0, + "content": "HAT learns cross-embodied robot policy by modeling humans. We demonstrate that treating bi-manual humanoid robots and humans as different robot embodiments via retargeting improves both generalizability and robustness of HAT." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.498, + 0.825, + 0.577 + ], + "angle": 0, + "content": "More concretely, let \\(\\mathcal{D}_{\\text{robot}} = \\{(\\mathbf{S}_i, \\mathbf{A}_i)\\}_{i=1}^N\\) be the set of data collected from real bimanual humanoid robots using teleoperation [9], where \\(\\mathbf{S}_i\\) is the states including proprioceptive and visual observations of \\(i\\)-th demonstration and \\(\\mathbf{A}_i\\) be the actions. The collected \\(\\mathrm{PH}^2\\mathrm{D}\\) dataset, \\(\\mathcal{D}_{\\text{human}} = \\{(\\tilde{\\mathbf{S}}_i, \\tilde{\\mathbf{A}}_i)\\}_{i=1}^M\\) is used to augment the training process. Note that it is reasonable to assume \\(M \\gg N\\) due to the significantly better human data collection efficiency." 
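To make the co-training setup concrete, the hedged sketch below illustrates two of the ingredients described above: drawing each training pair from either the human dataset or the robot dataset, as in Fig. 3, and slowing down a human wrist trajectory by the empirical factor of 4 from Sec. 3.1 via linear interpolation of translations and spherical interpolation of rotations. The 50/50 sampling ratio, function names, and data layout are assumptions, not the authors' released implementation.

```python
# Illustrative sketch (not the released HAT code): co-training sampling and the
# "slow down" interpolation of human action trajectories with alpha_slow = 4.
import random
import numpy as np
from scipy.spatial.transform import Rotation, Slerp


def slow_down_trajectory(translations: np.ndarray, rotations: Rotation, alpha_slow: int = 4):
    """Resample a human wrist trajectory so it plays back alpha_slow times slower.

    translations: (T, 3) wrist positions; rotations: scipy Rotation with T elements.
    Returns a denser trajectory with alpha_slow * (T - 1) + 1 steps.
    """
    T = translations.shape[0]
    src_t = np.arange(T, dtype=np.float64)
    dst_t = np.linspace(0.0, T - 1, alpha_slow * (T - 1) + 1)
    # Linear interpolation for positions, spherical interpolation (Slerp) for rotations.
    trans_out = np.stack(
        [np.interp(dst_t, src_t, translations[:, d]) for d in range(3)], axis=-1
    )
    rot_out = Slerp(src_t, rotations)(dst_t)
    return trans_out, rot_out


def sample_batch(robot_pairs, human_pairs, batch_size: int, p_human: float = 0.5):
    """Draw each (state, action) pair from either embodiment, as in Fig. 3."""
    batch = []
    for _ in range(batch_size):
        pool = human_pairs if random.random() < p_human else robot_pairs
        batch.append(random.choice(pool))
    return batch
```

Slerp keeps the interpolated orientations valid rotations, which naive element-wise interpolation of rotation parameters would not guarantee.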
+ }, + { + "type": "text", + "bbox": [ + 0.171, + 0.581, + 0.827, + 0.658 + ], + "angle": 0, + "content": "The goal is to design a policy \\(\\pi : \\mathbf{S} \\to \\mathbf{A}\\) that predicts future robot actions \\(\\mathbf{a}_t\\) given current robot observation \\(\\mathbf{s}_t\\) at time \\(t\\), where the future actions \\(\\mathbf{a}_{t+1}\\) is usually a chunk of actions for multi-step execution (with slight abuse of notation). We model \\(\\pi\\) using HAT, which is a transformer-based architecture predicting action chunks [5]. The overview of the model is illustrated in Fig. 3. We discuss key design choices of HAT with experimental ablations." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.664, + 0.827, + 0.801 + ], + "angle": 0, + "content": "Unified State-Action Space. Both bimanual robots and humans have two end effectors. In our case, our robots are also equipped with an actuated 2DoF neck that can rotate, which resembles the autonomous head movement when humans perform manipulation. Therefore, we design a unified state-action space (i.e., \\((\\mathbf{S},\\mathbf{A})\\equiv (\\tilde{\\mathbf{S}},\\tilde{\\mathbf{A}}))\\) for both bimanual robots and humans. More concretely, the proprioceptive observation is a 54-dimensional vector (6D rotations [68] of the head, left wrist, and right wrist; \\(x / y / z\\) of left and right wrists and 10 finger tips). In this work, since we deploy our policy on robots with 5-fingered dexterous hands (shown in Fig. 4), there exists a bijective mapping between the finger tips of robot hands and human hands. Note that injective mapping is also possible (e.g., mapping distance between the thumb finger and other fingers to parallel gripper distance)." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.807, + 0.825, + 0.913 + ], + "angle": 0, + "content": "Visual Domain Gap. Two types of domain gaps exist for co-training on human/humanoid data: camera sensors and end effector appearance. Since our human data collection process includes cameras different from robot deployment, this leads to camera domain gaps such as tones. Also, the appearances of human and humanoid end effectors are different. However, with sufficiently large and diverse data, we find it not a strict necessity to apply heuristic processing such as visual artifacts [16] or generative methods [69] to train human-robot policies - basic image augmentations such as color jittering and Gaussian blurring are effective regularization." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.505, + 0.948 + ], + "angle": 0, + "content": "5" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.177, + 0.089, + 0.825, + 0.176 + ], + "angle": 0, + "content": "
Meth.H. DataD. NormPassingHorizontal GraspVertical GraspPouringOvr. Succ.
I.D.O.O.D.I.D.O.O.D.I.D.O.O.D.I.D.O.O.D.I.D.O.O.D.
ACTXNA19/2036/608/107/307/2015/708/101/1042/6059/170
HATX17/2051/609/1011/3014/2030/705/105/1045/6097/170
HAT20/2052/608/1012/3013/2029/708/108/1049/60101/170
Type of GeneralizationBackgroundTextureObj. Placement
" + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.182, + 0.825, + 0.253 + ], + "angle": 0, + "content": "Table 2: Success rate of autonomous skill execution. Co-training with human data (H. Data) significantly improves the Out-Of-Distribution (O.O.D.) performance with nearly \\(100\\%\\) relative improvement on all tasks on Humanoid A. We also ablate the design choice of using different normalizations (D. Norm) for different embodiments. We designate each task setting to investigate a single type of generalization. Detailed analysis of each type of generalization is presented in Sec. C." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.27, + 0.825, + 0.3 + ], + "angle": 0, + "content": "Training. The final policy is denoted as \\(\\pi : f_{\\theta}(\\cdot) \\to \\mathbf{A}\\) for both human and robot policy, where \\(f_{\\theta}\\) is a transformer-based neural network parametrized by \\(\\theta\\). The final loss is given by," + }, + { + "type": "equation", + "bbox": [ + 0.348, + 0.306, + 0.825, + 0.323 + ], + "angle": 0, + "content": "\\[\n\\mathcal {L} = \\ell_ {1} \\left(\\pi \\left(s _ {i}\\right), a _ {i}\\right) + \\lambda \\cdot \\ell_ {1} \\left(\\pi \\left(s _ {i}\\right) _ {\\mathrm {E E F}}, a _ {i, \\mathrm {E E F}}\\right), \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.327, + 0.827, + 0.373 + ], + "angle": 0, + "content": "where EEF are the indices of the translation vectors of the left and right wrists, and \\(\\lambda = 2\\) is an (insensitive) hyperparameter used to balance loss to emphasize the importance of end effector positions over learning unnecessarily precise finger tip keypoints." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.391, + 0.315, + 0.409 + ], + "angle": 0, + "content": "4 Experiments" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.422, + 0.486, + 0.62 + ], + "angle": 0, + "content": "Hardware Platforms. We run our experiments on two humanoid robots (Humanoid A and Humanoid B shown in Fig. 4) equipped with 6-DOF Inspire dexterous hands. Humanoid A is a Unitree H1 robot and Humanoid B is a Unitree H1_2 robot with different arm configurations. Similar to humans, both robots (1) are equipped with actuated necks [9] to get make use of egocentric views and (2) do not have wrist cameras. Unless otherwise noted, most humanoid data collection is done with Humanoid A. We use Humanoid B mainly for testing cross-humanoid generalization." + }, + { + "type": "image", + "bbox": [ + 0.498, + 0.415, + 0.657, + 0.531 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.519, + 0.533, + 0.635, + 0.547 + ], + "angle": 0, + "content": "(a) Humanoid A" + }, + { + "type": "image", + "bbox": [ + 0.666, + 0.415, + 0.824, + 0.53 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.687, + 0.532, + 0.802, + 0.547 + ], + "angle": 0, + "content": "(b) Humanoid B" + }, + { + "type": "image_caption", + "bbox": [ + 0.495, + 0.558, + 0.825, + 0.622 + ], + "angle": 0, + "content": "Figure 4: Hardware Illustration. Most robot data attributes to Humanoid A, a Unitree H1 robot. Humanoid B, a Unitree H1-2 robot with different arm motor configurations, is used to evaluate few-shot cross-humanoid transfer. Detailed comparisons in Sec. D" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.634, + 0.827, + 0.74 + ], + "angle": 0, + "content": "Implementation Details. 
We implement the policy by adopting a transformer-based architecture that predicts future action chunks [5]. We use a frozen DinoV2 ViT-S [67] as the visual backbone. We implement two variants: (1) ACT: baseline implementation using the Action Chunk Transformer [5], trained using only robot data. Robot states are represented as joint positions. (2) HAT: same architecture as ACT, but the state encoder operates in the unified state-action space. Unless otherwise stated, HAT is co-trained on robot and human data. A checkpoint is trained for each task with approximately 250-400 robot demonstrations." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.755, + 0.825, + 0.815 + ], + "angle": 0, + "content": "Experimental Protocol. We collect robot and human demonstrations on different object sets. Since human demonstrations are easier to collect, the settings in human demonstrations are generally more diverse, including background, object types, object positions, and the relative position of the human to the table." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.822, + 0.826, + 0.913 + ], + "angle": 0, + "content": "We experimented with four different dexterous manipulation tasks and investigated in-distribution and out-of-distribution setups. The in-distribution (I.D.) setting tests the learned skills with backgrounds and object arrangements similar to the training demonstrations present in the real-robot data. In the Out-Of-Distribution (O.O.D.) setting, we test generalizability and robustness by introducing novel setups that are present in human data but not in robot data. Fig. 7 visualizes the different manipulation tasks and how we define out-of-distribution settings for each task." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.937, + 0.505, + 0.948 + ], + "angle": 0, + "content": "6" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.177, + 0.096, + 0.495, + 0.292 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.171, + 0.298, + 0.495, + 0.337 + ], + "angle": 0, + "content": "(a) Performance of Humanoid B co-trained with \\(\\mathrm{PH}^2\\mathrm{D}\\) on horizontal grasping. o1 is seen by Humanoid B. o2 and o3 are seen in human data. o4 is unseen in all data." + }, + { + "type": "image", + "bbox": [ + 0.507, + 0.093, + 0.822, + 0.288 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.503, + 0.298, + 0.825, + 0.337 + ], + "angle": 0, + "content": "(b) Co-training consistently outperforms isolated training as Humanoid B demonstrations increase, achieving good success rates even in low-data regimes." + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.345, + 0.825, + 0.374 + ], + "angle": 0, + "content": "Figure 5: Few-Shot Adaptation. Co-training consistently outperforms isolated training as Humanoid B demonstrations increase, achieving robust success rates even in low-data regimes." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.382, + 0.331, + 0.396 + ], + "angle": 0, + "content": "4.1 Main Evaluation" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.409, + 0.825, + 0.501 + ], + "angle": 0, + "content": "Human data has minor effects on I.D. testing. From Tab. 2, we can see that I.D. performance with or without co-training with human data is similar. In the I.D. setting, we closely match the scene setups of the training demonstrations, including background, object types, and object placements. 
Thus, policies trained with only a small amount of Humanoid A data performed well in this setting. This finding is consistent with recent work [9, 7] showing that frozen visual foundation models [17, 67] improve robustness against certain external perturbations such as lighting." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.507, + 0.827, + 0.643 + ], + "angle": 0, + "content": "Human data improves O.O.D. settings across many types of generalization. One common challenge in imitation learning is overfitting to only in-distribution task settings. Hence, it is crucial for a robot policy to generalize beyond the scene setups seen in a limited set of single-embodiment data. To demonstrate how co-training with human data reduces such overfitting, we introduce O.O.D. task settings to evaluate such generalization. From Tab. 2, we can see that co-training drastically improves the O.O.D. settings, achieving nearly \\(100\\%\\) relative improvement in settings unseen in the robot data. In particular, we find that human data improves three types of generalization: background, object placement, and appearance. To isolate the effect of each variable, each task focuses on a specific type of generalization as listed in Tab. 2, with in-depth analyses in Sec. C." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.66, + 0.592, + 0.675 + ], + "angle": 0, + "content": "4.2 Few-Shot Transfer across Heterogeneous Embodiments" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.687, + 0.825, + 0.779 + ], + "angle": 0, + "content": "We conducted few-shot generalization experiments on a distinct humanoid platform (Humanoid B), contrasting it with our primary platform, Humanoid A. Notably, Humanoid B's demonstration data were collected in an entirely separate environment, introducing both embodiment and environmental shifts. We highlight two key advantages of our approach: (1) the ability to unify heterogeneous human-centric data sources (humanoids and humans) into a generalizable policy framework, and (2) the capacity to rapidly adapt to new embodiments with drastically reduced data requirements." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.785, + 0.825, + 0.874 + ], + "angle": 0, + "content": "Experiment 1: Cross-embodiment co-training efficacy. Using only 20 demonstrations from Humanoid B, we trained 3 policies, respectively on data from (i) Humanoid B only, (ii) Humanoid B + Humanoid A (cross-embodiment), and (iii) Humanoid B + Humanoid A + Human (cross-embodiment and human priors). As shown in Fig. 5a, co-trained policies (ii) and (iii) substantially outperformed the Humanoid B-only baselines on all task settings, underscoring the method's ability to transfer latent task structure across embodiments." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.882, + 0.825, + 0.913 + ], + "angle": 0, + "content": "Experiment 2: Scaling Demonstrations for Few-Shot Adaptation. We further quantified how many Humanoid B demonstrations are required for few-shot generalization. We hold the Humanoid A and human datasets fixed" + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.505, + 0.948 + ], + "angle": 0, + "content": "7" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.174, + 0.092, + 0.434, + 0.202 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.172, + 0.202, + 0.437, + 0.284 + ], + "angle": 0, + "content": "Figure 6: Human data has better sampling efficiency. Per-grid vertical grasping successes out of 10 trials with models trained with robot-only data and mixed data. 
Red boxes indicate where training data is collected." + }, + { + "type": "table", + "bbox": [ + 0.454, + 0.12, + 0.835, + 0.183 + ], + "angle": 0, + "content": "
TaskState SpaceAction SpeedSuccess
Vertical GraspingX1/10
X0/10
4/10
" + }, + { + "type": "table_caption", + "bbox": [ + 0.45, + 0.184, + 0.837, + 0.253 + ], + "angle": 0, + "content": "Table 3: Importance of unifying policy inputs and outputs. We report the number of successes of vertical grasping objects in the upper-left block as illustrated in Fig. 8. Baselines use joint positions as state input or do not interpolate human motions." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.301, + 0.825, + 0.348 + ], + "angle": 0, + "content": "for the horizontal grasping task and ablate number of demonstrations required for Humanoid B in Fig. 5. Co-training (Humanoid \\(\\mathrm{B} + \\mathrm{A} + \\mathrm{Human}\\)) consistently outperformed isolated training on Humanoid B across all settings, especially in the few-data regime." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.361, + 0.32, + 0.377 + ], + "angle": 0, + "content": "4.3 Ablation Study" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.388, + 0.825, + 0.448 + ], + "angle": 0, + "content": "Sampling Efficiency of Human and Humanoid Data. Conceptually, collecting human data is less expensive, not just because it can be done faster, but also because it can be done in in-the-wild scenes; reduces setup cost before every data collection; and avoids the hardware cost to equip every operator with robots." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.455, + 0.826, + 0.577 + ], + "angle": 0, + "content": "We perform additional experiments to show that even in the lab setting, human data can have better sampling efficiency in unit time. In particular, we provide a small-scale experiment on the vertical grasping task. Allocating 20 minutes for two settings, we collected (1) 60 Humanoid A demonstrations, (2) 30 Humanoid A demonstrations, and 120 human demonstrations. To avoid confating diversity and data size, the object placements in all demonstrations are evenly distributed at the bottom 6 cells. The results are given in Fig. 6. The policy trained with mixed robot and human data performs significantly better, which validates the sampling efficiency of human data over robot data. Each cell represents a \\(10\\mathrm{cm} \\times 10\\mathrm{cm}\\) region where the robot attempts to pick up a box." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.583, + 0.825, + 0.734 + ], + "angle": 0, + "content": "State-Action Design. In Tab. 3, we ablate the design choices of the proprioception state space and the speed of output actions. In particular, using the same set of robot and human data, we implement two baselines: 1) a unified state-action space, but does not interpolate (i.e., slow down) the human actions; and 2) a baseline that interpolates human actions but uses separate state representation for humanoid (joint positions) and humans (EEF representation). The policies exhibit different failure patterns during the rollout of these two baselines. Without interpolating human actions, the speed of the predicted actions fluctuates between fast (resembling humans) and slow (resembling teleoperation), which leads to instability. Without a unified state space, the policy is given a 'short-cut' to distinguish between embodiments, which leads to on-par in-distribution performance and significantly worse OOD performance." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.741, + 0.825, + 0.772 + ], + "angle": 0, + "content": "More Ablation Study. Due to space limit, please refer to the appendix and the supplementary for more qualitative visualization and quantitative ablation studies." 
+ }, + { + "type": "title", + "bbox": [ + 0.172, + 0.79, + 0.308, + 0.805 + ], + "angle": 0, + "content": "5 Conclusions" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.822, + 0.825, + 0.913 + ], + "angle": 0, + "content": "This paper proposes \\(\\mathrm{PH^2D}\\), an effort to construct a large-scale, task-oriented human behavior dataset, along with the training pipeline HAT, which leverages \\(\\mathrm{PH^2D}\\) and robot data to show how humans can be treated as a data source for cross-embodiment learning. We show that it is possible to directly train an imitation learning model with mixed human-humanoid data without any training surrogates when the human data are aligned with the robot data. The learned policy shows improved generalization and robustness compared to the counterpart trained using only real-robot data." + }, + { + "type": "page_number", + "bbox": [ + 0.494, + 0.936, + 0.504, + 0.948 + ], + "angle": 0, + "content": "8" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.176, + 0.09, + 0.304, + 0.107 + ], + "angle": 0, + "content": "6 Limitations" + }, + { + "type": "text", + "bbox": [ + 0.175, + 0.122, + 0.825, + 0.258 + ], + "angle": 0, + "content": "Although we also collect language instructions in \\(\\mathrm{PH^2D}\\), due to our focus on investigating the embodiment gap between humans and humanoids, one limitation of the current version of the paper is that it uses a relatively simple architecture for policy learning. In the near future, we plan to expand the policy learning process to train a large language-conditioned cross-embodiment policy to investigate generalization to novel language using human demonstrations. The collection of human data relies on off-the-shelf VR hardware and its hand tracking SDKs. Since these SDKs were trained mostly for VR applications, hand keypoint tracking can fail for certain motions with heavy occlusion. In addition, though the proposed method conceptually extends to more robot morphologies, current evaluations are done on robots equipped with dexterous hands." + }, + { + "type": "title", + "bbox": [ + 0.176, + 0.276, + 0.351, + 0.294 + ], + "angle": 0, + "content": "7 Acknowledgment" + }, + { + "type": "text", + "bbox": [ + 0.176, + 0.308, + 0.825, + 0.34 + ], + "angle": 0, + "content": "This work was supported, in part, by NSF CAREER Award IIS-2240014, NSF CCF-2112665 (TILOS), and gifts from Amazon, Meta, and Apple." + }, + { + "type": "page_number", + "bbox": [ + 0.495, + 0.937, + 0.504, + 0.947 + ], + "angle": 0, + "content": "9" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.174, + 0.09, + 0.27, + 0.107 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.114, + 0.826, + 0.147 + ], + "angle": 0, + "content": "[1] S. Liu, L. Wu, B. Li, H. Tan, H. Chen, Z. Wang, K. Xu, H. Su, and J. Zhu. Rdt-1b: a diffusion foundation model for bimanual manipulation. arXiv preprint arXiv:2410.07864, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.155, + 0.827, + 0.215 + ], + "angle": 0, + "content": "[2] Octo Model Team, D. Ghosh, H. Walke, K. Pertsch, K. Black, O. Mees, S. Dasari, J. Hejna, C. Xu, J. Luo, T. Kreiman, Y. Tan, L. Y. Chen, P. Sanketi, Q. Vuong, T. Xiao, D. Sadigh, C. Finn, and S. Levine. Octo: An open-source generalist robot policy. In Proceedings of Robotics: Science and Systems, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.225, + 0.825, + 0.272 + ], + "angle": 0, + "content": "[3] K. Black, N. Brown, D. Driess, A. Esmail, M. Equi, C. Finn, N. 
Fusai, L. Groom, K. Hausman, B. Ichter, et al. \\(\\pi_0\\): A vision-language-action flow model for general robot control. arXiv preprint arXiv:2410.24164, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.28, + 0.825, + 0.311 + ], + "angle": 0, + "content": "[4] S. Dasari, O. Mees, S. Zhao, M. K. Srirama, and S. Levine. The ingredients for robotic diffusion transformers. arXiv preprint arXiv:2410.10088, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.32, + 0.825, + 0.351 + ], + "angle": 0, + "content": "[5] T. Z. Zhao, V. Kumar, S. Levine, and C. Finn. Learning fine-grained bimanual manipulation with low-cost hardware. arXiv preprint arXiv:2304.13705, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.36, + 0.825, + 0.392 + ], + "angle": 0, + "content": "[6] Z. Fu, T. Z. Zhao, and C. Finn. Mobile aloha: Learning bimanual mobile manipulation with low-cost whole-body teleoperation. arXiv preprint arXiv:2401.02117, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.4, + 0.825, + 0.445 + ], + "angle": 0, + "content": "[7] C. Chi, Z. Xu, C. Pan, E. Cousineau, B. Burchfiel, S. Feng, R. Tedrake, and S. Song. Universal manipulation interface: In-the-wild robot teaching without in-the-wild robots. arXiv preprint arXiv:2402.10329, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.455, + 0.825, + 0.501 + ], + "angle": 0, + "content": "[8] S. Yang, M. Liu, Y. Qin, R. Ding, J. Li, X. Cheng, R. Yang, S. Yi, and X. Wang. Ace: A cross-platform visual-exoskeletons system for low-cost dexterous teleoperation. arXiv preprint arXiv:2408.11805, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.182, + 0.51, + 0.825, + 0.542 + ], + "angle": 0, + "content": "[9] X. Cheng, J. Li, S. Yang, G. Yang, and X. Wang. Open-television: Teleoperation with immersive active visual feedback. In Conference on Robot Learning (CoRL), 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.55, + 0.825, + 0.597 + ], + "angle": 0, + "content": "[10] T. He, Z. Luo, X. He, W. Xiao, C. Zhang, W. Zhang, K. Kitani, C. Liu, and G. Shi. Omnih2o: Universal and dexterous human-to-humanoid whole-body teleoperation and learning. arXiv preprint arXiv:2406.08858, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.605, + 0.825, + 0.637 + ], + "angle": 0, + "content": "[11] S. Dasari, F. Ebert, S. Tian, S. Nair, B. Bucher, K. Schmeckpeper, S. Singh, S. Levine, and C. Finn. Robonet: Large-scale multi-robot learning. arXiv preprint arXiv:1910.11215, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.645, + 0.825, + 0.706 + ], + "angle": 0, + "content": "[12] H. Bharadhwaj, J. Vakil, M. Sharma, A. Gupta, S. Tulsiani, and V. Kumar. Roboagent: Generalization and efficiency in robot manipulation via semantic augmentations and action chunking. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 4788-4795. IEEE, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.716, + 0.825, + 0.748 + ], + "angle": 0, + "content": "[13] H. Ha, Y. Gao, Z. Fu, J. Tan, and S. Song. Umi on legs: Making manipulation policies mobile with manipulation-centric whole-body controllers. arXiv preprint arXiv:2407.10353, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.756, + 0.825, + 0.802 + ], + "angle": 0, + "content": "[14] A. O'Neill, A. Rehman, A. Gupta, A. Maddukuri, A. Gupta, A. Padalkar, A. Lee, A. Pooley, A. Gupta, A. Mandlekar, et al. Open x-embodiment: Robotic learning datasets and rt-x models. 
arXiv preprint arXiv:2310.08864, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.811, + 0.825, + 0.856 + ], + "angle": 0, + "content": "[15] C. Wang, H. Shi, W. Wang, R. Zhang, L. Fei-Fei, and C. K. Liu. Dexcap: Scalable and portable mocap data collection system for dexterous manipulation. arXiv preprint arXiv:2403.07788, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.866, + 0.825, + 0.913 + ], + "angle": 0, + "content": "[16] S. Kareer, D. Patel, R. Punamiya, P. Mathur, S. Cheng, C. Wang, J. Hoffman, and D. Xu. Egomimic: Scaling imitation learning via egocentric video. arXiv preprint arXiv:2410.24221, 2024. URL https://arxiv.org/abs/2410.24221." + }, + { + "type": "list", + "bbox": [ + 0.173, + 0.114, + 0.827, + 0.913 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "10" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.091, + 0.826, + 0.136 + ], + "angle": 0, + "content": "[17] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision. In ICML. PMLR, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.147, + 0.642, + 0.163 + ], + "angle": 0, + "content": "[18] OpenAI. Gpt-4 technical report. Technical report, OpenAI, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.171, + 0.825, + 0.202 + ], + "angle": 0, + "content": "[19] R. Mendonca, S. Bahl, and D. Pathak. Structured world models from human videos. In RSS, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.21, + 0.825, + 0.242 + ], + "angle": 0, + "content": "[20] S. Bahl, R. Mendonca, L. Chen, U. Jain, and D. Pathak. Affordances from human videos as a versatile representation for robotics. In CVPR, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.25, + 0.825, + 0.295 + ], + "angle": 0, + "content": "[21] K. Grauman, A. Westbury, E. Byrne, Z. Chavis, A. Furnari, R. Girdhar, J. Hamburger, H. Jiang, M. Liu, X. Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In CVPR, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.305, + 0.825, + 0.352 + ], + "angle": 0, + "content": "[22] D. Damen, H. Doughty, G. M. Farinella, S. Fidler, A. Furnari, E. Kazakos, D. Moltisanti, J. Munro, T. Perrett, W. Price, and M. Wray. Scaling egocentric vision: The epic-kitchens dataset. In ECCV, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.36, + 0.825, + 0.393 + ], + "angle": 0, + "content": "[23] T. Z. Zhao, J. Tompson, D. Driess, P. Florence, K. Ghasemipour, C. Finn, and A. Wahid. Aloha unleashed: A simple recipe for robot dexterity. arXiv preprint arXiv:2410.13126, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.4, + 0.825, + 0.433 + ], + "angle": 0, + "content": "[24] L. Wang, X. Chen, J. Zhao, and K. He. Scaling proprioceptive-visual learning with heterogeneous pre-trained transformers. arXiv preprint arXiv:2409.20537, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.44, + 0.825, + 0.487 + ], + "angle": 0, + "content": "[25] C. Chi, Z. Xu, S. Feng, E. Cousineau, Y. Du, B. Burchfiel, R. Tedrake, and S. Song. Diffusion policy: Visuomotor policy learning via action diffusion. The International Journal of Robotics Research, page 02783649241273668, 2023." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.495, + 0.825, + 0.541 + ], + "angle": 0, + "content": "[26] R.-Z. Qiu, Y. Song, X. Peng, S. A. Suryadevara, G. Yang, M. Liu, M. Ji, C. Jia, R. Yang, X. Zou, et al. Wildma: Long horizon loco-manipulation in the wild. arXiv preprint arXiv:2411.15131, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.55, + 0.825, + 0.583 + ], + "angle": 0, + "content": "[27] C. Lu, X. Cheng, J. Li, S. Yang, M. Ji, C. Yuan, G. Yang, S. Yi, and X. Wang. Mobile- television: Predictive motion priors for humanoid whole-body control. In ICRA, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.59, + 0.825, + 0.636 + ], + "angle": 0, + "content": "[28] Y. Ze, Z. Chen, W. Wang, T. Chen, X. He, Y. Yuan, X. B. Peng, and J. Wu. Generalizable humanoid manipulation with improved 3d diffusion policies. arXiv preprint arXiv:2410.10803, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.645, + 0.825, + 0.692 + ], + "angle": 0, + "content": "[29] S. P. Arunachalam, S. Silwal, B. Evans, and L. Pinto. Dexterous imitation made easy: A learning-based framework for efficient dexterous manipulation. In 2023 IEEE international conference on robotics and automation (icra), pages 5954-5961. IEEE, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.7, + 0.825, + 0.733 + ], + "angle": 0, + "content": "[30] A. S. Chen, S. Nair, and C. Finn. Learning generalizable robotic reward functions from \"in-the-wild\" human videos. arXiv preprint arXiv:2103.16817, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.741, + 0.825, + 0.788 + ], + "angle": 0, + "content": "[31] J. Lee and M. S. Ryoo. Learning robot activities from first-person human videos using convolutional future regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1-2, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.795, + 0.825, + 0.842 + ], + "angle": 0, + "content": "[32] K. Lee, Y. Su, T.-K. Kim, and Y. Demiris. A syntactic approach to robot imitation learning using probabilistic activity grammars. Robotics and Autonomous Systems, 61(12):1323-1334, 2013." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.85, + 0.825, + 0.911 + ], + "angle": 0, + "content": "[33] A. Nguyen, D. Kanoulas, L. Muratore, D. G. Caldwell, and N. G. Tsagarakis. Translating videos to commands for robotic manipulation with deep recurrent neural networks. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 3782-3788. IEEE, 2018." + }, + { + "type": "list", + "bbox": [ + 0.173, + 0.091, + 0.826, + 0.911 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "11" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.091, + 0.826, + 0.136 + ], + "angle": 0, + "content": "[34] J. Rothfuss, F. Ferreira, E. E. Aksoy, Y. Zhou, and T. Asfour. Deep episodic memory: Encoding, recalling, and predicting episodic experiences for robot action execution. IEEE Robotics and Automation Letters, 3(4):4007-4014, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.146, + 0.826, + 0.192 + ], + "angle": 0, + "content": "[35] Y. Yang, Y. Li, C. Fermuller, and Y. Aloimonos. Robot learning manipulation action plans by \"watching\" unconstrained videos from the world wide web. In Proceedings of the AAAI conference on artificial intelligence, volume 29, 2015." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.201, + 0.825, + 0.232 + ], + "angle": 0, + "content": "[36] H. Bharadhwaj, R. Mottaghi, A. Gupta, and S. Tulsiani. Track2act: Predicting point tracks from internet videos enables diverse zero-shot robot manipulation. In ECCV, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.241, + 0.825, + 0.272 + ], + "angle": 0, + "content": "[37] C. Wen, X. Lin, J. So, K. Chen, Q. Dou, Y. Gao, and P. Abbeel. Any-point trajectory modeling for policy learning. arXiv preprint arXiv:2401.00025, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.281, + 0.825, + 0.325 + ], + "angle": 0, + "content": "[38] J. Li, Y. Zhu, Y. Xie, Z. Jiang, M. Seo, G. Pavlakos, and Y. Zhu. Okami: Teaching humanoid robots manipulation skills through single video imitation. arXiv preprint arXiv:2410.11792, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.336, + 0.825, + 0.381 + ], + "angle": 0, + "content": "[39] N. Das, S. Bechtle, T. Davchev, D. Jayaraman, A. Rai, and F. Meier. Model-based inverse reinforcement learning from visual demonstrations. In Conference on Robot Learning, pages 1930–1942. PMLR, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.391, + 0.825, + 0.438 + ], + "angle": 0, + "content": "[40] H. Xiong, Q. Li, Y.-C. Chen, H. Bharadhwaj, S. Sinha, and A. Garg. Learning by watching: Physical imitation of manipulation skills from human videos. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 7827-7834. IEEE, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.446, + 0.825, + 0.478 + ], + "angle": 0, + "content": "[41] S. Pirk, M. Khansari, Y. Bai, C. Lynch, and P. Sermanet. Online object representations with contrastive learning. arXiv preprint arXiv:1906.04312, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.486, + 0.825, + 0.517 + ], + "angle": 0, + "content": "[42] S. Nair, A. Rajeswaran, V. Kumar, C. Finn, and A. Gupta. R3m: A universal visual representation for robot manipulation. arXiv preprint arXiv:2203.12601, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.526, + 0.825, + 0.572 + ], + "angle": 0, + "content": "[43] Y. J. Ma, S. Sodhani, D. Jayaraman, O. Bastani, V. Kumar, and A. Zhang. Vip: Towards universal visual reward and representation via value-implicit pre-training. arXiv preprint arXiv:2210.00030, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.581, + 0.825, + 0.613 + ], + "angle": 0, + "content": "[44] L.-H. Lin, Y. Cui, A. Xie, T. Hua, and D. Sadigh. Flowretrieval: Flow-guided data retrieval for few-shot imitation learning. arXiv preprint arXiv:2408.16944, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.621, + 0.825, + 0.667 + ], + "angle": 0, + "content": "[45] J. Ren, P. Sundaresan, D. Sadigh, S. Choudhury, and J. Bohg. Motion tracks: A unified representation for human-robot transfer in few-shot imitation learning. arXiv preprint arXiv:2501.06994, 2025." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.676, + 0.825, + 0.708 + ], + "angle": 0, + "content": "[46] Y. Zhu, A. Lim, P. Stone, and Y. Zhu. Vision-based manipulation from single human video with open-world object graphs. arXiv preprint arXiv:2405.20321, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.716, + 0.825, + 0.761 + ], + "angle": 0, + "content": "[47] C.-C. Hsu, B. Wen, J. Xu, Y. Narang, X. Wang, Y. Zhu, J. Biswas, and S. Birchfield. 
Spot: Se (3) pose trajectory diffusion for object-centric manipulation. arXiv preprint arXiv:2411.00965, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.771, + 0.825, + 0.802 + ], + "angle": 0, + "content": "[48] Z. Fu, Q. Zhao, Q. Wu, G. Wetzstein, and C. Finn. Humanplus: Humanoid shadowing and imitation from humans. In CoRL, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.811, + 0.825, + 0.857 + ], + "angle": 0, + "content": "[49] W. Huang, I. Mordatch, and D. Pathak. One policy to control them all: Shared modular policies for agent-agnostic control. In International Conference on Machine Learning, pages 4455–4464. PMLR, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.866, + 0.825, + 0.911 + ], + "angle": 0, + "content": "[50] L. Y. Chen, K. Hari, K. Dharmarajan, C. Xu, Q. Vuong, and K. Goldberg. Mirage: Cross-embodiment zero-shot policy transfer with cross-painting. arXiv preprint arXiv:2402.19249, 2024." + }, + { + "type": "list", + "bbox": [ + 0.173, + 0.091, + 0.826, + 0.911 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "12" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.091, + 0.826, + 0.136 + ], + "angle": 0, + "content": "[51] J. Yang, C. Glossop, A. Bhorkar, D. Shah, Q. Vuong, C. Finn, D. Sadigh, and S. Levine. Pushing the limits of cross-embodiment learning for manipulation and navigation. arXiv preprint arXiv:2402.19432, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.174, + 0.146, + 0.825, + 0.178 + ], + "angle": 0, + "content": "[52] J. Yang, D. Sadigh, and C. Finn. Polybot: Training one policy across robots while embracing variability. arXiv preprint arXiv:2307.03719, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.187, + 0.825, + 0.232 + ], + "angle": 0, + "content": "[53] F. Ebert, Y. Yang, K. Schmeckpeper, B. Bucher, G. Georgakis, K. Daniilidis, C. Finn, and S. Levine. Bridge data: Boosting generalization of robotic skills with cross-domain datasets. arXiv preprint arXiv:2109.13396, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.242, + 0.825, + 0.286 + ], + "angle": 0, + "content": "[54] T. Franzmeyer, P. Torr, and J. F. Henriques. Learn what matters: cross-domain imitation learning with task-relevant embeddings. Advances in Neural Information Processing Systems, 35: 26283-26294, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.297, + 0.825, + 0.342 + ], + "angle": 0, + "content": "[55] A. Ghadirzadeh, X. Chen, P. Poklukar, C. Finn, M. Björkman, and D. Kragic. Bayesian meta-learning for few-shot policy adaptation across robotic platforms. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1274–1280. IEEE, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.352, + 0.825, + 0.397 + ], + "angle": 0, + "content": "[56] T. Shankar, Y. Lin, A. Rajeswaran, V. Kumar, S. Anderson, and J. Oh. Translating robot skills: Learning unsupervised skill correspondences across robots. In International Conference on Machine Learning, pages 19626-19644. PMLR, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.407, + 0.825, + 0.438 + ], + "angle": 0, + "content": "[57] M. Xu, Z. Xu, C. Chi, M. Veloso, and S. Song. Xskill: Cross embodiment skill discovery. In Conference on Robot Learning, pages 3536-3555. PMLR, 2023." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.447, + 0.825, + 0.492 + ], + "angle": 0, + "content": "[58] Z.-H. Yin, L. Sun, H. Ma, M. Tomizuka, and W.-J. Li. Cross domain robot imitation with invariant representation. In 2022 International Conference on Robotics and Automation (ICRA), pages 455-461. IEEE, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.502, + 0.825, + 0.546 + ], + "angle": 0, + "content": "[59] K. Zakka, A. Zeng, P. Florence, J. Tompson, J. Bohg, and D. Dwibedi. Xirl: Cross-embodiment inverse reinforcement learning. In Conference on Robot Learning, pages 537–546. PMLR, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.557, + 0.825, + 0.588 + ], + "angle": 0, + "content": "[60] G. Zhang, L. Zhong, Y. Lee, and J. J. Lim. Policy transfer across visual and dynamics domain gaps via iterative grounding. arXiv preprint arXiv:2107.00339, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.597, + 0.825, + 0.628 + ], + "angle": 0, + "content": "[61] Q. Zhang, T. Xiao, A. A. Efros, L. Pinto, and X. Wang. Learning cross-domain correspondence for control with dynamics cycle-consistency. arXiv preprint arXiv:2012.09811, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.636, + 0.78, + 0.652 + ], + "angle": 0, + "content": "[62] S. Bahl, A. Gupta, and D. Pathak. Human-to-robot imitation in the wild. In RSS, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.662, + 0.825, + 0.706 + ], + "angle": 0, + "content": "[63] C. Wang, L. Fan, J. Sun, R. Zhang, L. Fei-Fei, D. Xu, Y. Zhu, and A. Anandkumar. Mimicplay: Long-horizon imitation learning by watching human play. arXiv preprint arXiv:2302.12422, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.717, + 0.825, + 0.762 + ], + "angle": 0, + "content": "[64] J. Engel, K. Somasundaram, M. Goesele, A. Sun, A. Gamino, A. Turner, A. Talattof, A. Yuan, B. Souti, B. Meredith, et al. Project aria: A new tool for egocentric multi-modal ai research. arXiv preprint arXiv:2308.13561, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.772, + 0.825, + 0.817 + ], + "angle": 0, + "content": "[65] A. Khazatsky, K. Pertsch, S. Nair, A. Balakrishna, S. Dasari, S. Karamcheti, S. Nasiriany, M. K. Srirama, L. Y. Chen, K. Ellis, et al. Droid: A large-scale in-the-wild robot manipulation dataset. arXiv preprint arXiv:2403.12945, 2024." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.827, + 0.825, + 0.858 + ], + "angle": 0, + "content": "[66] W. Zhu, X. Ma, Z. Liu, L. Liu, W. Wu, and Y. Wang. Motionbert: A unified perspective on learning human motion representations. In ICCV, 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.175, + 0.867, + 0.825, + 0.912 + ], + "angle": 0, + "content": "[67] M. Oquab, T. Darcet, T. Moutakanni, H. Vo, M. Szafraniec, V. Khalidov, P. Fernandez, D. Haziza, F. Massa, A. El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023." + }, + { + "type": "list", + "bbox": [ + 0.174, + 0.091, + 0.826, + 0.912 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "13" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.172, + 0.091, + 0.826, + 0.121 + ], + "angle": 0, + "content": "[68] Y. Zhou, C. Barnes, J. Lu, J. Yang, and H. Li. On the continuity of rotation representations in neural networks. In CVPR, 2019." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.173, + 0.131, + 0.825, + 0.178 + ], + "angle": 0, + "content": "[69] T. Yu, T. Xiao, A. Stone, J. Tompson, A. Brohan, S. Wang, J. Singh, C. Tan, J. Peralta, B. Ichter, et al. Scaling robot learning with semantically imagined experience. arXiv preprint arXiv:2302.11550, 2023." + }, + { + "type": "list", + "bbox": [ + 0.172, + 0.091, + 0.826, + 0.178 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "14" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.178, + 0.09, + 0.825, + 0.162 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.171, + 0.168, + 0.825, + 0.207 + ], + "angle": 0, + "content": "(a) The robot performs the cup passing task across four different backgrounds. The left side shows the four background variations, while the right side illustrates the two passing directions: (#1 - Right hand passes the cup to the left hand, #2 - Left hand passes the cup to the right hand)." + }, + { + "type": "image", + "bbox": [ + 0.177, + 0.214, + 0.822, + 0.286 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.171, + 0.292, + 0.825, + 0.332 + ], + "angle": 0, + "content": "(b) The robot performs the horizontal grasping task with four different items: bottle, box_1, box_2, and can, as shown on the left. The right side illustrates the process: (#1-#3 - The robot grasps the bottle, #4-#5 - The robot places it into the plastic bin)." + }, + { + "type": "image", + "bbox": [ + 0.177, + 0.339, + 0.822, + 0.41 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.171, + 0.416, + 0.825, + 0.456 + ], + "angle": 0, + "content": "(c) The robot performs the vertical grasping task. As shown on the left, the Dynamixel box is placed in nine different positions for grasping. The right side illustrates the process: (#1-#3 - The robot grasps the box, #4-#5 - The robot places the box into the plastic bin)." + }, + { + "type": "image", + "bbox": [ + 0.177, + 0.463, + 0.822, + 0.535 + ], + "angle": 0, + "content": null + }, + { + "type": "image_footnote", + "bbox": [ + 0.171, + 0.54, + 0.825, + 0.592 + ], + "angle": 0, + "content": "(d) The robot performs the pouring task. The left side shows different settings achieved by varying the robot's rotation and the table's position. The right side illustrates the pouring process: (#1 - Right hand grasps the bottle, #2 - Left hand grasps the cup, #3 - Pouring the drink, #4 - Left hand places the cup down, #5 - Right hand places the bottle down)." + }, + { + "type": "image_footnote", + "bbox": [ + 0.171, + 0.6, + 0.825, + 0.63 + ], + "angle": 0, + "content": "Figure 7: Illustrations of tasks used in quantitative evaluations. From top to bottom: cup passing, horizontal grasping, vertical grasping, and pouring." + }, + { + "type": "table", + "bbox": [ + 0.266, + 0.636, + 0.734, + 0.713 + ], + "angle": 0, + "content": "
MethodBottleBox1Box2CanOvr. Succ.
I.D.H.D.H.D.H.D.
Without whole-body8/106/100/107/1021/40
With whole-body9/103/103/103/1018/40
" + }, + { + "type": "image_footnote", + "bbox": [ + 0.171, + 0.712, + 0.825, + 0.768 + ], + "angle": 0, + "content": "Table 4: Ablation of how human whole-body movement in training demonstrations affects policy rollout. We collect the same number of demonstrations on the same set of objects for the grasping task with or without whole-body movement. Since the robot does not have a natural whole-body movement like humans, it negatively influences the manipulation success rate." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.789, + 0.533, + 0.807 + ], + "angle": 0, + "content": "A More Ablation Study - Data Collection" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.822, + 0.825, + 0.913 + ], + "angle": 0, + "content": "Autonomous Whole-body Movement. In Tab. 4, we justify the necessity to minimize body movement in human data collection. Humans tend to move their upper body unconsciously during manipulation (including shoulder and waist movement). However, existing humanoid robots have yet to reach such a level of dexterity. Thus, having these difficult-to-replicate actions in the human demonstrations leads to degraded performance. We hypothesize that such a necessity would be greatly reduced with the development of both whole-body locomotion methods and mechanical designs," + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.508, + 0.948 + ], + "angle": 0, + "content": "15" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.27, + 0.089, + 0.728, + 0.159 + ], + "angle": 0, + "content": "
MethodGrasping (secs)Pouring (secs)
Human Demo3.79±0.274.81±0.35
Human Demo with VR4.09±0.304.90±0.26
Humanoid Demo (VR Teleop)19.72±1.6537.31±6.25
" + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.159, + 0.825, + 0.228 + ], + "angle": 0, + "content": "Table 5: Amortized mean and standard deviation of the time required to collect a single demonstration, including scene resets. The first row shows the time for regular human to complete corresponding tasks in real world. The second row represents our human data when wearing VR for data collection, demonstrating that egocentric human demonstrations provide a more scalable data source compared to robot teleoperation." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.256, + 0.825, + 0.286 + ], + "angle": 0, + "content": "but for the currently available platforms, we instruct operators to minimize body movement as much as possible in our dataset." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.293, + 0.825, + 0.382 + ], + "angle": 0, + "content": "Efficiency of Data Collection. In Tab. 5, we compare task completion times across different setups, including standard human manipulation, human demonstrations performed while wearing a VR device, and robot teleoperation. This analysis highlights how task-oriented human demonstrations can be a scalable data source for cross-embodiment learning. Notably, wearing a VR device does not significantly impact human manipulation speed, as the completion time remains nearly the same as in standard human demonstrations." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.39, + 0.825, + 0.452 + ], + "angle": 0, + "content": "Among different data collection schemes, we find that most overhead arises during the retargeting process from human actions to robot actions. This is primarily due to latency and the constrained workspace of 7-DoF robotic arms, which are inherent challenges in existing data collection methods such as VR teleoperation [9], motion tracking [48, 10], and puppeting [8, 5]." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.457, + 0.826, + 0.579 + ], + "angle": 0, + "content": "Beyond data collection speed, human demonstrations offer several additional advantages over teleoperation. They provide a safer alternative, reducing risks associated with real-robot execution. They are also more labor-efficient, as they do not require additional personnel for supervision. Furthermore, human demonstrations allow for greater flexibility in settings, enabling a diverse range of environments without requiring robot-specific adaptations. Additionally, human demonstrations achieve a higher demonstration success rate, and the required hardware (such as motion capture or VR devices) is more accessible and cost-effective compared to full robotic setups. These factors collectively make human data a more scalable solution for large-scale data collection." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.597, + 0.55, + 0.614 + ], + "angle": 0, + "content": "B Normalization of different embodiments." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.629, + 0.825, + 0.705 + ], + "angle": 0, + "content": "Tab. 2 suggests minor differences between using different normalization coefficients for the states and actions vectors of humans and humanoids. We take a closer look in Fig. 8, where we investigate the impact of different normalization strategies in the vertical grasping (picking) task. Noticeably, the same normalization approach achieved the highest overall success rate, but the success distribution is biased towards the upper-right region of the grid." 
+ }, + { + "type": "text", + "bbox": [ + 0.171, + 0.711, + 0.825, + 0.757 + ], + "angle": 0, + "content": "We hypothesize that this is because humans have a larger workspace than humanoid robots. Thus, human data encompasses humanoid proprioception as a subset, which results in a relatively smaller distribution for the robot state-action space." + }, + { + "type": "title", + "bbox": [ + 0.171, + 0.775, + 0.667, + 0.793 + ], + "angle": 0, + "content": "C In-Depth Analysis of Different Types of Generalization" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.807, + 0.826, + 0.913 + ], + "angle": 0, + "content": "Human data improves background generalization. We chose to use the cup passing task to test background generalization. We prepared four different tablecloths as backgrounds, as shown in Fig. 7a. In terms of training data distribution, the teleoperation data for this task was collected exclusively on the paper background shown in Fig. 7a, whereas the human data includes more than five different backgrounds. This diverse human dataset significantly enhances the generalization ability of the co-trained HAT policy. As shown in Tab. 7., HAT consistently outperforms across all four backgrounds, demonstrating robustness to background variations. In addition, the overall" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "16" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.294, + 0.089, + 0.705, + 0.162 + ], + "angle": 0, + "content": "
Method | Bottle (I.D.) | Box_1 (H.D.) | Box_2 (O.O.D.) | Can (O.O.D.) | Ovr. Succ.
ACT | 8/10 | 5/10 | 1/10 | 1/10 | 16/40
HAT | 8/10 | 7/10 | 1/10 | 4/10 | 21/40
" + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.165, + 0.825, + 0.194 + ], + "angle": 0, + "content": "Table 6: Object Appearance Generalization: In the horizontal grasping task, we evaluated the grasping performance by attempting to grasp each object 10 times and recorded the success rate." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.26, + 0.825, + 0.292 + ], + "angle": 0, + "content": "success rate increases by nearly \\(50\\%\\) compared to training without human data, highlighting the advantage of utilizing diverse human demonstrations." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.298, + 0.825, + 0.373 + ], + "angle": 0, + "content": "Human data improves appearance generalization. To test how co-training improves robustness to perturbations in object textures, we evaluate the horizontal grasping policy on novel objects, as shown in Fig. 7b. Specifically, we compare the policy's performance on the bottle, box_1, box_2, and can, as shown left to right in the first image in Fig. 7b. These objects differ significantly in both color and shape from the bottle used in the teleoperation data distribution." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.38, + 0.825, + 0.456 + ], + "angle": 0, + "content": "Since grasping is a relatively simple task, our adjusted policy demonstrates strong learning capabilities even with only 50 teleoperation data samples. The policy can successfully grasp most bottles despite the limited training set. To better highlight the impact of human data, we selected more challenging objects for evaluation. As shown in Tab. 6, human data significantly enhances the policy's ability to grasp these more difficult objects." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.462, + 0.826, + 0.598 + ], + "angle": 0, + "content": "Notably, box_1 appears in the human data, while box_2 does not. Despite this, we observe that constraining with human data still improves overall performance, even on box_2, though its success rate does not increase. This suggests that, beyond direct experience with specific objects, the human data helps the policy learn broader visual priors that enable more proactive and stable grasping behaviors. For box_2, while the success rate remains low—partially due to its low height and color similarity to the table—the co-trained HAT policy demonstrates fewer out-of-distribution (OOD) failures and more actively searches for graspable regions. The failures on box_2 are primarily due to unstable grasping and the small box slipping from the hand, rather than the inability to perceive or locate the object." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.605, + 0.825, + 0.68 + ], + "angle": 0, + "content": "Furthermore, adding more human data not only improves performance on objects seen in human training demonstrations (e.g., box_1) but also enhances generalization to completely novel objects (e.g., box_2 and can). We hypothesize that, as the number of objects grows, HAT starts to learn inter-category visual priors that guide it to grasp objects more effectively, even when they were not explicitly present in the training set." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.687, + 0.825, + 0.763 + ], + "angle": 0, + "content": "Human data improves object placement generalization. Finally, we introduce variations in object placements that are not present in the real-robot training demonstrations and specifically investigate this in the vertical grasping (picking) task. 
In this task, we intentionally constrain the robot data collection to object placements within a subset of cells, while human vertical grasping data covers a much more diverse range of settings." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.77, + 0.825, + 0.83 + ], + "angle": 0, + "content": "To systematically analyze the impact of human data, we evaluate model performance on a structured \\(3 \\times 3\\) grid, where each cell represents a \\(10\\mathrm{cm} \\times 10\\mathrm{cm}\\) region for grasping attempts. The numbers in each cell indicate the number of successful picks out of 10 trials. Real-robot training data is collected from only two specific cells, highlighted with dashed lines." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.837, + 0.825, + 0.913 + ], + "angle": 0, + "content": "A key detail in our teleoperation data distribution is that 50 picking attempts are collected from the right-hand side grid and only 10 from the left-hand side grid. This imbalance explains why policies trained purely on teleoperation data struggle to grasp objects in the left-side grid. We observe that models trained solely on robot data fail to generalize to unseen cells, whereas cross-embodiment learning with human data significantly improves generalization, doubling the overall success rate." + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "17" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.309, + 0.092, + 0.691, + 0.259 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.171, + 0.266, + 0.828, + 0.337 + ], + "angle": 0, + "content": "Figure 8: Object Placement Generalization. Performance comparisons of models trained with and without human data on vertical grasping (picking). Each cell in the \\(3 \\times 3\\) grid represents a \\(10\\mathrm{cm} \\times 10\\mathrm{cm}\\) region where the robot attempts to pick up a box, with numbers indicating successful attempts out of 10. The real-robot data is collected in two cells inside the dashed lines. Notably, our teleoperation data is intentionally imbalanced." + }, + { + "type": "table", + "bbox": [ + 0.286, + 0.35, + 0.713, + 0.426 + ], + "angle": 0, + "content": "
Method | Paper (I.D.) | Wooden (H.D.) | Red (O.O.D.) | Green (O.O.D.) | Ovr. Succ.
ACT | 19/20 | 14/20 | 12/20 | 10/20 | 55/80
HAT | 20/20 | 16/20 | 18/20 | 18/20 | 72/80
" + }, + { + "type": "table_caption", + "bbox": [ + 0.171, + 0.426, + 0.825, + 0.457 + ], + "angle": 0, + "content": "Table 7: Background Generalization: In the cup passing task, we evaluate the passing performance by recording the number of failures or retries needed to complete 20 cup-passing trials." + }, + { + "type": "title", + "bbox": [ + 0.172, + 0.488, + 0.73, + 0.524 + ], + "angle": 0, + "content": "D In-Depth Comparison between Humanoid A and Humanoid B configurations" + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.538, + 0.825, + 0.599 + ], + "angle": 0, + "content": "This section presents a detailed comparison of the two humanoid platforms, referred to as Humanoid A and Humanoid B, with a focus on joint structure and implications for manipulation capabilities. We restrict our analysis to the arm configurations, as other parts of the body were not exclusively explored in this work." + }, + { + "type": "text", + "bbox": [ + 0.17, + 0.606, + 0.825, + 0.727 + ], + "angle": 0, + "content": "While morphologically similar, these two humanoids have drastically different arm configurations that create hurdles in direct policy transfer. Besides differences in motor technical specs such as torque and types of encoder (Humanoid B has absolute motor position encoders), they also have different mechanical limits. The range of motion (ROM) for the first four proximal joints—shoulder-pitch, shoulder-roll, shoulder_yaw, and elbow—differs across the two platforms. Humanoid B exhibits a consistently wider ROM, which allows a wider set of reachable configurations and increases the manipulability of the arm in constrained environments. Table 8 summarizes the ROM values for these shared joints." + }, + { + "type": "text", + "bbox": [ + 0.171, + 0.733, + 0.827, + 0.84 + ], + "angle": 0, + "content": "A more significant architectural divergence is observed at the wrist. Humanoid A includes a single distal joint—wrist-roll—providing limited wrist articulation. This restricts end-effector dexterity and constrains in-hand manipulation strategies to a single rotational degree of freedom. In contrast, Humanoid B is equipped with a complete wrist mechanism composed of three independently actuated joints: wrist_pitch, wrist-roll, and wrist_yaw. These additional degrees of freedom allow for full orientation control of the end-effector, enabling tasks that require precise alignment, rotation, and fine adjustment of object poses." + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "18" + } + ], + [ + { + "type": "table", + "bbox": [ + 0.31, + 0.451, + 0.688, + 0.536 + ], + "angle": 0, + "content": "
Joint | Humanoid A | Humanoid B
shoulder_pitch | -164° to +164° | -180° to +90°
shoulder_roll | -19° to +178° | -21° to +194°
shoulder_yaw | -74° to +255° | -152° to +172°
elbow | -71° to +150° | -54° to +182°
wrist_roll | -175° to +175° | -172° to +157°
" + }, + { + "type": "table_caption", + "bbox": [ + 0.218, + 0.537, + 0.781, + 0.551 + ], + "angle": 0, + "content": "Table 8: Joint Range of Motion Comparison between Humanoid A and B (in degrees)" + }, + { + "type": "page_number", + "bbox": [ + 0.491, + 0.936, + 0.509, + 0.948 + ], + "angle": 0, + "content": "19" + } + ] +] \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13441/7dee0cb2-adf2-4076-a132-e57fc86ae441_origin.pdf b/data/2025/2503_13xxx/2503.13441/7dee0cb2-adf2-4076-a132-e57fc86ae441_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..888723398b48c67bc7252ed219f26bc55018bd2a --- /dev/null +++ b/data/2025/2503_13xxx/2503.13441/7dee0cb2-adf2-4076-a132-e57fc86ae441_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8134f546d21910a472fdca4cfd893fb2ca1a4f3dbd526a010d5f390b4cfae835 +size 5372065 diff --git a/data/2025/2503_13xxx/2503.13441/full.md b/data/2025/2503_13xxx/2503.13441/full.md new file mode 100644 index 0000000000000000000000000000000000000000..003dd7a0b30e9b3b1a71f363c6c81c24df363833 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13441/full.md @@ -0,0 +1,361 @@ +# Humanoid Policy $\sim$ Human Policy + +Ri-Zhao Qiu*,1 Shiqi Yang*,1 Xuxin Cheng*,1 Chaitanya Chawla*,2 Jialong Li + +Tairan He² Ge Yan⁴ David Yoon³ Ryan Hoque³ Lars Paulsen¹ + +Ge Yang $^{5}$ Jian Zhang $^{3}$ Sha Yi $^{1}$ Guanya Shi $^{2}$ Xiaolong Wang $^{1}$ + +1 UC San Diego, 2 CMU, 3 Apple, 4 University of Washington, 5 MIT + +https://human-as-robot.github.io/ + +![](images/4405aa2b835aba6a277e6c9198aa9977469d97508f3fdf4c76aee82216576500.jpg) +Egocentric Vision +Small-scale Humanoid Data +1.5k demos + +![](images/aea774d82dbe5bd4fbe061109175f75ddddda90ba7ff4fdaa5626b9bd22ef7f5.jpg) +Unified State-Action Space +Fingers/Wrist + +![](images/6de31e2c3f7ef7882133985700e8e04374897bf36bf16ee566165c8b694a3018.jpg) +Large-scale Human Data +27k demos + +![](images/346158b88ba04c6e7ab081daf4c31661e0295710b7ae42f838fcf37309e1f6c9.jpg) + +![](images/d758fb486b915cb7b8bba76f25c64677ff6383244dd1b2d11810084c616ff526.jpg) +Robot Policies + +![](images/155fe39651ff5c1dc0794715d35ba201201e8503baae6301d1a5408214e69852.jpg) + +![](images/9ba220379b07dcbb35a75d6613b9038f779a2745c67964f1ea4c2fcc4b21bac8.jpg) + +![](images/d09737e448a71aaf9de7992f65ae4c12d439120d577b56996d26b2f49dd055e1.jpg) + +![](images/e60053de8c621fdd2935ccfd57a062430af67270a90436afec5f8a43a6dfa737.jpg) +Figure 1: This paper advocates high-quality human data as a data source for cross-embodiment learning - task-oriented egocentric human data. We collect a large-scale dataset, Physical Human-Humanoid Data $(\mathbf{PH}^2\mathbf{D})$ , with hand-finger 3D poses from consumer-grade VR devices on well-defined manipulation tasks directly aligned with robots. Without relying on modular perception, we train a Human Action Transformer (HAT) manipulation policy by directly modeling humans as a different humanoid embodiment in an end-to-end manner. + +![](images/932b5667c208befbac683cbcd8b77d681fb74f6dc2da9b82aa158bd701300670.jpg) + +Abstract: Training manipulation policies for humanoid robots with diverse data enhances their robustness and generalization across tasks and platforms. However, learning solely from robot demonstrations is labor-intensive, requiring expensive tele-operated data collection, which is difficult to scale. This paper investigates a more scalable data source, egocentric human demonstrations, to serve as cross-embediment training data for robot learning. 
We mitigate the embodiment gap between humanoids and humans from both the data and modeling perspectives. We collect an egocentric task-oriented dataset $(\mathbf{PH}^2\mathbf{D})$ that is directly aligned with humanoid manipulation demonstrations. We then train a human-humanoid behavior policy, which we term Human Action Transformer (HAT). The state-action space of HAT is unified for both humans and humanoid robots and can be differentiably retargeted to robot actions. Co-trained with smaller-scale robot data, HAT directly models humanoid robots and humans as different embodiments without additional supervision. We show that human data improve both generalization and robustness of HAT with significantly better data collection efficiency. + +Keywords: Robot Manipulation, Cross-Embodiment, Humanoid + +# 1 Introduction + +Learning from real robot demonstrations has led to great progress in robotic manipulation recently [1, 2, 3, 4]. One key advancement to enable such progress was hardware / software codesigns to scale up data collection using teleoperation [5, 6, 7, 8, 9, 10] and directly controlling + +the robot end effector [11, 12, 5, 6, 13, 7]. Instead of gathering data on a single robot, collective efforts have been made to merge diverse robot data and train foundational policies across embodiments [11, 14, 2, 1, 3, 4], which have shown to improve cross-embodiment and cross-task generalizability. + +However, collecting structured real-robot data is expensive and time-consuming. We are still far away from building a robust and generalizable model as what has been achieved in Computer Vision [17] and NLP [18]. If we examine humanoid robot teleoperation more closely, it involves robots mimicking human actions + +
| Dataset | Human # Frames | Human # Demos | Robot # Frames | Robot # Demos |
| --- | --- | --- | --- | --- |
| DexCap [15] | ~378k | 787 | NA | NA |
| EgoMimic [16] | ~432k† | 2,150 | 1.29M† | 1,000 |
| $\mathrm{PH}^2\mathrm{D}$ (Ours) | ~3.02M | 26,824 | ~668k | 1,552 |
+ +Table 1: Comparisons of task-oriented egocentric human datasets. Besides having the most demonstrations, $\mathrm{PH}^2\mathrm{D}$ is collected on various manipulation tasks, diverse objects and scenes, with accurate 3D hand-finger poses and language annotations. †: estimated based on reported data collection time with $30\mathrm{Hz}$ ; whereas DexCap [15] and $\mathrm{PH}^2\mathrm{D}$ report processed frames for training. + +using geometric transforms or retargeting to control robot joints and end-effectors. From this perspective, we propose to model robots in a human-centric representation, and the robot action is just a transformation away from the human action. If we can accurately capture the end-effector and head poses of humans, egocentric human demonstrations will be a more scalable source of training data, as we can collect them efficiently, in any place, and without a robot. + +In this paper, we perform cross-human and humanoid embodiment training for robotic manipulation. Our key insight is to model bimanual humanoid behaviors by directly imitating human behaviors without using learning surrogates such as affordances [19, 20]. To realize this, we first collect an egocentric task-oriented dataset of Physical Humanoid-Human Data, dubbed $\mathrm{PH^2D}$ . We adapt consumer-grade VR devices to collect egocentric videos with automatic but accurate hand pose and end effector (i.e., hand) annotations. Compared to existing human daily behavior datasets [21, 22], $\mathrm{PH^2D}$ is task-oriented so that it can be directly used for co-training. The same VR hardwares are then used to perform teleoperation to collect smaller-scale humanoid data for better alignment. We then train a Human-humanoid Action Transformer (HAT), which predicts future hand-finger trajectories in a unified human-centric state-action representation space. To obtain robot actions, we simply apply inverse kinematics and hand retargeting to differentiably convert human actions to robot actions for deployment. + +We conduct real-robot evaluations on different manipulation tasks with extensive ablation studies to investigate how to best align human and humanoid demonstrations. In particular, we found that co-training with diverse human data improves robustness against spatial variance and background perturbation, generalizing in settings unseen in robot data but seen in human data. We believe that these findings highlight the potential of using human data for large-scale cross-embodiment learning. + +In summary, our contributions are: + +- A dataset, $\mathrm{PH}^2\mathrm{D}$ , which is a large egocentric, task-oriented human-humanoid dataset with accurate hand and wrist poses for modeling human behavior (see Tab. 1). +- A cross human-humanoid manipulation policy, HAT, that introduces a unified state-action space and other alignment techniques for humanoid manipulation. +- Improved policy robustness and generalization validated by extensive experiments and ablation studies to show the benefits of co-training with human data. + +# 2 Related Work + +Imitation Learning for Robot Manipulation. Recently, learning robot policy with data gathered directly from the multiple and target robot embodiment has shown impressive robustness and dex- + +terity [23, 2, 24, 1, 25, 26, 9, 27, 28]. The scale of data for imitation learning has grown substantially with recent advancements in data collection [29, 9, 7, 8], where human operators can efficiently collect large amounts of high-quality, task-oriented data. 
Despite these advances, achieving open-world generalization still remains a significant challenge due to lack of internet-scale training data. + +Learning from Human Videos. Learning policies from human videos is a long-standing topic in both computer vision and robotics due to the vast existence of human data. Existing works can be approximately divided into two categories: aligning observations or actions. + +Learn from Human - Aligning Observations. While teleoperating the actual robot platform allows learning policy with great dexterity, there is still a long way to go to achieve higher levels of generalization across diverse tasks, environments, and platforms. Unlike fields such as computer vision [17] and natural language processing [18] benefiting from internet-scale data, robot data collection in the real world is far more constrained. Various approaches have attempted to use internet-scale human videos to train robot policies [30, 31, 32, 33, 34, 35]. Due to various discrepancies (e.g., supervision and viewpoints) between egocentric robot views and internet videos, most existing work [19, 20] use modular approaches with intermediate representations as surrogates for training. The most representative ones are affordances [19, 20] for object interaction, object keypoints predictions [36, 37, 38, 39, 40], or other types of object representations [41, 42, 43]. + +Learn from Human - Aligning Actions. Beyond observation alignment, transferring human demonstrations to robotic platforms introduces additional challenges due to differences in embodiment, actuation, and control dynamics. Specific alignment of human and robot actions is required to overcome these disparities. Approaches have employed masking in egocentric views [16], aligning motion trajectories or flow [44, 45], object-centric actions [46, 47], or hand tracking with specialized hardware [15]. Most closely related to our work, HumanPlus [48] designs a remapping method from 3D human pose estimation to tele-operate humanoid robots. Compared to HumanPlus, the insight of our method is to waive the requirement for robot hardware in collecting human data and collect diverse human data directly for co-training. In contrast to HumanPlus, we intentionally avoid performing retargeting on human demonstrations and designed the policy to directly use human hand poses as states/actions. On the other hand, the 'human shadowing' retargeting in HumanPlus is a teleoperation method that still requires robots, leading to lower collection efficiency than ours. + +Cross-Embodiment. Cross-embodiment pre-training has been shown to improve adaptability and generalization over different embodiments [49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61]. When utilizing human videos, introducing intermediate representations can be prone to composite errors. Recent works investigate end-to-end approaches [2, 24, 1, 3] using cross-embodied robot data to reduce such compounding perceptive errors. Noticeably, these works have found that such end-to-end learning leads to desired behaviors such as retrying [3]. Some other work [62, 38] enforces viewpoint constraints between training human demonstrations and test-time robot deployment to allow learning on human data but it trades off the scalability of the data collection process. + +Concurrent Work. Some concurrent work [15, 16, 63] also attempts to use egocentric human demonstrations for end-to-end cross-embedment policy learning. 
DexCap [15] uses gloves to track 3D hand poses with a chest-mounted RGBD camera to capture egocentric human videos. However, DexCap relies on 3D inputs, whereas some recent works [3, 1] have shown the scalability of 2D visual inputs. Most related to our work, EgoMimic [16] also proposes to collect data using wearable device [64] with 2D visual inputs. However, EgoMimic requires strict visual sensor alignments; whereas we show that scaling up diverse observations with different cameras makes the policy more robust. In addition, $\mathrm{PH}^2\mathrm{D}$ is also greater in dataset scale and object diversity. We also show our policy can be deployed on real robots without strict requirements of visual sensors and heuristics, which paves the way for scalable data collection. + +# 3 Method + +To collect more data to train generalizable robot policies, recent research has explored cross-embodiment learning, enabling policies to generalize across diverse physical forms [3, 1, 4, 2, 65, 14]. This paper proposes egocentric human manipulation demonstrations as a scalable source of cross-embodiment training data. Sec. 3.1 describes our approach to adapt consumer-grade VR devices to scale up human data collection conveniently for a dataset of task-oriented egocentric human demonstrations. Sec. 3.2 describes various techniques to handle domain gaps to align human data and robot data for learning humanoid manipulation policy. + +# 3.1 $\mathbf{PH}^2\mathbf{D}$ : Task-oriented Physical Humanoid-Human Data + +Though there has been existing work that collects egocentric human videos [16, 22, 21, 15], they either (1) provide demonstrations mostly for non-task-oriented skills (e.g., dancing) and do not provide world-frame 3D head and hand poses estimations for imitation learning supervision [21, 22] or (2) require specialized hardware or robot setups [15, 16]. + +To address these issues, we propose $\mathrm{PH^2D}$ . $\mathrm{PH^2D}$ address these two issues by (1) collecting task-oriented human demonstrations that are directly related to robot execution, (2) adapting well-engineered SDKs of VR devices (illustrated in Fig. 2) to provide supervision, and (3) diversifying tasks, camera sensors, and reducing whole-body movement to reduce domain gaps in both vision and behaviors. + +![](images/01ad4450db2b81f63c3f2521e32a09dfef5f050e7dd08149de218571ffa6fb2a.jpg) +Figure 2: Consumer-grade Devices for Data Collection. To avoid relying on specialized hardware for data collection to make our method scalable, we design our data collection process using consumer-grade VR devices. + +# Adapting Low-cost Commercial Devices With development in pose estimation [66] and + +system engineering, modern mobile devices are capable of providing accurate on-device world frame 3D head pose tracking and 3D hand keypoint tracking [9], which has proved to be stable enough to teleoperate robot in real-time [9, 13]. We design software and hardware to support convenient data collection across different devices. Different cameras provide better visual diversity. + +- Apple Vision Pro + Built-in Camera. We developed a Vision OS App that uses the built-in camera for visual observation and uses the Apple ARKit for 3D head and hand poses. +- Meta Quest 3 / Apple Vision Pro + ZED Camera. We developed a web-based application based on OpenTelevision [9] to gather 3D head and hand poses. We also designed a 3D-printed holder to mount ZED Mini Stereo cameras on these devices. 
This configuration is both low-cost ( $<700$ ) and introduces more diversity with stereo cameras. + +Data Collection Pipeline We collect task-oriented egocentric human demonstrations by asking human operators to perform tasks overlapping with robot execution (e.g., grasping and pouring) when wearing the VR devices. For every demonstration, we provide language instructions (e.g., grasp a can of coke zero with right hand), and synchronize proprioception inputs and visual inputs by closest timestamps. + +Action Domain Gap. Human actions and tele-operated robot actions exhibit two distinct characteristics: (1) human manipulation usually involves involuntary whole-body movement, and (2) humans are more dexterous than robots and have significantly faster task completion time than robots. We mitigate the first gap by requesting the human data collectors to sit in an upright position. For the second speed gap, we interpolate translation and rotations of human data during training (effectively 'slowing down' actions). The slow-down factors $\alpha_{\mathrm{slow}}$ are obtained by normalizing the average task completion time of humans and humanoids, which is empirically distributed around 4. For consistency, we use $\alpha_{\mathrm{slow}} = 4$ in all tasks. + +![](images/59c67885ce45ec56b97cbfea1a92fcbe69c8e4d04c354db3d05a4da260c37a2a.jpg) +Figure 3: Overview of HAT. Human Action Transformer (HAT) learns a robot policy by modeling humans. During training, we sample a state-action pair from either human data or robot data. The images are encoded by a frozen DinoV2 encoder [67]. The HAT model makes predictions in a human-centric action space using wrist 6 DoF poses and finger tips, which is retargeted to robot poses during real-robot deployment. + +# 3.2 HAT: Human Action Transformer + +HAT learns cross-embodied robot policy by modeling humans. We demonstrate that treating bi-manual humanoid robots and humans as different robot embodiments via retargeting improves both generalizability and robustness of HAT. + +More concretely, let $\mathcal{D}_{\text{robot}} = \{(\mathbf{S}_i, \mathbf{A}_i)\}_{i=1}^N$ be the set of data collected from real bimanual humanoid robots using teleoperation [9], where $\mathbf{S}_i$ is the states including proprioceptive and visual observations of $i$ -th demonstration and $\mathbf{A}_i$ be the actions. The collected $\mathrm{PH}^2\mathrm{D}$ dataset, $\mathcal{D}_{\text{human}} = \{(\tilde{\mathbf{S}}_i, \tilde{\mathbf{A}}_i)\}_{i=1}^M$ is used to augment the training process. Note that it is reasonable to assume $M \gg N$ due to the significantly better human data collection efficiency. + +The goal is to design a policy $\pi : \mathbf{S} \to \mathbf{A}$ that predicts future robot actions $\mathbf{a}_t$ given current robot observation $\mathbf{s}_t$ at time $t$ , where the future actions $\mathbf{a}_{t+1}$ is usually a chunk of actions for multi-step execution (with slight abuse of notation). We model $\pi$ using HAT, which is a transformer-based architecture predicting action chunks [5]. The overview of the model is illustrated in Fig. 3. We discuss key design choices of HAT with experimental ablations. + +Unified State-Action Space. Both bimanual robots and humans have two end effectors. In our case, our robots are also equipped with an actuated 2DoF neck that can rotate, which resembles the autonomous head movement when humans perform manipulation. 
Therefore, we design a unified state-action space (i.e., $(\mathbf{S},\mathbf{A})\equiv (\tilde{\mathbf{S}},\tilde{\mathbf{A}}))$ for both bimanual robots and humans. More concretely, the proprioceptive observation is a 54-dimensional vector (6D rotations [68] of the head, left wrist, and right wrist; $x / y / z$ of left and right wrists and 10 finger tips). In this work, since we deploy our policy on robots with 5-fingered dexterous hands (shown in Fig. 4), there exists a bijective mapping between the finger tips of robot hands and human hands. Note that injective mapping is also possible (e.g., mapping distance between the thumb finger and other fingers to parallel gripper distance). + +Visual Domain Gap. Two types of domain gaps exist for co-training on human/humanoid data: camera sensors and end effector appearance. Since our human data collection process includes cameras different from robot deployment, this leads to camera domain gaps such as tones. Also, the appearances of human and humanoid end effectors are different. However, with sufficiently large and diverse data, we find it not a strict necessity to apply heuristic processing such as visual artifacts [16] or generative methods [69] to train human-robot policies - basic image augmentations such as color jittering and Gaussian blurring are effective regularization. + +
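For concreteness, the following is a minimal sketch of how the 54-dimensional unified proprioceptive state described above could be assembled (6D rotations for the head and both wrists, plus 3D positions for the two wrists and 10 finger tips). This is an illustrative reconstruction rather than the released implementation; the pose inputs and function names are hypothetical placeholders for whatever pose source is available (VR head/hand tracking for humans, forward kinematics for the humanoid).

```python
import numpy as np

def rotmat_to_6d(R: np.ndarray) -> np.ndarray:
    """6D rotation representation [68]: the first two columns of the
    3x3 rotation matrix, flattened column-major into 6 numbers."""
    return R[:, :2].reshape(-1, order="F")

def unified_state(head_R, left_wrist_R, right_wrist_R,
                  left_wrist_xyz, right_wrist_xyz, fingertips_xyz):
    """Shared human/humanoid proprioceptive vector.

    head_R, *_wrist_R : (3, 3) rotation matrices
    *_wrist_xyz       : (3,) positions
    fingertips_xyz    : (10, 3) positions of the 10 finger tips
    Returns a (54,) vector: 3 * 6 rotation dims + 12 * 3 position dims.
    """
    rotations = np.concatenate([rotmat_to_6d(head_R),
                                rotmat_to_6d(left_wrist_R),
                                rotmat_to_6d(right_wrist_R)])   # 18 dims
    positions = np.concatenate([left_wrist_xyz, right_wrist_xyz,
                                fingertips_xyz.reshape(-1)])     # 36 dims
    return np.concatenate([rotations, positions])                # 54 dims

# Identity rotations and zero positions already form a valid state.
s = unified_state(np.eye(3), np.eye(3), np.eye(3),
                  np.zeros(3), np.zeros(3), np.zeros((10, 3)))
assert s.shape == (54,)
```

Because the same builder is applied to both embodiments, the policy input format never reveals whether a sample came from a human or the humanoid, which is what removes the embodiment "short-cut" discussed in the ablations.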
| Meth. | H. Data | D. Norm | Passing I.D. | Passing O.O.D. | Horizontal Grasp I.D. | Horizontal Grasp O.O.D. | Vertical Grasp I.D. | Vertical Grasp O.O.D. | Pouring I.D. | Pouring O.O.D. | Ovr. Succ. I.D. | Ovr. Succ. O.O.D. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ACT | ✗ | NA | 19/20 | 36/60 | 8/10 | 7/30 | 7/20 | 15/70 | 8/10 | 1/10 | 42/60 | 59/170 |
| HAT | ✓ | ✗ | 17/20 | 51/60 | 9/10 | 11/30 | 14/20 | 30/70 | 5/10 | 5/10 | 45/60 | 97/170 |
| HAT | ✓ | ✓ | 20/20 | 52/60 | 8/10 | 12/30 | 13/20 | 29/70 | 8/10 | 8/10 | 49/60 | 101/170 |

Type of Generalization tested per task: Background (Passing), Texture (Horizontal Grasp), Obj. Placement (Vertical Grasp).
+ +Table 2: Success rate of autonomous skill execution. Co-training with human data (H. Data) significantly improves the Out-Of-Distribution (O.O.D.) performance with nearly $100\%$ relative improvement on all tasks on Humanoid A. We also ablate the design choice of using different normalizations (D. Norm) for different embodiments. We designate each task setting to investigate a single type of generalization. Detailed analysis of each type of generalization is presented in Sec. C. + +Training. The final policy is denoted as $\pi : f_{\theta}(\cdot) \to \mathbf{A}$ for both human and robot policy, where $f_{\theta}$ is a transformer-based neural network parametrized by $\theta$ . The final loss is given by, + +$$ +\mathcal {L} = \ell_ {1} \left(\pi \left(s _ {i}\right), a _ {i}\right) + \lambda \cdot \ell_ {1} \left(\pi \left(s _ {i}\right) _ {\mathrm {E E F}}, a _ {i, \mathrm {E E F}}\right), \tag {1} +$$ + +where EEF are the indices of the translation vectors of the left and right wrists, and $\lambda = 2$ is an (insensitive) hyperparameter used to balance loss to emphasize the importance of end effector positions over learning unnecessarily precise finger tip keypoints. + +# 4 Experiments + +Hardware Platforms. We run our experiments on two humanoid robots (Humanoid A and Humanoid B shown in Fig. 4) equipped with 6-DOF Inspire dexterous hands. Humanoid A is a Unitree H1 robot and Humanoid B is a Unitree H1_2 robot with different arm configurations. Similar to humans, both robots (1) are equipped with actuated necks [9] to get make use of egocentric views and (2) do not have wrist cameras. Unless otherwise noted, most humanoid data collection is done with Humanoid A. We use Humanoid B mainly for testing cross-humanoid generalization. + +![](images/4c584c96607aaeb29e9b43e2c7ed41bb0e70f76fd5bedf0f003b16fa1b9660c9.jpg) +(a) Humanoid A +Figure 4: Hardware Illustration. Most robot data attributes to Humanoid A, a Unitree H1 robot. Humanoid B, a Unitree H1-2 robot with different arm motor configurations, is used to evaluate few-shot cross-humanoid transfer. Detailed comparisons in Sec. D + +![](images/c658da49d15f8271384f414e1d227a0a00368b23fa0f816d5b26eb80997e7154.jpg) +(b) Humanoid B + +Implementation Details. We implement policy architecture by adopting an transformer-based architecture predicting future action chunks [5]. We use a frozen DinoV2 ViT-S [67] as the visual backbone. We implement two variants: (1) ACT: baseline implementation using the Action Chunk Transformer [5], trained using only robot data. Robot states are represented as joint positions. (2) HAT: same architecture as ACT, but the state encoder operates in the unified state-action space. Unless otherwise stated, HAT is co-trained on robot and human data. A checkpoint is trained for each task with approximately 250-400 robot demonstrations. + +Experimental Protocol. We collect robot and human demonstrations in different object sets. Since human demonstrations are easier to collect, the settings in human demonstrations are generally more diverse, which include background, object types, object positions, and the relative position of the human to the table. + +We experimented with four different dexterous manipulation tasks and investigated in-distribution and out-of-distribution setups. The in-distribution (I.D.) setting tests the learned skills with backgrounds and object arrangements approximately similar to the training demonstrations presented in the real-robot data. In the Out-Of-Distribution (O.O.D.) 
setting, we test generalizability and robustness by introducing novel setups that were presented in human data but not in robot data. Fig. 7 visualizes different manipulation tasks and how we define out-of-distribution settings for each task. + +Figure 5: Few-Shot Adaptation. Co-training consistently outperforms isolated training as Humanoid B demonstrations increase, achieving robust success rates even in low-data regimes. +![](images/9b89a15df58365bcccfb9db4b6d750eab29560b650833938df03595db378a427.jpg) +(a) Performance of Humanoid B co-trained with $\mathrm{PH}^2\mathrm{D}$ on horizontal grasping. o1 is seen by Humanoid B. o2 and o3 seen in human data. o4 is unseen in all data. + +![](images/e4fd12fd6c055191040209ed8b5dd29a02bd388cfb49a7541bb3832b5932007c.jpg) +(b) Co-training consistently outperforms isolated training as Humanoid B demonstrations increase, achieving good success rates even in low-data regimes. + +# 4.1 Main Evaluation + +Human data has minor effects on I.D. testing. From Tab. 2, we can see that I.D. performance with or without co-training with human data gives similar results. In the I.D. setting, we closely match the scene setups as training demonstrations, including both background, object types, and object placements. Thus, policies trained with only a small amount of Humanoid A data performed well in this setting. This finding is consistent with recent work [9, 7] that frozen visual foundation models [17, 67] improve robustness against certain external perturbations such as lighting. + +Human data improves the O.O.D. settings with many generalizations. One common challenge in imitation learning is overfitting to only in-distribution task settings. Hence, it is crucial for a robot policy to generalize beyond the scene setups seen in a limited set of single-embodiment data. To demonstrate how co-training with human data reduces such overfitting, we introduce O.O.D. task settings to evaluate such generalization. From Tab. 2, we can see that co-training drastically improves O.O.D. settings, achieving nearly $100\%$ relative improvement in settings unseen by the robot data. In particular, we find that human data improves three types of generalization: background, object placement, and appearance. To isolate the effect of each variable, each task focuses on a specific type of generalization as listed in Tab. 2, with in-depth analyses in Sec. C. + +# 4.2 Few-Shot Transfer across Heterogenous Embodiments + +We conducted few-shot generalization experiments on a distinct humanoid platform (Humanoid B), contrasting it with our primary platform, Humanoid A. Notably, Humanoid B's demonstration data were collected in an entirely separate environment, introducing both embodiment and environmental shifts. We highlight two key advantages of our approach: (1) the ability to unify heterogeneous human-centric data sources (humanoids and humans) into a generalizable policy framework, and (2) the capacity to rapidly adapt to new embodiments with drastically reduced data requirements. + +Experiment 1: Cross-embodiment co-training efficacy Using only 20 demonstrations from Humanoid B, we trained 3 policies - respectively on data from (i) Humanoid B only, (ii) Humanoid B + Humanoid A (cross-embodiment), and (iii) Humanoid B + Humanoid A + Human (cross-embodiment and human priors). As shown in Fig. 
5a, co-trained policies (ii) and (iii) substantially outperformed the Humanoid B-only baselines on all task settings, underscoring the method's ability to transfer latent task structure across embodiments. + +Experiment 2: Scaling Demonstrations for Few-Shot Adaptation We further quantified the relationship between required for few-shot generalization. We hold Humanoid A and human datasets fixed + +![](images/d1abd18a54dcf019aac825156a4dd6ef52c0918ea26fa6a585afc6d147e83292.jpg) +Figure 6: Human data has better sampling efficiency. Per-grid vertical grasping successes out of 10 trials with models trained with robot-only data and mixed data. Red boxes indicate where training data is collected. + +
| Task | State Space (unified) | Action Speed (interpolated) | Success |
| --- | --- | --- | --- |
| Vertical Grasping | ✓ | ✗ | 1/10 |
| | ✗ | ✓ | 0/10 |
| | ✓ | ✓ | 4/10 |
+ +Table 3: Importance of unifying policy inputs and outputs. We report the number of successes of vertical grasping objects in the upper-left block as illustrated in Fig. 8. Baselines use joint positions as state input or do not interpolate human motions. + +for the horizontal grasping task and ablate number of demonstrations required for Humanoid B in Fig. 5. Co-training (Humanoid $\mathrm{B} + \mathrm{A} + \mathrm{Human}$ ) consistently outperformed isolated training on Humanoid B across all settings, especially in the few-data regime. + +# 4.3 Ablation Study + +Sampling Efficiency of Human and Humanoid Data. Conceptually, collecting human data is less expensive, not just because it can be done faster, but also because it can be done in in-the-wild scenes; reduces setup cost before every data collection; and avoids the hardware cost to equip every operator with robots. + +We perform additional experiments to show that even in the lab setting, human data can have better sampling efficiency in unit time. In particular, we provide a small-scale experiment on the vertical grasping task. Allocating 20 minutes for two settings, we collected (1) 60 Humanoid A demonstrations, (2) 30 Humanoid A demonstrations, and 120 human demonstrations. To avoid confating diversity and data size, the object placements in all demonstrations are evenly distributed at the bottom 6 cells. The results are given in Fig. 6. The policy trained with mixed robot and human data performs significantly better, which validates the sampling efficiency of human data over robot data. Each cell represents a $10\mathrm{cm} \times 10\mathrm{cm}$ region where the robot attempts to pick up a box. + +State-Action Design. In Tab. 3, we ablate the design choices of the proprioception state space and the speed of output actions. In particular, using the same set of robot and human data, we implement two baselines: 1) a unified state-action space, but does not interpolate (i.e., slow down) the human actions; and 2) a baseline that interpolates human actions but uses separate state representation for humanoid (joint positions) and humans (EEF representation). The policies exhibit different failure patterns during the rollout of these two baselines. Without interpolating human actions, the speed of the predicted actions fluctuates between fast (resembling humans) and slow (resembling teleoperation), which leads to instability. Without a unified state space, the policy is given a 'short-cut' to distinguish between embodiments, which leads to on-par in-distribution performance and significantly worse OOD performance. + +More Ablation Study. Due to space limit, please refer to the appendix and the supplementary for more qualitative visualization and quantitative ablation studies. + +# 5 Conclusions + +This paper proposes $\mathrm{PH^2D}$ , an effort to construct a large-scale human task-oriented behavior dataset, along with the training pipeline HAT, which leverages $\mathrm{PH^2D}$ and robot data to show how humans can be treated as a data source for cross-embodiment learning. We show that it is possible to directly train an imitation learning model with mixed human-humanoid data without any training surrogates when the human data are aligned with the robot data. The learned policy shows improved generalization and robustness compared to the counterpart trained using only real-robot data. 
+ +# 6 Limitations + +Although we also collect language instructions in $\mathrm{PH^2D}$ , due to our focus on investigating the embodiment gap between humans and humanoids, one limitation of the current version of the paper uses a relatively simple architecture for learning policy. In the near future, we plan to expand the policy learning process to train a large language-conditioned cross-embodiment policy to investigate generalization to novel language using human demonstrations. The collection of human data relies on off-the-shelf VR hardwares and their hand tracking SDKs. Since these SDKs were trained mostly for VR applications, hand keypoint tracking can fail for certain motions with heavy occlusion. In addition, though the proposed method conceptually extends to more robot morphologies, current evaluations are done on robots equipped with dexterous hands. + +# 7 Acknowledgment + +This work was supported, in part, by NSF CAREER Award IIS-2240014, NSF CCF-2112665 (TILOS), and gifts from Amazon, Meta and Apple. + +# References + +[1] S. Liu, L. Wu, B. Li, H. Tan, H. Chen, Z. Wang, K. Xu, H. Su, and J. Zhu. Rdt-1b: a diffusion foundation model for bimanual manipulation. arXiv preprint arXiv:2410.07864, 2024. +[2] Octo Model Team, D. Ghosh, H. Walke, K. Pertsch, K. Black, O. Mees, S. Dasari, J. Hejna, C. Xu, J. Luo, T. Kreiman, Y. Tan, L. Y. Chen, P. Sanketi, Q. Vuong, T. Xiao, D. Sadigh, C. Finn, and S. Levine. Octo: An open-source generalist robot policy. In Proceedings of Robotics: Science and Systems, 2024. +[3] K. Black, N. Brown, D. Driess, A. Esmail, M. Equi, C. Finn, N. Fusai, L. Groom, K. Hausman, B. Ichter, et al. $\pi_0$ : A vision-language-action flow model for general robot control. arXiv preprint arXiv:2410.24164, 2024. +[4] S. Dasari, O. Mees, S. Zhao, M. K. Srirama, and S. Levine. The ingredients for robotic diffusion transformers. arXiv preprint arXiv:2410.10088, 2024. +[5] T. Z. Zhao, V. Kumar, S. Levine, and C. Finn. Learning fine-grained bimanual manipulation with low-cost hardware. arXiv preprint arXiv:2304.13705, 2023. +[6] Z. Fu, T. Z. Zhao, and C. Finn. Mobile aloha: Learning bimanual mobile manipulation with low-cost whole-body teleoperation. arXiv preprint arXiv:2401.02117, 2024. +[7] C. Chi, Z. Xu, C. Pan, E. Cousineau, B. Burchfiel, S. Feng, R. Tedrake, and S. Song. Universal manipulation interface: In-the-wild robot teaching without in-the-wild robots. arXiv preprint arXiv:2402.10329, 2024. +[8] S. Yang, M. Liu, Y. Qin, R. Ding, J. Li, X. Cheng, R. Yang, S. Yi, and X. Wang. Ace: A cross-platform visual-exoskeletons system for low-cost dexterous teleoperation. arXiv preprint arXiv:2408.11805, 2024. +[9] X. Cheng, J. Li, S. Yang, G. Yang, and X. Wang. Open-television: Teleoperation with immersive active visual feedback. In Conference on Robot Learning (CoRL), 2024. +[10] T. He, Z. Luo, X. He, W. Xiao, C. Zhang, W. Zhang, K. Kitani, C. Liu, and G. Shi. Omnih2o: Universal and dexterous human-to-humanoid whole-body teleoperation and learning. arXiv preprint arXiv:2406.08858, 2024. +[11] S. Dasari, F. Ebert, S. Tian, S. Nair, B. Bucher, K. Schmeckpeper, S. Singh, S. Levine, and C. Finn. Robonet: Large-scale multi-robot learning. arXiv preprint arXiv:1910.11215, 2019. +[12] H. Bharadhwaj, J. Vakil, M. Sharma, A. Gupta, S. Tulsiani, and V. Kumar. Roboagent: Generalization and efficiency in robot manipulation via semantic augmentations and action chunking. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 4788-4795. IEEE, 2024. +[13] H. 
Ha, Y. Gao, Z. Fu, J. Tan, and S. Song. Umi on legs: Making manipulation policies mobile with manipulation-centric whole-body controllers. arXiv preprint arXiv:2407.10353, 2024. +[14] A. O'Neill, A. Rehman, A. Gupta, A. Maddukuri, A. Gupta, A. Padalkar, A. Lee, A. Pooley, A. Gupta, A. Mandlekar, et al. Open x-embodiment: Robotic learning datasets and rt-x models. arXiv preprint arXiv:2310.08864, 2023. +[15] C. Wang, H. Shi, W. Wang, R. Zhang, L. Fei-Fei, and C. K. Liu. Dexcap: Scalable and portable mocap data collection system for dexterous manipulation. arXiv preprint arXiv:2403.07788, 2024. +[16] S. Kareer, D. Patel, R. Punamiya, P. Mathur, S. Cheng, C. Wang, J. Hoffman, and D. Xu. Egomimic: Scaling imitation learning via egocentric video. arXiv preprint arXiv:2410.24221, 2024. URL https://arxiv.org/abs/2410.24221. + +[17] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision. In ICML. PMLR, 2021. +[18] OpenAI. Gpt-4 technical report. Technical report, OpenAI, 2023. +[19] R. Mendonca, S. Bahl, and D. Pathak. Structured world models from human videos. In RSS, 2023. +[20] S. Bahl, R. Mendonca, L. Chen, U. Jain, and D. Pathak. Affordances from human videos as a versatile representation for robotics. In CVPR, 2023. +[21] K. Grauman, A. Westbury, E. Byrne, Z. Chavis, A. Furnari, R. Girdhar, J. Hamburger, H. Jiang, M. Liu, X. Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In CVPR, 2022. +[22] D. Damen, H. Doughty, G. M. Farinella, S. Fidler, A. Furnari, E. Kazakos, D. Moltisanti, J. Munro, T. Perrett, W. Price, and M. Wray. Scaling egocentric vision: The epic-kitchens dataset. In ECCV, 2018. +[23] T. Z. Zhao, J. Tompson, D. Driess, P. Florence, K. Ghasemipour, C. Finn, and A. Wahid. Aloha unleashed: A simple recipe for robot dexterity. arXiv preprint arXiv:2410.13126, 2024. +[24] L. Wang, X. Chen, J. Zhao, and K. He. Scaling proprioceptive-visual learning with heterogeneous pre-trained transformers. arXiv preprint arXiv:2409.20537, 2024. +[25] C. Chi, Z. Xu, S. Feng, E. Cousineau, Y. Du, B. Burchfiel, R. Tedrake, and S. Song. Diffusion policy: Visuomotor policy learning via action diffusion. The International Journal of Robotics Research, page 02783649241273668, 2023. +[26] R.-Z. Qiu, Y. Song, X. Peng, S. A. Suryadevara, G. Yang, M. Liu, M. Ji, C. Jia, R. Yang, X. Zou, et al. Wildma: Long horizon loco-manipulation in the wild. arXiv preprint arXiv:2411.15131, 2024. +[27] C. Lu, X. Cheng, J. Li, S. Yang, M. Ji, C. Yuan, G. Yang, S. Yi, and X. Wang. Mobile- television: Predictive motion priors for humanoid whole-body control. In ICRA, 2025. +[28] Y. Ze, Z. Chen, W. Wang, T. Chen, X. He, Y. Yuan, X. B. Peng, and J. Wu. Generalizable humanoid manipulation with improved 3d diffusion policies. arXiv preprint arXiv:2410.10803, 2024. +[29] S. P. Arunachalam, S. Silwal, B. Evans, and L. Pinto. Dexterous imitation made easy: A learning-based framework for efficient dexterous manipulation. In 2023 IEEE international conference on robotics and automation (icra), pages 5954-5961. IEEE, 2023. +[30] A. S. Chen, S. Nair, and C. Finn. Learning generalizable robotic reward functions from "in-the-wild" human videos. arXiv preprint arXiv:2103.16817, 2021. +[31] J. Lee and M. S. Ryoo. Learning robot activities from first-person human videos using convolutional future regression. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1-2, 2017. +[32] K. Lee, Y. Su, T.-K. Kim, and Y. Demiris. A syntactic approach to robot imitation learning using probabilistic activity grammars. Robotics and Autonomous Systems, 61(12):1323-1334, 2013. +[33] A. Nguyen, D. Kanoulas, L. Muratore, D. G. Caldwell, and N. G. Tsagarakis. Translating videos to commands for robotic manipulation with deep recurrent neural networks. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 3782-3788. IEEE, 2018. + +[34] J. Rothfuss, F. Ferreira, E. E. Aksoy, Y. Zhou, and T. Asfour. Deep episodic memory: Encoding, recalling, and predicting episodic experiences for robot action execution. IEEE Robotics and Automation Letters, 3(4):4007-4014, 2018. +[35] Y. Yang, Y. Li, C. Fermuller, and Y. Aloimonos. Robot learning manipulation action plans by "watching" unconstrained videos from the world wide web. In Proceedings of the AAAI conference on artificial intelligence, volume 29, 2015. +[36] H. Bharadhwaj, R. Mottaghi, A. Gupta, and S. Tulsiani. Track2act: Predicting point tracks from internet videos enables diverse zero-shot robot manipulation. In ECCV, 2024. +[37] C. Wen, X. Lin, J. So, K. Chen, Q. Dou, Y. Gao, and P. Abbeel. Any-point trajectory modeling for policy learning. arXiv preprint arXiv:2401.00025, 2023. +[38] J. Li, Y. Zhu, Y. Xie, Z. Jiang, M. Seo, G. Pavlakos, and Y. Zhu. Okami: Teaching humanoid robots manipulation skills through single video imitation. arXiv preprint arXiv:2410.11792, 2024. +[39] N. Das, S. Bechtle, T. Davchev, D. Jayaraman, A. Rai, and F. Meier. Model-based inverse reinforcement learning from visual demonstrations. In Conference on Robot Learning, pages 1930–1942. PMLR, 2021. +[40] H. Xiong, Q. Li, Y.-C. Chen, H. Bharadhwaj, S. Sinha, and A. Garg. Learning by watching: Physical imitation of manipulation skills from human videos. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 7827-7834. IEEE, 2021. +[41] S. Pirk, M. Khansari, Y. Bai, C. Lynch, and P. Sermanet. Online object representations with contrastive learning. arXiv preprint arXiv:1906.04312, 2019. +[42] S. Nair, A. Rajeswaran, V. Kumar, C. Finn, and A. Gupta. R3m: A universal visual representation for robot manipulation. arXiv preprint arXiv:2203.12601, 2022. +[43] Y. J. Ma, S. Sodhani, D. Jayaraman, O. Bastani, V. Kumar, and A. Zhang. Vip: Towards universal visual reward and representation via value-implicit pre-training. arXiv preprint arXiv:2210.00030, 2022. +[44] L.-H. Lin, Y. Cui, A. Xie, T. Hua, and D. Sadigh. Flowretrieval: Flow-guided data retrieval for few-shot imitation learning. arXiv preprint arXiv:2408.16944, 2024. +[45] J. Ren, P. Sundaresan, D. Sadigh, S. Choudhury, and J. Bohg. Motion tracks: A unified representation for human-robot transfer in few-shot imitation learning. arXiv preprint arXiv:2501.06994, 2025. +[46] Y. Zhu, A. Lim, P. Stone, and Y. Zhu. Vision-based manipulation from single human video with open-world object graphs. arXiv preprint arXiv:2405.20321, 2024. +[47] C.-C. Hsu, B. Wen, J. Xu, Y. Narang, X. Wang, Y. Zhu, J. Biswas, and S. Birchfield. Spot: Se (3) pose trajectory diffusion for object-centric manipulation. arXiv preprint arXiv:2411.00965, 2024. +[48] Z. Fu, Q. Zhao, Q. Wu, G. Wetzstein, and C. Finn. Humanplus: Humanoid shadowing and imitation from humans. In CoRL, 2024. +[49] W. Huang, I. Mordatch, and D. Pathak. 
One policy to control them all: Shared modular policies for agent-agnostic control. In International Conference on Machine Learning, pages 4455–4464. PMLR, 2020. +[50] L. Y. Chen, K. Hari, K. Dharmarajan, C. Xu, Q. Vuong, and K. Goldberg. Mirage: Cross-embodiment zero-shot policy transfer with cross-painting. arXiv preprint arXiv:2402.19249, 2024. + +[51] J. Yang, C. Glossop, A. Bhorkar, D. Shah, Q. Vuong, C. Finn, D. Sadigh, and S. Levine. Pushing the limits of cross-embodiment learning for manipulation and navigation. arXiv preprint arXiv:2402.19432, 2024. +[52] J. Yang, D. Sadigh, and C. Finn. Polybot: Training one policy across robots while embracing variability. arXiv preprint arXiv:2307.03719, 2023. +[53] F. Ebert, Y. Yang, K. Schmeckpeper, B. Bucher, G. Georgakis, K. Daniilidis, C. Finn, and S. Levine. Bridge data: Boosting generalization of robotic skills with cross-domain datasets. arXiv preprint arXiv:2109.13396, 2021. +[54] T. Franzmeyer, P. Torr, and J. F. Henriques. Learn what matters: cross-domain imitation learning with task-relevant embeddings. Advances in Neural Information Processing Systems, 35: 26283-26294, 2022. +[55] A. Ghadirzadeh, X. Chen, P. Poklukar, C. Finn, M. Björkman, and D. Kragic. Bayesian meta-learning for few-shot policy adaptation across robotic platforms. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1274–1280. IEEE, 2021. +[56] T. Shankar, Y. Lin, A. Rajeswaran, V. Kumar, S. Anderson, and J. Oh. Translating robot skills: Learning unsupervised skill correspondences across robots. In International Conference on Machine Learning, pages 19626-19644. PMLR, 2022. +[57] M. Xu, Z. Xu, C. Chi, M. Veloso, and S. Song. Xskill: Cross embodiment skill discovery. In Conference on Robot Learning, pages 3536-3555. PMLR, 2023. +[58] Z.-H. Yin, L. Sun, H. Ma, M. Tomizuka, and W.-J. Li. Cross domain robot imitation with invariant representation. In 2022 International Conference on Robotics and Automation (ICRA), pages 455-461. IEEE, 2022. +[59] K. Zakka, A. Zeng, P. Florence, J. Tompson, J. Bohg, and D. Dwibedi. Xirl: Cross-embodiment inverse reinforcement learning. In Conference on Robot Learning, pages 537–546. PMLR, 2022. +[60] G. Zhang, L. Zhong, Y. Lee, and J. J. Lim. Policy transfer across visual and dynamics domain gaps via iterative grounding. arXiv preprint arXiv:2107.00339, 2021. +[61] Q. Zhang, T. Xiao, A. A. Efros, L. Pinto, and X. Wang. Learning cross-domain correspondence for control with dynamics cycle-consistency. arXiv preprint arXiv:2012.09811, 2020. +[62] S. Bahl, A. Gupta, and D. Pathak. Human-to-robot imitation in the wild. In RSS, 2022. +[63] C. Wang, L. Fan, J. Sun, R. Zhang, L. Fei-Fei, D. Xu, Y. Zhu, and A. Anandkumar. Mimicplay: Long-horizon imitation learning by watching human play. arXiv preprint arXiv:2302.12422, 2023. +[64] J. Engel, K. Somasundaram, M. Goesele, A. Sun, A. Gamino, A. Turner, A. Talattof, A. Yuan, B. Souti, B. Meredith, et al. Project aria: A new tool for egocentric multi-modal ai research. arXiv preprint arXiv:2308.13561, 2023. +[65] A. Khazatsky, K. Pertsch, S. Nair, A. Balakrishna, S. Dasari, S. Karamcheti, S. Nasiriany, M. K. Srirama, L. Y. Chen, K. Ellis, et al. Droid: A large-scale in-the-wild robot manipulation dataset. arXiv preprint arXiv:2403.12945, 2024. +[66] W. Zhu, X. Ma, Z. Liu, L. Liu, W. Wu, and Y. Wang. Motionbert: A unified perspective on learning human motion representations. In ICCV, 2023. +[67] M. Oquab, T. Darcet, T. Moutakanni, H. Vo, M. Szafraniec, V. 
Khalidov, P. Fernandez, D. Haziza, F. Massa, A. El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023. + +[68] Y. Zhou, C. Barnes, J. Lu, J. Yang, and H. Li. On the continuity of rotation representations in neural networks. In CVPR, 2019. +[69] T. Yu, T. Xiao, A. Stone, J. Tompson, A. Brohan, S. Wang, J. Singh, C. Tan, J. Peralta, B. Ichter, et al. Scaling robot learning with semantically imagined experience. arXiv preprint arXiv:2302.11550, 2023. + +![](images/9f523a9fee12629f433e77be513b1017d58395d4ce05221409b633c594646209.jpg) +(a) The robot performs the cup passing task across four different backgrounds. The left side shows the four background variations, while the right side illustrates the two passing directions: (#1 - Right hand passes the cup to the left hand, #2 - Left hand passes the cup to the right hand). + +![](images/866552a849bd596593904a24e5c38eb7ae77607f532fc18e081a730a1cc169b6.jpg) +(b) The robot performs the horizontal grasping task with four different items: bottle, box_1, box_2, and can, as shown on the left. The right side illustrates the process: (#1-#3 - The robot grasps the bottle, #4-#5 - The robot places it into the plastic bin). + +![](images/2e761ca98fefa9973609a6e2dbaa3424910a39a79dc84bf885adf3d0f1436ddb.jpg) +(c) The robot performs the vertical grasping task. As shown on the left, the Dynamixel box is placed in nine different positions for grasping. The right side illustrates the process: (#1-#3 - The robot grasps the box, #4-#5 - The robot places the box into the plastic bin). + +![](images/f6f6620b6c543c6d3d819bc17471c003a9fd4c539b262052407a900dc69d0ac7.jpg) +(d) The robot performs the pouring task. The left side shows different settings achieved by varying the robot's rotation and the table's position. The right side illustrates the pouring process: (#1 - Right hand grasps the bottle, #2 - Left hand grasps the cup, #3 - Pouring the drink, #4 - Left hand places the cup down, #5 - Right hand places the bottle down). +Figure 7: Illustrations of tasks used in quantitative evaluations. From top to bottom: cup passing, horizontal grasping, vertical grasping, and pouring. +Table 4: Ablation of how human whole-body movement in training demonstrations affects policy rollout. We collect the same number of demonstrations on the same set of objects for the grasping task with or without whole-body movement. Since the robot does not have a natural whole-body movement like humans, it negatively influences the manipulation success rate. + +
MethodBottleBox1Box2CanOvr. Succ.
I.D.H.D.H.D.H.D.
Without whole-body8/106/100/107/1021/40
With whole-body9/103/103/103/1018/40
+ +# A More Ablation Study - Data Collection + +Autonomous Whole-body Movement. In Tab. 4, we justify the necessity to minimize body movement in human data collection. Humans tend to move their upper body unconsciously during manipulation (including shoulder and waist movement). However, existing humanoid robots have yet to reach such a level of dexterity. Thus, having these difficult-to-replicate actions in the human demonstrations leads to degraded performance. We hypothesize that such a necessity would be greatly reduced with the development of both whole-body locomotion methods and mechanical designs, + +
MethodGrasping (secs)Pouring (secs)
Human Demo3.79±0.274.81±0.35
Human Demo with VR4.09±0.304.90±0.26
Humanoid Demo (VR Teleop)19.72±1.6537.31±6.25
+ +Table 5: Amortized mean and standard deviation of the time required to collect a single demonstration, including scene resets. The first row shows the time for a regular human to complete the corresponding tasks in the real world. The second row represents our human data collection while wearing a VR device, demonstrating that egocentric human demonstrations provide a more scalable data source compared to robot teleoperation. + +but for the currently available platforms, we instruct operators to minimize body movement as much as possible in our dataset. + +Efficiency of Data Collection. In Tab. 5, we compare task completion times across different setups, including standard human manipulation, human demonstrations performed while wearing a VR device, and robot teleoperation. This analysis highlights how task-oriented human demonstrations can be a scalable data source for cross-embodiment learning. Notably, wearing a VR device does not significantly impact human manipulation speed, as the completion time remains nearly the same as in standard human demonstrations. + +Among different data collection schemes, we find that most overhead arises during the retargeting process from human actions to robot actions. This is primarily due to latency and the constrained workspace of 7-DoF robotic arms, which are inherent challenges in existing data collection methods such as VR teleoperation [9], motion tracking [48, 10], and puppeting [8, 5]. + +Beyond data collection speed, human demonstrations offer several additional advantages over teleoperation. They provide a safer alternative, reducing risks associated with real-robot execution. They are also more labor-efficient, as they do not require additional personnel for supervision. Furthermore, human demonstrations allow for greater flexibility in settings, enabling a diverse range of environments without requiring robot-specific adaptations. Additionally, human demonstrations achieve a higher demonstration success rate, and the required hardware (such as motion capture or VR devices) is more accessible and cost-effective compared to full robotic setups. These factors collectively make human data a more scalable solution for large-scale data collection. + +# B Normalization of Different Embodiments + +Tab. 2 suggests only minor differences between using separate normalization coefficients for the state and action vectors of humans and humanoids and using shared ones. We take a closer look in Fig. 8, where we investigate the impact of different normalization strategies in the vertical grasping (picking) task. Notably, the shared normalization approach achieved the highest overall success rate, but the success distribution is biased towards the upper-right region of the grid. + +We hypothesize that this is because humans have a larger workspace than humanoid robots. Thus, human data encompasses humanoid proprioception as a subset, which results in a relatively smaller distribution for the robot state-action space. + +# C In-Depth Analysis of Different Types of Generalization + +Human data improves background generalization. We chose to use the cup passing task to test background generalization. We prepared four different tablecloths as backgrounds, as shown in Fig. 7a. In terms of training data distribution, the teleoperation data for this task was collected exclusively on the paper background shown in Fig. 7a, whereas the human data includes more than five different backgrounds.
This diverse human dataset significantly enhances the generalization ability of the co-trained HAT policy. As shown in Tab. 7, HAT consistently outperforms ACT across all four backgrounds, demonstrating robustness to background variations. In addition, the overall + +
MethodBottleBox1Box2CanOvr. Succ.
I.D.H.D.O.O.D.O.O.D.
ACT8/105/101/101/1016/40
HAT8/107/101/104/1021/40
+ +Table 6: Object Appearance Generalization: In the horizontal grasping task, we evaluated the grasping performance by attempting to grasp each object 10 times and recording the success rate. + +success rate increases by nearly $50\%$ compared to training without human data, highlighting the advantage of utilizing diverse human demonstrations. + +Human data improves appearance generalization. To test how co-training improves robustness to perturbations in object textures, we evaluate the horizontal grasping policy on novel objects, as shown in Fig. 7b. Specifically, we compare the policy's performance on the bottle, box_1, box_2, and can, as shown left to right in the first image in Fig. 7b. These objects differ significantly in both color and shape from the bottle used in the teleoperation data distribution. + +Since grasping is a relatively simple task, our adjusted policy demonstrates strong learning capabilities even with only 50 teleoperation data samples. The policy can successfully grasp most bottles despite the limited training set. To better highlight the impact of human data, we selected more challenging objects for evaluation. As shown in Tab. 6, human data significantly enhances the policy's ability to grasp these more difficult objects. + +Notably, box_1 appears in the human data, while box_2 does not. Despite this, we observe that co-training with human data still improves overall performance, even on box_2, though its success rate does not increase. This suggests that, beyond direct experience with specific objects, the human data helps the policy learn broader visual priors that enable more proactive and stable grasping behaviors. For box_2, while the success rate remains low, partially due to its low height and color similarity to the table, the co-trained HAT policy demonstrates fewer out-of-distribution (OOD) failures and more actively searches for graspable regions. The failures on box_2 are primarily due to unstable grasping and the small box slipping from the hand, rather than the inability to perceive or locate the object. + +Furthermore, adding more human data not only improves performance on objects seen in human training demonstrations (e.g., box_1) but also enhances generalization to completely novel objects (e.g., box_2 and can). We hypothesize that, as the number of objects grows, HAT starts to learn inter-category visual priors that guide it to grasp objects more effectively, even when they were not explicitly present in the training set. + +Human data improves object placement generalization. Finally, we introduce variations in object placements that are not present in the real-robot training demonstrations and specifically investigate this in the vertical grasping (picking) task. In this task, we intentionally constrain the robot data collection to object placements within a subset of cells, while human vertical grasping data covers a much more diverse range of settings. + +To systematically analyze the impact of human data, we evaluate model performance on a structured $3 \times 3$ grid, where each cell represents a $10\mathrm{cm} \times 10\mathrm{cm}$ region for grasping attempts. The numbers in each cell indicate the number of successful picks out of 10 trials. Real-robot training data is collected from only two specific cells, highlighted with dashed lines. + +A key detail in our teleoperation data distribution is that 50 picking attempts are collected from the right-hand side grid and only 10 from the left-hand side grid. 
This imbalance explains why policies trained purely on teleoperation data struggle to grasp objects in the left-side grid. We observe that models trained solely on robot data fail to generalize to unseen cells, whereas cross-embodiment learning with human data significantly improves generalization, doubling the overall success rate. + +![](images/1f6e163acefae2ad12158fbc4b7a7b2831a66bf5aa2f92de897b28a3f85082ae.jpg) +Figure 8: Object Placement Generalization. Performance comparisons of models trained with and without human data on vertical grasping (picking). Each cell in the $3 \times 3$ grid represents a $10\mathrm{cm} \times 10\mathrm{cm}$ region where the robot attempts to pick up a box, with numbers indicating successful attempts out of 10. The real-robot data is collected in two cells inside the dashed lines. Notably, our teleoperation data is intentionally imbalanced. + +
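To make the normalization comparison discussed in Sec. B concrete, the following minimal sketch (not from the paper) contrasts fitting one shared set of per-dimension statistics on pooled human and humanoid state-action vectors against fitting separate per-embodiment statistics. Z-score normalization, the placeholder data, and the function names (`fit_stats`, `normalize`) are illustrative assumptions, not the authors' implementation; only the 54-dimensional vector size follows the paper's unified state-action space.

```python
import numpy as np

def fit_stats(x: np.ndarray, eps: float = 1e-6):
    """Per-dimension mean and std over an (N, 54) array of state-action vectors."""
    return x.mean(axis=0), x.std(axis=0) + eps

def normalize(x: np.ndarray, mean: np.ndarray, std: np.ndarray) -> np.ndarray:
    return (x - mean) / std

# Stand-in data: human demonstrations span a wider workspace than humanoid ones.
rng = np.random.default_rng(0)
human_sa = rng.uniform(-1.0, 1.0, size=(27_000, 54))  # placeholder human vectors
robot_sa = rng.uniform(-0.4, 0.4, size=(1_500, 54))   # placeholder humanoid vectors

# "Same normalization": one set of statistics fit on the pooled data.
mean_shared, std_shared = fit_stats(np.concatenate([human_sa, robot_sa]))
robot_shared = normalize(robot_sa, mean_shared, std_shared)

# "Different normalization": separate statistics per embodiment.
mean_robot, std_robot = fit_stats(robot_sa)
robot_separate = normalize(robot_sa, mean_robot, std_robot)

# Under shared statistics the humanoid data occupies a narrower slice of the
# normalized space (std < 1); per-embodiment statistics rescale it to unit spread.
print(robot_shared.std(axis=0).mean(), robot_separate.std(axis=0).mean())
```

Under the shared statistics, the humanoid samples concentrate in a sub-region of the normalized space, which is one way to read the workspace-subset hypothesis stated in Sec. B.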
MethodPaperWoodenRedGreenOvr. Succ.
I.D.H.D.O.O.D.O.O.D.
ACT19/2014/2012/2010/2055/80
HAT20/2016/2018/2018/2072/80
+ +Table 7: Background Generalization: In the cup passing task, we evaluate the passing performance by recording the number of failures or retries needed to complete 20 cup-passing trials. + +# D In-Depth Comparison between Humanoid A and Humanoid B Configurations + +This section presents a detailed comparison of the two humanoid platforms, referred to as Humanoid A and Humanoid B, with a focus on joint structure and implications for manipulation capabilities. We restrict our analysis to the arm configurations, as other parts of the body were not extensively explored in this work. + +While morphologically similar, these two humanoids have drastically different arm configurations that create hurdles in direct policy transfer. Besides differences in motor specifications such as torque and encoder type (Humanoid B has absolute motor position encoders), they also have different mechanical limits. The range of motion (ROM) for the first four proximal joints (shoulder_pitch, shoulder_roll, shoulder_yaw, and elbow) differs across the two platforms. Humanoid B exhibits a consistently wider ROM, which allows a wider set of reachable configurations and increases the manipulability of the arm in constrained environments. Table 8 summarizes the ROM values for these shared joints. + +A more significant architectural divergence is observed at the wrist. Humanoid A includes a single distal joint, wrist_roll, providing limited wrist articulation. This restricts end-effector dexterity and constrains in-hand manipulation strategies to a single rotational degree of freedom. In contrast, Humanoid B is equipped with a complete wrist mechanism composed of three independently actuated joints: wrist_pitch, wrist_roll, and wrist_yaw. These additional degrees of freedom allow for full orientation control of the end-effector, enabling tasks that require precise alignment, rotation, and fine adjustment of object poses. + +
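As an illustration of how these ROM differences surface when deploying a shared policy across the two arms, the short hypothetical sketch below clamps retargeted joint targets to each platform's limits. The numeric ranges for shoulder_pitch and elbow are taken from Table 8, while the `clamp_to_rom` helper and platform keys are illustrative assumptions rather than the paper's deployment code.

```python
import numpy as np

# Per-joint position limits in degrees for the two arms; the shoulder_pitch and
# elbow ranges are copied from Table 8, everything else here is illustrative.
ROM_DEG = {
    "humanoid_a": {"shoulder_pitch": (-164.0, 164.0), "elbow": (-71.0, 150.0)},
    "humanoid_b": {"shoulder_pitch": (-180.0, 90.0), "elbow": (-54.0, 182.0)},
}

def clamp_to_rom(joint_targets: dict, platform: str) -> dict:
    """Clamp retargeted joint commands (degrees) to the platform's range of motion."""
    limits = ROM_DEG[platform]
    return {name: float(np.clip(angle, *limits[name]))
            for name, angle in joint_targets.items() if name in limits}

# The same retargeted command saturates on one platform but not the other.
cmd = {"shoulder_pitch": 60.0, "elbow": 170.0}
print(clamp_to_rom(cmd, "humanoid_a"))  # elbow clipped to Humanoid A's +150 limit
print(clamp_to_rom(cmd, "humanoid_b"))  # unchanged within Humanoid B's wider ROM
```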
JointHumanoid AHumanoid B
shoulder_pitch-164° to +164°-180° to +90°
shoulder-roll-19° to +178°-21° to +194°
shoulder_yaw-74° to +255°-152° to +172°
elbow-71° to 150°-54° to 182°
wrist-roll-175° to 175°-172° to 157°
+ +Table 8: Joint Range of Motion Comparison between Humanoid A and B (in degrees) \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13441/images.zip b/data/2025/2503_13xxx/2503.13441/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4ba19a4a05ac9cade81fedea53cb8b5ce70b2493 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13441/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:70dbb6fab86974309e2c3020446a0e667a781101e2b1c446c460cdd9db9536a9 +size 569788 diff --git a/data/2025/2503_13xxx/2503.13441/layout.json b/data/2025/2503_13xxx/2503.13441/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..9eadf2b99db31fbf6b21112d67a6b4b7ad646810 --- /dev/null +++ b/data/2025/2503_13xxx/2503.13441/layout.json @@ -0,0 +1,9870 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 177, + 81, + 434, + 101 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 177, + 81, + 434, + 101 + ], + "spans": [ + { + "bbox": [ + 177, + 81, + 434, + 101 + ], + "type": "text", + "content": "Humanoid Policy " + }, + { + "bbox": [ + 177, + 81, + 434, + 101 + ], + "type": "inline_equation", + "content": "\\sim" + }, + { + "bbox": [ + 177, + 81, + 434, + 101 + ], + "type": "text", + "content": " Human Policy" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 118, + 118, + 491, + 132 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 118, + 491, + 132 + ], + "spans": [ + { + "bbox": [ + 118, + 118, + 491, + 132 + ], + "type": "text", + "content": "Ri-Zhao Qiu*,1 Shiqi Yang*,1 Xuxin Cheng*,1 Chaitanya Chawla*,2 Jialong Li" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 152, + 133, + 455, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 152, + 133, + 455, + 144 + ], + "spans": [ + { + "bbox": [ + 152, + 133, + 455, + 144 + ], + "type": "text", + "content": "Tairan He² Ge Yan⁴ David Yoon³ Ryan Hoque³ Lars Paulsen¹" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 153, + 144, + 452, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 153, + 144, + 452, + 156 + ], + "spans": [ + { + "bbox": [ + 153, + 144, + 452, + 156 + ], + "type": "text", + "content": "Ge Yang" + }, + { + "bbox": [ + 153, + 144, + 452, + 156 + ], + "type": "inline_equation", + "content": "^{5}" + }, + { + "bbox": [ + 153, + 144, + 452, + 156 + ], + "type": "text", + "content": " Jian Zhang" + }, + { + "bbox": [ + 153, + 144, + 452, + 156 + ], + "type": "inline_equation", + "content": "^{3}" + }, + { + "bbox": [ + 153, + 144, + 452, + 156 + ], + "type": "text", + "content": " Sha Yi" + }, + { + "bbox": [ + 153, + 144, + 452, + 156 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 153, + 144, + 452, + 156 + ], + "type": "text", + "content": " Guanya Shi" + }, + { + "bbox": [ + 153, + 144, + 452, + 156 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 153, + 144, + 452, + 156 + ], + "type": "text", + "content": " Xiaolong Wang" + }, + { + "bbox": [ + 153, + 144, + 452, + 156 + ], + "type": "inline_equation", + "content": "^{1}" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 167, + 156, + 445, + 167 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 167, + 156, + 445, + 167 + ], + "spans": [ + { + "bbox": [ + 167, + 156, + 445, + 167 + ], + "type": "text", + "content": "1 UC San Diego, 2 CMU, 3 Apple, 4 University of Washington, 5 MIT" + } + ] + } + ], + "index": 5 + }, + { + 
"bbox": [ + 217, + 168, + 392, + 178 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 217, + 168, + 392, + 178 + ], + "spans": [ + { + "bbox": [ + 217, + 168, + 392, + 178 + ], + "type": "text", + "content": "https://human-as-robot.github.io/" + } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 175, + 228, + 245, + 271 + ], + "blocks": [ + { + "bbox": [ + 154, + 204, + 209, + 213 + ], + "lines": [ + { + "bbox": [ + 154, + 204, + 209, + 213 + ], + "spans": [ + { + "bbox": [ + 154, + 204, + 209, + 213 + ], + "type": "text", + "content": "Egocentric Vision" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 127, + 238, + 171, + 253 + ], + "lines": [ + { + "bbox": [ + 127, + 238, + 171, + 253 + ], + "spans": [ + { + "bbox": [ + 127, + 238, + 171, + 253 + ], + "type": "text", + "content": "Small-scale Humanoid Data" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 129, + 262, + 162, + 270 + ], + "lines": [ + { + "bbox": [ + 129, + 262, + 162, + 270 + ], + "spans": [ + { + "bbox": [ + 129, + 262, + 162, + 270 + ], + "type": "text", + "content": "1.5k demos" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 175, + 228, + 245, + 271 + ], + "lines": [ + { + "bbox": [ + 175, + 228, + 245, + 271 + ], + "spans": [ + { + "bbox": [ + 175, + 228, + 245, + 271 + ], + "type": "image", + "image_path": "4405aa2b835aba6a277e6c9198aa9977469d97508f3fdf4c76aee82216576500.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 263, + 218, + 348, + 256 + ], + "blocks": [ + { + "bbox": [ + 263, + 205, + 349, + 214 + ], + "lines": [ + { + "bbox": [ + 263, + 205, + 349, + 214 + ], + "spans": [ + { + "bbox": [ + 263, + 205, + 349, + 214 + ], + "type": "text", + "content": "Unified State-Action Space" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 263, + 218, + 348, + 256 + ], + "lines": [ + { + "bbox": [ + 263, + 218, + 348, + 256 + ], + "spans": [ + { + "bbox": [ + 263, + 218, + 348, + 256 + ], + "type": "image", + "image_path": "aea774d82dbe5bd4fbe061109175f75ddddda90ba7ff4fdaa5626b9bd22ef7f5.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 279, + 279, + 332, + 289 + ], + "lines": [ + { + "bbox": [ + 279, + 279, + 332, + 289 + ], + "spans": [ + { + "bbox": [ + 279, + 279, + 332, + 289 + ], + "type": "text", + "content": "Fingers/Wrist" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_caption" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 165, + 293, + 246, + 346 + ], + "blocks": [ + { + "bbox": [ + 126, + 308, + 161, + 323 + ], + "lines": [ + { + "bbox": [ + 126, + 308, + 161, + 323 + ], + "spans": [ + { + "bbox": [ + 126, + 308, + 161, + 323 + ], + "type": "text", + "content": "Large-scale Human Data" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 129, + 334, + 160, + 341 + ], + "lines": [ + { + "bbox": [ + 129, + 334, + 160, + 341 + ], + "spans": [ + { + "bbox": [ + 129, + 334, + 160, + 341 + ], + "type": "text", + "content": "27k demos" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 165, + 293, + 246, + 346 + ], + "lines": [ + { + "bbox": [ + 165, + 293, + 246, + 346 + ], + "spans": [ + { + "bbox": [ + 165, + 293, + 246, + 346 + ], + "type": 
"image", + "image_path": "6de31e2c3f7ef7882133985700e8e04374897bf36bf16ee566165c8b694a3018.jpg" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_body" + } + ], + "index": 16 + }, + { + "type": "image", + "bbox": [ + 264, + 315, + 345, + 346 + ], + "blocks": [ + { + "bbox": [ + 264, + 315, + 345, + 346 + ], + "lines": [ + { + "bbox": [ + 264, + 315, + 345, + 346 + ], + "spans": [ + { + "bbox": [ + 264, + 315, + 345, + 346 + ], + "type": "image", + "image_path": "346158b88ba04c6e7ab081daf4c31661e0295710b7ae42f838fcf37309e1f6c9.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": "image_body" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 372, + 217, + 433, + 262 + ], + "blocks": [ + { + "bbox": [ + 406, + 205, + 453, + 213 + ], + "lines": [ + { + "bbox": [ + 406, + 205, + 453, + 213 + ], + "spans": [ + { + "bbox": [ + 406, + 205, + 453, + 213 + ], + "type": "text", + "content": "Robot Policies" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 372, + 217, + 433, + 262 + ], + "lines": [ + { + "bbox": [ + 372, + 217, + 433, + 262 + ], + "spans": [ + { + "bbox": [ + 372, + 217, + 433, + 262 + ], + "type": "image", + "image_path": "d758fb486b915cb7b8bba76f25c64677ff6383244dd1b2d11810084c616ff526.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + } + ], + "index": 19 + }, + { + "type": "image", + "bbox": [ + 431, + 217, + 488, + 262 + ], + "blocks": [ + { + "bbox": [ + 431, + 217, + 488, + 262 + ], + "lines": [ + { + "bbox": [ + 431, + 217, + 488, + 262 + ], + "spans": [ + { + "bbox": [ + 431, + 217, + 488, + 262 + ], + "type": "image", + "image_path": "155fe39651ff5c1dc0794715d35ba201201e8503baae6301d1a5408214e69852.jpg" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_body" + } + ], + "index": 20 + }, + { + "type": "image", + "bbox": [ + 372, + 262, + 431, + 308 + ], + "blocks": [ + { + "bbox": [ + 372, + 262, + 431, + 308 + ], + "lines": [ + { + "bbox": [ + 372, + 262, + 431, + 308 + ], + "spans": [ + { + "bbox": [ + 372, + 262, + 431, + 308 + ], + "type": "image", + "image_path": "9ba220379b07dcbb35a75d6613b9038f779a2745c67964f1ea4c2fcc4b21bac8.jpg" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_body" + } + ], + "index": 21 + }, + { + "type": "image", + "bbox": [ + 431, + 262, + 487, + 308 + ], + "blocks": [ + { + "bbox": [ + 431, + 262, + 487, + 308 + ], + "lines": [ + { + "bbox": [ + 431, + 262, + 487, + 308 + ], + "spans": [ + { + "bbox": [ + 431, + 262, + 487, + 308 + ], + "type": "image", + "image_path": "d09737e448a71aaf9de7992f65ae4c12d439120d577b56996d26b2f49dd055e1.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + } + ], + "index": 22 + }, + { + "type": "image", + "bbox": [ + 372, + 308, + 431, + 354 + ], + "blocks": [ + { + "bbox": [ + 372, + 308, + 431, + 354 + ], + "lines": [ + { + "bbox": [ + 372, + 308, + 431, + 354 + ], + "spans": [ + { + "bbox": [ + 372, + 308, + 431, + 354 + ], + "type": "image", + "image_path": "e60053de8c621fdd2935ccfd57a062430af67270a90436afec5f8a43a6dfa737.jpg" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 360, + 504, + 426 + ], + "lines": [ + { + "bbox": [ + 104, + 360, + 504, + 426 + ], + "spans": [ + { + "bbox": [ + 104, + 360, + 504, + 426 + ], + "type": "text", + "content": "Figure 1: This paper advocates high-quality human data as a data source for cross-embodiment learning - task-oriented egocentric human data. 
We collect a large-scale dataset, Physical Human-Humanoid Data " + }, + { + "bbox": [ + 104, + 360, + 504, + 426 + ], + "type": "inline_equation", + "content": "(\\mathbf{PH}^2\\mathbf{D})" + }, + { + "bbox": [ + 104, + 360, + 504, + 426 + ], + "type": "text", + "content": ", with hand-finger 3D poses from consumer-grade VR devices on well-defined manipulation tasks directly aligned with robots. Without relying on modular perception, we train a Human Action Transformer (HAT) manipulation policy by directly modeling humans as a different humanoid embodiment in an end-to-end manner." + } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_caption" + } + ], + "index": 23 + }, + { + "type": "image", + "bbox": [ + 431, + 308, + 488, + 354 + ], + "blocks": [ + { + "bbox": [ + 431, + 308, + 488, + 354 + ], + "lines": [ + { + "bbox": [ + 431, + 308, + 488, + 354 + ], + "spans": [ + { + "bbox": [ + 431, + 308, + 488, + 354 + ], + "type": "image", + "image_path": "932b5667c208befbac683cbcd8b77d681fb74f6dc2da9b82aa158bd701300670.jpg" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_body" + } + ], + "index": 24 + }, + { + "bbox": [ + 140, + 437, + 469, + 617 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 437, + 469, + 617 + ], + "spans": [ + { + "bbox": [ + 140, + 437, + 469, + 617 + ], + "type": "text", + "content": "Abstract: Training manipulation policies for humanoid robots with diverse data enhances their robustness and generalization across tasks and platforms. However, learning solely from robot demonstrations is labor-intensive, requiring expensive tele-operated data collection, which is difficult to scale. This paper investigates a more scalable data source, egocentric human demonstrations, to serve as cross-embediment training data for robot learning. We mitigate the embodiment gap between humanoids and humans from both the data and modeling perspectives. We collect an egocentric task-oriented dataset " + }, + { + "bbox": [ + 140, + 437, + 469, + 617 + ], + "type": "inline_equation", + "content": "(\\mathbf{PH}^2\\mathbf{D})" + }, + { + "bbox": [ + 140, + 437, + 469, + 617 + ], + "type": "text", + "content": " that is directly aligned with humanoid manipulation demonstrations. We then train a human-humanoid behavior policy, which we term Human Action Transformer (HAT). The state-action space of HAT is unified for both humans and humanoid robots and can be differentiably retargeted to robot actions. Co-trained with smaller-scale robot data, HAT directly models humanoid robots and humans as different embodiments without additional supervision. We show that human data improve both generalization and robustness of HAT with significantly better data collection efficiency." 
+ } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 141, + 627, + 403, + 639 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 141, + 627, + 403, + 639 + ], + "spans": [ + { + "bbox": [ + 141, + 627, + 403, + 639 + ], + "type": "text", + "content": "Keywords: Robot Manipulation, Cross-Embodiment, Humanoid" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 105, + 654, + 192, + 666 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 654, + 192, + 666 + ], + "spans": [ + { + "bbox": [ + 105, + 654, + 192, + 666 + ], + "type": "text", + "content": "1 Introduction" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 104, + 680, + 504, + 716 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 680, + 504, + 716 + ], + "spans": [ + { + "bbox": [ + 104, + 680, + 504, + 716 + ], + "type": "text", + "content": "Learning from real robot demonstrations has led to great progress in robotic manipulation recently [1, 2, 3, 4]. One key advancement to enable such progress was hardware / software codesigns to scale up data collection using teleoperation [5, 6, 7, 8, 9, 10] and directly controlling" + } + ] + } + ], + "index": 29 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 17, + 224, + 37, + 563 + ], + "type": "aside_text", + "angle": 270, + "lines": [ + { + "bbox": [ + 17, + 224, + 37, + 563 + ], + "spans": [ + { + "bbox": [ + 17, + 224, + 37, + 563 + ], + "type": "text", + "content": "arXiv:2503.13441v3 [cs.RO] 5 Oct 2025" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 105, + 733, + 337, + 744 + ], + "type": "footer", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 733, + 337, + 744 + ], + "spans": [ + { + "bbox": [ + 105, + 733, + 337, + 744 + ], + "type": "text", + "content": "9th Conference on Robot Learning (CoRL 2025), Seoul, Korea." + } + ] + } + ], + "index": 30 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 72, + 504, + 122 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 72, + 504, + 122 + ], + "spans": [ + { + "bbox": [ + 104, + 72, + 504, + 122 + ], + "type": "text", + "content": "the robot end effector [11, 12, 5, 6, 13, 7]. Instead of gathering data on a single robot, collective efforts have been made to merge diverse robot data and train foundational policies across embodiments [11, 14, 2, 1, 3, 4], which have shown to improve cross-embodiment and cross-task generalizability." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 125, + 223, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 125, + 223, + 281 + ], + "spans": [ + { + "bbox": [ + 104, + 125, + 223, + 281 + ], + "type": "text", + "content": "However, collecting structured real-robot data is expensive and time-consuming. We are still far away from building a robust and generalizable model as what has been achieved in Computer Vision [17] and NLP [18]. If we examine humanoid robot teleoperation more closely, it involves robots mimicking human actions" + } + ] + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 231, + 134, + 503, + 209 + ], + "blocks": [ + { + "bbox": [ + 231, + 134, + 503, + 209 + ], + "lines": [ + { + "bbox": [ + 231, + 134, + 503, + 209 + ], + "spans": [ + { + "bbox": [ + 231, + 134, + 503, + 209 + ], + "type": "table", + "html": "
DatasetHumanRobot
# Frames# Demos# Frames# Demos
DexCap [15]~378k787NANA
EgoMimic [16]~432k†2,1501.29M†1,000
PH2D (Ours)~3.02M26,824~668k1,552
", + "image_path": "491be4059e6073cde8add8b32e34a6cc496333027f66d01b6d5e1e6df6ff1814.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 229, + 210, + 504, + 277 + ], + "lines": [ + { + "bbox": [ + 229, + 210, + 504, + 277 + ], + "spans": [ + { + "bbox": [ + 229, + 210, + 504, + 277 + ], + "type": "text", + "content": "Table 1: Comparisons of task-oriented egocentric human datasets. Besides having the most demonstrations, " + }, + { + "bbox": [ + 229, + 210, + 504, + 277 + ], + "type": "inline_equation", + "content": "\\mathrm{PH}^2\\mathrm{D}" + }, + { + "bbox": [ + 229, + 210, + 504, + 277 + ], + "type": "text", + "content": " is collected on various manipulation tasks, diverse objects and scenes, with accurate 3D hand-finger poses and language annotations. †: estimated based on reported data collection time with " + }, + { + "bbox": [ + 229, + 210, + 504, + 277 + ], + "type": "inline_equation", + "content": "30\\mathrm{Hz}" + }, + { + "bbox": [ + 229, + 210, + 504, + 277 + ], + "type": "text", + "content": "; whereas DexCap [15] and " + }, + { + "bbox": [ + 229, + 210, + 504, + 277 + ], + "type": "inline_equation", + "content": "\\mathrm{PH}^2\\mathrm{D}" + }, + { + "bbox": [ + 229, + 210, + 504, + 277 + ], + "type": "text", + "content": " report processed frames for training." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 104, + 281, + 506, + 342 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 281, + 506, + 342 + ], + "spans": [ + { + "bbox": [ + 104, + 281, + 506, + 342 + ], + "type": "text", + "content": "using geometric transforms or retargeting to control robot joints and end-effectors. From this perspective, we propose to model robots in a human-centric representation, and the robot action is just a transformation away from the human action. If we can accurately capture the end-effector and head poses of humans, egocentric human demonstrations will be a more scalable source of training data, as we can collect them efficiently, in any place, and without a robot." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 346, + 506, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 346, + 506, + 491 + ], + "spans": [ + { + "bbox": [ + 104, + 346, + 506, + 491 + ], + "type": "text", + "content": "In this paper, we perform cross-human and humanoid embodiment training for robotic manipulation. Our key insight is to model bimanual humanoid behaviors by directly imitating human behaviors without using learning surrogates such as affordances [19, 20]. To realize this, we first collect an egocentric task-oriented dataset of Physical Humanoid-Human Data, dubbed " + }, + { + "bbox": [ + 104, + 346, + 506, + 491 + ], + "type": "inline_equation", + "content": "\\mathrm{PH^2D}" + }, + { + "bbox": [ + 104, + 346, + 506, + 491 + ], + "type": "text", + "content": ". We adapt consumer-grade VR devices to collect egocentric videos with automatic but accurate hand pose and end effector (i.e., hand) annotations. Compared to existing human daily behavior datasets [21, 22], " + }, + { + "bbox": [ + 104, + 346, + 506, + 491 + ], + "type": "inline_equation", + "content": "\\mathrm{PH^2D}" + }, + { + "bbox": [ + 104, + 346, + 506, + 491 + ], + "type": "text", + "content": " is task-oriented so that it can be directly used for co-training. 
The same VR hardwares are then used to perform teleoperation to collect smaller-scale humanoid data for better alignment. We then train a Human-humanoid Action Transformer (HAT), which predicts future hand-finger trajectories in a unified human-centric state-action representation space. To obtain robot actions, we simply apply inverse kinematics and hand retargeting to differentiably convert human actions to robot actions for deployment." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 495, + 504, + 556 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 495, + 504, + 556 + ], + "spans": [ + { + "bbox": [ + 104, + 495, + 504, + 556 + ], + "type": "text", + "content": "We conduct real-robot evaluations on different manipulation tasks with extensive ablation studies to investigate how to best align human and humanoid demonstrations. In particular, we found that co-training with diverse human data improves robustness against spatial variance and background perturbation, generalizing in settings unseen in robot data but seen in human data. We believe that these findings highlight the potential of using human data for large-scale cross-embodiment learning." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 561, + 247, + 572 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 561, + 247, + 572 + ], + "spans": [ + { + "bbox": [ + 105, + 561, + 247, + 572 + ], + "type": "text", + "content": "In summary, our contributions are:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 580, + 504, + 658 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 132, + 580, + 504, + 604 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 580, + 504, + 604 + ], + "spans": [ + { + "bbox": [ + 132, + 580, + 504, + 604 + ], + "type": "text", + "content": "- A dataset, " + }, + { + "bbox": [ + 132, + 580, + 504, + 604 + ], + "type": "inline_equation", + "content": "\\mathrm{PH}^2\\mathrm{D}" + }, + { + "bbox": [ + 132, + 580, + 504, + 604 + ], + "type": "text", + "content": ", which is a large egocentric, task-oriented human-humanoid dataset with accurate hand and wrist poses for modeling human behavior (see Tab. 1)." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 608, + 504, + 631 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 608, + 504, + 631 + ], + "spans": [ + { + "bbox": [ + 132, + 608, + 504, + 631 + ], + "type": "text", + "content": "- A cross human-humanoid manipulation policy, HAT, that introduces a unified state-action space and other alignment techniques for humanoid manipulation." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 635, + 504, + 658 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 635, + 504, + 658 + ], + "spans": [ + { + "bbox": [ + 132, + 635, + 504, + 658 + ], + "type": "text", + "content": "- Improved policy robustness and generalization validated by extensive experiments and ablation studies to show the benefits of co-training with human data." 
+ } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 105, + 673, + 197, + 686 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 673, + 197, + 686 + ], + "spans": [ + { + "bbox": [ + 105, + 673, + 197, + 686 + ], + "type": "text", + "content": "2 Related Work" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 698, + 504, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 698, + 504, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 698, + 504, + 723 + ], + "type": "text", + "content": "Imitation Learning for Robot Manipulation. Recently, learning robot policy with data gathered directly from the multiple and target robot embodiment has shown impressive robustness and dex-" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 72, + 504, + 120 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 72, + 504, + 120 + ], + "spans": [ + { + "bbox": [ + 104, + 72, + 504, + 120 + ], + "type": "text", + "content": "terity [23, 2, 24, 1, 25, 26, 9, 27, 28]. The scale of data for imitation learning has grown substantially with recent advancements in data collection [29, 9, 7, 8], where human operators can efficiently collect large amounts of high-quality, task-oriented data. Despite these advances, achieving open-world generalization still remains a significant challenge due to lack of internet-scale training data." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 126, + 504, + 162 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 126, + 504, + 162 + ], + "spans": [ + { + "bbox": [ + 104, + 126, + 504, + 162 + ], + "type": "text", + "content": "Learning from Human Videos. Learning policies from human videos is a long-standing topic in both computer vision and robotics due to the vast existence of human data. Existing works can be approximately divided into two categories: aligning observations or actions." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 167, + 506, + 286 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 167, + 506, + 286 + ], + "spans": [ + { + "bbox": [ + 104, + 167, + 506, + 286 + ], + "type": "text", + "content": "Learn from Human - Aligning Observations. While teleoperating the actual robot platform allows learning policy with great dexterity, there is still a long way to go to achieve higher levels of generalization across diverse tasks, environments, and platforms. Unlike fields such as computer vision [17] and natural language processing [18] benefiting from internet-scale data, robot data collection in the real world is far more constrained. Various approaches have attempted to use internet-scale human videos to train robot policies [30, 31, 32, 33, 34, 35]. Due to various discrepancies (e.g., supervision and viewpoints) between egocentric robot views and internet videos, most existing work [19, 20] use modular approaches with intermediate representations as surrogates for training. 
The most representative ones are affordances [19, 20] for object interaction, object keypoints predictions [36, 37, 38, 39, 40], or other types of object representations [41, 42, 43]." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 293, + 506, + 435 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 293, + 506, + 435 + ], + "spans": [ + { + "bbox": [ + 104, + 293, + 506, + 435 + ], + "type": "text", + "content": "Learn from Human - Aligning Actions. Beyond observation alignment, transferring human demonstrations to robotic platforms introduces additional challenges due to differences in embodiment, actuation, and control dynamics. Specific alignment of human and robot actions is required to overcome these disparities. Approaches have employed masking in egocentric views [16], aligning motion trajectories or flow [44, 45], object-centric actions [46, 47], or hand tracking with specialized hardware [15]. Most closely related to our work, HumanPlus [48] designs a remapping method from 3D human pose estimation to tele-operate humanoid robots. Compared to HumanPlus, the insight of our method is to waive the requirement for robot hardware in collecting human data and collect diverse human data directly for co-training. In contrast to HumanPlus, we intentionally avoid performing retargeting on human demonstrations and designed the policy to directly use human hand poses as states/actions. On the other hand, the 'human shadowing' retargeting in HumanPlus is a teleoperation method that still requires robots, leading to lower collection efficiency than ours." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 441, + 506, + 536 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 441, + 506, + 536 + ], + "spans": [ + { + "bbox": [ + 104, + 441, + 506, + 536 + ], + "type": "text", + "content": "Cross-Embodiment. Cross-embodiment pre-training has been shown to improve adaptability and generalization over different embodiments [49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61]. When utilizing human videos, introducing intermediate representations can be prone to composite errors. Recent works investigate end-to-end approaches [2, 24, 1, 3] using cross-embodied robot data to reduce such compounding perceptive errors. Noticeably, these works have found that such end-to-end learning leads to desired behaviors such as retrying [3]. Some other work [62, 38] enforces viewpoint constraints between training human demonstrations and test-time robot deployment to allow learning on human data but it trades off the scalability of the data collection process." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 542, + 506, + 661 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 542, + 506, + 661 + ], + "spans": [ + { + "bbox": [ + 104, + 542, + 506, + 661 + ], + "type": "text", + "content": "Concurrent Work. Some concurrent work [15, 16, 63] also attempts to use egocentric human demonstrations for end-to-end cross-embedment policy learning. DexCap [15] uses gloves to track 3D hand poses with a chest-mounted RGBD camera to capture egocentric human videos. However, DexCap relies on 3D inputs, whereas some recent works [3, 1] have shown the scalability of 2D visual inputs. Most related to our work, EgoMimic [16] also proposes to collect data using wearable device [64] with 2D visual inputs. 
However, EgoMimic requires strict visual sensor alignments; whereas we show that scaling up diverse observations with different cameras makes the policy more robust. In addition, " + }, + { + "bbox": [ + 104, + 542, + 506, + 661 + ], + "type": "inline_equation", + "content": "\\mathrm{PH}^2\\mathrm{D}" + }, + { + "bbox": [ + 104, + 542, + 506, + 661 + ], + "type": "text", + "content": " is also greater in dataset scale and object diversity. We also show our policy can be deployed on real robots without strict requirements of visual sensors and heuristics, which paves the way for scalable data collection." + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 71, + 167, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 71, + 167, + 83 + ], + "spans": [ + { + "bbox": [ + 105, + 71, + 167, + 83 + ], + "type": "text", + "content": "3 Method" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 98, + 506, + 182 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 98, + 506, + 182 + ], + "spans": [ + { + "bbox": [ + 104, + 98, + 506, + 182 + ], + "type": "text", + "content": "To collect more data to train generalizable robot policies, recent research has explored cross-embodiment learning, enabling policies to generalize across diverse physical forms [3, 1, 4, 2, 65, 14]. This paper proposes egocentric human manipulation demonstrations as a scalable source of cross-embodiment training data. Sec. 3.1 describes our approach to adapt consumer-grade VR devices to scale up human data collection conveniently for a dataset of task-oriented egocentric human demonstrations. Sec. 3.2 describes various techniques to handle domain gaps to align human data and robot data for learning humanoid manipulation policy." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 194, + 365, + 206 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 194, + 365, + 206 + ], + "spans": [ + { + "bbox": [ + 105, + 194, + 365, + 206 + ], + "type": "text", + "content": "3.1 " + }, + { + "bbox": [ + 105, + 194, + 365, + 206 + ], + "type": "inline_equation", + "content": "\\mathbf{PH}^2\\mathbf{D}" + }, + { + "bbox": [ + 105, + 194, + 365, + 206 + ], + "type": "text", + "content": ": Task-oriented Physical Humanoid-Human Data" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 216, + 504, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 216, + 504, + 264 + ], + "spans": [ + { + "bbox": [ + 104, + 216, + 504, + 264 + ], + "type": "text", + "content": "Though there has been existing work that collects egocentric human videos [16, 22, 21, 15], they either (1) provide demonstrations mostly for non-task-oriented skills (e.g., dancing) and do not provide world-frame 3D head and hand poses estimations for imitation learning supervision [21, 22] or (2) require specialized hardware or robot setups [15, 16]." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 269, + 297, + 376 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 269, + 297, + 376 + ], + "spans": [ + { + "bbox": [ + 104, + 269, + 297, + 376 + ], + "type": "text", + "content": "To address these issues, we propose " + }, + { + "bbox": [ + 104, + 269, + 297, + 376 + ], + "type": "inline_equation", + "content": "\\mathrm{PH^2D}" + }, + { + "bbox": [ + 104, + 269, + 297, + 376 + ], + "type": "text", + "content": ". " + }, + { + "bbox": [ + 104, + 269, + 297, + 376 + ], + "type": "inline_equation", + "content": "\\mathrm{PH^2D}" + }, + { + "bbox": [ + 104, + 269, + 297, + 376 + ], + "type": "text", + "content": " address these two issues by (1) collecting task-oriented human demonstrations that are directly related to robot execution, (2) adapting well-engineered SDKs of VR devices (illustrated in Fig. 2) to provide supervision, and (3) diversifying tasks, camera sensors, and reducing whole-body movement to reduce domain gaps in both vision and behaviors." + } + ] + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 304, + 266, + 504, + 342 + ], + "blocks": [ + { + "bbox": [ + 304, + 266, + 504, + 342 + ], + "lines": [ + { + "bbox": [ + 304, + 266, + 504, + 342 + ], + "spans": [ + { + "bbox": [ + 304, + 266, + 504, + 342 + ], + "type": "image", + "image_path": "01ad4450db2b81f63c3f2521e32a09dfef5f050e7dd08149de218571ffa6fb2a.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 302, + 346, + 504, + 397 + ], + "lines": [ + { + "bbox": [ + 302, + 346, + 504, + 397 + ], + "spans": [ + { + "bbox": [ + 302, + 346, + 504, + 397 + ], + "type": "text", + "content": "Figure 2: Consumer-grade Devices for Data Collection. To avoid relying on specialized hardware for data collection to make our method scalable, we design our data collection process using consumer-grade VR devices." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 390, + 296, + 413 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 390, + 296, + 413 + ], + "spans": [ + { + "bbox": [ + 105, + 390, + 296, + 413 + ], + "type": "text", + "content": "Adapting Low-cost Commercial Devices With development in pose estimation [66] and" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 414, + 504, + 462 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 414, + 504, + 462 + ], + "spans": [ + { + "bbox": [ + 104, + 414, + 504, + 462 + ], + "type": "text", + "content": "system engineering, modern mobile devices are capable of providing accurate on-device world frame 3D head pose tracking and 3D hand keypoint tracking [9], which has proved to be stable enough to teleoperate robot in real-time [9, 13]. We design software and hardware to support convenient data collection across different devices. Different cameras provide better visual diversity." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 472, + 504, + 548 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 132, + 472, + 504, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 472, + 504, + 495 + ], + "spans": [ + { + "bbox": [ + 132, + 472, + 504, + 495 + ], + "type": "text", + "content": "- Apple Vision Pro + Built-in Camera. 
We developed a Vision OS App that uses the built-in camera for visual observation and uses the Apple ARKit for 3D head and hand poses." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 500, + 504, + 548 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 500, + 504, + 548 + ], + "spans": [ + { + "bbox": [ + 132, + 500, + 504, + 548 + ], + "type": "text", + "content": "- Meta Quest 3 / Apple Vision Pro + ZED Camera. We developed a web-based application based on OpenTelevision [9] to gather 3D head and hand poses. We also designed a 3D-printed holder to mount ZED Mini Stereo cameras on these devices. This configuration is both low-cost (" + }, + { + "bbox": [ + 132, + 500, + 504, + 548 + ], + "type": "inline_equation", + "content": "<700" + }, + { + "bbox": [ + 132, + 500, + 504, + 548 + ], + "type": "text", + "content": ") and introduces more diversity with stereo cameras." + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 104, + 561, + 504, + 621 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 561, + 504, + 621 + ], + "spans": [ + { + "bbox": [ + 104, + 561, + 504, + 621 + ], + "type": "text", + "content": "Data Collection Pipeline We collect task-oriented egocentric human demonstrations by asking human operators to perform tasks overlapping with robot execution (e.g., grasping and pouring) when wearing the VR devices. For every demonstration, we provide language instructions (e.g., grasp a can of coke zero with right hand), and synchronize proprioception inputs and visual inputs by closest timestamps." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 627, + 506, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 627, + 506, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 627, + 506, + 723 + ], + "type": "text", + "content": "Action Domain Gap. Human actions and tele-operated robot actions exhibit two distinct characteristics: (1) human manipulation usually involves involuntary whole-body movement, and (2) humans are more dexterous than robots and have significantly faster task completion time than robots. We mitigate the first gap by requesting the human data collectors to sit in an upright position. For the second speed gap, we interpolate translation and rotations of human data during training (effectively 'slowing down' actions). The slow-down factors " + }, + { + "bbox": [ + 104, + 627, + 506, + 723 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{slow}}" + }, + { + "bbox": [ + 104, + 627, + 506, + 723 + ], + "type": "text", + "content": " are obtained by normalizing the average task completion time of humans and humanoids, which is empirically distributed around 4. For consistency, we use " + }, + { + "bbox": [ + 104, + 627, + 506, + 723 + ], + "type": "inline_equation", + "content": "\\alpha_{\\mathrm{slow}} = 4" + }, + { + "bbox": [ + 104, + 627, + 506, + 723 + ], + "type": "text", + "content": " in all tasks." 
+ } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 110, + 70, + 504, + 250 + ], + "blocks": [ + { + "bbox": [ + 110, + 70, + 504, + 250 + ], + "lines": [ + { + "bbox": [ + 110, + 70, + 504, + 250 + ], + "spans": [ + { + "bbox": [ + 110, + 70, + 504, + 250 + ], + "type": "image", + "image_path": "59c67885ce45ec56b97cbfea1a92fcbe69c8e4d04c354db3d05a4da260c37a2a.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 258, + 506, + 315 + ], + "lines": [ + { + "bbox": [ + 104, + 258, + 506, + 315 + ], + "spans": [ + { + "bbox": [ + 104, + 258, + 506, + 315 + ], + "type": "text", + "content": "Figure 3: Overview of HAT. Human Action Transformer (HAT) learns a robot policy by modeling humans. During training, we sample a state-action pair from either human data or robot data. The images are encoded by a frozen DinoV2 encoder [67]. The HAT model makes predictions in a human-centric action space using wrist 6 DoF poses and finger tips, which is retargeted to robot poses during real-robot deployment." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 105, + 331, + 278, + 342 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 331, + 278, + 342 + ], + "spans": [ + { + "bbox": [ + 105, + 331, + 278, + 342 + ], + "type": "text", + "content": "3.2 HAT: Human Action Transformer" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 354, + 504, + 391 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 354, + 504, + 391 + ], + "spans": [ + { + "bbox": [ + 104, + 354, + 504, + 391 + ], + "type": "text", + "content": "HAT learns cross-embodied robot policy by modeling humans. We demonstrate that treating bi-manual humanoid robots and humans as different robot embodiments via retargeting improves both generalizability and robustness of HAT." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 394, + 504, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 394, + 504, + 456 + ], + "spans": [ + { + "bbox": [ + 104, + 394, + 504, + 456 + ], + "type": "text", + "content": "More concretely, let " + }, + { + "bbox": [ + 104, + 394, + 504, + 456 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_{\\text{robot}} = \\{(\\mathbf{S}_i, \\mathbf{A}_i)\\}_{i=1}^N" + }, + { + "bbox": [ + 104, + 394, + 504, + 456 + ], + "type": "text", + "content": " be the set of data collected from real bimanual humanoid robots using teleoperation [9], where " + }, + { + "bbox": [ + 104, + 394, + 504, + 456 + ], + "type": "inline_equation", + "content": "\\mathbf{S}_i" + }, + { + "bbox": [ + 104, + 394, + 504, + 456 + ], + "type": "text", + "content": " is the states including proprioceptive and visual observations of " + }, + { + "bbox": [ + 104, + 394, + 504, + 456 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 394, + 504, + 456 + ], + "type": "text", + "content": "-th demonstration and " + }, + { + "bbox": [ + 104, + 394, + 504, + 456 + ], + "type": "inline_equation", + "content": "\\mathbf{A}_i" + }, + { + "bbox": [ + 104, + 394, + 504, + 456 + ], + "type": "text", + "content": " be the actions. The collected " + }, + { + "bbox": [ + 104, + 394, + 504, + 456 + ], + "type": "inline_equation", + "content": "\\mathrm{PH}^2\\mathrm{D}" + }, + { + "bbox": [ + 104, + 394, + 504, + 456 + ], + "type": "text", + "content": " dataset, " + }, + { + "bbox": [ + 104, + 394, + 504, + 456 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_{\\text{human}} = \\{(\\tilde{\\mathbf{S}}_i, \\tilde{\\mathbf{A}}_i)\\}_{i=1}^M" + }, + { + "bbox": [ + 104, + 394, + 504, + 456 + ], + "type": "text", + "content": " is used to augment the training process. Note that it is reasonable to assume " + }, + { + "bbox": [ + 104, + 394, + 504, + 456 + ], + "type": "inline_equation", + "content": "M \\gg N" + }, + { + "bbox": [ + 104, + 394, + 504, + 456 + ], + "type": "text", + "content": " due to the significantly better human data collection efficiency." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 460, + 506, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 460, + 506, + 521 + ], + "spans": [ + { + "bbox": [ + 104, + 460, + 506, + 521 + ], + "type": "text", + "content": "The goal is to design a policy " + }, + { + "bbox": [ + 104, + 460, + 506, + 521 + ], + "type": "inline_equation", + "content": "\\pi : \\mathbf{S} \\to \\mathbf{A}" + }, + { + "bbox": [ + 104, + 460, + 506, + 521 + ], + "type": "text", + "content": " that predicts future robot actions " + }, + { + "bbox": [ + 104, + 460, + 506, + 521 + ], + "type": "inline_equation", + "content": "\\mathbf{a}_t" + }, + { + "bbox": [ + 104, + 460, + 506, + 521 + ], + "type": "text", + "content": " given current robot observation " + }, + { + "bbox": [ + 104, + 460, + 506, + 521 + ], + "type": "inline_equation", + "content": "\\mathbf{s}_t" + }, + { + "bbox": [ + 104, + 460, + 506, + 521 + ], + "type": "text", + "content": " at time " + }, + { + "bbox": [ + 104, + 460, + 506, + 521 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 104, + 460, + 506, + 521 + ], + "type": "text", + "content": ", where the future actions " + }, + { + "bbox": [ + 104, + 460, + 506, + 521 + ], + "type": "inline_equation", + "content": "\\mathbf{a}_{t+1}" + }, + { + "bbox": [ + 104, + 460, + 506, + 521 + ], + "type": "text", + "content": " is usually a chunk of actions for multi-step execution (with slight abuse of notation). We model " + }, + { + "bbox": [ + 104, + 460, + 506, + 521 + ], + "type": "inline_equation", + "content": "\\pi" + }, + { + "bbox": [ + 104, + 460, + 506, + 521 + ], + "type": "text", + "content": " using HAT, which is a transformer-based architecture predicting action chunks [5]. The overview of the model is illustrated in Fig. 3. We discuss key design choices of HAT with experimental ablations." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 525, + 506, + 634 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 525, + 506, + 634 + ], + "spans": [ + { + "bbox": [ + 104, + 525, + 506, + 634 + ], + "type": "text", + "content": "Unified State-Action Space. Both bimanual robots and humans have two end effectors. In our case, our robots are also equipped with an actuated 2DoF neck that can rotate, which resembles the autonomous head movement when humans perform manipulation. Therefore, we design a unified state-action space (i.e., " + }, + { + "bbox": [ + 104, + 525, + 506, + 634 + ], + "type": "inline_equation", + "content": "(\\mathbf{S},\\mathbf{A})\\equiv (\\tilde{\\mathbf{S}},\\tilde{\\mathbf{A}}))" + }, + { + "bbox": [ + 104, + 525, + 506, + 634 + ], + "type": "text", + "content": " for both bimanual robots and humans. More concretely, the proprioceptive observation is a 54-dimensional vector (6D rotations [68] of the head, left wrist, and right wrist; " + }, + { + "bbox": [ + 104, + 525, + 506, + 634 + ], + "type": "inline_equation", + "content": "x / y / z" + }, + { + "bbox": [ + 104, + 525, + 506, + 634 + ], + "type": "text", + "content": " of left and right wrists and 10 finger tips). In this work, since we deploy our policy on robots with 5-fingered dexterous hands (shown in Fig. 4), there exists a bijective mapping between the finger tips of robot hands and human hands. Note that injective mapping is also possible (e.g., mapping distance between the thumb finger and other fingers to parallel gripper distance)." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 639, + 504, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 639, + 504, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 639, + 504, + 723 + ], + "type": "text", + "content": "Visual Domain Gap. Two types of domain gaps exist for co-training on human/humanoid data: camera sensors and end effector appearance. Since our human data collection process includes cameras different from robot deployment, this leads to camera domain gaps such as tones. Also, the appearances of human and humanoid end effectors are different. However, with sufficiently large and diverse data, we find it not a strict necessity to apply heuristic processing such as visual artifacts [16] or generative methods [69] to train human-robot policies - basic image augmentations such as color jittering and Gaussian blurring are effective regularization." + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 108, + 70, + 504, + 139 + ], + "blocks": [ + { + "bbox": [ + 108, + 70, + 504, + 139 + ], + "lines": [ + { + "bbox": [ + 108, + 70, + 504, + 139 + ], + "spans": [ + { + "bbox": [ + 108, + 70, + 504, + 139 + ], + "type": "table", + "html": "
Meth.H. DataD. NormPassingHorizontal GraspVertical GraspPouringOvr. Succ.
I.D.O.O.D.I.D.O.O.D.I.D.O.O.D.I.D.O.O.D.I.D.O.O.D.
ACTXNA19/2036/608/107/307/2015/708/101/1042/6059/170
HATX17/2051/609/1011/3014/2030/705/105/1045/6097/170
HAT20/2052/608/1012/3013/2029/708/108/1049/60101/170
Type of GeneralizationBackgroundTextureObj. Placement
", + "image_path": "907faeb6fd6e2c7136323eb78a719b367cbed6eb8a5156d731ff41c131e3f716.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 144, + 504, + 200 + ], + "lines": [ + { + "bbox": [ + 104, + 144, + 504, + 200 + ], + "spans": [ + { + "bbox": [ + 104, + 144, + 504, + 200 + ], + "type": "text", + "content": "Table 2: Success rate of autonomous skill execution. Co-training with human data (H. Data) significantly improves the Out-Of-Distribution (O.O.D.) performance with nearly " + }, + { + "bbox": [ + 104, + 144, + 504, + 200 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 104, + 144, + 504, + 200 + ], + "type": "text", + "content": " relative improvement on all tasks on Humanoid A. We also ablate the design choice of using different normalizations (D. Norm) for different embodiments. We designate each task setting to investigate a single type of generalization. Detailed analysis of each type of generalization is presented in Sec. C." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 104, + 213, + 504, + 237 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 213, + 504, + 237 + ], + "spans": [ + { + "bbox": [ + 104, + 213, + 504, + 237 + ], + "type": "text", + "content": "Training. The final policy is denoted as " + }, + { + "bbox": [ + 104, + 213, + 504, + 237 + ], + "type": "inline_equation", + "content": "\\pi : f_{\\theta}(\\cdot) \\to \\mathbf{A}" + }, + { + "bbox": [ + 104, + 213, + 504, + 237 + ], + "type": "text", + "content": " for both human and robot policy, where " + }, + { + "bbox": [ + 104, + 213, + 504, + 237 + ], + "type": "inline_equation", + "content": "f_{\\theta}" + }, + { + "bbox": [ + 104, + 213, + 504, + 237 + ], + "type": "text", + "content": " is a transformer-based neural network parametrized by " + }, + { + "bbox": [ + 104, + 213, + 504, + 237 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 104, + 213, + 504, + 237 + ], + "type": "text", + "content": ". The final loss is given by," + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 212, + 242, + 504, + 255 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 212, + 242, + 504, + 255 + ], + "spans": [ + { + "bbox": [ + 212, + 242, + 504, + 255 + ], + "type": "interline_equation", + "content": "\\mathcal {L} = \\ell_ {1} \\left(\\pi \\left(s _ {i}\\right), a _ {i}\\right) + \\lambda \\cdot \\ell_ {1} \\left(\\pi \\left(s _ {i}\\right) _ {\\mathrm {E E F}}, a _ {i, \\mathrm {E E F}}\\right), \\tag {1}", + "image_path": "6a57da20fc4123f0e3bca11848cc3c99366a9c9bafe89ef092b9d1cb691c5275.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 258, + 506, + 295 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 258, + 506, + 295 + ], + "spans": [ + { + "bbox": [ + 104, + 258, + 506, + 295 + ], + "type": "text", + "content": "where EEF are the indices of the translation vectors of the left and right wrists, and " + }, + { + "bbox": [ + 104, + 258, + 506, + 295 + ], + "type": "inline_equation", + "content": "\\lambda = 2" + }, + { + "bbox": [ + 104, + 258, + 506, + 295 + ], + "type": "text", + "content": " is an (insensitive) hyperparameter used to balance loss to emphasize the importance of end effector positions over learning unnecessarily precise finger tip keypoints." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 309, + 192, + 323 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 309, + 192, + 323 + ], + "spans": [ + { + "bbox": [ + 105, + 309, + 192, + 323 + ], + "type": "text", + "content": "4 Experiments" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 334, + 297, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 334, + 297, + 491 + ], + "spans": [ + { + "bbox": [ + 104, + 334, + 297, + 491 + ], + "type": "text", + "content": "Hardware Platforms. We run our experiments on two humanoid robots (Humanoid A and Humanoid B shown in Fig. 4) equipped with 6-DOF Inspire dexterous hands. Humanoid A is a Unitree H1 robot and Humanoid B is a Unitree H1_2 robot with different arm configurations. Similar to humans, both robots (1) are equipped with actuated necks [9] to get make use of egocentric views and (2) do not have wrist cameras. Unless otherwise noted, most humanoid data collection is done with Humanoid A. We use Humanoid B mainly for testing cross-humanoid generalization." + } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 304, + 328, + 402, + 420 + ], + "blocks": [ + { + "bbox": [ + 304, + 328, + 402, + 420 + ], + "lines": [ + { + "bbox": [ + 304, + 328, + 402, + 420 + ], + "spans": [ + { + "bbox": [ + 304, + 328, + 402, + 420 + ], + "type": "image", + "image_path": "4c584c96607aaeb29e9b43e2c7ed41bb0e70f76fd5bedf0f003b16fa1b9660c9.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 317, + 422, + 388, + 433 + ], + "lines": [ + { + "bbox": [ + 317, + 422, + 388, + 433 + ], + "spans": [ + { + "bbox": [ + 317, + 422, + 388, + 433 + ], + "type": "text", + "content": "(a) Humanoid A" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 302, + 441, + 504, + 492 + ], + "lines": [ + { + "bbox": [ + 302, + 441, + 504, + 492 + ], + "spans": [ + { + "bbox": [ + 302, + 441, + 504, + 492 + ], + "type": "text", + "content": "Figure 4: Hardware Illustration. Most robot data attributes to Humanoid A, a Unitree H1 robot. Humanoid B, a Unitree H1-2 robot with different arm motor configurations, is used to evaluate few-shot cross-humanoid transfer. Detailed comparisons in Sec. D" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 407, + 328, + 504, + 419 + ], + "blocks": [ + { + "bbox": [ + 407, + 328, + 504, + 419 + ], + "lines": [ + { + "bbox": [ + 407, + 328, + 504, + 419 + ], + "spans": [ + { + "bbox": [ + 407, + 328, + 504, + 419 + ], + "type": "image", + "image_path": "c658da49d15f8271384f414e1d227a0a00368b23fa0f816d5b26eb80997e7154.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 420, + 421, + 490, + 433 + ], + "lines": [ + { + "bbox": [ + 420, + 421, + 490, + 433 + ], + "spans": [ + { + "bbox": [ + 420, + 421, + 490, + 433 + ], + "type": "text", + "content": "(b) Humanoid B" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 502, + 506, + 586 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 502, + 506, + 586 + ], + "spans": [ + { + "bbox": [ + 104, + 502, + 506, + 586 + ], + "type": "text", + "content": "Implementation Details. We implement policy architecture by adopting an transformer-based architecture predicting future action chunks [5]. 
We use a frozen DinoV2 ViT-S [67] as the visual backbone. We implement two variants: (1) ACT: baseline implementation using the Action Chunk Transformer [5], trained using only robot data. Robot states are represented as joint positions. (2) HAT: same architecture as ACT, but the state encoder operates in the unified state-action space. Unless otherwise stated, HAT is co-trained on robot and human data. A checkpoint is trained for each task with approximately 250-400 robot demonstrations." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 597, + 504, + 645 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 597, + 504, + 645 + ], + "spans": [ + { + "bbox": [ + 104, + 597, + 504, + 645 + ], + "type": "text", + "content": "Experimental Protocol. We collect robot and human demonstrations in different object sets. Since human demonstrations are easier to collect, the settings in human demonstrations are generally more diverse, which include background, object types, object positions, and the relative position of the human to the table." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 651, + 505, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 651, + 505, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 651, + 505, + 723 + ], + "type": "text", + "content": "We experimented with four different dexterous manipulation tasks and investigated in-distribution and out-of-distribution setups. The in-distribution (I.D.) setting tests the learned skills with backgrounds and object arrangements approximately similar to the training demonstrations presented in the real-robot data. In the Out-Of-Distribution (O.O.D.) setting, we test generalizability and robustness by introducing novel setups that were presented in human data but not in robot data. Fig. 7 visualizes different manipulation tasks and how we define out-of-distribution settings for each task." + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 742, + 309, + 750 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 108, + 76, + 302, + 231 + ], + "blocks": [ + { + "bbox": [ + 108, + 76, + 302, + 231 + ], + "lines": [ + { + "bbox": [ + 108, + 76, + 302, + 231 + ], + "spans": [ + { + "bbox": [ + 108, + 76, + 302, + 231 + ], + "type": "image", + "image_path": "9b89a15df58365bcccfb9db4b6d750eab29560b650833938df03595db378a427.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 236, + 302, + 266 + ], + "lines": [ + { + "bbox": [ + 104, + 236, + 302, + 266 + ], + "spans": [ + { + "bbox": [ + 104, + 236, + 302, + 266 + ], + "type": "text", + "content": "(a) Performance of Humanoid B co-trained with " + }, + { + "bbox": [ + 104, + 236, + 302, + 266 + ], + "type": "inline_equation", + "content": "\\mathrm{PH}^2\\mathrm{D}" + }, + { + "bbox": [ + 104, + 236, + 302, + 266 + ], + "type": "text", + "content": " on horizontal grasping. o1 is seen by Humanoid B. o2 and o3 seen in human data. o4 is unseen in all data." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 104, + 273, + 504, + 296 + ], + "lines": [ + { + "bbox": [ + 104, + 273, + 504, + 296 + ], + "spans": [ + { + "bbox": [ + 104, + 273, + 504, + 296 + ], + "type": "text", + "content": "Figure 5: Few-Shot Adaptation. Co-training consistently outperforms isolated training as Humanoid B demonstrations increase, achieving robust success rates even in low-data regimes." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 310, + 73, + 503, + 228 + ], + "blocks": [ + { + "bbox": [ + 310, + 73, + 503, + 228 + ], + "lines": [ + { + "bbox": [ + 310, + 73, + 503, + 228 + ], + "spans": [ + { + "bbox": [ + 310, + 73, + 503, + 228 + ], + "type": "image", + "image_path": "e4fd12fd6c055191040209ed8b5dd29a02bd388cfb49a7541bb3832b5932007c.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 307, + 236, + 504, + 266 + ], + "lines": [ + { + "bbox": [ + 307, + 236, + 504, + 266 + ], + "spans": [ + { + "bbox": [ + 307, + 236, + 504, + 266 + ], + "type": "text", + "content": "(b) Co-training consistently outperforms isolated training as Humanoid B demonstrations increase, achieving good success rates even in low-data regimes." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 302, + 202, + 313 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 302, + 202, + 313 + ], + "spans": [ + { + "bbox": [ + 105, + 302, + 202, + 313 + ], + "type": "text", + "content": "4.1 Main Evaluation" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 323, + 504, + 396 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 323, + 504, + 396 + ], + "spans": [ + { + "bbox": [ + 104, + 323, + 504, + 396 + ], + "type": "text", + "content": "Human data has minor effects on I.D. testing. From Tab. 2, we can see that I.D. performance with or without co-training with human data gives similar results. In the I.D. setting, we closely match the scene setups as training demonstrations, including both background, object types, and object placements. Thus, policies trained with only a small amount of Humanoid A data performed well in this setting. This finding is consistent with recent work [9, 7] that frozen visual foundation models [17, 67] improve robustness against certain external perturbations such as lighting." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 401, + 506, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 401, + 506, + 509 + ], + "spans": [ + { + "bbox": [ + 104, + 401, + 506, + 509 + ], + "type": "text", + "content": "Human data improves the O.O.D. settings with many generalizations. One common challenge in imitation learning is overfitting to only in-distribution task settings. Hence, it is crucial for a robot policy to generalize beyond the scene setups seen in a limited set of single-embodiment data. To demonstrate how co-training with human data reduces such overfitting, we introduce O.O.D. task settings to evaluate such generalization. From Tab. 2, we can see that co-training drastically improves O.O.D. 
settings, achieving nearly " + }, + { + "bbox": [ + 104, + 401, + 506, + 509 + ], + "type": "inline_equation", + "content": "100\\%" + }, + { + "bbox": [ + 104, + 401, + 506, + 509 + ], + "type": "text", + "content": " relative improvement in settings unseen by the robot data. In particular, we find that human data improves three types of generalization: background, object placement, and appearance. To isolate the effect of each variable, each task focuses on a specific type of generalization as listed in Tab. 2, with in-depth analyses in Sec. C." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 522, + 362, + 534 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 522, + 362, + 534 + ], + "spans": [ + { + "bbox": [ + 104, + 522, + 362, + 534 + ], + "type": "text", + "content": "4.2 Few-Shot Transfer across Heterogenous Embodiments" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 544, + 504, + 616 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 544, + 504, + 616 + ], + "spans": [ + { + "bbox": [ + 104, + 544, + 504, + 616 + ], + "type": "text", + "content": "We conducted few-shot generalization experiments on a distinct humanoid platform (Humanoid B), contrasting it with our primary platform, Humanoid A. Notably, Humanoid B's demonstration data were collected in an entirely separate environment, introducing both embodiment and environmental shifts. We highlight two key advantages of our approach: (1) the ability to unify heterogeneous human-centric data sources (humanoids and humans) into a generalizable policy framework, and (2) the capacity to rapidly adapt to new embodiments with drastically reduced data requirements." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 621, + 504, + 692 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 621, + 504, + 692 + ], + "spans": [ + { + "bbox": [ + 104, + 621, + 504, + 692 + ], + "type": "text", + "content": "Experiment 1: Cross-embodiment co-training efficacy Using only 20 demonstrations from Humanoid B, we trained 3 policies - respectively on data from (i) Humanoid B only, (ii) Humanoid B + Humanoid A (cross-embodiment), and (iii) Humanoid B + Humanoid A + Human (cross-embodiment and human priors). As shown in Fig. 5a, co-trained policies (ii) and (iii) substantially outperformed the Humanoid B-only baselines on all task settings, underscoring the method's ability to transfer latent task structure across embodiments." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 698, + 504, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 698, + 504, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 698, + 504, + 723 + ], + "type": "text", + "content": "Experiment 2: Scaling Demonstrations for Few-Shot Adaptation We further quantified the relationship between required for few-shot generalization. 
We hold Humanoid A and human datasets fixed" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 309, + 750 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 106, + 72, + 265, + 159 + ], + "blocks": [ + { + "bbox": [ + 106, + 72, + 265, + 159 + ], + "lines": [ + { + "bbox": [ + 106, + 72, + 265, + 159 + ], + "spans": [ + { + "bbox": [ + 106, + 72, + 265, + 159 + ], + "type": "image", + "image_path": "d1abd18a54dcf019aac825156a4dd6ef52c0918ea26fa6a585afc6d147e83292.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 105, + 159, + 267, + 224 + ], + "lines": [ + { + "bbox": [ + 105, + 159, + 267, + 224 + ], + "spans": [ + { + "bbox": [ + 105, + 159, + 267, + 224 + ], + "type": "text", + "content": "Figure 6: Human data has better sampling efficiency. Per-grid vertical grasping successes out of 10 trials with models trained with robot-only data and mixed data. Red boxes indicate where training data is collected." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 277, + 95, + 511, + 144 + ], + "blocks": [ + { + "bbox": [ + 277, + 95, + 511, + 144 + ], + "lines": [ + { + "bbox": [ + 277, + 95, + 511, + 144 + ], + "spans": [ + { + "bbox": [ + 277, + 95, + 511, + 144 + ], + "type": "table", + "html": "
TaskState SpaceAction SpeedSuccess
Vertical GraspingX1/10
X0/10
4/10
", + "image_path": "d81ce5430621f43196bfb4090fd92ce98453108b4e753cfa0147aac2867c12ea.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 275, + 145, + 512, + 200 + ], + "lines": [ + { + "bbox": [ + 275, + 145, + 512, + 200 + ], + "spans": [ + { + "bbox": [ + 275, + 145, + 512, + 200 + ], + "type": "text", + "content": "Table 3: Importance of unifying policy inputs and outputs. We report the number of successes of vertical grasping objects in the upper-left block as illustrated in Fig. 8. Baselines use joint positions as state input or do not interpolate human motions." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 104, + 238, + 504, + 275 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 238, + 504, + 275 + ], + "spans": [ + { + "bbox": [ + 104, + 238, + 504, + 275 + ], + "type": "text", + "content": "for the horizontal grasping task and ablate number of demonstrations required for Humanoid B in Fig. 5. Co-training (Humanoid " + }, + { + "bbox": [ + 104, + 238, + 504, + 275 + ], + "type": "inline_equation", + "content": "\\mathrm{B} + \\mathrm{A} + \\mathrm{Human}" + }, + { + "bbox": [ + 104, + 238, + 504, + 275 + ], + "type": "text", + "content": ") consistently outperformed isolated training on Humanoid B across all settings, especially in the few-data regime." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 285, + 195, + 298 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 285, + 195, + 298 + ], + "spans": [ + { + "bbox": [ + 105, + 285, + 195, + 298 + ], + "type": "text", + "content": "4.3 Ablation Study" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 307, + 504, + 354 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 307, + 504, + 354 + ], + "spans": [ + { + "bbox": [ + 104, + 307, + 504, + 354 + ], + "type": "text", + "content": "Sampling Efficiency of Human and Humanoid Data. Conceptually, collecting human data is less expensive, not just because it can be done faster, but also because it can be done in in-the-wild scenes; reduces setup cost before every data collection; and avoids the hardware cost to equip every operator with robots." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 360, + 505, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 360, + 505, + 456 + ], + "spans": [ + { + "bbox": [ + 104, + 360, + 505, + 456 + ], + "type": "text", + "content": "We perform additional experiments to show that even in the lab setting, human data can have better sampling efficiency in unit time. In particular, we provide a small-scale experiment on the vertical grasping task. Allocating 20 minutes for two settings, we collected (1) 60 Humanoid A demonstrations, (2) 30 Humanoid A demonstrations, and 120 human demonstrations. To avoid confating diversity and data size, the object placements in all demonstrations are evenly distributed at the bottom 6 cells. The results are given in Fig. 6. The policy trained with mixed robot and human data performs significantly better, which validates the sampling efficiency of human data over robot data. Each cell represents a " + }, + { + "bbox": [ + 104, + 360, + 505, + 456 + ], + "type": "inline_equation", + "content": "10\\mathrm{cm} \\times 10\\mathrm{cm}" + }, + { + "bbox": [ + 104, + 360, + 505, + 456 + ], + "type": "text", + "content": " region where the robot attempts to pick up a box." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 461, + 504, + 581 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 461, + 504, + 581 + ], + "spans": [ + { + "bbox": [ + 104, + 461, + 504, + 581 + ], + "type": "text", + "content": "State-Action Design. In Tab. 3, we ablate the design choices of the proprioception state space and the speed of output actions. In particular, using the same set of robot and human data, we implement two baselines: 1) a unified state-action space, but does not interpolate (i.e., slow down) the human actions; and 2) a baseline that interpolates human actions but uses separate state representation for humanoid (joint positions) and humans (EEF representation). The policies exhibit different failure patterns during the rollout of these two baselines. Without interpolating human actions, the speed of the predicted actions fluctuates between fast (resembling humans) and slow (resembling teleoperation), which leads to instability. Without a unified state space, the policy is given a 'short-cut' to distinguish between embodiments, which leads to on-par in-distribution performance and significantly worse OOD performance." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 586, + 504, + 611 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 586, + 504, + 611 + ], + "spans": [ + { + "bbox": [ + 104, + 586, + 504, + 611 + ], + "type": "text", + "content": "More Ablation Study. Due to space limit, please refer to the appendix and the supplementary for more qualitative visualization and quantitative ablation studies." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 625, + 188, + 637 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 625, + 188, + 637 + ], + "spans": [ + { + "bbox": [ + 105, + 625, + 188, + 637 + ], + "type": "text", + "content": "5 Conclusions" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 651, + 504, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 651, + 504, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 651, + 504, + 723 + ], + "type": "text", + "content": "This paper proposes " + }, + { + "bbox": [ + 104, + 651, + 504, + 723 + ], + "type": "inline_equation", + "content": "\\mathrm{PH^2D}" + }, + { + "bbox": [ + 104, + 651, + 504, + 723 + ], + "type": "text", + "content": ", an effort to construct a large-scale human task-oriented behavior dataset, along with the training pipeline HAT, which leverages " + }, + { + "bbox": [ + 104, + 651, + 504, + 723 + ], + "type": "inline_equation", + "content": "\\mathrm{PH^2D}" + }, + { + "bbox": [ + 104, + 651, + 504, + 723 + ], + "type": "text", + "content": " and robot data to show how humans can be treated as a data source for cross-embodiment learning. We show that it is possible to directly train an imitation learning model with mixed human-humanoid data without any training surrogates when the human data are aligned with the robot data. The learned policy shows improved generalization and robustness compared to the counterpart trained using only real-robot data." 
+ } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 741, + 308, + 750 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 107, + 71, + 186, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 71, + 186, + 84 + ], + "spans": [ + { + "bbox": [ + 107, + 71, + 186, + 84 + ], + "type": "text", + "content": "6 Limitations" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 107, + 96, + 504, + 204 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 96, + 504, + 204 + ], + "spans": [ + { + "bbox": [ + 107, + 96, + 504, + 204 + ], + "type": "text", + "content": "Although we also collect language instructions in " + }, + { + "bbox": [ + 107, + 96, + 504, + 204 + ], + "type": "inline_equation", + "content": "\\mathrm{PH^2D}" + }, + { + "bbox": [ + 107, + 96, + 504, + 204 + ], + "type": "text", + "content": ", due to our focus on investigating the embodiment gap between humans and humanoids, one limitation of the current version of the paper uses a relatively simple architecture for learning policy. In the near future, we plan to expand the policy learning process to train a large language-conditioned cross-embodiment policy to investigate generalization to novel language using human demonstrations. The collection of human data relies on off-the-shelf VR hardwares and their hand tracking SDKs. Since these SDKs were trained mostly for VR applications, hand keypoint tracking can fail for certain motions with heavy occlusion. In addition, though the proposed method conceptually extends to more robot morphologies, current evaluations are done on robots equipped with dexterous hands." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 218, + 214, + 232 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 218, + 214, + 232 + ], + "spans": [ + { + "bbox": [ + 107, + 218, + 214, + 232 + ], + "type": "text", + "content": "7 Acknowledgment" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 243, + 504, + 269 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 243, + 504, + 269 + ], + "spans": [ + { + "bbox": [ + 107, + 243, + 504, + 269 + ], + "type": "text", + "content": "This work was supported, in part, by NSF CAREER Award IIS-2240014, NSF CCF-2112665 (TILOS), and gifts from Amazon, Meta and Apple." 
+ } + ] + } + ], + "index": 3 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 302, + 742, + 308, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 742, + 308, + 750 + ], + "spans": [ + { + "bbox": [ + 302, + 742, + 308, + 750 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 4 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 71, + 165, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 71, + 165, + 84 + ], + "spans": [ + { + "bbox": [ + 106, + 71, + 165, + 84 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 105, + 90, + 506, + 723 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 111, + 90, + 505, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 90, + 505, + 116 + ], + "spans": [ + { + "bbox": [ + 111, + 90, + 505, + 116 + ], + "type": "text", + "content": "[1] S. Liu, L. Wu, B. Li, H. Tan, H. Chen, Z. Wang, K. Xu, H. Su, and J. Zhu. Rdt-1b: a diffusion foundation model for bimanual manipulation. arXiv preprint arXiv:2410.07864, 2024." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 111, + 122, + 506, + 170 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 122, + 506, + 170 + ], + "spans": [ + { + "bbox": [ + 111, + 122, + 506, + 170 + ], + "type": "text", + "content": "[2] Octo Model Team, D. Ghosh, H. Walke, K. Pertsch, K. Black, O. Mees, S. Dasari, J. Hejna, C. Xu, J. Luo, T. Kreiman, Y. Tan, L. Y. Chen, P. Sanketi, Q. Vuong, T. Xiao, D. Sadigh, C. Finn, and S. Levine. Octo: An open-source generalist robot policy. In Proceedings of Robotics: Science and Systems, 2024." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 111, + 178, + 504, + 215 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 178, + 504, + 215 + ], + "spans": [ + { + "bbox": [ + 111, + 178, + 504, + 215 + ], + "type": "text", + "content": "[3] K. Black, N. Brown, D. Driess, A. Esmail, M. Equi, C. Finn, N. Fusai, L. Groom, K. Hausman, B. Ichter, et al. " + }, + { + "bbox": [ + 111, + 178, + 504, + 215 + ], + "type": "inline_equation", + "content": "\\pi_0" + }, + { + "bbox": [ + 111, + 178, + 504, + 215 + ], + "type": "text", + "content": ": A vision-language-action flow model for general robot control. arXiv preprint arXiv:2410.24164, 2024." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 111, + 221, + 504, + 246 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 221, + 504, + 246 + ], + "spans": [ + { + "bbox": [ + 111, + 221, + 504, + 246 + ], + "type": "text", + "content": "[4] S. Dasari, O. Mees, S. Zhao, M. K. Srirama, and S. Levine. The ingredients for robotic diffusion transformers. arXiv preprint arXiv:2410.10088, 2024." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 111, + 253, + 504, + 277 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 253, + 504, + 277 + ], + "spans": [ + { + "bbox": [ + 111, + 253, + 504, + 277 + ], + "type": "text", + "content": "[5] T. Z. Zhao, V. Kumar, S. Levine, and C. Finn. Learning fine-grained bimanual manipulation with low-cost hardware. arXiv preprint arXiv:2304.13705, 2023." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 111, + 285, + 504, + 310 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 285, + 504, + 310 + ], + "spans": [ + { + "bbox": [ + 111, + 285, + 504, + 310 + ], + "type": "text", + "content": "[6] Z. Fu, T. Z. Zhao, and C. Finn. Mobile aloha: Learning bimanual mobile manipulation with low-cost whole-body teleoperation. arXiv preprint arXiv:2401.02117, 2024." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 111, + 316, + 504, + 352 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 316, + 504, + 352 + ], + "spans": [ + { + "bbox": [ + 111, + 316, + 504, + 352 + ], + "type": "text", + "content": "[7] C. Chi, Z. Xu, C. Pan, E. Cousineau, B. Burchfiel, S. Feng, R. Tedrake, and S. Song. Universal manipulation interface: In-the-wild robot teaching without in-the-wild robots. arXiv preprint arXiv:2402.10329, 2024." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 111, + 360, + 504, + 396 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 360, + 504, + 396 + ], + "spans": [ + { + "bbox": [ + 111, + 360, + 504, + 396 + ], + "type": "text", + "content": "[8] S. Yang, M. Liu, Y. Qin, R. Ding, J. Li, X. Cheng, R. Yang, S. Yi, and X. Wang. Ace: A cross-platform visual-exoskeletons system for low-cost dexterous teleoperation. arXiv preprint arXiv:2408.11805, 2024." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 111, + 403, + 504, + 429 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 403, + 504, + 429 + ], + "spans": [ + { + "bbox": [ + 111, + 403, + 504, + 429 + ], + "type": "text", + "content": "[9] X. Cheng, J. Li, S. Yang, G. Yang, and X. Wang. Open-television: Teleoperation with immersive active visual feedback. In Conference on Robot Learning (CoRL), 2024." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 435, + 504, + 472 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 435, + 504, + 472 + ], + "spans": [ + { + "bbox": [ + 105, + 435, + 504, + 472 + ], + "type": "text", + "content": "[10] T. He, Z. Luo, X. He, W. Xiao, C. Zhang, W. Zhang, K. Kitani, C. Liu, and G. Shi. Omnih2o: Universal and dexterous human-to-humanoid whole-body teleoperation and learning. arXiv preprint arXiv:2406.08858, 2024." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 479, + 504, + 504 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 479, + 504, + 504 + ], + "spans": [ + { + "bbox": [ + 105, + 479, + 504, + 504 + ], + "type": "text", + "content": "[11] S. Dasari, F. Ebert, S. Tian, S. Nair, B. Bucher, K. Schmeckpeper, S. Singh, S. Levine, and C. Finn. Robonet: Large-scale multi-robot learning. arXiv preprint arXiv:1910.11215, 2019." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 510, + 504, + 559 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 510, + 504, + 559 + ], + "spans": [ + { + "bbox": [ + 105, + 510, + 504, + 559 + ], + "type": "text", + "content": "[12] H. Bharadhwaj, J. Vakil, M. Sharma, A. Gupta, S. Tulsiani, and V. Kumar. Roboagent: Generalization and efficiency in robot manipulation via semantic augmentations and action chunking. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 4788-4795. IEEE, 2024." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 567, + 504, + 592 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 567, + 504, + 592 + ], + "spans": [ + { + "bbox": [ + 105, + 567, + 504, + 592 + ], + "type": "text", + "content": "[13] H. Ha, Y. Gao, Z. Fu, J. Tan, and S. Song. Umi on legs: Making manipulation policies mobile with manipulation-centric whole-body controllers. arXiv preprint arXiv:2407.10353, 2024." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 598, + 504, + 635 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 598, + 504, + 635 + ], + "spans": [ + { + "bbox": [ + 105, + 598, + 504, + 635 + ], + "type": "text", + "content": "[14] A. O'Neill, A. Rehman, A. Gupta, A. Maddukuri, A. Gupta, A. Padalkar, A. Lee, A. Pooley, A. Gupta, A. Mandlekar, et al. Open x-embodiment: Robotic learning datasets and rt-x models. arXiv preprint arXiv:2310.08864, 2023." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 642, + 504, + 677 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 642, + 504, + 677 + ], + "spans": [ + { + "bbox": [ + 105, + 642, + 504, + 677 + ], + "type": "text", + "content": "[15] C. Wang, H. Shi, W. Wang, R. Zhang, L. Fei-Fei, and C. K. Liu. Dexcap: Scalable and portable mocap data collection system for dexterous manipulation. arXiv preprint arXiv:2403.07788, 2024." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 105, + 685, + 504, + 723 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 685, + 504, + 723 + ], + "spans": [ + { + "bbox": [ + 105, + 685, + 504, + 723 + ], + "type": "text", + "content": "[16] S. Kareer, D. Patel, R. Punamiya, P. Mathur, S. Cheng, C. Wang, J. Hoffman, and D. Xu. Egomimic: Scaling imitation learning via egocentric video. arXiv preprint arXiv:2410.24221, 2024. URL https://arxiv.org/abs/2410.24221." + } + ] + } + ], + "index": 16 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 72, + 505, + 721 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 106, + 72, + 505, + 107 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 72, + 505, + 107 + ], + "spans": [ + { + "bbox": [ + 106, + 72, + 505, + 107 + ], + "type": "text", + "content": "[17] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision. In ICML. PMLR, 2021." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 106, + 116, + 392, + 129 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 116, + 392, + 129 + ], + "spans": [ + { + "bbox": [ + 106, + 116, + 392, + 129 + ], + "type": "text", + "content": "[18] OpenAI. Gpt-4 technical report. Technical report, OpenAI, 2023." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 135, + 504, + 159 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 135, + 504, + 159 + ], + "spans": [ + { + "bbox": [ + 107, + 135, + 504, + 159 + ], + "type": "text", + "content": "[19] R. Mendonca, S. Bahl, and D. Pathak. Structured world models from human videos. In RSS, 2023." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 106, + 166, + 504, + 191 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 166, + 504, + 191 + ], + "spans": [ + { + "bbox": [ + 106, + 166, + 504, + 191 + ], + "type": "text", + "content": "[20] S. Bahl, R. Mendonca, L. Chen, U. Jain, and D. Pathak. Affordances from human videos as a versatile representation for robotics. In CVPR, 2023." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 106, + 198, + 504, + 233 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 198, + 504, + 233 + ], + "spans": [ + { + "bbox": [ + 106, + 198, + 504, + 233 + ], + "type": "text", + "content": "[21] K. Grauman, A. Westbury, E. Byrne, Z. Chavis, A. Furnari, R. Girdhar, J. Hamburger, H. Jiang, M. Liu, X. Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In CVPR, 2022." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 106, + 241, + 504, + 278 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 241, + 504, + 278 + ], + "spans": [ + { + "bbox": [ + 106, + 241, + 504, + 278 + ], + "type": "text", + "content": "[22] D. Damen, H. Doughty, G. M. Farinella, S. Fidler, A. Furnari, E. Kazakos, D. Moltisanti, J. Munro, T. Perrett, W. Price, and M. Wray. Scaling egocentric vision: The epic-kitchens dataset. In ECCV, 2018." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 285, + 504, + 311 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 285, + 504, + 311 + ], + "spans": [ + { + "bbox": [ + 105, + 285, + 504, + 311 + ], + "type": "text", + "content": "[23] T. Z. Zhao, J. Tompson, D. Driess, P. Florence, K. Ghasemipour, C. Finn, and A. Wahid. Aloha unleashed: A simple recipe for robot dexterity. arXiv preprint arXiv:2410.13126, 2024." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 316, + 504, + 342 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 316, + 504, + 342 + ], + "spans": [ + { + "bbox": [ + 105, + 316, + 504, + 342 + ], + "type": "text", + "content": "[24] L. Wang, X. Chen, J. Zhao, and K. He. Scaling proprioceptive-visual learning with heterogeneous pre-trained transformers. arXiv preprint arXiv:2409.20537, 2024." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 106, + 348, + 504, + 385 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 348, + 504, + 385 + ], + "spans": [ + { + "bbox": [ + 106, + 348, + 504, + 385 + ], + "type": "text", + "content": "[25] C. Chi, Z. Xu, S. Feng, E. Cousineau, Y. Du, B. Burchfiel, R. Tedrake, and S. Song. Diffusion policy: Visuomotor policy learning via action diffusion. The International Journal of Robotics Research, page 02783649241273668, 2023." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 106, + 392, + 504, + 428 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 392, + 504, + 428 + ], + "spans": [ + { + "bbox": [ + 106, + 392, + 504, + 428 + ], + "type": "text", + "content": "[26] R.-Z. Qiu, Y. Song, X. Peng, S. A. Suryadevara, G. Yang, M. Liu, M. Ji, C. Jia, R. Yang, X. Zou, et al. 
Wildma: Long horizon loco-manipulation in the wild. arXiv preprint arXiv:2411.15131, 2024." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 435, + 504, + 461 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 435, + 504, + 461 + ], + "spans": [ + { + "bbox": [ + 105, + 435, + 504, + 461 + ], + "type": "text", + "content": "[27] C. Lu, X. Cheng, J. Li, S. Yang, M. Ji, C. Yuan, G. Yang, S. Yi, and X. Wang. Mobile- television: Predictive motion priors for humanoid whole-body control. In ICRA, 2025." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 106, + 467, + 504, + 503 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 467, + 504, + 503 + ], + "spans": [ + { + "bbox": [ + 106, + 467, + 504, + 503 + ], + "type": "text", + "content": "[28] Y. Ze, Z. Chen, W. Wang, T. Chen, X. He, Y. Yuan, X. B. Peng, and J. Wu. Generalizable humanoid manipulation with improved 3d diffusion policies. arXiv preprint arXiv:2410.10803, 2024." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 106, + 510, + 504, + 548 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 510, + 504, + 548 + ], + "spans": [ + { + "bbox": [ + 106, + 510, + 504, + 548 + ], + "type": "text", + "content": "[29] S. P. Arunachalam, S. Silwal, B. Evans, and L. Pinto. Dexterous imitation made easy: A learning-based framework for efficient dexterous manipulation. In 2023 IEEE international conference on robotics and automation (icra), pages 5954-5961. IEEE, 2023." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 554, + 504, + 580 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 554, + 504, + 580 + ], + "spans": [ + { + "bbox": [ + 105, + 554, + 504, + 580 + ], + "type": "text", + "content": "[30] A. S. Chen, S. Nair, and C. Finn. Learning generalizable robotic reward functions from \"in-the-wild\" human videos. arXiv preprint arXiv:2103.16817, 2021." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 106, + 586, + 504, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 586, + 504, + 624 + ], + "spans": [ + { + "bbox": [ + 106, + 586, + 504, + 624 + ], + "type": "text", + "content": "[31] J. Lee and M. S. Ryoo. Learning robot activities from first-person human videos using convolutional future regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1-2, 2017." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 106, + 629, + 504, + 666 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 629, + 504, + 666 + ], + "spans": [ + { + "bbox": [ + 106, + 629, + 504, + 666 + ], + "type": "text", + "content": "[32] K. Lee, Y. Su, T.-K. Kim, and Y. Demiris. A syntactic approach to robot imitation learning using probabilistic activity grammars. Robotics and Autonomous Systems, 61(12):1323-1334, 2013." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 106, + 673, + 504, + 721 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 673, + 504, + 721 + ], + "spans": [ + { + "bbox": [ + 106, + 673, + 504, + 721 + ], + "type": "text", + "content": "[33] A. Nguyen, D. Kanoulas, L. Muratore, D. G. Caldwell, and N. G. Tsagarakis. Translating videos to commands for robotic manipulation with deep recurrent neural networks. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 3782-3788. IEEE, 2018." 
+ } + ] + } + ], + "index": 16 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 72, + 505, + 721 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 106, + 72, + 505, + 107 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 72, + 505, + 107 + ], + "spans": [ + { + "bbox": [ + 106, + 72, + 505, + 107 + ], + "type": "text", + "content": "[34] J. Rothfuss, F. Ferreira, E. E. Aksoy, Y. Zhou, and T. Asfour. Deep episodic memory: Encoding, recalling, and predicting episodic experiences for robot action execution. IEEE Robotics and Automation Letters, 3(4):4007-4014, 2018." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 106, + 115, + 505, + 152 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 115, + 505, + 152 + ], + "spans": [ + { + "bbox": [ + 106, + 115, + 505, + 152 + ], + "type": "text", + "content": "[35] Y. Yang, Y. Li, C. Fermuller, and Y. Aloimonos. Robot learning manipulation action plans by \"watching\" unconstrained videos from the world wide web. In Proceedings of the AAAI conference on artificial intelligence, volume 29, 2015." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 159, + 504, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 159, + 504, + 183 + ], + "spans": [ + { + "bbox": [ + 107, + 159, + 504, + 183 + ], + "type": "text", + "content": "[36] H. Bharadhwaj, R. Mottaghi, A. Gupta, and S. Tulsiani. Track2act: Predicting point tracks from internet videos enables diverse zero-shot robot manipulation. In ECCV, 2024." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 190, + 504, + 215 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 190, + 504, + 215 + ], + "spans": [ + { + "bbox": [ + 107, + 190, + 504, + 215 + ], + "type": "text", + "content": "[37] C. Wen, X. Lin, J. So, K. Chen, Q. Dou, Y. Gao, and P. Abbeel. Any-point trajectory modeling for policy learning. arXiv preprint arXiv:2401.00025, 2023." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 106, + 222, + 504, + 257 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 222, + 504, + 257 + ], + "spans": [ + { + "bbox": [ + 106, + 222, + 504, + 257 + ], + "type": "text", + "content": "[38] J. Li, Y. Zhu, Y. Xie, Z. Jiang, M. Seo, G. Pavlakos, and Y. Zhu. Okami: Teaching humanoid robots manipulation skills through single video imitation. arXiv preprint arXiv:2410.11792, 2024." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 266, + 504, + 301 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 266, + 504, + 301 + ], + "spans": [ + { + "bbox": [ + 107, + 266, + 504, + 301 + ], + "type": "text", + "content": "[39] N. Das, S. Bechtle, T. Davchev, D. Jayaraman, A. Rai, and F. Meier. Model-based inverse reinforcement learning from visual demonstrations. In Conference on Robot Learning, pages 1930–1942. PMLR, 2021." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 309, + 504, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 309, + 504, + 346 + ], + "spans": [ + { + "bbox": [ + 105, + 309, + 504, + 346 + ], + "type": "text", + "content": "[40] H. Xiong, Q. Li, Y.-C. Chen, H. Bharadhwaj, S. Sinha, and A. Garg. Learning by watching: Physical imitation of manipulation skills from human videos. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 7827-7834. IEEE, 2021." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 353, + 504, + 378 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 353, + 504, + 378 + ], + "spans": [ + { + "bbox": [ + 107, + 353, + 504, + 378 + ], + "type": "text", + "content": "[41] S. Pirk, M. Khansari, Y. Bai, C. Lynch, and P. Sermanet. Online object representations with contrastive learning. arXiv preprint arXiv:1906.04312, 2019." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 384, + 504, + 409 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 384, + 504, + 409 + ], + "spans": [ + { + "bbox": [ + 107, + 384, + 504, + 409 + ], + "type": "text", + "content": "[42] S. Nair, A. Rajeswaran, V. Kumar, C. Finn, and A. Gupta. R3m: A universal visual representation for robot manipulation. arXiv preprint arXiv:2203.12601, 2022." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 107, + 416, + 504, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 416, + 504, + 453 + ], + "spans": [ + { + "bbox": [ + 107, + 416, + 504, + 453 + ], + "type": "text", + "content": "[43] Y. J. Ma, S. Sodhani, D. Jayaraman, O. Bastani, V. Kumar, and A. Zhang. Vip: Towards universal visual reward and representation via value-implicit pre-training. arXiv preprint arXiv:2210.00030, 2022." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 107, + 460, + 504, + 485 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 460, + 504, + 485 + ], + "spans": [ + { + "bbox": [ + 107, + 460, + 504, + 485 + ], + "type": "text", + "content": "[44] L.-H. Lin, Y. Cui, A. Xie, T. Hua, and D. Sadigh. Flowretrieval: Flow-guided data retrieval for few-shot imitation learning. arXiv preprint arXiv:2408.16944, 2024." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 107, + 491, + 504, + 528 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 491, + 504, + 528 + ], + "spans": [ + { + "bbox": [ + 107, + 491, + 504, + 528 + ], + "type": "text", + "content": "[45] J. Ren, P. Sundaresan, D. Sadigh, S. Choudhury, and J. Bohg. Motion tracks: A unified representation for human-robot transfer in few-shot imitation learning. arXiv preprint arXiv:2501.06994, 2025." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 107, + 535, + 504, + 560 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 535, + 504, + 560 + ], + "spans": [ + { + "bbox": [ + 107, + 535, + 504, + 560 + ], + "type": "text", + "content": "[46] Y. Zhu, A. Lim, P. Stone, and Y. Zhu. Vision-based manipulation from single human video with open-world object graphs. arXiv preprint arXiv:2405.20321, 2024." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 107, + 567, + 504, + 602 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 567, + 504, + 602 + ], + "spans": [ + { + "bbox": [ + 107, + 567, + 504, + 602 + ], + "type": "text", + "content": "[47] C.-C. Hsu, B. Wen, J. 
Xu, Y. Narang, X. Wang, Y. Zhu, J. Biswas, and S. Birchfield. Spot: Se (3) pose trajectory diffusion for object-centric manipulation. arXiv preprint arXiv:2411.00965, 2024." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 107, + 610, + 504, + 635 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 610, + 504, + 635 + ], + "spans": [ + { + "bbox": [ + 107, + 610, + 504, + 635 + ], + "type": "text", + "content": "[48] Z. Fu, Q. Zhao, Q. Wu, G. Wetzstein, and C. Finn. Humanplus: Humanoid shadowing and imitation from humans. In CoRL, 2024." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 107, + 642, + 504, + 678 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 642, + 504, + 678 + ], + "spans": [ + { + "bbox": [ + 107, + 642, + 504, + 678 + ], + "type": "text", + "content": "[49] W. Huang, I. Mordatch, and D. Pathak. One policy to control them all: Shared modular policies for agent-agnostic control. In International Conference on Machine Learning, pages 4455–4464. PMLR, 2020." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 107, + 685, + 504, + 721 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 685, + 504, + 721 + ], + "spans": [ + { + "bbox": [ + 107, + 685, + 504, + 721 + ], + "type": "text", + "content": "[50] L. Y. Chen, K. Hari, K. Dharmarajan, C. Xu, Q. Vuong, and K. Goldberg. Mirage: Cross-embodiment zero-shot policy transfer with cross-painting. arXiv preprint arXiv:2402.19249, 2024." + } + ] + } + ], + "index": 16 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 106, + 72, + 505, + 722 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 106, + 72, + 505, + 107 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 72, + 505, + 107 + ], + "spans": [ + { + "bbox": [ + 106, + 72, + 505, + 107 + ], + "type": "text", + "content": "[51] J. Yang, C. Glossop, A. Bhorkar, D. Shah, Q. Vuong, C. Finn, D. Sadigh, and S. Levine. Pushing the limits of cross-embodiment learning for manipulation and navigation. arXiv preprint arXiv:2402.19432, 2024." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 106, + 115, + 504, + 140 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 115, + 504, + 140 + ], + "spans": [ + { + "bbox": [ + 106, + 115, + 504, + 140 + ], + "type": "text", + "content": "[52] J. Yang, D. Sadigh, and C. Finn. Polybot: Training one policy across robots while embracing variability. arXiv preprint arXiv:2307.03719, 2023." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 107, + 148, + 504, + 183 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 148, + 504, + 183 + ], + "spans": [ + { + "bbox": [ + 107, + 148, + 504, + 183 + ], + "type": "text", + "content": "[53] F. Ebert, Y. Yang, K. Schmeckpeper, B. Bucher, G. Georgakis, K. Daniilidis, C. Finn, and S. Levine. Bridge data: Boosting generalization of robotic skills with cross-domain datasets. arXiv preprint arXiv:2109.13396, 2021." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 191, + 504, + 226 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 191, + 504, + 226 + ], + "spans": [ + { + "bbox": [ + 107, + 191, + 504, + 226 + ], + "type": "text", + "content": "[54] T. Franzmeyer, P. Torr, and J. F. Henriques. Learn what matters: cross-domain imitation learning with task-relevant embeddings. Advances in Neural Information Processing Systems, 35: 26283-26294, 2022." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 107, + 235, + 504, + 270 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 235, + 504, + 270 + ], + "spans": [ + { + "bbox": [ + 107, + 235, + 504, + 270 + ], + "type": "text", + "content": "[55] A. Ghadirzadeh, X. Chen, P. Poklukar, C. Finn, M. Björkman, and D. Kragic. Bayesian meta-learning for few-shot policy adaptation across robotic platforms. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1274–1280. IEEE, 2021." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 278, + 504, + 314 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 278, + 504, + 314 + ], + "spans": [ + { + "bbox": [ + 107, + 278, + 504, + 314 + ], + "type": "text", + "content": "[56] T. Shankar, Y. Lin, A. Rajeswaran, V. Kumar, S. Anderson, and J. Oh. Translating robot skills: Learning unsupervised skill correspondences across robots. In International Conference on Machine Learning, pages 19626-19644. PMLR, 2022." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 322, + 504, + 346 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 322, + 504, + 346 + ], + "spans": [ + { + "bbox": [ + 107, + 322, + 504, + 346 + ], + "type": "text", + "content": "[57] M. Xu, Z. Xu, C. Chi, M. Veloso, and S. Song. Xskill: Cross embodiment skill discovery. In Conference on Robot Learning, pages 3536-3555. PMLR, 2023." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 354, + 504, + 389 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 354, + 504, + 389 + ], + "spans": [ + { + "bbox": [ + 107, + 354, + 504, + 389 + ], + "type": "text", + "content": "[58] Z.-H. Yin, L. Sun, H. Ma, M. Tomizuka, and W.-J. Li. Cross domain robot imitation with invariant representation. In 2022 International Conference on Robotics and Automation (ICRA), pages 455-461. IEEE, 2022." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 397, + 504, + 432 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 397, + 504, + 432 + ], + "spans": [ + { + "bbox": [ + 107, + 397, + 504, + 432 + ], + "type": "text", + "content": "[59] K. Zakka, A. Zeng, P. Florence, J. Tompson, J. Bohg, and D. Dwibedi. Xirl: Cross-embodiment inverse reinforcement learning. In Conference on Robot Learning, pages 537–546. PMLR, 2022." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 107, + 441, + 504, + 465 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 441, + 504, + 465 + ], + "spans": [ + { + "bbox": [ + 107, + 441, + 504, + 465 + ], + "type": "text", + "content": "[60] G. Zhang, L. Zhong, Y. Lee, and J. J. Lim. Policy transfer across visual and dynamics domain gaps via iterative grounding. arXiv preprint arXiv:2107.00339, 2021." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 107, + 472, + 504, + 497 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 472, + 504, + 497 + ], + "spans": [ + { + "bbox": [ + 107, + 472, + 504, + 497 + ], + "type": "text", + "content": "[61] Q. Zhang, T. Xiao, A. A. Efros, L. Pinto, and X. Wang. Learning cross-domain correspondence for control with dynamics cycle-consistency. arXiv preprint arXiv:2012.09811, 2020." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 107, + 503, + 477, + 516 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 503, + 477, + 516 + ], + "spans": [ + { + "bbox": [ + 107, + 503, + 477, + 516 + ], + "type": "text", + "content": "[62] S. Bahl, A. Gupta, and D. Pathak. Human-to-robot imitation in the wild. In RSS, 2022." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 107, + 524, + 504, + 559 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 524, + 504, + 559 + ], + "spans": [ + { + "bbox": [ + 107, + 524, + 504, + 559 + ], + "type": "text", + "content": "[63] C. Wang, L. Fan, J. Sun, R. Zhang, L. Fei-Fei, D. Xu, Y. Zhu, and A. Anandkumar. Mimicplay: Long-horizon imitation learning by watching human play. arXiv preprint arXiv:2302.12422, 2023." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 107, + 567, + 504, + 603 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 567, + 504, + 603 + ], + "spans": [ + { + "bbox": [ + 107, + 567, + 504, + 603 + ], + "type": "text", + "content": "[64] J. Engel, K. Somasundaram, M. Goesele, A. Sun, A. Gamino, A. Turner, A. Talattof, A. Yuan, B. Souti, B. Meredith, et al. Project aria: A new tool for egocentric multi-modal ai research. arXiv preprint arXiv:2308.13561, 2023." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 107, + 611, + 504, + 647 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 611, + 504, + 647 + ], + "spans": [ + { + "bbox": [ + 107, + 611, + 504, + 647 + ], + "type": "text", + "content": "[65] A. Khazatsky, K. Pertsch, S. Nair, A. Balakrishna, S. Dasari, S. Karamcheti, S. Nasiriany, M. K. Srirama, L. Y. Chen, K. Ellis, et al. Droid: A large-scale in-the-wild robot manipulation dataset. arXiv preprint arXiv:2403.12945, 2024." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 107, + 654, + 504, + 679 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 654, + 504, + 679 + ], + "spans": [ + { + "bbox": [ + 107, + 654, + 504, + 679 + ], + "type": "text", + "content": "[66] W. Zhu, X. Ma, Z. Liu, L. Liu, W. Wu, and Y. Wang. Motionbert: A unified perspective on learning human motion representations. In ICCV, 2023." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 107, + 686, + 504, + 722 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 686, + 504, + 722 + ], + "spans": [ + { + "bbox": [ + 107, + 686, + 504, + 722 + ], + "type": "text", + "content": "[67] M. Oquab, T. Darcet, T. Moutakanni, H. Vo, M. Szafraniec, V. Khalidov, P. Fernandez, D. Haziza, F. Massa, A. El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023." 
+ } + ] + } + ], + "index": 16 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 72, + 505, + 140 + ], + "type": "list", + "angle": 0, + "index": 2, + "blocks": [ + { + "bbox": [ + 105, + 72, + 505, + 95 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 72, + 505, + 95 + ], + "spans": [ + { + "bbox": [ + 105, + 72, + 505, + 95 + ], + "type": "text", + "content": "[68] Y. Zhou, C. Barnes, J. Lu, J. Yang, and H. Li. On the continuity of rotation representations in neural networks. In CVPR, 2019." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 105, + 103, + 504, + 140 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 103, + 504, + 140 + ], + "spans": [ + { + "bbox": [ + 105, + 103, + 504, + 140 + ], + "type": "text", + "content": "[69] T. Yu, T. Xiao, A. Stone, J. Tompson, A. Brohan, S. Wang, J. Singh, C. Tan, J. Peralta, B. Ichter, et al. Scaling robot learning with semantically imagined experience. arXiv preprint arXiv:2302.11550, 2023." + } + ] + } + ], + "index": 1 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 3 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 108, + 71, + 504, + 128 + ], + "blocks": [ + { + "bbox": [ + 108, + 71, + 504, + 128 + ], + "lines": [ + { + "bbox": [ + 108, + 71, + 504, + 128 + ], + "spans": [ + { + "bbox": [ + 108, + 71, + 504, + 128 + ], + "type": "image", + "image_path": "9f523a9fee12629f433e77be513b1017d58395d4ce05221409b633c594646209.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 133, + 504, + 163 + ], + "lines": [ + { + "bbox": [ + 104, + 133, + 504, + 163 + ], + "spans": [ + { + "bbox": [ + 104, + 133, + 504, + 163 + ], + "type": "text", + "content": "(a) The robot performs the cup passing task across four different backgrounds. The left side shows the four background variations, while the right side illustrates the two passing directions: (#1 - Right hand passes the cup to the left hand, #2 - Left hand passes the cup to the right hand)." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 108, + 169, + 503, + 226 + ], + "blocks": [ + { + "bbox": [ + 108, + 169, + 503, + 226 + ], + "lines": [ + { + "bbox": [ + 108, + 169, + 503, + 226 + ], + "spans": [ + { + "bbox": [ + 108, + 169, + 503, + 226 + ], + "type": "image", + "image_path": "866552a849bd596593904a24e5c38eb7ae77607f532fc18e081a730a1cc169b6.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 231, + 504, + 262 + ], + "lines": [ + { + "bbox": [ + 104, + 231, + 504, + 262 + ], + "spans": [ + { + "bbox": [ + 104, + 231, + 504, + 262 + ], + "type": "text", + "content": "(b) The robot performs the horizontal grasping task with four different items: bottle, box_1, box_2, and can, as shown on the left. The right side illustrates the process: (#1-#3 - The robot grasps the bottle, #4-#5 - The robot places it into the plastic bin)." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 108, + 268, + 503, + 324 + ], + "blocks": [ + { + "bbox": [ + 108, + 268, + 503, + 324 + ], + "lines": [ + { + "bbox": [ + 108, + 268, + 503, + 324 + ], + "spans": [ + { + "bbox": [ + 108, + 268, + 503, + 324 + ], + "type": "image", + "image_path": "2e761ca98fefa9973609a6e2dbaa3424910a39a79dc84bf885adf3d0f1436ddb.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 329, + 504, + 361 + ], + "lines": [ + { + "bbox": [ + 104, + 329, + 504, + 361 + ], + "spans": [ + { + "bbox": [ + 104, + 329, + 504, + 361 + ], + "type": "text", + "content": "(c) The robot performs the vertical grasping task. As shown on the left, the Dynamixel box is placed in nine different positions for grasping. The right side illustrates the process: (#1-#3 - The robot grasps the box, #4-#5 - The robot places the box into the plastic bin)." + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 108, + 366, + 503, + 423 + ], + "blocks": [ + { + "bbox": [ + 108, + 366, + 503, + 423 + ], + "lines": [ + { + "bbox": [ + 108, + 366, + 503, + 423 + ], + "spans": [ + { + "bbox": [ + 108, + 366, + 503, + 423 + ], + "type": "image", + "image_path": "f6f6620b6c543c6d3d819bc17471c003a9fd4c539b262052407a900dc69d0ac7.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 427, + 504, + 468 + ], + "lines": [ + { + "bbox": [ + 104, + 427, + 504, + 468 + ], + "spans": [ + { + "bbox": [ + 104, + 427, + 504, + 468 + ], + "type": "text", + "content": "(d) The robot performs the pouring task. The left side shows different settings achieved by varying the robot's rotation and the table's position. The right side illustrates the pouring process: (#1 - Right hand grasps the bottle, #2 - Left hand grasps the cup, #3 - Pouring the drink, #4 - Left hand places the cup down, #5 - Right hand places the bottle down)." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 104, + 475, + 504, + 498 + ], + "lines": [ + { + "bbox": [ + 104, + 475, + 504, + 498 + ], + "spans": [ + { + "bbox": [ + 104, + 475, + 504, + 498 + ], + "type": "text", + "content": "Figure 7: Illustrations of tasks used in quantitative evaluations. From top to bottom: cup passing, horizontal grasping, vertical grasping, and pouring." 
+ } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_footnote" + }, + { + "bbox": [ + 104, + 563, + 504, + 608 + ], + "lines": [ + { + "bbox": [ + 104, + 563, + 504, + 608 + ], + "spans": [ + { + "bbox": [ + 104, + 563, + 504, + 608 + ], + "type": "text", + "content": "Table 4: Ablation of how human whole-body movement in training demonstrations affects policy rollout. We collect the same number of demonstrations on the same set of objects for the grasping task with or without whole-body movement. Since the robot does not have a natural whole-body movement like humans, it negatively influences the manipulation success rate." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_footnote" + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 162, + 503, + 449, + 564 + ], + "blocks": [ + { + "bbox": [ + 162, + 503, + 449, + 564 + ], + "lines": [ + { + "bbox": [ + 162, + 503, + 449, + 564 + ], + "spans": [ + { + "bbox": [ + 162, + 503, + 449, + 564 + ], + "type": "table", + "html": "
<table>
<tr><td rowspan="2">Method</td><td>Bottle</td><td>Box1</td><td>Box2</td><td>Can</td><td rowspan="2">Ovr. Succ.</td></tr>
<tr><td>I.D.</td><td>H.D.</td><td>H.D.</td><td>H.D.</td></tr>
<tr><td>Without whole-body</td><td>8/10</td><td>6/10</td><td>0/10</td><td>7/10</td><td>21/40</td></tr>
<tr><td>With whole-body</td><td>9/10</td><td>3/10</td><td>3/10</td><td>3/10</td><td>18/40</td></tr>
</table>
", + "image_path": "f53fe4b5a54f07a6b85dba56c4181f3865ea2d2b8b764a6c5fa784393f1a626b.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 624, + 326, + 639 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 624, + 326, + 639 + ], + "spans": [ + { + "bbox": [ + 105, + 624, + 326, + 639 + ], + "type": "text", + "content": "A More Ablation Study - Data Collection" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 651, + 504, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 651, + 504, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 651, + 504, + 723 + ], + "type": "text", + "content": "Autonomous Whole-body Movement. In Tab. 4, we justify the necessity to minimize body movement in human data collection. Humans tend to move their upper body unconsciously during manipulation (including shoulder and waist movement). However, existing humanoid robots have yet to reach such a level of dexterity. Thus, having these difficult-to-replicate actions in the human demonstrations leads to degraded performance. We hypothesize that such a necessity would be greatly reduced with the development of both whole-body locomotion methods and mechanical designs," + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 310, + 750 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 165, + 70, + 445, + 125 + ], + "blocks": [ + { + "bbox": [ + 165, + 70, + 445, + 125 + ], + "lines": [ + { + "bbox": [ + 165, + 70, + 445, + 125 + ], + "spans": [ + { + "bbox": [ + 165, + 70, + 445, + 125 + ], + "type": "table", + "html": "
<table>
<tr><td>Method</td><td>Grasping (secs)</td><td>Pouring (secs)</td></tr>
<tr><td>Human Demo</td><td>3.79±0.27</td><td>4.81±0.35</td></tr>
<tr><td>Human Demo with VR</td><td>4.09±0.30</td><td>4.90±0.26</td></tr>
<tr><td>Humanoid Demo (VR Teleop)</td><td>19.72±1.65</td><td>37.31±6.25</td></tr>
</table>
", + "image_path": "3db029e94cea3545b2bc9aece4ac03bbc7f164999355eafe4a0caf35755b81d5.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 125, + 504, + 180 + ], + "lines": [ + { + "bbox": [ + 104, + 125, + 504, + 180 + ], + "spans": [ + { + "bbox": [ + 104, + 125, + 504, + 180 + ], + "type": "text", + "content": "Table 5: Amortized mean and standard deviation of the time required to collect a single demonstration, including scene resets. The first row shows the time for regular human to complete corresponding tasks in real world. The second row represents our human data when wearing VR for data collection, demonstrating that egocentric human demonstrations provide a more scalable data source compared to robot teleoperation." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 104, + 202, + 504, + 226 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 202, + 504, + 226 + ], + "spans": [ + { + "bbox": [ + 104, + 202, + 504, + 226 + ], + "type": "text", + "content": "but for the currently available platforms, we instruct operators to minimize body movement as much as possible in our dataset." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 232, + 504, + 302 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 232, + 504, + 302 + ], + "spans": [ + { + "bbox": [ + 104, + 232, + 504, + 302 + ], + "type": "text", + "content": "Efficiency of Data Collection. In Tab. 5, we compare task completion times across different setups, including standard human manipulation, human demonstrations performed while wearing a VR device, and robot teleoperation. This analysis highlights how task-oriented human demonstrations can be a scalable data source for cross-embodiment learning. Notably, wearing a VR device does not significantly impact human manipulation speed, as the completion time remains nearly the same as in standard human demonstrations." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 308, + 504, + 357 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 308, + 504, + 357 + ], + "spans": [ + { + "bbox": [ + 104, + 308, + 504, + 357 + ], + "type": "text", + "content": "Among different data collection schemes, we find that most overhead arises during the retargeting process from human actions to robot actions. This is primarily due to latency and the constrained workspace of 7-DoF robotic arms, which are inherent challenges in existing data collection methods such as VR teleoperation [9], motion tracking [48, 10], and puppeting [8, 5]." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 361, + 505, + 458 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 361, + 505, + 458 + ], + "spans": [ + { + "bbox": [ + 104, + 361, + 505, + 458 + ], + "type": "text", + "content": "Beyond data collection speed, human demonstrations offer several additional advantages over teleoperation. They provide a safer alternative, reducing risks associated with real-robot execution. They are also more labor-efficient, as they do not require additional personnel for supervision. Furthermore, human demonstrations allow for greater flexibility in settings, enabling a diverse range of environments without requiring robot-specific adaptations. 
Additionally, human demonstrations achieve a higher demonstration success rate, and the required hardware (such as motion capture or VR devices) is more accessible and cost-effective compared to full robotic setups. These factors collectively make human data a more scalable solution for large-scale data collection." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 472, + 336, + 486 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 472, + 336, + 486 + ], + "spans": [ + { + "bbox": [ + 105, + 472, + 336, + 486 + ], + "type": "text", + "content": "B Normalization of different embodiments." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 498, + 504, + 558 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 498, + 504, + 558 + ], + "spans": [ + { + "bbox": [ + 104, + 498, + 504, + 558 + ], + "type": "text", + "content": "Tab. 2 suggests minor differences between using different normalization coefficients for the states and actions vectors of humans and humanoids. We take a closer look in Fig. 8, where we investigate the impact of different normalization strategies in the vertical grasping (picking) task. Noticeably, the same normalization approach achieved the highest overall success rate, but the success distribution is biased towards the upper-right region of the grid." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 563, + 504, + 599 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 563, + 504, + 599 + ], + "spans": [ + { + "bbox": [ + 104, + 563, + 504, + 599 + ], + "type": "text", + "content": "We hypothesize that this is because humans have a larger workspace than humanoid robots. Thus, human data encompasses humanoid proprioception as a subset, which results in a relatively smaller distribution for the robot state-action space." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 613, + 408, + 628 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 613, + 408, + 628 + ], + "spans": [ + { + "bbox": [ + 104, + 613, + 408, + 628 + ], + "type": "text", + "content": "C In-Depth Analysis of Different Types of Generalization" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 639, + 505, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 639, + 505, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 639, + 505, + 723 + ], + "type": "text", + "content": "Human data improves background generalization. We chose to use the cup passing task to test background generalization. We prepared four different tablecloths as backgrounds, as shown in Fig. 7a. In terms of training data distribution, the teleoperation data for this task was collected exclusively on the paper background shown in Fig. 7a, whereas the human data includes more than five different backgrounds. This diverse human dataset significantly enhances the generalization ability of the co-trained HAT policy. As shown in Tab. 7., HAT consistently outperforms across all four backgrounds, demonstrating robustness to background variations. 
In addition, the overall" + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 179, + 70, + 431, + 128 + ], + "blocks": [ + { + "bbox": [ + 179, + 70, + 431, + 128 + ], + "lines": [ + { + "bbox": [ + 179, + 70, + 431, + 128 + ], + "spans": [ + { + "bbox": [ + 179, + 70, + 431, + 128 + ], + "type": "table", + "html": "
<table>
<tr><td rowspan="2">Method</td><td>Bottle</td><td>Box1</td><td>Box2</td><td>Can</td><td rowspan="2">Ovr. Succ.</td></tr>
<tr><td>I.D.</td><td>H.D.</td><td>O.O.D.</td><td>O.O.D.</td></tr>
<tr><td>ACT</td><td>8/10</td><td>5/10</td><td>1/10</td><td>1/10</td><td>16/40</td></tr>
<tr><td>HAT</td><td>8/10</td><td>7/10</td><td>1/10</td><td>4/10</td><td>21/40</td></tr>
</table>
", + "image_path": "dfb20c46f115a91d517c9645ae36f4855d70870ed93ab393db917d1904f2fc69.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 104, + 130, + 504, + 153 + ], + "lines": [ + { + "bbox": [ + 104, + 130, + 504, + 153 + ], + "spans": [ + { + "bbox": [ + 104, + 130, + 504, + 153 + ], + "type": "text", + "content": "Table 6: Object Appearance Generalization: In the horizontal grasping task, we evaluated the grasping performance by attempting to grasp each object 10 times and recorded the success rate." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 104, + 205, + 504, + 231 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 205, + 504, + 231 + ], + "spans": [ + { + "bbox": [ + 104, + 205, + 504, + 231 + ], + "type": "text", + "content": "success rate increases by nearly " + }, + { + "bbox": [ + 104, + 205, + 504, + 231 + ], + "type": "inline_equation", + "content": "50\\%" + }, + { + "bbox": [ + 104, + 205, + 504, + 231 + ], + "type": "text", + "content": " compared to training without human data, highlighting the advantage of utilizing diverse human demonstrations." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 236, + 504, + 295 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 236, + 504, + 295 + ], + "spans": [ + { + "bbox": [ + 104, + 236, + 504, + 295 + ], + "type": "text", + "content": "Human data improves appearance generalization. To test how co-training improves robustness to perturbations in object textures, we evaluate the horizontal grasping policy on novel objects, as shown in Fig. 7b. Specifically, we compare the policy's performance on the bottle, box_1, box_2, and can, as shown left to right in the first image in Fig. 7b. These objects differ significantly in both color and shape from the bottle used in the teleoperation data distribution." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 300, + 504, + 361 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 300, + 504, + 361 + ], + "spans": [ + { + "bbox": [ + 104, + 300, + 504, + 361 + ], + "type": "text", + "content": "Since grasping is a relatively simple task, our adjusted policy demonstrates strong learning capabilities even with only 50 teleoperation data samples. The policy can successfully grasp most bottles despite the limited training set. To better highlight the impact of human data, we selected more challenging objects for evaluation. As shown in Tab. 6, human data significantly enhances the policy's ability to grasp these more difficult objects." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 365, + 505, + 473 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 365, + 505, + 473 + ], + "spans": [ + { + "bbox": [ + 104, + 365, + 505, + 473 + ], + "type": "text", + "content": "Notably, box_1 appears in the human data, while box_2 does not. Despite this, we observe that constraining with human data still improves overall performance, even on box_2, though its success rate does not increase. This suggests that, beyond direct experience with specific objects, the human data helps the policy learn broader visual priors that enable more proactive and stable grasping behaviors. 
For box_2, while the success rate remains low—partially due to its low height and color similarity to the table—the co-trained HAT policy demonstrates fewer out-of-distribution (OOD) failures and more actively searches for graspable regions. The failures on box_2 are primarily due to unstable grasping and the small box slipping from the hand, rather than the inability to perceive or locate the object." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 479, + 504, + 538 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 479, + 504, + 538 + ], + "spans": [ + { + "bbox": [ + 104, + 479, + 504, + 538 + ], + "type": "text", + "content": "Furthermore, adding more human data not only improves performance on objects seen in human training demonstrations (e.g., box_1) but also enhances generalization to completely novel objects (e.g., box_2 and can). We hypothesize that, as the number of objects grows, HAT starts to learn inter-category visual priors that guide it to grasp objects more effectively, even when they were not explicitly present in the training set." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 544, + 504, + 604 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 544, + 504, + 604 + ], + "spans": [ + { + "bbox": [ + 104, + 544, + 504, + 604 + ], + "type": "text", + "content": "Human data improves object placement generalization. Finally, we introduce variations in object placements that are not present in the real-robot training demonstrations and specifically investigate this in the vertical grasping (picking) task. In this task, we intentionally constrain the robot data collection to object placements within a subset of cells, while human vertical grasping data covers a much more diverse range of settings." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 609, + 504, + 657 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 609, + 504, + 657 + ], + "spans": [ + { + "bbox": [ + 104, + 609, + 504, + 657 + ], + "type": "text", + "content": "To systematically analyze the impact of human data, we evaluate model performance on a structured " + }, + { + "bbox": [ + 104, + 609, + 504, + 657 + ], + "type": "inline_equation", + "content": "3 \\times 3" + }, + { + "bbox": [ + 104, + 609, + 504, + 657 + ], + "type": "text", + "content": " grid, where each cell represents a " + }, + { + "bbox": [ + 104, + 609, + 504, + 657 + ], + "type": "inline_equation", + "content": "10\\mathrm{cm} \\times 10\\mathrm{cm}" + }, + { + "bbox": [ + 104, + 609, + 504, + 657 + ], + "type": "text", + "content": " region for grasping attempts. The numbers in each cell indicate the number of successful picks out of 10 trials. Real-robot training data is collected from only two specific cells, highlighted with dashed lines." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 662, + 504, + 723 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 662, + 504, + 723 + ], + "spans": [ + { + "bbox": [ + 104, + 662, + 504, + 723 + ], + "type": "text", + "content": "A key detail in our teleoperation data distribution is that 50 picking attempts are collected from the right-hand side grid and only 10 from the left-hand side grid. This imbalance explains why policies trained purely on teleoperation data struggle to grasp objects in the left-side grid. 
We observe that models trained solely on robot data fail to generalize to unseen cells, whereas cross-embodiment learning with human data significantly improves generalization, doubling the overall success rate." + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 189, + 72, + 422, + 205 + ], + "blocks": [ + { + "bbox": [ + 189, + 72, + 422, + 205 + ], + "lines": [ + { + "bbox": [ + 189, + 72, + 422, + 205 + ], + "spans": [ + { + "bbox": [ + 189, + 72, + 422, + 205 + ], + "type": "image", + "image_path": "1f6e163acefae2ad12158fbc4b7a7b2831a66bf5aa2f92de897b28a3f85082ae.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 210, + 506, + 266 + ], + "lines": [ + { + "bbox": [ + 104, + 210, + 506, + 266 + ], + "spans": [ + { + "bbox": [ + 104, + 210, + 506, + 266 + ], + "type": "text", + "content": "Figure 8: Object Placement Generalization. Performance comparisons of models trained with and without human data on vertical grasping (picking). Each cell in the " + }, + { + "bbox": [ + 104, + 210, + 506, + 266 + ], + "type": "inline_equation", + "content": "3 \\times 3" + }, + { + "bbox": [ + 104, + 210, + 506, + 266 + ], + "type": "text", + "content": " grid represents a " + }, + { + "bbox": [ + 104, + 210, + 506, + 266 + ], + "type": "inline_equation", + "content": "10\\mathrm{cm} \\times 10\\mathrm{cm}" + }, + { + "bbox": [ + 104, + 210, + 506, + 266 + ], + "type": "text", + "content": " region where the robot attempts to pick up a box, with numbers indicating successful attempts out of 10. The real-robot data is collected in two cells inside the dashed lines. Notably, our teleoperation data is intentionally imbalanced." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "type": "table", + "bbox": [ + 175, + 277, + 436, + 337 + ], + "blocks": [ + { + "bbox": [ + 175, + 277, + 436, + 337 + ], + "lines": [ + { + "bbox": [ + 175, + 277, + 436, + 337 + ], + "spans": [ + { + "bbox": [ + 175, + 277, + 436, + 337 + ], + "type": "table", + "html": "
<table>
<tr><td rowspan="2">Method</td><td>Paper</td><td>Wooden</td><td>Red</td><td>Green</td><td rowspan="2">Ovr. Succ.</td></tr>
<tr><td>I.D.</td><td>H.D.</td><td>O.O.D.</td><td>O.O.D.</td></tr>
<tr><td>ACT</td><td>19/20</td><td>14/20</td><td>12/20</td><td>10/20</td><td>55/80</td></tr>
<tr><td>HAT</td><td>20/20</td><td>16/20</td><td>18/20</td><td>18/20</td><td>72/80</td></tr>
</table>
", + "image_path": "9343979d54558a796b3bdf65a7f11a9152c2734e0eeae43ff4f84b1a962e2643.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 337, + 504, + 361 + ], + "lines": [ + { + "bbox": [ + 104, + 337, + 504, + 361 + ], + "spans": [ + { + "bbox": [ + 104, + 337, + 504, + 361 + ], + "type": "text", + "content": "Table 7: Background Generalization: In the cup passing task, we evaluate the passing performance by recording the number of failures or retries needed to complete 20 cup-passing trials." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 105, + 386, + 446, + 415 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 386, + 446, + 415 + ], + "spans": [ + { + "bbox": [ + 105, + 386, + 446, + 415 + ], + "type": "text", + "content": "D In-Depth Comparison between Humanoid A and Humanoid B configurations" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 426, + 504, + 474 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 426, + 504, + 474 + ], + "spans": [ + { + "bbox": [ + 104, + 426, + 504, + 474 + ], + "type": "text", + "content": "This section presents a detailed comparison of the two humanoid platforms, referred to as Humanoid A and Humanoid B, with a focus on joint structure and implications for manipulation capabilities. We restrict our analysis to the arm configurations, as other parts of the body were not exclusively explored in this work." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 479, + 504, + 575 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 479, + 504, + 575 + ], + "spans": [ + { + "bbox": [ + 104, + 479, + 504, + 575 + ], + "type": "text", + "content": "While morphologically similar, these two humanoids have drastically different arm configurations that create hurdles in direct policy transfer. Besides differences in motor technical specs such as torque and types of encoder (Humanoid B has absolute motor position encoders), they also have different mechanical limits. The range of motion (ROM) for the first four proximal joints—shoulder-pitch, shoulder-roll, shoulder_yaw, and elbow—differs across the two platforms. Humanoid B exhibits a consistently wider ROM, which allows a wider set of reachable configurations and increases the manipulability of the arm in constrained environments. Table 8 summarizes the ROM values for these shared joints." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 580, + 506, + 665 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 580, + 506, + 665 + ], + "spans": [ + { + "bbox": [ + 104, + 580, + 506, + 665 + ], + "type": "text", + "content": "A more significant architectural divergence is observed at the wrist. Humanoid A includes a single distal joint—wrist-roll—providing limited wrist articulation. This restricts end-effector dexterity and constrains in-hand manipulation strategies to a single rotational degree of freedom. In contrast, Humanoid B is equipped with a complete wrist mechanism composed of three independently actuated joints: wrist_pitch, wrist-roll, and wrist_yaw. These additional degrees of freedom allow for full orientation control of the end-effector, enabling tasks that require precise alignment, rotation, and fine adjustment of object poses." 
+ } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "18" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 189, + 357, + 421, + 424 + ], + "blocks": [ + { + "bbox": [ + 189, + 357, + 421, + 424 + ], + "lines": [ + { + "bbox": [ + 189, + 357, + 421, + 424 + ], + "spans": [ + { + "bbox": [ + 189, + 357, + 421, + 424 + ], + "type": "table", + "html": "
<table>
<tr><td>Joint</td><td>Humanoid A</td><td>Humanoid B</td></tr>
<tr><td>shoulder_pitch</td><td>-164° to +164°</td><td>-180° to +90°</td></tr>
<tr><td>shoulder_roll</td><td>-19° to +178°</td><td>-21° to +194°</td></tr>
<tr><td>shoulder_yaw</td><td>-74° to +255°</td><td>-152° to +172°</td></tr>
<tr><td>elbow</td><td>-71° to 150°</td><td>-54° to 182°</td></tr>
<tr><td>wrist_roll</td><td>-175° to 175°</td><td>-172° to 157°</td></tr>
</table>
", + "image_path": "36db00f148a7c1259dadae00242a99645c3dae4013bbde9ba35fdbdb2e48afcc.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_body" + } + ], + "index": 0 + }, + { + "bbox": [ + 133, + 425, + 477, + 436 + ], + "lines": [ + { + "bbox": [ + 133, + 425, + 477, + 436 + ], + "spans": [ + { + "bbox": [ + 133, + 425, + 477, + 436 + ], + "type": "text", + "content": "Table 8: Joint Range of Motion Comparison between Humanoid A and B (in degrees)" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "spans": [ + { + "bbox": [ + 300, + 741, + 311, + 750 + ], + "type": "text", + "content": "19" + } + ] + } + ], + "index": 2 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 18 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/data/2025/2503_13xxx/2503.13444/1c5e9011-0450-44ee-b04b-75479b89c9af_content_list.json b/data/2025/2503_13xxx/2503.13444/1c5e9011-0450-44ee-b04b-75479b89c9af_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0fef30705a9347461b799b0800e490a4a81537ee --- /dev/null +++ b/data/2025/2503_13xxx/2503.13444/1c5e9011-0450-44ee-b04b-75479b89c9af_content_list.json @@ -0,0 +1,2930 @@ +[ + { + "type": "text", + "text": "Ye Liu $^{1\\dagger}$ Kevin Qinghong Lin $^{2\\dagger}$ Chang Wen Chen $^{1\\boxtimes}$ Mike Zheng Shou $^{2\\boxtimes}$ $^{1}$ The Hong Kong Polytechnic University $^{2}$ Show Lab, National University of Singapore", + "bbox": [ + 151, + 181, + 844, + 223 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "https://videomind.github.io/", + "bbox": [ + 359, + 227, + 635, + 242 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 248, + 277, + 326, + 292 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Videos, with their unique temporal dimension, demand precise grounded understanding, where answers are directly linked to visual, interpretable evidence. Despite significant breakthroughs in reasoning capabilities within Large Language Models, multi-modal reasoning - especially for videos - remains unexplored. In this work, we introduce VideoMind, a novel video-language agent designed for temporal-grounded video understanding. VideoMind incorporates two key innovations: (i) We identify essential capabilities for video temporal reasoning and develop a role-based agentic workflow, including a planner for coordinating different roles, a grounder for temporal localization, a verifier to assess temporal interval accuracy, and an answerer for question-answering. (ii) To efficiently integrate these diverse roles, we propose a novel Chain-of-LoRA strategy, enabling seamless role-switching via lightweight LoRA adaptors while avoiding the overhead of multiple models, thus balancing efficiency and flexibility. Extensive experiments on 14 public benchmarks, including 3 on grounded video question-answering (Grounded VideoQA), 6 on video temporal grounding (VTG), and 5 on general video question-answering (VideoQA), verify that our agent achieves state-of-the-art performance on diverse video understanding tasks, underscoring its effectiveness in advancing video agent and long-form temporal reasoning.", + "bbox": [ + 89, + 309, + 483, + 688 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. 
Introduction", + "text_level": 1, + "bbox": [ + 91, + 715, + 220, + 729 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Recent advancements in Large Language Models (LLMs) have demonstrated remarkable effectiveness in reasoning tasks, such as Chain-of-Thought (CoT) [65, 85, 95], significantly improving both accuracy and interpretability in complex problem-solving scenarios [96]. Inspired by these achievements, researchers are now working to extend these reasoning capabilities to multi-modal domains [64, 69, 94] such as visual math understanding [73, 89, 106].", + "bbox": [ + 89, + 741, + 482, + 862 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Among multi-modal inputs, videos present a unique", + "bbox": [ + 109, + 863, + 482, + 878 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Textual Chain-of-Thought Reasoning", + "bbox": [ + 522, + 282, + 717, + 295 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/865525d574da9fe39ef94461ef11c1a45f03124f927f77f20a337613aaca69d1.jpg", + "image_caption": [ + "A 50min Video" + ], + "image_footnote": [], + "bbox": [ + 527, + 304, + 598, + 347 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Why are the bunnies gathering on the table?", + "bbox": [ + 606, + 311, + 679, + 342 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/5ac996b2acf397567990b9a9c70f6475b440f8cb23d8accf76f7490f142ae2fe.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 720, + 309, + 792, + 347 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The user asked Why are the bunnies gathering on the table? To answer this question, I need to examine the bunnies activity in the video and see if there are certain scenes showing them.", + "bbox": [ + 802, + 299, + 892, + 359 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/4529880d48997766d7724496b741fa5f6073db21f04d76032acd8bde6c39e4c4.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 526, + 377, + 568, + 397 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "I would act as different roles to analyze it. I'm now acting as the...", + "bbox": [ + 571, + 381, + 880, + 393 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/f07c34e3d221590795ca00d06baecd8eb1a7df25b78e5742f186156b9f32fde9.jpg", + "image_caption": [ + "Planner" + ], + "image_footnote": [], + "bbox": [ + 527, + 405, + 630, + 441 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The question seems related to the event \"bunnies gathering on the table\". I would ground this moment, verify it, then answer the question.", + "bbox": [ + 656, + 411, + 802, + 445 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Action Plan", + "bbox": [ + 826, + 411, + 874, + 420 + ], + "page_idx": 0 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "1. Grounder", + "2. Verifier" + ], + "bbox": [ + 826, + 421, + 874, + 438 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/4ce95f53a38d024cde0c1e5a35548e8588d8fa8724f0c8c1892a2ea134a17ff9.jpg", + "image_caption": [ + "Grounder" + ], + "image_footnote": [], + "bbox": [ + 527, + 498, + 568, + 521 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "I'm localizing the moment about 'bunnies gathering at the table'. 
Here are some candidate moments containing the relevant event.", + "bbox": [ + 655, + 470, + 807, + 503 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/e17ff7046c0b180a08ffe3d1d37f421a6d6054aeac92fdd1fa8a2389e671b47c.jpg", + "image_caption": [ + "Moments:" + ], + "image_footnote": [], + "bbox": [ + 823, + 477, + 890, + 508 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/c19712c65b18717c8a290ac3add7beaaa736be3fd392e95382149f186ce334aa.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 527, + 521, + 568, + 542 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/5c191bb697d10239fc29d8b6cdd7b3b8303bc4677db79a7ebf47a50da7d6a0a9.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 581, + 521, + 630, + 564 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "There are four candidate moments. Let me verify them one-by-one, focusing on the temporal boundaries and semantic similarities.", + "bbox": [ + 653, + 527, + 805, + 560 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/9d569066adde19b5244e4cc5670932a3b643c41cc8c0d6d03c1c9de68cc25e10.jpg", + "image_caption": [ + "Confidence" + ], + "image_footnote": [], + "bbox": [ + 823, + 535, + 888, + 565 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/db68b455eaf9f7444671e57f1938b636437a3b432f0ac669fe302c9798c1cf52.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 527, + 582, + 568, + 604 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/9c065590373e58f7a81c122df5811f8e63827746ed2f653cf09970664dd50b4b.jpg", + "image_caption": [ + "Figure 1. An illustration of VideoMind's Chain-of-LoRA reasoning strategy applied to a complex question for a 50-min long video. The problem is decomposed by Planner and distributed to Grounder, Verifier, and Answerer to systematically localize, verify, and interpret the relevant video moments. Such a role-based pipeline enables more human-like video reasoning compared with the pure textual CoT process." + ], + "image_footnote": [], + "bbox": [ + 581, + 582, + 630, + 622 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "I'm zooming-in to the most confident moment and watching it under a higher spatial resolution. Now, I've figured out the answer.", + "bbox": [ + 653, + 585, + 807, + 619 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Final Answer: Because a boy is sitting aside and handing out food to the bunnies.", + "bbox": [ + 825, + 584, + 883, + 621 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "VideoMind: Multi-modal Chain-of-LoRA Reasoning", + "bbox": [ + 524, + 636, + 789, + 647 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "challenge due to their temporal dimension, which introduces complexities not found in static images or text. Effective video reasoning requires not only recognizing appearance but also understanding their dynamic interactions over time (i.e., temporal-grounded understanding) [6, 7, 88]. 
While recent visual CoT methods [73, 89, 106] excel at generating detailed thoughts for static visual inputs, they struggle with videos because they cannot explicitly localize", + "bbox": [ + 511, + 779, + 906, + 902 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "#", + "bbox": [ + 158, + 114, + 192, + 147 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "VideoMind: A Chain-of-LoRA Agent for Long Video Reasoning", + "bbox": [ + 197, + 121, + 843, + 143 + ], + "page_idx": 0 + }, + { + "type": "aside_text", + "text": "arXiv:2503.13444v2 [cs.CV] 1 Apr 2025", + "bbox": [ + 22, + 268, + 60, + 700 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "$\\dagger$ Equal contribution. $\\boxtimes$ Corresponding authors.", + "bbox": [ + 94, + 887, + 357, + 900 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "1", + "bbox": [ + 493, + 924, + 503, + 935 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "or revisit earlier parts of the sequence, as demonstrated in Fig. 1. Humans, by contrast, approach video understanding with ease: they break down complex problems, identify relevant moments, revisit them to confirm details, and synthesize their observations into coherent answers. This natural proficiency inspires the development of an AI assistant capable of replicating this process, adeptly managing multiple abilities to achieve advanced video reasoning.", + "bbox": [ + 89, + 90, + 480, + 210 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "In this work, we introduce VideoMind, a novel video-language agent with enhanced video temporal reasoning. (i) To meet the demands of diverse tasks, we first identify several key roles essential for understanding long videos: a Grounder for precise moment retrieval, a Verifier for validating information accuracy, an Answerer for generating query-aware responses, and a Planner to flexibly coordinate these roles. Each role has been carefully designed to deliver strong performance, e.g., Grounder is equipped with a timestamp-decoder to ensure strong temporal grounding ability. (ii) To enable efficient collaboration among these roles, we propose a novel Chain-of-LoRA strategy, built upon a single base MLLM (i.e., Qwen2-VL [77]). This approach embodies a minimalist yet flexible design philosophy, facilitating seamless transitions and interactions between roles without the computational burden of multiple full models. Therefore, VideoMind achieves both efficiency and adaptability, offering a practical and flexible solution for diverse video tasks.", + "bbox": [ + 91, + 213, + 483, + 500 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To evaluate the effectiveness of VideoMind, we conducted extensive experiments across 14 benchmarks, including Grounded Video Question-Answering (CG-Bench [6], ReXTime [8], NExT-GQA [88]), Video Temporal Grounding (Charades-STA [17], ActivityNet-Captions [27], QVHighlights [30], TACoS [60], Ego4D-NLQ [18], ActivityNet-RTL [23]), and General Video Question-Answering (Video-MME [13], MLVU [108], LVBench [79], MVBench [34], LongVideoBench [86]). VideoMind demonstrates its capability to progressively process complex reasoning tasks by jointly providing temporal-grounded evidence and delivering accurate answers. Notably, our model demonstrates significant performance improvements even at the smaller 2B size, achieving a substantial increase in accuracy on long videos (27 minutes) in CG-Bench, even surpassing top-performing models such as GPT-4o [56]. 
Moreover, it demonstrates strong performance on sub-settings including video temporal grounding and general video question-answering. Ablation studies reveal the essential contributions of our design choices, particularly the Chain-of-LoRA mechanism, which enhances performance while achieving remarkable computational efficiency compared to traditional fine-tuning methods. Overall, our contributions are threefold:", + "bbox": [ + 91, + 502, + 483, + 864 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "1. We propose VideoMind, a multi-modal agent framework that enhances video reasoning by emulating human-like", + "bbox": [ + 91, + 869, + 482, + 901 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "processes, such as breaking down tasks, localizing and verifying moments, and synthesizing answers. This approach addresses the unique challenges of temporal-grounded reasoning in a progressive strategy.", + "bbox": [ + 532, + 90, + 903, + 151 + ], + "page_idx": 1 + }, + { + "type": "list", + "sub_type": "text", + "list_items": [ + "2. We introduce Chain-of-LoRA, a minimalist strategy built on one vision-language model, allowing seamless collaboration among multiple roles. This method ensures VideoMind adapts flexibly to diverse tasks without the overhead of multiple models.", + "3. VideoMind achieves state-of-the-art results across three settings: Grounded VideoQA, Video Temporal Grounding, and General VideoQA. Notably, on the long video benchmarks [6, 79, 108], our 2B model outperforms GPT-4o on grounded QA tasks. Extensive ablations further confirm its effectiveness and efficiency." + ], + "bbox": [ + 513, + 152, + 903, + 316 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related Work", + "text_level": 1, + "bbox": [ + 513, + 333, + 653, + 349 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1. Temporal-grounded Video Understanding", + "text_level": 1, + "bbox": [ + 511, + 359, + 870, + 376 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Significant advances in video understanding have propelled tasks such as video captioning [41, 107], video question answering [87, 99], and video-text retrieval [39, 53], which emphasize instance-level understanding, yet these models often lack visual-grounded correspondence and interpretability, particularly for long-form video streams. The task of Video Temporal Grounding [17, 27] tackles this by requiring precise temporal localization for diverse queries, though regression-based models [46, 47] excel at localization but fall short in providing textual interpretability. Recent benchmarks (such as Grounded Question Answering) [6, 88] intensify this challenge, demanding both reasoning for complex questions and fine-grained temporal correspondence. Previous baselines for these tasks typically rely on multi-task objectives or modular agents composed of distinct components [11, 82, 97, 99], often yielding suboptimal performance (e.g., LLM-based approaches for temporal grounding) or overly complex systems, which constrain their efficiency and flexibility. In this work, our proposed VideoMind introduces an agentic workflow built upon a unified backbone, seamlessly integrating multiple functionalities while enhancing localization and interpretability, thus surpassing the limitations of prior methods.", + "bbox": [ + 511, + 382, + 906, + 729 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.2. 
Multi-modal Reasoning", + "text_level": 1, + "bbox": [ + 513, + 741, + 732, + 758 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Large Multimodal Models [44], trained with supervised instruction-tuning (SFT), exhibit generalized capabilities such as free-form dialogue and question answering; however, they fall short in addressing complex challenges that often require the reasoning abilities of LLMs [85]. (i) One approach to overcome this is to develop agent-based interfaces [26, 99], which integrates textual outputs from multiple visual tools to enable language reasoning via LLMs. Advanced methods [16, 69, 94] leverage strategies like", + "bbox": [ + 511, + 763, + 906, + 902 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "2", + "bbox": [ + 493, + 924, + 504, + 935 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/f5bb6e36260c2c2160aa328075154590972fbb1ceb19c8d760b788cbc832c701.jpg", + "image_caption": [ + "Figure 2. The overall workflow of VideoMind. Given a video and a query, VideoMind adaptively activates different roles (Planner $\\rightarrow$ Grounder $\\rightarrow$ Verifier $\\rightarrow$ Answerer in this case) and perform step-by-step reasoning by calling individual modules." + ], + "image_footnote": [], + "bbox": [ + 140, + 87, + 861, + 354 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Codex or ReAct [96] to invoke visual APIs (e.g., detectors, captioners) through progressive execution and reasoning. (ii) Alternatively, pure text-based reasoning [2, 19] has been a dominant paradigm in LLMs [85, 99], exemplified by training with long CoT processes using Reinforcement Learning, which provides detailed, step-by-step readable reasoning, with some works [89, 106] extending this to the visual domain for complex mathematical or scientific problems. Despite these advances, extending reasoning to videos across temporal dimensions remains an open challenge. Given the long-context nature of informative videos, we think that a video-centric CoT should incorporate a human-like re-watch strategy and self-validation of intermediate observations, leading us to introduce a novel Chain-of-LoRA framework for video reasoning.", + "bbox": [ + 88, + 412, + 485, + 640 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.3. Inference-time Searching", + "text_level": 1, + "bbox": [ + 89, + 651, + 321, + 667 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Inference-time searching has emerged as a critical technique for tackling complex reasoning and planning challenges in domains like robotics [20, 80], games [67], and navigation [72], distinct from training-time strategies as it optimizes model behavior during inference rather than model parameters during training. The advent of OpenAI o1 has advanced these inference-time techniques within LLMs by integrating sampling strategies such as controlled decoding [5, 91], Best-of-N sampling [32, 36], and Monte Carlo Tree Search (MCTS) [74, 76, 81, 100], allowing LLMs to iteratively refine outputs and achieve superior performance without altering their underlying weights. However, the potential of inference-time searching remains largely untapped in video understanding, where temporal reasoning introduces unique challenges. 
In our framework,", + "bbox": [ + 89, + 674, + 483, + 901 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "we explore how MCTS can be tailored for video temporal reasoning, observing that models are highly sensitive to the selection of temporal segments, often producing unreliable predictions when segment choices are suboptimal. To address this, we propose a video moment-level MCTS approach where a Grounder generates multiple segment candidates, followed by a Verifier that evaluates and determines the correct correspondence, validating that this strategy significantly enhances temporal localization accuracy and robustness across diverse video contexts.", + "bbox": [ + 511, + 412, + 906, + 561 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. VideoMind", + "text_level": 1, + "bbox": [ + 513, + 580, + 633, + 597 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Overview. Fig. 2 provides an overview of VideoMind. Our model derives from the widely adopted Qwen2-VL [77] architecture, which consists of an LLM backbone and a ViT-based visual encoder that natively supports dynamic resolution inputs. Given a video input $\\mathcal{V}$ and a text query $\\mathcal{Q}$ , the model adaptively activates different roles and performs step-by-step reasoning by calling individual modules: (i) Planner: Dynamically coordinates the following roles based on the query. (ii) Grounder: Identifies and localizes relevant video moments. (iii) Verifier: Evaluates the validity of the moments identified by Grounder, refining them through a zoom-in process with boolean outputs. (iv) Answer: Generates the final response in natural language. This mechanism enables the models to revisit the videos several times (with varying temporal segments & spatial resolutions) to derive the final response.", + "bbox": [ + 511, + 608, + 906, + 851 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Chain-of-LoRA. To meet the diverse demands of different roles, we introduce a novel Chain-of-LoRA strategy. The model dynamically activates role-specific LoRA adapters", + "bbox": [ + 511, + 854, + 908, + 902 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3", + "bbox": [ + 493, + 924, + 504, + 936 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "[21] during inference via self-calling, ensuring both efficiency and adaptability. Next, we describe how we optimize each design and detail the curation of their SFT data.", + "bbox": [ + 89, + 90, + 480, + 136 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.1. Planner", + "text_level": 1, + "bbox": [ + 89, + 143, + 189, + 157 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In practice, an agent should be flexible enough to meet various demands and determine efficiently which function to call next. To this end, we introduce the Planner role, which dynamically coordinates all other roles for each query. The key idea is that the planner should decide the sequence of function calls based on the visual context.", + "bbox": [ + 89, + 166, + 482, + 255 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We formulate each function call as a JSON-style object {\"type\":, \"value\":