text string | source string |
|---|---|
reverse goal order, which failed to progress through the curriculum. Importance of Ascending Difficulty. Structuring curriculums with ascending difficulty is essential, as front-loading too many challenges can overwhelm agents. For instance:
• A curriculum with multiple versions of a goal arranged by ascending difficulty outperforms using just the goal with the most difficult metric, improving average reward by 26%.
• Curriculums that include both basic and applied skills achieved a 15% higher average reward than those without skill differentiation.
Handling Atypical Adversaries. Starting curriculums with exclusive exposure to atypical adversaries (e.g., the inactive red agent) can completely derail training. Agents exposed to adversaries in this way developed policies that failed to generalize to the main set of adversaries, leading to cycles of demotion and promotion when they encountered their first typical adversary. To mitigate this, it is best to mix experiences with atypical adversaries heterogeneously throughout training. This approach improved average reward by 10–15%. [Training RL Agents for Multi-Objective Network Defense Tasks, A PREPRINT] Figure 17: Performance comparison of three policies. One policy corresponds to training with the fixed task selection strategy in Section 4.1. Another policy corresponds to training with dynamic task selection, also described in Section 4.1. For reference, we compare the performance of these policies with a trivial policy in which the blue agent does nothing. Note that it is possible to achieve worse performance than the trivial policy if, in addition to allowing security compromises, the QoS is degraded. The color of the markers represents the difficulty of the task during the corresponding training iteration, derived from the number of hosts, subnets, and variability of red and gray behaviors. 
Figure 18: Performance comparison of multiple task selection strategies. One strategy uses difficulty levels to drive selection of tasks with increasing difficulty. Other task selection strategies include choosing tasks uniformly at random and choosing tasks via small variations ("smoothly"). While the average reward for "smooth" task updates is higher, the difficulty of the task mastered is higher with the approach that uses difficulty levels. The color of the markers represents the difficulty of the task during the corresponding training iteration, derived from the number of hosts, subnets, and variability of red and gray behaviors. Figure 19: Performance comparison of two policies. One policy was trained via a fixed task, with the goal of mitigating a traditional exfiltration attack. The other was trained via dynamic task selection, in which the blue agent learns to mitigate an exfiltration attack where the red agent interleaves gray-like actions between traditional red actions with some probability. The color of the markers represents the difficulty of the task during the corresponding training iteration, derived from the number of hosts, subnets, and variability of red and gray behaviors. Figure 20: Comparison in learning performance between two learning strategies that differ only in the actions available to the agent. One learning strategy | https://arxiv.org/abs/2505.22531v1 |
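The curriculum findings above (ascending difficulty, heterogeneous mixing of atypical adversaries rather than front-loading them) can be sketched as a simple task scheduler. This is a hypothetical illustration, not the paper's implementation; the task dictionaries, the `difficulty` key, and the `mix_prob` rate are all assumptions made for the example.

```python
import random

def build_curriculum(tasks, atypical_tasks, mix_prob=0.15, seed=0):
    """Order tasks by ascending difficulty, interleaving atypical adversaries
    throughout training instead of placing them all at the start."""
    rng = random.Random(seed)
    schedule = []
    for task in sorted(tasks, key=lambda t: t["difficulty"]):
        # Heterogeneous mixing: occasionally insert an atypical adversary.
        if atypical_tasks and rng.random() < mix_prob:
            schedule.append(rng.choice(atypical_tasks))
        schedule.append(task)
    return schedule

tasks = [{"name": "hard", "difficulty": 3},
         {"name": "easy", "difficulty": 1},
         {"name": "medium", "difficulty": 2}]
atypical = [{"name": "inactive_red", "difficulty": 0}]
schedule = build_curriculum(tasks, atypical)
```

Sorting guarantees the ascending-difficulty property the paper found essential, while the random interleaving avoids the exclusive early exposure to atypical adversaries that derailed training.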
arXiv:2505.22533v1 [cs.LG] 28 May 2025 TabularQGAN: A Quantum Generative Model for Tabular Data Pallavi Bhardwaj∗1,3, Caitlin Jones∗2,3, Lasse Dierich1,4,5, Aleksandar Vučković3,6 1SAP SE, Walldorf, Germany 2BASF Digital Solutions, Ludwigshafen, Germany 3QUTAC, Quantum Technology and Application Consortium, Germany 4Technical University of Munich, CIT, Garching, Germany 5Ludwig-Maximilians-Universität München, Munich, Germany 6Merck KGaA, Darmstadt, Germany {pallavi.bhardwaj@sap.com} Abstract In this paper, we introduce a novel quantum generative model for synthesizing tabular data. Synthetic data is valuable in scenarios where real-world data is scarce or private; it can be used to augment or replace existing datasets. Real-world enterprise data is predominantly tabular and heterogeneous, often comprising a mixture of categorical and numerical features, making it highly relevant across various industries such as healthcare, finance, and software. We propose a quantum generative adversarial network architecture with flexible data encoding and a novel quantum circuit ansatz to effectively model tabular data. The proposed approach is tested on the MIMIC III healthcare and Adult Census datasets, with extensive benchmarking against leading classical models, CTGAN and CopulaGAN. Experimental results demonstrate that our quantum model outperforms classical models by an average of 8.5% with respect to an overall similarity score from SDMetrics, while using only 0.072% of the parameters of the classical models. Additionally, we evaluate the generalization capabilities of the models using two custom-designed metrics that demonstrate the ability of the proposed quantum model to generate useful and novel samples. To our knowledge, this is one of the first demonstrations of a successful quantum generative model for handling tabular data, indicating that this task could be well suited to quantum computers. 
1 Introduction Recent progress in quantum computing research, for both hardware [1, 2] and algorithmic [3, 4] aspects, has been promising. It remains to be shown whether quantum computers are universally faster, more energy efficient, or otherwise more useful than classical computers across an entire set of problems, apart from the areas where an advantage has been successfully proven: factorization [5], unstructured search [6], and quantum simulation [7]. In this work, we investigate quantum machine learning (QML) models, a class of machine learning models that incorporate quantum computing into a large variety of machine learning architectures and tasks, including neural networks [8], reinforcement learning [9], transformers [10, 11], image classification [12], and, the subject of this work, unsupervised generative models [13]. A recent overview of the field is given in [14]. (∗ Main authors.) As of now, there is no demonstration of QML providing a robust and repeatable advantage over classical methods for practically useful problems [15]. However, a body of evidence suggests that the increased expressivity (i.e., the fraction of the sample space that can be effectively explored by a variational model) of quantum models may yield the same or better performance with a large reduction in the number of parameters required for certain tasks, such as generative learning [16]. Generative QML methods such as Quantum Circuit Born Machines (QCBM) [17, 18] and Quantum GANs (QGAN) [19, 20, 21] have demonstrated training performance comparable to classical models while requiring fewer parameters [22]. QML may be particularly well suited to generative tasks, as the fundamental | https://arxiv.org/abs/2505.22533v1 |
task of a quantum computer is to produce a probability distribution that is sampled from, which is also what is required for a generative task [23]. Generative models are useful in cases where amplifying a sample population with generated samples can lead to improved statistics of rare events [24] (to be used in anomaly detection, fraud identification, and market simulations), to improve generative design pipelines (for drug discovery and personalized assistants [25]), and to alleviate data privacy concerns by sharing synthetic instead of confidential data [26, 27]. The majority of generative QML research has been on homogeneous data, e.g., image and text data, but business-relevant data is often heterogeneous tabular data, i.e., data that has numeric, categorical, as well as binary features. A prominent example is electronic health records (EHR), which are collections of heterogeneous patient data [28, 29]. Other examples include human-resources-related data and chemical structures. Previous work investigated the use of quantum kernel models in a classification setting on EHR data [30], as well as the use of classical GANs to model EHR data (medGAN), with a reduction of continuous features to a discrete latent space via autoencoding [31]. In addition, the generation of heterogeneous time series data has been investigated, employing variational autoencoders to map the data to and from a smaller latent space for training [32]. In this work, we introduce a novel method for generating tabular data with a QGAN using a custom ansatz (quantum circuit) that does not require additional autoencoding or feature reduction. The model is an adaptation of the model presented in [22]. The architecture makes use of a new approach to model the one-hot vectors representing mutually exclusive categorical features and is, by construction, well suited to represent numerical features. | https://arxiv.org/abs/2505.22533v1 |
We perform hyperparameter optimization and benchmark against classical models [33, 34] on subsets of two datasets, MIMIC III [35, 36] and Adult Census [37]. We find that the best-found configurations of the quantum tabular model outperformed the classical models for both datasets. In section 2.1 we introduce the mathematical framework of quantum generative models and our approach to modelling one-hot vectors; in section 2.2 we describe how data is encoded into the quantum circuit; in section 2.3 we outline the specific architecture of our quantum generative model and how it is trained, followed by an analysis of the resources required from a quantum computer in section 2.4. In section 2.6, the benchmarking and evaluation metrics are outlined. In section 3.1, details of the datasets are given, followed in section 3.2 by the hyperparameter optimization procedure. The experimental configuration and numerical results are presented in section 3.3. Finally, a discussion of the results, the limitations of our approach, and an outlook are presented in section 4. 2 Methodology 2.1 Quantum Generative Models and Variational Quantum Circuits Any generative model is expected to possess two key capabilities: first, the ability to accurately learn the ground truth probability distribution of a provided training dataset, and second, the ability to generalize in order to generate novel samples. In this work, we primarily focus on the | https://arxiv.org/abs/2505.22533v1 |
ability to learn the ground truth probability distribution, and propose a quantum–classical generative adversarial network (QGAN), which has a structure similar to that of a conventional GAN, with the generator implemented using a variational quantum circuit (VQC) [38] and the discriminator realized using a classical neural network (Figure 2). A VQC is a sequence of parametrized unitary matrices (gates) $U_m(\theta_m) \in \mathbb{C}^{2^n \times 2^n}$ which prepare a quantum state $|\Psi(\{\theta_m\}_m)\rangle = \prod_m U_m(\theta_m)\,|\Psi_0\rangle$ (1), starting from an $n$-qubit initial quantum state $|\Psi_0\rangle$ of the finite-dimensional Hilbert space $\mathbb{C}^{2^n}$, where $U_m$ is the gate at index $m$ and $\theta_m$ its scalar parameter. The computational basis $\{|i_1 i_2 \ldots i_n\rangle\}_{i_1,\ldots,i_n \in \{0,1\}} := \{|i_1\rangle \otimes |i_2\rangle \otimes \cdots \otimes |i_n\rangle\}_{i_1,\ldots,i_n \in \{0,1\}}$ (2) is an orthonormal basis for this space, and $\langle i_1 i_2 \ldots i_n|$ is the conjugate transpose of the vector $|i_1 i_2 \ldots i_n\rangle$. Quantum mechanics allows the quantum state $|\Psi(\{\theta_m\}_m)\rangle$ to be a superposition of multiple basis states, which collapses with probability $p(i_1 i_2 \ldots i_n) = |\langle i_1 i_2 \ldots i_n | \Psi(\{\theta_m\}_m)\rangle|^2$ (3) into the state $|i_1 i_2 \ldots i_n\rangle$ upon measurement. To employ the VQC as a generator $G$, we encode the rows of a given tabular training set into bitstrings $i_1 i_2 \ldots i_n$ and aim to find a set of parameters $\{\theta_m\}_m$ such that the probability distribution $p(i_1 i_2 \ldots i_n)$ over all bitstrings approximates the underlying probability distribution of the training data. In contrast, the role of the classical discriminator $D: \mathbb{C}^{2^n} \to (0,1)$ is to distinguish whether a bitstring $x$ is a genuine or a synthetic sample. In our experimental evaluation, we consider tabular datasets comprising both numerical and categorical features. Numerical features are modelled using the discrete circuit architecture from [22], while categorical variables are encoded through the application of Givens rotations. As demonstrated in [39], controlled single-excitation gates implemented as Givens rotations form a universal gate set for particle-conserving unitaries in quantum chemistry. 
Givens rotations are unitary transformations within a designated subspace of a larger Hilbert space, and we adapt these rotations to preserve the one-hot encoding intrinsic to categorical features. In quantum systems with a fixed excitation number, these rotations facilitate transitions only among basis states that maintain the total number of excitations. For example, in a system comprising $n$ qubits with exactly $k$ excitations, the relevant subspace is spanned by all states in which exactly $k$ qubits are in the excited state $|1\rangle$ and the remaining $n-k$ qubits are in the ground state $|0\rangle$. The dimensionality of this subspace is $d = \binom{n}{k}$. To illustrate, consider the encoding of a categorical feature with three distinct categories using one-hot encoding. The encoding is represented by a system of $n = 3$ qubits and $k = 1$ excitations. An arbitrary rotation among the states $|x_1\rangle = |001\rangle$, $|x_2\rangle = |010\rangle$, $|x_3\rangle = |100\rangle$, while leaving other states unchanged, must result in a superposition that strictly maintains the one-hot encoding. The advantage of such a rotation is that it enforces the natural symmetry of one-hot encoding. Any state, given a reference state, can be written as $|\psi\rangle = \sum_{i=1}^{d} c_i |x_i\rangle$. The method for preparing such states is described in [39]. Definition 1 (Hilbert Space Reduction via Givens Rotations). Let $\mathcal{H} = (\mathbb{C}^2)^{\otimes n}$ denote the Hilbert space of an $n$-qubit system and define the particle-conserving subspace $\mathcal{H}_k \subset \mathcal{H}$ to consist of all states with exactly $k$ excitations (i.e., precisely $k$ qubits in the state $|1\rangle$ and the remainder in state $|0\rangle$); the dimension of this subspace is $d = \binom{n}{k}$. Then, any unitary operator $U$ acting on $\mathcal{H}$ that conserves the total number of excitations (i.e., | https://arxiv.org/abs/2505.22533v1 |
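As a concrete check of Definition 1, a two-level Givens rotation can be built as a full $2^n$-dimensional matrix that mixes only two computational basis states; applied to a one-hot state, it keeps all amplitude inside the Hamming-weight-1 subspace. A minimal numpy sketch, where the function name and indexing convention are ours rather than the paper's:

```python
import numpy as np

def givens_unitary(n_qubits, a, b, theta, phi=0.0):
    """Two-level unitary G_ab(theta, phi) on the full 2**n_qubits space,
    mixing basis states |a> and |b> and acting as the identity elsewhere
    (cf. the Givens rotation of Definition 1)."""
    dim = 2 ** n_qubits
    G = np.eye(dim, dtype=complex)
    G[a, a] = np.cos(theta)
    G[a, b] = -np.exp(1j * phi) * np.sin(theta)
    G[b, a] = np.exp(-1j * phi) * np.sin(theta)
    G[b, b] = np.cos(theta)
    return G

# Rotate within the one-hot (Hamming-weight-1) subspace of 3 qubits:
# |x1> = |001> (index 1), |x2> = |010> (index 2), |x3> = |100> (index 4).
state = np.zeros(8, dtype=complex)
state[4] = 1.0                     # reference state |100>
G = givens_unitary(3, 4, 2, 0.7)   # mix |100> and |010>
rotated = G @ state
```

The resulting state has support only on one-hot basis states, which is exactly the constraint the paper exploits for categorical registers.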
$[U, N] = 0$, where $N$ is the excitation number operator) can be decomposed into a product of two-level unitary operators known as Givens rotations, $G_{ij}(\theta, \varphi)$, such that $U = \prod_m G_{i_m j_m}(\theta_m, \varphi_m)$ (4), where $G_{ij}(\theta, \varphi) = \begin{pmatrix} \cos\theta & -e^{i\varphi}\sin\theta \\ e^{-i\varphi}\sin\theta & \cos\theta \end{pmatrix}$ (5) is a gate acting on the subspace of qubits $i$ and $j$, and the index $m$ runs over all the gates in the circuit. This decomposition reduces the effective dimensionality of the problem from $2^n$ to $\binom{n}{k}$, significantly reducing the parameter space of variational quantum simulations. Givens rotations were originally proposed as gates for quantum chemistry, where conserving the electron number is critical for accurately representing molecular electronic states. Any state in the subspace $\mathcal{H}_k$ can be written as a linear combination of an orthonormal basis $\{|x_i\rangle\}_{i=1}^{\binom{n}{k}}$, where each $|x_i\rangle$ is a distinct bitstring of Hamming weight $k$. Since $U$ is a particle-conserving unitary, its action is restricted to $\mathcal{H}_k$. Eqs. 4 and 5 show that any unitary transformation on a finite-dimensional space can be decomposed into a product of two-level unitary operations (Givens rotations). Each $G_{ij}(\theta, \varphi)$ only affects the amplitudes of the basis states $|x_i\rangle$ and $|x_j\rangle$ without altering any other state, which ensures that the overall transformation remains within $\mathcal{H}_k$. 2.2 Encoding In this section, we introduce how the tabular data is encoded into quantum states via basis encoding. Each data sample is mapped to a bitstring of length $n$, which is split into the numerical register, containing ordered variables, and the categorical register, containing unordered variables. All numerical variables $x$ are partitioned into $2^N$ equal-width bins, where $N$ is the number of qubits allocated to $x$ (the qubit budget). The index $i \in \{0, \ldots, 2^N - 1\}$ of the respective bin is represented by a computational-basis state $|b_{N-1} b_{N-2} \ldots b_0\rangle$ (6), where $(b_{N-1} \ldots b_0)$ is the $N$-bit binary expansion of index $i$. 
Categorical features with multiple classes ($c > 2$) are one-hot encoded using a dedicated $c$-qubit subregister, and binary features ($c = 2$) are encoded either as Boolean with one qubit ($|0\rangle$ vs. $|1\rangle$) or one-hot with two qubits ($|10\rangle$ vs. $|01\rangle$). This dual-encoding strategy yields two distinct circuit topologies, as shown in Figure 1. The full input register is obtained by concatenating the subregisters for each feature. If feature $f$ uses $N_f$ qubits, then an entire record is represented in the computational basis as $|\underbrace{b^{(1)}_{N_1-1} \ldots b^{(1)}_0}_{\text{feature 1}}\, \underbrace{b^{(2)}_{N_2-1} \ldots b^{(2)}_0}_{\text{feature 2}} \cdots\rangle$ (7). An explicit example of this encoding can be found in Appendix A.1. We also tested a binary encoding for the categorical features; however, it performed significantly worse in the benchmarking, as shown in Appendix A.2. 2.3 Quantum Generator We propose two variational circuit designs for the quantum generator: a non-Boolean and a Boolean design. In both designs, the circuit consists of a single $n$-qubit numerical register into which all numerical features are binary encoded, followed by multiple categorical registers, ordered by the qubit count or number of categories they represent. In the non-Boolean design, all categorical features are one-hot encoded (Figure 1), whereas in the Boolean design, the encoding of binary categories can be optimized by replacing the one-hot encoding with a Boolean encoding and merging the Boolean variable into the numerical register. The Boolean design saves one qubit per two-category feature. For example, a register configuration in the | https://arxiv.org/abs/2505.22533v1 |
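The basis encoding of Section 2.2 (equal-width binning into an $N$-bit numerical register, one-hot subregisters for categoricals, then concatenation as in Eq. 7) can be sketched as follows. The feature range and category list are made up for illustration:

```python
def encode_numeric(x, lo, hi, n_qubits):
    """Map x in [lo, hi) to one of 2**n_qubits equal-width bins and return
    the bin index as an n_qubits-bit string (basis encoding, cf. Eq. 6)."""
    n_bins = 2 ** n_qubits
    idx = min(int((x - lo) / (hi - lo) * n_bins), n_bins - 1)
    return format(idx, f"0{n_qubits}b")

def encode_one_hot(value, categories):
    """One-hot encode a categorical value on len(categories) qubits."""
    return "".join("1" if c == value else "0" for c in categories)

def encode_record(age, workclass):
    # Hypothetical [n5, c3] register: a 5-qubit numeric feature (age in
    # [0, 100)) concatenated with a 3-category one-hot subregister.
    return (encode_numeric(age, 0, 100, 5)
            + encode_one_hot(workclass, ["Private", "Govt", "Self-employed"]))
```

The concatenated bitstring is the computational-basis state the generator is trained to reproduce; decoding reverses the same two maps.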
non-Boolean circuit might be denoted as [n5, c3, c2], where n5 represents a numerical register with five qubits, c3 is a three-qubit one-hot register, and c2 is a two-qubit one-hot categorical register. In the Boolean circuit design, the same configuration could be simplified to [n6, c3], where the binary category is absorbed into the numerical register. Circuit Design: The upper part of the circuit represents an $n$-qubit numerical register, which consists of a layer of RY rotations on each qubit, followed by pairwise IsingYY gates and controlled RY rotations. The lower part of the circuit consists of multiple categorical registers. Each categorical register is initialized by an X gate to prepare a reference state such as $|1000\rangle$, followed by pairwise single-excitation gates. The entanglement between different registers is established by controlled single-excitation gates to learn correlations between different features. 2.4 Qubit and gate complexity Let a variational block be applied to a configuration [n, c1, c2, ..., c(R−1)] with $R$ registers (1 numerical + $R-1$ categorical). The number of gates required for the numerical register ($n$ qubits) is $g_{\text{num}}(n) = \underbrace{n}_{\text{RY}} + \underbrace{(n-1)}_{\text{Ising-YY}} + \underbrace{(n-1)}_{\text{c-RY}} = 3n - 2$ (8). For each categorical register of size $c_i$, the number of gates equals the number of qubits in the register: $g_{\text{cat}}(c_i) = \underbrace{1}_{\text{X-prep}} + \underbrace{(c_i - 1)}_{\text{single-excitation}} = c_i$ (9). Cross-register entanglers are controlled single-excitation gates between each adjacent pair of registers: $g_{\text{cross}} = R - 1$ 
(10). Hence, the total gate count is $g_{\text{total}} = g_{\text{num}}(n) + \sum_{i=1}^{R-1} g_{\text{cat}}(c_i) + g_{\text{cross}} = (3n - 2) + \sum_{i=1}^{R-1} c_i + (R - 1)$ (11). Example: for [n5, c3, c2] ($R = 3$), $g_{\text{num}}(5) = 3 \cdot 5 - 2 = 13$, $g_{\text{cat}}(3) = 3$, $g_{\text{cat}}(2) = 2$, and $g_{\text{total}} = 13 + 3 + 2 + 2 = 20$. Let $m = \sum_{i=1}^{R-1} c_i$ be the total number of categorical qubits, $R$ the total number of registers, and $N = n + m$ the overall qubit count. Since $R \leq n + m$, we have $g_{\text{total}} = 3n - 2 + m + (R - 1) = O(n + m + R) = O(N)$ (12). Thus, for fixed or slowly growing $R$, the total gate count scales linearly with the total number of qubits $N$. 2.5 Training QGAN Training of our TabularQGAN model proceeds by alternately updating a three-layer classical discriminator with a sigmoid output and the quantum generator. The training pipeline is described in Figure 2. First, a batch of $m$ training samples is initialized to $|0\rangle$ and encoded into the $n$-qubit numerical register and the categorical registers. The resulting state $|\psi(\theta, z)\rangle$ is measured in the computational basis to yield bitstrings $x' = b_{n-1} b_{n-2} \ldots b_0$, which are then mapped back to numerical and categorical values. These synthetic samples $x'$ are fed into a classical feed-forward network $D(x'; \phi)$ that outputs $D(x') \in (0,1)$, estimating the probability that the input is real. During each training iteration, we first update the parameter vector $\phi$ of the discriminator by minimizing the objective function [40] $L_D(\phi) = -\frac{1}{m} \sum_{i=1}^{m} \log D(x_i; \phi) - \frac{1}{m} \sum_{j=1}^{m} \log\big(1 - D(x'_j; \phi)\big)$ (13), where $\{x_i\}$ are real records and $\{x'_j\}$ are generator outputs. Next, we update the generator by fixing $\phi$ and minimizing $L_G(\theta) = -\frac{1}{m} \sum_{j=1}^{m} \log D(x'_j; \phi)$ (14). [Figure 1 circuit diagrams: qubit wires 0–9 with X preparation, RY, IsingYY, and single-excitation (G) gates across registers n5, c3, c2.] | https://arxiv.org/abs/2505.22533v1 |
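The gate-count bookkeeping of Eqs. 8–11 is easy to verify programmatically; a small helper reproducing the paper's [n5, c3, c2] example (the function name is ours):

```python
def total_gate_count(n, cat_sizes):
    """Gate count for a [n, c_1, ..., c_{R-1}] layout:
    g_num = 3n - 2 (RY + Ising-YY + controlled-RY, Eq. 8),
    g_cat(c) = c (X preparation + single excitations, Eq. 9),
    g_cross = R - 1 (one entangler per adjacent register pair, Eq. 10)."""
    R = 1 + len(cat_sizes)
    g_num = 3 * n - 2
    g_cat = sum(cat_sizes)
    g_cross = R - 1
    return g_num + g_cat + g_cross
```

For [n5, c3, c2] this gives 13 + 3 + 2 + 2 = 20 gates, matching the worked example, and the count grows linearly in the total qubit count as Eq. 12 states.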
(a) Non-Boolean design; (b) Boolean design. Figure 1: Non-Boolean circuit design (a) and Boolean circuit design (b) for a [n5, c3, c2] register. The model layer can be repeated $d$ times to obtain a depth-$d$ circuit. All qubits are measured in the computational basis to obtain a bitstring generated by the model. Here, the Boolean circuit design saves one qubit by treating the two-category feature as a Boolean variable and merging it into the numerical register. Minimizing $L_G$ thereby encourages $G(\theta)$ to produce samples that the discriminator labels as real. Gradients with respect to $\phi$ are computed via standard backpropagation. Gradients with respect to the quantum parameters $\theta$ are obtained using the parameter-shift rule [41, 42]: for each parameter $\theta$, the gradient is evaluated as the derivative of the observable expectation $\langle M \rangle$: $\frac{\partial \langle M \rangle}{\partial \theta} = \frac{1}{2} \big[ \langle M \rangle_{\theta + \pi/2} - \langle M \rangle_{\theta - \pi/2} \big]$ (15), where $M = D \circ \text{measure}$ on $|\psi(\theta, z)\rangle$ [43]. The training continues for $T$ epochs or until convergence, monitored via the discriminator loss plateau and sample fidelity metrics (e.g., KL divergence). At convergence, the quantum generator has learned to produce synthetic records indistinguishable from real tabular data by the classical discriminator. The training procedure is formally presented in Algorithm 1. 
[Figure 2 diagram: batches of encoded records (e.g., "Below-50K, Self-employed" mapped to bitstrings) flow into a classical discriminator built from fully connected layers with Leaky ReLU activations and a sigmoid output in (0, 1); steps: (1) load batch and generate samples from single-shot measurements, (2) compute discriminator loss and update ϕ, (3) single-shot measurement for each ±π/2 parameter shift, (4) compute parameter gradients and update θ.] Figure 2: Schematic diagram of TabularQGAN training. In Step 1, either a batch of training data or a batch of synthetic samples (obtained from single-shot measurements) is fed to the discriminator. In Step 2, the discriminator attempts to distinguish between real and fake samples, and its parameters ϕ are updated based on the gradient of the discriminator loss $L_D$. In Step 3, a sample is generated for each parameter shift, and the discriminator with fixed parameters ϕ is used to compute the gradient of the parameters according to the parameter-shift rule. In Step 4, the generator parameters θ are updated based on their gradient. 
Algorithm 1 TabularQGAN Training Algorithm
Initialize: generator parameters θ, discriminator parameters ϕ, batch size m, learning rates η_G, η_D, total epochs/training steps T, discriminator steps k_D
1: for t = 1 to T do
2:   Discriminator training:
3:   for i = 1 to k_D do
4:     Sample a batch of m examples {x_1, ..., x_m} ~ p_real
5:     Generate synthetic data:
6:       Prepare quantum state |ψ⟩ ← G(θ)
7:       Measure and decode: x' ← Measure(|ψ⟩)
8:     Compute discriminator loss: $L_D(\phi) = -\frac{1}{m} \sum_{j=1}^{m} \log D(x_j; \phi) - \frac{1}{m} \sum_{k=1}^{m} \log(1 - D(x'_k; \phi))$
9:     Update discriminator weights: ϕ ← ϕ − η_D ∇_ϕ L_D(ϕ)
10:  end for
11:  Generator training:
12:    Generate synthetic data:
13:      Prepare quantum state for each (±π/2) parameter shift: |ψ⟩ ← G(θ)
14:      Measure and decode: x'_j ← Measure(|ψ⟩)
15:    Compute generator loss: $L_G(\theta) = -\frac{1}{m} \sum_{j=1}^{m} \log D(x'_j; \phi)$
16:    Update generator parameters with the parameter-shift rule: θ ← θ − η_G ∇_θ L_G(θ)
17:  Model evaluation:
18:    Compute the KL divergence between real and | https://arxiv.org/abs/2505.22533v1 |
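The two loss functions (Eqs. 13–14) and the parameter-shift rule (Eq. 15) can be exercised on toy inputs. For a gate generated by a Pauli operator the expectation is sinusoidal in the parameter, so the shift rule is exact; the sinusoidal toy expectation below stands in for ⟨M⟩ and is our assumption, not the paper's actual circuit:

```python
import math

def d_loss(d_real, d_fake):
    """Eq. (13): discriminator binary cross-entropy over a batch of
    D-outputs on real samples (d_real) and synthetic samples (d_fake)."""
    m = len(d_real)
    return (-sum(math.log(p) for p in d_real) / m
            - sum(math.log(1.0 - p) for p in d_fake) / m)

def g_loss(d_fake):
    """Eq. (14): generator loss, pushing D(x') toward the 'real' label."""
    return -sum(math.log(p) for p in d_fake) / len(d_fake)

def parameter_shift_grad(expval, theta):
    """Eq. (15): d<M>/dtheta = (<M>_{theta+pi/2} - <M>_{theta-pi/2}) / 2."""
    return 0.5 * (expval(theta + math.pi / 2) - expval(theta - math.pi / 2))
```

For expval = sin, the rule returns cos(θ) exactly, illustrating why two shifted evaluations per parameter suffice instead of finite differences.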
generated distributions, logging it each epoch for analysis: $D_{\mathrm{KL}}(P_{\mathrm{real}} \,\|\, P_{\mathrm{gen}}) = \sum_x P_{\mathrm{real}}(x) \log \frac{P_{\mathrm{real}}(x)}{P_{\mathrm{gen}}(x)}$
20: end for
2.6 Evaluation and Benchmarking Benchmarks: We benchmark our quantum generator against two classical baselines, CTGAN and CopulaGAN. CTGAN is introduced in [33] and CopulaGAN via the SDV library [34]. These models are both adaptations of the well-known GAN architecture [44] with additional data preprocessing techniques. These classical benchmarks are chosen as they are also designed specifically for tabular data. All models are trained by minimizing the loss function in Eq. 13. Evaluation Metrics: We evaluate the performance of the models using three complementary measures: an overall similarity score from SDMetrics [45], the overlap fraction between the training data and synthetic samples, and a final metric that measures downstream predictive performance on generated data. The SDMetrics overall similarity score [45] is an average over two types of components: column-wise and pairwise metrics. The first is a column-wise measure of univariate marginals, the column shape similarity (see Appendix A.3). The second component is column pair trends, which capture bivariate relations. In addition to these statistical similarity metrics, we also evaluate two measures of the generalization performance of the models. The overlap fraction between the training data and the synthetic data is defined as $1 - (U_R - U_S)/U_R$, where $U_R$ ($U_S$) is the count of unique rows in the training (synthetic) data. Hence, an overlap fraction of one would imply that all samples in the synthetic data were also found in the training data. The second generalization metric, which we call the downstream score, measures how well synthetic data can effectively replace real data in a classical supervised learning task. 
For each dataset and feature–target combination, we train an XGBoost model [46] on both real and synthetic data using identical hyperparameters and training procedures. If the target is categorical, we report the classification accuracy, and for numerical columns we report the coefficient of determination $R^2$. The downstream score is then defined as the absolute difference between these two performance values: Downstream Score $= |\mathrm{Score}_{\mathrm{real}} - \mathrm{Score}_{\mathrm{synthetic}}|$. A near-zero score indicates that the synthetic data faithfully preserves the predictive relationships found in the original dataset. These metrics collectively quantify the quality and usability of synthetic samples across statistical and downstream learning dimensions. 3 Experimental results 3.1 Dataset We evaluate our model on two standard datasets, the MIMIC-III clinical dataset [47] and the Adult Census income dataset [37]. MIMIC-III is publicly available, anonymized health-related data with 100k samples from over forty thousand patients, and the Adult Census dataset contains income records of adults by age, education, workclass, etc., with 35k samples. Each dataset contains numerical and categorical features and is divided into 10- and 15-qubit configurations. The number of qubits and the features used for each dataset configuration are shown in Table 1. Table 1: Overview of datasets used in experiments, including the number of qubits and the numerical and categorical features.
Adult Census, 10 qubits: numeric Age; categorical Income, Education.
Adult Census, 15 qubits: numeric Age; categorical Workclass, Education.
MIMIC, 10 qubits: numeric Age; categorical Gender, Admission type.
MIMIC, 15 qubits: numeric Age, Admission time; categorical Gender, Admission type.
3.2 | https://arxiv.org/abs/2505.22533v1 |
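The two generalization metrics can be sketched in a few lines. The overlap-fraction formula in Section 2.6 is ambiguous as printed, so the version below implements the stated property that an overlap of one means every synthetic row also occurs in the training data; this is our reading rather than a verbatim transcription:

```python
def overlap_fraction(real_rows, synthetic_rows):
    """Fraction of unique synthetic rows that also appear in the training
    data; 1.0 means no novel samples were generated (cf. Section 2.6)."""
    real = set(map(tuple, real_rows))
    synth = set(map(tuple, synthetic_rows))
    return len(synth & real) / len(synth)

def downstream_score(score_real, score_synthetic):
    """Absolute gap between models trained on real vs. synthetic data;
    near zero means the predictive relationships are preserved."""
    return abs(score_real - score_synthetic)
```

For low-dimensional datasets the overlap is expected to be high regardless of model quality, which is the caveat the results section raises for MIMIC.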
Hyperparameter Optimization Hyperparameter optimization was performed over all four datasets for both the quantum and classical models. The optimization was performed via a grid search over the hyperparameters circuit depth, batch size, generator learning rate, discriminator learning rate, and, for the classical models only, layer width. The layer width was either a set value of 256 or dataset dependent, as twice the dimension of the training data. Some of the hyperparameter ranges differ between the classical models and the TabularQGAN; this was due to initial experiments indicating which ranges lead to better performance. Each model configuration was repeated five times with a different random seed. The values of the parameters varied are shown in Table 2. For each model, the best hyperparameter settings were selected with respect to the overall metric defined in section 2.6 and can be found in Appendix A.4. For the quantum model, in addition, the best of the 3000 epochs was selected (this was not possible for the classical models, as only the parameters for the final epoch were accessible). 3.3 Results In this section, we discuss the results of the quantum and classical models with respect to the metrics introduced in Section 2.6. Our experiments are conducted using BASF's HPC cluster Quriosity, on CPU nodes. All quantum models are executed on noiseless state-vector simulations using the PennyLane library. Additional key machine learning libraries used were PyTorch and JAX. Although we described two circuit topologies in Section 2.3, our experiments across four datasets indicate that Boolean and non-Boolean Table 2: Hyperparameter settings for each model type, including circuit depth, batch size, learning rates for both the discriminator and generator, layer width, and the number of training epochs. 
Table 2 grid values:
TabularQGAN: circuit depth 1, 2, 3, 4; batch size 10, 20; discriminator learning rate 0.05, 0.1, 0.2; generator learning rate 0.05, 0.1, 0.2; layer width: n/a; epochs: 3000.
CTGAN: depth 1, 2, 3, 4; batch size 10, 20; discriminator learning rate 0.001, 0.01, 0.05; generator learning rate 0.001, 0.01, 0.05; layer width: 256; epochs: 1500.
CopulaGAN: depth 1, 2, 3, 4; batch size 10, 20; discriminator learning rate 0.001, 0.01, 0.05; generator learning rate 0.001, 0.01, 0.05; layer width: 256; epochs: 1500.
encodings yield comparable performance; see Appendix A.5 for more details. Therefore, in this section, we focus exclusively on the results obtained from Boolean-encoding models. The first column in Table 3 shows the results of the overall similarity metric for the different models. A score of 1 indicates perfect similarity between the probability distributions of the synthetic and training data sets, whereas 0 implies no similarity. Our TabularQGAN outperforms both classical models, and the performance of the CTGAN and CopulaGAN models is similar; see Figure 3. The black-box nature of GAN models makes it challenging to directly attribute the improved performance of the quantum GAN to specific architectural features. However, our hypothesis is that the enhanced expressivity of the quantum circuit and the constrained search space induced by Givens rotations contribute to its improved performance. Table 3: Best performing models with respect to the overall metric for each data set and model type (columns: overall metric, overlap fraction, downstream score, number of parameters). The number of parameters is for the best performing hyperparameter configuration. Adults | https://arxiv.org/abs/2505.22533v1 |
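The grid search described above over depth, batch size, and the two learning rates (with five seeds per configuration) can be sketched with itertools; `train_and_score` is a placeholder for training a model and returning the overall similarity metric, not an actual API from the paper:

```python
import itertools

# TabularQGAN ranges from Table 2.
TABULARQGAN_GRID = {
    "depth": [1, 2, 3, 4],
    "batch_size": [10, 20],
    "lr_discriminator": [0.05, 0.1, 0.2],
    "lr_generator": [0.05, 0.1, 0.2],
}

def grid_search(train_and_score, grid, n_seeds=5):
    """Exhaustive search: average each configuration over n_seeds runs and
    return the configuration with the best mean overall metric."""
    keys = list(grid)
    best_score, best_cfg = float("-inf"), None
    for values in itertools.product(*grid.values()):
        cfg = dict(zip(keys, values))
        mean = sum(train_and_score(cfg, seed) for seed in range(n_seeds)) / n_seeds
        if mean > best_score:
            best_score, best_cfg = mean, cfg
    return best_score, best_cfg
```

For the ranges above the grid has 4 x 2 x 3 x 3 = 72 configurations, i.e., 360 training runs at five seeds each.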
Census 10TabularQGAN 0.949 0.869 0.026 80 CTGAN 0.855 0.953 0.112 131,072 CopulaGAN 0.845 0.953 0.105 65,536 Adults Census 15TabularQGAN 0.930 0.820 0.038 104 CTGAN 0.848 0.925 0.117 131,072 CopulaGAN 0.836 0.913 0.096 60 MIMIC 10TabularQGAN 0.983 0.973 0.006 88 CTGAN 0.888 0.984 0.068 65,536 CopulaGAN 0.887 0.981 0.062 131,072 MIMIC 15TabularQGAN 0.964 0.784 0.133 37 CTGAN 0.938 0.770 0.107 262,144 CopulaGAN 0.924 0.757 0.118 131,072 The second column shows the results of the overlap fraction, which measures the number of unique rows in the synthetic data that are not present in the training data. Due to computational constraints, only a subsample of hyperparameter configurations is evaluated, the best and worst 10 models for each data and model type pair. If the overlap is one, then it implies that no novel samples are generated. The usefulness of the overlap metric for evaluating generalization is limited when applied to low-dimensional datasets such as three features for MIMIC 10 and four features for MIMIC 15. In these cases, the sample space of each data set is mostly covered by the training data set, so novel samples that still fit the underlying distribution are unlikely to be produced. However, our results show that each model does produce some novel samples, meaning they are not purely reproducing the training dataset. The number of parameters for the optimal configuration of the model varies across the model and data type but for all but one dataset the classical model has far more parameters. The values reported in the last column represent parameter count for the Boolean model configuration. Although the non-Boolean design employs fewer parameters, the difference is minimal. Hence, only the Boolean parameter counts are presented to maintain consistency with the results reported for other models. 
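The selection procedure above (a full grid search, five random seeds per configuration, scored by the overall metric of Section 2.6) can be sketched as follows. `train_and_evaluate` is a hypothetical stand-in for training one GAN configuration; the fake score it returns exists only to make the sketch runnable.

```python
import itertools
import statistics

# Hyperparameter grid from Table 2 (TabularQGAN row).
GRID = {
    "circuit_depth": [1, 2, 3, 4],
    "batch_size_pct": [10, 20],
    "lr_discriminator": [0.05, 0.1, 0.2],
    "lr_generator": [0.05, 0.1, 0.2],
}
SEEDS = range(5)  # each configuration is repeated five times


def train_and_evaluate(config, seed):
    """Hypothetical stand-in: train one model, return its overall metric."""
    # A real implementation would train the GAN here; we fake a score
    # that merely increases with circuit depth for illustration.
    return 0.5 + 0.1 * config["circuit_depth"] / 4 - 0.01 * seed


def grid_search(grid, seeds):
    """Exhaustive grid search, averaging the metric over repeated seeds."""
    best_config, best_score = None, float("-inf")
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        config = dict(zip(keys, values))
        score = statistics.mean(train_and_evaluate(config, s) for s in seeds)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score


config, score = grid_search(GRID, SEEDS)
```

In practice the per-seed scores would come from training the generator and discriminator to convergence and evaluating the synthetic samples against the training set.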
Figure 3: Plot of the overall metric for each hyperparameter configuration for each dataset. The spread of the points within each bar is artificially added to improve data visibility. It can be seen that the TabularQGAN model consistently outperforms the other models.

In an attempt to find a more qubit-efficient alternative to one-hot encoding, we introduce a Unique-Row-Index encoding and train a generator composed of a single numerical register to reproduce the distribution of row indices (see Appendix A.2). However, the performance of this approach is significantly lower than that achieved with the proposed one-hot encoding with Givens rotations. This suggests that a single numerical register is not a suitable circuit design for generating samples with categorical features. An analysis of the effect of circuit depth on model performance can be found in Appendix A.6.

4 Conclusion, Limitations and Outlook

In this work, we introduce an adaptation of a quantum GAN model for tabular data. It utilises a novel, flexible encoding protocol and circuit ansatz to account for both categorical and numeric data and to natively handle one-hot encoding. In our experiments, the TabularQGAN model outperformed classical models on the datasets under consideration. Additionally, the quantum architecture has significantly fewer parameters than its classical counterparts.
Training well-performing models for large-scale real-world applications can require expensive and energy-intensive computation. In this regime, the parameter compression provided by quantum models may dramatically reduce computational resources. For TabularQGAN to be a practically advantageous model, further investigation is required into its performance as the number of qubits increases. In our current experiments, we limited each dataset to only three to four features for both the MIMIC-III and Adult Census datasets, which is substantially fewer than what is used in many realistic settings. This restriction was made due to the difficulty of simulating models with higher numbers of qubits on classical hardware, as training on actual quantum hardware is currently very costly and introduces noise. The difficulty of scaling to larger numbers of qubits has been raised in [48] as the barren plateau problem, and although some quantum architectures have been shown to avoid it [49], it is still an open question whether quantum variational models can scale to large qubit counts while avoiding classical simulability [50, 51].

Another limitation is that under the data encoding protocol for TabularQGAN, numeric data must be discretized. All the numerical results in this study are based on discretized data; however, in general, classical models do not have such a restriction and may perform better on continuous-valued numeric data. Further work on testing the model on a wider range of data sets, with a higher number of features, would improve the reliability of the results. Additionally, performing training and sampling at scale on actual quantum hardware would be valuable for understanding the impact of noise on the quality of samples and what might be possible as the size and fidelity of quantum hardware improve. Finally, we considered two variations of a quantum circuit here; more investigation into different ansätze and potential encoding schemes may further improve performance.
5 Acknowledgements

The authors would like to thank the other members of the QUTAC consortium. CJ would also like to thank Abhishek Awasthi and Davide Vodola for their assistance with setting up HPC experiments.

References

[1] Rajeev Acharya, Dmitry A Abanin, Laleh Aghababaie-Beni, Igor Aleiner, Trond I Andersen, Markus Ansmann, Frank Arute, Kunal Arya, Abraham Asfaw, Nikita Astrakhantsev, et al. Quantum error correction below the surface code threshold. Nature, 2024.
[2] Dolev Bluvstein, Simon J. Evered, Alexandra A. Geim, Sophie H. Li, Hengyun Zhou, Tom Manovitz, Sepehr Ebadi, Madelyn Cain, Marcin Kalinowski, Dominik Hangleiter, J. Pablo Bonilla Ataides, Nishad Maskara, Iris Cong, Xun Gao, Pedro Sales Rodriguez, Thomas Karolyshyn, Giulia Semeghini, Michael J. Gullans, Markus Greiner, Vladan Vuletić, and Mikhail D. Lukin. Logical quantum processor based on reconfigurable atom arrays. Nature, pages 1–3, December 2023.
[3] Youngseok Kim, Andrew Eddins, Sajant Anand, Ken Xuan Wei, Ewout van den Berg, Sami Rosenblatt, Hasan Nayfeh, Yantao Wu, Michael Zaletel, Kristan Temme, and Abhinav Kandala. Evidence for the utility of quantum computing before fault tolerance. Nature, 618(7965):500–505, June 2023.
[4] Stephen P Jordan, Noah Shutty, Mary Wootters, Adam Zalcman, Alexander Schmidhuber, Robbie King, Sergei V Isakov, and Ryan Babbush. Optimization by decoded
quantum interferometry. arXiv preprint arXiv:2408.08292, 2024.
[5] Peter W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Journal on Computing, 26(5):1484–1509, 1997.
[6] Lov K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, STOC '96, page 212–219, New York, NY, USA, 1996. Association for Computing Machinery.
[7] A Yu Kitaev. Quantum measurements and the abelian stabilizer problem. arXiv preprint quant-ph/9511026, 1995.
[8] Maria Schuld and Francesco Petruccione. Supervised learning with quantum computers, volume 17. Springer, 2018.
[9] Artur Garcia-Saez and Jordi Riu. Quantum observables for continuous control of the quantum approximate optimization algorithm via reinforcement learning. arXiv preprint arXiv:1911.09682, 2019.
[10] El Amine Cherrat, Iordanis Kerenidis, Natansh Mathur, Jonas Landman, Martin Strahm, and Yun Yvonna Li. Quantum vision transformers. arXiv preprint arXiv:2209.08167, 2022.
[11] Yeqi Gao, Zhao Song, Xin Yang, and Ruizhe Zhang. Fast quantum algorithm for attention computation. arXiv preprint arXiv:2307.08045, 2023.
[12] Arsenii Senokosov, Alexandr Sedykh, Asel Sagingalieva, Basil Kyriacou, and Alexey Melnikov. Quantum machine learning for image classification. Machine Learning: Science and Technology, 5(1):015040, 2024.
[13] Ryan Sweke, Jean-Pierre Seifert, Dominik Hangleiter, and Jens Eisert. On the Quantum versus Classical Learnability of Discrete Distributions. Quantum, 5:417, March 2021.
[14] Yunfei Wang and Junyu Liu. A comprehensive review of quantum machine learning: from nisq to fault tolerance. Reports on Progress in Physics, 2024.
[15] Joseph Bowles, Shahnawaz Ahmed, and Maria Schuld. Better than classical? the subtle art of benchmarking quantum machine learning models. arXiv preprint arXiv:2403.07059, 2024.
[16] Amira Abbas, David Sutter, Christa Zoufal, Aurélien Lucchi, Alessio Figalli, and Stefan Woerner. The power of quantum neural networks. Nature Computational Science, 1(6):403–409, 2021.
[17] Song Cheng, Jing Chen, and Lei Wang. Information perspective to probabilistic modeling: Boltzmann machines versus born machines. Entropy, 20(8), 2018.
[18] Marcello Benedetti, Delfina Garcia-Pintos, Oscar Perdomo, Vicente Leyton-Ortega, Yunseong Nam, and Alejandro Perdomo-Ortiz. A generative modeling approach for benchmarking and training shallow quantum circuits. npj Quantum Information, 5(1):45, 2019.
[19] Pierre-Luc Dallaire-Demers and Nathan Killoran. Quantum generative adversarial networks. Phys. Rev. A, 98:012324, Jul 2018.
[20] Seth Lloyd and Christian Weedbrook. Quantum generative adversarial learning. Physical Review Letters, 121(4), July 2018. arXiv:1804.09139 [quant-ph].
[21] D. Zhu, N. M. Linke, M. Benedetti, K. A. Landsman, N. H. Nguyen, C. H. Alderete, A. Perdomo-Ortiz, N. Korda, A. Garfoot, C. Brecque, L. Egan, O. Perdomo, and C. Monroe. Training of quantum circuits on a hybrid quantum computer. Science Advances, 5(10):eaaw9918, 2019.
[22] Carlos A Riofrío, Oliver Mitevski, Caitlin Jones, Florian Krellner, Aleksandar Vučković, Joseph Doetsch, Johannes Klepsch, Thomas Ehmer, and Andre Luckow. A performance characterization of quantum generative models. arXiv e-prints, pages arXiv–2301, 2023.
[23] X. Gao, Z.-Y. Zhang, and L.-M. Duan. A quantum machine learning algorithm based on generative models. Science Advances, 4(12):eaat9004, 2018.
[24] Xiangtian Zheng, Bin Wang, Dileep Kalathil, and Le Xie. Generative adversarial networks-based synthetic pmu data creation for improved event classification. IEEE Open Access Journal of Power and Energy, 8:68–76, 2021.
[25] Xiangxiang Zeng, Fei Wang, Yuan Luo, Seung-gu Kang, Jian Tang, Felice C Lightstone,
Evandro F Fang, Wendy Cornell, Ruth Nussinov, and Feixiong Cheng. Deep generative molecular design reshapes drug discovery. Cell Reports Medicine, 3(12), 2022.
[26] Ismail Keshta and Ammar Odeh. Security and privacy of electronic health records: Concerns and challenges. Egyptian Informatics Journal, 22(2):177–183, 2021.
[27] Rudolf Mayer, Markus Hittmeir, and Andreas Ekelhart. Privacy-preserving anomaly detection using synthetic data. In IFIP Annual Conference on Data and Applications Security and Privacy, pages 195–207. Springer, 2020.
[28] Cao Xiao, Edward Choi, and Jimeng Sun. Opportunities and challenges in developing deep learning models using electronic health records data: a systematic review. Journal of the American Medical Informatics Association: JAMIA, 25(10):1419–1428, June 2018.
[29] Ghadeer Ghosheh, Jin Li, and Tingting Zhu. A review of Generative Adversarial Networks for Electronic Health Records: applications, evaluation measures and data sources, December 2022. arXiv:2203.07018 [cs].
[30] Zoran Krunic, Frederik F. Flöther, George Seegan, Nathan D. Earnest-Noble, and Omar Shehab. Quantum Kernels for Real-World Predictions Based on Electronic Health Records. IEEE Transactions on Quantum Engineering, 3:1–11, 2022.
[31] Edward Choi, Siddharth Biswal, Bradley Malin, Jon Duke, Walter F. Stewart, and Jimeng Sun. Generating Multi-label Discrete Patient Records using Generative Adversarial Networks. In Proceedings of the 2nd Machine Learning for Healthcare Conference, pages 286–305. PMLR, November 2017.
[32] Jin Li, Benjamin J. Cairns, Jingsong Li, and Tingting Zhu. Generating synthetic mixed-type longitudinal electronic health records for artificial intelligent applications. npj Digital Medicine, 6(1):1–18, May 2023.
[33] Lei Xu, Maria Skoularidou, Alfredo Cuesta-Infante, and Kalyan Veeramachaneni. Modeling tabular data using conditional gan.
Advances in Neural Information Processing Systems, 32, 2019.
[34] Neha Patki, Roy Wedge, and Kalyan Veeramachaneni. The synthetic data vault. In IEEE International Conference on Data Science and Advanced Analytics (DSAA), pages 399–410, Oct 2016. Business Source License 1.1.
[35] Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. MIMIC-III, a freely accessible critical care database. Scientific Data, 3(1):1–9, 2016.
[36] A. Johnson, T. Pollard, and R. Mark. MIMIC-III clinical database, 2016.
[37] Barry Becker and Ronny Kohavi. Adult. UCI Machine Learning Repository, 1996. DOI: https://doi.org/10.24432/C5XW20.
[38] M. Cerezo, Andrew Arrasmith, Ryan Babbush, Simon C. Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R. McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, and Patrick J. Coles. Variational quantum algorithms. Nature Reviews Physics, 3(9):625–644, 2021.
[39] Juan Miguel Arrazola, Olivia Di Matteo, Nicolás Quesada, Soran Jahangiri, Alain Delgado, and Nathan Killoran. Universal quantum circuits for quantum chemistry. Quantum, 6:742, June 2022.
[40] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks, 2014.
[41] Gavin E. Crooks. Gradients of parameterized quantum gates using the parameter-shift rule and gate decomposition, 2019.
[42] Maria Schuld, Ville Bergholm, Christian Gogolin, Josh Izaac, and Nathan Killoran. Evaluating analytic gradients on quantum hardware. Physical Review A, 99(3), March 2019.
[43] René Steijl and George N Barakos. Parallel evaluation of quantum algorithms for computational fluid dynamics. Computers & Fluids, 173:22–28, 2018.
[44] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua
Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc., 2014.
[45] DataCebo, Inc. Synthetic data metrics, 10 2023. Version 0.12.0, MIT License.
[46] Tianqi Chen and Carlos Guestrin. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, page 785–794. ACM, August 2016.
[47] Alistair E. W. Johnson, Tom J. Pollard, Lu Shen, Li-wei H. Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G. Mark. MIMIC-III, a freely accessible critical care database. Scientific Data, 3(1):160035, May 2016.
[48] Jarrod R McClean, Sergio Boixo, Vadim N Smelyanskiy, Ryan Babbush, and Hartmut Neven. Barren plateaus in quantum neural network training landscapes. Nature Communications, 9(1):4812, 2018.
[49] Louis Schatzki, Martin Larocca, Quynh T Nguyen, Frederic Sauvage, and Marco Cerezo. Theoretical guarantees for permutation-equivariant quantum neural networks. npj Quantum Information, 10(1):12, 2024.
[50] Pablo Bermejo, Paolo Braccia, Manuel S Rudolph, Zoë Holmes, Lukasz Cincio, and M Cerezo. Quantum convolutional neural networks are (effectively) classically simulable. arXiv preprint arXiv:2408.12739, 2024.
[51] Marco Cerezo, Martín Larocca, Diego García-Martín, N. L. Diaz, Paolo Braccia, Enrico Fontana, Manuel S. Rudolph, Pablo Bermejo, Aroosa Ijaz, Supanut Thanasilp, Eric R Anschuetz, and Zoe Holmes. Does provable absence of barren plateaus imply classical simulability? or, why we need to rethink variational quantum computing. arXiv preprint arXiv:2312.09121, 2023.

A Appendices

A.1 Example of Tabular Encoding

In Section 2.2 we introduce a protocol for encoding classical tabular data into quantum states.
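The protocol can be sketched in Python as follows. The helper names are ours, and the age bounds are illustrative stand-ins for the training data's minimum and maximum, so the bin a given age lands in may differ from the worked example in this appendix, which uses the actual training bounds.

```python
def encode_age(age, age_min=17.0, age_max=90.0, n_qubits=5):
    """Bin a numerical feature into 2**n_qubits bins, emit a binary string.

    age_min/age_max are illustrative; in the paper they come from the
    training data's minimum and maximum values.
    """
    n_bins = 2 ** n_qubits
    width = (age_max - age_min) / n_bins          # bin width, Eq. (16)
    bin_index = min(round((age - age_min) / width), n_bins - 1)
    return format(bin_index, f"0{n_qubits}b")


def one_hot(value, categories):
    """One-hot encode a categorical feature, one qubit per category."""
    return "".join("1" if value == c else "0" for c in categories)


WORK_CLASSES = ["empl-unknown", "govt-employed", "self-employed", "unemployed"]


def encode_row_boolean(age, income, work_class):
    """Boolean design: the binary income feature uses a single qubit."""
    income_bit = "0" if income == "<=50K" else "1"
    return encode_age(age) + income_bit + one_hot(work_class, WORK_CLASSES)
```

With these illustrative bounds, `encode_row_boolean(19, "<=50K", "govt-employed")` yields a 10-qubit bit string of the form age-bits, income bit, then the work-class one-hot block, mirroring the |age ∥ income ∥ work class⟩ layout of the example.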
Here we present an example of that encoding for the Adult Census 10 data set with three features: age, income, and work class. The first is a numerical feature, age, which is encoded with 5 qubits.

Age (numerical, N_age = 5 qubits → 2^5 = 32 bins). The width of each bin is calculated from the minimum and maximum values of the training data:

W_age = ({x_age}_max − {x_age}_min) / Bin Count    (16)

where W_age is the width of each bin and x_age is the vector of the age feature of the training data. The bin number is the rounded value of (x_age − {x_age}_min) / W_age. For example:

Age = 19 → bin 2 → |00010⟩

Income is a binary variable with options "<=50K" and ">50K". It requires one qubit in the Boolean design, |0⟩ (<=50K), |1⟩ (>50K), and 2 qubits in the one-hot design, |10⟩ (<=50K), |01⟩ (>50K).

Work class is a categorical variable with four options (4 classes, one-hot):

(empl-unknown, |1000⟩), (govt-employed, |0100⟩), (self-employed, |0010⟩), (unemployed, |0001⟩)

A single row is represented as follows.

Encoding with Boolean design (10 qubits):
{19, "<=50K", govt-employed} → |00010 ∥ 0 ∥ 0100⟩ = |0001000100⟩

Encoding with non-Boolean design (11 qubits):
{19, "<=50K", govt-employed} → |00010 ∥ 10 ∥ 0100⟩ = |00010100100⟩

A.2 Example of Unique-Row-Index Encoding (Failure Case)

The qubit number required to implement the one-hot encoding introduced in Section 2.2 scales linearly with the number of categories in a feature. It is natural to ask if a more qubit-efficient encoding is possible while maintaining similarly high benchmarking results. Consider the following encoding: assign an index to every unique row of the search space and encode this index as a binary number. We illustrate this encoding with the example from Appendix A.1. The
set of all unique rows of the search space is given by

S = {0, …, 2^5 − 1} × {Income_1, Income_2} × {Workclass_1, …, Workclass_4},    (17)

where × denotes the Cartesian product of two discrete sets. We assign an index to each of the |S| = 256 elements of the set and encode the index in binary using log2 |S| = 8 qubits. This allows us to represent all elements of S using a single numerical register of 8 qubits. We train the circuit to generate indices that follow the underlying distribution of indices in the training data. The generated indices can simply be decoded by directly accessing the corresponding element in S.

We train the model using the proposed Unique-Row-Index encoding on the same hyperparameter ranges described in Section 3.2, applied to both the Adults Census 10 and Adults Census 15 datasets. While scores above 0.9 are achieved on the Adults Census 10 dataset for a few hyperparameter configurations, the performance on the Adults Census 15 dataset is significantly lower (Figure 4). This suggests that a single numerical register is not a suitable circuit design for generating samples with categorical features.

Figure 4: Comparison of the overall metric for each hyperparameter configuration on the Adults Census 10 and Adults Census 15 datasets, using the Unique-Row-Index encoding and a single numerical register for index generation. The spread of points within each bar has been added to improve data visibility. It is evident that the performance of the TabularQGAN model is significantly lower when a Unique-Row-Index encoding is used instead of the proposed one-hot encoding.

A.3 Overall Metric Details

The overall metric, from the SDMetrics library [45] (called the overall similarity score there), is an average of two components: the first is a column-wise metric, S_shape, and the second is a pairwise metric over each pair of columns, S_pair.
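For purely categorical columns, these two components can be sketched as follows. This is a simplified illustration in Python; the reference implementation is the SDMetrics library [45], which also handles numerical columns via the Kolmogorov–Smirnov and Pearson-correlation terms defined below.

```python
from collections import Counter
from itertools import combinations


def tvd_complement(real_col, synth_col):
    """Column-shape score s_i = 1 - TVD for a categorical column."""
    p, q = Counter(synth_col), Counter(real_col)
    n_p, n_q = len(synth_col), len(real_col)
    cats = set(p) | set(q)
    return 1 - 0.5 * sum(abs(p[k] / n_p - q[k] / n_q) for k in cats)


def contingency_similarity(real_a, real_b, synth_a, synth_b):
    """Pairwise score t_ij: 1 minus half the L1 distance of the
    joint category frequencies of columns i and j."""
    f_r = Counter(zip(real_a, real_b))
    f_s = Counter(zip(synth_a, synth_b))
    n_r, n_s = len(real_a), len(synth_a)
    keys = set(f_r) | set(f_s)
    return 1 - 0.5 * sum(abs(f_s[k] / n_s - f_r[k] / n_r) for k in keys)


def overall_metric(real_cols, synth_cols):
    """S_overall = (S_shape + S_pair) / 2 over lists of columns."""
    C = len(real_cols)
    s_shape = sum(
        tvd_complement(real_cols[i], synth_cols[i]) for i in range(C)
    ) / C
    pairs = list(combinations(range(C), 2))
    s_pair = sum(
        contingency_similarity(real_cols[i], real_cols[j],
                               synth_cols[i], synth_cols[j])
        for i, j in pairs
    ) / len(pairs)
    return 0.5 * (s_shape + s_pair)
```

Identical real and synthetic columns give a score of exactly 1, matching the normalization described below.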
The column-shape metric is given by

S_shape = (1/C) Σ_{i=1}^{C} s_i,

where C is the number of columns and s_i is the Kolmogorov–Smirnov complement for numerical columns,

s_i = 1 − KS(P_i, Q_i),

and the Total Variation Distance complement for categorical columns,

s_i = 1 − (1/2) Σ_k |P_i(k) − Q_i(k)|.

Here, Q_i is a vector of the values of column i from the training data, and P_i is the equivalent for the synthetic data. For categorical data, Q_i(k) (P_i(k)) is the count of the instances of category k in column Q_i (P_i).

The pair-wise metric is given by

S_pair = (1 / binom(C, 2)) Σ_{i<j} t_ij,

for columns i and j. For numeric data, t_ij = 1 − |P^S_ij − P^R_ij| / 2, where P^S_ij and P^R_ij are the Pearson correlation coefficients for the synthetic and real data, respectively. For categorical data (or a mixed pair of categorical and numeric data),

t_ij = 1 − (1/2) Σ_{α∈A_i} Σ_{β∈B_j} |F^S_{αβ} − F^R_{αβ}|.

This is the contingency similarity, where α ranges over the categories of column A_i, and F^S_{αβ} is the frequency of the category values α and β in the synthetic data. Each of these metrics is normalized to lie in [0, 1], with 1 being perfect similarity. The final overall metric is the average of the two components:

S_overall = (1/2)(S_shape + S_pair).

A.4 Optimum Hyperparameter Configurations

Here we show the hyperparameter configurations associated with our best found models mentioned in Table 3. Each configuration was selected by taking the model instance with
the maximum overall metric for each dataset. For the TabularQGAN we found that deeper circuit depths were generally optimal, while for the classical models a lower number of layers was better.

Table 4: Best found hyperparameter configurations for each model and data type.

Data Set Name | Model | % Batch Size | Circuit Depth | LR Gen | LR Discrim | Layer Width | Number Epochs
Adult Census 10 | TabularQGAN | 0.2 | 4 | 0.200 | 0.050 | N/A | 3000
Adult Census 10 | CTGAN | 0.2 | 2 | 0.001 | 0.050 | 256 | 1500
Adult Census 10 | CopulaGAN | 0.2 | 1 | 0.001 | 0.010 | 256 | 1500
Adult Census 15 | TabularQGAN | 0.1 | 4 | 0.200 | 0.100 | N/A | 3000
Adult Census 15 | CTGAN | 0.2 | 2 | 0.050 | 0.001 | 256 | 1500
Adult Census 15 | CopulaGAN | 0.1 | 1 | 0.001 | 0.050 | Data Width | 1500
MIMIC 10 | TabularQGAN | 0.2 | 4 | 0.100 | 0.050 | N/A | 3000
MIMIC 10 | CTGAN | 0.1 | 1 | 0.001 | 0.010 | 256 | 1500
MIMIC 10 | CopulaGAN | 0.1 | 2 | 0.001 | 0.010 | 256 | 1500
MIMIC 15 | TabularQGAN | 0.1 | 1 | 0.200 | 0.200 | N/A | 3000
MIMIC 15 | CTGAN | 0.1 | 4 | 0.001 | 0.010 | 256 | 1500
MIMIC 15 | CopulaGAN | 0.1 | 2 | 0.001 | 0.010 | 256 | 1500

A.5 Effect of Different Circuit Encodings

We explored two different ways of encoding Boolean variables, with one or two qubits, as described in Section 2.2. We performed the hyperparameter optimization search for each encoding, for each model type. Figure 5 shows the distributions of the overall metric over the hyperparameter configurations. The distributions for the two encoding types are highly similar, indicating that the impact of the encoding choice was minimal for the features and data sets considered. This limited effect is likely due to the datasets containing at most two Boolean features, which affect only one or two qubits under the current encoding method. However, as the number of Boolean features increases, the influence of the encoding choice on performance may become more significant.

A.6 Effect of Circuit Depth on Performance

In Figure 7, the effect of circuit depth for our quantum model, and of the number of layers for the classical models, is explored, with the results averaged over all other hyperparameter settings.
For the TabularQGAN model (Figure 7a), increasing circuit depth gave a small performance improvement for all data sets except Adult Census 15, where there was little difference. The classical models (Figure 7b) showed the opposite behavior: increasing the number of layers leads to worse performance. As the classical models had much larger parameter counts, we speculate that this may have been due to excessive overparameterization, which could lead to smaller gradients slowing down, or even stopping, training.

A.7 Overlap Fraction Metric and Downstream Score Numerical Results

Figure 6 is a bar plot of the overlap fraction for the different model types. It shows that the overlap fraction is well over 50%, and close to 1 for some models and data sets. However, we find that the TabularQGAN model has on average a lower overlap score than the classical models for the Adults Census datasets, but a higher one for the MIMIC datasets. We find that the overall metric score and the overlap fraction are not correlated, for either the classical or the quantum models, on any data set. We plot the downstream metric against the overall metric in Figure 8. We split the results into those for classification tasks, Figure 8a (where the target feature was categorical), and regression tasks, Figure 8b
(where the target feature was numeric). Again, these results are for a subsample of the overall hyperparameter-optimized data. We find that, on average, the TabularQGAN models had a lower downstream score, indicating that their samples were better able to replicate the real data when training a classifier. This generalization metric is also contextually useful, for example, in a scenario where the original data, such as electronic health records, cannot be shared due to privacy concerns, so that a synthetic dataset can be used instead.

Figure 5: Plot showing the distribution of the overall metric value for each data set with the two different encodings. Adult Census 15 is excluded as it does not contain any binary features.

Figure 6: Plot showing the overlap fraction metric for different models and data types. Only a selected subset of the data is sampled.

Figure 7: (a) Effect of circuit depth on performance for quantum models. (b) Effect of the number of layers on performance for classical models. Only data from the Boolean data encoding is included for both plots.

Figure 8: (a) Downstream Score against Overall Metric for predicting categorical variables. (b) Downstream Score against Overall Metric for predicting numeric variables.
Only a selected subset of the data is sampled.
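The overlap fraction discussed above is straightforward to compute once rows are discretized. A minimal sketch with rows represented as tuples follows; the toy data is illustrative and not drawn from the paper's datasets.

```python
def overlap_fraction(synthetic_rows, training_rows):
    """Fraction of unique synthetic rows that also appear in the training data.

    An overlap of 1.0 means the generator produced no novel rows.
    """
    unique_synth = set(map(tuple, synthetic_rows))
    train = set(map(tuple, training_rows))
    return len(unique_synth & train) / len(unique_synth)


train = [(25, "<=50K"), (40, ">50K"), (31, "<=50K")]
synth = [(25, "<=50K"), (25, "<=50K"), (52, ">50K")]
# Two unique synthetic rows; one of them appears in the training data,
# so the overlap fraction is 0.5.
```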
arXiv:2505.22543v1 [cs.CV] 28 May 2025

Scaling-up Perceptual Video Quality Assessment

Ziheng Jia1*, Zicheng Zhang1*, Zeyu Zhang1, Yingji Liang4, Xiaorong Zhu1, Chunyi Li1, Jinliang Han1, Haoning Wu3, Bin Wang2, Haoran Zhang2, Guanyu Zhu2, Qiyong Zhao2, Xiaohong Liu1, Xiongkuo Min1♢, Guangtao Zhai1

1Shanghai Jiaotong University, 2Media Experience and Evaluation Lab, Huawei Technologies, 3Nanyang Technological University, 4East China Normal University

[Figure 1 diagram: prior MIDB construction paradigms contrasted with the OmniVQA pipeline, with example quality-description outputs from OmniVQA-Chatter (7B) and OmniVQA-Rater (7B), and radar charts comparing against GPT-4o and Qwen2-vl (72B) on Q-bench-video (test) and OmniVQA-FG-Bench.]
Figure 1. OmniVQA overview. On the left side, it summarizes the existing MIDB construction paradigms in the visual quality assessment field. The main drawbacks of these methods lie in their excessive reliance on manual annotations or existing datasets, as well as the lack of comprehensive and diverse annotation dimensions. To address this, we have developed the OmniVQA datasets and models. On the right side, it shows examples of the OmniVQA model's application and radar charts of its superior performance on two benchmarks.

Abstract

The data scaling law has been shown to significantly enhance the performance of large multi-modal models (LMMs) across various downstream tasks. However, in the domain of perceptual video quality assessment (VQA), the potential of the scaling law remains underexplored due to the scarcity of labeled resources and the insufficient scale of datasets. To address this, we propose OmniVQA, an efficient framework designed to build high-quality, human-in-the-loop VQA multi-modal instruction databases (MIDBs). We then scale up to create OmniVQA-Chat-400K, currently the largest MIDB in the VQA field. Our focus is on the technical and aesthetic quality dimensions, with abundant in-context instruction data to provide fine-grained VQA knowledge. Additionally, we have built the OmniVQA-MOS-20K dataset to enhance the model's quantitative quality rating capabilities. We then introduce a complementary training strategy that effectively leverages the knowledge from datasets for quality understanding and quality rating tasks. Furthermore, we propose the OmniVQA-FG (fine-grain)-Benchmark to evaluate the fine-grained performance of the models. Our results demonstrate that
our models achieve state-of-the-art performance in both quality understanding and rating tasks.

1. Introduction

In the context of perceptual video quality assessment (VQA), the field currently focuses on two key tasks: quantitative quality rating and quality understanding. Quality rating refers to assigning a precise score to a video aligning with its human-labeled mean opinion score (MOS), while quality understanding involves providing detailed, qualitative feedback and analysis on the video's quality. Recent advancements in large multi-modal models (LMMs) demonstrate the significant impact of data scaling laws on various downstream tasks [40, 55, 56, 58]. However, in the field of perceptual video quality assessment (VQA), the benefits of data scaling have not been adequately investigated. This motivates a key hypothesis: scaling up VQA data can potentially improve model performance. Despite this potential, existing VQA datasets may struggle to fully leverage scaling due to limited labeling resources and insufficient data scale (shown in the supplementary materials (supp.), Tab. 7). Here, we would like to raise a fundamental query: why is scaling up VQA data a formidable challenge?

Perceptual VQA is a task that mimics human perception, which inherently requires significant expert-level human involvement. As a consequence, the construction of large-scale datasets becomes not only resource-intensive but also time-consuming. Previous VQA multi-modal instruction databases (MIDBs) like [16, 46] utilize large language models (LLMs), such as GPT [30], to augment the human annotations. Nonetheless, the overall information gain remains marginal. Furthermore, this annotation methodology is deeply reliant on extensive human resources, thereby hindering its scalability.
To tackle these challenges, we introduce OmniVQA, a comprehensive framework for scaling perceptual VQA MIDBs through a human-in-the-loop paradigm predominantly driven by machine annotation, culminating in the creation of the largest VQA MIDB to date, the OmniVQA-Chat-400K. This framework is built on 3 branches that facilitate easy scalability while maintaining data quality. In addition, we propose OmniVQA-MOS-20K, a large-scale human-labeled VQA dataset specifically for video quality rating tasks.

Considering the intrinsic interconnection between quality rating and understanding tasks, we further propose a task-specific complementary training strategy to effectively harness the quality knowledge embedded in the datasets from both tasks, facilitating the training of LMMs. First, we train the model on one type of data (e.g., quality rating or understanding) and then finetune it using the remaining data. This approach enables the model to preserve and effectively integrate knowledge from both datasets.

Currently, there is no benchmark available for fine-grained, in-context VQA quality understanding tasks. To thoroughly evaluate the in-context capabilities of our proposed model, we introduce the OmniVQA-FG (fine-grain) Benchmark, which is meticulously designed to assess the model's performance in spatiotemporal, local fine-grained video quality understanding and description.

Our contributions are threefold:
• We propose an effective and comprehensive data collection pipeline that supports the creation of large-scale, high-quality VQA MIDBs, and we then develop the OmniVQA-Chat-400K, with a focus on 3 key branches: technical quality, aesthetic quality, and in-context analysis. We also propose
OmniVQA-MOS-20K, a large-scale human-labeled UGC video subjective scoring dataset.
• The proposed OmniVQA models achieve state-of-the-art (SOTA) performance by training LMMs with the task-specific complementary training strategy for both quality rating and understanding (general and fine-grained) tasks.
• We introduce the OmniVQA-FG-Benchmark, a machine-human-mixed-annotated benchmark designed to evaluate fine-grained spatiotemporal quality understanding performance in both synthetic and real-world scenarios.

2. Related Works

2.1. Perceptual video quality assessment

Perceptual VQA initially focused on quantitative video quality rating [26], primarily aiming to fit the human-labeled MOSs in public datasets such as [4, 9, 10, 14, 23, 29, 31, 38, 51]. Existing works on this task include handcrafted-feature-based methods [3, 11, 19, 27, 28, 35, 36], deep neural network (DNN)-based approaches [20, 22, 25, 32, 33, 39, 41–45], and LMM-based models [13, 17, 47]. Recently, video quality understanding has become an emerging research field. Q-bench-video [57] is the first comprehensive benchmark for evaluating the capabilities of LMMs in general video quality understanding tasks. The VQA2-Assistant [17] is the first proprietary LMM capable of video quality understanding and chatting. However, existing works on perceptual VQA are almost entirely confined to the overall quality assessment of videos. There is almost no work addressing fine-grained tasks such as spatiotemporal local distortion retrieval and description, thereby leaving significant room for further research.

2.2. MIDBs for visual quality assessment

In the field of LMMs for visual quality assessment, recent works have proposed various approaches to constructing MIDBs. These approaches can be broadly divided (shown in the left part of Fig.
1):

1. Human annotators as perceivers [16, 17, 46, 53, 59]: Human annotators directly serve as perceivers of videos/images, and the data is obtained directly through manual annotation and refinement/rewriting by LLMs. The major drawback is the significant cost of human labor and time, which significantly hampers the efficient scalability of MIDBs.

2. General LMMs as perceivers and annotators (distillation-like approach) [52]: General LMMs (e.g., GPT-4o [1], Gemini-1.5 [34]) directly serve as perceivers and annotators of images/videos. The primary limitation lies in the sub-optimal data quality, constrained by distillation from teacher models that are not proprietary within this domain.

3. Knowledge injection [5, 8, 47, 48]: This approach involves injecting task-specific knowledge into existing MIDBs to adapt them for new downstream tasks, with subsequent LLM refinement and reformulation. The deficiency of this method is that the scale of the new MIDB is heavily contingent upon the original MIDB, thus limiting the efficient expansion of the data scale.

In addition to the aforementioned drawbacks, these previous MIDBs also share the following common issues. First, these datasets primarily focus on the annotation of the technical [17, 46–48, 52, 53] or aesthetic quality [16, 59] of images/videos while lacking comprehensive annotations integrating multiple quality dimensions and factors. Moreover, the majority of these MIDBs [16, 17, 46–48, 52, 53, 59] emphasize an overall description or understanding of image/video quality but lack spatiotemporal fine-grained annotation and evaluation. These issues have become the inspiration for our work.

Figure 2. Data construction pipeline of OmniVQA-Chat-400K.

3. The OmniVQA-Chat-400K

Considering the deficiencies in the existing MIDBs, we propose that an effective perceptual VQA MIDB construction pipeline should encompass: (1) collaborative annotation using multiple models with a human in the loop; (2) spanning multiple quality dimensions of video, along with spatiotemporal fine-grained data. Driven by these motivations, we construct the OmniVQA-Chat-400K, currently the largest and most comprehensive perceptual VQA MIDB. The construction pipeline is illustrated in Fig. 2 and detailed in supp. (Sec. D), and some statistical information is shown in Fig. 3.

3.1. Candidate video pool construction

We select 100,000 videos from a large-scale online in-the-wild user-generated content (UGC) video dataset [6] to serve as the candidate video pool for subsequent selection. We impose the constraint that the length of all candidate videos must be in the range of [3, 15) seconds. We then utilize 4 state-of-the-art objective video quality rating
methods [33, 42, 44, 47] to produce objective quality labels for the candidate videos. To ensure consistency in the scale of the scores from each scoring method, they are first converted to [0, 100). Subsequently, the scores are averaged for each video to obtain its objective quality label.

Figure 3. Statistical information of OmniVQA-Chat-400K and OmniVQA-MOS-20K: (a) video quality distribution of OmniVQA-Chat-400K; (b) top-15 quality word frequency of OmniVQA-Chat-400K; (c) the distribution of instruction lengths; (d) human-rated quality distribution of OmniVQA-MOS-20K.

3.2. The technical branch

The technical branch is the primary component of the OmniVQA-Chat-400K. The videos in this branch are randomly selected from the candidate video pool to ensure the diversity of video content and quality.

Each video is annotated across 8 quality factors: sharpness, light, compression, color, noise, fluency, motion blur, and camera shake. We employ an aggregation strategy, where annotations for each quality factor are initially performed individually and subsequently combined (shown in the upper part of Fig. 2).

We propose a novel annotation paradigm based on rejection sampling (RS) for the annotation of each quality factor. The process begins by performing multiple coarse annotations using an expert VQA LMM; this forms the suggested distribution. Subsequently, unlike the common practice of RS in reinforcement learning (RL) [18, 24], we do not simply select the sampling result with the highest reward determined by a reward model. Instead, we employ a heuristic method to fully utilize the effective information in all the sampled results. This method filters and summarizes the N samples obtained through the first sampling with reasoning LLMs, and a voting mechanism using general LMMs is used to decide the next process. In this case, the general LMMs act as a judge rather than an annotator.
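The sampling, summarizing, and voting loop described above can be sketched as follows. All model calls (`expert`, `summarize`, `judge`, `human_review`) are hypothetical stand-ins for the expert VQA LMM, the reasoning LLM, the general-LMM judge, and the human-in-the-loop step, passed in as plain callables:

```python
def annotate_quality_factor(video, questions, expert, summarize, judge,
                            human_review, n_votes=3):
    """Rejection-sampling-style annotation for one quality factor.

    `expert`, `summarize`, `judge`, and `human_review` are hypothetical
    callables standing in for the expert VQA LMM, the reasoning LLM,
    the general-LMM voter, and the human-in-the-loop step.
    """
    # Step 1: multiple coarse annotations form the "suggested distribution"
    samples = [expert(video, q) for q in questions]
    # Step 2: a reasoning LLM filters and merges all N samples into one summary
    summary = summarize(samples)
    # Step 3: the general LMM judges the summary over several voting rounds,
    # assigning each round a score in {2, 1, 0}
    votes = [judge(video, summary) for _ in range(n_votes)]
    # Any 0-scored round falls back to a human decision
    return human_review(video, summary) if min(votes) == 0 else summary
```

This is only a structural sketch; the actual prompts, summary rules, and post-processing criteria are described in the following paragraphs and in the supp.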
We argue that substituting the challenging direct annotation task with the relatively simpler voting-based decision for general LMMs offers a more practical distillation approach. This method is more likely to guide the general LMM toward generating more accurate and definitive responses.

Multiple coarse annotations. We use VQA2-Assistant (7B) [17] as the expert model for sampling. We set N = 5, meaning that for each quality factor, we pose 5 questions with the same fundamental meaning but varying sentence structures and record the model's responses.

LLM summary. We use a reasoning model to conduct the LLM summary process. The information in the responses obtained before is categorized into 3 types:

1. Positive answers are those that have similar meanings in the responses and appear in at least 3 of the 5 responses. These answers are merged into the summary.

2. Negative answers, which are contradictory in meaning to the positive answers, are excluded from the summary.

3. If any responses contain additional neutral information, it is included in the summary.

For example, for the question "How is the sharpness of this video?", if the 5 responses are "Poor" (positive), "The sharpness is relatively poor" (positive), "Poor, with degraded facial details" (positive with neutral), "Good, however, the facial details are slightly lost" (negative with neutral), and "Excellent" (negative), the final summarized response would
be: "The sharpness is poor with degraded human facial details."

Voting and post-processing. We use a SOTA LMM to perform voting to decide the post-processing method. We input the keyframe sequence sampled from the annotated video (1 fps) along with the prior summary of each quality factor. We then prompt it to conduct 3 voting rounds for each quality factor, assessing the accuracy and relevance of the provided summaries and assigning a score in {2, 1, 0}. The post-processing method is then determined and applied to the given quality factor summaries. If any voting round scores 0, human experts are required to intervene and decide between the quality summary and its modified summary for that round. The detailed scoring criteria and the corresponding post-processing methods, including the human-in-the-loop selection process, are detailed in the supp. (Fig. 11 and Sec. E.2).

Instruction pairs generation. We use OpenAI-o1 to merge the annotations of all quality factors, resulting in a video-level quality summary. Based on this summary, we ask the model to generate 3 question-answer (Q&A) pairs related to specific quality factors. The questions may take the following forms: binary-choice questions, multiple-choice (single-answer) questions, and open-ended questions.

To fully leverage the overall quality summary, we add an extra question for each video: "Please describe the overall quality of this video; please evaluate as many quality factors as possible." The video-level overall quality summary is then provided as the answer to this question.

3.3. The in-context branch

The in-context branch is designed to augment the model's ability to identify fine-grained local spatiotemporal quality issues in videos. To minimize the influence of inherent distortions, we choose 6,500 source videos from the candidate pool with objective quality labels above 70. The annotation process is presented in the middle-left part of Fig. 2.
Synthetic distorted video generation. We manually add local spatiotemporal distortions to the selected videos. Spatial distortions include overexposure, underexposure, blur, compression artifacts, and noise, while temporal distortions refer to video stuttering.

Spatial distortions are randomly added to a rectangular region within the frame, covering 1/4 of the frame area. The duration of a spatial distortion is randomly assigned an integer value between 1 and 3 seconds, with the starting time of the synthetic distortion also chosen as an integer. The intensity of the distortion is categorized into 3 levels: noticeable, relatively severe, and severe. The 5 distortion types are added sequentially for each source video, with the number of distortions fixed at 1. Finally, we record the spatial distortion's start time, duration, distortion type, intensity, and location. For the video stutter distortion, we remove the entire second following a randomly selected integer second and then duplicate the frames from that point for an additional second, thereby creating a frame-freeze effect. Each source video ultimately has 1 to 3 randomly inserted instances of video stutter events. More detailed video generation processes are provided in the supp. (Sec. C). We manually filter out the generated videos in which the synthetic spatiotemporal distortion is not visibly perceptible.

Figure 4. Statistical information of the OmniVQA-FG-Bench: (a) proportions of question types; (b) proportions of annotation types; (c) proportions of quality concerns in the machine-annotated part; (d) proportions of quality concerns in the human-annotated part.

Distorted objects recognition and description. To enhance the LMM's ability to capture and describe the semantics of the local distortion regions, we add a highlighted bounding box (bbox) around the distorted rectangular area in the generated videos. The keyframes of the distorted videos, with the bbox, are then input into the SOTA LMM model, which is tasked with describing the main semantic objects within the bbox under the following criteria: a semantic object is deemed valuable for annotation only if it is fully contained within the bbox and occupies the primary region, exhibits a distinct contrast from the surrounding areas, and remains within the bbox throughout its presence within the distorted period.

Instruction pairs generation. We then generate 5 instruction Q&A pairs for each spatially and temporally distorted video based on the summarized video distortion information. The Q&A pairs must focus entirely on the spatiotemporal local distortions of the video. If the video information contains the annotated distorted semantic object, at least one of the generated Q&A pairs must be related to it. Here, we introduce an additional form of Q&A pair, which is cloze completion for the local distortion information.

Similar to the technical branch, each video also includes a summarizing question: "Please describe the information of the spatiotemporal local distortions of the video." The answer to this question is set to be the summarized video's local distortion information.

3.4.
The aesthetic branch

The aesthetic branch aims to enhance the diversity and comprehensiveness of the MIDB. To prevent severe technical distortions from affecting the extraction of aesthetic features, we also select videos with objective quality scores above 70. Then, we directly input keyframes sampled from the videos into the SOTA LMM to annotate the aesthetic quality. Inspired by [16], the annotation is conducted from 3 aspects (illustrated in the middle-right of Fig. 2):

1. Aesthetic Style: Label the aesthetic style of the video.

2. Spatiotemporal Analysis: Perform a detailed aesthetic analysis from the spatial and temporal perspectives.

3. Emotional Repercussions: Provide the emotional experience that the video may evoke in viewers.

The machine annotation for each video is then summarized, and 6 instruction Q&A pairs are derived from it. Each video also includes a summarizing Q&A pair with the question "Please give a detailed description of the aesthetic effects of the video." and all the annotated aesthetic information as the corresponding answer.

Figure 5. Examples of questions and correct answers from the human-annotated part of the OmniVQA-FG-Benchmark.

4. The OmniVQA-MOS-20K

To enhance the model's performance on quality rating tasks, we construct a large-scale UGC video subjective scoring dataset comprising 20,000 video segments and over 200,000 annotated scores. These videos are drawn from a distinct candidate pool of 100,000 videos, separate from the video pool used in Sec. 3, with the objective quality label for each video annotated using the same procedure.

The video selection aims to ensure a well-balanced distribution of objective quality labels. To this end, we first convert the objective quality levels of all candidate videos to be aligned with the MOSs of LSVQ (train) [51]. Then, we ensure that the distribution of objective quality levels for the selected videos mirrors that of LSVQ (train). Specifically, the proportion of videos in each of the 5 objective quality levels (determined by the objective label: excellent (80-100), good (60-80), fair (40-60), poor (20-40), and low (0-20)) must align with that of LSVQ (train).

During the subjective rating experiment, we implement a hidden-reference supervision strategy, where the objective quality level for each video serves as the reference but is not visible to the human annotators. Any human score deviating by two or more levels from the reference is rejected, and rescoring is requested.
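The level-based rejection rule above can be expressed as a minimal sketch, assuming the five 20-point quality levels defined earlier; the function names are illustrative, not from the released code:

```python
def quality_level(score):
    # Map a [0, 100] score to a level index: 0 = low, 1 = poor,
    # 2 = fair, 3 = good, 4 = excellent (a score of 100 stays "excellent")
    return min(int(score // 20), 4)

def accept_human_score(human_score, reference_score):
    # Hidden-reference check: a rating two or more levels away from the
    # (hidden) reference level is rejected and a rescore is requested
    return abs(quality_level(human_score) - quality_level(reference_score)) < 2
```

For example, a human score of 30 ("poor") against a reference of 75 ("good") is two levels apart and would be rejected.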
This method effectively minimizes human annotation bias, especially in scenarios with limited expert-level resources. The distribution of human-labeled scores (averaged for each video) is shown in Fig. 3. Finally, we randomly split the dataset into OmniVQA-MOS-20K (train) and OmniVQA-MOS-20K (test) with an 80:20 ratio for subsequent experiments.

Since the model training still focuses on text-based tasks, the videos and quality labels in OmniVQA-MOS-20K (train) need to be converted into Q&A pairs. The specific format is as follows: Q: "How would you rate the quality of this video?" The answer is the human-labeled quality rating converted into the quality level.

5. The OmniVQA-FG-Benchmark

To evaluate the model's performance on fine-grained tasks, we construct the OmniVQA-FG-Benchmark. This benchmark contains 1,200 Q&A pairs and consists of both machine and human annotations. The statistical information about the benchmark is shown in Fig. 4. The entire benchmark focuses on the following 3 core quality concerns:

• Spatial (S): Spatial quality primarily concerns quality issues within specific local regions, particularly focusing on specific areas or semantic objects in the video.

• Temporal (T): Temporal quality concentrates on temporal quality issues, especially issues related to specific timepoints or periods.

• Spatiotemporal (ST)
: Spatiotemporal quality encompasses both spatial and temporal dimensions, focusing on quality issues that arise at specific locations or semantic objects during specific timepoints or intervals.

Detailed examples are shown in the supp. (Figs. 23 and 24). For the machine annotation, we select 900 videos with objective quality labels higher than 70 from the candidate video pool in Sec. 3 (different from the videos used for MIDB annotation) and manually synthesize varying spatiotemporal distortions at different locations and periods. The annotations concentrate on fixed-form descriptions to evaluate the models' ability to localize spatiotemporal distortions. The questions in the machine-annotated part are all multi-choice, single-answer questions.

For the human annotation, we select 1,000 videos from the candidate video pool with objective quality scores below 70 to ensure that the videos contain abundant quality issues to be annotated. Unlike machine annotations, human annotations delve deeper into the analysis and semantic description of spatiotemporal local quality. They follow a flexible approach, incorporating both multi-choice questions and open-ended questions, ensuring a comprehensive capture and annotation of video quality issues. During the annotation processes, we rigorously follow the requirements of ITU-R BT.500-15 to ensure the consistency and accuracy of the annotated data.

6. The OmniVQA Models

After obtaining the OmniVQA-Chat-400K and OmniVQA-MOS-20K MIDBs, the VQA2-Stage-1 model [17] is employed as the base model for supervised fine-tuning (SFT). The base model consists of the SigLip [54] vision encoder, the SlowFast-R50 [12] motion extractor, and the Qwen-2 [49] LLM (shown in Fig. 6). In this model, the text tokens from the prompt, the vision tokens from the video keyframes, and the motion tokens from the entire video are interleaved into a semantically ordered sequence, which is then input into the LLM for text generation.
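The interleaving step can be sketched as follows. The placeholder token names (`<Video>`, `<motion>`) are illustrative assumptions, not the model's actual vocabulary; in practice the spliced-in runs would be projected embeddings rather than strings:

```python
def build_input_sequence(prompt_tokens, vision_tokens, motion_tokens):
    """Splice vision and motion token runs into the text prompt, preserving
    the semantic order in which each modality appears in the prompt."""
    seq = []
    for tok in prompt_tokens:
        if tok == "<Video>":
            seq.extend(vision_tokens)   # projected keyframe tokens (vision encoder)
        elif tok == "<motion>":
            seq.extend(motion_tokens)   # projected motion tokens (SlowFast extractor)
        else:
            seq.append(tok)             # ordinary text tokens
    return seq
```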
Through different SFT processes, we obtain 2 specialized models: the rater, which focuses on perceptual VQA quantitative rating tasks, and the chatter, which specializes in quality understanding and question-answering tasks.

Figure 6. Illustration of OmniVQA models and the complementary training strategy.

Complementary training strategy. We posit that the role of the LLM part varies between quantitative rating tasks and quality understanding tasks. In the former, the LLM functions primarily as an effective regressor, whereas in the latter, it learns to navigate the complex semantic relationships between different quality factors and modalities. Consequently, we argue that randomly mixing training data from these tasks may undermine the LLM's ability to perform effectively on each task, as the divergent training objectives could hinder its capacity to focus on the specific learning goals. However, from a pre-training perspective, these two tasks are perfectly complementary. Firstly, the sequential training process mitigates potential confusion regarding the model's learning objectives. More importantly, the intrinsic relationship between the knowledge of the two tasks suggests
that the datasets can provide valuable prior information for one another, thus making them well-suited to serve as mutually beneficial pre-training components. The process of complementary training is also depicted in Fig. 6.

7. Experiments

We conduct a detailed evaluation of our models on video quality rating and video quality understanding tasks. In addition, we perform comprehensive supplementary experiments to investigate some key factors.

7.1. Experimental Setups

System prompt design. In all evaluations, we set almost unified system prompts for all LMM models; the system prompts are illustrated in the supp. (Sec. A.1).

Model training. We employ the complementary training strategy to obtain the models. All training is performed with full-parameter tuning, with only 1 epoch trained on each dataset. The specific hyper-parameter configurations and model structures are presented in the supp. (Tab. 5).

Table 1. Performance on quality rating tasks (each cell reports SRCC / PLCC). The best result is marked in red, the second best in blue. OmniVQA-MOS-20K (test) is abbreviated as "MOS-20K".

| Model | LSVQ(1080p) [51] | LSVQ(test) [51] | LIVE-VQC [31] | KoNViD-1k [15] | YT-UGC [38] | MOS-20K |
|---|---|---|---|---|---|---|
| Simple-VQA (ACM MM 2022) [32] | 0.760 / 0.805 | 0.870 / 0.868 | 0.755 / 0.793 | 0.826 / 0.820 | 0.850 / 0.845 | 0.813 / 0.809 |
| BVQA (TCSVT 2022) [20] | 0.747 / 0.785 | 0.870 / 0.861 | 0.795 / 0.814 | 0.795 / 0.817 | 0.845 / 0.847 | 0.825 / 0.813 |
| FAST-VQA (TPAMI 2023) [42] | 0.765 / 0.793 | 0.880 / 0.871 | 0.830 / 0.822 | 0.869 / 0.870 | 0.828 / 0.849 | 0.792 / 0.783 |
| Dover (ICCV 2023) [44] | 0.797 / 0.821 | 0.893 / 0.892 | 0.835 / 0.857 | 0.885 / 0.879 | 0.855 / 0.861 | 0.828 / 0.832 |
| Modular-VQA (CVPR 2024) [41] | 0.810 / 0.834 | 0.897 / 0.895 | 0.803 / 0.839 | 0.876 / 0.887 | 0.862 / 0.878 | 0.843 / 0.835 |
| q-align-VQA (7B) (ICML 2024) [47] | 0.758 / 0.833 | 0.883 / 0.882 | 0.777 / 0.813 | 0.865 / 0.876 | 0.811 / 0.830 | 0.820 / 0.831 |
| q-align-onealign (7B) | 0.803 / 0.836 | 0.888 / 0.885 | 0.773 / 0.829 | 0.876 / 0.878 | 0.831 / 0.847 | 0.829 / 0.826 |
| VQA2-UGC-Scorer (7B) | 0.782 / 0.837 | 0.897 / 0.885 | 0.798 / 0.830 | 0.894 / 0.884 | 0.818 / 0.827 | 0.785 / 0.773 |
| Chatter (7B) (400K) | 0.816 / 0.821 | 0.889 / 0.856 | 0.822 / 0.846 | 0.882 / 0.835 | 0.859 / 0.839 | 0.810 / 0.788 |
| Rater (7B) | 0.815 / 0.838 | 0.902 / 0.905 | 0.826 / 0.855 | 0.895 / 0.900 | 0.872 / 0.873 | 0.837 / 0.837 |

Table 2. Evaluation results on the test and dev subsets of Q-bench-video.

Q-bench-video-test (900 questions):

| LMMs | Binary | Multi. | Open | Tech. | Aes. | Temp. | AIGC | Overall |
|---|---|---|---|---|---|---|---|---|
| mplug-owl3 (7B) [50] | 56.90% | 57.14% | 42.88% | 53.40% | 61.85% | 50.34% | 45.34% | 52.06% |
| Internvl2 (8B) [7] | 48.15% | 39.37% | 31.49% | 39.06% | 46.68% | 42.52% | 31.37% | 39.50% |
| Internvl2 (40B) | 52.53% | 43.21% | 35.13% | 42.54% | 52.13% | 46.43% | 42.55% | 43.44% |
| Internvl2.5 (8B) | 46.46% | 41.81% | 30.70% | 39.06% | 46.21% | 39.97% | 28.57% | 39.44% |
| LLaVA-onevision (7B) [21] | 57.58% | 48.78% | 32.12% | 44.98% | 50.95% | 45.07% | 44.72% | 45.83% |
| LLaVA-onevision (72B) | 52.19% | 54.36% | 34.34% | 45.62% | 54.74% | 50.00% | 46.58% | 46.61% |
| Qwen2-vl (7B) [37] | 50.84% | 55.75% | 34.49% | 46.03% | 56.40% | 50.17% | 39.75% | 46.67% |
| Qwen2-vl (72B) | 61.62% | 66.90% | 39.24% | 55.19% | 63.03% | 52.38% | 50.93% | 55.44% |
| Qwen2.5-vl (7B) [2] | 52.53% | 49.48% | 38.77% | 46.68% | 58.53% | 46.09% | 41.61% | 46.72% |
| Qwen2.5-vl (72B) | 54.55% | 49.48% | 35.92% | 46.03% | 57.82% | 47.96% | 40.68% | 46.39% |
| GPT-4o (24-11-20) [1] | 60.61% | 50.17% | 45.25% | 50.89% | 63.27% | 52.04% | 48.45% | 51.89% |
| Gemini-1.5-pro [34] | 56.80% | 43.29% | 39.26% | 44.57% | 53.94% | 54.64% | 44.21% | 46.52% |
| Gemini-2.0-flash | 56.23% | 46.34% | 43.57% | 47.73% | 58.77% | 48.64% | 55.90% | 49.33% |
| VQA2-Assistant (7B) [17] | 67.12% | 59.93% | 39.56% | 55.19% | 56.87% | 57.99% | 43.79% | 55.56% |
| Chatter (7B) (400K) | 68.35% | 63.76% | 44.46% | 58.10% | 60.66% | 54.93% | 52.17% | 58.50% |

Q-bench-video-dev (892 questions):

| LMMs | Binary | Multi. | Open | Tech. | Aes. | Temp. | AIGC | Overall |
|---|---|---|---|---|---|---|---|---|
| mplug-owl3 (7B) [50] | 57.14% | 54.57% | 40.37% | 52.80% | 58.37% | 55.09% | 42.38% | 51.12% |
| Internvl2 (8B) [7] | 44.22% | 31.40% | 30.56% | 33.70% | 49.75% | 43.27% | 23.78% | 35.37% |
| Internvl2 (40B) | 52.38% | 33.23% | 34.63% | 40.75% | 53.94% | 46.00% | 28.05% | 39.97% |
| Internvl2.5 (8B) | 46.26% | 37.80% | 31.11% | 38.29% | 51.72% | 44.55% | 24.70% | 38.57% |
| LLaVA-onevision (7B) [21] | 61.22% | 50.30% | 34.63% | 49.58% | 59.61% | 45.64% | 46.04% | 49.16% |
| LLaVA-onevision (72B) | 54.08% | 48.78% | 32.59% | 47.37% | 57.39% | 47.45% | 30.79% | 45.63% |
| Qwen2-vl (7B) [37] | 57.48% | 51.83% | 32.78% | 51.19% | 55.42% | 51.64% | 30.49% | 47.93% |
| Qwen2-vl (72B) | 69.73% | 64.63% | 38.89% | 60.87% | 62.81% | 60.18% | 39.63% | 58.52% |
| Qwen2.5-vl (7B) [2] | 56.80% | 45.43% | 39.26% | 48.30% | 60.59% | 51.27% | 34.76% | 47.31% |
| Qwen2.5-vl (72B) | 60.20% | 48.17% | 35.19% | 47.62% | 57.88% | 54.73% | 34.45% | 48.21% |
| GPT-4o (24-11-20) [1] | 69.73% | 48.48% | 41.11% | 51.27% | 63.55% | 59.09% | 42.68% | 53.25% |
| Gemini-1.5-pro [34] | 56.80% | 43.90% | 37.59% | 44.14% | 53.20% | 54.00% | 45.73% | 46.24% |
| Gemini-2.0-flash | 64.63% | 47.26% | 40.93% | 48.64% | 62.07% | 58.91% | 51.22% | 51.07% |
| VQA2-Assistant (7B) [17] | 73.81% | 56.40% | 38.33% | 60.70% | 56.65% | 61.09% | 38.11% | 56.67% |
| Chatter (7B) (400K) | 75.51% | 59.76% | 40.37% | 62.05% | 61.58% | 63.45% | 42.38% | 59.08% |

7.2. Evaluation on quality rating tasks

We compare our models with several DNN-based [20, 32, 41, 42, 44] and LMM-based [17, 47] quality rating models on 6 datasets, including the OmniVQA-MOS-20K (test). Apart from Q-align, VQA2-UGC-Scorer, and our models (which use complementary training), all models are trained on the merged dataset of OmniVQA-MOS-20K (train) and LSVQ (train) (approximately 43,000 videos). The evaluation metrics are the Pearson Linear Correlation Coefficient (PLCC) and the Spearman Rank Correlation Coefficient (SRCC). We adopt the quality rating method used in [17, 47] during testing, which is detailed in the supp. (Sec. A.2).

The performance of the models on all datasets is presented in Tab. 1. The experimental results show that the rater achieves Top-3 performance across all 6 datasets, demonstrating its superiority in quality rating tasks.
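The two correlation metrics can be computed as below. This is a pure-Python sketch (in practice `scipy.stats.spearmanr` and `scipy.stats.pearsonr` are the usual choices); the rank computation assumes no tied scores, a simplification of the full Spearman definition:

```python
import statistics

def plcc(x, y):
    """Pearson linear correlation coefficient (prediction accuracy)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def srcc(x, y):
    """Spearman rank correlation (prediction monotonicity): PLCC on ranks."""
    def ranks(v):
        # Rank of each element in ascending order (no tie handling)
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = float(rank)
        return r
    return plcc(ranks(x), ranks(y))
```

Both metrics range over [-1, 1]; SRCC rewards getting the ordering of videos right, while PLCC rewards a linear fit to the MOS values.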
Since the chatter model is not dedicated to the rating task, its performance shows a noticeable decline, but it remains acceptable.

7.3. Evaluation on quality understanding tasks

As the primary task of our work, we conduct detailed video quality understanding evaluation experiments, which include both overall and fine-grained tasks. We also include 4 real-world case studies in the supp. (Figs. 25, 26, 27, and 28) to visualize the functionality of the model.

The overall video quality understanding task is carried out on Q-bench-video. Since our training process does not include multi-video comparison analysis, we remove questions involving multi-video quality issues. The evaluation question types include binary questions (Binary), multi-choice (single-answer) questions (Multi.), and open-ended questions (Open). The questions cover different quality concerns, including technical quality (Tech.), aesthetic quality (Aes.), temporal quality (Temp.), and AIGC video quality (AIGC). We select some of the latest open-source LMMs with video analysis capabilities and varying parameter sizes, several proprietary LMMs, and the VQA2-Assistant for comparison. To ensure a fair comparison, the input for each model depends only on its architecture. For models without a motion extractor, we input the keyframe sequence obtained by sampling 1 frame per second from the video. For the VQA2-Assistant and our chatter model, we additionally
input the whole frame sequence of the video (resized to 224×224) into the motion extractor. The experimental results are presented in Tab. 2. They show that the chatter achieves the best Overall performance on both the test and dev subsets, with outstanding performance in the Tech., Aes., and Temp. quality concerns. Although it does not outperform some of the most advanced LMMs on some subcategories, the performance gap is relatively minimal. This demonstrates that in overall video quality understanding tasks, synthetic MIDBs with human-in-the-loop can still yield superior training performance.

Table 3. Evaluation results on the OmniVQA-FG-Bench, where "Mach." denotes "Machine", "S" denotes "Spatial", "T" denotes "Temporal", and "ST" denotes "Spatiotemporal".

LMMs | Binary | Multi. | Open | Mach. S | Mach. T | Mach. ST | Mach. Overall | Human S | Human T | Human ST | Human Overall | Overall
mplug-owl3 (7B) | 25.05% | 30.93% | 34.74% | 21.50% | 17.50% | 40.00% | 24.13% | 37.05% | 41.82% | 41.18% | 39.50% | 28.88%
Internvl2 (8B) | 26.59% | 24.29% | 24.03% | 18.00% | 15.50% | 16.00% | 16.93% | 30.22% | 32.27% | 50.00% | 34.33% | 25.25%
Internvl2 (40B) | 49.13% | 31.31% | 30.19% | 38.75% | 34.00% | 28.00% | 35.33% | 39.93% | 39.55% | 40.20% | 39.83% | 38.88%
Internvl2.5 (8B) | 21.97% | 37.38% | 21.10% | 29.75% | 14.50% | 28.00% | 25.33% | 29.14% | 27.73% | 30.39% | 28.83% | 28.62%
LLaVA-onevision (7B) | 31.60% | 29.98% | 27.60% | 25.00% | 17.00% | 33.33% | 24.53% | 33.45% | 40.00% | 37.25% | 36.50% | 30.38%
LLaVA-onevision (72B) | 36.03% | 30.36% | 24.03% | 30.75% | 22.50% | 33.33% | 29.07% | 30.94% | 38.64% | 40.20% | 35.33% | 32.00%
Qwen2-vl (7B) | 26.59% | 23.15% | 22.73% | 20.25% | 18.50% | 24.67% | 20.67% | 26.26% | 33.64% | 42.16% | 31.67% | 24.58%
Qwen2-vl (72B) | 33.14% | 40.80% | 27.27% | 32.50% | 26.00% | 34.00% | 31.07% | 39.21% | 44.55% | 42.16% | 41.67% | 35.75%
Qwen2.5-vl (7B) | 23.12% | 24.29% | 30.84% | 16.50% | 5.50% | 21.33% | 14.53% | 33.09% | 41.36% | 43.14% | 37.83% | 24.62%
Qwen2.5-vl (72B) | 26.20% | 24.29% | 24.84% | 19.00% | 7.50% | 25.33% | 17.20% | 32.53% | 43.12% | 32.61% | 36.38% | 25.19%
GPT-4o (24-11-20) | 48.55% | 39.47% | 33.12% | 48.00% | 37.00% | 44.67% | 44.40% | 38.85% | 45.91% | 42.16% | 42.00% | 42.58%
VQA2-Assistant (7B) | 76.11% | 37.19% | 37.99% | 51.25% | 60.00% | 60.00% | 55.33% | 43.88% | 50.45% | 45.10% | 46.50% | 54.12%
Chatter (7B) (400K) | 81.31% | 57.31% | 39.03% | 56.75% | 85.50% | 84.00% | 69.87% | 44.52% | 46.33% | 56.52% | 47.01% | 65.32%

Table 4. Performance of training strategies. The Overall task is evaluated on the Q-bench-video (test). The best result is denoted in red. The first five columns report SRCC / PLCC for quality rating; Tech., Aes., and Overall report overall understanding; Machine, Human, and Overall report fine-grained understanding.

Training Strategy | LSVQ(1080p) | LSVQ(test) | LIVE-VQC | KoNViD-1k | OmniVQA (test) | Tech. | Aes. | Overall | Machine | Human | Overall
Direct | 0.800 / 0.824 | 0.880 / 0.878 | 0.776 / 0.820 | 0.877 / 0.883 | 0.819 / 0.820 | 57.54% | 63.03% | 58.33% | 68.00% | 48.50% | 62.28%
Mix | 0.817 / 0.836 | 0.898 / 0.896 | 0.822 / 0.860 | 0.887 / 0.898 | 0.840 / 0.838 | 52.54% | 56.32% | 52.78% | 66.47% | 51.17% | 63.58%
Complementary | 0.815 / 0.838 | 0.902 / 0.905 | 0.826 / 0.855 | 0.895 / 0.900 | 0.837 / 0.837 | 58.10% | 60.66% | 58.50% | 69.87% | 56.52% | 65.32%

For the fine-grained video quality understanding task, we conduct comparison experiments between the chatter model and the comparison models on the OmniVQA-FG-Bench. Tab. 3 records the performance of each LMM on the different subcategories of questions in the benchmark. Models specially trained on VQA tasks (VQA2-Assistant (7B) and chatter) achieve significantly superior performance compared to general LMMs. Additionally, our chatter outperforms VQA2-Assistant by a significant margin on machine annotation tasks and also achieves better performance on
human annotation tasks. This demonstrates the importance of the in-context branch in the MIDB for improving the model's performance in fine-grained spatiotemporal quality understanding tasks.

7.4. Discussions

We have also conducted experiments on some key points, and further analysis is presented in the supp. (Sec. A.4).

Data scaling effects verification. We validate the data-scaling effect by selecting subsets of data (ranging from 100k to 400k, with each subset having an equal distribution across the 3 branches) from OmniVQA-Chat-400K for ablation. The performance is shown in Fig. 7, where "mix with human" denotes that we combine the OmniVQA-Chat-400K dataset with the VQA2-stage3 [17] (which contains 115K human-annotated, SOTA-LMM-refined high-quality data) and then retrain the base model. It is evident that the data-scaling effect appears in the 100k-400k range but gradually becomes marginal at the 7B parameter size.

Figure 7. The performance map of scaling up the training dataset.

While mixed training with human-annotated data further improves the performance on the Q-bench-video (test), it does not show a significant effect on fine-grained tasks.

Effects of complementary training. We verify the effects of the complementary training strategy by comparing it with 2 other training strategies: the Direct strategy, where the model is trained directly on the corresponding dataset for each task, and the Mix strategy, in which the 2 datasets are randomly mixed to train one unified model. The experimental results are presented in Tab. 4. The results show that our Complementary strategy outperforms the Direct strategy on all tasks. While the Mix strategy shows no significant difference from the Complementary strategy in the quality rating task, it exhibits a clear performance gap in the quality understanding tasks. The results showcase the rationale of the complementary training strategy.

8.
Conclusion

In Q-Bench-Video, our chatter model achieves superior performance, highlighting its proficiency in handling general VQA quality understanding tasks. Additionally, it excels on the OmniVQA-FG-Bench, showcasing its capability for fine-grained evaluation. Furthermore, on quality rating tasks, our rater model achieves state-of-the-art performance, demonstrating its effectiveness. These results highlight the potential of scaling up perceptual video quality assessment through the integration of machine annotation strategies and task-specific complementary model training.

References

[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 2, 7
[2] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, et al. Qwen2.5-vl technical report. arXiv preprint arXiv:2502.13923, 2025. 7
[3] Christos G Bampis and Alan C Bovik. Feature-based prediction of streaming video qoe: Distortions, stalling and memory. SPIC, 68:218–228, 2018. 2
[4] Christos G Bampis, Zhi Li, Ioannis Katsavounidis, Te-Yuan Huang, Chaitanya Ekanadham, and Alan C Bovik. Towards perceptually optimized adaptive video streaming-a realistic quality of experience database. IEEE TIP, 30:5182–5197, 2021. 2
[5] Chaofeng Chen, Sensen Yang, Haoning Wu, Liang Liao, Zicheng Zhang, Annan Wang, Wenxiu Sun, Qiong Yan, and Weisi Lin. Q-ground: Image
quality grounding with large multi-modality models. In ACM MM, pages 486–495, 2024. 2
[6] Tsai-Shien Chen, Aliaksandr Siarohin, Willi Menapace, Ekaterina Deyneka, Hsiang-wei Chao, Byung Eun Jeon, Yuwei Fang, Hsin-Ying Lee, Jian Ren, Ming-Hsuan Yang, et al. Panda-70m: Captioning 70m videos with multiple cross-modality teachers. In CVPR, pages 13320–13331, 2024. 3
[7] Zhe Chen, Weiyun Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Erfei Cui, Jinguo Zhu, Shenglong Ye, Hao Tian, Zhaoyang Liu, et al. Expanding performance boundaries of open-source multimodal models with model, data, and test-time scaling. arXiv preprint arXiv:2412.05271, 2024. 7
[8] Zheng Chen, Xun Zhang, Wenbo Li, Renjing Pei, Fenglong Song, Xiongkuo Min, Xiaohong Liu, Xin Yuan, Yong Guo, and Yulun Zhang. Grounding-iqa: Multimodal language grounding model for image quality assessment. arXiv preprint arXiv:2411.17237, 2024. 2
[9] Zhengfang Duanmu, Kai Zeng, Kede Ma, Abdul Rehman, and Zhou Wang. A quality-of-experience index for streaming video. IEEE JSTSP, 11(1):154–166, 2016. 2
[10] Zhengfang Duanmu, Abdul Rehman, and Zhou Wang. A quality-of-experience database for adaptive video streaming. IEEE TBC, 64(2):474–487, 2018. 2
[11] Zhengfang Duanmu, Wentao Liu, Diqi Chen, Zhuoran Li, Zhou Wang, Yizhou Wang, and Wen Gao. A bayesian quality-of-experience model for adaptive streaming videos. ACM TOMM, 18(3s):1–24, 2023. 2
[12] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In ICCV, pages 6202–6211, 2019. 6
[13] Qihang Ge, Wei Sun, Yu Zhang, Yunhao Li, Zhongpeng Ji, Fengyu Sun, Shangling Jui, Xiongkuo Min, and Guangtao Zhai. Lmm-vqa: Advancing video quality assessment with large multimodal models. arXiv preprint arXiv:2408.14008, 2024. 2
[14] Deepti Ghadiyaram, Janice Pan, Alan C Bovik, Anush Krishna Moorthy, Prasanjit Panda, and Kai-Chieh Yang.
In-capture mobile video distortions: A study of subjective behavior and objective algorithms. IEEE TCSVT, 28(9):2061–2077, 2017. 2
[15] Vlad Hosu, Franz Hahn, Mohsen Jenadeleh, Hanhe Lin, Hui Men, Tamás Szirányi, Shujun Li, and Dietmar Saupe. The konstanz natural video database (konvid-1k). In QoMEX, pages 1–6. IEEE, 2017. 7
[16] Yipo Huang, Xiangfei Sheng, Zhichao Yang, Quan Yuan, Zhichao Duan, Pengfei Chen, Leida Li, Weisi Lin, and Guangming Shi. Aesexpert: Towards multi-modality foundation model for image aesthetics perception. In ACM MM, pages 5911–5920, 2024. 2, 3, 5
[17] Ziheng Jia, Zicheng Zhang, Jiaying Qian, Haoning Wu, Wei Sun, Chunyi Li, Xiaohong Liu, Weisi Lin, Guangtao Zhai, and Xiongkuo Min. Vqa2: Visual question answering for video quality assessment. arXiv preprint arXiv:2411.03795, 2024. 2, 3, 4, 6, 7, 8
[18] Saeed Khaki, JinJin Li, Lan Ma, Liu Yang, and Prathap Ramachandra. Rs-dpo: A hybrid rejection sampling and direct preference optimization method for alignment of large language models. arXiv preprint arXiv:2402.10038, 2024. 3
[19] Jari Korhonen. Two-level approach for no-reference consumer video quality assessment. IEEE TIP, 28(12):5923–5938, 2019. 2
[20] Bowen Li, Weixia Zhang, Meng Tian, Guangtao Zhai, and Xianpei Wang. Blindly assess quality of in-the-wild videos via quality-aware pre-training and motion perception. IEEE TCSVT, 32(9):5944–5958, 2022. 2, 7
[21] Bo Li, Yuanhan Zhang,
Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, et al. Llava-onevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326, 2024. 7
[22] Dingquan Li, Tingting Jiang, and Ming Jiang. Quality assessment of in-the-wild videos. In ACM MM, pages 2351–2359, 2019. 2
[23] Z Li, Z Duanmu, W Liu, and Z Wang. A comparative study of state-of-the-art video encoders on 4k videos. Image Analysis and Recognition. LNCS, 11662. 2
[24] Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J Liu, and Jialu Liu. Statistical rejection sampling improves preference optimization. In ICLR. 3
[25] Wentao Liu, Zhengfang Duanmu, and Zhou Wang. End-to-end blind quality assessment of compressed videos using deep neural networks. In ACM MM, pages 546–554, 2018. 2
[26] Xiongkuo Min, Huiyu Duan, Wei Sun, Yucheng Zhu, and Guangtao Zhai. Perceptual video quality assessment: A survey. SCIS, 67(11):211301, 2024. 2
[27] Anish Mittal, Anush Krishna Moorthy, and Alan Conrad Bovik. No-reference image quality assessment in the spatial domain. IEEE TIP, 21(12):4695–4708, 2012. 2
[28] Anish Mittal, Rajiv Soundararajan, and Alan C Bovik. Making a "completely blind" image quality analyzer. IEEE SPL, 20(3):209–212, 2012. 2
[29] Mikko Nuutinen, Toni Virtanen, Mikko Vaahteranoksa, Tero Vuori, Pirkko Oittinen, and Jukka Häkkinen. Cvd2014—a database for evaluating no-reference video quality assessment algorithms. IEEE TIP, 25(7):3073–3086, 2016. 2
[30] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018. 2
[31] Zeina Sinno and Alan Conrad Bovik. Large-scale study of perceptual video quality. IEEE TIP, 28(2):612–627, 2018. 2, 7
[32] Wei Sun, Xiongkuo Min, Wei Lu, and Guangtao Zhai. A deep learning based no-reference quality assessment model for ugc videos. In ACM MM, pages 856–865, 2022.
2, 7
[33] Wei Sun, Wen Wen, Xiongkuo Min, Long Lan, Guangtao Zhai, and Kede Ma. Analysis of video quality datasets via design of minimalistic video quality models. IEEE TPAMI, 2024. 2, 3
[34] Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024. 2, 7
[35] Zhengzhong Tu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, and Alan C Bovik. Ugc-vqa: Benchmarking blind video quality assessment for user generated content. IEEE TIP, 30:4449–4464, 2021. 2
[36] Zhengzhong Tu, Xiangxu Yu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, and Alan C Bovik. Rapique: Rapid and accurate video quality prediction of user generated content. IEEE OJSP, 2:425–440, 2021. 2
[37] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024. 7
[38] Yilin Wang, Sasi Inguva, and Balu Adsumilli. Youtube ugc dataset for video compression research. In IEEE MMSP, pages 1–5. IEEE, 2019. 2, 7
[39] Yilin Wang, Junjie Ke, Hossein Talebi, Joong Gon Yim, Neil Birkbeck, Balu Adsumilli, Peyman Milanfar, and Feng Yang. Rich features for perceptual quality assessment of ugc videos. In CVPR, pages 13435–13444, 2021. 2
[40] Zhenhua Wang, Guang Xu, and Ming Ren. Llm-generated natural language meets scaling laws: New explorations and data augmentation methods. arXiv preprint arXiv:2407.00322, 2024. 1
[41] Wen Wen, Mu Li, Yabin Zhang, Yiting Liao, Junlin Li, Li Zhang, and Kede Ma. Modular blind video quality assessment. In CVPR, pages 2763–2772, 2024. 2, 7
[42] Haoning Wu, Chaofeng Chen, Liang Liao, Jingwen Hou, Wenxiu Sun, Qiong Yan, Jinwei Gu, and Weisi Lin. Neighbourhood representative sampling for efficient end-to-end video quality assessment. IEEE TPAMI, 45(12):15185–15202, 2023. 3, 7
[43] Haoning Wu, Chaofeng Chen, Liang Liao, Jingwen Hou, Wenxiu Sun, Qiong Yan, and Weisi Lin. Discovqa: Temporal distortion-content transformers for video quality assessment. IEEE TCSVT, 33(9):4840–4854, 2023.
[44] Haoning Wu, Erli Zhang, Liang Liao, Chaofeng Chen, Jingwen Hou, Annan Wang, Wenxiu Sun, Qiong Yan, and Weisi Lin. Exploring video quality assessment on user generated contents from aesthetic and technical perspectives. In ICCV, pages 20144–20154, 2023. 3, 7
[45] Haoning Wu, Erli Zhang, Liang Liao, Chaofeng Chen, Jingwen Hou, Annan Wang, Wenxiu Sun, Qiong Yan, and Weisi Lin. Towards explainable in-the-wild video quality assessment: A database and a language-prompted approach. In ACM MM, pages 1045–1054, 2023. 2
[46] Haoning Wu, Zicheng Zhang, Erli Zhang, Chaofeng Chen, Liang Liao, Annan Wang, Kaixin Xu, Chunyi Li, Jingwen Hou, Guangtao Zhai, et al. Q-instruct: Improving low-level visual abilities for multi-modality foundation models. In CVPR, pages 25490–25500, 2024. 2, 3
[47] Haoning Wu, Zicheng Zhang, Weixia Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Yixuan Gao, Annan Wang, Erli Zhang, Wenxiu Sun, et al.
Q-align: Teaching lmms for visual scoring via discrete text-defined levels. In ICML, pages 54015–54029, 2024. 2, 3, 7
[48] Haoning Wu, Hanwei Zhu, Zicheng Zhang, Erli Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Annan Wang, Wenxiu Sun, Qiong Yan, et al. Towards open-ended visual quality comparison. In ECCV, pages 360–377. Springer, 2024. 2, 3
[49] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024. 6
[50] Jiabo Ye, Haiyang Xu, Haowei Liu, Anwen Hu, Ming Yan, Qi Qian, Ji Zhang, Fei Huang, and Jingren Zhou. mplug-owl3: Towards long image-sequence understanding in multi-modal large language models. In ICLR. 7
[51] Zhenqiang Ying, Maniratnam Mandal, Deepti Ghadiyaram, and Alan Bovik. Patch-vq: 'patching up' the video quality problem. In CVPR, pages 14019–14029, 2021. 2, 5, 7
[52] Zhiyuan You, Jinjin Gu, Zheyuan Li, Xin Cai, Kaiwen Zhu, Chao Dong, and Tianfan Xue. Descriptive image quality assessment in the wild. arXiv preprint arXiv:2405.18842, 2024. 2, 3
[53] Zhiyuan You, Zheyuan Li, Jinjin Gu, Zhenfei Yin, Tianfan Xue, and Chao Dong. Depicting beyond scores: Advancing image quality assessment through multi-modal language models. In
ECCV, pages 259–276. Springer, 2024. 2, 3
[54] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In ICCV, pages 11975–11986, 2023. 6
[55] Biao Zhang, Zhongtao Liu, Colin Cherry, and Orhan Firat. When scaling meets llm finetuning: The effect of data, model and finetuning method. In ICLR, 2024. 1
[56] Yuanhan Zhang, Jinming Wu, Wei Li, Bo Li, Zejun Ma, Ziwei Liu, and Chunyuan Li. Video instruction tuning with synthetic data. CoRR, 2024. 1
[57] Zicheng Zhang, Ziheng Jia, Haoning Wu, Chunyi Li, Zijian Chen, Yingjie Zhou, Wei Sun, Xiaohong Liu, Xiongkuo Min, Weisi Lin, et al. Q-bench-video: Benchmarking the video quality understanding of lmms. CVPR, 2025. 2
[58] Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. Lima: Less is more for alignment. NIPS, 36:55006–55021, 2023. 1
[59] Zhaokun Zhou, Qiulin Wang, Bin Lin, Yiwei Su, Rui Chen, Xin Tao, Amin Zheng, Li Yuan, Pengfei Wan, and Di Zhang. Uniaa: A unified multi-modal image aesthetic assessment baseline and benchmark. arXiv preprint arXiv:2404.09619, 2024. 2, 3

Scaling-up Perceptual Video Quality Assessment
Supplementary Material

A. Experiments Supplementary Materials

A.1. System prompts for training and evaluation

In all training processes and task evaluations, we set a unified system prompt (prefix) for all LMM models: "You will receive a keyframe sequence sampled at an average of one frame per second from a video of length seconds, with the keyframe sequence ordered in alignment with the video's temporal order. Please answer the following question based on the information provided." For our model, since motion feature extraction is involved, the system prompt also includes: "In addition, you will receive a motion feature sequence that corresponds to the number of frames in the video, num of frames."
For the spatiotemporal fine-grained tasks, which involve specific time-related questions, we introduce a standardized representation of time to ensure consistency in the responses from different models. The time point representation rule is as follows: "When the video starts playing, this timepoint is denoted as '1 second'. When the 1st second ends and the 2nd second begins, this timepoint is marked as '2 seconds', and so on."

A.2. Evaluation details

Quality rating evaluation details. We adopt the following procedure to assess the quality during evaluation:

Q = \sum_{i=1}^{5} \omega_i \frac{e^{P_{\mathrm{quality\_levels}}[i]}}{\sum_{j=1}^{5} e^{P_{\mathrm{quality\_levels}}[j]}},

where quality_levels refers to a list of predefined quality categories, [Excellent, Good, Fair, Poor, Low], and P denotes the model's logit outputs for each respective quality level. Specifically, the vector corresponding to the quality description word in the model's output sequence is first extracted, where its dimension matches the tokenizer's vocabulary size (located at the −3 index in our model). The logit values at the specific indices of this vector, which correspond to the 5 quality levels in the tokenizer's vocabulary (indices 1550, 1661, 6624, 7852, and 3347 in our model), are then selected. These logits are subsequently normalized using
the softmax function. The values ω represent the weight factors assigned to the normalized probabilities of each quality level, given by [1, 0.75, 0.5, 0.25, 0]. The resulting weighted sum of these probabilities produces the predicted quality score Q, which is confined within the range [0, 1]. We directly call model.forward() with the prompt extended to the word just before the quality level would be output, i.e., the input prompt would be: "How would you rate the quality of this video? The quality of this video is:" The final output corresponds to the token after the last newline or special token (like EOS) in the original prompt, which represents the quality level.

Quality understanding evaluation details. For the quality understanding task, we use model.generate() with greedy search to ensure the reproducibility of the results. For multiple-choice questions in the benchmarks, we compare the first letter of the output (usually the option selected) with the correct answer and report the accuracy. For open-ended questions and multiple-choice questions where the first letter is not an option, we use a SOTA LMM for judgment. For multiple-choice questions, we directly assess whether the answer is correct (scoring 0 or 1). For open-ended questions, we evaluate them based on 3 criteria: completeness, accuracy, and relevance, with a score given as 0, 1, or 2. The specific evaluation standard is as follows:

"Given the question, evaluate whether the response answer completely matches the correct answer correct ans. First, check the response and please rate score 0 if the response is not a valid answer. Please rate score 2 if the response completely or almost completely matches the correct answer on completeness, accuracy, and relevance. Please rate score 1 if the response partly matches the correct answer on completeness, accuracy, and relevance. Please rate score 0 if the response doesn't match the correct answer on completeness, accuracy, and relevance at all.
Please provide the result in the following format: Score:"

We set up 5 rounds of SOTA LMM scoring for each question. The final score for the question is determined by "majority voting", selecting the most frequently occurring score. Based on our experiments, there has been no instance where the score distribution resulted in a "2/2/1" split.

A.3. Model structure/Training hyper-parameters

The model structure and training hyper-parameters are detailed in Tab. 5.

Table 5. Details of the model structure and hyper-parameters for the model training. Entries with two values separated by a slash (shown in red / blue in the original) correspond to the hyper-parameters used in the first and second rounds of complementary training, respectively; entries with a single value remain consistent across both rounds.

Model Structure/Training Hyper-Parameters | Name/Value | More Information
Vision Tower | SigLIP-SO400m | Parameter size = 397.75M, tokens per keyframe = 196
Vision Projector | 2-layer MLP + GeLU | Parameter size = 16.98M
Motion Extractor | SlowFast-R50 | Parameter size = 34.16M, only the fast-path feature is used
Motion Projector | 2-layer MLP + GeLU | Parameter size = 13.77M, same structure as the vision projector
LLM init. | Qwen-2 (7B) | Decoder-only model, parameter size = 7660.56M
Keyframes Sampling Interval | 1 second | /
Keyframes Resolution | 336×336 | /
Frames (for motion extraction) Resolution | 224×224 | /
Batch Size (videos) | 8 | Per-device train batch size = 1
LR Max | 1e-5 / 1e-6 | /
LR Schedule | cosine decay | /
Warmup Epochs | 0.03 | /
Weight Decay | 0 | /
Gradient Accumulation Steps | 1 / 2 | /
Numerical Precision | bfloat16 | /
Epoch | 1 / 2 | /
Optimizer | AdamW | /
Activation Checkpointing | ✓ | /
Deepspeed Stage | 3 | /

A.4. Additional analysis for experiment results

Effects of merged training data for quality rating. We compare the rater trained on the merged dataset (the combination of LSVQ(train) and OmniVQA-MOS-20K(train)) with the rater trained on LSVQ(train) alone, with evaluation results on 6 test datasets. The results are presented in Tab. 6.

Table 6. Performance (SRCC / PLCC) of different training data for quality rating.

Training Datasets | LSVQ(1080p) | LSVQ(test) | LIVE-VQC | KoNViD-1k | YT-UGC | MOS-20K
LSVQ(train) | 0.829 / 0.849 | 0.904 / 0.893 | 0.857 / 0.872 | 0.892 / 0.886 | 0.826 / 0.835 | 0.802 / 0.791
Merged | 0.815 / 0.838 | 0.902 / 0.905 | 0.826 / 0.855 | 0.895 / 0.900 | 0.872 / 0.873 | 0.837 / 0.837

The effectiveness of the merged data varies depending on the evaluation dataset. We assume that the performance improvement from combined training is related to the similarity between the test set and the video content of OmniVQA-MOS-20K(train). Most notably, on the YT-UGC dataset, the merged training yields the most significant performance improvement. This is clearly due to the high consistency between the video sources of the YT-UGC dataset and OmniVQA-MOS-20K(train) (both are entirely or mostly sourced from UGC videos on the YouTube platform). This consistency leads to a strong alignment of video quality priors between OmniVQA-MOS-20K(train) and YT-UGC, thus producing the best performance when trained with the merged data.
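For reference, the weighted-softmax rating rule from Sec. A.2, used by the raters compared above, can be sketched as follows (a minimal sketch; the max-subtraction for numerical stability and the function name are our additions, and the example logit values are made up):

```python
import math

# Weights for the five quality levels [Excellent, Good, Fair, Poor, Low].
WEIGHTS = [1.0, 0.75, 0.5, 0.25, 0.0]

def quality_score(level_logits):
    """Weighted softmax over the five quality-level logits -> score in [0, 1].

    `level_logits` are the logits already gathered at the vocabulary indices
    of the five level tokens (e.g., indices 1550, 1661, 6624, 7852, 3347).
    """
    m = max(level_logits)                      # subtract max for stability
    exps = [math.exp(l - m) for l in level_logits]
    z = sum(exps)
    return sum(w * e / z for w, e in zip(WEIGHTS, exps))

# A logit vector strongly favoring "Excellent" yields a score near 1.
score = quality_score([10.0, 2.0, 1.0, 0.5, 0.0])
```

Because the weights are linearly spaced over [0, 1], the score degrades smoothly as probability mass shifts from "Excellent" toward "Low".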
For the KoNViD-1k and LSVQ(test), since the results obtained from LSVQ(train) training are already quite excellent (we believe close to the highest performance achievable for these datasets), the data augmentation does not lead to significant performance improvements.

Is the model-size scaling law evident in video quality understanding tasks? For open-source LMMs designed for general visual understanding tasks, the model-size scaling law is not evident in quality understanding tasks. This is observed in Internvl-2, LLaVa-Onevision, and Qwen2.5-vl. These models exhibit a clear parameter scaling law in general visual question answering tasks (40B or 72B models significantly outperform the 7B or 8B models). However, in video quality understanding tasks, whether general or fine-grained, increasing the model size does not result in significant performance improvements. The only exception is Qwen2-vl, where the Qwen2-vl (72B) outperforms the Qwen2-vl (7B) in both general and fine-grained tasks, making it the best-performing open-source LMM. Based on this observation, we hypothesize that in downstream tasks like quality understanding, the scaling law of model parameters is closely related to the amount of training data directly associated with or similar to the task. Different training data or strategies can affect the scaling effect of model parameters in such downstream tasks.

Deeper analysis on training strategies. We find that the difference between direct training and complementary training is more pronounced in the quality rating task, while the difference between mixed training and complementary training is more prominent in the quality
understanding tasks. The former is easier to explain, as the amount of training data for the quality rating task is much smaller than that for the quality understanding task. Therefore, the pretraining effect of the quality understanding data is evidently stronger, leading to a more noticeable difference in the quality rating task. For the latter, we hypothesize that the task objective in the quality rating task is much more direct than that in the quality understanding task, and its data composition is much simpler. Consequently, training with these relatively simple and consistent data structures, as opposed to the complex and varied data of the quality understanding task, reduces the model's ability to generalize across various types of quality understanding tasks and hinders the diversity of its output.

For example, in the OmniVQA-MOS-20K dataset, the Q&A pairs have questions like "How would you rate the quality of this video?" with answers corresponding to the quality level of the video. In contrast, in the OmniVQA-Chat-400K technical branch, many Q&A pairs contain questions like "How would you evaluate the quality of this video?", where the answers are the summarized video-level quality descriptions. While these two questions are semantically similar, their answers emphasize different aspects in their respective datasets.

Clearly, during mix training, where both types of questions are included in the same batch, the training strategy could negatively impact the model's learning direction. However, under complementary training, these two types of questions are not mixed within the same batch, effectively preventing mutual interference.

An alternative approach is to prepend a task-type descriptor (e.g., "rating" or "understanding") to differentiate between these tasks and guide the model's learning. However, this method would also require adding such prefixes during inference, increasing the computational cost.

B.
Additional Statistical Information

B.1. VQA datasets summary

Tab. 7 presents the statistical information of mainstream (UGC) VQA video datasets, sorted in chronological order of publication. This serves as supporting evidence for our assertions in Sec. 1. As observed, the majority of these VQA datasets lack the instruction component, making them unsuitable as direct training data for video quality understanding models. Additionally, the number of videos in these datasets rarely exceeds 20K. In contrast, the OmniVQA dataset series, particularly the OmniVQA-Chat-400K, stands out as the largest in scale to date in terms of the number of videos, the variety of covered tasks, and the total number of instruction pairs.

B.2. Statistical information for each branch

Fig. 8 displays the distribution of instruction lengths, the top 20 word frequencies, and the word cloud of the 3 branches in OmniVQA-Chat-400K.

C. Distorted Video Generation

The method for constructing spatially distorted videos in the in-context branch (Sec. 3.3) is as follows, where "level" refers to: "noticeable" (1), "relatively severe" (2), and "severe" (3). The distorted region is selected in one of the following locations: top-left, bottom-left, top-right, bottom-right, center, center-up, center-down, center-left, center-right. The detailed examples of distortion types and locations are illustrated
in Fig. 9.

1. Gaussian Blur (Blur): The Gaussian blur effect smooths the image by applying a filter with a kernel size determined by the level. The larger the level, the stronger the blur effect.

Blurred = GaussianBlur(Region, (16 × level + 1, 16 × level + 1), 0)

2. Overexposure (Brightness Increase): Overexposure increases the brightness of each pixel by 80 × level. The resulting pixel values are clamped to stay within the valid range of 1 to 254.

overexposed(x, y) = min(max(region(x, y) + 80 × level, 1), 254)

3. Underexposure (Brightness Decrease): Underexposure reduces the brightness of each pixel by 40 × level. Similar to overexposure, the pixel values are clamped to remain within the range 1 to 254.

underexposed(x, y) = min(max(region(x, y) − 40 × level, 1), 254)

4. Noise (Gaussian Noise): The noise distortion adds Gaussian noise to the image, where the standard deviation of the noise is σ = sqrt(250 × level). This introduces random variations in pixel values, creating a noisy effect.

noise(x, y, c) = region(x, y, c) + N(0, σ²),

where N(0, σ²) represents a Gaussian distribution with mean 0 and variance σ².

5. Compression Distortion (JPEG Compression): The compression distortion simulates the loss of quality due to JPEG compression, where higher levels result in lower JPEG quality. The compression quality is max(15 − 5 × level, 1).

compressed(x, y) = JPEG(region, quality = max(15 − 5 × level, 1))

D. Machine Annotation Process Detail

Annotation prompts for each branch. Figs. 10, 11, and 12 illustrate the technical branch prompts and the detailed annotation pipeline. Fig. 13 illustrates the in-context branch annotation prompts. Fig. 14 illustrates the aesthetic branch annotation prompts.

Technical branch examples. Figs. 15, 16, and 17 show 3 annotated examples of the technical branch.

Table 7. Summary of various VQA datasets.
| Datasets for VQA | # Videos | MOS | # MOS | Instruction | # Instruction | Description |
|---|---|---|---|---|---|---|
| LIVE-VQA | 160 | ✓ | 160 | ✗ | / | Full-reference video quality rating |
| CVD2014 | 234 | ✓ | 234 | ✗ | / | Quality assessment of video captured by cameras |
| LIVE-Qualcomm | 208 | ✓ | 208 | ✗ | / | Mobile in-capture video quality rating |
| KoNViD-1K | 1,200 | ✓ | 1,200 | ✗ | / | Unified UGC video quality rating |
| LIVE-VQC | 585 | ✓ | 585 | ✗ | / | Quality rating of real-world UGC videos |
| YouTube-UGC | 1,380 | ✓ | 1,380 | ✗ | / | Quality rating of UGC videos |
| LSVQ | 39,075 | ✓ | 39,075 | ✗ | / | Large-scale quality rating of UGC videos |
| LIVE-NFLX-I | 558 | ✓ | 558 | ✗ | / | Quality-of-experience (QoE) rating of hand-crafted streaming videos |
| LIVE-NFLX-II | 420 | ✓ | 420 | ✗ | / | QoE rating of real-world streaming videos |
| WaterlooSQoE-III | 450 | ✓ | 450 | ✗ | / | QoE rating of hand-crafted streaming videos |
| LBVD | 1,013 | ✓ | 1,013 | ✗ | / | QoE assessment of in-the-wild streaming videos |
| WaterlooSQoE-IV | 1,350 | ✓ | 1,350 | ✗ | / | Large-scale QoE assessment of hand-crafted streaming videos |
| TaoLive | 3,762 | ✓ | 3,762 | ✗ | / | Quality rating of live streaming (compressed) videos |
| Maxwell | 4,543 | ✓ | 4,543 | ✗ | / | Fine-grained (technical/aesthetic) quality rating of UGC videos |
| VQA2-Stage-1 | 12,385 | ✗ | / | ✓ | 12,385 | Pre-training MIDB for distortion recognition |
| VQA2-Stage-2 | 30,156 | ✓ | 30,156 | ✓ | 30,156 | Large-scale MIDB specially for video quality rating |
| VQA2-Stage-3 | 15,500 | ✗ | / | ✓ | 115,214 | Human-annotated MIDB for video quality understanding |
| OmniVQA-Chat-20K | 20,000 | ✓ | 20,000 | ✓ | 20,000 | Large-scale MIDB for quality rating of in-the-wild UGC videos |
| OmniVQA-MOS-400K | 86,716 | ✗ | / | ✓ | 402,987 | Machine-dominated MIDB for video quality understanding |

In-context branch examples. Fig. 19 illustrates 3 annotated examples from the in-context branch.

Aesthetic branch examples. Fig. 18 illustrates 2 annotated examples from the aesthetic branch.

E. Subjective Experiments

E.1. Subjective quality rating experiment

To establish a rigorous framework for perceptual evaluation of UGC video quality, we develop an annotation interface illustrated in Fig. 20. The interface comprises several key elements, including a designated textbox for group number entry, a continuous rating slider for quantifying perceived quality, and a "Replay" function for repeated video inspection. Furthermore, the system integrates a pre-determined quality range and triggers a prompt for re-evaluation if a rating falls outside the predefined parameters.

The subjective quality rating experiment is conducted in a standard laboratory environment, with each testing station equipped with two 1080p display devices; the video and the rating interface are presented on separate devices. A total of 52 participants take part in the subjective evaluation, ensuring that each video is rated by at least 10 subjects.

E.2. Human-in-the-loop selection experiment

To facilitate a human-in-the-loop methodology for video quality assessment, we implemented an interactive interface, depicted in Fig. 21. The interface incorporates several essential components, including an input field for group number specification, a suite of predefined quality options, and a text field enabling manual quality assessment annotation. Additionally, the system features sequential video presentations within designated groups.
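The interface safeguards described in Sec. E.1 amount to a small validation routine: flag a slider rating for re-evaluation when it falls outside the pre-determined quality range, and aggregate at least 10 ratings per video into a mean opinion score. A minimal sketch follows; the function names, the example range bounds, and the 0-100 slider scale are illustrative assumptions, since the paper does not publish its implementation.

```python
import statistics

# Hypothetical per-group quality range; the paper mentions a
# pre-determined range but does not state its values.
EXPECTED_RANGE = (20.0, 80.0)

def needs_reevaluation(rating, expected_range=EXPECTED_RANGE):
    """Trigger a re-evaluation prompt when a rating leaves the expected range."""
    lo, hi = expected_range
    return not (lo <= rating <= hi)

def mos(ratings, min_raters=10):
    """Mean opinion score over the collected ratings (at least 10 per video)."""
    if len(ratings) < min_raters:
        raise ValueError("not enough ratings for a stable MOS")
    return statistics.mean(ratings)
```

In practice the range check runs at submission time inside the rating interface, while the MOS aggregation runs once all subjects in a group have rated the video.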
The human-in-the-loop selection experiment is conducted under the same environmental and equipment conditions as the subjective quality rating experiment. A total of 32 expert-level participants are recruited, and the experiment is carried out after the sota LMM voting process is completed. In total, 11,500 out of 23,860 videos have at least one quality factor receiving a score of 0 in at least one voting round, requiring manual selection. Each of these videos, along with the problematic quality factors, is carefully reviewed by at least 3 participants. In cases where the results are inconsistent, majority voting is used to determine the final selection.

After completing the selection process, the human-selected quality dimension results are merged back into the original annotations. Subsequently, these videos undergo quality factor summarization and Q&A pair generation.

F. Benchmark Supplementary Materials

F.1. Human annotation interface

To facilitate human annotation of low-quality videos, we design a video annotation interface, as shown in Fig. 22. The interface consists of key components, including a section for annotators to input questions related to video distortion, corresponding answer fields based on different question types, and a selection for the correct answer. Additionally, annotators classify each question by type and quality concern. The interface also provides progress tracking and statistical summaries to help maintain a balanced distribution of annotations.

F.2. Detailed Annotation Examples

In Fig. 23 and Fig. 24, we present specific annotation examples from the OmniVQA-FG-Benchmark, including three human-annotated cases and five machine-annotated cases. These annotations focus
on different video quality concerns: spatial, temporal, and spatiotemporal aspects. Human annotations include three types of questions: Yes-or-No, What/How, and Open-ended, while machine annotations consist of two types: Yes-or-No and What/How.

G. Applications

G.1. Potential future applications

The most practical potential value of OmniVQA is its ability to expand the perceptual visual quality assessment MIDB at almost zero cost, not only in the video domain but also in the image domain. By replacing the paid sota LMM and Gemini with the most advanced open-source LMMs for local deployment, and substituting the paid model with a locally deployed reasoning model, the annotation costs can be reduced to nearly zero. This allows for the rapid creation of a large-scale MIDB that rivals manual annotations, with minimal sacrifice in data quality. Additionally, OmniVQA enables iterative rejection sampling: after acquiring a more performant model, the original expert model in the technical branch can be replaced with the newer version to continue rejection sampling, thereby iteratively improving the data quality of the MIDB. Moreover, the human-in-the-loop process allows data selected with human participation to be used for RLHF training, such as DPO, to further enhance the model's ability to make accurate annotations.

G.2. Applications in real-world scenarios

Figs. 25, 26, 27, and 28 show 4 real-world scenario case studies presented in the Gradio demo.

H. Limitations

The primary limitation of OmniVQA is that, due to computational resource constraints, we are unable to verify the scaling effect of model parameter size in VQA under sufficiently large-scale data volume conditions. Additionally, although our approach is predominantly machine-annotation-driven, the human-in-the-loop process still incurs a certain amount of time and cost, especially in the quality rating task, where reliance on human scoring remains unavoidable in our work.
These limitations hinder a comprehensive investigation into the feasibility of a fully machine-annotated MIDB, while effectively reducing dependence on large amounts of human annotation remains a crucial challenge in the visual quality assessment field. Addressing these limitations will be a key focus and inspiration for our future work.

I. Acknowledgements

We would like to express our sincere gratitude to all participants for their contributions to the human-in-the-loop subjective experiments.

Figure 8. Additional statistical information: the distribution of instruction lengths, the top 20 word frequencies, and the word cloud for each of the technical, in-context, and aesthetic branches of OmniVQA-Chat-400K.

Figure 9. Examples of spatial-distorted videos: each distortion type at the three severity levels (blur at center-down, overexposure at center, underexposure at center-up, noise at bottom-left, compression at top-left).

Technical Branch Step 1: Summary and Voting (for quality factor "fluency")

System Prompt: """You will receive 5 brief evaluations regarding to the {key} of a video: 1.{annotation[key][0]} 2.{annotation[key][1]} 3.{annotation[key][2]} 4.{annotation[key][3]} 5.{annotation[key][4]}. These evaluations correspond to five different formulations of the same question.
(SUMMARY RULES, NOTE THAT THERE IS NO VOTING PROCESS FOR FACTOR “FLUENCY”) Please summarize these evaluations into | https://arxiv.org/abs/2505.22543v1 |
a comprehensive assessment. The requirements are as follows: if any of the five descriptions contain at least one meaning that is equivalent to or closely to 'the video does not play smoothly and experiences stuttering', then the final summary should be based solely on these descriptions. Additionally, please assign a score of 1 and subsequently provide an alternative summary with the opposite meaning (e.g., if the initial summary indicates stuttering, the alternative should indicate smooth playback). Only when all five descriptions are equivalent to or closely to 'the video plays smoothly without any stuttering' should the summary reflect the collective meaning of these 5 descriptions, followed by a score of 0. If there exists any description in these five with a meaning equivalent to or closely approximating 'the video segment does not play smoothly and experiences stuttering', the output format should be: Summary: + Score: + Opposite Summary:. If all five descriptions are equivalent to or closely approximate 'the video plays smoothly', the output format should be: Summary: + Score:. You should simulate as if you have derived the summary from the video itself, so do not reveal any trace of the provided 5 descriptions.""" Step 2: Summary and Voting (for other quality factors) System Prompt:"""You will receive 5 brief evaluations regarding to the {key} of a video:1.{annotation[key][0]}2.{annotation[key][1]}3.{annotation[key][2]}4.{annotation [key][3]}5.{annotation[key][4]}. (Please note that if the description is related to the 'flicker' of the video, the term 'flicker' here refers solely to 'frame shake/camera shake' and carries no additional meaning.These evaluations correspond to five different formulations of the same question. (SUMMARY RULES) Please summarize these evaluations into a comprehensive assessment. 
The requirements are as follows: if the evaluations are same or similar in meaning, combine the core content of the evaluations, refine the language, and slightly expand without altering the original meaning. If the 5 evaluations contain obviously conflicting meanings with each other (e.g., some rated it as 'there are noises',others as 'no noise'. e.g. some rated it as 'the motion blue is severe',others as 'the motion blue is slight' ), select the meaning with the highest frequency in the five descriptions (3 in 5 at least)(POSITIVE) and do not contain the meanings of the conflicted descriptions (NEGATIVE) to it in your answer and, based on the selected meaning, refine and expand the statement without changing the original meaning. If there is information which is not in the previous conflicted information (NEUTRAL), then directly combine it into the summarized evaluation. Note that the summarized evaluation should be as precise as it is possible, with no more than one sentence with only one single clause. Directly output your summarized evaluation." + \ "You should simulate as if you have derived the summary from the video itself, so do not reveal any trace of the provided 5 descriptions"Figure 10. Annotation prompts for technical branch (1). The highlighted content in red represents the summary or key points of each step. (VOTING CRITERIA) "You will receive a keyframe sequence of a video with {len(frames)} frames (sample 2 frames per second) and a comprehensive quality description regarding to the | https://arxiv.org/abs/2505.22543v1 |
{key} of the video: {annotation[key][5]} (Please note that if the description is related to the 'flicker' of the video, the term 'flicker' here refers solely to 'frame shake/camera shake' and carries no additional meaning.). Please carefully observe the keyframe especially on the {key} aspect) and assess whether you agree with the provided evaluation description. Rate your assessment according to the following guidance:" + \ "2 points: The evaluation description is largely consistent with the key frame sequence, with only minor inaccuracies or non-standard description. Overall, the evaluation is in alignment with the sequence. One key factor to consider is whether the entire video (all sampled frames) is consistent with the {key} description. If there is a change in the {key} during the video playback that is not reflected in the description, a score of 2 cannot be given." + \ "1 point: The evaluation description shows some deviation from the key frame sequence, primarily due to inconsistencies in temporal quality or degree of distortions or noticeable inaccurate descriptions of quality factors." + \ "0 points: The evaluation description is largely or completely inconsistent with the observed key frame sequence; the described elements do not appear in the sequence or exhibit significant discrepancies or shows great temporal difference. Please provide your reason if you rate 0 mark." + \ (POST PROCESSING METHODS) "If you give a score of 2 points, please further refine the original description based on your observation in the frames without altering its original meaning, overall structure and sentence patterns. If you give a score of 1 point, please give your reason and correct the original description, aiming to preserve a similar sentence structure and style, while making it more academic. If you give a score of 0 points, please provide the reasoning behind this score, including what you consider to be the correct description. 
Please follow the structure and requirements outlined below when you rate 2/1/0 (please give the score you rate before making subsequent movement):" + \ "2 points. Then provide the refined description without any changes (directly give the refined description without any analysis and prefix);" + \ "1 point. Then provide your reason and corrected description (the output structure should be: Reason: + Corrected description:);" + \ "0 points. Then provide your reason and corrected description (the output structure should be: Reason: + Corrected description:). Please do not provide any other content or format other than this."

Step 2: Video-level Description Summary

f"You will receive 8 brief evaluation descriptions regarding to the sharpness, light, compression_artifact, fluency, flickers, color, motion blur and noise of a video: sharpness:{annotation['sharpness'][5]}, light:{annotation['light'][5]}, compression_artifact:{annotation['compression_artifact'][5]}, fluency:{annotation['fluency'][5]}, flickers:{annotation['flickers'][5]}, color:{annotation['color'][5]}, motion_blur:{annotation['motion blur'][5]}, noise:{annotation['noise'][5]}. (Please note that the term 'flicker' here refers solely to 'frame shake/camera shake' and carries no additional meaning.) Please summarize these evaluations into a comprehensive assessment paragraph with relatively concise and academic style and keep all the original meanings of the descriptions. Directly output your summarized evaluation." + \ "You should simulate as if you have derived the summary from the video itself, so do not reveal any trace of the provided 8 descriptions"

Figure 11. Annotation prompts for technical
branch (2). Step 2: Instruction Pairs Generation (for openended) System Prompt:"""You will receive a detailed description of the quality of a video regarding to 8 quality factors of the video: sharpness, light, compression_artifact, fluency, flickers, color, motion blur and noise (Please note that the term 'flicker' here refers solely to 'frame shake/camera shake' and carries no additional meaning.). Based on this description, create exactly three questions about specific factors of the video quality in the eight factors and provide the corresponding answers. The questions and answers should only be in the form of open-ended chat without choices. When designing question-answer pairs, please follow these rules: 1.Avoid designing overly simple question-answer pairs that merely rewrite or copy content directly from the given description. The question-answer pairs should require a certain level of reasoning difficulty. 2.At least one extended conversation-style question should be designed (i.e., a 'How' or 'Why' question). If the description indicates that a particular quality factor of the video is poor, please provide a reasonable solution to improve this quality factor based on your expertise. The solution should be as professional as possible, including post-processing or re-shooting suggestions. If the description specifies the reason behind a particular quality factor of the video, summarize this reason into a question-answer pair). 
Please ensure that the questions and answers are fully based on the content of the video quality description without including any information beyond what is stated."""Step 2: Instruction Pairs Generation (for choices Q/A) System Prompt:"""You will receive a detailed description of the quality of a video regarding to seven quality factors of the video: sharpness, light, compression_artifact,fluency, flickers, color, motion blur and noise (Please note that the term 'flicker' here refers solely to 'frame shake/camera shake' and carries no additional meaning.). Based on this description, create exactly three questions about specific factors of the video quality in the 8 factors and provide the corresponding answers.""" """The questions and answers should only be in the form of yes/no choices (e.g., Question: Is the sharpness of the video high throughout the playback? A. Yes B. No Answer: A.) and multi-choice (single answeer) question (e.g., Question: Which part of the video is affected by overexposure? A. The part of the bamboo hit by sunlight B. The sky C. The entire bamboo structure D. The beach. Answer: A. The part of the bamboo hit by sunlight.). Please ensure that the questions and answers are fully based on the content of the video quality description without including any information beyond what is stated. Below is an example:" + " Quality Description: The video playback is smooth and fluent. In the first half, when the video is indoors, the picture is stable without flicker. However, when the scene switches to the playground, the flicker becomes noticeable, and both the sharpness and overall brightness of the image decrease. (Additionally, there is a certain level of compression blur throughout the entire video.)" + " Example Questions and Answers:" + " Question 1: Question: Is the video playback smooth and fluent? A. Yes B. No Answer: A. Yes" + " Question | https://arxiv.org/abs/2505.22543v1 |
2: Question: How is the flicker condition when the scene switches to the playground? A.Very severe B.Noticeable C.Mild D.Does not exist. Answer: B.Noticeable." + " Question 3: Question: How is the sharpness throughout the video playback? Answer: Due to compression blur present throughout the video, the sharpness is relatively poor." + " Note: Avoid questions like 'How is the brightness of the video?' as there is no absolute assessment of brightness quality in the description, only a mention of its decrease." + " Please format the output as follows: Question 1: Question: Answer:; Question 2: Question: Answer:; Question 3: Question: Answer:." + f" The video quality depiction is: {annotation['overall_annotation']}."""Figure 12. Annotation prompts for technical branch (3). In-Context Branch Step 1: Distorted Objects Recognition and Description System Prompt:"""You will receive a series of keyframes sampled every second from a video with manually added distortions, where the distorted regions are highlighted with bounding boxes. Please assess whether the contents inside the bounding box in the video are worth annotating according to the following criteria: (SEMANTIC ANNOTATION CRITERIA) 1. The content inside the bounding box should exhibit a significant semantic difference from the surrounding area outside the bounding box. For instance, if the bounding box contains a part of a human body while the surrounding area shows a background wall, this would be valuable for annotation. However, if both the bounding box and surrounding area consist of the same content, such as a uniform grass field, the scene would not be worth annotating. 2. The content inside the bounding box should ideally contain a complete, describable object, such as a human face, the body of a person, or the right half of a face. Additionally, the object should occupy a relatively large proportion of the bounding box. 3. 
The content inside the bounding box should maintain temporal consistency, meaning there should be no significant semantic changes or scene transitions within the sequence of frames containing the bounding box. If all three criteria are met, the region is considered annotatable. Please score the region as follows: - If annotatable: Score: 1. Description: [Detailed description of the object inside the bounding box (If referring to a specific part of an object, please provide detailed information on its exact location. For instance, if the distortion occurs in the left half of the face, the description should specify "the left half of the face" rather than merely stating "a portion of the face.") (directly output the description of the object without any prefix)]. - If not annotatable: Score: 0. Please provide your evaluation in the specified format.""" Step 2: Instruction Generation System Prompt:"""You will receive a description of spatiotemporal local distortions in a video: {incontext_description}. First, please refine the description to make it more fluent and grammatically correct, while retaining its original meaning and structure. Then, based on the revised description, please generate four questions related to the spatiotemporal local distortion in the video. The four questions should include the following types: 1. Ask and answer the specific start and end seconds or the duration of the spatiotemporal local | https://arxiv.org/abs/2505.22543v1 |
distortion in the video. 2. Ask and answer the location of the spatiotemporal local distortion within the video frame. If only a general description of the location is provided (such as center, top left, bottom right, etc.), answer with a location description. If the description contains specific semantic objects about the distorted area (such as a part of the human body or face), then you must respond using that specific semantic information. 3. Ask what specific distortion occurred at a particular location and for a specific object (if there is semantic information provided), and inquire about the severity of the distortion. The answer should only use the information in the descriptions without adding any additional message. The question formats should be as follows: 1. Yes/No question (with two options, single answer): Format: "Question + A. Option one, B. Option two Answer: correct choice + answer". 2. Multiple choice (four options, single answer): Format: "Question + A. Option one, B. Option two, C. Option three, D. Option four Answer: correct choice + answer". 3. Open-ended question: Format: "Question Answer". Please ensure that the four different question-answer pairs have diverse question formats and you don't have to set the three types of questions in order, you can shuffle them in any sequences. Final output should be in the following format: Revised description: [Revised description here] Question 1: Question: Answer: Question 2: Question: Answer: Question 3: Question: Answer: Question 4: Question: Answer:. You should simulate as if you have seen the video itself, so do not reveal any trace of the given description."""Figure 13. Annotation prompts for in-context branch Aesthetic Branch Step 1: Aesthetic Analysis System Prompt:"""You will receive a sequence of keyframes sampled at one frame per second from a "in the wild" UGC video. Please analyze the aesthetic effects of this video following these three steps: (STYLE) 1. 
First, summarize the overall aesthetic style of the video based on the keyframe sampling sequence. Please aim to be concise, yet accurate and professional in your description. (SPATIAL-TEMPORAL ANALYSIS) 2. In the second step, analyze the aesthetic effects of the video from the following two perspectives: a. Spatial: Provide a brief analysis of the compositional beauty and rationality, clarity of the main subject, and the comfort and accuracy of the main content. Please analyze the spatial aesthetic style of the video, making sure to consider the content and style of the video itself, as different types of videos may employ different spatial aesthetic techniques. b. Temporal: Provide a brief analysis of the consistency and rationality of the camera movements, and the aesthetic effects of the camera work. Also, assess whether the spatial aesthetic effect changes throughout the video playback. If there is a change, please identify this. Note that if the video is clearly not a camera-captured video (such as animation or CG), do not analyze camera movements. (EMOTION REPERCUSSION) 3. Finally, provide a precise one-sentence description of the human emotional feedback the video evokes (i.e., what kind of feeling it gives to human) (Please note that the emotional response evoked by the video should | https://arxiv.org/abs/2505.22543v1 |
not only include the emotional feedback arising from its content or artistic style, but also consider the comfort level of its presentation. For example, videos with excessive flickering or rapid scene transitions may cause dizziness and discomfort, while videos with stable scene transitions and well-composed shots tend to make the viewer feel comfortable and pleased.), and briefly evaluate the overall aesthetic effect of the video using a single adverb of degree (e.g., good, fair, poor, high, etc.). """ Step 2: Instruction Generation System Prompt:"""You will receive a detailed description of the aesthetic effects of one video: {annotation['aesthetic_description']}. Based on the description, please generate four questions only related to the aesthetic effects in the video. The four questions must include the following types: 1. Inquire and answer the overall style of this video. 2. Inquire and answer the emotional feedback this video evokes. 3. Inquire and answer a specific aesthetic characteristic regarding the spatial or temporal aspect of this video. The question formats should be as follows: 1. Yes/No question (with two options, single answer): Format: "Question + A. Option one, B. Option two Answer: correct choice + answer". 2. Multiple choice (four options, single answer): Format: "Question + A. Option one, B. Option two, C. Option three, D. Option four Answer: correct choice + answer". 3. Open-ended question: Format: "Question Answer". Please ensure that the four different question-answer pairs have diverse question formats and you don't have to set the three types of questions in order, you can shuffle them in any sequences. Final output should be in the following format: Question 1: Question: Answer: Question 2: Question: Answer: Question 3: Question: Answer: Question 4: Question: Answer:. You should simulate as if you have seen the video itself, so do not reveal any trace of the given description. """Figure 14. Annotation prompts for aesthetic branch. 
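Before turning to the annotated examples, note that the distortion formulas of Appendix C reduce to a few array operations per region. Below is a minimal NumPy sketch; the function names are mine, and the blur and JPEG steps are shown only as parameter helpers, since in practice they would be delegated to a library such as OpenCV or Pillow (the paper gives the formulas, not an implementation).

```python
import numpy as np

def overexpose(region, level):
    # Brightness increase (Appendix C, item 2): add 80*level, clamp to [1, 254].
    return np.clip(region.astype(np.int32) + 80 * level, 1, 254).astype(np.uint8)

def underexpose(region, level):
    # Brightness decrease (item 3): subtract 40*level, clamp to [1, 254].
    return np.clip(region.astype(np.int32) - 40 * level, 1, 254).astype(np.uint8)

def add_gaussian_noise(region, level, rng):
    # Gaussian noise (item 4): standard deviation sigma = sqrt(250 * level).
    sigma = (250.0 * level) ** 0.5
    noisy = region.astype(np.float64) + rng.normal(0.0, sigma, region.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def blur_kernel_size(level):
    # Gaussian blur (item 1): square kernel of side 16*level + 1,
    # e.g. cv2.GaussianBlur(region, (k, k), 0) with k = blur_kernel_size(level).
    return 16 * level + 1

def jpeg_quality(level):
    # JPEG compression (item 5): quality = max(15 - 5*level, 1).
    return max(15 - 5 * level, 1)
```

Applying one of these operators to the selected sub-region of every frame, at the chosen severity level, yields the spatially distorted videos of the in-context branch.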
video_path: 270dc3d47d1a3dba65ce30a620e07088547b9c0af33265ad833f9acafa1f9d5e-00001.mp4

"sharpness": [
"The video is slightly blurred, and the facial details of the characters are not clear.",
"The sharpness is fair.",
"The sharpness is fair.",
"The sharpness is poor throughout the entire video.",
"The sharpness is fair.",
"The video exhibits generally fair sharpness, though some slight blurring reduces clarity, particularly in facial details.",
"2 points.", "2 points.", "2 points." ],

"light": [
"The video's light is relatively low, and the distribution of light is uneven.",
"The video is relatively dark throughout its playback.",
"The overall light of the video is relatively low.",
"The light is relatively low, and the lighting is dim and unnatural.",
"The light is relatively low, and the lighting is dim and unnatural.",
"The video suffers from consistently low and uneven lighting, resulting in a dim and unnatural appearance.",
"2 points.", "2 points.", "2 points." ],

"compression_artifact": [
"The video has noticeable compression artifacts throughout, with significant block effects.",
"Compression artifacts are noticeable in the video, especially on the edges of the characters and the background buildings.",
"The compression artifacts are relatively mild, and the overall image is quite clear.",
"The video has severe compression artifacts, with noticeable block effects.",
"The compression artifact in this video is relatively mild, and the overall image is quite clear.",
"Compression artifacts are noticeable, particularly in detailed areas like character outlines and background elements, ranging from mild to severe blockiness.",
"2 points.", "2 points.", "2 points." ],

"flickers": [
"The flicker is noticeable throughout the video.",
"There is a slight flicker throughout the video playback.",
"There is a slight flicker throughout the video.",
"The flicker is relatively slight, and the overall viewing experience is fair.",
"The camera shake is severe, and the flicker is noticeable.",
"The video exhibits a persistent, though generally slight, camera shake.",
"2 points.", "2 points.", "2 points." ],

"fluency": [
"The video plays fluently, with no stuttering.",
"The video playback is smooth.",
"The video is fairly fluent throughout.",
"The video plays smoothly and fluently.",
"The video playback is smooth and fluent, with no stuttering.",
"The video plays smoothly without any stuttering." ],

Step 1: Multiple Sampling from Domain Expert Model

"color": [
"The color of this video is relatively vivid, with a high saturation.",
"The color is relatively faded, and the colorfulness is insufficient.",
"The color tone of this video is relatively cool, and the overall color is quite dark.",
"The colorfulness of this video is relatively low, and the overall tone is quite cool.",
"The color is relatively faded.",

Step 2: Summary and refine

"This video exhibits slightly muted colors and a neutral tone, with no significant indications of extreme coolness or darkness. The frames maintain fairly consistent brightness throughout.",

Step 2: Voting (0 points / 1 point / 2 points)

"**Score: 0 points** **Corrected description**: This video features moderately saturated colors with a neutral and slightly subdued tone. The overall atmosphere appears clear and calm, with no significantly cool or dark coloring present.",
"**1 point** **Corrected description:** This video exhibits slightly muted colors and a neutral tone, with no significant indications of extreme coolness or darkness. The frames maintain fairly consistent brightness throughout.",
"2 points." ],

"motion blur": [
"The video has severe motion blur throughout, with noticeable ghosting.",
"Yes, there is noticeable motion blur in the video.",
"The video has severe motion blur throughout, with noticeable ghosting.",
"The video has severe motion blur throughout, with noticeable ghosting.",
"The video has severe motion blur, and the characters' movements are not smooth.",
"The video suffers from significant motion blur, resulting in noticeable ghosting artifacts, particularly during character movements.",
"2 points.", "2 points.", "2 points." ],

"noise": [
"The video has noticeable noise throughout.",
"The video has slight noise throughout, which slightly affects viewing.",
"No, there is no noise.",
"There is a lot of noise in the video.",
"The video has noticeable noise.",
"The video exhibits a noticeable amount of noise.",
"2 points.", "2 points.", "2 points." ],

Step 3: Instruction Pairs Generation

"overall_annotation": "This video exhibits fair sharpness, albeit slightly diminished by blurring, especially in facial details. Low and uneven lighting contributes to a dim, unnatural appearance. Compression artifacts, ranging from mild to severe blockiness, are evident, primarily in detailed areas. While the video plays smoothly,
persistent camera shake is noticeable. Colors appear slightly muted with a neutral tone and consistent brightness. Significant motion blur, resulting in ghosting artifacts, is apparent, particularly during movement. The video also suffers from a noticeable level of noise.", "quality Q1": "Question: How does the lighting affect the overall video quality? Answer: The low and uneven lighting makes the video appear dim and gives it an unnatural look, detracting from the overall quality.", "quality Q2": "Question: Why are details, especially facial features, difficult to discern in this video? Answer: The combination of blurring, which reduces sharpness, and compression artifacts obscures finer details, making things like facial features less distinct. Additionally, the motion blur contributes to the lack of clarity in moving subjects, including faces.", "quality Q3": "Question: How can the issues of motion blur and camera shake be addressed to improve the video's quality? Answer: To reduce motion blur, try increasing the shutter speed during recording or use a camera with a higher frame rate. In post-production, motion blur can be partially corrected with deblurring software, but results may vary. The persistent camera shake suggests the need for stabilization. During filming, using a tripod or gimbal would significantly improve stability. In post-processing, stabilization software can help reduce shake, but it might also introduce some cropping or distortion. For future recordings, ensuring the camera is held steady or using stabilization equipment is crucial.", "quality Q4": "Please give the description of the evaluation of the overall quality of this video, please evaluate as many quality factors as possible. Answer: This video exhibits fair sharpness, albeit slightly diminished by blurring, especially in facial details. Low and uneven lighting contributes to a dim, unnatural appearance.
Compression artifacts, ranging from mild to severe blockiness, are evident, primarily in detailed areas. While the video plays smoothly, persistent camera shake is noticeable. Colors appear slightly muted with a neutral tone and consistent brightness. Significant motion blur, resulting in ghosting artifacts, is apparent, particularly during movement. The video also suffers from a noticeable level of noise." Video Input Figure 15. Technical branch annotation example (1). (Note that the "summary and refine" process is done after the human-in-the-loop selection process if there is "0" in voting results.) video_path: 6209bbb589afa1da181dea911ab6763c981ccedd1e473e13c40873d786105985-00000.mp4 "sharpness": [ "The video is relatively clear, with the text in the video being quite clear.", "The sharpness is fair.", "The sharpness of the video is fair.", "The sharpness is fair throughout the video.", "The sharpness is fair.", "The video exhibits acceptable sharpness.", "2 points.","2 points.","2 points." ], "light": [ "The light distribution is uneven, with the brightness of the text in the video being relatively high, while the background light is relatively low.", "The video is bright throughout its playback.", "The light is relatively dark.", "The light is moderate and natural.", "The light is relatively high, resulting in overexposure.", "The video lighting is inconsistent, with some areas overexposed and others, particularly the background, underexposed.", "2 points.","2 points.","2 points." ], "compression_artifact": [ "The video has severe compression artifacts, with noticeable
block effects and significant blur.", "The compression artifact is noticeable in the video, with a significant loss of edge detail.", "The video has severe compression artifacts, with noticeable block effects in the text areas.", "The compression artifacts in the video are severe, with noticeable block effects.", "The compression artifact in the video is relatively mild, and the overall image is quite clear.", "The video exhibits severe compression artifacts, including noticeable blockiness and a loss of detail, particularly in text and edge areas.", "2 points.","2 points.","2 points." ], "flickers": [ "The flicker is slight and does not affect viewing.", "The video playback is smooth, with no flicker.", "The video has slight flicker throughout.", "The video has a slight flicker, which is caused by the rapid changes in light.", "The camera shake is stable, with no flicker.", "The video exhibits minimal camera shake that does not detract from the viewing experience.", "2 points.","2 points.","2 points." ], "fluency": [ "The video is fairly fluent throughout, with no obvious stuttering.", "The video is smooth throughout.", "The video is fairly fluent throughout.", "The video playback is smooth and fluent.", "The video playback is smooth and fluent, with no stuttering.", "The video plays smoothly and fluently." ], Step1: Multiple Sampling from Domain Expert Model "color": [ "The video's color is relatively rich, with high saturation and high contrast.", "The video's color is relatively rich, and the color saturation is high.", "The color tone of this video is relatively cool, and the colorfulness is not high.", "The video's colorfulness is relatively high.", "The colors are vivid.", Step2: summary and refine "The video initially exhibits vibrant colors with high saturation and contrast, particularly in the earlier frames.
However, the latter portion transitions to monochromatic and plain text visuals, which lack the vibrancy and color dynamics present at the beginning.", Step2: Voting 0 points: ; 1 point: ; 2 points: ; "0 points. Corrected Description: The video initially exhibits vibrant colors with high saturation and contrast. However, the majority of the frames transition to muted, text-based content on a plain white background, resulting in significant variation in color characteristics throughout the sequence.", "1 point. Corrected description: The video initially exhibits vibrant colors with high saturation and contrast, particularly in the earlier frames. However, the latter portion transitions to monochromatic and plain text visuals, which lack the vibrancy and color dynamics present at the beginning.", "2 points." ], "motion blur": [ "The motion blur is relatively mild.", "No, the video playback is smooth and fluent.", "The video has severe motion blur, with noticeable ghosting in the text and background images.", "The video is composed of several static images, and there is no motion blur.", "The video has noticeable motion blur throughout.", "The video exhibits noticeable motion blur.", "2 points.","2 points.","2 points." ], "noise": [ "The video has noticeable noise throughout.", "The video has slight noise throughout.", "No, there is no noise.", "The video has noticeable noise throughout.",
"There is a small amount of noise in the video.", "The video exhibits a moderate level of noise.", "2 points.","2 points.","2 points." ] Step3: Instruction Pairs Generation "overall_annotation": "This video suffers from inconsistent lighting, with overexposed foregrounds and underexposed backgrounds. Significant compression artifacts manifest as blockiness and detail loss, especially in text and along edges. While the video plays smoothly, noticeable motion blur detracts from the visual quality. A moderate level of noise is also present. Color reproduction is inconsistent, shifting from initially vibrant and saturated hues to a monochromatic, text-based presentation lacking the earlier dynamism. Finally, minimal camera shake is observed, but it does not significantly impact the viewing experience.", "quality Q1": "Question: How does the lighting affect the overall visual quality of the video? Answer: The inconsistent lighting, with overexposed foregrounds and underexposed backgrounds, creates an unbalanced and visually jarring effect, diminishing the overall quality.", "quality Q2": "Question: Why is the clarity of details, especially text and edges, compromised in this video? Answer: The significant compression artifacts, manifesting as blockiness and detail loss, are the primary reason for the reduced clarity of text and edges in the video. This suggests an overly aggressive compression setting was used during encoding.", "quality Q3": "Question: Despite the smooth playback, what detracts from the sharpness and clarity of moving objects in the video? How could this be improved in future recordings? Answer: Noticeable motion blur is impacting the sharpness and clarity of moving objects. To mitigate this, several approaches can be taken during filming or post-production. During filming: using a faster shutter speed, ensuring ample lighting to allow for the higher shutter speed, and stabilizing the camera can help reduce motion blur.
In post-production, motion blur reduction algorithms can be applied, though these may introduce artifacts or unnatural smoothness if used excessively. Also, increasing the frame rate during capture can minimize motion blur by providing more temporal information.", "quality Q4": "Please give the description of the evaluation of the overall quality of this video, please evaluate as many quality factors as possible. Answer: This video suffers from inconsistent lighting, with overexposed foregrounds and underexposed backgrounds. Significant compression artifacts manifest as blockiness and detail loss, especially in text and along edges. While the video plays smoothly, noticeable motion blur detracts from the visual quality. A moderate level of noise is also present. Color reproduction is inconsistent, shifting from initially vibrant and saturated hues to a monochromatic, text-based presentation lacking the earlier dynamism. Finally, minimal camera shake is observed, but it does not significantly impact the viewing experience." Video Input Figure 16. Technical-branch annotation example (2). (Note that the "summary and refine" process is done after the human-in-the-loop selection process if there is "0" in voting results.) video_path: 6c7bdec1f12c6a030bf1513afb9b1fbb52ca7ecb052a91d2e826b6d5b83187f9-00000.mp4 "sharpness": [ "The video is slightly blurred, and the details of the ground tiles are relatively blurred.", "The sharpness is fair throughout the video.", "The sharpness is fair.", "The sharpness is poor throughout the entire video.", "The sharpness is fair.", "The video exhibits acceptable sharpness, though
some details, such as ground tiles, appear slightly soft.", "2 points.","2 points.","2 points." ], "light": [ "The light is relatively uniform, with no overexposure or underexposure issues.", "The video is bright throughout its playback.", "The overall light of the video is relatively dark.", "The light is moderate and natural.", "The light is soft and not glaring.", "The lighting is generally soft, even, and appropriately bright, creating a natural look.", "2 points.","2 points.","2 points." ], "flickers": [ "The flicker is slight and does not affect the viewing experience.", "There is a slight flicker throughout the video playback.", "There is a slight flicker throughout the video.", "There is a slight flicker, but it is not serious.", "The camera shake is relatively slight, and the video is stable without flicker.", "The video exhibits minimal camera shake and remains stable.", "2 points.","2 points.","2 points." ], "color": [ "The color of this video is relatively single, and the overall tone is warm.", "The color saturation is relatively low, and the color is relatively faded/single.", "The color tone of this video is warm.", "The colorfulness is relatively low, and the overall tone is warm.", "The overall color is relatively faded.", "This video exhibits a warm color tone with low saturation, resulting in a somewhat faded and less vibrant appearance, consistent throughout all sampled frames.", "2 points.","2 points.","2 points." ], Step1: Multiple Sampling from Domain Expert Model "motion blur": [ "The video has a slight flicker throughout, and the motion blur is quite severe.", "Yes, there is a certain degree of motion blur in the video.", "The video playback is smooth, and the camera trajectory is relatively stable.
However, the image is slightly blurred, with noticeable compression artifacts.", "The motion blur is relatively severe, and the details of the characters are severely lost.", "The video has severe motion blur, with noticeable ghosting.", Step2: summary and refine "The video suffers from significant motion blur, resulting in a loss of detail, particularly during movement." Step2: Voting 0 points: ; 1 point: ; 2 points: ; "0 points. **Corrected Description:** The video shows clear and well-defined frames, with no significant motion blur affecting the detail, even during movement.", "2 points.", "2 points." ], "noise": [ "The video has noticeable noise throughout, especially in the background buildings and the ground.", "The video has almost no noise.", "No, there is no obvious noise in this video.", "The video has noticeable noise throughout.", "The video has noticeable noise throughout.", "The video exhibits noticeable noise, particularly in background details like buildings and the ground.", "2 points.","2 points.","2 points." ], "compression_artifact": [ "The video has noticeable compression artifacts, with block effects being quite prominent.", "The compression artifact is noticeable and affects the viewing experience.", "The compression artifact is relatively mild, and the overall image is quite clear.", "The video has noticeable compression artifacts, and the block effect is quite obvious.", "The video has severe compression artifacts, with noticeable block effects in the background of the image.",
"The video suffers from noticeable compression artifacts, particularly evident as blockiness.", "2 points.","2 points.","2 points." ], "fluency": [ "The video playback is smooth and fluent, with no stuttering.", "The video is smooth.", "The video is fairly fluent throughout.", "The video plays smoothly and fluently.", "The video playback is smooth and fluent, with no stuttering.", "The video plays smoothly and fluently." ], Step3: Instruction Pairs Generation "overall_annotation": "This video presents acceptable sharpness, although finer details exhibit some softness. While the lighting is generally even, soft, and appropriately bright, contributing to a natural appearance, the video suffers from noticeable compression artifacts and significant motion blur, especially during movement. The warm color palette, coupled with low saturation, yields a somewhat faded and less vibrant aesthetic. Noticeable noise is present, particularly in background details. Though camera shake is minimal, the overall visual quality is impacted by the aforementioned motion blur and compression artifacts.", "quality Q1": "Question: How does the lighting contribute to the overall aesthetic of the video, and does it complement the color palette? Answer: The lighting is soft, even, and appropriately bright, creating a natural look. However, this soft lighting combined with the warm, desaturated color palette results in a slightly faded and less vibrant appearance.", "quality Q2": "Question: What are the primary issues affecting the clarity and detail of the video? Answer: The video suffers from noticeable compression artifacts which reduce fine detail clarity, and significant motion blur, further degrading the clarity, particularly during movement.
While the base sharpness is acceptable, these two factors combine to soften the image and obscure detail.", "quality Q3": "Question: Why is the video's visual quality impacted despite minimal camera shake, and what could be done to improve it during post-production or in future recordings? Answer: Although camera shake isn't a major issue, the significant motion blur and compression artifacts are the primary culprits degrading the video's quality. For post-production, deblurring algorithms and subtle sharpening could help mitigate the motion blur and enhance apparent sharpness. Reducing compression artifacts after the fact is difficult, but tools to smooth out blockiness may provide some improvement. For future recordings, increasing the bitrate/quality settings during capture will reduce compression artifacts. Using a faster shutter speed will help reduce motion blur, and ensuring ample lighting can allow for this faster shutter without introducing excessive noise. Additionally, employing image stabilization (either optical or electronic) in the camera could further improve stability and reduce motion blur.", "quality Q4": "Please give the description of the evaluation of the overall quality of this video, please evaluate as many quality factors as possible. Answer: This video presents acceptable sharpness, although finer details exhibit some softness. While the lighting is generally even, soft, and appropriately bright, contributing to a natural appearance, the video suffers from noticeable compression artifacts and significant motion blur, especially during movement. The warm color palette, coupled with low saturation, yields a somewhat faded and less vibrant aesthetic. Noticeable noise is present, particularly in background details. Though camera shake is minimal, the overall visual quality is
impacted by the aforementioned motion blur and compression artifacts."}, Video Input Figure 17. Technical branch annotation example (3). Aesthetic branch Annotation Example 1 "video_name": "HIGHQUALITY/ia-batch14_bctvnh-Around_Town_with_Dianne_Bzik_-_Miss_Bedford_Scholarship_Program.mp4", "aesthetic_description": "1. Overall Aesthetic Style: The video presents a simple, static interview setup, characteristic of local news or community access programming. The aesthetic is functional rather than artistic, prioritizing clear presentation of the subjects. 2. Aesthetic Effects: a. **Spatial:** The composition is straightforward, with two subjects seated in profile, creating a balanced, symmetrical feel. The subjects are the clear focus, with minimal distracting background elements. The framing is comfortable, avoiding extreme close-ups or awkward angles. The spatial aesthetic aligns with the informational nature of the presumed interview. b. **Temporal:** The provided keyframes suggest a static shot with no camera movement. There is no discernible change in the spatial aesthetic over the shown duration.\n\n3. **Emotional Feedback and Overall Aesthetic Effect:** The video evokes a sense of polite attentiveness due to the straightforward presentation of the interviewees. The overall aesthetic effect is *fair*.", "prefix": "Now you will receive one video. This video is 8 seconds long, and you will see a sequence of images generated by uniformly sampling 1 frame per second from the video. The image sequence follows the original order of the video. After uniform sampling, there are a total of 8 images:\n <image>. In addition, you will also obtain motion features extracted from all 252 frames of the entire video:\n <image>. The temporal motion features also follow the original frame order of the video.
Please watch this video carefully, and then answer the following question: ", "aesthetic Q1": "Is the overall style of this video best described as artistic and visually striking or functional and straightforward? A. Artistic and visually striking, B. Functional and straightforward Answer: B. Functional and straightforward. The video prioritizes clear presentation of the interview subjects over elaborate visuals.", "aesthetic Q2": "What emotional feedback does this video primarily evoke? A. Excitement and intrigue, B. Polite attentiveness, C. Nostalgia and warmth, D. Suspense and anticipation Answer: B. Polite attentiveness. The simple, static setup and focus on the subjects encourages a sense of focused listening.", "aesthetic Q3": "How does the video utilize camera movement to enhance its aesthetic effect? Answer: The video appears to use minimal to no camera movement, maintaining a static shot on the interviewees. This reinforces the straightforward, interview-focused style.", "aesthetic Q4": "Does the framing of the subjects in the video utilize extreme close-ups or maintain a comfortable distance? A. Extreme close-ups, B. Comfortable distance Answer: B. Comfortable distance. The framing likely avoids extreme close-ups, contributing to the balanced and symmetrical composition.", "aesthetic Q5": "Please give the detailed description of the aesthetic effects of the video. Answer: 1. **Overall Aesthetic Style:** The video presents a simple, static interview setup, characteristic of local news or community access programming. The aesthetic is functional rather than artistic, prioritizing clear presentation of the subjects.\n\n2. **Aesthetic Effects:**\n\n a. **Spatial:** The composition is straightforward, with two subjects seated in profile, creating a balanced, symmetrical feel. The subjects are the clear focus, with minimal distracting background elements. The framing
is comfortable, avoiding extreme close-ups or awkward angles. The spatial aesthetic aligns with the informational nature of the presumed interview.\n\n b. **Temporal:** The provided keyframes suggest a static shot with no camera movement. There is no discernible change in the spatial aesthetic over the shown duration.\n\n3. **Emotional Feedback and Overall Aesthetic Effect:** The video evokes a sense of polite attentiveness due to the straightforward presentation of the interviewees. The overall aesthetic effect is *fair*.", Annotation Example 2 "video_name": "HIGHQUALITY/ia-batch13_VM69_St_Agatha_St_Agatha_Road.mp4", "aesthetic_description": "1. **Overall Aesthetic Style:** The video showcases a simple, documentary-style aerial view of agricultural land and surrounding landscape. 2. **Aesthetic Effects Analysis:** a. **Spatial:** The composition is straightforward, offering a clear, wide-angle perspective of the fields. The main subject, the patterned agricultural land, is easily discernible. The framing offers a comfortable view, accurately depicting the scale and layout of the area. The spatial aesthetic leans towards functional and informative, prioritizing clarity and overview rather than artistic embellishment. This aligns with the likely purpose of showcasing the location. b. **Temporal:** The camera slowly pans across the scene, providing a consistent and rational movement that enhances the sense of scale and allows the viewer to absorb the details of the landscape. The slow pan contributes to a calming effect. The spatial aesthetic effect remains largely consistent throughout the brief sample, maintaining the focus on the overall landscape features. 3. **Emotional Feedback and Overall Aesthetic Effect:** The video evokes a feeling of tranquility and spaciousness. The overall aesthetic effect is fair.", "prefix": "Now you will receive one video.
This video is 6 seconds long, and you will see a sequence of images generated by uniformly sampling 1 frame per second from the video. The image sequence follows the original order of the video. After uniform sampling, there are a total of 6 images:\n <image>. In addition, you will also obtain motion features extracted from all 180 frames of the entire video:\n <image>. The temporal motion features also follow the original frame order of the video. Please watch this video carefully, and then answer the following question: ", "aesthetic Q1": "Does this video primarily employ a fast-paced, dynamic editing style, or a slow, observational approach? A. Fast-paced and dynamic, B. Slow and observational Answer: B. Slow and observational. The video uses slow panning shots to showcase the agricultural landscape, creating a calm and observational feel.", "aesthetic Q2": "What primary emotion does the aerial footage of the farmland evoke? A. Excitement, B. Tranquility, C. Anxiety, D. Boredom Answer: B. Tranquility. The slow pan and wide-angle perspective create a sense of peace and spaciousness.", "aesthetic Q3": "How does the camera movement contribute to the overall aesthetic effect of the video? Answer: The slow panning movement enhances the sense of scale and allows viewers to fully appreciate the details of the agricultural land and surrounding landscape. This slow pace contributes to the calming and tranquil effect.", "aesthetic Q4": "Is the overall aesthetic style of this video best described as abstract and artistic or simple and documentary-style? A. Abstract and Artistic, B. Simple and Documentary-style Answer:
B. Simple and Documentary-style. The clear, wide-angle views and straightforward composition prioritize conveying information about the location in a direct and functional manner.", "aesthetic Q5": "Please give the detailed description of the aesthetic effects of the video. Answer: 1. **Overall Aesthetic Style:** The video showcases a simple, documentary-style aerial view of agricultural land and surrounding landscape.\n\n2. **Aesthetic Effects Analysis:**\n\n a. **Spatial:** The composition is straightforward, offering a clear, wide-angle perspective of the fields. The main subject, the patterned agricultural land, is easily discernible. The framing offers a comfortable view, accurately depicting the scale and layout of the area. The spatial aesthetic leans towards functional and informative, prioritizing clarity and overview rather than artistic embellishment. This aligns with the likely purpose of showcasing the location.\n\n b. **Temporal:** The camera slowly pans across the scene, providing a consistent and rational movement that enhances the sense of scale and allows the viewer to absorb the details of the landscape. The slow pan contributes to a calming effect. The spatial aesthetic effect remains largely consistent throughout the brief sample, maintaining the focus on the overall landscape features.\n\n3. **Emotional Feedback and Overall Aesthetic Effect:** The video evokes a feeling of tranquility and spaciousness. The overall aesthetic effect is fair." Figure 18. Aesthetic branch annotation examples. In-context branch Example 1 "num_stutter": 2, "stutter_info": [{"stutter_start_sec": 2, "stutter_duration_sec": 1},{"stutter_start_sec": 4, "stutter_duration_sec": 1}], "distortion_type": "stutter", "description": "The video exhibited two stutter events: one initiating at 2 seconds with a duration of 1 second, and another beginning at 4 seconds, also lasting 1 second.", "prefix": "Now you will receive one video.
This video is 5 seconds long, and you will see a sequence of images generated by uniformly sampling 1 frame per second from the video. The image sequence follows the original order of the video. After uniform sampling, there are a total of 5 images:\n <image>. In addition, you will also obtain motion features extracted from all 152 frames of the entire video:\n <image>. The temporal motion features also follow the original frame order of the video. Please watch this video carefully, and then answer the following question: ", "incontext Q1": "Is the total duration of all stutter events in the video 2 seconds? A. Yes B. No Answer: A. Yes", "incontext Q2": "When does the second stuttering event begin? Answer: The second stuttering event begins at 4 seconds.", "incontext Q3": "Please give the description of the information of the stuttering events of the video as detailed as possible. Answer: The video exhibited two stutter events: one initiating at 2 seconds with a duration of 1 second, and another beginning at 4 seconds, also lasting 1 second."}, Example 2 "start_time": 2, "length": 1, "distortion_type": "blur", "distortion_level": 3, "description": ["Score: 1.\nThe left portion of the Laticrete logo.\n"], "incontext_description": "The video experiences severe blurring from the 2-second mark to the 3-second mark, lasting for one second. The distortion predominantly occurs in the top-left region of the frames, particularly affecting the left portion of the Laticrete logo with a score of 1.", "prefix": "Now you will receive one video. This video is 4 seconds long, and
you will see a sequence of images generated by uniformly sampling 1 frame per second from the video. The image sequence follows the original order of the video. After uniform sampling, there are a total of 4 images:\n <image>. In addition, you will also obtain motion features extracted from all 148 frames of the entire video:\n <image>. The temporal motion features also follow the original frame order of the video. Please watch this video carefully, and then answer the following question: ", "incontext Q1": "Question: Does the video experience severe blurring for one second from the 2-second mark to the 3-second mark? A. Yes B. No Answer: A. Yes. The video experiences severe blurring from the 2-second mark to the 3-second mark, lasting for one second.", "incontext Q2": "Question: Where does the distortion primarily occur within the video frames? A. Bottom-right corner B. Center C. Top-left region D. Top-right corner Answer: C. The distortion primarily occurs in the top-left region of the frames.", "incontext Q3": "Question: What specific part of the Laticrete logo is affected by the distortion, and how severe is it? Answer: The left portion of the Laticrete logo is affected by the severe distortion, with a score of 1.", "incontext Q4": "Question: How long does the severe blurring last in the video? Answer: The severe blurring lasts for one second.", "incontext Q5": "Please give the description of the information of the spatial-temporal local distortions of the video. Answer: The video experiences severe blurring from the 2-second mark to the 3-second mark, lasting for one second. The distortion predominantly occurs in the top-left region of the frames, particularly affecting the left portion of the Laticrete logo with a score of 1."
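The distortion metadata in the in-context examples above (`start_time`, `length`, `stutter_info`, and so on) maps mechanically onto the natural-language descriptions shown. As a minimal illustrative sketch (the function name and phrasing are assumptions, not the authors' actual generator), the stutter metadata from Example 1 could be rendered like this:

```python
# Hypothetical sketch: render the stutter metadata from Example 1 above into
# a natural-language description. Field names ("stutter_info",
# "stutter_start_sec", "stutter_duration_sec") come from the example itself;
# the wording template is an illustrative assumption.
def describe_stutters(annotation: dict) -> str:
    events = annotation["stutter_info"]
    parts = [
        f"one {'initiating' if i == 0 else 'beginning'} at "
        f"{e['stutter_start_sec']} seconds with a duration of "
        f"{e['stutter_duration_sec']} second(s)"
        for i, e in enumerate(events)
    ]
    return (f"The video exhibited {len(events)} stutter events: "
            + ", and ".join(parts) + ".")

example = {
    "num_stutter": 2,
    "stutter_info": [
        {"stutter_start_sec": 2, "stutter_duration_sec": 1},
        {"stutter_start_sec": 4, "stutter_duration_sec": 1},
    ],
}
print(describe_stutters(example))
```

The same pattern extends to the blur and compression examples, whose `start_time`/`length` fields similarly feed templated sentences about when and where the distortion occurs.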
Example 3 "start_time": 2, "length": 1, "distortion_type": "compression", "distortion_level": 3, "description": ["Score: 1.\nLower left portion of the torso and the left arm."], "incontext_description": "The video experiences severe compression artifacts from the 2-second mark to the 3-second mark, lasting a total of 1 second. The distortion primarily affects the lower left region of each frame, specifically targeting the lower left portion of the torso and the left arm.", "prefix": "Now you will receive one video. This video is 8 seconds long, and you will see a sequence of images generated by uniformly sampling 1 frame per second from the video. The image sequence follows the original order of the video. After uniform sampling, there are a total of 8 images:\n <image>. In addition, you will also obtain motion features extracted from all 260 frames of the entire video:\n <image>. The temporal motion features also follow the original frame order of the video. Please watch this video carefully, and then answer the following question: ", "incontext Q1": "(Yes/No): \nQuestion: Does the video display distortion starting at 2 seconds and ending at 3 seconds? A. Yes, B. No \n Answer: A. Yes \u2013 the distortion spans from the 2-second to the 3-second mark.", "incontext Q2": "(Multiple Choice): \nQuestion: Where is the distortion primarily located within the video frames? A. Upper left corner, B. Upper right corner,
C. Lower right corner, D. Lower left portion of the torso and left arm \n Answer: D. Lower left portion of the torso and left arm", "incontext Q3": "(Open-ended): \nQuestion: What specific distortion occurs on the left arm and the lower left portion of the torso, and how severe is it? Answer: Severe compression.", "incontext Q4": "(Multiple Choice): \nQuestion: What is the duration of the distortion observed in the video? A. 0.5 seconds, B. 1 second, C. 1.5 seconds, D. 2 seconds \n Answer: B. 1 second.", "incontext Q5": "Please give the description of the information of the spatial-temporal local distortions of the video. Answer: The video experiences severe compression artifacts from the 2-second mark to the 3-second mark, lasting a total of 1 second. The distortion primarily affects the lower left region of each frame, specifically targeting the lower left portion of the torso and the left arm."}, Figure 19. In-context branch annotation examples. Since frame freezing effects cannot be demonstrated through screenshots, we provide only an annotation example for the stutter distortion. Figure 20. Subjective Quality Rating Experiment Interface. This interface is designed for the subjective quality rating experiment. The annotation workflow consists of three explicit phases. Firstly, the annotator inputs their assigned group number into the designated textbox (top-center position) and activates the evaluation sequence by clicking the "Submit" button. Secondly, the system plays videos from the selected group sequentially in the central viewport using a player. For each video, the annotator manipulates the horizontal slider (labeled 0 to 100; 0: "Lowest Quality", 100: "Highest Quality") to indicate perceived quality and records their rating by clicking "Submit". The system automatically triggers the next video in the group after each submission until all videos in the group have been rated. The interface incorporates a pre-scored range generated by a large language model.
If the subjective rating falls outside this hidden range, the annotator is prompted to reconsider and rescore the video. A “Replay” button is available for re-viewing the video.
Figure 21. Human-in-the-loop Selection Experiment Interface. This interface facilitates a human-in-the-loop approach to video quality assessment. The annotation workflow is composed of three phases. Firstly, the annotator enters their assigned group number into the first designated textbox and activates the evaluation sequence by clicking the “Submit” button. Secondly, the system displays videos from the selected group sequentially in the central viewport. For each video, the annotator selects the most appropriate quality description from a set of predefined options. If none of the provided options accurately reflect the perceived video quality, the annotator can manually input a custom description in the second designated textbox. Thirdly, the annotator records their selection (or custom description) by clicking the “Submit” button. The system automatically triggers the next video in the group after each submission, continuing until all videos within the group have been assessed.
Figure 22. This is a human annotation interface with the following structure: The first row displays [Question], where the annotator fills in questions related to video distortion. [Answer 1-4] correspond to the answers for each question. For Binary type questions,
only [Answer 1] and [Answer 2] need to be filled; for Multi-choice (single-answer) questions, all four answer options should be filled in; and for Open-ended questions, only [Answer 1] is required. The [Correct Answer] is a drop-down menu from which the annotator selects the correct option. Each question has a corresponding question type and quality concern, and annotators need to choose the appropriate options based on the nature of the question. Annotators can click [Play Video] to play the video, [Previous] to select the previous video, and [Next] to select the next video. The interface displays statistics at the bottom, including [Current progress], which shows the progress of the currently annotated videos; [Question type], which indicates the number of questions of each type; and [Quality Concern], which shows the number of questions per quality concern. This helps annotators monitor the distribution of annotation types during the process.
Figure 23. Examples of questions and correct answers of the OmniVQA-FG-Benchmark. [Figure content: sample Yes-or-No, What/How, and Open-ended questions over spatial, temporal, and spatial-temporal quality concerns (e.g., face blur severity, text clarity over time, motion blur during playback), each paired with its source video clip.]
Figure 24. Examples of questions and correct answers of the OmniVQA-FG-Benchmark. [Figure content: further sample questions covering distortion type, location, duration, and severity (e.g., overexposure in the top-right at 5 seconds, noise duration in the bottom-right), each paired with its source video clip.]
Figure 25. This is a self-recorded video excerpt from the LBVD dataset. The video exhibits severe overexposure in the upper right corner and suffers from significant stuttering, resulting in low overall quality. This example demonstrates that OmniVQA-Chatter can provide a relatively accurate and comprehensive overall video quality description while also achieving precise spatial localization of distortions.
Figure 26. This is a smartphone-recorded video. For most of the playback duration, the camera remains stable; however, at the beginning of the sixth
second, severe camera shake and motion blur suddenly occur. This example demonstrates that OmniVQA-Chatter provides accurate temporal distortion localization (precise to the second) and is highly sensitive to temporal retrieval of specific distortions.
Figure 27. This video was recorded using a professional camera with background defocus processing to enhance its aesthetic appeal. The overall quality is very high, with excellent clarity. This example demonstrates that OmniVQA-Chatter can provide precise quality descriptions and quantitative scoring for high-quality videos while accurately assessing aesthetic value. Most importantly, it can analyze specific quality issues in the context of the video content (in this case, background defocus) to determine whether they should be classified as distortions. This highlights the model’s capability in “high-level quality understanding”.
Figure 28. This video was manually recorded using a smartphone, with the focal point varying over time. At the beginning of the video, the focus is primarily on the background, while the foreground appears blurred. At the fifth second, the focus shifts toward the foreground, causing the background to become blurred. This example demonstrates that OmniVQA-Chatter can perform temporal analysis of quality variations in specific regions or semantic objects, with second-level precision.
arXiv:2505.22552v1 [cs.CL] 28 May 2025
ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM
Hoang Pham*, Thanh-Do Nguyen*, Khac-Hoai Nam Bui†
Viettel Artificial Intelligence and Data Services Center, Viettel Group, Vietnam
{hoangpv4, dont15, nambkh}@viettel.com.vn
Abstract
Integrating knowledge graphs (KGs) to enhance the reasoning capabilities of large language models (LLMs) is an emerging research challenge in claim verification. While KGs provide structured, semantically rich representations well-suited for reasoning, most existing verification methods rely on unstructured text corpora, limiting their ability to effectively leverage KGs. Additionally, despite possessing strong reasoning abilities, modern LLMs struggle with multi-step modular pipelines and reasoning over KGs without adaptation. To address these challenges, we propose ClaimPKG1, an end-to-end framework that seamlessly integrates LLM reasoning with structured knowledge from KGs. Specifically, the main idea of ClaimPKG is to employ a lightweight, specialized LLM to represent the input claim as pseudo-subgraphs, guiding a dedicated subgraph retrieval module to identify relevant KG subgraphs. These retrieved subgraphs are then processed by a general-purpose LLM to produce the final verdict and justification. Extensive experiments on the FactKG dataset demonstrate that ClaimPKG achieves state-of-the-art performance, outperforming strong baselines in this research field by 9%-12% accuracy points across multiple categories. Furthermore, ClaimPKG exhibits zero-shot generalizability to unstructured datasets such as HoVer and FEVEROUS, effectively combining structured knowledge from KGs with LLM reasoning across various LLM backbones.
1 Introduction
In today’s rapidly evolving information landscape, distinguishing fact from misinformation is becoming more challenging, especially with the rise of AI-generated content.
*Equal contribution. †Corresponding author. 1 https://github.com/HoangHoang1408/ClaimPKG
Figure 1: Different claim verification paradigms: (a) Unstructured Text-based methods focusing on claim decomposition and sequential reasoning over text, (b) KG-based methods facing challenges in entity resolution and structured reasoning, and (c) ClaimPKG’s unified framework with specialized modules for pseudo-subgraph generation, retrieval, and general reasoning.
Robust claim verification systems, leveraging NLP methods to automatically assess the veracity of claims (Glockner et al., 2022a,b; Thorne and Vlachos, 2018), are essential to ensure information reliability. Effective methods require not only accuracy but also transparency, necessitating strong reasoning to identify evidence and provide clear justifications (Pan et al., 2023). Most existing verification approaches focus on unstructured text corpora, using techniques like chain-of-thought (CoT) reasoning (Wei et al., 2022) to break down claims for verification. Approaches like ProgramFC (Pan et al., 2023) and FOLK (Wang and Shu, 2023) employ modular pipelines to verify claims against text-based knowledge bases (Figure 1(a)). However, the inherent limitations of text representation pose challenges. Specifically, ambiguous entity references and complex multi-hop relationships make it difficult to perform rigorous verification against unstructured text. In contrast, Knowledge Graphs (KGs) provide structured relationships for effective reasoning (Luo et al., 2024; Sun et al., 2024), yet their use in claim verification remains
limited. Existing KG-based approaches (Figure 1(b)) (Kim et al., 2023b; Zhou et al., 2019; Kim et al., 2023a) lack end-to-end solutions, often requiring pre-extracted entities via modules like entity or relation extraction. Meanwhile, despite excelling at general reasoning, LLMs struggle with KG-specific tasks like entity resolution and multi-hop reasoning (Cao et al., 2021; Aly et al., 2021), suggesting the need for a system combining LLM capabilities with KG-based inference. Overall, solving claim verification problems is hindered by the following major limitations: (1) Entity Ambiguity: Systems must accurately disambiguate entities within claims to identify relevant evidence (Aly et al., 2021); (2) Multihop Reasoning: Complex claims often require reasoning across multiple pieces of evidence from different sources (Pan et al., 2023; Wang and Shu, 2023); and (3) Limited integration of KGs and LLMs: Current approaches underexplore the potential of combining structured representation with the strong inference capabilities of LLMs (Kim et al., 2023a). To address these challenges, we propose ClaimPKG (Claim Verification using Pseudo-Subgraph in Knowledge Graphs), a novel end-to-end framework that synergizes the adaptability and generalization strengths of LLMs with the structured and rigorous representation of KGs to enable robust and transparent claim verification.
As specified in Figure 1(c), ClaimPKG operates through three phases: (1) Pseudo-Subgraph Generation: A KG-specialized lightweight LLM generates pseudo subgraphs as representations of input claims under a Trie-based KG-Entity Constraint, ensuring the correctness of extracted entities; (2) Subgraph Retrieval: A retrieval algorithm treats the generated pseudo subgraphs as queries to identify actual relevant KG subgraphs as evidence; and (3) General Reasoning: A general-purpose LLM reasons over the retrieved KG subgraphs to produce the verdict and human-readable justifications. Through extensive experiments on the FactKG dataset, ClaimPKG achieves state-of-the-art performance, demonstrating its effectiveness over various claim types with a small number of training samples. Furthermore, its zero-shot generalizability to unstructured datasets (HoVer, FEVEROUS) highlights its robustness. Our contributions can be summarized as follows: (1) We introduce ClaimPKG, a holistic framework that integrates LLMs and KGs for accurate and interpretable claim verification, handling various types of claims in a unified manner; (2) We develop a lightweight specialized LLM with a corresponding decoding algorithm for pseudo-subgraph generation and pair it with general-purpose LLMs to achieve robust reasoning; and (3) We validate the effectiveness of ClaimPKG through extensive experiments, achieving state-of-the-art performance on structured datasets and generalizing to unstructured datasets.
2 Related Work
Claim Verification Approaches. Claim verification systems utilize knowledge bases that can be categorized into unstructured and structured formats. In the unstructured domain, text-based verification methods predominate, with systems designed to verify claims against textual evidence, as demonstrated in the FEVER dataset (Thorne et al., 2018).
Recent advances have focused on handling specialized verification scenarios, including ambiguous question-answer pairs (Park et al., 2022), detecting factual changes (Schuster et al., 2021), and processing multiple documents concurrently (Jiang et al., 2020). For structured verification, research has primarily focused on tables and graphs, with early
work developing specialized architectures: graph neural networks for knowledge graph processing (Zhou et al., 2020), table-specific transformers (Herzig et al., 2020), and tree-structured decoders for hierarchical data (Wang et al., 2020).
Claim Verification over Knowledge Graphs (KGs). The emergence of Large Language Models (LLMs) has simplified direct reasoning over textual corpora for claim verification, as demonstrated by ProgramFC (Pan et al., 2023) and FOLK (Wang and Shu, 2023). However, structured data sources like tables and graphs can provide more grounded and robust verification results (Kim et al., 2023b). Knowledge graphs are particularly advantageous as they enable explicit representation of reasoning processes through logical rules over nodes and edges. FactKG (Kim et al., 2023b) established a foundation in this direction by introducing a comprehensive dataset for evaluating modern verification methods. KG-GPT (Kim et al., 2023a) followed this work by demonstrating performance gains through a pipeline that performs sentence decomposition, subgraph retrieval, and logical inference. Additionally, while not directly addressing claim verification, StructGPT (Jiang et al., 2023) and RoG (Luo et al., 2024) achieved promising results in related tasks (e.g., Knowledge Base Question Answering) by collecting relevant evidence, such as subgraphs in KGs, then leveraging LLMs for complex reasoning in particular scenarios.
3 Preliminary
Knowledge Graph: A knowledge graph (KG) G represents facts as triplets of the form t = (e, r, e′), where entities e, e′ ∈ E are connected by a relation r ∈ R; r can also be referred to as r(e, e′).
Claim Verification: Given a claim c, a verification model F determines its veracity as Supported or Refuted based on an external knowledge base K, while also providing a justification j to explain the predicted label.
This work specifically considers the scenario where K is structured as a knowledge graph G, enabling reasoning over graph knowledge to infer v and j. Formally, the verification process is defined as: (v, j) = F(c, G).
Trie-based Constrained Decoding: A Trie (Wikipedia, 2025b) indexes predefined token sequences, where each root-to-node path represents a prefix. During LLM generation, this structure restricts token selection to only valid Trie paths, ensuring reliable output.
4 ClaimPKG
4.1 Formulation of ClaimPKG
We formulate the ClaimPKG framework using a probabilistic approach. Given a claim c and a pre-built KG G, our objective is to model the distribution p_θ(v, j | c, G), where v denotes the verdict and j the justification. However, direct computation of this distribution is infeasible, as reasoning over the entire KG is not practical given its large size. To address this, we propose to select S_c, a subgraph of G relevant to c containing the necessary information to derive our target distribution. Treating S_c as a latent variable, p_θ(v, j | c, G) is decomposed as:

p_θ(v, j | c, G) = Σ_{S_c} p_θ(v, j | c, S_c) p_θ(S_c | c, G)   (1)

where p_θ(S_c | c, G) models the subgraph selection, and p_θ(v, j | c, S_c) models the generation of the verdict and justification given S_c. However, direct computation of p_θ(S_c | c, G) is challenging due to the modality mismatch between the input c (text) and the target S_c (graph structure), hindering the employment of retrieval methods for S_c. To bridge this gap, we decompose the subgraph selection into:

p_θ(S_c | c, G) = Σ_{P_c} p_θ(S_c | P_c, G) p_θ(P_c | c, G)   (2)

where p_θ(P_c | c, G) models the generation of the graph representation P_c, which we refer to as a “pseudo subgraph”, from a
textual claim c, and p_θ(S_c | P_c, G) models the distribution over relevant subgraphs S_c given P_c. While Equations 1 and 2 establish our theoretical framework for ClaimPKG, computing exact probabilities by summing over all possible (S_c, P_c) pairs is intractable. To address this, we propose two approximations: (1) We infer the veracity using only the most relevant subgraph S*_c:

(v*, j*) ~ p_θ(v, j | c, S*_c)   (3)

(2) We assume each generated pseudo-subgraph is reasonable with high probability, allowing us to approximate the subgraph selection in Equation 2 as:

S_c^(i) = argmax p_θ(S_c | P_c^(i), G)   (4)

with P_c^(i) the i-th pseudo-subgraph generation. We then construct S*_c by aggregating multiple sampled subgraphs, specifically S*_c = ∪_i S_c^(i). These approximations lead ClaimPKG to comprise three key modules, as depicted in Figure 2: (1) Pseudo Subgraph Generation to generate graph representations P_c given claim c; (2) Subgraph Retrieval to retrieve the relevant evidence subgraph S*_c; and (3) General Reasoning to generate the final verdict v and justification j. The inference procedure is described as follows:

Inference Procedure of ClaimPKG
Preprocessing: Index the KG G into an Entity Trie for effective entity lookup.
1. Pseudo Subgraph Generation: Generate multiple graph representations (pseudo subgraphs) P_c = {P_c^(i)}_{i=1}^N from claim c, using a specialized LLM with beam search and Entity-Trie constraints.
2. Subgraph Retrieval: Use each pseudo subgraph in P_c to query the most relevant subgraph S_c^(i) in the KG G, resulting in a set {S_c^(i)}_{i=1}^N following Equation 4, then aggregate them to form S*_c = ∪_{i=1}^N S_c^(i).
3. General Reasoning: Employ a general-purpose LLM to reason about veracity, (v*, j*) ~ p_θ(v, j | c, S*_c), following Equation 3.
The subsequent sections provide details about each component in the ClaimPKG framework.
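The three-step inference procedure can be reduced to a thin pipeline skeleton. Everything here is an illustrative stand-in: the paper's actual modules are a fine-tuned specialized LLM, a trie/embedding-based retriever, and a prompted general-purpose LLM, which are passed in as callables.

```python
from typing import Callable

# A triplet is (head, relation, tail); "unknown_i" marks entities still to be resolved.
Triplet = tuple[str, str, str]

def claimpkg_verify(
    claim: str,
    generate_pseudo_subgraphs: Callable[[str], list[list[Triplet]]],  # module (1)
    retrieve_subgraph: Callable[[list[Triplet]], set[Triplet]],       # module (2)
    reason: Callable[[str, set[Triplet]], tuple[str, str]],           # module (3)
) -> tuple[str, str]:
    """Hypothetical skeleton of the ClaimPKG inference procedure."""
    # (1) N pseudo subgraphs P_c^{(i)} via constrained beam search.
    pseudo_subgraphs = generate_pseudo_subgraphs(claim)
    # (2) Retrieve S_c^{(i)} per pseudo subgraph, then union them into S*_c.
    evidence: set[Triplet] = set()
    for p in pseudo_subgraphs:
        evidence |= retrieve_subgraph(p)
    # (3) A general-purpose LLM maps (claim, S*_c) to (verdict, justification).
    return reason(claim, evidence)
```

The union in step (2) mirrors S*_c = ∪_i S_c^(i): evidence found under any pseudo-subgraph reading of the claim is kept.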
Figure 2: Illustration of ClaimPKG for claim verification. The framework consists of three key modules: (1) Pseudo-subgraph Generation, constructing representative subgraphs; (2) Subgraph Retrieval, selecting the most pertinent KG subgraphs; and (3) General Reasoning, integrating them for accurate and interpretable verification. [Figure content: a worked example for the claim “Khalid Mahmood is the leader of a city which was the birthplace of architect, Vedat Tek, who designed 103 Colmore Row and I.C. Tower.”, with generated pseudo triplets such as “<e>103 Colmore Row</e> || architect || <e>Vedat Tek</e>”, “unknown_0 || ~birth place || <e>Vedat Tek</e>”, and “unknown_0 || leader title || <e>Khalid Mahmood</e>”, retrieved actual KG triplets, and the final verdict False with its justification.]
4.2 Pseudo Subgraph Generation
The first step to effectively
verify a claim is to understand its content thoroughly and represent it in a format compatible with the KG. Since evidence comes from the KG, representing claims in graph format is crucial: it captures hypothetical relations among entities in a way that enables effective comparison with KG subgraphs for evidence retrieval. However, this process faces two main challenges: (1) handling ambiguity resolution and multi-hop reasoning, and (2) ensuring accurate entity extraction from the claim.
Specialized LLM. To address the first challenge, the Pseudo Subgraph Generation module employs a lightweight model optimized for processing input claims. Following (Li et al., 2013; Miwa and Bansal, 2016), the model is trained to jointly extract entities and their corresponding relations from a claim c. Specifically, from c the model constructs a pseudo subgraph P_c comprising triplets of the form head_entity || relation || tail_entity (illustrated in Figure 2). To ensure the generated subgraph can identify entities requiring ambiguity resolution and multi-hop reasoning, we employ a specialized annotation mechanism: when the claim references an entity indirectly, either without explicit naming or through relations to other entities, we denote it as unknown_i, with the index i keeping track of different entities. This notation effectively signals the need for further disambiguation and reasoning within the KG in subsequent steps. Training details enabling this annotation strategy are presented in Appendix B.1.
Trie-Constrained Decoding. For the second challenge, we develop a constrained decoding algorithm with an Entity Trie, inspired by (Cao et al., 2021). We construct a trie T from the KG’s entity set E = {e1, e2, ...}. The specialized LLM generates entities using special tokens ⟨e⟩ and ⟨/e⟩ to mark entity boundaries.
When ⟨e⟩ is generated, the decoding process restricts token selection based on T until ⟨/e⟩ is produced, ensuring all generated entities exist in the KG. Outside such boundaries, the model generates relations by sampling from the unconstrained original token distribution. This mechanism ensures entity reliability while preserving flexible relation extraction (Edge et al., 2024).
Multiple Representations. To capture different semantic views of a claim, we employ beam search along with the described sampling strategy, which is shown to improve the coverage of extracted triplets (Table 8), resulting in multiple representations P_c = {P_c^(i)}_{i=1}^N for an input claim. In summary, each of the claim’s graph representations satisfies the following properties: (1) it effectively captures the underlying graph structure of the claim, and (2) it correctly aligns with the KG’s entities.
4.3 Subgraph Retrieval
The second component of ClaimPKG retrieves relevant KG subgraphs as evidence, using a dedicated algorithm that matches the pseudo-subgraphs P_c from the previous step to actual subgraphs in the KG. We present a high-level description of our algorithm here; its complete formulation is detailed in Appendix D. We categorize the triplets in a P_c into: (1) Incomplete triplets, where either the head or tail entity is marked as unknown, and (2) Complete triplets, where both head and tail entities are explicitly identified.
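The Entity-Trie constraint behind the decoding described in Section 4.2 can be illustrated with a minimal character-level sketch. The real system restricts LLM token IDs rather than characters, and this `EntityTrie` API is an assumption for illustration only: inside ⟨e⟩…⟨/e⟩ the decoder may only emit symbols the trie allows, so every completed entity is guaranteed to exist in the KG.

```python
class EntityTrie:
    """Character-level trie over KG entity names (the paper's trie is over token IDs)."""
    END = "</e>"  # sentinel marking that a complete KG entity has been spelled out

    def __init__(self, entities):
        self.root = {}
        for name in entities:
            node = self.root
            for ch in name:
                node = node.setdefault(ch, {})
            node[self.END] = {}  # a full entity name ends here

    def allowed_next(self, prefix):
        """Symbols the decoder may emit after `prefix` while inside <e>...</e>."""
        node = self.root
        for ch in prefix:
            if ch not in node:
                return set()      # prefix matches no KG entity: dead end
            node = node[ch]
        return set(node)          # valid continuations; may include END

trie = EntityTrie(["Vedat Tek", "Vedat Dalokay"])
assert "V" in trie.allowed_next("")                      # generation must start a known name
assert trie.allowed_next("Vedat ") == {"T", "D"}         # shared prefix, two continuations
assert EntityTrie.END in trie.allowed_next("Vedat Tek")  # complete entity: </e> is allowed
```

Outside the ⟨e⟩…⟨/e⟩ boundaries the model samples relations freely, which is what keeps relation extraction flexible while entities stay grounded in the KG.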
greater similarity. This function can be instantiated via various mechanisms (e.g., embedding similarity, re-ranking, fuzzy matching, etc.). Incomplete Triplets Retrieval: Our goal is to identify evidence (actual triplets in the KG) to in- form us about entities marked as unknown and their respective relations with explicit entities in the pseudo-subgraphs. First, for a Pc, we group triplets sharing the same unknown entity uinto a group g(e.g., in Figure 2, triplets associated with unknown_ 0are grouped together). Subsequently, for each group gcharacterized by the unknown entity u, we denote: Eu={eu1, . . . , e un}as enti- ties directly connected to uin the pseudo-subgraph PcandRu={ru1, . . . , r un}as relations from u to corresponding entities in Ec. Ing, for each ex- plicit entity eui∈ Eu, we first retrieve candidate set Cui={ec i1, . . . , ec im}containing all entities con- nected to euiin the KG, then collect all candidate sets into Cu={Cu1, . . . , C un}. To determine the best candidates for resolving u, we propose an Entity Scoring mechanism, which is based on two assumptions : (1) since uhas pseudo relations with all entities in Eu, a candidate eccon- nected to more entities in Euis more likely to re- solve u; and (2) because every information related toeuianduis crucial to verify the initial claim, each candidate set Cuimust contribute to the final verification. Note that an entity can appear in mul- tiple candidate sets, hence we compute a “global” score for each ec ijin a candidate set Cui: score (ec ij) =PRu ijrSim(rui, r) (5) withRu ij=S|Eu| i=1{r(eui, ec ij)|ifec ij∈Cui}, the set of all relations across candidate sets appearinginCuthat connect ec ijwith an eui. 
Subsequently, to construct the set T_u of triplets most relevant to a group g, we employ a ranking function as follows:

T_u = ∪_{i=1}^{|C_u|} argmax^(triplet, k1) { π_ij | j ≤ |C_ui| }   (6)

where π_ij is simply score(e^c_ij) and (triplet, k1) denotes the selection of the top k1 triplets (e_ui, r, e^c) having the highest global scores from each set in C_u. While Equation 5 ensures that candidates appearing in multiple candidate sets with high similarity scores are prioritized, Equation 6 ensures that every entity in E_u contributes at least k1 triplets; together they realize assumptions (1) and (2).
Complete Triplets Retrieval: For each triplet (e1, r, e2) in a P_c, we first find the top k2 most similar relations between e1 and e2 in the KG G using the Sim function. If no direct connection exists (e.g., “103 Colmore Row” and “Vedat Tek” as shown in Figure 2), the triplet is decomposed into two: (e1, r, unknown_0) and (unknown_0, r, e2). These are then handled via Incomplete Triplets Retrieval.
Subgraph Union: In summary, for an input claim c, multiple pseudo-subgraphs are generated, containing complete and incomplete triplets. These triplets undergo processing to handle shared unknown entities and identified entities that are not connected in the KG G, and are used to query G for relevant triplets. All retrieved evidence triplets are aggregated into a final subgraph S*_c, serving as the evidence for the final component of ClaimPKG.
4.4 General Reasoning
The General Reasoning module concludes
the ClaimPKG framework by determining claim veracity through reasoning over the input claim c and the retrieved evidence subgraph S*_c. As complex tasks, especially claim verification, require deliberate chain-of-thought reasoning (Jiang et al., 2020; Wang et al., 2023), we use a general-purpose LLM to analyze c and S*_c. Using carefully designed prompts (Figure 6), the module generates a natural-language justification j and verdict v. Expanding Equation 3, this step is formalized as:

p_θ(v, j | c, S*_c) = p_θ(v | c, j, S*_c) p_θ(j | c, S*_c)   (7)

where p_θ(j | c, S*_c) produces the justification and p_θ(v | c, j, S*_c) determines veracity. This model-agnostic design enables integration with state-of-the-art LLMs (e.g., Llama, Qwen, and GPT-4) for zero-shot reasoning.
5 Experiments
5.1 Experimental Setup
Datasets. Our primary benchmark is the FactKG dataset (Kim et al., 2023b), designed for claim verification over the DBpedia KG (Lehmann et al., 2015). It consists of 108K claims grounded in DBpedia and labelled as either SUPPORTED or REFUTED. The claims span five distinct categories: One-hop, Conjunction, Existence, Multi-hop, and Negation, each posing unique challenges. For evaluation, we randomly sample 2K claims from the test set, ensuring balanced representation across categories under computational efficiency constraints. To assess the generalizability of ClaimPKG beyond structured benchmarks, we also evaluate on HoVer (Jiang et al., 2020) and FEVEROUS (Aly et al., 2021), two widely used unstructured benchmarks requiring multi-hop reasoning and evidence aggregation from Wikipedia. Additional statistics of the datasets are provided in Appendix A.
Metrics. We use Accuracy as the primary metric, along with Entity Correctness to measure whether the claim’s extracted entities are valid in the KG. Additionally, for the FactKG dev set, we report Claim Structure Coverage, which quantifies the proportion of triplets from the original claim’s graph structure successfully reconstructed by our pipeline.
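Under the textual definition above, Claim Structure Coverage can be read as a triplet-set overlap. Exact triplet matching is an assumption here (the paper's precise definition lives in its Appendix C, and a softer alignment is plausible):

```python
def claim_structure_coverage(gold_triplets, reconstructed_triplets):
    """Fraction of the claim's gold graph triplets the pipeline reconstructed.

    Assumes exact (head, relation, tail) matching; an empty gold structure is
    treated as fully covered by convention.
    """
    gold = set(gold_triplets)
    if not gold:
        return 1.0
    return len(gold & set(reconstructed_triplets)) / len(gold)

gold = {("103 Colmore Row", "architect", "Vedat Tek"),
        ("unknown_0", "birth place", "Vedat Tek")}
pred = {("103 Colmore Row", "architect", "Vedat Tek")}
assert claim_structure_coverage(gold, pred) == 0.5  # one of two gold triplets recovered
```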
We refer readers to Appendix C for more details.
Annotation. For brevity, we use Llama-3B, Llama-70B, and Qwen-72B to refer to Llama-3.2-3B, Llama-3.3-70B, and Qwen2.5-72B, respectively. The * symbol denotes models fine-tuned for pseudo subgraph generation. Full model names are used when necessary.
Baselines. We compare ClaimPKG with recent KG-based claim verification methods: Zero-shot CoT (Wei et al., 2022) prompts LLMs to generate rationales and verdicts without accessing the KG; GEAR (Zhou et al., 2019), originally designed for text-based verification, employs graph-based evidence aggregation with multiple aggregators to capture multi-evidence dependencies, using BERT for language representation and adapted for KG settings following (Kim et al., 2023b); and KG-GPT (Kim et al., 2023a), a pioneering work that combines LLMs and KGs through a structured pipeline of Sentence Segmentation, Graph Retrieval, and Logic Inference. Notably, unlike the baselines, which receive pre-identified claim entities along with the claim as input, our method processes entities in an end-to-end pipeline.
Implementation. For a comprehensive evaluation, we evaluate baselines on three model series: Llama 3 (Meta, 2024), Qwen 2.5 (Qwen, 2024), and GPT-4o-mini (OpenAI, 2024). In ClaimPKG, we configure the Specialized LLM to generate multiple pseudo-subgraphs using a beam size of 5. For the Subgraph Retrieval algorithm, we adopt an embedding-based approach leveraging BGE-Large-EN-v1.5 (Xiao et al., 2023) to compute dot-product similarity
for the Relation Scoring Function; we set the primary hyperparameters to k1 = 3 and k2 = 1. Detailed justification is provided in Appendix C.
5.2 Results and Analysis
We present the main experimental results in this section and additional findings in Appendix C.
(RQ1): How Does ClaimPKG Perform Against the Baselines? Table 1 compares the accuracy (%) of ClaimPKG with baselines across the claim categories of FactKG. Key observations include: (1) Direct inference using LLMs with CoT reasoning significantly underperforms evidence-based methods, with the best average score reaching only 69.07%, highlighting that despite LLM advancements, evidence retrieval remains crucial. (2) KG-GPT integrates knowledge graphs with LLMs, but its best average score reaches only 74.70% (Llama-70B Few-shot), falling short of GEAR’s fine-tuned model at 76.65%. This suggests that while LLMs excel at language tasks, they require specific adaptation for KG processing. (3) ClaimPKG, with the strongest configuration (Llama-3B* + Llama-70B) and constrained by the Entity-Trie for valid KG entity generation, achieves a 12-point improvement over KG-GPT and 9 points over GEAR. It particularly excels in multi-hop reasoning, demonstrating strong performance across Llama-3 and Qwen-2.5 backbones through effective structured evidence retrieval and KG integration.
(RQ2): How Do Different Components Affect Performance? To evaluate the impact of each component in ClaimPKG, we conduct ablation studies of the following components, maintaining Llama-3B* as the Specialized LLM and Llama-70B as the General LLM.
Entity-Trie Constraint. We remove the Entity-Trie constraint to assess its necessity. Compared to the full setup, this reduces entity extraction correctness from 100% to 87.5% and overall performance from 84.64% to 82.72%.
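The implementation instantiates the Relation Scoring Function as dot-product similarity over BGE-Large-EN-v1.5 embeddings, keeping the top k2 relations. The sketch below shows that mechanism end to end, with a toy character-trigram `embed` standing in for the real encoder (the encoder choice and its call signature here are assumptions, not the paper's code):

```python
import math

def embed(text):
    """Toy unit-normalised bag-of-character-trigram vector; a stand-in for
    a sentence encoder such as BGE-Large-EN-v1.5."""
    vec = {}
    padded = f"  {text.lower()}  "
    for i in range(len(padded) - 2):
        tri = padded[i:i + 3]
        vec[tri] = vec.get(tri, 0.0) + 1.0
    norm = math.sqrt(sum(v * v for v in vec.values()))
    return {k: v / norm for k, v in vec.items()}

def sim(r1, r2):
    """Dot product of the two relation embeddings (the Sim function)."""
    a, b = embed(r1), embed(r2)
    return sum(v * b.get(k, 0.0) for k, v in a.items())

def top_k_relations(query_relation, kg_relations, k=1):
    """Keep the k KG relations most similar to the pseudo relation (k2 = 1 in the paper)."""
    return sorted(kg_relations, key=lambda r: sim(query_relation, r), reverse=True)[:k]

rels = ["birth place", "death place", "architect"]
assert top_k_relations("place of birth", rels, k=1) == ["birth place"]
```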
Method | Entity Correctness | Negation | Existence | Conjunction | Multi-hop | One-hop | Average
Direct Inference with CoT - w/o Evidence Retrieval
GPT-4o-mini (Zero-shot CoT) | - | 61.91 | 59.45 | 69.51 | 60.87 | 70.83 | 64.51
Qwen-72B (Zero-shot CoT) | - | 62.91 | 62.20 | 74.04 | 62.32 | 75.98 | 67.49
Llama-70B (Zero-shot CoT) | - | 64.34 | 64.62 | 72.47 | 65.58 | 78.32 | 69.07
Baseline Comparison - w/ Evidence Retrieval
GEAR (Finetuned BERT) | Known in Prior | 79.72 | 79.19 | 78.63 | 68.39 | 77.34 | 76.65
KG-GPT (Llama-70B Few-shot) | Known in Prior | 70.91 | 65.06 | 86.64 | 58.87 | 92.02 | 74.70
KG-GPT (Qwen-72B Few-shot) | Known in Prior | 67.31 | 60.08 | 89.14 | 58.19 | 90.87 | 73.12
ClaimPKG (Llama-3B* + GPT-4o-mini) | 100.0% | 85.10 | 72.64 | 84.23 | 72.26 | 91.01 | 81.05
ClaimPKG (Llama-3B* + Qwen-72B) | 100.0% | 85.27 | 86.90 | 84.02 | 78.71 | 91.20 | 85.22
ClaimPKG (Llama-3B* + Llama-70B) | 100.0% | 84.58 | 84.20 | 85.68 | 78.49 | 90.26 | 84.64
Ablation Results (Llama-3B* + Llama-70B) - w/ Evidence Retrieval
ClaimPKG (w/o Trie Constraint) | 87.50% | 82.50 | 83.24 | 83.82 | 76.13 | 88.01 | 82.74
ClaimPKG (Few-shot Specialized LLM) | 86.52% | 77.99 | 81.89 | 77.80 | 68.82 | 81.65 | 77.63
ClaimPKG (w/o Incomplete Retrieval) | 100.0% | 68.80 | 51.25 | 67.84 | 61.29 | 76.22 | 65.08

Table 1: Performance (accuracy %) comparison of ClaimPKG with baselines on 5 claim categories of the FactKG dataset and their average scores.

Specialized LLM. When replacing the specialized LLM with a few-shot prompting strategy using Llama-70B, a much larger general-purpose LLM, entity correctness further declines to 86.52%, causing overall performance to drop to 77.63%. These results demonstrate that even with examples, general-purpose LLMs struggle to produce outputs with the desired graph structure, emphasizing the importance of the specialized LLM in generating pseudo-subgraphs.

Incomplete
Retrieval. Removing the Incomplete Triplet Retrieval function, which forces the retrieval algorithm to query evidence using complete triplets only, causes a significant average performance drop of nearly 20 points compared to the full setup, showing that the complete graph structure of input claims is essential for optimal performance.

(RQ3): Robustness and Generalization of ClaimPKG? To assess ClaimPKG's robustness, we vary model backbones, examine zero-shot generalizability, analyze the effect of training data size, and conduct error analysis.

Model Backbones. We evaluate different LLM architectures for both Specialized and General LLMs (Table 2). For General LLMs, we test various model sizes (7B to 70B parameters) using retrieved KG triplets as input. For Specialized LLMs, we experiment with different small fine-tuned backbones and few-shot prompt templates (Figure 7), while keeping Llama-3.3-70B as the fixed General LLM.

Component | Strategy | Backbone | Average
General LLM | Zero-shot | Llama 3.1 - 8B | 77.08
General LLM | Zero-shot | Llama 3.3 - 70B | 84.64
General LLM | Zero-shot | GPT-4o-Mini | 81.05
General LLM | Zero-shot | Qwen 2.5 - 7B | 80.22
General LLM | Zero-shot | Qwen 2.5 - 72B | 85.22
Specialized LLM | Finetune | Llama 3 - 3B | 84.64
Specialized LLM | Finetune | Qwen 2.5 - 3B | 82.32
Specialized LLM | Finetune | Llama 3 - 1B | 83.91
Specialized LLM | Finetune | Qwen 2.5 - 1.5B | 82.20
Specialized LLM | Few-shot | Llama 3.3 - 70B | 77.63
Specialized LLM | Few-shot | Qwen 2.5 - 72B | 77.10

Table 2: Performance on different backbones.

Results in Table 2 show that larger General LLMs (GPT-4o-Mini, Llama-3.3-70B) outperform smaller ones (Qwen-2.5-7B, Llama-3.1-8B) by up to 8 points, highlighting model capacity's role in aggregating subgraph evidence. Notably, a fine-tuned 1B Specialized LLM outperforms its general-purpose 70B counterpart, demonstrating fine-tuning's effectiveness for processing graph data. This supports the need to combine powerful General LLMs with adapted Specialized LLMs for optimal performance.

Zero-shot Generalizability.
To assess ClaimPKG's zero-shot generalizability, we test transfer to the HoVer (Jiang et al., 2020) and FEVEROUS (Aly et al., 2021) datasets. Using DBpedia (Lehmann et al., 2015) as the knowledge source, we evaluate with trained Specialized LLMs (Llama-3.2-3B and Qwen-2.5-3B) while keeping Llama-3.3-70B as the General LLM. Since external datasets may contain claims outside DBpedia's coverage, making it difficult to distinguish between knowledge gaps and actual verification failures of ClaimPKG for Refuted cases, we analyze only samples predicted as Supported.

Benchmark | Llama 3 | Qwen 2.5
HoVer (Zero-shot CoT) | 66.6 | 65.3
HoVer (Support-Predicted) | 70.7 (14.3%) | 69.4 (15.7%)
FEVEROUS (Zero-shot CoT) | 81.1 | 80.9
FEVEROUS (Support-Predicted) | 83.8 (12.5%) | 83.6 (12.9%)

Table 3: Zero-shot transferred performance on other unstructured benchmarks on the Support-Predicted samples, along with Support-Predicted rates.

As shown in Table 3, ClaimPKG predicts Supported for only 12.5%-15.7% of samples, indicating limited knowledge overlap with DBpedia. However, on these samples, ClaimPKG outperforms Llama-3.3-70B's zero-shot CoT inference by 4% accuracy on both datasets, demonstrating robust transfer of reasoning patterns to unseen data.

[Figure 3: Varying the Specialized LLM's training data. Average accuracy and structure coverage for Llama-3.2-3B and Qwen-2.5-3B as the number of training samples grows from 0.1K to 10K.]

Training Data Size. To assess the impact of training data on the Specialized LLM, we vary the number of training samples from 0.1K to 10K, using two configurations: Llama-3.2-3B and Qwen-2.5-3B as the
specialized LLM, keeping the General LLM fixed as Llama-3.3-70B. We evaluate performance using two metrics: average accuracy on the test set and claim structure coverage on the dev set. As shown in Figure 3, the Specialized LLMs achieve satisfactory accuracy (Llama-3.2-3B: 79.35%, Qwen-2.5-3B: 77.62%) with just 100 training samples, demonstrating efficiency and low training costs for KG adaptation. While both structure coverage and accuracy improve up to 5K samples, coverage plateaus thereafter and accuracy begins to decline, indicating overfitting, where excessive training data reduces generalizability.

5.3 Interpretability and Error Analysis

ClaimPKG improves claim verification performance while enhancing interpretability. Representative outputs of ClaimPKG (Figure 12, Appendix E) illustrate its ability to capture claim structure and provide well-grounded justifications. Notably, when refuting claims, it explicitly presents contradicting evidence, ensuring transparent reasoning. To further assess reliability, we conducted a human analysis of 200 incorrect predictions from FactKG, categorizing errors (Figure 13, Appendix E) into: Claim Structure Errors: failure to capture the underlying claim structure; Retrieval Errors: failure to retrieve the evidence required for claim verification; and Reasoning Errors: incorrect logical inferences by the general LLM when judging the verdict. Specifically, there are 0 (0%) Claim Structure Errors, 57 (28.5%) Retrieval Errors, and 143 (71.5%) Reasoning Errors. These results suggest that, given multiple chances (beams) to generate pseudo-subgraphs, the Specialized LLM can effectively capture the structural representation of claims. However, the general-purpose LLM, despite its strong reasoning capabilities, still struggles with certain complex reasoning scenarios that require specific handling.
Moreover, retrieval errors highlight cases where additional implicit reasoning is necessary, as we hypothesize that direct subgraph retrieval failed to provide a comprehensive picture of the required evidence. These findings point to future improvements focused on enhancing retrieval inference and refining reasoning for complex claim verification over structured knowledge.

5.4 Scalability of ClaimPKG

ClaimPKG maintains scalability and adaptability within dynamic knowledge environments. After training the Specialized LLM on a domain (e.g., Wikipedia), the system remains decoupled from the underlying Knowledge Graph (KG); only the Entity-Trie component interfaces directly with the data. Consequently, when the KG is updated, ClaimPKG requires merely an update of the corresponding entities in the Entity-Trie, ensuring an efficient adaptation process.

6 Conclusion

In this work, we present ClaimPKG, a novel claim verification framework combining the structure of Knowledge Graphs with the adaptability and reasoning of Large Language Models. Through Pseudo-subgraph Generation, Subgraph Retrieval, and General Reasoning, it addresses the limitations of prior methods while ensuring transparency. Extensive experiments show state-of-the-art performance and generalizability across datasets, making ClaimPKG a step toward reliable and explainable misinformation detection.

Limitations

Despite their advanced reasoning capabilities, LLMs are prone to errors and biases, necessitating careful deployment, particularly in fact-checking systems where incorrect or biased outputs could contribute to misinformation. Addressing these biases remains an ongoing research challenge, requiring effective mechanisms for detection, control, and mitigation. Additionally, real-world claim verification often requires inferring implicit reasoning, where further knowledge related to a problem is necessary, and making improvements in pipeline
components to handle this type of information is crucial. Another limitation is the performance decline observed when the Specialized LLM is trained on an excessive number of examples, highlighting the need for future research into regularization strategies. Further improvements should also focus on the general reasoning module to infer missing knowledge more effectively and to enhance intricate and nuanced claim verification over structured knowledge.

References

Rami Aly, Zhijiang Guo, Michael Sejr Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021. FEVEROUS: Fact extraction and verification over unstructured and structured information. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual.

Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021. Autoregressive entity retrieval. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.

Darren Edge, Ha Trinh, Newman Cheng, Joshua Bradley, Alex Chao, Apurva Mody, Steven Truitt, and Jonathan Larson. 2024. From local to global: A graph RAG approach to query-focused summarization. CoRR, abs/2404.16130.

Max Glockner, Yufang Hou, and Iryna Gurevych. 2022a. Missing counter-evidence renders NLP fact-checking unrealistic for misinformation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 5916–5936. Association for Computational Linguistics.

Max Glockner, Yufang Hou, and Iryna Gurevych. 2022b. Missing counter-evidence renders NLP fact-checking unrealistic for misinformation.
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 5916–5936. Association for Computational Linguistics.

Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Eisenschlos. 2020. TaPas: Weakly supervised table parsing via pre-training. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4320–4333, Online. Association for Computational Linguistics.

Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Xin Zhao, and Ji-Rong Wen. 2023. StructGPT: A general framework for large language model to reason over structured data. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9237–9251, Singapore. Association for Computational Linguistics.

Yichen Jiang, Shikha Bordia, Zheng Zhong, Charles Dognin, Maneesh Kumar Singh, and Mohit Bansal. 2020. HoVer: A dataset for many-hop fact extraction and claim verification. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 3441–3460. Association for Computational Linguistics.

Jiho Kim, Yeonsu Kwon, Yohan Jo, and Edward Choi. 2023a. KG-GPT: A general framework for reasoning on knowledge graphs using large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 9410–9421. Association for Computational Linguistics.

Jiho Kim, Sungjin Park, Yeonsu Kwon, Yohan Jo, James Thorne, and Edward Choi. 2023b. FactKG: Fact verification via reasoning on knowledge graphs. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023,
Toronto, Canada, July 9-14, 2023, pages 16190–16206. Association for Computational Linguistics.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles.

Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Sören Auer, and Christian Bizer. 2015. DBpedia - A large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web, 6(2):167–195.

Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, 4-9 August 2013, Sofia, Bulgaria, Volume 1: Long Papers, pages 73–82. The Association for Computer Linguistics.

Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.

Linhao Luo, Yuan-Fang Li, Gholamreza Haffari, and Shirui Pan. 2024. Reasoning on graphs: Faithful and interpretable large language model reasoning. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net.

Meta. 2024. Build the future of AI with Meta Llama 3.

Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105–1116, Berlin, Germany. Association for Computational Linguistics.

OpenAI. 2024. Hello GPT-4o.
Liangming Pan, Xiaobao Wu, Xinyuan Lu, Anh Tuan Luu, William Yang Wang, Min-Yen Kan, and Preslav Nakov. 2023. Fact-checking complex claims with program-guided reasoning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 6981–7004. Association for Computational Linguistics.

Jungsoo Park, Sewon Min, Jaewoo Kang, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2022. FaVIQ: FAct verification from information-seeking questions. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5154–5166, Dublin, Ireland. Association for Computational Linguistics.

Qwen. 2024. Qwen2.5: A party of foundation models.

Tal Schuster, Adam Fisch, and Regina Barzilay. 2021. Get your vitamin C! Robust fact verification with contrastive evidence. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 624–643, Online. Association for Computational Linguistics.

Jiashuo Sun, Chengjin Xu, Lumingyuan Tang, Saizhuo Wang, Chen Lin, Yeyun Gong, Lionel M. Ni, Heung-Yeung Shum, and Jian Guo. 2024. Think-on-graph: Deep and responsible reasoning of large language model on knowledge graph. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net.

James Thorne and Andreas Vlachos. 2018. Automated fact checking: Task formulations, methods and future directions. In Proceedings of the 27th International Conference on Computational Linguistics,
COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 3346–3359. Association for Computational Linguistics.

James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: A large-scale dataset for fact extraction and verification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 809–819. Association for Computational Linguistics.

Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7567–7578, Online. Association for Computational Linguistics.

Haoran Wang and Kai Shu. 2023. Explainable claim verification via knowledge-grounded reasoning with large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 6288–6304. Association for Computational Linguistics.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.

Wikipedia. 2025a. Levenshtein distance — Wikipedia, The Free Encyclopedia.
Accessed: 14-February-2025.

Wikipedia. 2025b. Trie — Wikipedia, The Free Encyclopedia. Accessed: 9-February-2025.

Shitao Xiao, Zheng Liu, Peitian Zhang, and Niklas Muennighoff. 2023. C-Pack: Packaged resources to advance general Chinese embedding. Preprint, arXiv:2309.07597.

Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2020. Graph neural networks: A review of methods and applications. AI Open, 1:57–81.

Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019. GEAR: Graph-based evidence aggregating and reasoning for fact verification. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 892–901. Association for Computational Linguistics.

A Benchmark Datasets

Dataset | Split | Support | Refute | NEI | Total
FactKG | Train | 42723 | 43644 | - | 86367
FactKG | Dev | 6426 | 6840 | - | 13266
FactKG | Test | 4398 | 4643 | - | 9041
FactKG | Total | 53547 | 55127 | - | 108674
HoVer | Train | 11023 | 7148 | - | 18171
HoVer | Dev | 2000 | 2000 | - | 4000
HoVer | Test | 2000 | 2000 | - | 4000
HoVer | Total | 15023 | 11148 | - | 26171
FEVEROUS | Train | 41835 | 27215 | 2241 | 71291
FEVEROUS | Dev | 3908 | 3481 | 501 | 7890
FEVEROUS | Test | 3372 | 2973 | 1500 | 7845
FEVEROUS | Total | 49115 | 33669 | 4242 | 87026

Table 4: Basic statistics of the HoVer, FEVEROUS, and FactKG datasets.

Type | Written | Colloquial (Model) | Colloquial (Presup) | Total
One-hop | 2,106 | 15,934 | 1,580 | 19,530
Conjunction | 20,587 | 15,908 | 602 | 37,097
Existence | 280 | 4,060 | 4,832 | 9,172
Multi-hop | 10,239 | 16,420 | 603 | 27,262
Negation | 1,340 | 12,466 | 1,807 | 15,613
Total | 34,462 | 64,788 | 9,424 | 108,674

Table 5: Dataset statistics of FactKG by claim type.

FEVEROUS. (Aly et al., 2021) FEVEROUS is a fact verification dataset comprising 87,026 verified claims sourced from Wikipedia (Table 4). Each claim is accompanied by evidence in the form of sentences and/or cells from tables, along with a label indicating whether the evidence supports, refutes, or does not provide enough information to verify the claim. The dataset includes metadata such as annotator actions and challenge types, designed to minimize biases. It is used for tasks that involve verifying claims against both unstructured (textual) and structured (tabular) information.

HoVer. (Jiang et al., 2020) HoVer is a dataset containing 26,171 samples, designed for open-domain, multi-hop fact extraction and claim verification using the Wikipedia corpus. Claims in HoVer are adapted from question-answer pairs and require the extraction of facts from multiple (up to four) Wikipedia articles to determine whether the claim is supported or not supported. The complexity of HoVer, particularly in the 3/4-hop claims, is further amplified because these claims are often expressed across multiple sentences, which introduces challenges related to long-range dependencies, such as accurately resolving coreferences.

FactKG. (Kim et al., 2023b) FactKG is a challenging fact verification dataset comprising 108,674 samples, designed to rigorously test models' abilities to reason over structured knowledge represented in a knowledge graph. Its difficulty arises from a combination of factors. First, it demands proficiency in five distinct reasoning types: one-hop (single relationship), conjunction (combining multiple relationships), existence (verifying entity/relationship presence), multi-hop (traversing multiple relationships), and, crucially, negation (reasoning about the absence of relationships).
Second, FactKG incorporates linguistic diversity, encompassing both formal, written-style claims and more challenging colloquial expressions, requiring models to handle paraphrasing, idiomatic language, and less direct wording. Third, instead of unstructured text, FactKG utilizes the DBpedia knowledge graph (derived from Wikipedia), necessitating that models correctly link entities and relations mentioned in the claim to the graph's nodes and edges and perform complex path-based reasoning, especially for multi-hop claims. The addition of a weakly semantic knowledge source, and cross-style evaluation to assess generalizability, further contributes to the difficulty of this dataset. These features collectively make FactKG significantly more complex than datasets relying solely on unstructured text for verification. Detailed statistics of this dataset can be found in Table 5, and Table 4 gives the overall basic statistics of all datasets employed for ClaimPKG.

B Implementation Details

We conducted all experiments on a DGX server with 8 NVIDIA A100 GPUs. The General LLM is hosted within the vLLM framework (Kwon et al., 2023). Below, we detail the training process of the Specialized LLM.

B.1 Specialized LLM Training Data Annotation

To tailor the specialized model for improved comprehension and processing of KG-specific data, we construct a dedicated dataset for training, leveraging the provided version of FactKG (Kim et al., 2023b) (illustrated in Figure 4). The annotation process consists of the following steps:

Claim: A musical artist, whose music
is Post-metal, played with the band Twilight and performs for Mamiffer.
Entities: [Mamiffer, Post-metal, Twilight_(band)]
Evidence:
- Twilight_(band), (associatedMusicalArtist, associatedBand), Mamiffer
- Twilight_(band), (associatedMusicalArtist, genre), Post-metal

Figure 4: Provided data of FactKG.

Preprocessing: All entities and relations from FactKG, including the train, development, and test datasets, as well as the DBpedia KG, are normalized by splitting concatenated words to ensure consistency.

Graph Construction: Using the provided evidence information from FactKG, we observe that while the evidence may not explicitly exist in the graph, it accurately captures the underlying structure of the claim. Accordingly, for triplets with relation paths exceeding one hop, we decompose them into multiple triplets while introducing a placeholder entity, denoted "unknown_{index}", to preserve structural integrity. This placeholder represents an ambiguous or missing entity that requires identification. For instance, the triplet "Twilight_(band), (~associatedMusicalArtist, associatedBand), Mamiffer" is transformed into the triplets "Twilight_(band), associatedBand, unknown_1" and "unknown_1, associatedMusicalArtist, Mamiffer". Additionally, entities present in the Entities set but absent from the graph are also introduced as unknown_{index}. To further enhance graph completeness, GPT-4 is employed to verify whether entities from the Entities set are explicitly mentioned in the claim. This ensures that relevant entities are either linked to existing nodes or added as placeholders. The automatic entity verification process is conducted using a prompt template, as shown in Figure 8. Additionally, the symbol "~" is retained to denote inverse relations. A random shuffle of the constructed triplets, preserving the sequential order of "unknown" entities, is applied to improve the robustness of the model being trained.
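The multi-hop decomposition described above can be sketched as follows. `decompose_path` is a hypothetical helper, and the exact ordering and inversion conventions of the original annotation pipeline are assumptions: here a relation path is unrolled into a chain through `unknown_{index}` placeholders, and a leading "~" (inverse relation) swaps the subject and object of that hop.

```python
def decompose_path(head, relations, tail, start_index=0):
    """Decompose a multi-hop relation path into 1-hop triplets.

    Paths longer than one hop are split by inserting placeholder nodes
    named "unknown_{index}", which stand for ambiguous or missing
    entities that the pipeline must later identify.
    """
    # Chain of nodes: head -> unknown_0 -> unknown_1 -> ... -> tail
    nodes = [head]
    for i in range(len(relations) - 1):
        nodes.append(f"unknown_{start_index + i}")
    nodes.append(tail)

    triplets = []
    for s, r, o in zip(nodes, relations, nodes[1:]):
        if r.startswith("~"):
            s, o = o, s          # inverse relation: swap endpoints
            r = r[1:]            # drop the "~" marker
        triplets.append((s, r, o))
    return triplets


# One-hop paths pass through unchanged.
print(decompose_path("Twilight_(band)", ["genre"], "Post-metal"))
# Two-hop path with an inverse first hop, as in the FactKG example.
print(decompose_path("Twilight_(band)",
                     ["~associatedMusicalArtist", "associatedBand"],
                     "Mamiffer"))
```

The second call produces one placeholder node (`unknown_0`) shared by both emitted triplets, mirroring the structure shown in Figure 5.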
Generated Pseudo-Subgraph: The transformed claim results in the pseudo-subgraph illustrated in Figure 5.

Pseudo-Subgraph Label:
- Twilight (band), associated musical artist, unknown_0
- unknown_0, associated band, Mamiffer
- unknown_0, genre, Post-metal

Figure 5: Pseudo-subgraph label as the output of the data annotation process.

B.2 Training and Hyperparameter Settings of the Specialized LLM

Parameter | Value
Backbone | Llama-3-Base / Qwen-2.5-Base
Learning Rate | 1e-5
Training Epochs | 1
Training Steps | 128
Optimizer | AdamW

Table 6: Hyperparameters of the Specialized LLM in ClaimPKG.

The training configurations for the Specialized LLM are summarized in Table 6. Training is based on the Base versions of Llama 3 (Llama-3.2-1B, Llama-3.2-3B, Llama-3.1-8B) and Qwen 2.5 (Qwen-2.5-1.5B, Qwen-2.5-3B, Qwen-2.5-7B). These base models are selected to preserve their inherent linguistic capabilities while facilitating optimal adaptation to domain-specific tasks during fine-tuning. The training process employs the annotated dataset described in Section B.1 and is conducted over a single epoch using the AdamW (Loshchilov and Hutter, 2019) optimizer. This strategy enables the generation of multiple variants of the Specialized LLM, ensuring task-specific adaptation while maintaining robust generalization across diverse linguistic structures.

C Additional Experimental Results

In this section, we present additional experimental results through a systematic analysis on the FactKG development set, using 2000 randomly sampled data points across claim categories. First, we provide a more detailed explanation of the evaluation metrics used. Second, we examine the performance of the specialized LLM while varying the beam size and backbone model size. Third, we analyze the Subgraph Retrieval by adjusting the hyperparameters k1 and k2, explained in Section 4.3, which influence the diversity and correctness of the retrieved subgraphs.

C.1 Metrics

The specialized LLM's generation of pseudo-subgraphs plays a crucial role in ClaimPKG's performance. We evaluated the specialized LLM using four metrics: claim structure coverage (coverage), entity correctness (correctness), unique triplet count, and average end-to-end accuracy. While the final metric is straightforward, the three former metrics can be described as follows:

(1) Structure coverage quantifies the alignment between the LLM-generated pseudo-graph and the reference claim graph in the FactKG dataset. Specifically, for a generated graph P and a reference graph Q, coverage is computed as:

coverage(P, Q) = #(P.triplets ∩ Q.triplets) / #(Q.triplets)

(2) Entity correctness quantifies the correctness of a claim's extracted entities, i.e., whether these entities exist in the KG. Specifically, for a generated graph P and a knowledge graph G, correctness is computed as:

correctness(P, G) = #(P.entities ∩ G.entities) / #(P.entities)

(3) Unique triplet count measures the diversity of generated graph structures, with higher counts potentially enabling better subgraph retrieval through increased coverage of possible relationships.

C.2 Different Beam Sizes of the Specialized LLM

To evaluate the LLM's decoding strategy across different beam sizes, we use average accuracy, structure coverage, and unique triplet count as metrics. Table 7 details the impact of beam size on these metrics on the FactKG dev set.
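The coverage and correctness metrics defined in Section C.1 reduce to simple set operations; a minimal sketch, with triplets represented as plain tuples and entities as strings:

```python
def coverage(pred_triplets, ref_triplets):
    """Structure coverage: fraction of reference triplets recovered,
    i.e. #(P.triplets ∩ Q.triplets) / #(Q.triplets)."""
    pred, ref = set(pred_triplets), set(ref_triplets)
    return len(pred & ref) / len(ref)


def correctness(pred_entities, kg_entities):
    """Entity correctness: fraction of predicted entities present in
    the KG, i.e. #(P.entities ∩ G.entities) / #(P.entities)."""
    pred = set(pred_entities)
    return len(pred & set(kg_entities)) / len(pred)


ref = [("Twilight_(band)", "genre", "Post-metal"),
       ("unknown_0", "associatedBand", "Mamiffer")]
pred = [("Twilight_(band)", "genre", "Post-metal"),
        ("unknown_0", "recordLabel", "Mamiffer")]
print(coverage(pred, ref))                              # one of two recovered
print(correctness(["Mamiffer", "Metallica"], {"Mamiffer", "Post-metal"}))
```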
Backbone | Beam Size | Average Accuracy | Structure Coverage | Unique Triplets
Llama-3B | Beam 1 | 79.78 | 76.51 | 4.48
Llama-3B | Beam 3 | 81.80 | 81.27 | 6.44
Llama-3B | Beam 5 | 82.04 | 83.02 | 8.39
Llama-3B | Beam 10 | 82.33 | 84.61 | 13.83
Qwen-3B | Beam 1 | 78.84 | 77.95 | 3.82
Qwen-3B | Beam 3 | 80.76 | 82.66 | 5.16
Qwen-3B | Beam 5 | 81.41 | 83.58 | 6.73
Qwen-3B | Beam 10 | 82.19 | 84.62 | 9.58

Table 7: Performance metrics for different beam sizes on the FactKG dev set.

Beam Size | Gen Graph (s) | Retrieve (s) | Reason (s)
Beam 1 | 1.02 | 0.24 | 2.19
Beam 3 | 2.16 | 0.38 | 2.22
Beam 5 | 3.52 | 0.50 | 2.33
Beam 10 | 35.18 | 1.01 | 2.88

Table 8: Computing time for different beam sizes on the FactKG dev set.

Both Llama and Qwen models demonstrate consistent improvements in average performance and claim structure coverage as beam size increases from 1 to 10. At beam size 10, Llama achieves 84.61% coverage while Qwen reaches 84.62%, showing comparable performance at higher beam sizes. The unique triplet count grows more pronouncedly with larger beam sizes, with Llama generating 13.83 unique triplets and Qwen 9.58 at beam size 10. However, this improved performance comes with significant computational overhead. Table 8 details the time taken for generating pseudo-graphs, retrieving subgraphs, and reasoning over the retrieved evidence. Most notably, while the time required for retrieving subgraphs and reasoning increases only marginally with beam size, pseudo-graph generation time increases dramatically, from 1.02s at beam size 1 to 35.18s at beam size 10 - a 34.5× increase. Based on this measurement, our official framework selects beam size = 5 to balance performance gain against computational cost.

C.3
Different Model Sizes of the Specialized LLM

To evaluate how model size affects performance, we compare variants of the Llama and Qwen models ranging from 1B to 8B parameters. Table 9 presents performance on the FactKG dev set across the three key metrics explained previously: average accuracy, structure coverage, and unique triplets generated.

Backbone | Average Accuracy | Structure Coverage | Unique Triplets
Llama - 1B | 80.26 | 78.98 | 8.97
Llama - 3B | 82.04 | 83.02 | 8.39
Llama - 8B | 82.63 | 82.84 | 9.34
Qwen - 1.5B | 80.48 | 81.34 | 6.58
Qwen - 3B | 81.41 | 83.58 | 6.73
Qwen - 7B | 81.79 | 82.88 | 7.05

Table 9: Performance metrics for different model sizes on the FactKG dev set.

For both model families, we observe improvements in performance as model size increases, though with different patterns. The Llama family shows more notable gains, with average performance increasing from 80.26% (1B) to 82.63% (8B), while Qwen demonstrates more modest improvements, from 80.48% (1.5B) to 81.79% (7B). Structure coverage peaks with the 3B variants of both families: Llama-3B achieves 83.02% and Qwen-3B reaches 83.58%. Unique triplet counts stay within a narrow band as model size increases: Llama remains between 8.39 and 9.34, and Qwen between 6.58 and 7.05. Overall, scaling to larger models yields slight improvements while increasing computational requirements. Based on these results, we select the 3B variants of both model families in our official implementation, which offer an optimal balance of performance and model size, with Llama-3B and Qwen-3B showing comparable effectiveness across all metrics.

C.4 Different Hyperparameters of Subgraph Retrieval

Hyperparameters | Average Accuracy | Unique Triplets
k1 = 5; k2 = 3 | 82.00 | 11.42
k1 = 3; k2 = 1 | 82.04 | 8.39
k1 = 1; k2 = 1 | 81.87 | 3.58

Table 10: Performance of different subgraph retrieval configurations k1 and k2 with Llama-3.2-3B + Llama-3.3-70B on the FactKG dev set.
To assess the impact of different hyperparameters in the subgraph retrieval algorithm on overall performance, we systematically vary these hyperparameters while keeping the specialized LLM and general LLM fixed as Llama-3.2-3B and Llama-3.3-70B, respectively. Table 10 presents the performance across two key metrics: average accuracy and the number of unique triplets generated.

The results indicate that increasing k1 and k2 leads to a higher number of unique triplets, suggesting greater diversity in the retrieved claims. However, this increase does not consistently translate into overall performance gains, which fall in the range of 81.87-82.00. Notably, performance peaks at k1 = 3 and k2 = 1, suggesting that a more focused retrieval strategy is sufficient to achieve optimal performance, whereas excessively high k values may introduce noise or irrelevant information. Based on these results, we select k1 = 3 and k2 = 1 in our official implementation, balancing information discovery against the computation required.

C.5 Different Methods for Relation Scoring Function

Method            Average Accuracy
Embedding Based   84.64
Rerank Based      84.73
Fuzzy Matching    82.19
Exact Matching    81.57

Table 11: Performance of different scoring approaches for the Subgraph Retrieval on the FactKG test set.

To assess the impact of different scoring mechanisms on performance, we vary the scoring function and evaluate on the FactKG test set while fixing the Specialized LLM and the General LLM. Specifically, we explore multiple strategies for the Relation Scoring Function (Sim), as described in Section 4.3, incorporating diverse techniques such as embedding-based retrieval, reranking, fuzzy text matching (Wikipedia, 2025a), and exact matching. For the embedding-based and reranking approaches, we employ state-of-the-art pre-trained models, namely BGE-Large-EN-v1.5 (https://huggingface.co/BAAI/bge-large-en-v1.5) and BGE-Reranker-Large (https://huggingface.co/BAAI/bge-reranker-large), as provided by (Xiao et al., 2023). Experimental results indicate that deep learning-based methods, such as embedding and reranking, achieve superior performance, with accuracy scores of 84.64 and 84.73, respectively. In contrast, text-matching-based methods yield lower accuracy, with fuzzy matching and exact matching scoring 82.19 and 81.57, respectively. These findings highlight the effectiveness of deep learning-based approaches.

We recommend embedding-based retrieval as it enables pre-indexing of corpus relations. This allows precomputation of relation embeddings and requires encoding only the query relation for new Pseudo Subgraphs, eliminating the need to re-encode existing knowledge graph relations during inference.

D Algorithm Details

The detailed implementation of the Entity Trie-constrained decoding algorithm is provided as pseudo-code in Algorithm 1, and Algorithm 2 details the implementation of the Subgraph Retrieval.

E Case Study

We present the case study results of ClaimPKG on the FactKG dataset in Tables 12 and 13. Each table includes the claim c, pseudo-subgraphs Ps, retrieved subgraphs Sc, final justification j, and verdict v. Table 12 showcases correctly predicted examples, demonstrating ClaimPKG's ability to accurately capture claim structures and generate well-grounded justifications.
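Returning to the relation scoring variants of Appendix C.5, the two text-matching baselines can be sketched in a few lines. This is an illustrative sketch only: the stdlib `difflib` ratio stands in for the paper's fuzzy matcher, and the relation strings and helper names (`exact_score`, `fuzzy_score`, `best_relation`) are invented for this example, not the released implementation.

```python
from difflib import SequenceMatcher

def exact_score(query_rel: str, kg_rel: str) -> float:
    # Exact matching: full credit only for identical relation strings.
    return 1.0 if query_rel == kg_rel else 0.0

def fuzzy_score(query_rel: str, kg_rel: str) -> float:
    # Fuzzy matching: normalized character-overlap similarity in [0, 1].
    return SequenceMatcher(None, query_rel, kg_rel).ratio()

def best_relation(query_rel, candidate_rels, sim):
    # Sim-style scoring: rate the pseudo-subgraph relation against every
    # actual KG relation and keep the best-scoring candidate.
    return max(candidate_rels, key=lambda r: sim(query_rel, r))
```

For instance, `best_relation("location", ["location country", "operator", "faa"], fuzzy_score)` picks `"location country"`, which exact matching would miss entirely; this is the gap the table above quantifies.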
Conversely, Table 13 highlights incorrectly predicted cases of the two error types detailed in Section 5.3. The first two examples illustrate Reasoning Errors, while the third represents a Retrieval Error. These insights serve as a foundation for future improvements, highlighting key areas for refinement.

F Prompt Templates

For better reproducibility, we present all prompt templates in the appendix. Below is a quick reference list outlining the prompt templates and their usages:

•Figure 6: Prompts the General LLM to reason on the input claim and retrieved subgraphs to produce a justification and final verdict.
•Figure 7: Few-shot prompts the General LLM to generate a Pseudo Subgraph with provided examples.
•Figure 8: Annotates the inside and outside entities of the input claim for the training dataset.

Algorithm 1: LLM Decoding with Entity-Trie Constraint

Input: Specialized LLM, input claim c, Entity Trie T
Output: Pseudo-Subgraph P
Initialize: P ← ∅                                // initialize pseudo subgraph
h_0 ← InitializeHiddenStates()
constrained ← False

Function ConstrainedDecoding(LLM, c, T):
    while True do
        p_t, h_t ← LLM(P, c, h_{t-1})            // compute token probabilities and update hidden states
        if constrained then
            prefix ← ExtractPrefix(P)            // retrieve tokens from the last unclosed <e> to the end
            allowed ← T.lookup(prefix)           // retrieve allowed tokens from valid continuations in T
            p_t ← MaskProb(p_t, allowed)         // force probabilities of invalid tokens to 0
        new_token ← arg max p_t                  // select new token for P
        P ← P ∪ {new_token}
        if new_token == <e> then constrained ← True
        if new_token == </e> then constrained ← False
        if new_token == EOS then break
    return P

GENERAL REASONING
Task: Verify whether the fact in the given sentence is true or false based on the provided graph triplets. Use only the information in the triplets for verification.
- The triplets provided represent all relevant knowledge that can be retrieved.
- If the fact is a negation and the triplets do not include the fact, consider the fact as true.
- Ignore questions and verify only the factual assertion within them. For example, in the question "When was Daniel Martínez (politician) a leader of Montevideo?", focus on verifying the assertion "Daniel Martínez (politician) a leader of Montevideo".
- Interpret the "~" symbol in triplets as indicating a reverse relationship. For example: "A ~south of B" means "B is north of A".
Response Format: Provide your response in the following JSON format without any additional explanations:
{
  "rationale": "A concise explanation for your decision",
  "verdict": "true/false as the JSON value"
}
Triplets: {{triplets}}
Claim: {{claim}}

Figure 6: Prompt template for the general LLM to perform reasoning.

Algorithm 2: Subgraph Retrieval

Input: Knowledge graph G, pseudo subgraph list Pc, top-k1 candidate unknown entities, top-k2 complete triplets
Output: Combined subgraph Sc

Function SubgraphRetrieval(G, Pc, k1, k2):
    S ← ∅
    foreach P ∈ Pc do
        S ← S ∪ RetrieveSingleSubgraph(G, P, k1, k2)   // process each pseudo subgraph
    return JoinSubgraphs(S)                            // combine subgraphs

Function RetrieveSingleSubgraph(G, P, k1, k2):
    (T_comp, T_inc) ← CategorizeTriplets(P)            // split into complete/incomplete triplets
    S_inc ← RetrieveIncomplete(G, T_inc, k1)
    S_comp ← RetrieveComplete(G, T_comp, k1, k2)
    return S_inc ∪ S_comp

Function RetrieveIncomplete(G, T_inc, k1):
    S ← ∅
    G' ← GroupTripletsByUnknown(T_inc)                 // group by unknown entity
    foreach g ∈ G' do
        (E_u, R_u) ← ExtractPseudoStructure(g)         // extract entities and relations associated with the unknown entity
        C ← ∅
        foreach (e, r) ∈ (E_u, R_u) do
            (C_e, scores) ← GetCandidatesAndScores(G, e, r)
            C ← C ∪ {(C_e, scores)}
        C ← AggregateGlobalScore(C)                    // aggregate candidate scores globally
        C* ← RankTopKCandidates(C, k1)                 // select top-k1 candidates
        S ← S ∪ GetTriplets(C*, g)
    return S

Function GetCandidatesAndScores(G, e, r):
    R_act ← RetrieveActualConnectedRelations(G, e)
    E_act ← RetrieveActualConnectedEntities(G, e)
    r_scores ← RelationScore(r, R_act)
    S ← ∅
    foreach e' ∈ E_act do
        s ← MaxRelatedRelationScores(e', r_scores)
        S ← S ∪ {(e', s)}
    return S                                           // score connected entities

Function AggregateGlobalScore(C):
    // calculate new scores and reassign for each C_e
    foreach (C_e, scores) ∈ C do
        foreach (c, s) ∈ (C_e, scores) do
            s ← Sum([s'(c) for (C', s') in C if c ∈ C'])
    return C

Function RankTopKCandidates(C, k1):
    C* ← ∅
    foreach (C_e, scores) ∈ C do
        C*_e ← TopKCandidates(C_e, scores, k1)
        C* ← C* ∪ C*_e
    return C*                                          // rank candidates per unknown entity

Function RetrieveComplete(G, T_comp, k1, k2):
    S ← ∅
    foreach (h, r, t) ∈ T_comp do
        if ExistConnection(h, t) then
            T ← GetTripletsWithHeadAndTail(G, h, t)
            scores ← ∅
            foreach t' ∈ T do
                scores ← scores ∪ RelationScore(r, t'.relation)
            S ← S ∪ TopK(T, scores, k2)                // rank connected triplets
        else
            S1 ← RetrieveIncomplete(G, [(unknown, r, t)], k1)
            S2 ← RetrieveIncomplete(G, [(h, r, unknown)], k1)
            S ← S ∪ (S1 ∪ S2)                          // handle missing connection
    return S

Example 1
Claim c: The 11th Mississippi Infantry Monument is not in Gettysburg, Pennsylvania.
Pseudo-Subgraphs Pc:
  P1: 11th Mississippi Infantry Monument || location || Gettysburg, Pennsylvania
  P2: Gettysburg, Pennsylvania || ~location || 11th Mississippi Infantry Monument
      11th Mississippi Infantry Monument || location || unknown_0
  P3: 11th Mississippi Infantry Monument || location || Gettysburg
      11th Mississippi Infantry Monument || location || Pennsylvania
  P4: 11th Mississippi Infantry Monument || is part of military conflict || Gettysburg
      11th Mississippi Infantry Monument || location || Pennsylvania
      11th Mississippi Infantry Monument || ~location || unknown_0
  P5: 11th Mississippi Infantry Monument || location || Gettysburg, Pennsylvania
Retrieved Subgraphs Sc:
  (11th Mississippi Infantry Monument, municipality, Gettysburg, Pennsylvania)
  (11th Mississippi Infantry Monument, website, 11th mississippi infantry monument)
  (U.S. Route 140, location, Gettysburg)
  (Pennsylvania Route 34, location, Gettysburg)
  (U.S. Route 30 in Pennsylvania, location, Gettysburg)
  (11th Mississippi Infantry Monument, state, Pennsylvania)
  (11th Mississippi Infantry Monument, region, Adams County, Pennsylvania)
  (11th Mississippi Infantry Monument, country, United States)
  (11th Mississippi Infantry Monument, category, Contributing property)
  (Charles Saalmann, battles, Gettysburg)
  (Lutheran Theological Seminary at Gettysburg, city, Gettysburg)
  (Gettysburg Area High School, city, Gettysburg)
  (11th Mississippi Infantry Monument, range, Seminary Ridge)
Justification j: The triplets provided include '11th Mississippi Infantry Monument', 'municipality', 'Gettysburg, Pennsylvania' and '11th Mississippi Infantry Monument', 'state', 'Pennsylvania', indicating the monument's location as Gettysburg, Pennsylvania.
Verdict v: False

Example 2
Claim c: It was released in Hardcover by J. V. Jones.
Pseudo-Subgraphs Pc:
  P1: Hardcover || ~media type || unknown_0
      unknown_0 || author || J. V. Jones
  P2: unknown_0 || author || J. V. Jones
      unknown_0 || media type || Hardcover
      unknown_0 || ~genre || unknown_1
  P3: J. V. Jones || ~author || unknown_0
      unknown_0 || media type || Hardcover
  P4: Hardcover || ~media type || J. V. Jones
      J. V. Jones || ~author || unknown_0
      unknown_0 || previous work || unknown_1
  P5: Hardcover || ~media type || unknown_0
      unknown_0 || author || J. V. Jones
Retrieved Subgraphs Sc:
  (Hardcover, ~media type, A Fortress of Grey Ice)
  (Hardcover, ~media type, A Clockwork Orange)
  (Hardcover, ~media type, The Bell Curve)
  (A Fortress of Grey Ice, author, J. V. Jones)
  (A Cavern of Black Ice, author, J. V. Jones)
  (The Barbed Coil, author, J. V. Jones)
  (Hardcover, ~media type, The Andromeda Strain)
  (1963, ~date of birth, J. V. Jones)
  (Liverpool, England, ~place of birth, J. V. Jones)
Justification j: The triplets show that 'A Fortress of Grey Ice' is a book by 'J. V. Jones' and it is of 'Hardcover' media type, which supports the claim.
Verdict v: True

Example 3
Claim c: A person was born in the County of Tyrol and later died in Austria which is led by Doris Bures.
Pseudo-Subgraphs Pc:
  P1: Austria || leader || Doris Bures
      Austria || ~death place || unknown_0
      unknown_0 || death place || County of Tyrol
  P2: Austria || leader name || Doris Bures
      Austria || ~death place || unknown_0
      unknown_0 || death place || County of Tyrol
  P3: County of Tyrol || ~birth place || unknown_0
      unknown_0 || death place || Austria
      unknown_0 || successor || Doris Bures
  P4: Doris Bures || ~leader || Austria
      Austria || ~death place || unknown_0
      unknown_0 || birth place || County of Tyrol
  P5: Austria || leader || Doris Bures
      Austria || ~death place || unknown_0
      unknown_0 || birth place || County of Tyrol
Retrieved Subgraphs Sc:
  (Austria, leader, Doris Bures)
  (Austria, ~death place, Hans Hömberg)
  (Austria, ~death place, Edgar Meyer (painter))
  (Austria, ~death place, Rakhat Aliyev)
  (Hans Hömberg, death place, County of Tyrol)
  (Edgar Meyer (painter), death place, County of Tyrol)
  (Friedrich Hasenöhrl, death place, County of Tyrol)
  (County of Tyrol, ~birth place, Kurt Schuschnigg)
  (County of Tyrol, ~birth place, Alfons Gorbach)
  (County of Tyrol, ~birth place, Josef Hellensteiner)
  (Kurt Schuschnigg, death place, Austria)
  (Alfons Gorbach, death place, Austria)
  (Josef Hellensteiner, death place, Austria)
  (Barbara Prammer, successor, Doris Bures)
  (Werner Faymann, successor, Doris Bures)
  (Alois Stöger, ~successor, Doris Bures)
Justification j: The triplets show that Kurt Schuschnigg, Alfons Gorbach, and Josef Hellensteiner were born in the County of Tyrol and died in Austria. Additionally, Doris Bures is the leader of Austria.
Verdict v: True

Example 4
Claim c: The New Britain Transit is the operating organisation of Agra Airport which is located in India and has the location identifier VIAG.
Pseudo-Subgraphs Pc:
  P1: Agra Airport || location || New Britain Transit
      Agra Airport || location || India
      Agra Airport || faa || VIAG
  P2: Agra Airport || location country || India
      Agra Airport || operator || New Britain Transit
      Agra Airport || faa || VIAG
  P3: Agra Airport || location || India
      Agra Airport || operator || New Britain Transit
      Agra Airport || faa || VIAG
  P4: New Britain Transit || operator || Agra Airport
      Agra Airport || location || India
      Agra Airport || faa || VIAG
  P5: Agra Airport || operator || New Britain Transit
      Agra Airport || location || India
      Agra Airport || faa || VIAG
Retrieved Subgraphs Sc:
  (Agra Airport, location, India)
  (Agra Airport, location, Uttar Pradesh)
  (Agra Airport, location, Agra)
  (Connecticut Transit New Britain and Bristol, operator, New Britain Transit)
  (Agra Airport, icao location identifier, VIAG)
  (Agra Airport, operator, Indian Air Force)
  (Agra Airport, operator, Airports Authority of India)
  (Agra Airport, elevation, 167.94)
Justification j: The operator of Agra Airport is listed as 'Indian Air Force' and 'Airports Authority of India', not 'New Britain Transit', which operates 'Connecticut Transit New Britain and Bristol'.
Verdict v: False

Table 12: Case study of ClaimPKG on the FactKG dataset with correctly predicted samples.
Example 1
Claim c: When did Indian Air Force operate Agra Airport?
Pseudo-Subgraphs Pc:
  P1: Agra Airport || operator || Indian Air Force
  P2: Indian Air Force || ~operator || Agra Airport
      Agra Airport || ~runway designation || unknown_0
  P3: Agra Airport || ~runway designation || Indian Air Force
      Agra Airport || ~location || unknown_0
      unknown_0 || operator || Indian Air Force
  P4: Agra Airport || operator || Indian Air Force
  P5: Agra Airport || operator || Indian Air Force
Retrieved Subgraphs Sc:
  (Agra Airport, operator, Indian Air Force)
  (Agra Airport, runway length, 2743.0)
  (Agra Airport, runway length, 1818.0)
  (Agra Airport, icao location identifier, VIAG)
  (Agra Airport, location, India)
  (Agra Airport, location, Uttar Pradesh)
  (Agra Airport, location, Agra)
  (Bhuj Airport, operator, Indian Air Force)
  (Bhuj Rudra Mata Air Force Base, operator, Indian Air Force)
  (Car Nicobar Air Force Base, operator, Indian Air Force)
Justification j: The triplets do not provide information about the specific time period when the Indian Air Force operated Agra Airport. However, they do confirm that the Indian Air Force is the operator of Agra Airport, implying that the operation is either current or has occurred at some point, but the exact timing is not specified.
Verdict v: False

Example 2
Claim c: I imagined that Lady Anne Monson not lived in the Cape Verde whose capital was Winchester.
Pseudo-Subgraphs Pc:
  P1: Cape Verde || capital || Winchester
      Cape Verde || ~residence || Lady Anne Monson
  P2: Lady Anne Monson || residence || Cape Verde
      Cape Verde || capital || Winchester
  P3: Anne Monzon || residence || Cape Verde
      Cape Verde || capital || Winchester
  P4: Cape Verde || capital || Winchester
      Cape Verde || ~residence || Lady Anne Monson
  P5: Cape Verde || capital || Winchester
      Cape Verde || ~death place || Lady Anne Monson
Retrieved Subgraphs Sc:
  (Cape Verde, capital, Praia)
  (Cape Verde, currency, Cape Verdean escudo)
  (Cape Verde, ~headquarter, Movement for Democracy (Cape Verde))
  (Wessex, capital, Winchester)
  (Kingdom of England, capital, Winchester)
  (River Itchen, Hampshire, city, Winchester)
  (Cape Verde, ~birth place, Aristides Pereira)
  (Cape Verde, ~birth place, Pedro Pires)
  (Cape Verde, ~birth place, José Maria Neves)
  (India, ~residence, Lady Anne Monson)
  (Kingdom of England, ~birth place, Lady Anne Monson)
  (Darlington, ~birth place, Lady Anne Monson)
  (Anne Monzon, ~cinematography, "Wont Last a Day Without You")
  (Anne Monzon, ~cinematography, Pop Class)
  (Anne Monzon, ~cinematography, Namets!)
  (Cape Verde, ~death place, Corsino Fortes)
  (Cape Verde, ~death place, Cesária Évora)
  (Cape Verde, ~death place, Baltasar Lopes da Silva)
  (1776-02-18, ~death date, Lady Anne Monson)
  (1776, ~death year, Lady Anne Monson)
Justification j: The capital of Cape Verde is Praia, not Winchester. Winchester is the capital of Wessex and the Kingdom of England. Lady Anne Monson's residence is listed as India, and her birthplace is listed as Darlington and the Kingdom of England, with no mention of Cape Verde as her residence.
Verdict v: True

Example 3
Claim c: The operating organization for Agra Airport is the Indian Air Force who deploy the HAL Light Combat Helicopter and the Boeing C17 Globemaster III transport aircraft.
Pseudo-Subgraphs Pc:
  P1: Agra Airport || operator || Indian Air Force
      Agra Airport || ~location || Boeing C-17 Globemaster III
      Agra Airport || ~location || HAL Light Combat Helicopter
  P2: Boeing C-17 Globemaster III || ~aircraft || Agra Airport
      Agra Airport || operator || Indian Air Force
      HAL Light Combat Helicopter || ~aircraft || Agra Airport
  P3: Agra Airport || operating organization || Indian Air Force
      Agra Airport || ~location || Boeing C-17 Globemaster III
      Agra Airport || ~location || HAL Light Combat Helicopter
  P4: Agra Airport || operator || Indian Air Force
      Agra Airport || runway length || Boeing C-17 Globemaster III
      Agra Airport || ~location || HAL Light Combat Helicopter
  P5: Agra Airport || operator || Indian Air Force
      Agra Airport || ~city || HAL Light Combat Helicopter
      Agra Airport || ~city || Boeing C-17 Globemaster III
Retrieved Subgraphs Sc:
  (Agra Airport, operator, Indian Air Force)
  (Agra Airport, location, India)
  (Agra Airport, location, Uttar Pradesh)
  (Agra Airport, location, Agra)
  (2012 in spaceflight, site, Boeing C-17 Globemaster III)
  (2013 in spaceflight, site, Boeing C-17 Globemaster III)
  (2009 in spaceflight (July–December), site, Boeing C-17 Globemaster III)
  (India, origin, HAL Light Combat Helicopter)
  (Developmental trials, status, HAL Light Combat Helicopter)
  (Hindustan Aeronautics, manufacturer, HAL Light Combat Helicopter)
  (Boeing C-17 Globemaster III, aircraft transport, United States Air Force)
  (Boeing C-17 Globemaster III, aircraft transport, Royal Air Force)
  (Boeing C-17 Globemaster III, aircraft transport, Royal Australian Air Force)
  (2743.0, runway length, Agra Airport)
  (1818.0, runway length, Agra Airport)
  (HAL Light Combat Helicopter, aircraft helicopter, Indian Air Force)
  (HAL Light Combat Helicopter, aircraft attack, Army Aviation Corps (India))
  (HAL Light Combat Helicopter, type, Attack helicopter)
  (Agra Airport, icao location identifier, VIAG)
  (Airlift, type, Boeing C-17 Globemaster III)
  (United States, origin, Boeing C-17 Globemaster III)
  (In service, status, Boeing C-17 Globemaster III)
Justification j: The triplets confirm Agra Airport is operated by the Indian Air Force, the Indian Air Force deploys the HAL Light Combat Helicopter, and the Boeing C-17 Globemaster III is indeed a transport aircraft, but there is no direct information linking the Indian Air Force at Agra Airport to the deployment of the Boeing C-17 Globemaster III. However, given the information provided, we can infer the involvement of the Indian Air Force with the mentioned aircraft types, albeit indirectly for the Boeing C-17 Globemaster III.
Verdict v: False

Table 13: Case study of ClaimPKG on the FactKG dataset with incorrectly predicted samples.

FEWSHOT PSEUDO SUBGRAPH GENERATION
Task: Generate a reference graph to verify the following claim. Only return the subgraphs following the format of the provided examples and do NOT include other unnecessary information.
Here are some examples:

Claim: Akeem Priestley played for club RoPS and currently plays for the Orange County Blues FC, which is managed by Oliver Wyss.
Subgraphs:
<e>Orange County Blues FC</e> || manager || <e>Oliver Wyss</e>
<e>Orange County Blues FC</e> || clubs || <e>Akeem Priestley</e>
<e>Akeem Priestley</e> || team || <e>RoPS</e>

Claim: He is a Rhythm and Blues singer from Errata, Mississippi!
Subgraphs:
<e>Rhythm and blues</e> || genre || unknown_0
unknown_0 || birth place || <e>Errata, Mississippi</e>
unknown_0 || background || unknown_1

Claim: Arròs negre is a traditional dish from Spain, and from the Catalonia region, which is led by the Maria Norrfalk.
Subgraphs:
<e>Arròs negre</e> || country || <e>Spain</e>
<e>Arròs negre</e> || region || <e>Catalonia</e>
<e>Catalonia</e> || leader name || <e>Maria Norrfalk</e>

Claim: Well, Jason Sherlock did not have a nickname!
Subgraphs:
<e>Jason Sherlock</e> || nickname || unknown_0

Claim: Garlic is the main ingredient of Ajoblanco, which is from Andalusia.
Subgraphs:
<e>Ajoblanco</e> || region || <e>Andalusia</e>
<e>Ajoblanco</e> || ingredient || <e>Garlic</e>

..... More examples .....

Claim: {{claim}}
Subgraphs:

Figure 7: Prompt template for the general LLM to generate pseudo subgraphs.

ANNOTATE IN AND OUT ENTITIES
Task: Specify if the following entities are mentioned in the claim or not. Respond correctly in the following JSON format and do not output anything else:
{
  "in_entities": [list of entities that are in the claim],
  "out_entities": [list of entities that are not in the claim]
}
Do not change the entity names from the list of provided entities.
Claim: {{claim}}
Entities: {{entities}}

Figure 8: Prompt template to annotate inside and outside entities of the claim.
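The entity-trie lookup that drives the constrained decoding in Algorithm 1 can be sketched in a few lines. This is a toy illustration: string tokens stand in for the specialized LLM's subword IDs, and `EntityTrie`/`mask_probs` are names invented for this sketch, not the released implementation.

```python
# Toy sketch of Algorithm 1's entity-trie constraint. After an <e> tag is
# opened, only token continuations that spell a known entity are allowed;
# mask_probs zeroes out everything else before the arg-max step.

class EntityTrie:
    def __init__(self, entities):
        self.root = {}
        for tokens in entities:                # each entity is a token sequence
            node = self.root
            for tok in tokens + ["</e>"]:      # closing tag ends every entity
                node = node.setdefault(tok, {})

    def lookup(self, prefix):
        # Tokens allowed to follow the prefix generated since the last <e>.
        node = self.root
        for tok in prefix:
            if tok not in node:
                return set()
            node = node[tok]
        return set(node)

def mask_probs(probs, allowed):
    # Force probabilities of tokens not reachable in the trie to 0.
    return {tok: (p if tok in allowed else 0.0) for tok, p in probs.items()}
```

For a trie built over the entities `["Agra", "Airport"]` and `["Agra", "Fort"]`, `lookup(["Agra"])` returns `{"Airport", "Fort"}`, so a token such as `"India"` is masked out even if the LLM prefers it.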
arXiv:2505.22564v1 [cs.CV] 28 May 2025

PRISM: Video Dataset Condensation with Progressive Refinement and Insertion for Sparse Motion

Jaehyun Choi  Jiwan Hur  Gyojin Han  Jaemyung Yu  Junmo Kim
Korea Advanced Institute of Science and Technology (KAIST)
{chlwogus, jiwan.hur, hangj0820, jaemyung, junmo.kim}@kaist.ac.kr

Abstract

Video dataset condensation has emerged as a critical technique for addressing the computational challenges associated with large-scale video data processing in deep learning applications. While significant progress has been made in image dataset condensation, the video domain presents unique challenges due to the complex interplay between spatial content and temporal dynamics. This paper introduces PRISM (Progressive Refinement and Insertion for Sparse Motion), a novel approach to video dataset condensation that fundamentally reconsiders how video data should be condensed. Unlike the previous method that separates static content from dynamic motion, our method preserves the essential interdependence between these elements. Our approach progressively refines and inserts frames to fully accommodate the motion in an action, considering the relation between the gradients of each frame, and achieves better performance with less storage. Extensive experiments across standard video action recognition benchmarks demonstrate that PRISM outperforms existing disentangled approaches while maintaining compact representations suitable for resource-constrained environments.

1 Introduction

Machine learning research has progressed substantially through the parallel development of novel algorithmic frameworks and the growing availability of extensive training data. In the domain of computer vision, video data represents one of the richest sources of visual information, where static content elements and temporal dynamics are fundamentally intertwined, creating a comprehensive representation of our visual world.
These video datasets have enabled researchers to develop models capable of understanding human actions [1, 2], tracking objects through time [3, 4], predicting future events [5], and generating realistic video [6]. As the field continues to evolve, both the quality and scale of video datasets have grown exponentially, with collections like Kinetics-700 [1], HowTo100M [7], and YouTube-8M [8] now containing millions of video clips and petabytes of data.

While these expansive datasets have enabled significant progress in video understanding, they simultaneously introduce substantial computational demands for storage, preprocessing, and training. The resource requirements for working with such massive collections can limit broader participation in research with video datasets. Similarly, in the image domain, image dataset condensation, first concretized in GM [9], tackled the problem of storage and training time in image classification, and in DATM [10] achieved higher performance than the original data with condensed data only 20% the size of the whole dataset. Building upon successes in image dataset condensation, researchers have begun exploring methods to distill the essential information from large-scale video datasets into more compact formats while preserving their effectiveness for model training.

A pioneering work by Wang et al. [11] disentangles the video into static content and dynamic motion. They employed a two-stage method, which in the first stage trains the static memory and then freezes it during the second stage, where the dynamic memory gets trained.

Preprint. Under review.

Figure 1: Visual representation of prior video dataset condensation methods and PRISM (Ours). In frame-wise matching, each frame gets updated individually, neglecting the relation between one another. The static-dynamic disentangling method [11] learns the temporal dynamics; however, it is restricted by the frozen pre-trained static image. Unlike these methods, our method learns the motion dynamics without any constraints to a single frame through a holistic approach.

While this decomposition may offer computational advantages, it fundamentally misrepresents the intrinsic nature of real-world videos, where content and motion are not separable but rather deeply interdependent. By generating content and motion features independently, this approach neglects the crucial ways in which object positioning constrains possible motion trajectories and, conversely, how motion patterns influence the transformation of visual content throughout a sequence. For example, if a static image represents a person who is clapping, one static image could show a person with both hands touching (making the clapping sound) while another static image could show the person with hands separated. In both scenarios, the class for the video is "clapping"; however, the motion differs significantly. This demonstrates how content position within a frame inherently influences motion possibilities, and conversely, how motion patterns constrain the evolution of content.

Our method, PRISM, considers this intrinsic relationship between content and motion and is structured as illustrated in Figure 1. Rather than treating all frames equally or separating static and dynamic components as in prior work, PRISM (Progressive Refinement and Insertion for Sparse Motion) adopts a holistic optimization strategy over sparse key frames selected based on motion complexity.
It begins with only two key frames per video, typically the first and last, and interpolates intermediate frames to match the expected sequence length for training. Matching losses, such as distribution matching [12] or trajectory matching [13], are applied over the interpolated sequence, but only the key frames are optimized. When interpolation fails to capture complex motion, identified by negative gradient correlation with neighboring key frames, PRISM inserts a new frame at that location. This frame is initialized by interpolation and refined jointly with the other key frames. Since this refinement and insertion process occurs independently for each class, motion-intensive actions receive more representational capacity, while simpler actions remain compact. By progressively allocating frames where needed, PRISM creates temporally coherent and memory-efficient synthetic datasets that preserve the essential dynamics of real video data. Extensive experiments on UCF-101, HMDB-51, and Something-Something-V2 demonstrate that PRISM consistently outperforms existing methods, achieving state-of-the-art results while maintaining strong content and motion fidelity.

2 Related Works

2.1 Dataset Distillation

Dataset distillation aims to synthesize a small, highly informative dataset that captures the essential characteristics of the original large-scale dataset. When models are trained on these condensed datasets, they can achieve performance comparable to training on the full dataset, but with significantly reduced computational and storage requirements. As deep learning models and datasets continue to grow in size, this field has evolved into several methodological branches.

Gradient Matching. This approach ensures that synthetic data produces similar gradient updates as the
original dataset. DC [9] pioneered this direction by formulating dataset distillation as a bi-level optimization problem that matches single-step gradients between original and synthetic datasets. DSA [14] enhanced this framework through differentiable Siamese augmentation, improving generalization by ensuring consistent gradients across various data transformations. IDC [15] contributed efficient parameterization strategies by storing synthetic images at lower resolutions and upsampling during training, reducing storage requirements while maintaining performance. These methods provide a direct way to ensure that synthetic data induces similar training behavior as the original dataset.

Distribution Matching. These methods aim to align feature distributions between synthetic and real data, often providing more efficient alternatives to gradient matching. DM [12] introduced a framework that aligns distributions in embedding space, significantly improving computational efficiency. CAFE [16] ensures that statistical feature properties from synthetic and real samples remain consistent across network layers, providing more comprehensive feature alignment. Distribution-matching methods typically offer better scaling properties when condensing large-scale datasets with numerous categories.

Trajectory Matching. Rather than matching single-step gradients or feature distributions, these methods aim to match entire training trajectories. MTT [13] developed techniques to create condensed datasets by mimicking the training trajectories of models trained on the original dataset, significantly improving distillation efficiency. DATM [10] introduced difficulty-aligned trajectory matching to enable effective distillation without performance loss even as the synthetic dataset size changes. These approaches capture longer-range training dynamics, often resulting in better performance than single-step methods.
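The single-step gradient-matching idea behind DC [9] can be sketched with a toy linear model and squared loss; this is an illustrative sketch only (all data, shapes, and function names are invented for the example, not the cited implementation).

```python
import numpy as np

# Toy sketch of single-step gradient matching: compare the gradient a
# model takes on a real batch versus a synthetic batch; optimizing the
# synthetic batch would drive this mismatch toward zero.

def grad_linear(W, X, y):
    # Gradient of 0.5 * ||X W - y||^2 with respect to W.
    return X.T @ (X @ W - y)

def grad_match_loss(W, X_real, y_real, X_syn, y_syn):
    # One minus cosine similarity between the two flattened gradients.
    g_r = grad_linear(W, X_real, y_real).ravel()
    g_s = grad_linear(W, X_syn, y_syn).ravel()
    cos = g_r @ g_s / (np.linalg.norm(g_r) * np.linalg.norm(g_s) + 1e-12)
    return 1.0 - cos
```

When the synthetic batch equals the real batch the loss is zero; in practice the (much smaller) synthetic set is updated by gradient descent on this loss across many network initializations.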
2.2 Video Dataset Condensation

Despite extensive research on image dataset condensation, the field of video dataset condensation remains largely unexplored, with only Wang et al. [11] making notable contributions. Their approach disentangles static content from dynamic motion by distilling videos into a single RGB static frame for content representation and a separate multi-frame single-channel component for motion. Their method follows a two-stage process: first, training the static component, then freezing it while updating only the dynamic component. Through experiments with varying numbers of condensed frames, they found that frame count does not significantly impact action recognition performance, leading to their focus on a hybrid static-dynamic representation.

Our work differs fundamentally in how it approaches the interaction between content and motion. While previous methods explicitly separate these two components by first learning a static representation and then optimizing motion as an auxiliary signal, we adopt a holistic training framework that treats the video as a fully coupled spatiotemporal structure from the beginning.

3 Method

Let $\mathcal{D} = \bigcup_{c=0}^{C-1} \mathcal{D}_c$, where $\mathcal{D}_c = \{(V_c^i, y_c^i)\}_{i=1}^{N_c}$, denote the real dataset consisting of $C$ classes. Each video $V_c^i \in \mathbb{R}^{T \times H \times W \times 3}$ contains $T$ frames of height $H$, width $W$, and 3 color channels. The goal of video dataset condensation is to synthesize a compact synthetic dataset

$$\mathcal{S} = \bigcup_{c=0}^{C-1} \mathcal{S}_c, \qquad \mathcal{S}_c = \{(S_c^j, y_c^j)\}_{j=1}^{M_c}, \qquad M_c \ll N_c, \tag{1}$$

such that each condensed video $S_c^j$ effectively captures essential spatiotemporal patterns specific to class $c$, while drastically reducing memory and computation costs with minimal degradation in downstream task performance.

3.1 Temporal Frame Interpolation

PRISM, Progressive Refinement and Insertion for Sparse Motion, initializes each motion sequence $S_c^j$ using only the first and last frames of
the video segment, rather than the entire sequence of $T$ frames.

Figure 2: Overview of PRISM. We first initialize the key frame set as the start and the end frame. As training proceeds, we calculate the cosine similarity of the gradients for each temporally interpolated frame against all frames in the key frame set. Then, the frames that have a negative correlation with all of the current key frames are added to the key frame set and used in the subsequent training.

This two-frame initialization serves as a sparse temporal anchor from which new frames are progressively inserted during training. This design is motivated by prior works in video frame interpolation, which show that simple or low-velocity motion can often be approximated by linear interpolation between the two endpoints [17, 18]. We denote this initial set of frames as
$$S_c^j = \{s_{c,1}, s_{c,T}\}. \tag{2}$$
All intermediate frames between these key frames are populated by linear temporal interpolation to construct a full sequence of length $T$. Given two adjacent key frames $s_{c,k_i}$ and $s_{c,k_{i+1}}$, the interpolated frame at time $t$ is computed as
$$s_{c,t} = \alpha_t s_{c,k_i} + (1-\alpha_t) s_{c,k_{i+1}}, \quad \text{where } \alpha_t = \frac{k_{i+1} - t}{k_{i+1} - k_i}, \quad k_i < t < k_{i+1}. \tag{3}$$
Throughout training, only the frames in $S_c^j$ are treated as trainable parameters, while interpolated frames are held fixed.

3.2 Gradient-Guided Frame Insertion

Once the initial set $S_c^j = \{s_{c,1}, s_{c,T}\}$ is established, PRISM progressively expands this set by inserting frames that deviate from linear motion, as indicated by their gradient directions. At each training step, we consider interpolated candidate frames $s_{c,t}$ that lie between adjacent key frames $s_{c,k_i}$ and $s_{c,k_{i+1}}$, where $k_i < t < k_{i+1}$.
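The linear blend of Eq. (3) and the gradient-direction insertion test sketched above can be written in a few lines of pure Python. This is a minimal illustrative sketch, with frames and gradients as flat float lists; `fill_sequence`, `cos_sim`, and `should_insert` are hypothetical helper names, not the authors' code:

```python
import math

def fill_sequence(key_frames, T):
    """Expand sparse key frames into a full length-T clip via Eq. (3).

    key_frames: {1-based time index: frame}; must contain indices 1 and T.
    Frames are flat float lists standing in for H*W*3 tensors.
    """
    keys = sorted(key_frames)
    seq = {keys[-1]: key_frames[keys[-1]]}
    for ki, kj in zip(keys, keys[1:]):
        for t in range(ki, kj):
            alpha = (kj - t) / (kj - ki)            # alpha_t in Eq. (3)
            seq[t] = [alpha * a + (1 - alpha) * b
                      for a, b in zip(key_frames[ki], key_frames[kj])]
    return [seq[t] for t in range(1, T + 1)]

def cos_sim(u, v):
    """Cosine similarity of two flattened gradients."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(a * a for a in v)))

def should_insert(g_t, g_prev, g_next, eps=0.0):
    """Promote a candidate frame only when its gradient is negatively
    aligned with BOTH adjacent key-frame gradients (eps = 0 recovers
    the 'negative correlation' rule)."""
    return cos_sim(g_t, g_prev) < eps and cos_sim(g_t, g_next) < eps

# Key frames at t=1 and t=5: the t=3 frame lands exactly halfway.
print(fill_sequence({1: [0.0], 5: [4.0]}, 5)[2])            # [2.0]
# A candidate gradient opposing both endpoint gradients triggers insertion.
print(should_insert([-1.0, -1.0], [1.0, 0.0], [0.0, 1.0]))  # True
```

A candidate that passes the test simply joins `key_frames` and becomes trainable at the next step, mirroring the set update of Eq. (6).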
For each candidate frame, we compute its gradient $\nabla \mathcal{L}(s_{c,t})$ and measure its cosine similarity with the gradients of its two adjacent key frames:
$$\mathrm{cos}^t_i = \cos\!\big(\nabla_{s_{c,t}} \mathcal{L}(s_{c,t}),\, \nabla_{s_{c,k_i}} \mathcal{L}(s_{c,k_i})\big), \quad \mathrm{cos}^t_{i+1} = \cos\!\big(\nabla_{s_{c,t}} \mathcal{L}(s_{c,t}),\, \nabla_{s_{c,k_{i+1}}} \mathcal{L}(s_{c,k_{i+1}})\big). \tag{4}$$
If both cosine similarities are below $\epsilon$, i.e.,
$$\mathrm{cos}^t_i < \epsilon \quad \text{and} \quad \mathrm{cos}^t_{i+1} < \epsilon, \tag{5}$$
then the candidate frame $s_{c,t}$ is considered to lie at a position whose motion cannot be represented through linear interpolation, and it is inserted into the key frame set:
$$S_c^j \leftarrow S_c^j \cup \{s_{c,t}\}. \tag{6}$$
This gradient-based criterion captures non-linear transitions in appearance or motion and enables the model to refine its support set by inserting only those frames that contribute meaningful learning signals. The insertion process is repeated iteratively throughout training, except during the warm-up and cool-down phases, resulting in a temporally adaptive sequence that emphasizes semantically rich regions. Our frame insertion strategy is theoretically justified through the following lemma.

Lemma 1 (Loss-Descent Blockage under Gradient Misalignment) Let $s_t = \alpha s_{k_i} + (1-\alpha) s_{k_{i+1}}$, with $0 < \alpha < 1$, be a linearly interpolated frame between two key frames $s_{k_i}$ and $s_{k_{i+1}}$. Let the task-loss gradients be denoted as
$$g_t = \nabla_{s_t} \mathcal{L}(s_t), \quad g_i = \nabla_{s_{k_i}} \mathcal{L}(s_{k_i}), \quad g_{i+1} = \nabla_{s_{k_{i+1}}} \mathcal{L}(s_{k_{i+1}}).$$
Suppose $\langle g_t, g_i \rangle < 0$ and $\langle g_t, g_{i+1} \rangle < 0$. Then, for every convex combination
$$v = \lambda(-g_i) + (1-\lambda)(-g_{i+1}), \quad \lambda \in [0,1],$$
the following inequality holds: $\langle g_t, v \rangle > 0$. Consequently, no first-order update obtained by modifying only the two endpoint frames can decrease $\mathcal{L}$ at $s_t$; the loss is stationary or strictly
increasing along every such direction. Therefore, $s_t$ must be promoted to the key frame set and directly optimized to enable further loss minimization.

Proof. See supplementary material A. □

3.3 Warm-Up and Cool-Down Scheduling

To ensure stable optimization and avoid premature or unstable expansion of the key frame set, we introduce warm-up and cool-down phases.

Warm-Up Phase During the initial warm-up period, frame insertion is disabled, and the model is trained using only the two endpoint frames in the key set $S_c^j = \{s_{c,1}, s_{c,T}\}$. This phase allows the initial frames to undergo sufficient optimization before being used as reference points for gradient-based frame comparison. Without this phase, noisy or unstable gradients from undertrained key frames may lead to unreliable insertion decisions and degrade the overall sequence structure.

Cool-Down Phase In the final stage of training, frame insertion is again disabled. This prevents the inclusion of new key frames that would receive disproportionately fewer updates due to the limited remaining iterations. In the context of dataset condensation, where each synthetic sample must convey maximally useful gradients, such late insertions can be detrimental. By freezing the key set in the final phase, we ensure that all selected frames are adequately trained and contribute uniformly to the optimization objective.

3.4 Optimization Objective

PRISM is integrated with Dataset Condensation via Gradient Matching (DM), where synthetic videos are optimized to match the gradient signals of real videos. The optimization is performed only over the key frame set $S_c^j$, which is progressively expanded during training. Interpolated frames serve solely as semantic probes and are not included in the optimization unless inserted into $S_c^j$. Let $\theta$ denote the parameters of the student model, and let $\mathcal{L}_{task}$ denote the supervised loss (e.g., cross-entropy).
For each class $c$, we define a batch of real videos $B_c^{real}$ and a batch of synthetic videos $B_c^{syn} = \{S_c^j\}_{j=1}^{M_c}$, where each $S_c^j$ contains a subset of trainable frames. The optimization objective is to minimize the gradient matching loss between real and synthetic data:
$$\min_{\theta,\, \{S_c^j\}} \sum_{c=1}^{C} \left\| \nabla_\theta \mathcal{L}_{task}\big(f_\theta(B_c^{syn}), y_c\big) - \nabla_\theta \mathcal{L}_{task}\big(f_\theta(B_c^{real}), y_c\big) \right\|_2^2. \tag{7}$$
During training, gradients are backpropagated only through the currently active key frame set $S_c^j$, while interpolated frames remain fixed unless inserted via the gradient-based criterion. Warm-up and cool-down phases regulate the insertion timing, ensuring stable training and sufficient updates for all selected frames.

By integrating PRISM into DM, we introduce a temporally sparse yet semantically adaptive mechanism that focuses the optimization on the most informative frames. This enables effective condensation of long video sequences into compact synthetic sets, with minimal redundancy and improved semantic coverage. The overall framework of PRISM is illustrated in Figure 2.

Table 1: Experiment results on four video benchmarks and prior methods categorized into coreset methods, static/dynamic disentangled methods, and holistic methods. † represents author-provided results. Bold and underline denote the best and second-best scores for each setting, respectively.

| Method (VPC) | MiniUCF 1 | MiniUCF 5 | MiniUCF 10 | HMDB51 1 | HMDB51 5 | HMDB51 10 | Kinetics-400 1 | Kinetics-400 5 | SSv2 1 | SSv2 5 |
|---|---|---|---|---|---|---|---|---|---|---|
| Random | 10.9±0.7 | 19.6±0.4 | 27.8±1.1 | 3.3±0.1 | 6.8±0.7 | 9.8±0.4 | 3.0±0.2 | 5.5±0.2 | 3.1±0.1 | 3.6±0.1 |
| Herding [22] | 13.2±1.3 | 26.3±1.0 | 33.7±0.3 | 3.0±0.1 | 9.0±0.6 | 10.8±0.6 | 3.3±0.1 | 6.3±0.2 | 2.8±0.1 | 3.6±0.1 |
| K-Center [23] | 13.9±1.6 | 23.2±0.7 | 29.1±0.6 | 2.4±0.4 | 5.2±0.4 | 8.0±0.1 | 3.1±0.1 | 6.2±0.2 | 2.6±0.2 | 4.5±0.1 |
| DM [12] | 15.3±1.1 | 25.7±0.2 | 30.0±0.6 | 6.1±0.2 | 8.0±0.2 | 12.1±0.4 | 6.3±0.0 | 9.1±0.9 | 3.6±0.0 | 4.1±0.0 |
| + Wang et al.† [11] | 17.5±0.1 | 27.2±0.4 | – | 6.0±0.4 | 8.2±0.1 | – | 6.3±0.2 | 7.0±0.1 | 4.0±0.1 | 3.8±0.1 |
| DM + PRISM | 17.9±0.3 | 28.0±0.1 | 31.0±0.1 | 7.5±0.3 | 10.5±0.4 | 12.8±0.2 | 7.1±0.1 | 8.1±0.1 | 3.9±0.2 | 4.1±0.1 |
| Whole Dataset | 57.8±1.1 | | | 25.4±0.2 | | | 30.3±0.1 | | 23.0±0.3 | |

4 Experiment

4.1 Dataset

We conduct experiments on four datasets: UCF101 [19] and HMDB51 [20] as small-scale datasets, and Kinetics [1] and Something-Something V2 [21] as large-scale datasets. UCF101 consists of 13,320 video clips of 101 action categories. Following prior work [11], we use a miniaturized version of UCF101, hereinafter miniUCF, which includes the 50 most common action categories of the UCF101 dataset. HMDB51 consists of 6,849 video clips of 51 action categories. Kinetics-400 has videos of 400 human action classes, and Something-Something V2 has 174 motion-centered classes. For miniUCF and HMDB51, we sample 16 frames per video with a sampling interval of 4 and resize frames to 112×112. For Kinetics-400 and Something-Something V2, we sample 8 frames per video and resize to 64×64. Consistent with prior work [11], we apply horizontal flipping with 50% probability as the sole data augmentation strategy.

4.2 Experimental Setting

For all experiments, we employ miniC3D, which comprises 4 Conv3D layers, as our backbone architecture, following the pioneering work in video dataset condensation. Unlike DM [12], PRISM is initialized from Gaussian noise rather than from a random real frame of the dataset. We report the mean of three evaluations for each experiment, measuring top-1 accuracy for miniUCF and HMDB51, and top-5 accuracy for Kinetics-400 and Something-Something V2.
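The clip preprocessing described above (16 frames at a sampling interval of 4 for miniUCF/HMDB51, 8 frames for the large-scale sets) can be sketched as an index sampler. `sample_clip_indices` is a hypothetical helper; the start offset and the wrap-around handling of short clips are our assumptions, as the paper specifies only frame count and interval:

```python
def sample_clip_indices(num_video_frames, num_frames=16, interval=4):
    """Return `num_frames` frame indices spaced `interval` apart,
    wrapping around clips shorter than num_frames * interval."""
    return [(i * interval) % num_video_frames for i in range(num_frames)]

# miniUCF/HMDB51 setting: 16 frames, interval 4 -> spans 64 raw frames.
print(sample_clip_indices(300)[:4])  # [0, 4, 8, 12]
print(sample_clip_indices(10)[:4])   # short clip wraps: [0, 4, 8, 2]
```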
We compare our method against three coreset selection methods (random selection, Herding [22], and K-Center [23]), an image dataset condensation method (DM [12]), and the first and, to date, only video dataset condensation method (Wang et al. [11]). We evaluate performance under different condensation ratios, measured in Videos Per Class (VPC). Note that VPC follows the Images Per Class (IPC) notation of image dataset condensation, and that PRISM, in most cases, stores fewer than 16 frames, since frames are only added when required. During inference, we use the index position of each saved frame, which is stored alongside the frames with negligible memory consumption. Our experiments employ the SGD optimizer with a momentum of 0.95 for all methods. The hyperparameters, including the learning rates, are detailed in supplementary material B.

4.3 Results

Table 1 presents the experiment results, categorized into coreset methods, static/dynamic disentangled methods, and the holistic method. We categorize DM [12] among the disentangled methods because it initializes the frames from real frames in the dataset. The results show that PRISM achieves state-of-the-art performance in most experimental settings. The performance increments of PRISM are smaller on the motion-centric large dataset, where only 8 frames are used for training. However, we note that we always achieve the second-best performance, if not the first. Additionally, PRISM scales with VPC, which was not the case for Wang et al. [11].

Table 2: Storage requirements (MB) for MiniUCF and HMDB51 under different VPC settings. † represents author-provided results.

| Method (VPC) | MiniUCF 1 | MiniUCF 5 | MiniUCF 10 | HMDB51 1 | HMDB51 5 | HMDB51 10 |
|---|---|---|---|---|---|---|
| Random | 115 | 586 | 1.15GB | 115 | 586 | 1.17GB |
| Herding [22] | 115 | 586 | 1.15GB | 115 | 586 | 1.17GB |
| K-Center [23] | 115 | 586 | 1.15GB | 115 | 586 | 1.17GB |
| DM [12] | 115 | 586 | 586 | 115 | 586 | 586 |
| + Wang et al.† [11] | 94 | 455 | – | 94 | 455 | – |
| DM + PRISM | 20 | 133 | 324 | 22 | 137 | 287 |
| Whole Dataset | 9.81GB | | | 4.93GB | | |

Table 3: Results of a cross-architecture experiment on the MiniUCF dataset condensed to 1 VPC. † represents author-provided results. Bold denotes the best scores for each model.

| Method | ConvNet3D | CNN+GRU | CNN+LSTM |
|---|---|---|---|
| DM | 15.3±1.1 | 9.9±0.7 | 9.2±0.3 |
| Wang et al.† | 17.5±0.1 | 12.0±0.7 | 10.3±0.2 |
| PRISM (Ours) | 17.9±0.3 | 18.9±0.8 | 18.2±1.3 |

As the storage footprint of condensed data is a critical factor in dataset condensation, we report the storage requirements in Table 2. We follow the same calculation procedure as prior work [11], treating each sample as a float32 tensor. For PRISM, the reported storage corresponds to the total number of frames retained after the condensation process completes for each VPC setting. We ignore the negligible overhead of storing frame indices (less than 0.000016 MB per index). Unlike previous methods that begin with a fixed number of frames (e.g., 16), PRISM starts with only 2 key frames per video and progressively inserts additional frames only when the cosine similarity between gradients is negative. This selective strategy results in a significantly lower storage footprint (up to a 75% reduction compared to prior approaches) while achieving superior performance. Moreover, since PRISM adds frames adaptively rather than proportionally to the number of VPCs, its storage does not grow linearly with VPC.
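The float32 accounting behind the fixed-frame rows of Table 2 can be reproduced directly; `storage_mb` is our illustrative helper, assuming MiB (2^20-byte) units:

```python
def storage_mb(num_classes, vpc, frames, h, w, channels=3, bytes_per=4):
    """Float32 storage of a condensed video set, in MiB (2**20 bytes).
    Frame indices are ignored as negligible, following the paper's accounting."""
    n_bytes = num_classes * vpc * frames * h * w * channels * bytes_per
    return n_bytes / 2**20

# MiniUCF at 1 VPC: 50 classes, full 16-frame clips at 112x112x3.
print(round(storage_mb(50, 1, 16, 112, 112)))  # 115
```

This matches the 115 MB reported for the coreset baselines on MiniUCF; PRISM's rows are far smaller because only the retained key frames (starting from 2 per video) are stored.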
This behavior is clearly visible in the miniUCF results, where storage grows much more slowly than would be expected under proportional expansion. Such efficiency makes PRISM especially advantageous when users wish to scale up performance under higher VPC budgets without incurring prohibitive storage costs.

5 Ablation

Cross-Architecture As dataset condensation aims to perform well not only on the training model but also on other architectures, we validate our approach's robustness in Table 3. The experimental results demonstrate that PRISM not only achieves state-of-the-art performance compared to prior methods but also maintains robust performance across different architectures. Notably, while DM and Wang et al. [11] show significant performance drops when evaluated on CNN+GRU, our method maintains consistent performance with only minimal degradation. This strong cross-architecture generalization underscores the strength of our holistic design, which preserves the intrinsic coupling between content and motion, an essential property of video data often overlooked by prior methods.

Effect of Number of Initial Key Frames Table 4 presents results when varying the number of initial key frames for the condensation process. We observe a consistent decrease in performance as

Table 4: Results by varying the number of initial representative frames of PRISM. Bold denotes
the best scores for each dataset.

| Dataset / Initial Key Frames | 2 | 3 | 4 | 6 | 8 |
|---|---|---|---|---|---|
| MiniUCF | 17.9±0.3 | 16.4±0.5 | 15.9±0.3 | 15.0±0.2 | 14.8±0.2 |
| HMDB51 | 7.5±0.3 | 6.4±0.2 | 6.0±0.4 | 5.2±0.4 | 4.9±0.2 |

Table 5: Ablation study results for (A) with and without insertion, (B) frame selection strategy, (C) similarity metric, and (D) training phase scheduling (warm-up and cool-down).

(A)
| Dataset | w/ Insertion | w/o Insertion |
|---|---|---|
| MiniUCF | 17.9±0.3 | 15.8±1.2 |
| HMDB51 | 7.5±0.3 | 6.1±0.3 |

(B)
| Dataset | Negative Grad. | Random Pos. |
|---|---|---|
| MiniUCF | 17.9±0.3 | 16.8±0.4 |
| HMDB51 | 7.5±0.3 | 6.8±0.2 |

(C)
| Dataset | Cosine Sim. | L2 Distance |
|---|---|---|
| MiniUCF | 17.9±0.3 | 15.7±0.7 |
| HMDB51 | 7.5±0.3 | 6.0±0.6 |

(D)
| Dataset | w/o Warm-Up | w/o Cool-Down |
|---|---|---|
| MiniUCF | 16.1±0.8 | 16.9±1.3 |
| HMDB51 | 6.8±1.2 | 6.3±0.3 |

the number of key frames increases. This trend suggests that while more initial frames may cover more content, they also introduce more redundancy, dilute the discriminative capacity of the condensed set, and increase optimization complexity, making updates more difficult. This leads to a less focused optimization signal, especially in the early stages of training. The result highlights the importance of a sparse, minimally selected frame initialization rather than simply increasing the number of input frames.

Without Frame Insertion One of PRISM's core contributions is the progressive insertion of frames based on gradient cues. To isolate the effect of this component, we perform an ablation where only two key frames (the first and last) are optimized throughout training. Temporal interpolation is still applied between these endpoints, but no additional key frames are inserted. As shown in Table 5 (A), removing this progressive insertion leads to a substantial performance drop, confirming that our insertion strategy is critical for capturing complex motion and structure. Nevertheless, this reduced variant still outperforms several existing baselines that use real images, despite being initialized from Gaussian noise.
This suggests that even a minimal version of our method can serve as a competitive and meaningful baseline for video dataset condensation.

Frame Selection Strategy We evaluate the effectiveness of our gradient-based frame selection by comparing it against a random selection baseline. Both methods operate under identical conditions: a new frame is inserted whenever a negative cosine similarity is detected between gradients. However, while PRISM selects the frame with the most negative cosine similarity, the baseline randomly selects one from the candidate pool, including the negatively correlated one. This setup ensures that the two methods differ only in how the new frame is selected, not in how often or when frames are added. As shown in Table 5 (B), replacing our targeted frame selection with random addition results in substantial performance drops across both datasets. This confirms that gradient correlation is not just a useful signal but a decisive factor in identifying semantically meaningful frames that enhance the condensation process.

Cosine Similarity vs. L2 Distance PRISM uses cosine similarity to identify frames whose gradients are directionally misaligned with those of existing key frames, signaling potential discontinuities in motion or content. This angle-based criterion is particularly effective for capturing semantic transitions, as it measures not only the degree of difference but also the directional disagreement between gradients. To test whether
cosine similarity is truly essential, we compare it against a distance-based alternative using the L2 norm. However, unlike cosine similarity, which has a well-defined geometric interpretation (e.g., zero for orthogonal gradients), L2 distance lacks a natural threshold. To make the comparison meaningful, we calibrated the L2 threshold to 0.141, which corresponds to 10% of the unit-vector distance implied by a 90-degree angular separation in cosine space. As shown in Table 5 (C), cosine similarity significantly outperforms L2 distance across both HMDB51 and miniUCF. These results confirm that directional disagreement, rather than magnitude alone, is a more reliable indicator of frame-level semantic variation, justifying our use of cosine similarity for frame insertion in PRISM.

Effect of Warm-Up and Cool-Down Phases To stabilize training and prevent premature or noisy frame insertions, PRISM incorporates both a warm-up and a cool-down phase. The warm-up phase delays the start of frame insertion to allow gradients to stabilize around the initial key frames. The cool-down phase, on the other hand, suspends further insertions once the condensation process nears convergence, preventing overfitting or unnecessary growth in the synthetic video set. To evaluate the necessity of each phase, we conduct ablations where either the warm-up or the cool-down phase is removed. As shown in Table 5 (D), removing either phase degrades performance: the absence of the cool-down phase leads to over-insertion and noisy representations, and the absence of the warm-up phase causes unstable optimization due to early gradient noise. These findings confirm that both the warm-up and cool-down stages are integral to PRISM's temporal curriculum, ensuring effective and stable condensation dynamics throughout training. All qualitative results can be found in supplementary material D.
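The 0.141 calibration above can be checked directly: two unit-norm gradients separated by 90 degrees sit sqrt(2) apart in L2, and the ablation takes 10% of that gap. A small sanity-check snippet (not the authors' code):

```python
import math

# |u - v| for unit vectors u, v with <u, v> = 0:
# |u - v|^2 = |u|^2 + |v|^2 - 2<u, v> = 1 + 1 - 0 = 2.
orthogonal_gap = math.sqrt(2.0)
l2_threshold = 0.1 * orthogonal_gap   # 10% of the orthogonal-gradient distance
print(round(l2_threshold, 3))  # 0.141
```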
Figure 3: Optical-flow comparison for the Soccer Penalty class on miniUCF: mean optical flow of real videos (top) versus mean optical flow of the condensed data (IPC = 5, bottom).

Optical Flow Result To qualitatively assess whether the frames selected through our gradient-based criterion capture meaningful temporal dynamics, we visualize the class-wise mean optical flow on the miniUCF dataset. Figure 3 compares the average optical flow of real videos (top) with that of our condensed data generated using only negatively correlated frames (IPC = 5, bottom) for the class Soccer Penalty. Despite the aggressive frame reduction, our method produces motion patterns that closely resemble those of the real videos.

This result implies that selecting frames based on gradient misalignment is not only computationally principled but also semantically grounded: our approach reliably detects the frames responsible for key motion events, even without any supervision. More qualitative evaluations are provided in supplementary material C.

Limitations While PRISM enables efficient and adaptive frame selection under typical spatiotemporal conditions, it may encounter limitations under two challenging regimes. First, in videos with extremely fast or abrupt motion, the linear interpolation between key frames may fail to approximate intermediate dynamics, especially when the number of inserted frames is insufficient. As PRISM relies on gradient-based cues to detect non-linearity, such rapid transitions may not always manifest as cleanly separable cosine
misalignment, particularly early in training when gradients are noisy. Second, when applied to very long video sequences, the initial frame optimization from Gaussian noise may become unstable. This resembles difficulties observed in long-range video generation, where spatiotemporal consistency becomes harder to preserve over extended durations. Although our warm-up and cool-down phases mitigate some of these effects, scaling PRISM to extreme motion speeds or extended temporal horizons may require additional stabilization strategies.

6 Conclusion

This paper introduced Progressive Refinement and Insertion for Sparse Motion (PRISM), a novel approach that preserves the interdependence between content and motion in videos. Unlike previous methods that artificially separate these elements, our holistic approach begins with a minimal representation and strategically adds frames where gradient correlation indicates complex motion patterns. Experiments across multiple benchmarks demonstrate that PRISM outperforms existing methods while reducing storage requirements by up to 70%. Our method maintains better cross-architecture generalization and adapts the number of frames based on class-specific motion complexity.

References

[1] J. Carreira and A. Zisserman, "Quo vadis, action recognition? A new model and the Kinetics dataset," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 6299–6308.
[2] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool, "Temporal segment networks: Towards good practices for deep action recognition," in European Conference on Computer Vision. Springer, 2016, pp. 20–36.
[3] L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. Torr, "Fully-convolutional siamese networks for object tracking," in Computer Vision–ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part II. Springer, 2016, pp. 850–865.
[4] B. Li, J. Yan, W. Wu, Z.
Zhu, and X. Hu, "High performance visual tracking with siamese region proposal network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8971–8980.
[5] Y. A. Farha and J. Gall, "MS-TCN: Multi-stage temporal convolutional network for action segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 3575–3584.
[6] S. Tulyakov, M.-Y. Liu, X. Yang, and J. Kautz, "MoCoGAN: Decomposing motion and content for video generation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1526–1535.
[7] A. Miech, D. Zhukov, J.-B. Alayrac, M. Tapaswi, I. Laptev, and J. Sivic, "HowTo100M: Learning a text-video embedding by watching hundred million narrated video clips," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 2630–2640.
[8] S. Abu-El-Haija, N. Kothari, J. Lee, P. Natsev, G. Toderici, B. Varadarajan, and S. Vijayanarasimhan, "YouTube-8M: A large-scale video classification benchmark," arXiv preprint arXiv:1609.08675, 2016.
[9] B. Zhao, K. R. Mopuri, and H. Bilen, "Dataset condensation with gradient matching," arXiv preprint arXiv:2006.05929, 2020.
[10] Z. Guo, K. Wang, G. Cazenavette, H. Li, K. Zhang, and Y. You, "Towards lossless dataset distillation via difficulty-aligned trajectory matching," arXiv preprint arXiv:2310.05773, 2023.
[11] Z. Wang, Y. Xu, C. Lu, and Y.-L. Li, "Dancing with still images: Video distillation via static-dynamic disentanglement," in Proceedings of the IEEE/CVF Conference
on Computer Vision and Pattern Recognition, 2024, pp. 6296–6304.
[12] B. Zhao and H. Bilen, "Dataset condensation with distribution matching," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023, pp. 6514–6523.
[13] G. Cazenavette, T. Wang, A. Torralba, A. A. Efros, and J.-Y. Zhu, "Dataset distillation by matching training trajectories," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 4750–4759.
[14] B. Zhao and H. Bilen, "Dataset condensation with differentiable siamese augmentation," in International Conference on Machine Learning. PMLR, 2021, pp. 12674–12685.
[15] J.-H. Kim, J. Kim, S. J. Oh, S. Yun, H. Song, J. Jeong, J.-W. Ha, and H. O. Song, "Dataset condensation via efficient synthetic-data parameterization," in International Conference on Machine Learning. PMLR, 2022, pp. 11102–11118.
[16] K. Wang, B. Zhao, X. Peng, Z. Zhu, S. Yang, S. Wang, G. Huang, H. Bilen, X. Wang, and Y. You, "CAFE: Learning to condense dataset by aligning features," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 12196–12205.
[17] S. Niklaus, L. Mai, and F. Liu, "Video frame interpolation via adaptive convolution," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 670–679.
[18] Z. Liu, R. A. Yeh, X. Tang, Y. Liu, and A. Agarwala, "Video frame synthesis using deep voxel flow," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 4463–4471.
[19] K. Soomro, A. R. Zamir, and M. Shah, "UCF101: A dataset of 101 human actions classes from videos in the wild," arXiv preprint arXiv:1212.0402, 2012.
[20] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre, "HMDB: A large video database for human motion recognition," in 2011 International Conference on Computer Vision. IEEE, 2011, pp. 2556–2563.
[21] R. Goyal, S. Ebrahimi Kahou, V. Michalski, J. Materzynska, S.
Westphal, H. Kim, V. Haenel, I. Fruend, P. Yianilos, M. Mueller-Freitag et al., "The 'something something' video database for learning and evaluating visual common sense," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 5842–5850.
[22] M. Welling, "Herding dynamical weights to learn," in Proceedings of the 26th Annual International Conference on Machine Learning, 2009, pp. 1121–1128.
[23] O. Sener and S. Savarese, "Active learning for convolutional neural networks: A core-set approach," arXiv preprint arXiv:1708.00489, 2017.

A Proof of Lemma 1

Lemma 1 (Loss-Descent Blockage under Gradient Misalignment) Let $s_t = \alpha s_{k_i} + (1-\alpha) s_{k_{i+1}}$, with $0 < \alpha < 1$, be a linearly interpolated frame between two key frames $s_{k_i}$ and $s_{k_{i+1}}$. Let the task-loss gradients be denoted as
$$g_t = \nabla_{s_t} \mathcal{L}(s_t), \quad g_i = \nabla_{s_{k_i}} \mathcal{L}(s_{k_i}), \quad g_{i+1} = \nabla_{s_{k_{i+1}}} \mathcal{L}(s_{k_{i+1}}).$$
Suppose $\langle g_t, g_i \rangle < 0$ and $\langle g_t, g_{i+1} \rangle < 0$. Then, for every convex combination
$$v = \lambda(-g_i) + (1-\lambda)(-g_{i+1}), \quad \lambda \in [0,1],$$
the following inequality holds: $\langle g_t, v \rangle > 0$. Consequently, no first-order update obtained by modifying only the two endpoint frames can decrease $\mathcal{L}$ at $s_t$; the loss is stationary or strictly increasing along every such direction. Therefore, $s_t$ must be promoted to the key frame set and directly optimized to enable further loss minimization.

Proof.
By the bilinearity of the inner product,
$$\langle g_t, v \rangle = \lambda \langle g_t, -g_i \rangle + (1-\lambda) \langle g_t, -g_{i+1} \rangle.$$
Applying the assumption $\langle g_t, g_i \rangle < 0$, we obtain $\langle g_t, -g_i \rangle = -\langle g_t, g_i \rangle > 0$, and similarly, $\langle g_t, -g_{i+1} \rangle = -\langle g_t, g_{i+1} \rangle > 0$. Therefore,
$$\langle g_t, v \rangle = \lambda \langle g_t, -g_i \rangle + (1-\lambda) \langle g_t, -g_{i+1} \rangle > 0.$$
This shows that the directional derivative of $\mathcal{L}$ at $s_t$ along any direction $v$ formed by adjusting only the endpoints is positive:
$$D_v \mathcal{L}(s_t) = \langle \nabla \mathcal{L}(s_t), v \rangle > 0.$$
Thus, no first-order update along such directions can reduce the loss at $s_t$, and $\mathcal{L}(s)$ is strictly increasing along all directions spanned by $-g_i$ and $-g_{i+1}$. It follows that further loss minimization requires directly optimizing $s_t$ as a key frame. □

B Hyperparameter

Table A shows the learning rate and batch size under each dataset and IPC. The threshold $\epsilon$ is set to 0 for all experiments throughout the manuscript. The warm-up and cool-down phases each take up 20% of the total iterations; in other words, if the condensation process is set to 100 iterations, the warm-up phase occupies the first 20 iterations and the cool-down phase the last 20, leaving 80 iterations for the progressive refinement and insertion of frames. We follow the setting of the prior method [11] for evaluation and cross-architecture evaluation.

Table A: Hyperparameters for PRISM under different datasets and IPC.

| Method | Dataset | IPC | Train LR | Batch Real | Eval Epoch | Eval LR |
|---|---|---|---|---|---|---|
| DM | MiniUCF | 1 | 1 | 64 | 500 | 1e−2 |
| | | 5 | 25 | 64 | | |
| | | 10 | 50 | 64 | | |
| | HMDB51 | 1 | 0.7 | 64 | | |
| | | 5 | 25 | 64 | | |
| | | 10 | 75 | 64 | | |
| | Kinetics-400 | 1 | 1 | 64 | | |
| | | 5 | 50 | 128 | | |
| | SSv2 | 1 | 3 | 64 | | |
| | | 5 | 30 | 128 | | |

Figure 4: Extra optical flow results on SSv2 under the 1 VPC setting (classes 60 and 26). The red circle shows where the optical flow matches the most between the condensed video and the real video.

C Optical Flow Analysis

To further assess the temporal fidelity of our condensation framework, we visualize the optical flow fields of both real and condensed videos.
Optical flow represents the pixel-wise motion between consecutive frames and serves as a direct indicator of whether the synthesized frames preserve realistic temporal dynamics. In our visualizations, both in the supplementary material and in the main manuscript, we use a standard HSV-based color encoding, where the hue (i.e., the color itself) corresponds to the direction of motion (rightward appearing reddish, leftward bluish, and upward greenish), while the saturation and brightness encode the magnitude of motion, with brighter and more saturated regions indicating stronger or faster motion. Regions with little to no motion appear desaturated or grayish. Despite beginning from Gaussian noise and adding frames along training, the optical flow results show that PRISM is capable of progressively aligning the synthesized motion with that of the real video. As shown in Figure 4, the red circles highlight regions where the direction and magnitude of the condensed optical flow closely resemble those of the original video. This further supports the observation that PRISM can synthesize coherent motion dynamics from sparse temporal supervision. Nevertheless, some failure modes are also apparent in these optical flow visualizations. In cases where the first and last frames contain minimal or no motion, the model struggles and generates meaningless or abrupt motion during the warm-up phase. Moreover, in action classes that involve fast or
abrupt motions, the resulting flow fields from the condensed video occasionally lack directional consistency and show spatial noise, indicating poor alignment. These limitations appear to be exacerbated by the use of Gaussian noise initialization, which may hinder the model's ability to focus solely on the informative motion patterns at early training stages.

Figure 5: Analysis of when PRISM fails (SSv2 classes 51 and 156; MiniUCF classes 15 and 5).

D Qualitative Results

We visualize the condensed videos on HMDB51 and MiniUCF under the 1 VPC setting for maximal clarity. The visualized frames in Figure 6 and Figure 7 correspond to those retained after the condensation process; the noise images are placeholders that are not stored with the condensed data. Red rectangles highlight the negative effect of omitting the warm-up phase. As consistently observed across both datasets, removing the warm-up leads to excessive frame selection, resulting in redundant and less informative synthetic frames while consuming more memory. Blue rectangles indicate frames produced when the cool-down phase is omitted. Although the overall results appear more stable than in the warm-up-removed case, we observe that some frames are added during the final few iterations of condensation. These late-added frames are often insufficiently trained, reducing their utility for action recognition.

Figure 6: Visualization of PRISM, PRISM without warm-up, and PRISM without cool-down on HMDB51 under 1 VPC. Red rectangles highlight the negative effects of omitting the warm-up phase, while blue rectangles indicate frames that may be under-trained due to the absence of a cool-down phase.

Figure 7: Visualization of PRISM, PRISM without warm-up, and PRISM without cool-down on MiniUCF under 1 VPC.
Red rectangles highlight the negative effects of omitting the warm-up phase, while blue rectangles indicate frames that may be under-trained due to the absence of a cool-down phase.
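The HSV flow-color encoding used in these visualizations can be sketched per flow vector as follows. This is a minimal stdlib sketch, not the paper's code: `max_mag` (the normalization constant) is an assumption, value alone stands in for the magnitude channel, and production code would vectorize this over the whole flow field.

```python
import colorsys
import math

def flow_to_rgb(dx, dy, max_mag):
    """Map one optical-flow vector to an RGB color: hue encodes direction,
    value encodes magnitude, so static regions render dark/desaturated."""
    mag = math.hypot(dx, dy)
    hue = (math.atan2(dy, dx) % (2 * math.pi)) / (2 * math.pi)  # direction in [0, 1)
    val = min(mag / max_mag, 1.0) if max_mag > 0 else 0.0       # clipped magnitude
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, val)
    return (round(r * 255), round(g * 255), round(b * 255))

# Rightward motion maps to hue 0 (red); zero motion maps to black.
```

With this convention, a rightward vector renders red and a static pixel renders black, matching the description above.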
arXiv:2505.22566v1 [cs.CV] 28 May 2025

Universal Visuo-Tactile Video Understanding for Embodied Interaction

Yifan Xie1, Mingyang Li1, Shoujie Li1, Xingting Li1, Guangyu Chen2, Fei Ma3, Fei Richard Yu3, Wenbo Ding1
1Tsinghua University, 2Sun Yat-sen University, 3Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ)

Abstract

Tactile perception is essential for embodied agents to understand physical attributes of objects that cannot be determined through visual inspection alone. While existing approaches have made progress in visual and language modalities for physical understanding, they fail to effectively incorporate tactile information that provides crucial haptic feedback for real-world interaction. In this paper, we present VTV-LLM, the first multi-modal large language model for universal Visuo-Tactile Video (VTV) understanding that bridges the gap between tactile perception and natural language. To address the challenges of cross-sensor and cross-modal integration, we contribute VTV150K, a comprehensive dataset comprising 150,000 video frames from 100 diverse objects captured across three different tactile sensors (GelSight Mini, DIGIT, and Tac3D), annotated with four fundamental tactile attributes (hardness, protrusion, elasticity, and friction). We develop a novel three-stage training paradigm that includes VTV enhancement for robust visuo-tactile representation, VTV-text alignment for cross-modal correspondence, and text prompt finetuning for natural language generation. Our framework enables sophisticated tactile reasoning capabilities including feature assessment, comparative analysis, scenario-based decision making, and so on. Experimental evaluations demonstrate that VTV-LLM achieves superior performance in tactile video understanding tasks, establishing a foundation for more intuitive human-machine interaction in tactile domains.
1 Introduction

Touch is a fundamental sensory modality that provides humans with physical information unattainable through vision alone, such as material attributes, surface texture, and compliance. This tactile feedback enables sophisticated physical reasoning and interaction in our environment [1,2,3]. While recent advances in vision-language models [4,5,6,7,8] have demonstrated impressive capabilities in visual reasoning, these models remain fundamentally limited by their inability to perceive tactile attributes, restricting their effectiveness in scenarios requiring physical interaction and reasoning about material characteristics that cannot be reliably inferred from visual cues alone. Visuo-tactile sensors [9], like GelSight [10], DIGIT [11], and Tac3D [12], have emerged as promising technologies for capturing tactile information, generating image-like representations that encode physical properties such as pressure distribution, surface geometry, and friction characteristics. However, there remains a significant challenge in bridging the domain gap between these tactile representations and natural language understanding.

Preprint. Under review.

Figure 1: The workflow consists of four key components: (a) Data Collection, which includes 100 diverse objects recorded by 3 different tactile sensors, resulting in 150,000 video frames; (b) Attribute Annotation, where objects are systematically categorized across 4 static and dynamic tactile attributes: hardness, protrusion, elasticity, and friction; (c) Template Generation, which generates 10,000 question-answer pairs using structured templates for various comparative analyses; and (d) Embodied Interaction, demonstrating VTV-LLM's capability to perform tactile feature assessment, surface feature distinction, tactile scenario analysis, and so on. Through this integrated approach, VTV-LLM enables multi-modal reasoning about physical attributes that cannot be determined through visual inspection alone, creating a foundation for more sophisticated human-machine interaction in tactile understanding domains.

The inherent differences between tactile data captured across various sensor types further complicate this integration, as each sensor produces distinct data formats with varying resolutions and physical property encodings. Existing research on tactile learning has made progress in representation learning [13,14,15,16,17,18], but these approaches often focus either exclusively on static attributes or fail to develop comprehensive frameworks that integrate both tactile perception and language understanding.
Most critically, they lack the ability to ground tactile perceptions in natural language descriptions and reasoning, which is essential for human-machine communication about physical properties and interactions [19,20]. Additionally, the temporal dimension of tactile interactions, which captures how surfaces respond to pressing, sliding, and rotational movements, remains underexplored in current approaches, despite containing crucial information about dynamic material attributes. To address these challenges, we present VTV-LLM, the first multi-modal large language model for universal visuo-tactile video understanding. Our approach treats tactile perception as a cross-modal reasoning problem, where tactile videos are aligned with linguistic descriptions to enable sophisticated reasoning about physical attributes. As illustrated in Fig. 1(d), VTV-LLM supports a diverse range of embodied interaction capabilities, from basic tactile feature assessment to complex comparative analyses and scenario-based decision making. Additionally, we construct the VTV150K dataset, comprising 150,000 video frames collected from 100 common objects across three different tactile sensors. We systematically annotate these videos with four fundamental tactile attributes (hardness, protrusion, elasticity, and friction), creating a structured foundation for tactile reasoning. To bridge the substantial gap between tactile perception and language understanding, we develop a three-stage training paradigm: (1) VTV enhancement through optical flow-guided masking to learn robust visuo-tactile representations, (2) VTV-text alignment to establish cross-modal correspondence, and (3) text prompt finetuning to optimize natural language generation about tactile attributes.
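The optical flow-guided masking in stage (1) is not specified in detail here. One plausible sketch samples patches for masking with probability biased by their mean flow magnitude; the `bias` mixing parameter and the exact sampling rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

def flow_guided_mask(flow_mag, mask_ratio=0.75, bias=0.5, rng=None):
    """Sample a boolean patch mask biased by per-patch optical-flow magnitude.
    flow_mag: (N,) mean flow magnitude per patch; bias=0 recovers uniform masking."""
    rng = np.random.default_rng(rng)
    n = flow_mag.size
    n_masked = int(round(n * mask_ratio))
    # Mix a uniform distribution with a flow-proportional one.
    p = (1.0 - bias) / n + bias * flow_mag / (flow_mag.sum() + 1e-8)
    idx = rng.choice(n, size=n_masked, replace=False, p=p / p.sum())
    mask = np.zeros(n, dtype=bool)
    mask[idx] = True
    return mask
```

Patches with stronger motion are then masked (and reconstructed) more often, pushing the encoder toward the informative deformation dynamics.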
Our main contributions can be summarized as follows:

• We introduce VTV-LLM, the first multi-modal large language model capable of universal visuo-tactile video understanding, enabling sophisticated embodied reasoning through natural
language interaction.

• We contribute VTV150K, a comprehensive dataset of 150,000 visuo-tactile video frames capturing 100 diverse objects across three tactile sensors, annotated with four fundamental tactile attributes.

• We develop a novel three-stage training paradigm that effectively bridges the domain gap between tactile perception and language understanding, providing a valuable reference for future cross-modal learning efforts.

2 Related Works

Tactile Perception. Tactile perception has evolved significantly from early sensors measuring basic physical properties to sophisticated vision-based systems providing high-resolution contact information. Visuo-tactile sensors [9] such as GelSight [10], DIGIT [11], and Tac3D [12] have garnered widespread attention for their ability to capture detailed contact deformations through elastomeric gels and embedded cameras. These sensors have enabled numerous robotic applications including material classification [21], shape reconstruction [22,23], and dexterous manipulation tasks [24,14]. Recent research has focused on developing representation learning approaches for tactile data, progressing from task-specific models [25] to general-purpose representations using self-supervised techniques like contrastive multi-view coding [21] and masked autoencoders [26]. The integration of tactile sensing with other modalities has also emerged as a promising direction: UniTouch [17] dynamically fuses tactile signals with visual and audio data to enhance cross-sensor knowledge transferability, Yu et al. [15] aligned tactile images with vision-language models for object property reasoning, and Fu et al. [16] used a touch-vision-language model for open-vocabulary classification. Unlike prior works, our method processes visuo-tactile video directly and focuses on sophisticated tactile reasoning.
Self-Supervised Video Representation Learning. Self-supervised video representation learning has emerged as a critical area for developing robust visual features without manual annotations. VideoMAE [27] pioneered this approach by effectively adapting masked autoencoding strategies to the video domain, demonstrating significant performance improvements across various benchmark tasks. Subsequently, VideoMAEv2 [28] enhanced this framework through the introduction of dual masking mechanisms, which substantially improved computational efficiency while maintaining representational power. Recent advancements in this field have focused on sophisticated optimizations along both temporal and spatial dimensions [29,30,31,32], addressing challenges unique to video understanding such as motion coherence and long-range dependencies. In the tactile domain, Sparsh [18] explored the ability of existing self-supervised learning methods to characterize tactile video, and Feng et al. [13] utilized a tube masking strategy to process tactile video. Our method builds upon these foundations by introducing optical flow-guided masking specifically designed for visuo-tactile videos, which addresses the unique challenges of capturing both spatial deformation and temporal dynamics in tactile interactions.

Multi-Modal Large Language Models. Multimodal Large Language Models (MLLMs) have transformed AI research by enabling reasoning across textual and visual modalities. Early efforts integrated LLMs as agents for downstream tasks [33,34,35]. Later approaches focused on parameter-efficient tuning [36,37] and instruction tuning [38,39] to align visual semantics with language. Recent advances have incorporated video processing [40,41] and diverse sensory inputs [42], enabling applications in robotics [43,44].
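The tube-masking strategy mentioned above (as in VideoMAE and Feng et al.) masks the same spatial patches in every frame, so the model cannot recover a masked patch from an adjacent frame and must use longer-range temporal context. A minimal sketch:

```python
import numpy as np

def tube_mask(num_frames, h_patches, w_patches, mask_ratio, rng=None):
    """VideoMAE-style tube masking: draw one 2-D patch mask and
    broadcast it across all frames (a 'tube' through time)."""
    rng = np.random.default_rng(rng)
    n = h_patches * w_patches
    n_masked = int(round(n * mask_ratio))
    flat = np.zeros(n, dtype=bool)
    flat[rng.choice(n, n_masked, replace=False)] = True
    spatial = flat.reshape(h_patches, w_patches)
    # Same spatial mask for every frame -> (T, H, W) boolean tube mask.
    return np.broadcast_to(spatial, (num_frames, h_patches, w_patches))
```

For a typical 16-frame, 14x14-patch clip at a 90% ratio, every frame shares the same 176 masked patch positions.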
In our work, we present the first visuo-tactile video large language model to bridge the gap between tactile perception and natural | https://arxiv.org/abs/2505.22566v1 |
language.

3 Methods

In this section, we first introduce VTV150K, a large-scale dataset of video-question-answer pairs, in Sec. 3.1. Subsequently, we present VTV-LLM, the first visuo-tactile video large language model designed for visuo-tactile video understanding and embodied interaction, in Sec. 3.2.

3.1 VTV150K

Overview. Visuo-tactile sensor technologies suffer from inadequate standardization and significant cross-sensor data discrepancies, which substantially impede the transferability of tactile representation models across different sensing platforms. Existing methods [14,18,45,13] addressing these challenges exhibit notable limitations, as they either neglect the integration of both static and dynamic tactile attributes or fail to incorporate comprehensive visuo-tactile video understanding for embodied interaction. In this work, we introduce VTV150K, a comprehensive large-scale dataset comprising video-question-answer pairs collected across three diverse visuo-tactile sensors, as illustrated in Fig. 1(a-c). The dataset construction methodology encompasses three sequential stages: data collection, attribute annotation, and template generation. We will delve into the specifics of these stages.

Data Collection. To facilitate the grounding of embodied interaction on tactile inputs, we collected a comprehensive dataset comprising 100 common objects, yielding a total of 150,000 visuo-tactile video frames. As illustrated in Fig. 1(a), we employed multiple visuo-tactile sensors to ensure style diversity: GelSight Mini [10] and DIGIT [11] sensors for capturing high-resolution visuo-tactile information, and Tac3D [12] for measuring deformation force fields. Due to the relatively low resolution of Tac3D, we implemented a cubic spline interpolation algorithm [46] to reconstruct more detailed force field representations. Data collection was performed manually to address the challenges associated with properly interacting with irregularly shaped objects.
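The cubic-spline reconstruction of the low-resolution Tac3D force field can be sketched with SciPy; this is an illustrative stand-in (grid sizes, the upsampling factor, and the spline settings are assumptions, as the paper gives only the algorithm family):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def upsample_force_field(field, factor):
    """Cubic bivariate spline upsampling of a 2-D force-field grid.
    Needs at least a 4x4 input grid for cubic (kx=ky=3) splines."""
    h, w = field.shape
    spline = RectBivariateSpline(np.arange(h), np.arange(w), field, kx=3, ky=3)
    rows = np.linspace(0, h - 1, h * factor)
    cols = np.linspace(0, w - 1, w * factor)
    return spline(rows, cols)  # (h*factor, w*factor) interpolated grid
```

With the default smoothing (s=0) the spline interpolates, so the original sensor readings are preserved exactly at their grid positions.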
For each object, we systematically captured five visuo-tactile videos across different regions using various sensors. Our data collection process consisted of three sequential interactions: (1) normal pressing against the object surface to capture pressure distribution, (2) rotational movement to acquire shear information, and (3) sliding motion to obtain friction characteristics. This multi-interaction approach enables comprehensive tactile information extraction for embodied interaction.

Attribute Annotation. To facilitate tactile reasoning, we annotated our dataset across four fundamental static and dynamic tactile attributes, as shown in Fig. 1(b). Each attribute was categorized into three distinct levels: hardness was classified as highly deformable (28%), moderately deformable (33%), or extremely hard (39%); protrusion as absent (41%), moderate (26%), or strong (33%); elasticity as none (42%), moderate (30%), or strong (28%); and friction as slight (32%), moderate (25%), or strong (43%). This structured annotation framework enables comprehensive tactile attribute analysis for downstream reasoning tasks.

Template Generation. Template generation facilitates the creation of question-answer pairs for model training. We developed multiple problem templates encompassing various reasoning tasks: tactile feature assessment, surface feature distinction, texture optimal selection, and so on. To instantiate these templates, we systematically integrated diverse visuo-tactile video combinations, comparison operators (e.g., "more", "less", "most", "least"), and attribute selectors to generate a comprehensive dataset of 10,000 question-answer pairs. As illustrated in Fig. 1(c), our generation process follows a hierarchical framework: selection, filtration, and structured question formulation with corresponding ground-truth annotations.
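The template-instantiation step can be sketched as follows; the template wording, attribute keys, and the tie-filtration rule are illustrative assumptions, not the released generation code:

```python
import itertools

QUESTION = "Comparing the objects in <{a}> and <{b}>, which one is {comp} {adj}?"
# Ordinal levels 0-2 per attribute, matching the three annotation levels above.
ATTRIBUTES = {"hardness": "hard", "protrusion": "protruded",
              "elasticity": "elastic", "friction": "frictional"}

def generate_qa_pairs(labels):
    """labels: dict mapping video name -> {attribute: ordinal level 0-2}."""
    pairs = []
    videos = sorted(labels)
    for (a, b), comp, (attr, adj) in itertools.product(
            itertools.combinations(videos, 2), ("more", "less"), ATTRIBUTES.items()):
        la, lb = labels[a][attr], labels[b][attr]
        if la == lb:
            continue  # filtration: ties have no unambiguous ground truth
        winner = a if (la > lb) == (comp == "more") else b
        pairs.append({"question": QUESTION.format(a=a, b=b, comp=comp, adj=adj),
                      "answer": f"The object in <{winner}>."})
    return pairs
```

The filtration step mirrors the hierarchical selection-filtration-formulation framework: only attribute pairs with a defined ordinal gap yield a question.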
For more comprehensive details about attribute annotation and template generation, please refer to | https://arxiv.org/abs/2505.22566v1 |
the Supplementary Material.

3.2 VTV-LLM

Overview. VTV-LLM aims to serve as a multi-modal framework capable of integrating visual-tactile video data with large language models to facilitate tactile reasoning for embodied interaction. As illustrated in Fig. 2(a), VTV-LLM formulates tactile perception as a cross-modal approach to question answering and descriptive generation. By leveraging the rich sensory information inherent in visuo-tactile video data, VTV-LLM enhances understanding in scenarios traditionally challenging for standard vision-only models, particularly in applications requiring tactile attribute inference.

Figure 2: (a) VTV-LLM framework: A multi-modal system integrating visual-tactile video data with large language models to facilitate tactile reasoning for embodied interaction; (b) Multi-Stage Training: It consists of VTV enhancement, alignment between visuo-tactile video and text, and prompt-based finetuning to generate accurate tactile descriptions.

At the core of VTV-LLM lies a large language model (Qwen 2.5 [4,5]) that synthesizes complex multi-modal information from visuo-tactile videos, utilizing world knowledge to generate coherent, human-readable descriptions of tactile attributes.
In general, a visuo-tactile video can be mathematically represented as a sequence of frames $V = \{I_t\}_{t=0}^{T}$, where each frame $I_t$ captures both visual and tactile information at timestamp $t$. Initially, high-dimensional features $F_{VTV}$ are extracted from $V$ using a VTV encoder based on the ViT-Base architecture [47] adapted from VideoMAE [27,28]:

$$F_{VTV} = f_{enc}(V) = \mathrm{ViT}\big(\{\mathrm{Patch}(I_t) + \mathrm{TE}(t)\}_{t=0}^{T}\big), \quad (1)$$

where $\mathrm{Patch}(\cdot)$ denotes the patch embedding operation and $\mathrm{TE}(t)$ represents temporal embeddings. These features are then processed through a visual projector $f_{V\text{-}proj}$, consisting of two linear layers with a GELU activation function [48] in between, to produce the visual embedding $E_V$:

$$E_V = f_{V\text{-}proj}(F_{VTV}) = W_2 \cdot \mathrm{GELU}(W_1 \cdot F_{VTV} + b_1) + b_2, \quad (2)$$

where $W_1, W_2$ are learnable weight matrices and $b_1, b_2$ are bias terms. Concurrently, the textual prompt is tokenized and processed through the LLM's text projector to produce the text embedding $E_T$. For effective multi-modal reasoning, we introduce special tokens <video_start>, <video>, and <video_end> to denote the beginning, content, and end of the visuo-tactile video in the input sequence. These tokens serve as anchors for the model to properly align visual information with textual understanding during inference.

Figure 3: Training pipeline of VTV enhancement.

Given these aligned representations, the large language model $f_{LLM}$ performs reasoning to generate a response $A$ describing tactile attributes:

$$A = f_{LLM}(E_V, E_T) = \mathrm{Qwen}(\mathrm{Concat}([E_V; E_T])). \quad (3)$$

Given the complexity of integrating visuo-tactile information with language representations, we implement a staged training approach to develop our framework. As shown in Fig.
2(b), VTV-LLM adopts a three-stage training paradigm encompassing VTV enhancement, VTV-text alignment, and text prompt finetuning.
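The two-layer projector of Eq. (2) can be sketched numerically as follows. The dimensions are illustrative assumptions (the paper does not state the hidden or LLM embedding widths), and the tanh approximation stands in for the exact GELU.

```python
import numpy as np

def gelu(x):
    """tanh approximation of GELU (Hendrycks & Gimpel)."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def v_projector(f_vtv, W1, b1, W2, b2):
    """E_V = W2 . GELU(W1 . F_VTV + b1) + b2, mapping encoder features
    into the LLM embedding space, as in Eq. (2)."""
    return W2 @ gelu(W1 @ f_vtv + b1) + b2

# Illustrative widths: 768-d ViT features -> 2048-d hidden -> 3584-d LLM space.
```

The projected embedding is then concatenated with the text embedding between the <video_start> and <video_end> anchor tokens before being fed to the LLM, per Eq. (3).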