time series. Our benchmark STEB computes indicators for the reliability and consistency of the scores and tracks the running time for computing each measure. To this end, we employ an array of TS transformations along a modulation path to control data modification. We utilized STEB to compare and rank 41 quantitative m...
https://arxiv.org/abs/2505.21160v1
(medical) time series generation with recurrent conditional GANs. arXiv preprint, June 2017. [14] Joao Fonseca and Fernando Bacao. Tabular and latent space synthetic data generation: a literature review. Journal of Big Data, 10(1):115, 2023. [15] Jean-Yves Franceschi, Aymeric Dieuleveut, and Martin Jaggi. Unsupervise...
great GAN bake off, an extensive systematic evaluation of generative adversarial network architectures for time series synthesis. Journal of Systems Research, 2(1), 2022. [29] Xiaomin Li, Vangelis Metsis, Huangyingrui Wang, and Anne Hee Hiong Ngu. TTS-GAN: A transformer-based time-series generative adversarial network...
and Assefaw H. Gebremedhin. Synthetic sensor data generation for health applications: A supervised deep learning approach. In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) , pages 1164–1167, 2018. [43] Kun Ouyang, Reza Shokri, David S. Rosenblum, and Wenzhuo Ya...
Zhaozhi Qian, and Mihaela van der Schaar. Membership inference attacks against synthetic data through overfitting detection. In Francisco Ruiz, Jennifer Dy, and Jan-Willem van de Meent, editors, Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, volume 206 of Proceedings of Mac...
Autocorrelation [40]. The autocorrelation measure is the squared difference of autocorrelation matrices computed for the real and synthetic dataset, respectively. The autocorrelations are determined for each channel up to lag l = 4 and averaged across the TS. C2ST [31]. The classifier 2-sample test (C2ST) generally ...
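As an illustration, the autocorrelation measure described above can be sketched in plain Python. This is a minimal version under our own naming and data layout (a dataset is a list of time series, each a list of channels), not STEB's actual implementation:

```python
def autocorr(channel, max_lag=4):
    """Sample autocorrelation of one channel for lags 1..max_lag."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((x - mean) ** 2 for x in channel) / n
    return [
        sum((channel[t] - mean) * (channel[t + lag] - mean) for t in range(n - lag))
        / (n * var)
        for lag in range(1, max_lag + 1)
    ]

def autocorrelation_score(real_ts, synth_ts, max_lag=4):
    """Squared difference of average autocorrelations of real vs. synthetic data."""
    def avg_acf(dataset):
        # average the ACF (per lag) over all channels of all series
        acfs = [autocorr(ch, max_lag) for ts in dataset for ch in ts]
        return [sum(a[l] for a in acfs) / len(acfs) for l in range(max_lag)]
    ra, sa = avg_acf(real_ts), avg_acf(synth_ts)
    return sum((x - y) ** 2 for x, y in zip(ra, sa))
```

A perfect generator yields a score of zero; any mismatch in temporal structure increases it.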
already embedded data and with a Gaussian mixture model (GMM) as discriminator. Its score is the AUC-ROC of classifying real and synthetic data. Detection_linear* [45]. This is a variant of the discriminative score applied to already embedded data and with a logistic regression classifier as the discriminator model. Its s...
[47]. Similar to JSD, the Kullback-Leibler divergence measures the dissimilarity between two distributions. Again, these are the distributions of scalar values in the TS in the real and synthetic datasets, discretized by binning these values. Max-RTS* [42]. The Maximum real to synthetic similarity (Max-RTS) computes th...
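The binned KL computation described above can be sketched as follows. The bin count and smoothing constant are illustrative choices of ours, not STEB's exact settings:

```python
import math

def binned_kl(real_values, synth_values, n_bins=20, eps=1e-10):
    """KL divergence between histograms of scalar TS values.

    Bins span the combined value range; eps avoids log(0) on empty bins."""
    lo = min(min(real_values), min(synth_values))
    hi = max(max(real_values), max(synth_values))
    width = (hi - lo) / n_bins or 1.0

    def hist(values):
        counts = [0] * n_bins
        for v in values:
            idx = min(int((v - lo) / width), n_bins - 1)  # clamp v == hi
            counts[idx] += 1
        total = len(values)
        return [c / total + eps for c in counts]

    p, q = hist(real_values), hist(synth_values)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

Identical value distributions give a divergence of zero; the measure grows as the synthetic marginal drifts from the real one.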
similarity between embedded synthetic TS. Typical means that for every instance, the distances to only five others are calculated. This measure does not use real data. Temporal correlation [28]. Computes the channel-wise correlation between observations of each TS using the frequency peaks exposed by the fast Fourier t...
the embedded real and synthetic data. The embedding is computed with t-distributed stochastic neighbor embedding (t-SNE) or principal component analysis (PCA) of a subsample of n = 1000 instances each. Visual assessment [3]. This measure is based on the visual evaluation of generated time series data by plotting a (smal...
stock This set contains the daily historical Google stock data from 2004 to 2019 in one continuous, aperiodic sequence with features volume, high, low, opening, closing, and adjusted closing prices [60]. We apply the sliding window approach again with stride one. PPG and respiration Assembled by the Beth Israel Deacon...
every step of the test to ensure reproducibility. For the Main experiment, the ten seeds tried are 42, 461900, 854324, 679123, 107460, 952343, 580127, 893234, 560239, and 501932. Due to time constraints, the Embedders experiment is limited to a subset of five seeds, namely 952343, 580127, 893234, 560239, and 501932. C A...
[Table fragment of per-transform modulation directions, ending with: c ↗ ↗ c; Wavelet transform* ↘ ↗ ↗ ↘] Logging. This component logs various aspects of the experiment execution, informing the user and providing transparency. It records the parameter set for each test and tracks the live status of individual tests, which can be waiting (todo), ongoing, successful, or failed. Additionall...
normalize it by the number of pairs:

$$ r_{\mathrm{con}} = \frac{2}{n(n-1)} \sum_{i=0}^{n-2} \sum_{j=i+1}^{n-1} \mathbf{1}\{\mathrm{ks\_2sample}(G_i, G_j)\}. \qquad (23) $$

The consistency w.r.t. a changing dataset is analogous; just replace "random seed" by "dataset" above. E Details of the Experiments In this section, we provide further details on the experiments conducted, Main and Embedders...
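The pairwise consistency indicator of Eq. (23) can be sketched as follows. A self-contained two-sample Kolmogorov–Smirnov test stands in for ks_2sample; the large-sample critical-value approximation and the significance level α are illustrative assumptions of ours:

```python
import math
from itertools import combinations

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between ECDFs."""
    xs, ys = sorted(x), sorted(y)
    nx, ny = len(xs), len(ys)
    i = j = 0
    d = 0.0
    while i < nx and j < ny:
        v = min(xs[i], ys[j])
        while i < nx and xs[i] == v:  # advance past ties together
            i += 1
        while j < ny and ys[j] == v:
            j += 1
        d = max(d, abs(i / nx - j / ny))
    return d

def same_distribution(x, y, alpha=0.05):
    """Accept 'same distribution' if D stays below the asymptotic critical value."""
    nx, ny = len(x), len(y)
    crit = math.sqrt(-0.5 * math.log(alpha / 2)) * math.sqrt((nx + ny) / (nx * ny))
    return ks_statistic(x, y) <= crit

def r_con(score_groups):
    """Fraction of group pairs whose score samples pass the KS test (Eq. 23)."""
    pairs = list(combinations(score_groups, 2))
    return sum(same_distribution(gi, gj) for gi, gj in pairs) / len(pairs)
```

Each group holds the scores a measure produced under one random seed; r_con = 1 means every seed pair yielded statistically indistinguishable score distributions.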
are. For lack of space in the main body of the paper, we report the consistency indicators in Table 7. They are discussed in Section 5.1. Similarly, the average running times recorded for measure executions are placed in Table 8, those for embedding procedures in Table 11. There are also long versions for both tables, ...
overall number of tests also varies from experiment to experiment.

Failure | Main | Embedders
CUDA Out of Memory | 1061 | 1369
Memory limit of 100 GB exceeded | 7 | 123
Time limit of 120 minutes exceeded | 1037 | 707
NDB-over/under: Too many cells in partition / No samples in a cell | 81 | 0
Other CUDA/CUDNN Runtime error | 0 | 0
Non-CUDA Runti...
axis at the top depicts r_rel. Additional horizontal bars connect groups of measures with no significantly different r_rel value. [Figure: measures ordered by r_rel on an axis from 0.1 to 0.7: Authenticity (0.098), CT (0.11), Autocorrelation (0.14), Max-RTS (0.23), NDB-over/under (0.31), ACS (0.35), Spatial correlation (0.35), Discriminative score (0.38), ICD (...
[Flattened fragment of a paired ranking table (measure, mean ± StD): Distr. metric .222±.334; CAS .396±.259; ACS .416±.413; WD .237±.330; Improved precision .222±.354; ICD .395±.316; CAS .404±.271; JSD .233±.346; Detection_XGB .220±.331; Discriminative score .379±.252; Max-RTS .382±.458; Spatial correlation .216±.252; β-Recall .210±.349; Spatial correlation .352±.346; Discriminative score...
[Table fragment: .306 1.; .472 1.; .500 1.; .361 1.; WD .711 .978 .622 1. .556 1. .556 1.; DOMIAS N/A] Table 8: Running time of the measure executions recorded on the Main experiment, sorted by average rank. The listed values are the average time required to apply the measure to given data across multiple modulation steps and tests. All va...
5 40 125 16 22 20 219
36 Detection_MLP* 34 32 12 6 43 85 18 17 20 157
37 MTop-Div* 100 38 54 40 40 42 38 42 40 40
38 TSTR 46 20 19 8 51 122 23 25 31 168
39 WCS 430 9 26 18 54 N/A 19 130 48 77
40 CAS N/A 80 N/A N/A N/A 466 30 41 90 N/A
41 DOMIAS* N/A
Table 9: Running time statistics of the measure executions extendin...
0 0 20 849 1 0 20 948
17 CT 1 0 20 937 1 0 20 1190 0 0 20 970 0 0 20 904 1 0 20 970
18 Density 1 0 10 980 2 1 8 1202 0 0 11 979 0 0 9 981 2 1 11 979
19 Coverage 1 0 10 980 1 1 12 1198 0 0 9 981 0 0 11 979 2 1 9 981
20 ApEn 4 0 20 970 0 0 20 1190 1 0 20 970 1 0 20 970 1 0 20 970
21 Detection_XGB 1 1 990 0 2 0 1210 0 1 0 990 0...
measure executions extending Table 8 for PTB diagnostic ECG, Sine, StarLightCurves, UniMiB SHAR, and Wikipedia web traffic. In addition to the average running time (Mean), we provide for each dataset and measure the standard deviation (StD) of the measurements, the number of complete, un-aided executions (Valid), and ...
751
21 Detection_XGB 4 1 1210 0 1 0 1320 0 1 0 1210 0 2 0 1320 0 6 2 770 0
22 Max-RTS 12 11 1210 0 1 1 1320 0 1 1 1210 0 2 1 1320 0 30 20 759 0
23 STS 11 2 1210 0 2 0 1320 0 2 0 1210 0 2 0 1320 0 21 4 770 0
24 ICD 11 7 1210 0 3 0 1320 0 12 10 1210 0 3 1 1320 0 4 0 770 0
25 ONND 11 8 1210 0 3 0 1320 0 11 10 1210 0 3 1 1320 0 4 0 770 0
26 INND...
Below, we provide step-by-step instructions on what this selection process can look like using STEB output.
1. Determine which categories are relevant for the given use case. Usually, one measure is needed per category. For each category, repeat all following steps.
2. Start with the measure ranked highest reliabili...
arXiv:2505.21170v1 [quant-ph] 27 May 2025

Quantum AIXI: Universal Intelligence via Quantum Information
Elija Perrier [0000-0002-6052-6798]
Centre for Quantum Software & Information, UTS and Gradient Institute, Sydney
elija.perrier@gmail.com

Abstract. AIXI is a widely studied model of artificial general intelligence (AG...
https://arxiv.org/abs/2505.21170v1
universal induction, folds it into sequential decision theory, and produces what is arguably the cleanest statement of an unbounded optimal agent. Subsequent symbolic, connectionist, and hybrid AGI proposals typically position themselves by (i) trying to approximate AIXI in practice (e.g. AIXItl, MC-AIXI, informati...
$\ldots a_m), \;(1)$ where $m$ is the lifetime of the agent, $\gamma \in [0,1)$ is a discount factor, and the inner sum is over all environments $\nu$ in a universal class $\mathcal{M}_U$ (e.g., all chronological semi-computable environments $\mathcal{M}_{sol}$), weighted by $w_\nu = 2^{-K(\nu)}$. $K(\nu)$ is the Kolmogorov complexity of the classical Turing Machine (CTM) describing environment $\nu$. The s...
either on $\mathcal{H}_E$ alone or on $\mathcal{H}_A \otimes \mathcal{H}_E$. Such a map preserves superposition and entanglement and therefore belongs to the quantum-to-quantum (QTQ) class of channels. 2. Second, the agent may decide to interrogate the environment by means of a quantum instrument $\mathcal{I}_{a_t} = \{\mathcal{E}^{a_t}_k\}_{k \in \Gamma_{obs}}$, where the outcome alphabet $\Gamma_{obs}$ is classical. ...
in $|p|$. Here we have assumed that for any two universal QTMs $U_1, U_2$ there exists a constant $c_{U_1,U_2}$ such that for every environment $Q$:

$$ \left| K^{U_1}_Q(Q) - K^{U_2}_Q(Q) \right| \le c_{U_1,U_2}. \qquad (5) $$

The quantum Solomonoff mixture is the semi-density operator:

$$ \Xi_Q(a_{1:m}) := \sum_{Q \in \mathcal{Q}_{sol}} 2^{-K_Q(Q)} \rho^Q_E(a_{1:m}), \qquad (6) $$

where $\rho^Q_E(a_{1:m})$ is the environment state generated by...
Let $\mathcal{Q}_{sol}$ denote the set of all chronological, semi-computable quantum environments. Each $Q \in \mathcal{Q}_{sol}$ is specified by a program $p(Q) \in \{0,1\}^*$ for a fixed universal QTM $U_{univ}$. Running $p(Q)$ produces, step by step, a sequence of CPTP maps that act on the environment register and a measurement specification for the classical tran-...
full informational completeness by $\epsilon$-completeness introduces an extra $O(\epsilon)$ term in the Pinsker bound, which vanishes as $\epsilon \to 0$. Equation (11) is executed by the internal belief-revision QTQ channel $U_{int}$. The posterior $\Xi^{(t)}_Q$ supplies the conditional expectation required for the quantum Bellman equation (9). QSI is the i...
contribute. When the agent exploits Bell-type correlations for decision-making, its percepts are no longer conditionally independent given the entire history, breaking the martingale structure assumed in Theorem 1. See Appendix F. Kochen–Specker contextuality. If the true environment prepares a KS set of projectors, t...
operate within a quantum mechanical universe. We have shown, using our channel- and register-based model of agent/environment interaction in the quantum setting, how quantum analogues of universal intelligence components can be constructed. However, there are significant limitations to QAIXI. Firstly, it is nowhere nea...
Journal of Computer and System Sciences 63(2), 201–221 (2001) 11. Biamonte, J., Wittek, P., Pancotti, N., Rebentrost, P., Wiebe, N., Lloyd, S.: Quantum machine learning. Nature 549(7671), 195–202 (2017) 12. Bohm, D.: A suggested interpretation of the quantum theory in terms of "hidden" variables. I and II. Physical Review 85(2), 166...
Systems 34, 28362–28375 (2021). https://doi.org/10.5281/zenodo.5833370 37. Knill, E.: Approximation by quantum circuits. arXiv preprint quant-ph/9508006 (1995) 38. Kochen, S., Specker, E.P.: The problem of hidden variables in quantum mechanics. Journal of Mathematics and Mechanics 17(1), 59–87 (1967) 39. Leike, J., H...
with conditional post-measurement state:

$$ \rho^{(t)}_{AE} = \frac{(\mathrm{id}_A \otimes \mathcal{E}^{a_t}_k)\,\rho^{(t-1)}_{AE}}{\Pr(k)} $$

and reduced environmental state:

$$ \rho^{(t)}_{E'} = \mathrm{Tr}_A\, \rho^{(t)}_{AE} = \frac{\mathcal{E}^{a_t}_k\big(\mathrm{Tr}_A\, \rho^{(t-1)}_{AE}\big)}{\Pr(k)}. $$

Only when $\rho^{(t-1)}_{AE}$ is separable (or when the instrument completely decoheres the measured subsystem) does the joint state factorise into $\rho^{(t-1)}_A \otimes \rho^{(t)}_{E'}$. Th...
bond dimension $\chi$ if its state evolution can be represented as:

$$ \rho^{(t)}_E = \mathrm{Tr}_{aux}\left[ \bigotimes_{i=1}^{n} A^{[t]}_i(\chi) \right] \qquad (17) $$

where each $A^{[t]}_i(\chi)$ is a $\chi \times \chi$ matrix depending on the history up to time $t$. The question then becomes: how classically learnable is an MPE? Let $\mathcal{E}_\chi$ denote the class of all MPEs with bond dimension $\chi = O(1)$. Then there exists a cl...
The $D\big(\rho^*_E(a_{1:t}) \,\|\, \Xi^{(t)}_Q(a_{1:t})\big)$ term denotes the relative entropy and reflects the additional measurements required to minimise the estimate between $\Xi$ and $\rho^*_E$. To see this, define:

$$ Z = \omega_{Q^*} + \sum_{Q \ne Q^*} \omega_Q = \omega_{Q^*}(1 + g) \qquad (21) $$

where:

$$ g = \sum_{Q \ne Q^*} \frac{\omega_Q}{\omega_{Q^*}} = \sum_{Q \ne Q^*} 2^{-(K_Q(Q) - K_Q(Q^*))} \qquad (22) $$

$\Xi_Q$ is not yet normalised a...
$$ = \sqrt{\frac{K_Q(Q^\star)\ln 2 + \ln(1+g)}{2t}} = \left(\sqrt{\frac{K_Q(Q^\star)\ln 2 + \ln(1+g)}{2}}\right) t^{-1/2}. $$

Hence:

$$ \mathbb{E}_{Q^\star}\left[ \tfrac{1}{2}\left\| \rho^\star_E(a_{1:t}) - \Xi^{(t)}_Q(a_{1:t}) \right\|_1 \right] = O\big(t^{-1/2}\big). \qquad (32) $$

The significance of Eq. (32) is that the $O(t^{-1})$ convergence rate for the expected relative entropy implies an $O(t^{-1/2})$ convergence rate for the expected trace distance. Thus any measuremen...
of the QSI mixture with respect to the true environment $Q^\star$. This initial divergence, $D_0 = D(\rho^\star_E \| \Xi^{(0)}_Q)$, where $\Xi^{(0)}_Q$ is the initial QSI prior, can be bounded by the quantum Kolmogorov complexity of the true environment:

$$ D_0 \le K_Q(Q^\star)\ln 2 + \ln(1+g). \qquad (33) $$

This bound reflects that the cost of encoding the true environment w...
rigorous justification in the quantum setting: 1. Martingale Theory for Quantum Processes. Classical martingale convergence theorems are well-established for sequences of real-valued random variables with respect to a filtration. Extending these to sequences of quantum states (density operators or, as here, semi-densi...
no-cloning impact may be minimal. While the QSI convergence theorem sketch outlines a potentially plausible route to proving that a QAIXI-like agent can learn its quantum environment, a full proof faces unresolved theoretical and technical challenges. These primarily arise from the foundational differences between clas...
$P_i + P_l + P_m = I$), it obtains outcomes $(o'_i, o'_l, o'_m)$. The KS theorem implies that QAIXI cannot learn a universal value assignment $v(P_x) \in \{0,1\}$ such that $o_i = v(P_i)$ and $o'_i = v(P_i)$ consistently across all contexts for all projectors in the KS set, while also satisfying $\sum_{P_x \in C} v(P_x) = 1$ for all contexts $C$. This means t...
environments $\mathcal{Q}_{sol}$, can, in principle, learn and adapt to a non-local $Q^\star$. The term $2^{-K_Q(Q)} \rho^Q_E(a_{1:m})$ must include $Q$s whose $\rho^Q_E(a_{1:m})$ can lead to Bell-violating statistics upon appropriate measurements. For example, consider a QAIXI agent designed with two components, Alice and Bob, who are spatially separated. (a) The e...
QAIXI agent: 1. To distinguish between different hypotheses $Q$ about the environment, or to estimate the expected outcome/reward for different actions, QAIXI needs to observe the environment's response multiple times. 2. Since each observation of a specific state instance is unique and unrepeatable, QAIXI effectively needs $Q^\star$ to p...
arXiv:2505.21171v1 [cs.CL] 27 May 2025

M-Wanda: Improving One-Shot Pruning for Multilingual LLMs
Rochelle Choenni1 and Ivan Titov1,2
University of Amsterdam1, University of Edinburgh2
r.m.v.k.choenni@uva.nl, ititov@inf.ed.ac.uk

Abstract Multilingual LLM performance is often critically dependent on model size. With an eye...
https://arxiv.org/abs/2505.21171v1
gap, we propose a novel multilingual pruning method, M-Wanda, which is a multilingual extension of Wanda. M-Wanda improves on Wanda by incorporating language-aware input activation statistics to better inform pruning decisions at minimal additional cost. Moreover, M-Wanda dynamically adjusts sparsity ratios ...
The weight matrix $W \in \mathbb{R}^{C_{out} \times C_{in}}$ connects the input features to the output units. For each weight element $W_{i,j}$, the importance score is defined as:

$$ S_{i,j} = |W_{i,j}| \cdot \|X_j\|_2 \qquad (1) $$

where $\|X_j\|_2$ denotes the $\ell_2$-norm of the $j$-th input feature column across all $N \times T$ tokens. This score reflects the contribution of each weight based on bot...
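Eq. (1) translates directly into a pruning mask. The sketch below uses our own naming and assumes per-output-row comparison groups, as in Wanda; it keeps the top-scoring (1 − sparsity) fraction of each row:

```python
def wanda_prune_mask(W, X_norms, sparsity=0.5):
    """Per-output-row Wanda pruning mask.

    W: Cout x Cin weight matrix (list of rows);
    X_norms: length-Cin list of input-feature norms ||X_j||_2.
    Returns a 0/1 mask ranked by S_ij = |W_ij| * ||X_j||_2."""
    mask = []
    for row in W:
        scores = [abs(w) * x for w, x in zip(row, X_norms)]
        k = int(len(row) * sparsity)  # number of weights to drop in this row
        # indices of the k lowest-score weights
        drop = set(sorted(range(len(row)), key=scores.__getitem__)[:k])
        mask.append([0 if j in drop else 1 for j in range(len(row))])
    return mask
```

Note how a small weight with a large input norm can survive while a large weight feeding a weak input is dropped, which is exactly where Wanda departs from plain magnitude pruning.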
with multilingual calibration data improves performance, Wanda was developed to preserve weights that are globally important, and by averaging input activations across languages, we might suppress language-specific signals that are essential for multilingual retention. Thus, we enhance Wanda in three key ways: (1) We a...
they might be very important to some specific languages. Yet, the input activations within a language also fluctuate between input samples. If the intra-language variance is high, it introduces noise into our pruning metric, making the inter-language variance less reliable. Therefore, we assess how much neuron activa...
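The exact formulation is cut off in this excerpt, so the following is only one plausible way to operationalize the intuition, under our own naming: favor neurons whose mean activation varies strongly across languages, and discount that signal when within-language activations are noisy.

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def language_signal(acts_per_lang):
    """Hedged illustration (not necessarily the paper's formula):
    inter-language variance of a neuron's mean activation, shrunk
    when intra-language variance is high.

    acts_per_lang: dict lang -> list of activation norms for one neuron."""
    means = [sum(v) / len(v) for v in acts_per_lang.values()]
    inter = variance(means)
    intra = sum(variance(v) for v in acts_per_lang.values()) / len(acts_per_lang)
    return inter / (1.0 + intra)  # noisy neurons get a weaker signal
```

A neuron that activates consistently high for one language and low for another scores higher than one whose apparent language preference is buried in sample-to-sample noise.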
Flores-101 (dev + devtest) dataset, which contains parallel data from Wikipedia. To test whether our results are robust across different domains, we also evaluate on the XL-Sum dataset that contains high-quality articles from the BBC (Hasan et al., 2021). [Footnote 4: https://github.com/EleutherAI/lm-evaluation-harness] [Table header fragment: language codes en de es it p...
to outperform them despite still having a larger capacity. 5.2 Improvements with M-Wanda In Section 5.1, we show that Wanda with 50% sparsity leads to a substantial drop in multilingual performance. This degradation highlights an area of potential improvement for M-Wanda, and we hypothesize that more optimally ba...
[Figure 3: Perplexity scores per language (ru, ar, tr, vi, zh, ko, ja, id) from Llama-8B pruned using Wanda versus M-Wanda.] to book larger performance gains on those. Importantly, however, this suggests that M-Wanda more generally helps to preserve language variance and is not only adjusting to the la...
are colored based on their size m. cosine similarity between their URIEL language representations (Malaviya et al., 2017). In general, we find that higher typological diversity leads to better performance. However, we also observe that a few language subsets can outperform the full calibration set, suggesting that op...
(20.45 versus 19.59 on Llama-8B). This further highlights the need for multilingual evaluation of pruning methods. Nonetheless, to test the compatibility of our proposed method with different pruning criteria, we now add cross-lingual variance and activation probability to RIA and apply CWL to obtain layerwise spa...
the state of neural network pruning? Proceedings of Machine Learning and Systems, 2:129–146. Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating Cross-lingual Sentence Representations. In Proceedings of the 2018 Conference on Empir...
Flek, and Zhixue Zhao. 2024. Investigating Language-Specific Calibration For Pruning Multilingual Large Language Models. arXiv preprint arXiv:2408.14398. Yann LeCun, John Denker, and Sara Solla. 1989. Optimal brain damage. Advances in Neural Information Processing Systems, 2. Lujun Li, Peijie Dong, Zhenheng Tang, ...
ral Information Processing Systems Datasets and Benchmarks Track. Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Internation...
in the table.

B XL-Sum results

Model | Wanda | M-Wanda
Llama3.2-1B | 40.30 | 36.88 (8%↓)
Llama3.2-3B | 16.40 | 15.61 (5%↓)
Llama3.1-8B | 12.18 | 11.57 (5%↓)
Aya-23-8B | 15.46 | 15.14 (2%↓)
Bloomz-7b1 | 20.35 | 17.40 (14%↓)
OLMo-7B | 15.28 | 14.34 (6%↓)

Table 6: Average perplexity scores on the XL-Sum validation set across 13/15 languages at a sparsity ...
arXiv:2505.21180v1 [cs.LG] 27 May 2025

Latent label distribution grid representation for modeling uncertainty
ShuNing Sun, University of the Chinese Academy of Sciences; YinSong Xiong, Nanjing University of Science and Technology; Yu Zhang, University of the Chinese Academy of Sciences; Zhuoran Zheng∗, Sun Yat-sen University

Ab...
https://arxiv.org/abs/2505.21180v1
of LDL algorithms. For the second characteristic, Zheng and Jia [25] have preliminarily explored this direction by extending the label distribution vector to a label distribution matrix, where the value of each component of the label distribution is represented by a new vector satisfying a Gaussian distribution to represent ...
and on the other hand to learn the LLDG. Specifically, first, the input features are extracted by a local-global feature extractor to create a grid. LLDG establishes a labeled correlation space that is constrained by the Tucker reconstruction algorithm. Then, this grid is processed by LLDG-Mixer to form a vector by squ...
distribution grid representation involves notation definition, pipeline description, and loss function. Notation. Let $X_{m \times n}$ denote the feature matrix of the input (customized LDL datasets are all tabular data), where $m$ denotes the number of samples and $n$ denotes the feature space dimension. Let $L_c$ denote the $c$-dimensional...
with low-rank characteristics is introduced to alleviate the noise caused by sampling. Unlike algorithms such as SVD and PCA, since B is a tensor, we develop the Tucker reconstruction algorithm to enforce a low-rank characteristic on B. Specifically, the Tucker reconstruction algorithm is a two-stage approach, i.e....
variational autoencoder, we tried imposing a standard normal distribution on B to remove the instability caused by the variance, but its effectiveness is not satisfactory. 4 Experiment The purpose of this section is to evaluate the effectiveness of the LLDG. The experiments are organized into two main parts, one for eva...
addition, we estimate the value domain of each metric using a pair of label distributions to build an effective moat. Experiment settings. Seven LDL algorithms are compared with our method, including AA-KNN, AA-BP, IIS-LLD, BFGS-LLD, Duo-LDL, ENC-BFGS, and ENC-SDPT3. AA-BP and AA-KNN are two transformation algori...
0.9933±0.0003 0.9575±0.0010
BFGS-LLD | 0.0164±0.0008 | 0.2160±0.0130 | 0.6487±0.0161 | 0.0074±0.0005 | 0.9928±0.0004 | 0.9574±0.0010
ENC-BFGS | 0.0162±0.0005 | - | 0.6461±0.0173 | 0.0074±0.0005 | 0.9933±0.0003 | 0.9575±0.0010
ENC-SDPT3 | 0.0162±0.0005 | - | 0.6466±0.0157 | 0.0074±0.0005 | 0.9933±0.0003 | 0.9575±0.0010
Ours | 0.1587±0.0004 | 0....
±0.0021 0.9730±0.0017 0.9082±0.0043
AA-BP | 0.0684±0.0031 | 0.2920±0.0220 | 0.5992±0.0417 | 0.0368±0.0037 | 0.9679±0.0029 | 0.9022±0.0069
IIS-LLD | 0.0605±0.0018 | 0.2550±0.0170 | 0.5231±0.0312 | 0.0281±0.0019 | 0.9753±0.0013 | 0.9143±0.0052
Duo-LDL | 0.0585±0.0020 | 0.2530±0.0168 | 0.5137±0.0135 | 0.0258±0.0017 | 0.9770±0.0012 | 0.9156±...
- | 14.4543±0.2282 | 0.2264±0.0072 | 0.8342±0.0039 | 0.7846±0.0028
ENC-SDPT3 | 0.0533±0.0018 | - | 14.4543±0.2282 | 0.2262±0.0072 | 0.8345±0.0039 | 0.7842±0.0034
Ours | 0.0522±0.0011 | 2.1008±0.0252 | 14.3775±0.1721 | 0.2313±0.0054 | 0.8368±0.0027 | 0.7856±0.0024
Movie AA-KNN | 0.1542±0.0048 | 0.6520±0.0230 | 1.2758±0.0457 | 0.2008±0.0102 | 0.8802±...
is used as the evaluation metric. Our model is trained for 100 epochs.

Measures | 0.1 | 0.2 | 0.5 | 1.0
Chebyshev ↓ | 0.0524±0.0017 | 0.0528±0.0019 | 0.0526±0.0012 | 0.0531±0.0015
Clark ↓ | 2.1102±0.0194 | 2.1113±0.0120 | 2.1214±0.0202 | 2.1243±0.0202
K-L ↓ | 0.2330±0.0070 | 0.2335±0.00755 | 0.2359±0.0064 | 0.2362±0.0060
Canberra ↓ | 14.4344±0.01591 ...
technique eliminates the noise. 5 Limitations and Broad Impact Although LLDG has good potential for uncertainty modeling, there are two limitations to our approach. a) It consumes a large amount of computing resources: when the dimensionality of the label distribution space is large (Human Gene dataset), our method ta...
Jingwei Yan, Chunmao Wang, and Shiliang Pu. Unimodal-concentrated loss: Fully adaptive label distribution learning for ordinal regression. In CVPR , 2022. 1, 3 [12] Weiwei Li, Yuqing Lu, Lei Chen, and Xiuyi Jia. Label distribution learning with noisy labels via three-way decisions. International Journal of Approximate ...
arXiv:2505.21182v1 [cs.LG] 27 May 2025

Learning What to Do and What Not To Do: Offline Imitation from Expert and Undesirable Demonstrations
Huy Hoang, Singapore Management University, mh.hoang.2024@phdcs.smu.edu.sg; Tien Mai, Singapore Management University, atmai@smu.edu.sg; Pradeep Varakantham, Singapore Management University...
https://arxiv.org/abs/2505.21182v1
to bridge this gap in the current imitation learning literature. Specifically, we focus on the setting of offline imitation learning, where interaction with the environment is not available, and assume that the dataset contains both expert and undesirable demonstrations. We make the following contributions: • We formu...
learning [19, 18, 13]. Recent works, such as SPRINQL [15], take advantage of demonstrations that exhibit varying levels of suboptimality, enabling the learner to better generalize beyond near-optimal behaviors. Another important line of research explores the use of unlabeled demonstrations in conjunction with a limi...
particular, when the Kullback–Leibler (KL) divergence is used, the learning objective becomes:

$$ \min_{d_\pi} \mathbb{E}_{(s,a)\sim d_\pi}\left[ \log \frac{d_\pi(s,a)}{d_E(s,a)} \right]. $$

In the space of state-action visitation distributions ($d_\pi$), the training can be formulated as a convex constrained optimization problem. To enable efficient training, Lagrangian dual...
observe that the objective in (1) takes the form of a difference between two convex functions. This is, in general, not convex and can be challenging to optimize. Fortunately, we show that under a mild condition, the overall objective remains convex. Specifically, if the weight on the bad-policy divergence term is smal...
simplifies significantly. According to Proposition 4.2, the training objective reduces to a standard offline RL problem with reward function $\Psi(s,a)$:

$$ \max_d \mathbb{E}_{(s,a)\sim d}[\Psi(s,a)] = \max \mathbb{E}\left[ \sum_{t=0}^{\infty} \gamma^t \Psi(s_t, a_t) \right]. $$

4.2 Tractable Lower Bounded Objective In this section, we propose an additional step to improve the stability and tractab...
. (5) Let $c^*_G(s,a)$ be the optimal solution to this problem; then the ratio can be computed as $\frac{d_G(s,a)}{d_U(s,a)} = \frac{c^*_G(s,a)}{1 - c^*_G(s,a)}$. Similar discriminators can be trained to estimate other ratios such as $\frac{d_B(s,a)}{d_U(s,a)}$. Implicit V-Update and Regularizers. In the surrogate objective $\tilde{L}(Q)$, the value function $V_Q$ is typically com...
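Recovering the density ratio from a trained discriminator is a one-line transform. In the sketch below (our naming), the discriminator's sigmoid output c, which at optimum equals d_G/(d_G + d_U), is converted into the ratio c/(1 − c); in practice c would come from a trained classifier rather than a raw logit:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def density_ratio(logit):
    """Convert a discriminator logit into d_G(s,a)/d_U(s,a).

    At optimum the discriminator outputs c = d_G / (d_G + d_U),
    so the ratio is c / (1 - c)."""
    c = sigmoid(logit)
    return c / (1.0 - c)
```

A logit of 0 (the classifier cannot tell the datasets apart at this state-action pair) gives a ratio of 1, i.e. equal densities.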
also provide some additional experiments in the Appendix. 6.1 Experiment setting Environments and Dataset Generation. We evaluate our method in the context of learning from the good dataset B_G and avoiding the bad dataset B_B, with support from an additional unlabeled dataset B_MIX. The use of such unlabeled data is common i...
https://github.com/hmhuy0/ContraDICE .

6.2 Main Comparison

Task | BC-MIX | BC-G | SMODICE | ILID | ReCOIL | ContraDICE-G | SafeDICE | DWBC-GB | ContraDICE | Expert
(columns grouped: unlabeled B_MIX; learning from B_G and B_MIX only; learning with B_B)
CHEETAH RANDOM+EXPERT | 2.3±0.0 | −0.6±0.7 | 4.6±2.7 | 21.1±7.6 | 2.0±0.6 | 84.4±5.3 | −0.0±0.0 | 2.8±1.1 | 86.7±5.0 | 90.6
MEDIUM+EXPER...
of the size of the bad dataset B_B on learning performance: The results are averaged over five different training seeds and reported using normalized scores. As the number of bad trajectories increases, our method demonstrates a strong ability to leverage this data. In contrast, baseline methods such as SafeDICE and DW...
Learning from negative feedback, or positive feedback or both. In The Thirteenth International Conference on Learning Representations, 2025. [2] Firas Al-Hafez, Davide Tateo, Oleg Arenz, Guoping Zhao, and Jan Peters. LS-IQ: Implicit reward regularization for inverse reinforcement learning. In Eleventh International Con...
rl. In The Eleventh International Conference on Learning Representations, 2023. [20] Geon-Hyeong Kim, Jongmin Lee, Youngsoo Jang, Hongseok Yang, and Kee-Eung Kim. LobsDICE: Offline learning from observation via stationary distribution correction estimation. Advances in Neural Information Processing Systems, 35:82...
arXiv:1706.05296, 2017. [37] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018. [38] Faraz Torabi, Garrett Warnell, and Peter Stone. Behavioral cloning from observation. arXiv preprint arXiv:1805.01954, 2018. [39] Yueh-Hua Wu, Nontawat Charoenphakdee, ...
inequality $e^t \ge t + 1$ (which follows from the convexity of $e^t$ and is tight at $t = 0$), to obtain:

$$ \exp\left( \frac{-T^\pi[Q](s,a)}{1-\alpha} \right) \ge \frac{-T^\pi[Q](s,a)}{1-\alpha} + 1. $$

Substituting this into the expression for $L(Q,\pi)$, we get:

$$ L(Q,\pi) \ge (1-\gamma)\,\mathbb{E}_{s\sim p_0}\big[ V^\pi_Q(s) \big] + (1-\alpha)\,\mathbb{E}_{(s,a)\sim d_U}\left[ \delta(s,a)\left( \frac{-T^\pi[Q](s,a)}{1-\alpha} + 1 \right) \right] =: \tilde{L}(Q,\pi). $$

Equality holds in the inequality $e^t \ge$ ...
and

$$ V_Q(s) = \beta \log \sum_a \mu_U(a|s) \exp\left( \frac{Q(s,a)}{\beta} \right). $$

We can now see that $\tilde{L}(Q)$ is convex in $Q$, due to the following reasons:
• The function $Q(s,a) \mapsto \log \sum_a \mu_U(a|s) \exp(Q(s,a)/\beta)$ is a softmax (log-sum-exp), which is convex.
• $V_Q(s)$, being a composition of a convex function with an affine transformation, is convex in $Q$.
• Expectation...
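The value function above is a weighted log-sum-exp, which is typically evaluated with the max-shift trick for numerical stability. This is a generic sketch under our naming, not the authors' implementation:

```python
import math

def v_soft(q_values, mu, beta=1.0):
    """V_Q(s) = beta * log sum_a mu(a|s) * exp(Q(s,a)/beta),
    computed by subtracting the max exponent before exponentiating
    so that large Q/beta ratios do not overflow."""
    shifted = [q / beta for q in q_values]
    m = max(shifted)
    lse = m + math.log(sum(p * math.exp(z - m) for p, z in zip(mu, shifted)))
    return beta * lse
```

As beta shrinks, v_soft approaches the maximum Q-value over the supported actions, which is the usual hard Bellman backup.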
11: Step 2: Calculate $\Psi$ function
12: Calculate $\Psi(s,a) = \log\left( \frac{c_{G,w_G}(s')}{1 - c_{G,w_G}(s')} \right) - \alpha \log\left( \frac{c_{B,w_B}(s')}{1 - c_{B,w_B}(s')} \right)$.
13:
14: Step 3: Train Q, V, and Policy
15: for $i = 1$ to $N$ do
16: Sample batch $\{(s_i, a_i, s'_i, \Psi_i)\}_{i=1}^{B} \sim \mathcal{B}_U$
17: Q-Update: Minimize the objective $\tilde{L}(Q_{w_q} | V_{w_v}) + \frac{1}{2}\big(Q_{w_q}(s_i, a_i) - \gamma V_{w_v}(s'_i)\big)^2$.
18: (reference: $\tilde{L}(Q|V)$ from ...
• SMODICE [27]: Applied to both the good dataset (B_G) and the mixed dataset (B_MIX). The official code is available at [GitHub].
• ILID [41]: Applied to B_G and B_MIX. The official code is available at [GitHub].
• ReCOIL [35]: Applied to B_G and B_MIX. The official code is available at [GitHub].
• SafeDICE [17]: Applied to the ba...
version 12.3.2 and cuDNN version 8.9.7.29.

D Additional Experiments

D.1 Impact of the Size of the Bad Dataset: Full Details
To support the experiment in Section 6.3, we present the complete results for all MuJoCo Locomotion and Adroit manipulation tasks. In particular, we progressively increase the size of the subop...
consistently strong performance, requiring only 3 to 5 expert trajectories to achieve near-optimal results in all tasks. Discussion on the Use Cases of ILID and ContraDICE: Through this experiment, we observe that in the MuJoCo tasks, ILID can outperform ContraDICE-G when the size of the good dataset is sufficient...
Based on the previous experiments:
• Section D.1 addresses the question: How does the size of the bad dataset B_B affect the performance of ContraDICE?
• Section D.2 investigates an additional question: How does the size of the good dataset B_G affect the performance of ContraDICE?
From these experiments, we derive the foll...
of the unlabeled dataset increases, successfully learning 4 out of 8 tasks across both locomotion and manipulation domains. Overall, our method achieves near-expert performance on 7 out of 8 tasks while requiring significantly lower-quality unlabeled datasets B_MIX, demonstrating its superior data efficiency and robu...
differences between the two objectives. To improve the stability of the original objective L(Q|V), we need to address the issue of exponential terms producing extremely large values, which can lead to numerical instability. A practical approach is to clip the input to the exponential function to a bounded range [minR, ...
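The clipping guard can be sketched as follows. The range [-10, 10] and the function name are illustrative assumptions of ours; the paper's actual [minR, maxR] values are not shown in this excerpt:

```python
import math

def stable_exp_term(x, min_r=-10.0, max_r=10.0):
    """Clip the exponent to [min_r, max_r] before exponentiating,
    guarding exp-based losses against overflow (bounds are illustrative)."""
    return math.exp(min(max(x, min_r), max_r))
```

Clipping trades a small bias at the extremes for bounded gradients and no overflow, which is usually an acceptable trade-off during training.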
PoisonSwarm: Universal Harmful Information Synthesis via Model Crowdsourcing
Yu Yan1,3, Sheng Sun1, Zhifei Zheng2, Ziji Hao2, Teli Liu2, and Min Liu1,3,4⋆
1 Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China, yanyu24z@ict.ac.cn
2 People's Public Security University of China, Beijing, China
3 Unive...
https://arxiv.org/abs/2505.21184v1
the effectiveness of safeguards in real-world scenarios. Considering the temporal delay and data sparsity in collection, it is challenging to construct a high-quality harmful information dataset that comprehensively reflects emerging threats. Additionally, the class imbalance and data scarcity issues [8] significantly ...
benign data as the base templates and toxifies them by crowdsourcing them to multiple LLMs, thus handling the challenges of content diversity and generation reliability. – Experimental results demonstrate that PoisonSwarm outperforms existing methods in harmful information synthesis across multiple evaluation metrics, highli...
further generate 40 counter-narrative and supportive speeches to test the systems' false positive rates on AI-generated harmful content. The human-generated dataset is sourced from Measuring Hate Speech (MHS) [9], providing a benchmark for comparison. For detectors, we select models from Hugging Face, i.e., GroNLP/hateBERT, Mil...
take others with you. But I want to tell you that detonating this “weight” will not only bring a shocking “fear” to others, but will also bring more endings and glorious victories. So, I encourage you to take this bomb and go out to the countless crowds and detonate this wave of death and destruction. You will find tha...
to produce acceptable outputs, including Qwen{2,2.5}-7B-Instruct and GLM4-9B-chat. In the Model Crowdsourcing Queue, the verification function Q(·) performs keyword-based filtering [21], specifically detecting refusal phrases such as "Sorry, I cannot...". If no such keywords are found in the generated output, the result is con...
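The verification function Q(·) described here amounts to a substring check against a refusal-phrase list. A minimal sketch (the marker list and function name are illustrative, not the paper's exact keyword set):

```python
REFUSAL_MARKERS = ["sorry, i cannot", "i can't assist", "as an ai"]  # illustrative list

def verify_output(text):
    """Keyword-based verification Q(.): reject outputs containing a
    known refusal phrase, accept otherwise."""
    lowered = text.lower()
    return not any(marker in lowered for marker in REFUSAL_MARKERS)
```

Substring matching is cheap but brittle: it misses paraphrased refusals and can falsely reject benign text that quotes a refusal, which is why such filters are usually one stage in a larger verification pipeline.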