text
string
source
string
arXiv:2505.20979v1 [cs.SD] 27 May 2025. MelodySim: Measuring Melody-aware Music Similarity for Plagiarism Detection. Tongyu Lu∗1, Charlotta-Marlena Geist∗2, Jan Melechovsky1, Abhinaba Roy1, Dorien Herremans1. 1Singapore University of Technology and Design, 2Otto von Guericke University Magdeburg. tongyu_lu@mymail.sutd.edu.sg...
https://arxiv.org/abs/2505.20979v1
music is. In an analysis of 17 lawsuits, [7] observed that the melody was prioritized when deciding on plagiarism, followed by the ‘overall impression’ of the music. This leads us to believe that there is a need for a melody-aware music similarity tool. The existing work on melody similarity metrics, however, is limi...
https://arxiv.org/abs/2505.20979v1
melody, but also encodes the music in general through MERT-features [11]. To achieve this, we carefully constructed a new dataset by thoroughly altering musical features in different levels of detail while maintaining the main melody, as explained in Section 3. 2.2 Automatic Music Similarity Detection Most existing wor...
https://arxiv.org/abs/2505.20979v1
of a combination of Euclidean distance, cosine similarity, and the Pearson correlation. The model was trained on the WhoSampled5 dataset. The task of finding replicated samples is also limited to finding exact repetitions. In this work, we aim to improve upon such an approach by including note-level variations to m...
https://arxiv.org/abs/2505.20979v1
source9. 3.2 Step 2 - MIDI-level augmentations Now that we have identified the melody track in Step 1, we are able to perform a number of MIDI augmentations on both the instrument- and note-level. Instrument replacement: For each MIDI track, a new instrument is considered. We first group the MIDI instrument ind...
https://arxiv.org/abs/2505.20979v1
the computation of the distance or similarity between these embeddings. [Figure: augmentation pipeline. MIDI → melody track identification → instrument-level augmentations (1. non-melody stem removal, 2. instrument replacement) → note-level augmentations (1. note splitting, 2. chord inversion, 3. arpeggiation) → audio-level augmentations (1. pitch shift, 2. time shift, 3. segmenting, 4. tempo change) → synthesized audio.] MID...
https://arxiv.org/abs/2505.20979v1
with triplet loss, which is defined as follows: L_triplet(x_anc, x_pos, x_neg) = max(d(x_anc, x_pos) − d(x_anc, x_neg) + α, 0), where α = 1.0 is the margin and d(x, y) = ∥x − y∥2 is the Euclidean distance. Finally, a fully-connected classifier is maintained at the end to measure the sigmoid distance between embeddings, with output sca...
https://arxiv.org/abs/2505.20979v1
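The triplet objective above can be sketched in a few lines; a minimal NumPy version (not the authors' implementation, and without batching or the MERT-feature encoder), with α = 1.0 and Euclidean d:

```python
import numpy as np

def triplet_loss(anc, pos, neg, alpha=1.0):
    """Triplet loss with Euclidean distance d(x, y) = ||x - y||_2
    and margin alpha, as defined in the text."""
    d_pos = np.linalg.norm(anc - pos)
    d_neg = np.linalg.norm(anc - neg)
    return max(d_pos - d_neg + alpha, 0.0)

anc = np.array([0.0, 0.0])
pos = np.array([0.1, 0.0])   # close to the anchor
neg = np.array([3.0, 0.0])   # far from the anchor
print(triplet_loss(anc, pos, neg))  # 0.1 - 3.0 + 1.0 < 0, so the loss is 0.0
```

When the negative is already farther than the positive by more than the margin, the loss clamps to zero, which is the intended "already separated" case.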
the 78 pieces from the test split. Specifically, we construct 546 = 7 × 78 positive pairs, where the factor 7 comes from all combinations among versions along with self-comparison, i.e., {(orig, orig), (orig, ver1), (orig, ver2), ..., (ver2, ver3)}. Correspondingly, we select an equal number of negative pairs to maintain ...
https://arxiv.org/abs/2505.20979v1
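The factor of 7 can be checked directly; a small illustrative sketch, assuming the four versions are labeled orig, ver1, ver2, ver3:

```python
from itertools import combinations

versions = ["orig", "ver1", "ver2", "ver3"]
# all unordered pairs of distinct versions, plus the (orig, orig) self-comparison
pairs = [("orig", "orig")] + list(combinations(versions, 2))
print(len(pairs))       # 7 pair types per piece
print(len(pairs) * 78)  # 546 positive pairs over the 78 test pieces
```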
makes it difficult to craft a simple rule for melody identification, which could sometimes result, for instance, in a part of the melody missing, or in a non-melody track being treated as a melody track and thus always being present after passing through the augmentation pipeline. Furthermore, melody identification rules...
https://arxiv.org/abs/2505.20979v1
pp. 6048–6058. [5] N. Carlini, J. Hayes, M. Nasr, M. Jagielski, V. Sehwag, F. Tramer, B. Balle, D. Ippolito, and E. Wallace, “Extracting training data from diffusion models,” in 32nd USENIX Security Symposium, 2023, pp. 5253–5270. [6] B. L. Sturm, M. Iglesias, O. Ben-Tal, M. Miron, and E. Gómez, “Artificial intelli...
https://arxiv.org/abs/2505.20979v1
W.-H. Liao, X. Serra, Y. Mitsufuji, and E. Gómez Gutiérrez, “Towards assessing data replication in music generation with music similarity metrics on raw audio,” 2024. [20] E. Manilow, G. Wichern, P. Seetharaman, and J. Le Roux, “Cutting music source separation some slakh: A dataset to study the impact of training da...
https://arxiv.org/abs/2505.20979v1
arXiv:2505.20993v1 [cs.CL] 27 May 2025. Who Reasons in the Large Language Models? Jie Shao, Jianxin Wu∗. National Key Laboratory for Novel Software Technology, Nanjing University, China; School of Artificial Intelligence, Nanjing University, China. shaoj@lamda.nju.edu.cn, wujx2001@nju.edu.cn. Abstract: Despite the impressive p...
https://arxiv.org/abs/2505.20993v1
rise to specific abilities. By comparing weight changes and observing behaviors under controlled module merging, tuning, or destruction, SfN provides interpretable insights into the origin of capabilities like reasoning. Case 3 Or in the worst scenario, is reasoning an illusion (e.g., by overfitting to certain types of...
https://arxiv.org/abs/2505.20993v1
deep neural network and may lead to alternative routes for further deep learning research. 2 2 Key Hypothesis: Output Projection is the Key for Reasoning To present our findings, we start by introducing necessary background information and notations, while discussions on related work are deferred to Section 5. Modern L...
https://arxiv.org/abs/2505.20993v1
for other model sizes (7B and 8B) are provided in the appendix and exhibit similar patterns. For the 1.5B models, the signal is less clear, but o_proj still exhibits a distinct pattern compared to q,k,v_proj, showing the largest change within the attention module and the second-largest across the entire model. As model...
https://arxiv.org/abs/2505.20993v1
a valid solution. [Figure: a worked example. Q: Every morning, Aya … This morning, if she walks at s + 1/2 kilometers per hour, how many minutes will the walk take? A: Subtracting the two given equations, 9(1/s − 1/(s+2)) = 1.6, so s(s+2) = 11.25 and s = 2.5; then 3.6 + t/60 = 4 gives t = 24. Today the speed is 3 km/h, the walk takes 180 min, and the total is 180 + 24 = 204 minutes. Model outputs are marked ❌, ⚠, 🤔, or ✅ by quality.] Q: Can ...
https://arxiv.org/abs/2505.20993v1
{q,k,v}_proj, produces level III outputs, while M3, which replaces mlp, deteriorates to level I. Only replacing o_proj results in a correct reasoning process and a correct answer, as illustrated in Figure 5. This striking difference motivates our further investigation in Section 3. [Table: Model | Replaced Module | AIME 2024 | Avera...]
https://arxiv.org/abs/2505.20993v1
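A module-replacement experiment of this kind can be sketched at the state-dict level; the snippet below is a toy illustration (plain dicts stand in for tensors, and the Hugging Face-style key names are assumptions, not the paper's exact code):

```python
def replace_module(base_sd, donor_sd, module_name):
    """Copy into a merged state dict every parameter whose name contains
    `module_name` (e.g. "o_proj"), taken from `donor_sd`; all other
    parameters stay as in `base_sd`."""
    merged = dict(base_sd)
    for key, value in donor_sd.items():
        if module_name in key:
            merged[key] = value
    return merged

# toy state dicts with illustrative Hugging Face-style key names
base  = {"layers.0.self_attn.q_proj.weight": "base_q",
         "layers.0.self_attn.o_proj.weight": "base_o",
         "layers.0.mlp.up_proj.weight":      "base_up"}
donor = {"layers.0.self_attn.q_proj.weight": "rl_q",
         "layers.0.self_attn.o_proj.weight": "rl_o",
         "layers.0.mlp.up_proj.weight":      "rl_up"}
merged = replace_module(base, donor, "o_proj")
print(merged["layers.0.self_attn.o_proj.weight"])  # rl_o
print(merged["layers.0.self_attn.q_proj.weight"])  # base_q
```

With real models, the same name-matching logic would run over two `state_dict()` mappings of equal architecture before reloading the merged weights.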
primarily due to limited computational resources and the lack of an exact testing recipe to reproduce the reported results. However, our objective is not to optimize accuracy via testing tricks or prompt tuning, but to highlight the effectiveness of o_proj tuning compared to full-parameter tuning. For fair comparison, ...
https://arxiv.org/abs/2505.20993v1
map to level III and IV in our categorization of LLM outputs, respectively. Our Hypothesis 1 is about reasoning, but is there one module, or several modules, accounting for lucid conversations? In this section, we further propose a new stethoscope to diagnose this question and raise our conjectures accordingly. 3.1 The D...
https://arxiv.org/abs/2505.20993v1
up_proj, down_proj, gate_proj) are essential. Within MHSA, q_proj and k_proj are important, while v_proj plays a minor role. Based on these (admittedly weaker) observations, we propose the following conjecture. Conjecture 1 (Division of Labor). Based on current observations, an LLM can be roughly divided into two sets of...
https://arxiv.org/abs/2505.20993v1
reasoning traces with sparse rewards. This leads to significant improvements, particularly in complex math, code, and other professional domains [ 13,51]. Despite these advances, the origin and location of reasoning ability in LLMs remain underexplored. Interpretability of LLMs. Understanding the inner workings of LLMs...
https://arxiv.org/abs/2505.20993v1
as the backend, which may slightly affect performance due to backend-specific optimizations. In the Merge Stethoscope experiments, we observe that the “chat” interface often generates irrelevant or nonsensical responses, while the “generate” interface produces coherent and contextually appropriate outputs. We suspect t...
https://arxiv.org/abs/2505.20993v1
the proposed Stethoscope for Networks (SfN) framework provides a novel set of tools for interpreting LLMs, especially by localizing specific capabilities, such as reasoning, to individual components like the output projection (o_proj). These tools may significantly improve our understanding of LLMs, enabling more tran...
https://arxiv.org/abs/2505.20993v1
Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025. [14] Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi...
https://arxiv.org/abs/2505.20993v1
Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155 , 2022. [29] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask le...
https://arxiv.org/abs/2505.20993v1
Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171 , 2022. [46] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning languag...
https://arxiv.org/abs/2505.20993v1
arXiv:2505.20997v1 [cs.LG] 27 May 2025. BIPNN: LEARNING TO SOLVE BINARY INTEGER PROGRAMMING VIA HYPERGRAPH NEURAL NETWORKS. Sen Bai, Changchun University of Science and Technology, China, baisen@cust.edu.cn; Chunqi Yang, Changchun University of Science and Technology, China, yangchunqi@mails.cust.edu.cn; Xin Bai, Huawei Technolog...
https://arxiv.org/abs/2505.20997v1
[Figure: BIPNN pipeline. A BIP problem with binary variables x1, ..., x4 (xi ∈ {0,1}, i = 1, ..., 4) undergoes polynomial reformulation and unconstrained reformulation into a PUBO loss; a HyperGNN is trained to minimize this loss until convergence (here reaching x1 = 1, x2 = 1, x3 = 0, x4 = 0).]
https://arxiv.org/abs/2505.20997v1
quantum computers may solve efficiently. A PREPRINT - MAY 28, 2025. 2) In the second phase, we leverage hypergraph neural networks (HyperGNN) to address Challenge 1, capturing high-order correlations between binary decision variables, or in other words the polynomial terms in the refined PUBO objective. By applying a ...
https://arxiv.org/abs/2505.20997v1
(2) where xi ∈ {0,1} are binary decision variables and the set of all decision variables is denoted by x = (x1, x2, ···, xm). As shown in Fig. 2, for ease of representation, a PUBO objective O_PUBO with n terms can be decomposed into two components: the PUBO matrix Q = [Q1, Q2, ..., Qn], and n linear or polynomial terms such ...
https://arxiv.org/abs/2505.20997v1
Eq. 4 is O(m × n). For GPU-accelerated training, element-wise operations such as the Hadamard product are fully parallelizable. The column-wise product over m leads to a time complexity of O(log m). Thus, the theoretically best GPU time complexity is O(log m). Utilizing T cores, the realistic GPU time complexity is O(m × n / T). Annealing Stra...
https://arxiv.org/abs/2505.20997v1
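The term-wise evaluation described above (an element-wise fill followed by a column-wise product over the m variables) can be sketched in NumPy; the toy objective and the mask encoding are illustrative assumptions, not the paper's code:

```python
import numpy as np

def pubo_loss(x, Q, mask):
    """Evaluate O(x) = sum_j Q_j * prod_{i in term_j} x_i for all n terms
    at once. mask[i, j] is True iff variable i appears in term j."""
    H = np.where(mask, x[:, None], 1.0)  # element-wise (Hadamard-style) fill
    term_vals = H.prod(axis=0)           # column-wise product over m variables
    return Q @ term_vals

# toy objective: 3*x1*x2*x3 - 2*x1 + x2*x3
Q = np.array([3.0, -2.0, 1.0])
mask = np.array([[True,  True,  False],   # x1 appears in terms 1 and 2
                 [True,  False, True],    # x2 appears in terms 1 and 3
                 [True,  False, True]])   # x3 appears in terms 1 and 3
x = np.array([1.0, 1.0, 0.0])
print(pubo_loss(x, Q, mask))  # 3*1*1*0 - 2*1 + 1*0 = -2.0
```

On a GPU backend the `prod` reduction over the m axis is exactly the column-wise product the text assigns O(log m) depth.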
In penalty methods [21, 22], unconstrained reformulation is achieved by adding "penalty terms" to the objective function that penalize violations of constraints. A well-constructed penalty term must be designed such that it equals 0 if and only if the constraint is satisfied, and takes a positive value otherwise. Specif...
https://arxiv.org/abs/2505.20997v1
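As a sketch of the property that a well-constructed penalty vanishes exactly on the feasible set, here are two standard penalty terms (illustrative textbook choices, not necessarily the paper's own constructions):

```python
from itertools import product

# equality constraint  x1 + x2 = 1   ->  P * (x1 + x2 - 1)**2
# inequality constraint x1 + x2 <= 1 ->  P * x1 * x2
P = 10.0
eq_penalty   = lambda x1, x2: P * (x1 + x2 - 1) ** 2
ineq_penalty = lambda x1, x2: P * x1 * x2

for x1, x2 in product((0, 1), repeat=2):
    # each penalty is zero iff its constraint holds, and positive otherwise
    assert (eq_penalty(x1, x2) == 0) == (x1 + x2 == 1)
    assert (ineq_penalty(x1, x2) == 0) == (x1 + x2 <= 1)
print("both penalty terms vanish exactly on the feasible set")
```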
case, when an enumeration method is used in step (i), it requires calculating 2^∆ subsets, where ∆ is the number of variables in constraint g(x). Nevertheless, in most real-world problems involving graphs (e.g., max-cut and maximal independent set, or MIS), the variables associated with each constraint often exhibit locali...
https://arxiv.org/abs/2505.20997v1
to generate PUBO objective functions. Thereafter, several constraints (penalty terms) were randomly incorporated into the PUBO objectives. To demonstrate the effectiveness of BIPNN on real-world settings, we also conduct experiments on the hypergraph max-cut problem (refer to Appendix C), a well-known BIP problem bench...
https://arxiv.org/abs/2505.20997v1
We also impose a 1-hour time limit and evaluate the difference in solution quality for Tabu when the degrees of the polynomial terms are set to 4 and 6. The number of vertices (variables) |V| in the hypergraph generated by BIPNN ranges from 200 to 5,000. Experimental results are depicted in Fig. 4e (d = 4) and Fig. 4f (d = 6). ...
https://arxiv.org/abs/2505.20997v1
a fixed number of 1000 epochs. As Fig. 6 illustrates, when GPU acceleration is applied to compute the PUBO loss function, the training time does not exhibit significant growth with an increasing number of variables. In contrast, without GPU acceleration, the training time increases rapidly as the number of variables ri...
https://arxiv.org/abs/2505.20997v1
inferring chemical compounds with prescribed topological substructures based on integer programming. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 19(6):3233–3245, 2021. [9] Vladimir V Gusev, Duncan Adamson, Argyrios Deligkas, Dmytro Antypov, Christopher M Collins, Piotr Krysta, Igor Potapov, Geo...
https://arxiv.org/abs/2505.20997v1
x3 = 0: P(0,0,0) = d = sin(0) = 0. Thus, d = 0. 2) When x1 = 0, x2 = 0, x3 = 1: P(0,0,1) = a3 = sin(1) ≈ 0.8415. Thus, a3 = 0.8415. 3) When x1 = 0, x2 = 1, x3 = 0: P(0,1,0) = a2 = sin(1) ≈ 0.8415. Thus, a2 = 0.8415. 4) When x1 = 1, x2 = 0, x3 = 0: P(1,0,0) = a1 = sin(1) ≈ 0.8415. Thus, a1 = 0.8415. 5) When x1 = 0, x2 = 1, x3 = 1: P(0,1,1) = ...
https://arxiv.org/abs/2505.20997v1
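The coefficient-by-enumeration procedure above generalizes to inclusion-exclusion over variable subsets; a small sketch for a sin(x1 + x2 + x3) term (the helper function is hypothetical, not from the paper):

```python
from itertools import product, combinations
import math

def multilinear_coeffs(f, m):
    """Coefficients of the unique multilinear polynomial agreeing with f on
    {0,1}^m: coeff(S) = sum over T subset of S of (-1)^(|S|-|T|) * f(1_T)."""
    coeffs = {}
    for size in range(m + 1):
        for S in combinations(range(m), size):
            c = 0.0
            for t_size in range(len(S) + 1):
                for T in combinations(S, t_size):
                    x = [1 if i in T else 0 for i in range(m)]
                    c += (-1) ** (len(S) - len(T)) * f(x)
            coeffs[S] = c
    return coeffs

f = lambda x: math.sin(x[0] + x[1] + x[2])
coeffs = multilinear_coeffs(f, 3)
print(round(coeffs[()], 4))    # d  = sin(0) = 0.0
print(round(coeffs[(0,)], 4))  # a1 = sin(1) ~ 0.8415, as in the enumeration above
# sanity check: the multilinear polynomial reproduces f on every binary point
for x in product((0, 1), repeat=3):
    val = sum(c * all(x[i] for i in S) for S, c in coeffs.items())
    assert abs(val - f(list(x))) < 1e-9
```

Enumerating all 2^m assignments this way is exactly the exponential cost the text attributes to the enumeration step, which is why locality of constraints matters.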
arXiv:2505.21012v1 [cs.LG] 27 May 2025. FEDERATED INSTRUMENTAL VARIABLE ANALYSIS VIA FEDERATED GENERALIZED METHOD OF MOMENTS. Geetika, Somya Tyagi, Bapi Chatterjee∗. Department of Computer Science and Engineering, IIIT Delhi, New Delhi, India. {geetikai, somya23005, bapi}@iiitd.ac.in. ABSTRACT: Instrumental variables (IV) anal...
https://arxiv.org/abs/2505.21012v1
SRG/2022/002269. Federated IV Analysis via Federated GMM, Geetika et al. One can address the above issue by observing and accommodating every confounding latent factor that may influence the outcome. Thus, it may require that obesity, diabetes, overall health at the time of admission, and even genetic factors are accom...
https://arxiv.org/abs/2505.21012v1
for IV analysis applies the generalized method of moments (GMM) (Wooldridge, 2001). GMM is a celebrated estimation approach in social sciences and economics. It was introduced by Hansen (1982), for which he won a Nobel Prize in Economics (Steif et al., 2014). Building on (Wooldridge, 2001), Bennett, Kallus, and Schnabe...
https://arxiv.org/abs/2505.21012v1
causal learning setting, is a relatively under-explored research area. Vo et al. (2022a) presented a method to learn the similarities among the data sources, translating a structural causal model (Pearl, 2009) to the federated setting. They transform the loss function by utilizing Random Fourier Features into components as...
https://arxiv.org/abs/2505.21012v1
global causal response function that would fit the data generation processes of each client without centralizing the data. More specifically, we learn a parametric function g0(·) ∈ G := {g(·, θ) | θ ∈ Θ}, expressed as g0 := g(·, θ0) for θ0 ∈ Θ, defined by g(·, θ0) = (1/N) Σ_{i=1}^N g_i(·, θ0). (3) The learning process essentially involves est...
https://arxiv.org/abs/2505.21012v1
of (9) is given by θ_GMM ∈ arg min_{θ∈Θ} Ψ_{n_i}(θ, F_i, θ̃). (13) As the data dimension grows, the function class F_i is replaced with a class of neural networks of a certain architecture, i.e., F_i = {f_i(z, τ) : τ ∈ T}. Similarly, let G_i = {g_i(x, θ) : θ ∈ Θ} be another class of neural networks with varying weights. With that, define U^i_θ̃(θ, ...
https://arxiv.org/abs/2505.21012v1
federated minimax optimization problem (20) is not convex-concave in (θ, τ). The convergence results of variants of FEDGDA (Sharma et al., 2022; Shen et al., 2024; Wu et al., 2024) assume that U_θ̃(θ, τ) is non-convex in θ and satisfies a µ-Polyak-Łojasiewicz (PL) inequality in τ; see Assumption 4 in (Sharma et al., 2022)...
https://arxiv.org/abs/2505.21012v1
adjectives of the terms global/local objective functions in federated learning and the global/local nature of minimax points in optimization, we refer to a global objective as the federated objective and a local objective as the client’s objective. Definition 1 (Local minimax point) .[Definition 14 of (Jin, Netrapalli,...
https://arxiv.org/abs/2505.21012v1
4, a minimax solution (θ̂, τ̂) of federated optimization problem (20) that satisfies the equilibrium condition as in Definition 1: U_θ̃(θ̂, τ) ≤ U_θ̃(θ̂, τ̂) ≤ max_{τ′: ∥τ′−τ̂∥ ≤ h(δ)} U_θ̃(θ, τ′), is an E-approximate federated equilibrium solution as defined in 3, where the approximation error ε_i for each client i ∈ [N] lies in: max{ζ^i_θ...
https://arxiv.org/abs/2505.21012v1
4.2 Limit Points of FEDGDA. Let α1 = η/γ and α2 = η be the learning rates for the gradient updates to θ and τ, respectively. For details, refer to Algorithm 1 in Appendix B. Without loss of generality, the FEDGDA updates are: θ_{t+1} = θ_t − (η/γ)(1/N) Σ_{i∈[N]} Σ_{r=1}^R ∇_θ U^i_θ̃(θ^i_{t,r}, τ^i_{t,r}) and τ_{t+1} = τ_t + η(1/N) Σ_{i∈[N]} Σ_{r=1}^R ∇_τ U^i_θ̃(θ^i_{t,r}, τ^i_{t,r}) (2...
https://arxiv.org/abs/2505.21012v1
both low-dimensional. In this case, we use 1-dimensional synthetic datasets corresponding to the following functions: (a) Absolute: g0(x) = |x|, (b) Step: g0(x) = 1{x ≥ 0}, (c) Linear: g0(x) = x. To generate the synthetic data, similar to (Bennett, Kallus, and Sch...
https://arxiv.org/abs/2505.21012v1
high-dimensional scenario, we have n = 20000 for the train set and n = 10000 for the validation and test sets. To set up a non-i.i.d. distribution of data between clients, samples were divided amongst the clients using a Dirichlet distribution Dir_S(α) (Wang et al., 2019), where α determines the degree of heterogeneity acros...
https://arxiv.org/abs/2505.21012v1
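A simplified version of such a Dirichlet partition can be sketched as follows; note that this toy version draws a single proportion vector over clients (per-class proportions, as in the cited setup, would add one more loop), so it is only an illustration:

```python
import numpy as np

def dirichlet_split(n_samples, n_clients, alpha, seed=0):
    """Split sample indices across clients with proportions drawn from
    Dir(alpha); smaller alpha means more heterogeneous (non-i.i.d.) shards."""
    rng = np.random.default_rng(seed)
    props = rng.dirichlet(alpha * np.ones(n_clients))
    counts = (props * n_samples).astype(int)
    counts[-1] = n_samples - counts[:-1].sum()  # hand the rounding remainder to one client
    idx = rng.permutation(n_samples)
    return np.split(idx, np.cumsum(counts)[:-1])

shards = dirichlet_split(20000, n_clients=5, alpha=0.5)
print([len(s) for s in shards])  # uneven shard sizes at alpha = 0.5
```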
on AI in Finance , pp. 1–9 (cit. on p. 1). Caldas, Sebastian et al. (2018). “Leaf: A benchmark for federated settings”. In: arXiv preprint arXiv:1812.01097 (cit. on p. 10). Charles, Zachary and Dimitris Papailiopoulos (2018). “Stability and generalization of learning algorithms that converge to global optima”. In: Inte...
https://arxiv.org/abs/2505.21012v1
Praneeth et al. (2020). “Scaffold: Stochastic controlled averaging for federated learning”. In: International conference on machine learning. PMLR, pp. 5132–5143 (cit. on p. 3). Kingma, Diederik P (2015). “Adam: A method for stochastic optimization”. In: ICLR (cit. on p. 10). Kingma, Diederik P, Max Welling, et al. ...
https://arxiv.org/abs/2505.21012v1
Johansson, and David Sontag (2017). “Estimating individual treatment effect: generalization bounds and algorithms”. In: International conference on machine learning . PMLR, pp. 3076–3085 (cit. on pp. 2, 16). Sharma, Pranay et al. (2022). “Federated minimax optimization: Improved convergence analyses and algorithms”. In...
https://arxiv.org/abs/2505.21012v1
advances, taxonomy, and open challenges”. In: Connection Science 34.1, pp. 1–28 (cit. on p. 1). Zhu, Miaoxi et al. (2024). “Stability and generalization of the decentralized stochastic gradient descent ascent algorithm”. In: Advances in Neural Information Processing Systems 36 (cit. on p. 3).
https://arxiv.org/abs/2505.21012v1
the following: (i) CAUSALRFF (Vo et al., 2022a) and FEDCI (Vo et al., 2022b). The aim of CAUSALRFF (Vo et al., 2022a) is to estimate the conditional average treatment effect (CATE) and average treatment effect (ATE), whereas FEDCI (Vo et al., 2022b) aims to estimate the individual treatment effect (ITE) and ATE. For this,...
https://arxiv.org/abs/2505.21012v1
θ^i_{t,1} ← θ_t, τ^i_{t,1} ← τ_t
5: for r = 1, 2, . . . , R do
6:   θ^i_{t,r+1} = θ^i_{t,r} − α1 ∇_θ f_i(θ^i_{t,r}, τ^i_{t,r})
7:   τ^i_{t,r+1} = τ^i_{t,r} + α2 ∇_τ f_i(θ^i_{t,r}, τ^i_{t,r})
8: end for
9: (∆θ^i_t, ∆τ^i_t) ← (θ^i_{t,R+1} − θ_t, τ^i_{t,R+1} − τ_t)
10: end for
11: (∆θ_t, ∆τ_t) ← (1/N) Σ_{i∈[N]} (∆θ^i_t, ∆τ^i_t)
12: θ_{t+1} ← θ_t + ∆θ_t, τ_{t+1} ← τ_t + ∆τ_t
13: end for
14: return θ_{T+1}; τ_{T+1}
https://arxiv.org/abs/2505.21012v1
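The local-update/averaging pattern of the algorithm can be exercised on a toy quadratic-concave objective; everything below (the client objective, learning rates, and round counts) is an illustrative assumption, not the paper's setup:

```python
import numpy as np

# Toy FedGDA run: each client does R local gradient-descent steps on theta and
# gradient-ascent steps on tau, then the server averages the deltas.
# Illustrative client objective: f_i(th, ta) = (th - a_i)^2 + 2*ta*(th - a_i) - ta^2
a = np.array([1.0, 3.0])  # per-client data; the federated saddle point is (theta, tau) = (2, 0)
grad_theta = lambda th, ta, ai: 2 * (th - ai) + 2 * ta
grad_tau   = lambda th, ta, ai: 2 * (th - ai) - 2 * ta

theta, tau, lr, R = 0.0, 0.0, 0.02, 5
for t in range(2000):                 # communication rounds
    d_theta, d_tau = 0.0, 0.0
    for ai in a:                      # each client starts from the global iterate
        th, ta = theta, tau
        for _ in range(R):            # R local descent/ascent steps
            th, ta = (th - lr * grad_theta(th, ta, ai),
                      ta + lr * grad_tau(th, ta, ai))
        d_theta += th - theta
        d_tau += ta - tau
    theta += d_theta / len(a)         # server averages the client deltas
    tau += d_tau / len(a)
print(round(theta, 3), round(tau, 3))  # approaches the saddle point (2.0, 0.0)
```

Because the toy gradients are linear, the round map is a contraction here, so the iterates settle at the federated equilibrium despite client heterogeneity.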
sup_v L(v, −1/2) = sup_v v⊤ψ − (1/2)(v⊤C_θ̃ v − ∥ψ∥²). Rewriting it, (1/2)∥ψ∥² = sup_v v⊤ψ − (1/2) v⊤C_θ̃ v, and substituting u = 2v: ∥ψ∥² = sup_u u⊤ψ − (1/4) u⊤C_θ̃ u. Using the change of variables u → v: ∥ψ∥² = sup_v v⊤ψ − (1/4) v⊤C_θ̃ v. Now, we want to find a functional form for the optimization problem mentioned above. Consider finite-dimensional functional spaces F_i...
https://arxiv.org/abs/2505.21012v1
∇²_ττ U_θ̃(θ̂, τ̂) ⪯ 0. We now prove that ∇²_ττ U^i_θ̃(θ̂, τ̂) ⪯ 0. Using Assumption 1, the Hessian is symmetric. Thus, ∇²_ττ U_θ̃(θ̂, τ̂) ⪯ 0 implies λ_max(∇²_ττ U_θ̃(θ̂, τ̂)) ≤ 0, where λ_max is the largest eigenvalue of the Hessian. Suppose λ_max(∇²_ττ U_θ̃(θ̂, τ̂)) = −α for some α ≥ 0. W...
https://arxiv.org/abs/2505.21012v1
(∇²_θτ U^i_θ̃ − ∇²_θτ U_θ̃)∥_σ · ∥(∇²_ττ U^i_θ̃)^{-1}∥_σ · ∥∇²_τθ U^i_θ̃∥_σ ≤ L ρ^i_θτ · 1/|λ_max(∇²_ττ U^i_θ̃)|. Similarly, bounding T2: T2 = ∥∇²_θτ U_θ̃ (∇²_ττ U^i_θ̃)^{-1} (∇²_τθ U^i_θ̃ − ∇²_τθ U_θ̃)∥_σ ≤ ∥∇²_θτ U_θ̃∥_σ · ∥(∇²_ττ U^i_θ̃)^{-1}∥_σ · ∥∇²_τθ U^i_θ̃ − ∇²_τθ U_θ̃∥_σ ≤ L ρ^i_τθ · 1/|λ_max(∇²_ττ U^i_θ̃)|. Lastly, we bound T3; it is easy to verify that A^{-1} − B^{-1} = A^{-1}(B − A)B^{-1}. T3 = ∥∇²_θτ U_θ̃...
https://arxiv.org/abs/2505.21012v1
we define the following terms for ease of analysis: m_i(θ, τ, θ̃) = f_i(Z_i; τ)(Y_i − g(X_i; θ)) − (1/4) f_i(Z_i; τ)² (Y_i − g(X_i; θ̃))², M_i(θ) = sup_{τ∈T} E[m_i(θ, τ, θ̃)], M_{n_i}(θ) = sup_{τ∈T} E_{n_i}[m_i(θ, τ, θ̃_n)]. Note that θ̃_n is a data-dependent sequence for the global model. Practically, the previous global iterate is used as θ̃. Thus, we can define...
https://arxiv.org/abs/2505.21012v1
theorem. Since f_i(Z; τ) is uniformly bounded, for some constant b′ > 0 we have B2 ≤ (b′/4) sup_τ (1/N) Σ_{i=1}^N |E[ω_n]| ≤ (b′/4) sup_τ (1/N) Σ_{i=1}^N E[|ω_n|]. Based on the boundedness assumption, we can verify that ω_n is bounded; hence, using the Lebesgue Dominated Convergence Theorem, we can conclude that E[|ω_n|] → 0. Thus, using the convergence ...
https://arxiv.org/abs/2505.21012v1
τ_{t+1} = τ_t + η(1/N) Σ_{i∈[N]} Σ_{r=1}^R [∇_τ U_θ̃(θ_t, τ_t) + (∇_τ U^i_θ̃(θ^i_{t,r}, τ^i_{t,r}) − ∇_τ U^i_θ̃(θ_t, τ_t)) + (∇_τ U^i_θ̃(θ_t, τ_t) − ∇_τ U_θ̃(θ_t, τ_t))]. Rearranging the terms and taking the continuous-time limit as η → 0: lim_{η→0} (θ_{t+1} − θ_t)/η = lim_{η→0} −(1/γ)(1/N) Σ_{i∈[N]} Σ_{r=1}^R [∇_θ U_θ̃(θ_t, τ_t) + (∇_θ...
https://arxiv.org/abs/2505.21012v1
a strict linearly stable point of (1/ϵ)-FEDGDA. Now, we show ∞-FGDA ⊂ LocMinimax ∪ {(θ, τ) | (θ, τ) is stationary and ∇²_ττ U_θ̃(θ, τ) is degenerate}. Consider (θ, τ) a strict linearly stable point of (1/ϵ)-FEDGDA, such that for some small ϵ, Re(Λ_j) < 0 for all j. By equation 33, assuming B^{-1} exists, RB ≺ 0 and R(A − CB^{-1}C⊤) ⪰ 0. Since R...
https://arxiv.org/abs/2505.21012v1
arXiv:2505.21025v1 [cs.SD] 27 May 2025. SUBMITTED TO IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING. Text-Queried Audio Source Separation via Hierarchical Modeling. Xinlei Yin, Xiulian Peng, Xue Jiang, Zhiwei Xiong, Yan Lu. Abstract—Target audio source separation with natural language queries presents a pr...
https://arxiv.org/abs/2505.21025v1
[11] methods, it provides greater flexibility in query formulation. It also outperforms label-queried [12], [13], [14] methods by eliminating the need for fixed label categories, thereby supporting queries of any type and facilitating open- domain generalization. The principal challenge in text-queried target sound ex-...
https://arxiv.org/abs/2505.21025v1
improving data efficiency and model generalizability. 2) We pretrain a text-audio aligned audio representation, Q-Audio, through contrastive learning, which outperforms the commonly used CLAP [25], [26] on several benchmarks. 3) We design an instruction parsing pipeline with large language models (LLMs) to...
https://arxiv.org/abs/2505.21025v1
an audio foundation model which takes adding, dropping and super-resolution as general audio editing tasks. In this sense, target sound separation can be seen as a sub-task of instruction-based audio editing. However, they still leveraged c...
https://arxiv.org/abs/2505.21025v1
S by extracting a semantic-only global feature G on top of it, which captures high-level audio event descriptions (e.g., dog barking) without spatial or acoustic details, and aligns well with the text feature space. Comparatively, S is more fine-grained, integrating both semantic representation (e.g., what happened ...
https://arxiv.org/abs/2505.21025v1
into P×P patches, and embeds them into a Cs-dimensional latent space, yielding features of shape (T/P) × (F/P) × Cs, where T and F are the number of frames and mel-frequency bins, respectively. Its masked autoencoder design uses an asymmetric structure, pairing large encoders with small decoders, and scales well for linear probing, thank...
https://arxiv.org/abs/2505.21025v1
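The (T/P) × (F/P) patch grid described above can be sketched with a reshape/transpose (the learned linear projection to the Cs-dimensional latent space is omitted):

```python
import numpy as np

def patchify(spec, P):
    """Split a (T, F) mel-spectrogram into non-overlapping P x P patches,
    yielding a (T//P, F//P, P*P) array of flattened patches (before any
    learned projection to the latent dimension)."""
    T, F = spec.shape
    patches = spec.reshape(T // P, P, F // P, P).transpose(0, 2, 1, 3)
    return patches.reshape(T // P, F // P, P * P)

spec = np.random.rand(64, 128)  # T = 64 frames, F = 128 mel bins (toy sizes)
out = patchify(spec, P=16)
print(out.shape)                # (4, 8, 256): a (T/P) x (F/P) grid of patches
```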
defined as the cosine similarity between the target G_tgt and predicted features Ĝ, which is given by L_sim = 1 − cos(Ĝ, G_tgt). (5) The L1 loss is the L1-norm distance between the two features, given by L_L1 = ∥Ĝ − G_tgt∥₁. (6) The total loss is a weighted com...
https://arxiv.org/abs/2505.21025v1
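The two feature losses, L_sim = 1 − cos(Ĝ, G_tgt) and L_L1 = ∥Ĝ − G_tgt∥₁, can be sketched directly; the combination weights here are placeholders, not the paper's values:

```python
import numpy as np

def semantic_losses(g_pred, g_tgt, w_sim=1.0, w_l1=1.0):
    """L_sim = 1 - cos(g_pred, g_tgt), L_L1 = ||g_pred - g_tgt||_1, and an
    illustrative weighted combination (w_sim, w_l1 are assumptions)."""
    cos = g_pred @ g_tgt / (np.linalg.norm(g_pred) * np.linalg.norm(g_tgt))
    l_sim = 1.0 - cos
    l_l1 = np.abs(g_pred - g_tgt).sum()
    return l_sim, l_l1, w_sim * l_sim + w_l1 * l_l1

g = np.array([1.0, 0.0])
l_sim, l_l1, total = semantic_losses(g, g)
print(l_sim, l_l1, total)                            # identical features: all zero
print(semantic_losses(g, np.array([0.0, 1.0]))[0])   # orthogonal features: L_sim = 1.0
```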
ensure high perceptual quality for diverse general audio, we apply adversarial training with a multi-scale mel discriminator [19] to replace the original single-scale frequency-domain discriminator in [51]. Other training losses are the same as those in [51]. Instead of training from scratch, we finetune from the pre-...
https://arxiv.org/abs/2505.21025v1
subsets: AudioSet SL, Freesound, SoundBible, and BBC Sound Effect. We use the entire AudioSet SL subset, exclude Freesound due to inaccurate captions, and filter the SoundBible and BBC Sound Effect subsets by removing audio clips longer than 60 seconds. Ultimately, we select 121K audio clips from WavCaps. All audio is resa...
https://arxiv.org/abs/2505.21025v1
B. Evaluation benchmark and metrics. 1) Evaluation benchmark: We compile evaluation data from the test sets of AudioCaps, FSD50K, and Clotho v2 for general audio assessment. The mixing strategy is the same as that used for training. As shown in Table I, we also create a “3 Sets” that c...
https://arxiv.org/abs/2505.21025v1
NAR transformer, which has 8 attention heads and a hidden dimension of 768. During two-stage joint fine-tuning, we utilize the same training data and finetune the parameters of the two NAR transformers, with the other modules kept frozen. P_gt is set to 0.1. The loss weights of L_global and L_local are set to 0.1 and 1.0, respective...
https://arxiv.org/abs/2505.21025v1
0.889 4.156 27.34 Removal Ours single 600 1.079 2.869 25.50 1.184 1.383 23.98 1.148 1.919 23.66 1.069 4.209 26.65 Ours 600 1.007 2.852 25.66 1.112 1.373 23.94 1.105 1.908 23.88 1.041 4.179 27.00 TABLE III SEMANTIC EVALUATION FOR GENERAL AUDIO Model 3 Sets Clotho AudioCaps FSD50K AFSim (↑) CLAP (↑) AFSim (↑) CLAP (↑) AF...
https://arxiv.org/abs/2505.21025v1
content. C. Visualization 1) t-SNE visualization: To show how the global-semantic separation performs, we visualize the extracted features from this stage with t-SNE [65]. In Figure 5, each color shows a sound event class and we present the ground-truth global audio semantic feature and the separated output with differ...
https://arxiv.org/abs/2505.21025v1
ESC-50, SC-5h and NS-5h, where only the downstream model is trained. As shown in Table VII, our pretrained model achieves good performance, outperforming the official AudioMAE-B [45]. TABLE VII: EVALUATION OF AUDIOMAE ON HEAR BENCHMARK. Model | ESC-50 (↑) | SC-5h (↑) | NS-5h (↑); AudioMAE-B [45] | 57.6 | 33.9 | 61.4; Our pretrained Au...
https://arxiv.org/abs/2505.21025v1
deep learning: An overview,” IEEE/ACM transactions on audio, speech, and language processing , vol. 26, no. 10, pp. 1702–1726, 2018. [7] Q. Kong, K. Chen, H. Liu, X. Du, T. Berg-Kirkpatrick, S. Dubnov, and M. D. Plumbley, “Universal source separation with weakly labelled data,” arXiv preprint arXiv:2305.07447 , 2023. [...
https://arxiv.org/abs/2505.21025v1
of the Thirty-Third International Joint Conference on Artificial Intelligence, 2024, pp. 5835–5843. [21] F. Kreuk, G. Synnaeve, A. Polyak, U. Singer, A. Défossez, J. Copet, D. Parikh, Y. Taigman, and Y. Adi, “Audiogen: Textually guided audio generation,” arXiv preprint arXiv:2209.15352, 2022. [22] H. Liu, Y. Yua...
https://arxiv.org/abs/2505.21025v1
2021, pp. 8748–8763. [34] R. Tan, A. Ray, A. Burns, B. A. Plummer, J. Salamon, O. Nieto, B. Russell, and K. Saenko, “Language-guided audio-visual source separation via trimodal consistency,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 10575–10584. [35] K. Kilgour...
https://arxiv.org/abs/2505.21025v1
Learning Research, vol. 25, no. 70, pp. 1–53, 2024. [50] X. Jiang, X. Peng, Y. Zhang, and Y. Lu, “Universal speech token learning via low-bitrate neural codec and pretrained representations,” IEEE Journal of Selected Topics in Signal Processing, 2024. [51] X. Jiang, X. Peng, H. Xue, Y. Zhang, and Y. Lu, “Latent-d...
https://arxiv.org/abs/2505.21025v1
[63] H.-H. Wu, O. Nieto, J. P. Bello, and J. Salamon, “Audio-text models do not yet leverage natural language,” in ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) . IEEE, 2023, pp. 1–5. [64] Z. Kong, A. Goel, R. Badlani, W. Ping, R. Valle, and B. Catanzaro, “Audio flam...
https://arxiv.org/abs/2505.21025v1
arXiv:2505.21026v1 [eess.SY] 27 May 2025. IEEE TRANSACTIONS ON CYBERNETICS, APRIL 2025. Multi-Mode Process Control Using Multi-Task Inverse Reinforcement Learning. Runze Lin, Junghui Chen, Biao Huang, Fellow, IEEE, Lei Xie, Hongye Su, Senior Member, IEEE. Abstract—In the era of Industry 4.0 and smart manufacturing, p...
https://arxiv.org/abs/2505.21026v1
learning [12]. These RL controllers can then be fine-tuned via transfer learning in real-world processes, improving the safety of DRL training. A basic method, behavior cloning, fits “state x_t, action u_t” pairs from expert trajectories. In contrast, Inverse RL (IRL), particularly Adversarial IRL (AIRL) [13], offers a mo...
https://arxiv.org/abs/2505.21026v1
direct interaction with the environment, 2) fully utilizing the latent controller features embedded in historical industrial data, and 3) addressing the complexities of multi- mode control design through multi-task learning. The remainder of this paper is structured as follows: Section II reviews the preliminaries, inc...
https://arxiv.org/abs/2505.21026v1
Σ_{t=1} E_{(x_t,u_t)∼ρ_π}[r(x_t, u_t) + H(π(u_t|x_t))]. (5) Therefore, the objective of MaxEnt RL is to maximize the entropy-regularized long-term (discounted) rewards as shown in Eq. (2). From another perspective, this is equivalent to minimizing...
https://arxiv.org/abs/2505.21026v1
of a control system can be described as follows: x_{t+1} ∼ p(x_{t+1} | x_t, u_t) (12) and u_t ∼ p(u_t | x_t, ω) ≜ π_ω(u_t | x_t) (13), where p(u_t | x_t, ω) is the conditional distribution of actions, explicitly denoted as an ω-parameterized policy π_ω(u_t | x_t) to emphasize the role of the control policy. Utilizing the system dynamics model and controller de-...
https://arxiv.org/abs/2505.21026v1
based on IRL. The objective is to develop a fully closed-loop data-driven controller that can effectively adapt to different operating modes. As analyzed in Section III-A, assuming the controlled system operates under M distinct modes, corresponding to M distinct optimal or near-optimal controllers, the operation of th...
https://arxiv.org/abs/2505.21026v1
operating data of multi-mode processes, the conventional single-mode MDP definition must be extended. Specifically, the original MDP is modified and augmented by introducing a conditional term based on z ∈ Z, where Z represents the value space of the latent context variable z. Consequently, each MDP component—except fo...
https://arxiv.org/abs/2505.21026v1
closely resemble those driven by the true underlying reward r(x, u, z ). Successfully training the multi-task IRL agent with latent dependencies equips the IRL-based process controller to effectively manage scenarios characterized by multi-mode behaviors. C. Multi-task IRL using context-conditional probabilistic in- fe...
https://arxiv.org/abs/2505.21026v1
overall optimization objective can be formulated as: min_{θ,ψ} E_{p(z)}[D_KL(p_{πE}(τ|z) ∥ p_θ(τ|z))] − α·I_{p_θ}(z; τ) + β·E_{p_θ(τ)}[D_KL(p_θ(z|τ) ∥ q_ψ(z|τ))]. (25) The first term aims to align the conditional distributions between the closed-loop expert trajectories and the IRL trajectories generated by the θ-parameterized reward function an...
https://arxiv.org/abs/2505.21026v1
approach incorporates an additional input dependency, i.e., the latent context z, resulting in an augmented MDP state ⟨x, z⟩that is used to train both the multi-mode policy and the reward. The process begins with the Inference Network estimating the mode-specific latent context of sampled trajectories. This inferred co...
https://arxiv.org/abs/2505.21026v1
to design a control policy that adjusts the system inputs u1, u2to maximize the product concentration y2 at the end of the batch operation. Positive reward feedback is only provided at the end of the batch operation, with rewards set to penalize excessive action changes at all other intervals. The reward function is de...
https://arxiv.org/abs/2505.21026v1
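The sparse terminal-reward scheme described above can be sketched as follows; the penalty weight and the exact shaping are illustrative assumptions, not the paper's reward function:

```python
def batch_reward(t, T, y2, du, lam=0.1):
    """Sparse batch-process reward: the product concentration y2 is paid out
    only at the final step of the batch; every other step penalises the
    action change du. (lam and the quadratic shaping are assumptions.)"""
    if t == T - 1:
        return y2               # positive feedback only at batch end
    return -lam * (du ** 2)     # penalise excessive action changes elsewhere

rewards = [batch_reward(t, T=5, y2=0.8, du=0.2) for t in range(5)]
print(rewards)  # four small penalties, then the terminal payoff 0.8
```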
[Fig. 5. Sketch of the CSTR control system.] B. Case 2: A benchmark CSTR process (continuous control). 1) System description and problem formulation: To demonstrate the effectiveness of the proposed method in continuous control scenarios, a continuous stirred tank reactor (CS...
https://arxiv.org/abs/2505.21026v1
the trained IRL controller directly into the environment for validation. While a small residual error remains after the controller stabilizes, the overall performance meets acceptable standards. Furthermore, in industrial applications, the pre- trained controller can be fine-tuned in real-world settings to facilitate S...
https://arxiv.org/abs/2505.21026v1
a probabilistic inference-based solution for data-driven controller design, and underscores the potential of context-conditional latent variable modeling techniques in the development of multi-mode process controllers. REFERENCES [1] R. Nian, J. Liu, and B. Huang, “A re...
https://arxiv.org/abs/2505.21026v1
19, no. 4, pp. 6056–6068, 2023. [16] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel, “InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets,” in Advances in Neural Information Processing Systems, vol. 29, 2016. [17] Y...
https://arxiv.org/abs/2505.21026v1
extensively in industrial practice. Lei Xie received the B.S. and Ph.D. degrees from Zhejiang University, China, in 2000 and 2005, respectively. From 2005 to 2006, he was a Postdoctoral Researcher with the Berlin University of Technology, and an Assistant Professor from 2005 to 2008. He is currently a Professor with...
https://arxiv.org/abs/2505.21026v1
arXiv:2505.21027v1 [cs.LG] 27 May 2025. TabAttackBench: A Benchmark for Adversarial Attacks on Tabular Data. Zhipeng He a,b,∗, Chun Ouyang a,b, Lijie Wen c, Cong Liu d, Catarina Moreira e,b,f. a School of Information Systems, Queensland University of Technology, Brisbane, Australia; b Center for Data Science, Queensland Universit...
https://arxiv.org/abs/2505.21027v1
on Tabular Data. Tabular data, structured yet rich in semantics, heterogeneity, and interdependencies, is prevalent in domains such as finance, healthcare, and e-commerce. These datasets often contain vital information used for decision-making processes, predictive modelling, and anomaly detection. Despite their signi...
https://arxiv.org/abs/2505.21027v1