| text string | source string |
|---|---|
arXiv:2505.20979v1 [cs.SD] 27 May 2025 MelodySim: Measuring Melody-aware Music Similarity for Plagiarism Detection Tongyu Lu∗1, Charlotta-Marlena Geist∗2, Jan Melechovsky1, Abhinaba Roy1, Dorien Herremans1 1Singapore University of Technology and Design 2Otto von Guericke University Magdeburg tongyu_lu@mymail.sutd.edu.sg... | https://arxiv.org/abs/2505.20979v1 |
music is. In an analysis of 17 lawsuits, [7] observed that the melody was prioritized when deciding on plagiarism, followed by the ‘overall impression’ of the music. This leads us to believe that there is a need for a melody-aware music similarity tool. The existing work on melody similarity metrics, however, is limi... | https://arxiv.org/abs/2505.20979v1 |
melody, but also encodes the music in general through MERT-features [11]. To achieve this, we carefully constructed a new dataset by thoroughly altering musical features in different levels of detail while maintaining the main melody, as explained in Section 3. 2.2 Automatic Music Similarity Detection Most existing wor... | https://arxiv.org/abs/2505.20979v1 |
of a combination of Euclidean distance, cosine similarity and the Pearson correlation. The model was trained on the WhoSampled dataset. The task of finding replicated samples is also limited to finding exact repetitions. In this work, we aim to improve upon such an approach by including note-level variations to m... | https://arxiv.org/abs/2505.20979v1 |
source. 3.2 Step 2 - MIDI-level augmentations Now that we have identified the melody track in Step 1, we are able to perform a number of MIDI augmentations at both the instrument and note level. Instrument replacement: For each of the MIDI tracks, a new instrument is considered. We first group the MIDI instrument ind... | https://arxiv.org/abs/2505.20979v1 |
the computation of the distance or similarity between these embeddings. [Figure: augmentation pipeline. MIDI → melody track identification; instrument-level: non-melody stem removal, instrument replacement; note-level: note splitting, chord inversion, arpeggiation; audio-level: pitch shift, time shift, segmenting, tempo change; synthesize audio.] ... | https://arxiv.org/abs/2505.20979v1 |
with triplet loss, which is defined as follows: L_triplet(x_anc, x_pos, x_neg) = max(d(x_anc, x_pos) − d(x_anc, x_neg) + α, 0), where α = 1.0 is the margin and d(x_i, y_i) = ‖x_i − y_i‖₂ is the Euclidean distance. Finally, a fully-connected classifier is maintained at the end to measure the sigmoid distance between embeddings, with output sca... | https://arxiv.org/abs/2505.20979v1 |
the 78 pieces from the test split. Specifically, we construct 546 = 7 × 78 positive pairs, where the factor 7 comes from all combinations among versions along with self-comparison, i.e., {(orig,orig), (orig,ver1), (orig,ver2), ..., (ver2,ver3)}. Correspondingly, we select an equal number of negative pairs to maintain ... | https://arxiv.org/abs/2505.20979v1 |
makes it difficult to craft a simple rule for melody identification, which could sometimes result, for instance, in a part of the melody missing, or a non-melody track being treated as a melody track, thus being always present after passing through the augmentation pipeline. Furthermore, melody identification rules... | https://arxiv.org/abs/2505.20979v1 |
pp. 6048–6058. [5] N. Carlini, J. Hayes, M. Nasr, M. Jagielski, V. Sehwag, F. Tramer, B. Balle, D. Ippolito, and E. Wallace, “Extracting training data from diffusion models,” in 32nd USENIX Security Symposium, 2023, pp. 5253–5270. [6] B. L. Sturm, M. Iglesias, O. Ben-Tal, M. Miron, and E. Gómez, “Artificial intelli... | https://arxiv.org/abs/2505.20979v1 |
W.-H. Liao, X. Serra, Y. Mitsufuji, and E. Gómez Gutiérrez, “Towards assessing data replication in music generation with music similarity metrics on raw audio,” 2024. [20] E. Manilow, G. Wichern, P. Seetharaman, and J. Le Roux, “Cutting music source separation some slakh: A dataset to study the impact of training da... | https://arxiv.org/abs/2505.20979v1 |
arXiv:2505.20993v1 [cs.CL] 27 May 2025 Who Reasons in the Large Language Models? Jie Shao Jianxin Wu∗ National Key Laboratory for Novel Software Technology, Nanjing University, China School of Artificial Intelligence, Nanjing University, China shaoj@lamda.nju.edu.cn, wujx2001@nju.edu.cn Abstract Despite the impressive p... | https://arxiv.org/abs/2505.20993v1 |
rise to specific abilities. By comparing weight changes and observing behaviors under controlled module merging, tuning, or destruction, SfN provides interpretable insights into the origin of capabilities like reasoning. Case 3: Or, in the worst scenario, is reasoning an illusion (e.g., by overfitting to certain types of... | https://arxiv.org/abs/2505.20993v1 |
deep neural network and may lead to alternative routes for further deep learning research. 2 Key Hypothesis: Output Projection is the Key for Reasoning To present our findings, we start by introducing necessary background information and notations, while discussions on related work are deferred to Section 5. Modern L... | https://arxiv.org/abs/2505.20993v1 |
for other model sizes (7B and 8B) are provided in the appendix and exhibit similar patterns. For the 1.5B models, the signal is less clear, but o_proj still exhibits a distinct pattern compared to q,k,v_proj, showing the largest change within the attention module and the second-largest across the entire model. As model... | https://arxiv.org/abs/2505.20993v1 |
a valid solution. Q: Every morning, Aya … This morning, if she walks at s+1/2 kilometers per hour, how many minutes will the walk take? A: First, the problem says that … Subtract: 9(1/s − 1/(s+2)) = 1.6 ⇒ s(s+2) = 11.25 ⇒ s = 2.5. Then: 3.6 + t/60 = 4 ⇒ t = 24. Today: speed is 3 km/h, walk = 180 min, total = 180 + 24 = 204 minutes. Q: Can ... | https://arxiv.org/abs/2505.20993v1 |
{q,k,v}_proj, produces level III outputs, while M3, which replaces mlp, deteriorates to level I. Only replacing o_proj results in a correct reasoning process and a correct answer, as illustrated in Figure 5. This striking difference motivates our further investigation in Section 3. [Table header: Model, Replaced Module, AIME 2024, Avera... | https://arxiv.org/abs/2505.20993v1 |
primarily due to limited computational resources and the lack of an exact testing recipe to reproduce the reported results. However, our objective is not to optimize accuracy via testing tricks or prompt tuning, but to highlight the effectiveness of o_proj tuning compared to full-parameter tuning. For fair comparison, ... | https://arxiv.org/abs/2505.20993v1 |
map to levels III and IV in our categorization of LLM outputs, respectively. Our Hypothesis 1 concerns reasoning, but is there one module, or several, accounting for lucid conversations? In this section, we further propose a new stethoscope to diagnose this question and raise our conjectures accordingly. 3.1 The D... | https://arxiv.org/abs/2505.20993v1 |
up_proj, down_proj, gate_proj) are essential. Within MHSA, q_proj and k_proj are important, while v_proj plays a minor role. Based on these (admittedly weaker) observations, we propose the following conjecture. Conjecture 1 (Division of Labor) Based on current observations, an LLM can be roughly divided into two sets of... | https://arxiv.org/abs/2505.20993v1 |
reasoning traces with sparse rewards. This leads to significant improvements, particularly in complex math, code, and other professional domains [ 13,51]. Despite these advances, the origin and location of reasoning ability in LLMs remain underexplored. Interpretability of LLMs. Understanding the inner workings of LLMs... | https://arxiv.org/abs/2505.20993v1 |
as the backend, which may slightly affect performance due to backend-specific optimizations. In the Merge Stethoscope experiments, we observe that the “chat” interface often generates irrelevant or nonsensical responses, while the “generate” interface produces coherent and contextually appropriate outputs. We suspect t... | https://arxiv.org/abs/2505.20993v1 |
the proposed Stethoscope for Networks (SfN) framework provides a novel set of tools for interpreting LLMs, especially by localizing specific capabilities, such as reasoning, to individual components like the output projection (o_proj). These tools may significantly improve our understanding of LLMs, enabling more tran... | https://arxiv.org/abs/2505.20993v1 |
Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025. [14] Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi... | https://arxiv.org/abs/2505.20993v1 |
Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155 , 2022. [29] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask le... | https://arxiv.org/abs/2505.20993v1 |
Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171 , 2022. [46] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning languag... | https://arxiv.org/abs/2505.20993v1 |
arXiv:2505.20997v1 [cs.LG] 27 May 2025 BIPNN: Learning to Solve Binary Integer Programming via Hypergraph Neural Networks Sen Bai Changchun University of Science and Technology, China baisen@cust.edu.cn Chunqi Yang Changchun University of Science and Technology, China yangchunqi@mails.cust.edu.cn Xin Bai Huawei Technolog... | https://arxiv.org/abs/2505.20997v1 |
[Figure: BIPNN pipeline. A BIP problem (s.t. x_i ∈ {0,1}, i = 1,...,4) undergoes polynomial reformulation, then unconstrained reformulation into a PUBO loss; training a HyperGNN to optimize the loss converges to a binary assignment, e.g. x1 = 1, x2 = 1, x3 = 0, x4 = 0.] | https://arxiv.org/abs/2505.20997v1 |
quantum computers may solve efficiently. [A Preprint, May 28, 2025] 2) In the second phase, we leverage hypergraph neural networks (HyperGNN) to address Challenge 1, capturing high-order correlations between binary decision variables, or in other words the polynomial terms in the refined PUBO objective. By applying a ... | https://arxiv.org/abs/2505.20997v1 |
(2) where x_i ∈ {0,1} are binary decision variables and the set of all decision variables is denoted by x = (x1, x2, ···, xm). As shown in Fig. 2, for ease of representation, a PUBO objective O_PUBO with n terms can be decomposed into two components: the PUBO matrix Q = [Q1, Q2, ..., Qn], and n linear or polynomial terms such ... | https://arxiv.org/abs/2505.20997v1 |
Eq. 4 is O(m×n). For GPU-accelerated training, element-wise operations such as the Hadamard product are fully parallelizable. A column-wise product over m leads to time complexity O(log m). Thus, the theoretical best GPU time complexity is O(log m). Utilizing T cores, the realistic GPU time complexity is O(m×n/T). Annealing Stra... | https://arxiv.org/abs/2505.20997v1 |
In penalty methods [21, 22], unconstrained reformulation is achieved by adding "penalty terms" to the objective function that penalize violations of constraints. A well-constructed penalty term must be designed such that it equals 0 if and only if the constraint is satisfied, and takes a positive value otherwise. Specif... | https://arxiv.org/abs/2505.20997v1 |
case, when an enumeration method is used in step (i), it requires calculating 2^Δ subsets, where Δ is the number of variables in constraint g(x). Nevertheless, in most real-world problems (e.g. max-cut, and maximal independent set or MIS) involving graphs, the variables associated with each constraint often exhibit locali... | https://arxiv.org/abs/2505.20997v1 |
to generate PUBO objective functions. Thereafter, several constraints (penalty terms) were randomly incorporated into the PUBO objectives. To demonstrate the effectiveness of BIPNN on real-world settings, we also conduct experiments on the hypergraph max-cut problem (refer to Appendix C), a well-known BIP problem bench... | https://arxiv.org/abs/2505.20997v1 |
We also impose a 1-hour time limit and evaluate the difference in solution quality for Tabu when the degrees of polynomial terms are set to 4 and 6. The number of vertices (variables) |V| in the hypergraph generated by BIPNN ranges from 200 to 5,000. Experimental results are depicted in Fig. 4e (d = 4) and Fig. 4f (d = 6). ... | https://arxiv.org/abs/2505.20997v1 |
a fixed number of 1000 epochs. As Fig. 6 illustrates, when GPU acceleration is applied to compute the PUBO loss function, the training time does not exhibit significant growth with an increasing number of variables. In contrast, without GPU acceleration, the training time increases rapidly as the number of variables ri... | https://arxiv.org/abs/2505.20997v1 |
inferring chemical compounds with prescribed topological substructures based on integer programming. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 19(6):3233–3245, 2021. [9] Vladimir V Gusev, Duncan Adamson, Argyrios Deligkas, Dmytro Antypov, Christopher M Collins, Piotr Krysta, Igor Potapov, Geo... | https://arxiv.org/abs/2505.20997v1 |
x3 = 0: P(0,0,0) = d = sin(0) = 0. Thus, d = 0. 2) When x1 = 0, x2 = 0, x3 = 1: P(0,0,1) = a3 = sin(1) ≈ 0.8415. Thus, a3 = 0.8415. 3) When x1 = 0, x2 = 1, x3 = 0: P(0,1,0) = a2 = sin(1) ≈ 0.8415. Thus, a2 = 0.8415. 4) When x1 = 1, x2 = 0, x3 = 0: P(1,0,0) = a1 = sin(1) ≈ 0.8415. Thus, a1 = 0.8415. 5) When x1 = 0, x2 = 1, x3 = 1: P(0,1,1) =... | https://arxiv.org/abs/2505.20997v1 |
arXiv:2505.21012v1 [cs.LG] 27 May 2025 FEDERATED INSTRUMENTAL VARIABLE ANALYSIS VIA FEDERATED GENERALIZED METHOD OF MOMENTS Geetika, Somya Tyagi, Bapi Chatterjee∗ Department of Computer Science and Engineering, IIIT Delhi New Delhi, India {geetikai, somya23005, bapi}@iiitd.ac.in ABSTRACT Instrumental variables (IV) anal... | https://arxiv.org/abs/2505.21012v1 |
SRG/2022/002269. Federated IV Analysis via Federated GMM, Geetika et al. One can address the above issue by observing and accommodating every confounding latent factor that may influence the outcome. Thus, it may require that obesity, diabetes, overall health at the time of admission, and even genetic factors are accom... | https://arxiv.org/abs/2505.21012v1 |
for IV analysis applies the generalized method of moments (GMM) (Wooldridge, 2001). GMM is a celebrated estimation approach in social sciences and economics. It was introduced by Hansen (1982), for which he won a Nobel Prize in Economics (Steif et al., 2014). Building on (Wooldridge, 2001), Bennett, Kallus, and Schnabe... | https://arxiv.org/abs/2505.21012v1 |
causal learning setting, is a relatively under-explored research area. Vo et al. (2022a) presented a method to learn the similarities among the data sources, translating a structural causal model (Pearl, 2009) to the federated setting. They transform the loss function by utilizing Random Fourier Features into components as... | https://arxiv.org/abs/2505.21012v1 |
global causal response function that would fit the data generation processes of each client without centralizing the data. More specifically, we learn a parametric function g0(.) ∈ G := {g(., θ) | θ ∈ Θ} expressed as g0 := g(., θ0) for θ0 ∈ Θ, defined by g(., θ0) = (1/N) Σ_{i=1}^{N} g_i(., θ0). (3) The learning process essentially involves est... | https://arxiv.org/abs/2505.21012v1 |
of (9) is given by θ_GMM ∈ argmin_{θ∈Θ} Ψ_{n_i}(θ, F_i, θ̃). (13) As the data dimension grows, the function class F_i is replaced with a class of neural networks of a certain architecture, i.e. F_i = {f_i(z, τ) : τ ∈ T}. Similarly, let G_i = {g_i(x, θ) : θ ∈ Θ} be another class of neural networks with varying weights. With that, define U^i_θ̃(θ, ... | https://arxiv.org/abs/2505.21012v1 |
federated minimax optimization problem (20) is not convex-concave in (θ, τ). The convergence results of variants of FedGDA (Sharma et al., 2022; Shen et al., 2024; Wu et al., 2024) assume that U_θ̃(θ, τ) is non-convex in θ and satisfies a µ-Polyak-Łojasiewicz (PL) inequality in τ; see Assumption 4 in (Sharma et al., 2022)... | https://arxiv.org/abs/2505.21012v1 |
adjectives global/local as used both for objective functions in federated learning and for the nature of minimax points in optimization, we refer to a global objective as the federated objective and a local objective as the client’s objective. Definition 1 (Local minimax point). [Definition 14 of (Jin, Netrapalli,... | https://arxiv.org/abs/2505.21012v1 |
4, a minimax solution (θ̂, τ̂) of the federated optimization problem (20) that satisfies the equilibrium condition as in Definition 1: U_θ̃(θ̂, τ) ≤ U_θ̃(θ̂, τ̂) ≤ max_{τ′: ∥τ′−τ̂∥ ≤ h(δ)} U_θ̃(θ, τ′), is an E-approximate federated equilibrium solution as defined in 3, where the approximation error ε_i for each client i ∈ [N] lies in: max{ζ^i_θ... | https://arxiv.org/abs/2505.21012v1 |
4.2 Limit Points of FedGDA Let α1 = η/γ, α2 = η be the learning rates for gradient updates to θ and τ, respectively. For details, refer to Algorithm 1 in Appendix B. Without loss of generality, the FedGDA updates are: θ_{t+1} = θ_t − (η/γ)(1/N) Σ_{i∈[N]} Σ_{r=1}^{R} ∇_θ U^i_θ̃(θ^i_{t,r}, τ^i_{t,r}) and τ_{t+1} = τ_t + η(1/N) Σ_{i∈[N]} Σ_{r=1}^{R} ∇_τ U^i_θ̃(θ^i_{t,r}, τ^i_{t,r}) (2... | https://arxiv.org/abs/2505.21012v1 |
both low-dimensional. In this case, we use 1-dimensional synthetic datasets corresponding to the following functions: (a) Absolute: g0(x) = |x|, (b) Step: g0(x) = 1{x ≥ 0}, (c) Linear: g0(x) = x. To generate the synthetic data, similar to (Bennett, Kallus, and Sch... | https://arxiv.org/abs/2505.21012v1 |
high-dimensional scenario, we have n = 20000 for the train set and n = 10000 for the validation and test set. To set up a non-i.i.d. distribution of data between clients, samples were divided amongst the clients using a Dirichlet distribution Dir_S(α) (Wang et al., 2019), where α determines the degree of heterogeneity acros... | https://arxiv.org/abs/2505.21012v1 |
on AI in Finance , pp. 1–9 (cit. on p. 1). Caldas, Sebastian et al. (2018). “Leaf: A benchmark for federated settings”. In: arXiv preprint arXiv:1812.01097 (cit. on p. 10). Charles, Zachary and Dimitris Papailiopoulos (2018). “Stability and generalization of learning algorithms that converge to global optima”. In: Inte... | https://arxiv.org/abs/2505.21012v1 |
Praneeth et al. (2020). “Scaffold: Stochastic controlled averaging for federated learning”. In: International conference on machine learning. PMLR, pp. 5132–5143 (cit. on p. 3). Kingma, Diederik P (2015). “Adam: A method for stochastic optimization”. In: ICLR (cit. on p. 10). Kingma, Diederik P, Max Welling, et al. ... | https://arxiv.org/abs/2505.21012v1 |
Johansson, and David Sontag (2017). “Estimating individual treatment effect: generalization bounds and algorithms”. In: International conference on machine learning . PMLR, pp. 3076–3085 (cit. on pp. 2, 16). Sharma, Pranay et al. (2022). “Federated minimax optimization: Improved convergence analyses and algorithms”. In... | https://arxiv.org/abs/2505.21012v1 |
advances, taxonomy, and open challenges”. In: Connection Science 34.1, pp. 1–28 (cit. on p. 1). Zhu, Miaoxi et al. (2024). “Stability and generalization of the decentralized stochastic gradient descent ascent algorithm”. In: Advances in Neural Information Processing Systems 36 (cit. on p. 3). ... | https://arxiv.org/abs/2505.21012v1 |
the following: (i) CausalRFF (Vo et al., 2022a) and FedCI (Vo et al., 2022b). The aim of CausalRFF (Vo et al., 2022a) is to estimate the conditional average treatment effect (CATE) and average treatment effect (ATE), whereas FedCI (Vo et al., 2022b) aims to estimate individual treatment effect (ITE) and ATE. For this,... | https://arxiv.org/abs/2505.21012v1 |
θ^i_{t,1} ← θ_t, τ^i_{t,1} ← τ_t 5: for r = 1, 2, . . . , R do 6: θ^i_{t,r+1} = θ^i_{t,r} − α1 ∇_θ f_i(θ^i_{t,r}, τ^i_{t,r}) 7: τ^i_{t,r+1} = τ^i_{t,r} + α2 ∇_τ f_i(θ^i_{t,r}, τ^i_{t,r}) 8: end for 9: (Δθ^i_t, Δτ^i_t) ← (θ^i_{t,R+1} − θ_t, τ^i_{t,R+1} − τ_t) 10: end for 11: (Δθ_t, Δτ_t) ← (1/N) Σ_{i∈[N]} (Δθ^i_t, Δτ^i_t) 12: θ_{t+1} ← (θ_t + Δθ_t), τ_{t+1} ← (τ_t + Δτ_t) 13: end for 14: return θ_{T+1}; τ_{T+1} ... | https://arxiv.org/abs/2505.21012v1 |
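The triplet loss quoted in the MelodySim excerpt above can be sketched in a few lines of NumPy. The embedding vectors below are made-up toy values for illustration, not data from the paper.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """L_triplet = max(d(anc, pos) - d(anc, neg) + margin, 0),
    with d the Euclidean distance, as in the excerpt (alpha = 1.0)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

# Toy embeddings (illustrative only): the positive lies close to the anchor.
anc = np.array([0.0, 0.0])
pos = np.array([0.1, 0.0])   # d_pos = 0.1
neg = np.array([3.0, 0.0])   # d_neg = 3.0
print(triplet_loss(anc, pos, neg))  # 0.1 - 3.0 + 1.0 < 0, clamped to 0.0
```

The loss is zero once the negative is at least `margin` farther from the anchor than the positive, which is what drives the embedding space apart during training.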
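The corner-evaluation steps in the BIPNN excerpt (determining the coefficients of a multilinear polynomial P from its values on {0,1}^n) amount to an inclusion-exclusion transform. Below is a small sketch under that reading; the helper name is ours, and the interpolated function sin(x1 + x2 + x3) follows the excerpt's worked example.

```python
import numpy as np
from itertools import product

def multilinear_coeffs(f, n):
    """Recover the coefficients of the unique multilinear polynomial that
    agrees with f on {0,1}^n, via inclusion-exclusion over sub-corners."""
    coeffs = {}
    for S in product((0, 1), repeat=n):
        total = 0.0
        for T in product((0, 1), repeat=n):
            if all(t <= s for t, s in zip(T, S)):    # T is a sub-corner of S
                total += (-1) ** (sum(S) - sum(T)) * f(T)
        coeffs[S] = total                            # coeff of prod_{S_i=1} x_i
    return coeffs

# Following the excerpt: interpolate sin(x1 + x2 + x3) on the binary corners.
f = lambda x: np.sin(sum(x))
c = multilinear_coeffs(f, 3)
print(round(c[(0, 0, 0)], 4), round(c[(1, 0, 0)], 4))  # 0.0 0.8415
```

This reproduces the excerpt's values d = 0 and a1 = a2 = a3 = sin(1) ≈ 0.8415, and by construction the polynomial matches f at every binary corner.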
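The penalty-method construction described in the BIPNN excerpt (a term that is 0 iff the constraint holds, positive otherwise) can be illustrated on a tiny binary constraint. The constraint x1 + x2 ≤ 1, the toy objective, and the value of λ are our own illustrative choices, not taken from the paper.

```python
from itertools import product

# Hypothetical BIP constraint over binaries: x1 + x2 <= 1. The product x1*x2
# equals 0 exactly when the constraint holds and is positive otherwise, so
# lam * x1 * x2 is a well-constructed penalty term (lam chosen large enough).
def penalized(obj, lam=10.0):
    return lambda x1, x2: obj(x1, x2) + lam * x1 * x2

obj = lambda x1, x2: -(x1 + x2)   # maximize x1 + x2, written as a minimization
f = penalized(obj)
best = min(product((0, 1), repeat=2), key=lambda x: f(*x))
print(best, f(*best))  # (0, 1) -1.0
```

Minimizing the penalized objective over all binary corners lands on a feasible optimum, while the infeasible corner (1, 1) is pushed up by the penalty.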
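The FedGDA structure in the federated GMM excerpt (R local descent/ascent steps per client, then server averaging of the deltas) can be sketched on a toy convex-concave saddle problem. The objective U(θ, τ) = θ² − τ² + θτ, the identical-client setup, and all step sizes here are illustrative assumptions, not values from the paper.

```python
def fedgda(grad_theta, grad_tau, theta0, tau0,
           n_clients=4, T=50, R=5, a1=0.05, a2=0.05):
    """Sketch of federated gradient descent-ascent: each client runs R local
    descent (theta) / ascent (tau) steps from the global iterate, then the
    server averages the client deltas and updates the global pair."""
    theta, tau = theta0, tau0
    for _ in range(T):
        d_theta = d_tau = 0.0
        for _ in range(n_clients):     # identical toy objective on every client
            th, ta = theta, tau
            for _ in range(R):         # local rounds (simultaneous updates)
                th, ta = th - a1 * grad_theta(th, ta), ta + a2 * grad_tau(th, ta)
            d_theta += th - theta
            d_tau += ta - tau
        theta += d_theta / n_clients   # server averages the deltas
        tau += d_tau / n_clients
    return theta, tau

# Toy saddle: U(theta, tau) = theta**2 - tau**2 + theta*tau, equilibrium (0, 0).
th, ta = fedgda(lambda th, ta: 2 * th + ta,   # dU/dtheta
                lambda th, ta: th - 2 * ta,   # dU/dtau
                theta0=1.0, tau0=1.0)
print(th, ta)
```

With identical clients the averaging step is trivial, but the delta-aggregation skeleton mirrors lines 9 to 12 of the excerpted Algorithm 1; on this strongly convex-concave toy, the iterates contract toward the saddle at (0, 0).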