text string | source string |
|---|---|
learning process, so that the learner does not violate safety while exploring new actions. In post-shielding, the shield is deployed at runtime, i.e., after the learning process has ended, so that the potentially dangerous actions of the learned agents can be corrected. Besides, shields can be des... | https://arxiv.org/abs/2505.22104v1 |
state at the next time step is given by $x' = f(x, u, w)$, and we will express this as the transition $x \xrightarrow{u,w} x'$. The trajectory $\xi$ of $\Sigma$ starting at a given initial state $x_0 \in X$ and caused by control and disturbance input sequences $u_0, u_1, \ldots$ and $w_0, w_1, \ldots$ is a sequence of transitions $x_0 \xrightarrow{u_0, w_0} x_1 \xrightarrow{u_1, w_1} x_2 \ldots$... | https://arxiv.org/abs/2505.22104v1 |
where the subscript "$G$" makes it explicit that $C_G$ is attached to the particular specification $\mathrm{Safety}(G)$. We add one final subclass of controllers to the list of other subclasses presented earlier. For this, we say a controller $C'$ is a sub-controller of $C$, written $C' \sqsubseteq C$, if (a) $\mathrm{Dom}(C') \subseteq \mathrm{Dom}(C)$ and (b) for every state x... | https://arxiv.org/abs/2505.22104v1 |
minimally intervening if every intervention is a necessary intervention, i.e., without the intervention, disturbances could push the trajectory outside of the shield's domain, and therefore safety guarantees would be lost. Our definition of minimal intervention is adapted from the definition by Bloem et al. [4], which formalize... | https://arxiv.org/abs/2505.22104v1 |
If the safety specification changes, then the shield needs to be redesigned. This is especially problematic if the precise safety specification is unknown a priori and the shield needs to adapt as new safety requirements are discovered at runtime. We propose the dynamic shielding problem, where the actual safety ob... | https://arxiv.org/abs/2505.22104v1 |
be called the atomic safety controllers. During the online deployment phase, at each step the true safe set $G = G' \cap G'' \cap \ldots$ is revealed, where $G', G'', \ldots \in R$, and the required safety controller for $\mathrm{Safety}(G)$ is obtained by dynamically composing the corresponding atomic safety controllers $C^*_{G'}, C^*_{G''}, \ldots$. The proce... | https://arxiv.org/abs/2505.22104v1 |
Fig. 1: Illustration of the two steps involved in the online composition of atomic safety controllers. The automaton represents a finite-state control system with two control inputs $u_1, u_2$ and no disturbance inputs. The nodes are the states and the arrows represent the transition function. Suppose there are two safety sp... | https://arxiv.org/abs/2505.22104v1 |
specifications [16,24]. Besides, ABC algorithms are usually implementable using efficient symbolic data structures, such as BDDs, helping us to devise efficient push-button controller synthesis algorithms in practice. The typical workflow of an ABC algorithm has three stages, namely abstraction, synthesis, and (co... | https://arxiv.org/abs/2505.22104v1 |
$x \in \mathrm{Dom}(C) = \bigcup_{\hat{x} \in \mathrm{Dom}(\hat{C}_{\hat{\Phi}})} \hat{x}$. By virtue of the FRR $Q$ between $\Sigma$ and $\hat{\Sigma}$, it is guaranteed that $\mathrm{Paths}(\Sigma, C_\Phi, x_0) \subseteq \Phi$ for all $x_0 \in \mathrm{Dom}(C_\Phi)$; in other words, $C_\Phi$ is a sound controller of $\Sigma$. It is worthwhile to mention that such a simple refinement stage is one unique strength of FRR, since the other alternatives [26,19] usually req... | https://arxiv.org/abs/2505.22104v1 |
keep the abstract system inside $S$. It is guaranteed that $\hat{C}_{\hat{G}_i}$ is an n.m.p. safety controller of $\hat{\Sigma}$ for $\mathrm{Safety}_{\hat{\Sigma}}(\hat{G}_i)$, and that its refinement is a nonblocking safety controller for $\mathrm{Safety}_{\Sigma}(G_i)$. Unfortunately, the maximal permissiveness is not guaranteed with respect to $\Sigma$, as explained in Remark 2. Step B1: Computing t... | https://arxiv.org/abs/2505.22104v1 |
are within a certain distance $d$ along each dimension of the X-Y coordinate axes. This creates a visible region that is a square whose sides have the length $2d$, centered around the current location of the robot at each time step. This is a realistic scenario experienced by many mobile agents, including self-driving cars an... | https://arxiv.org/abs/2505.22104v1 |
Dubins vehicle model. The system has three state variables $x$, $y$, and $\theta$, where $x$ and $y$ represent the location in the X-Y coordinates, and $\theta$ represents the heading angle in radians (measured counter-clockwise from the positive X axis); two control input variables $v$ and $a$, representing the forward velocity and the angular veloc... | https://arxiv.org/abs/2505.22104v1 |
report them in Figure 3. We observe that the dynamic shields are almost always faster than the pure online shields, and as the abstraction gets finer, their difference becomes more prominent. With the finest abstraction, the dynamic shield was up to five times faster! Furthermore, any efficiency improvement of the pure ... | https://arxiv.org/abs/2505.22104v1 |
enforcement for reactive systems. In: International Conference on Tools and Algorithms for the Construction and Analysis of Systems. pp. 533–548. Springer (2015) 5. ElSayed-Aly, I., Bharadwaj, S., Amato, C., Ehlers, R., Topcu, U., Feng, L.: Safe multi-agent reinforcement learning via shielding. arXiv preprint arXiv:2... | https://arxiv.org/abs/2505.22104v1 |
(2016) 25. Rungger, M., Zamani, M.: SCOTS: A tool for the synthesis of symbolic controllers. In: Proceedings of the 19th International Conference on Hybrid Systems: Computation and Control. pp. 99–104 (2016) 26. Tabuada, P.: An approximate simulation approach to symbolic control. IEEE Transactions on Automatic Control 53(6), 1406–1... | https://arxiv.org/abs/2505.22104v1 |
arXiv:2505.22106v1 [cs.SD] 28 May 2025. AudioTurbo: Fast Text-to-Audio Generation with Rectified Diffusion. Junqi Zhao 1, Jinzheng Zhao 1, Haohe Liu 1, Yun Chen 1, Lu Han 2, Xubo Liu 1, Mark Plumbley 1, Wenwu Wang 1. 1 Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, UK; 2 Laboratory of Noise and Audio R... | https://arxiv.org/abs/2505.22106v1 |
other TTA models, including AudioLDM2 [13], Make-an-Audio [14], and Make-an-Audio2 [15], utilize latent variable generation in conjunction with a pre-trained VAE and vocoder for audio synthesis, delivering outstanding generation quality. Although diffusion models have achieved substantial advancements in audio ... | https://arxiv.org/abs/2505.22106v1 |
$t$ is a continuous time variable with $t \in [0, T]$. The functions $f(\cdot)$ and $g(\cdot)$ denote the drift coefficient and diffusion coefficient, respectively, while $w$ represents Brownian motion. The reverse process has an equivalent deterministic process whose trajectories share the same marginal probability densities as those of the ... | https://arxiv.org/abs/2505.22106v1 |
diffusion process that progressively transforms the clean data distribution into a standard normal distribution, following a predefined noise schedule, $q(z_t \mid z_0) = \mathcal{N}(z_t; \alpha_t z_0, \sigma_t I)$ (7). This forward process demonstrates that any latent variable $z_t$ can be directly sampled from $z_0$, and it represents a weighted interpol... | https://arxiv.org/abs/2505.22106v1 |
these, LAFMA is the SOTA accelerated model in the TTA domain and serves as our primary comparison target. 4.1.4. Evaluation Metrics. In this study, we evaluate our proposed TTA system, utilizing both objective metrics and subjective measures to assess the fidelity and diversity of the generated audio clips. For object... | https://arxiv.org/abs/2505.22106v1 |
inference efficiency. In addition, the proposed AudioTurbo achieves a CLAP score of 29.8 and a REL of 85.58 with 10 sampling steps, outperforming other baselines by a large margin. This indicates that our proposed model can generate audio that is more relevant to the given textual description, demonstrating superio... | https://arxiv.org/abs/2505.22106v1 |
With the same number of inference steps, our model achieves the best or near-best performance, with its advantage being especially prominent in the low-step regime. Furthermore, compared to the flow-matching-based TTA acceleration model (LAFMA), our model achieves comparable performance in just three steps, matching ... | https://arxiv.org/abs/2505.22106v1 |
the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 10684–10695. [12] J. Xue, Y. Deng, Y. Gao, and Y. Li, "Auffusion: Leveraging the power of diffusion and large language models for text-to-audio generation," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 32, pp... | https://arxiv.org/abs/2505.22106v1 |
K. Chen, T. Zhang, Y. Hui, T. Berg-Kirkpatrick, and S. Dubnov, "Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation," in International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023, pp. 1–5. [27] A. Radford, J. W. Kim, C. Hallacy,... | https://arxiv.org/abs/2505.22106v1 |
arXiv:2505.22108v1 [cs.LG] 28 May 2025. Inclusive, Differentially Private Federated Learning for Clinical Data. Santhosh Parampottupadam 1,2 [0009-0009-9401-887X], Melih Coşğun 3 [0009-0008-3596-8376], Sarthak Pati 5 [0000-0003-2243-8487], Maximilian Zenk 1,2 [0000-0002-8933-5995], Saikat Roy 1 [0000-0002-0809-6524], Dimitrios Bouni... | https://arxiv.org/abs/2505.22108v1 |
while addressing compliance and computational barriers [19,5,6]. This paper proposes a novel compliance-aware FL framework to enhance privacy in healthcare by dynamically integrating DP with client compliance scores. The framework introduces a customizable compliance scoring tool aligned with key healthcare sta... | https://arxiv.org/abs/2505.22108v1 |
7: end for 8: Send {CLIENT_i} to aggregator 9: DP Processing: 10: for each client i do 11: DP_i ← Copy(CLIENT_i) 12: DP_i ← DPTrain(DP_i, agg_data, η = AdaptiveNoise(c_i)) 13: end for 14: Aggregation: 15: GLOBAL_MODEL ← FedAvg({DP_i}) ⊳ FedMedian/Prox/Yogi/Adam 16: Broadcast GLOBAL_MODEL to clients 17: end for 18: return GLOBAL_MODEL Nois... | https://arxiv.org/abs/2505.22108v1 |
igator (PI) (Table 2). This tool, grounded in established healthcare and security standards, evaluated clients on 12 compliance factors with predefined options and weights (Equation 1). These scores determined the level of noise dynamically added to client contributions, ensuring baseline privacy with a minimum nois... | https://arxiv.org/abs/2505.22108v1 |
and dp_FedProx 64.04%. 4 Discussion. In this manuscript, we have developed a novel compliance-aware FL framework which optimizes the privacy-utility trade-off by dynamically adjusting DP noise based on client compliance scores. We evaluated our method across multiple experiments using various aggregation strategies ... | https://arxiv.org/abs/2505.22108v1 |
but adding minimal noise in the first round or using secure multi-party computation (SMPC) could enhance security. Additionally, the framework assumes accurate and honest compliance scores, which may not always hold. Future work could explore dynamic validation to ensure real-time compliance verification. 8 S. Para... | https://arxiv.org/abs/2505.22108v1 |
81.56 63.72 70.80 63.72 65.51 FedAdam 79.12 89.10 78.30 89.12 63.45 79.90 73.01 75.30 10 S. Parampottupadam et al. References 1. Act, E.A.I.: EU Artificial Intelligence Act — Up-to-date developments and analyses of the EU AI Act — artificialintelligenceact.eu. https://artificialintelligenceact.eu/, [Accessed 11-01-2... | https://arxiv.org/abs/2505.22108v1 |
optimization in heterogeneous networks. Proceedings of Machine Learning and Systems 2, 429–450 (2020) 21. Li, X., Zmigrod, R., Ma, Z., Liu, X., Zhu, X.: Fine-tuning language models with differential privacy through adaptive noise allocation (2024), https://arxiv.org/abs/2410.02912 22. McMahan, B., Moore, E., Ramage, ... | https://arxiv.org/abs/2505.22108v1 |
The quest for the GRAph Level autoEncoder (GRALE). Paul Krzakala (LTCI & CMAP, Télécom Paris, IP Paris), Gabriel Melo (LTCI, Télécom Paris, IP Paris), Charlotte Laclau (LTCI, Télécom Paris, IP Paris), Florence d'Alché-Buc (LTCI, Télécom Paris, IP Paris), Rémi Flamary (CMAP, École Polytechnique, IP Paris). Abstract: Although graph-based ... | https://arxiv.org/abs/2505.22109v1 |
approaches, such as the one proposed in this work and in [67], use a learnable module to provide the matching. model, such as adversarial regularization [47] or masking [55], and it has been shown to be efficient for many node-level tasks, such as node clustering [80,81,59] or node outlier detection [14]. Node-level model... | https://arxiv.org/abs/2505.22109v1 |
classical AutoEncoders (e.g., images): 1. The encoder $g$ must be permutation invariant. 2. The decoder $f$ must be able to map vectors of fixed dimension to graphs of various sizes. 3. The loss $L$ must be permutation invariant, differentiable, and efficient to compute. Permutation invariant encoder. The first challenge is a we... | https://arxiv.org/abs/2505.22109v1 |
of $O(k(N)N^3)$, where $k(N)$ is the number of iterations until convergence. However, there is no guarantee that the optimizer reaches a global optimum. Another approach, first proposed in PIGVAE [67], is to completely bypass the inner optimization problem by making the matching $T$ a prediction of the model, that is, the mo... | https://arxiv.org/abs/2505.22109v1 |
embedding $Z$ to reconstruct a graph $\hat{G}$ and its node embeddings $\hat{X}$ that will be used for matching as discussed in the next subsection. $f_{\mathrm{graph}}(Z) = \hat{G}$, $f_{\mathrm{nodes}}(Z) = \hat{X}$ (6). The decoder architecture mirrors that of the encoder. First, a transformer encoder updates the graph representation $Z$, then a novel Evoformer decoder modul... | https://arxiv.org/abs/2505.22109v1 |
model is too expensive to train. Table (per model: COLORING, PUBCHEM 16, PUBCHEM 32; each as EDIT. DIST. (↓), GI ACC. (↑)): GraphVAE 2.13, 35.90; 3.72, 07.8; N.A., N.A. PIGVAE* 0.09, 85.30; 1.69, 41.0; 2.53, 24.91. GRALE 0.02, 99.20; 0.11, 93.0; 0.78, 66.80. Architecture. PIGVAE architecture is composed... | https://arxiv.org/abs/2505.22109v1 |
than or equal to 20. Then, for molecular representation learning, we download and preprocess molecules from the PUBCHEM database [ 33]. We denote PUBCHEM 32 and PUBCHEM 16 as the subsets containing 84M and 14M molecules, respectively, with size up to 32 and 16 atoms. PUBCHEM 16 is used for training a lightweight versio... | https://arxiv.org/abs/2505.22109v1 |
a transformer encoder scales with $O(d \max(K, D))$. These findings align with the broader hypothesis that many types of data benefit from being represented as tokens [29,54,26], a direction already explored in recent theoretical works [17]. 5.2 Qualitative properties of GRALE. Capturing graph properties in the embedding... | https://arxiv.org/abs/2505.22109v1 |
representations obtained by GRALE as input for classification and regression tasks in the MoleculeNet benchmark [69]. We compare our method to several graph representation learning baselines, including graph AutoEncoders (PIGVAE [67], VGAE [36]) and contrastive learning methods (Infograph [53], Simgrace [70]). Fo... | https://arxiv.org/abs/2505.22109v1 |
$G_1$ and $G_2$ and plug them into the matcher $T(G_1, G_2) = m(g_{\mathrm{nodes}}(G_1), g_{\mathrm{nodes}}(G_2))$ (11), where we enforce that $T(G_1, G_2) \in \sigma_N$ by replacing the Sinkhorn algorithm with the Hungarian algorithm [15] inside the matcher. Then, we use this (potentially suboptimal) matching to compute an upper bound of the edit distance. In Table 5, we... | https://arxiv.org/abs/2505.22109v1 |
universe database gdb-13. Journal of the American Chemical Society, 131(25):8732–8733. [5] Bolte, J., Pauwels, E., and Vaiter, S. (2023). One-step differentiation of iterative algorithms. Advances in Neural Information Processing Systems, 36:77089–77103. [6] Brogat-Motte, L., Flamary, R., Brouard, C., Rousu, J., and d'Alch... | https://arxiv.org/abs/2505.22109v1 |
B., Laurent, T., Perold, A., LeCun, Y., and Bresson, X. (2023). A generalization of vit/mlp-mixer to graphs. In International Conference on Machine Learning, pages 12724–12745. PMLR. [27] Hlaoui, A. and Wang, S. (2006). Median graph computation for graph clustering. Soft Computing, 10:47–53. [28] Hu, W., Fey, M., Zitn... | https://arxiv.org/abs/2505.22109v1 |
arXiv preprint arXiv:2302.04181. [47] Pan, S., Hu, R., Long, G., Jiang, J., Yao, L., and Zhang, C. (2018). Adversarially regularized graph autoencoder for graph embedding. arXiv preprint arXiv:1802.04407. [48] Piao, C., Xu, T., Sun, X., Rong, Y., Zhao, K., and Cheng, H. (2023). Computing graph edit distance via neural... | https://arxiv.org/abs/2505.22109v1 |
, 45(6):6984–7000. [64] Wang, R., Zhang, T., Yu, T., Yan, J., and Yang, X. (2021). Combinatorial learning of graph edit distance via dynamic embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5241–5250. [65] Wang, T., Liu, H., Li, Y., Jin, Y., Hou, X., and Ling, H. (202... | https://arxiv.org/abs/2505.22109v1 |
Hu, S., Zhang, Z., Yang, C., Liu, Z., Wang, L., Li, C., and Sun, M. (2020). Graph neural networks: A review of methods and applications. AI Open, 1:57–81. [83] Zhu, Y., Du, Y., Wang, Y., Xu, Y., Zhang, J., Liu, Q., and Wu, S. (2022). A survey on deep graph generation: Methods and applications. In Learning on Graphs C... | https://arxiv.org/abs/2505.22109v1 |
Biophysics (BACE), to Physiology (BBBP). In all cases, we preprocess the molecules using the same pipeline as for PUBCHEM 32 to enable transfer learning. (Out of the many available regression targets, we focus only on internal energy.) In particular, we discard all graphs with more than $N = 32$ atoms, resulting in a tru... | https://arxiv.org/abs/2505.22109v1 |
to $\hat{G}$ from $G$ (and vice versa). The possible edit operations are node or edge addition, deletion, or modification, where node/edge modification stands for changing the label of a node/edge. In this paper, we set the cost of all edit operations to 1. It is well known that computing the edit distance is equivalent to solving a graph mat... | https://arxiv.org/abs/2505.22109v1 |
results on neural scaling laws in molecular representation learning, we refer to [10]. We propose two different performance measures. On the one hand, we report the quality of reconstruction (on a test set) against the size of the data set (Figure 9, left). On the other hand, we report the results achieved when the learn... | https://arxiv.org/abs/2505.22109v1 |
we do not observe that the downstream performance deteriorates with higher embedding dimensions. This suggests that learning to encode/decode entire graphs into a Euclidean space is a challenging task, even when the latent space is of large dimension. B.3 Reconstruction failure case. We now highlight an intere... | https://arxiv.org/abs/2505.22109v1 |
30] that produces the hidden representation $(F_L, C_L)$ of the input graph. $(F_{l+1}, C_{l+1}) = \mathrm{EvoformerEncoder}(F_l, C_l)$ (19), where $F_1 \in \mathbb{R}^{n \times d_F}$ (resp. $C_1 \in \mathbb{R}^{n \times n \times d_c}$) is initialized by applying a node-wise (resp. edge-wise) linear layer on $F$ (resp. $C$). The Evoformer Encoder layer used in GRALE is represented in Figure 13. Compared to t... | https://arxiv.org/abs/2505.22109v1 |
node and edge-wise. Similarly, the node-level embeddings of the output graphs are defined as $\hat{X} = \mathrm{Linear}(F^L_Q)$ (25). C.4 Matcher. The matcher uses the node embeddings of the input graph $X$ and target graph $\hat{X}$ to compute the matching $\hat{T}$ between the two graphs. The first step is to build an affinity matrix $K \in \mathbb{R}^{N \times N}$ between the nod... | https://arxiv.org/abs/2505.22109v1 |
and bias matrix $B \in \mathbb{R}^{N \times M}$, the Dot Product Attention writes as: $\mathrm{DPA}(Q, K, V, B) = \mathrm{Softmax}[QK^T + B]V$ (32). More generally, for $B \in \mathbb{R}^{N \times M \times h}$, Multi-Head Attention writes as $\mathrm{MHA}(Q, K, V, B) = \mathrm{CONCAT}(O_1, \ldots, O_h)$ (33), where $O_l = \mathrm{DPA}(q_l[Q], k_l[K], v_l[V], b_l)$ (34), $q_l, k_l, v_l$ are linear layers, and $b_{l,i,j} = B_{i,j,l}$. Cross-Attentio... | https://arxiv.org/abs/2505.22109v1 |
loss cannot take into account the node padding vector $h$, we propose the following extension $L_{\mathrm{PIGVAE+}}(G, \hat{G}, \hat{T}) = \sum_{i}^{N} \ell_h(h_i, [\hat{T}\hat{h}]_i) + \sum_{i}^{N} h_i \ell_F(F_i, [\hat{T}\hat{F}]_i) + \sum_{i,j}^{N} h_i h_j \ell_C(C_{i,j}, [\hat{T}\hat{C}\hat{T}^T]_{i,j})$ (46). We also add a regularization term as suggested in the original paper, and extend it to take into account the padding $\Omega_{\mathrm{PIGVAE+}}$(ˆ... | https://arxiv.org/abs/2505.22109v1 |
The proof remains the same. Note that the assumption made in the original paper is that there exist $h_1, h_2, f_1, f_2$ such that $\ell_C(a, b) = f_1(a) + f_2(b) - \langle h_1(a), h_2(b) \rangle$. Instead, we make the slightly stronger (but arguably simpler) assumption that $\ell_C$ is a Bregman divergence. By definition, any Bregman divergence $\ell$ writes as $\ell($... | https://arxiv.org/abs/2505.22109v1 |
arXiv:2505.22112v1 [cs.AI] 28 May 2025. Visual Large Language Models Exhibit Human-Level Cognitive Flexibility in the Wisconsin Card Sorting Test. Guangfu Hao 1,2, Frederic Alexandre 3, Shan Yu 1,2,*. 1 Laboratory of Brain Atlas and Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, Chi... | https://arxiv.org/abs/2505.22112v1 |
to new rules when the criteria change (see Figure 1). The test's sensitivity to PFC function has been consistently demonstrated through lesion studies [16], neuroimaging research [17], and clinical observations, cementing its status as a crucial tool for understanding cognitive flexibility. While other measures of c... | https://arxiv.org/abs/2505.22112v1 |
human creativity found that ChatGPT-assisted ideas were more creative compared to those generated without LLM assistance [26]. However, challenges persist in other areas. The Test of Time (ToT) benchmark exposed difficulties with complex temporal reasoning tasks, particularly those requiring multi-fact integration and i... | https://arxiv.org/abs/2505.22112v1 |
1 18: end while Additionally, we collected data from 30 cognitively healthy human participants (aged 20-35) as a baseline for comparison. Human participants interacted with a web-based interface designed to replicate the WCST experience while accommodating human response patterns (supplementary Figure s-5). The interf... | https://arxiv.org/abs/2505.22112v1 |
perseverative. NPE = Total Errors − PE (3). NPE captures non-perseverative errors, potentially indicating exploration or random mistakes. Trials to First Category (TFC): The number of trials required to complete the first category, indicating how quickly the model can deduce and consistently apply the first sorting rule. ... | https://arxiv.org/abs/2505.22112v1 |
models on our novel ALIEN Task variant under both STA-TI and CoT-TI conditions. The results, presented in Figure s-1 and Table t-1, demonstrate performance patterns remarkably consistent with those observed in the original WCST. Under the STA-TI condition, all models struggled to complete the ALIEN Task, while under ... | https://arxiv.org/abs/2505.22112v1 |
22.90 (18.22) 15.80 (-) 8.90 (7.14) 0.10 (0.32); CoT-VI 4.80 (0.42) 7.20 (2.82) 2.20 (1.40) 12.70 (1.57) 65.16 (5.32) 0.00 (0.00); CoT-TI 5.00 (0.00) 6.30 (0.82) 2.00 (0.82) 12.00 (0.94) 67.50 (2.74) 0.00 (0.00); Human STA-VI 4.73 (0.45) 6.87 (1.63) 2.80 (1.69) 12.93 (1.62) 65.15 (4.35) 0.10 (0.31). transition from VI to T... | https://arxiv.org/abs/2505.22112v1 |
visual feature recognition (Table II). Claude-3.5 Sonnet demonstrated perfect accuracy across all features, while Gemini-1.5 Pro and GPT-4o showed a decline in visual capabilities, particularly when recognizing how many cards were present in the image and the number of shapes on each card. Notably, GPT-4o almost alway... | https://arxiv.org/abs/2505.22112v1 |
a superior ability to infer and adhere to simpler rule structures, even when the possibility of more complex rules is not explicitly excluded. E. Simulating Cognitive Impairment. To explore the potential of VLLMs in modeling human cognitive impairment without modifying the models, we employed role-playing prompts to s... | https://arxiv.org/abs/2505.22112v1 |
for inhibitory control (PE: 12.80, NPE: 18.70). This pattern suggests that Claude-3.5 Sonnet’s high baseline performance may rely on finely tuned cognitive processes that are more susceptible to disruption when specific aspects of executive function are impaired. Across all models, inhibitory control impairment consis-... | https://arxiv.org/abs/2505.22112v1 |
capabilities and the integration of multimodal information processing warrant further exploration. The potential of VLLMs to simulate specific patterns of cognitive impairment also opens up new possibilities for creating realistic models of neuropsychological conditions, which could have applications in both clinic... | https://arxiv.org/abs/2505.22112v1 |
June 2024 Model Ranking (till 2024-08-31): LMSYS #4 #1 #2; OpenCompass #5 #1 #2; Benchmarks #4 #2 #1. two, three, four) of symbols. A series of 64 response cards is used, each sharing properties with the stimulus cards but in different combinations. The sorting rules are based on three possible categories: color, shape, o... | https://arxiv.org/abs/2505.22112v1 |
WCST cards across 64 trials. The system encompassed five distinct measures: Card Count Accuracy, Color Accuracy, Shape Accuracy, Number Accuracy, and Overall Accuracy. For each trial, models were evaluated on their ability to correctly identify the presence of five cards and accurately describe the color, shape, and nu... | https://arxiv.org/abs/2505.22112v1 |
634,350 $3.29 6,343,505 $32.93 Claude-3.5 Sonnet: STA-VI WCST 27,404 903,104 $2.72 9,031,040 $27.2; STA-TI WCST 7,073 242,113 $0.73 2,421,131 $7.34; CoT-VI WCST 43,704 1,426,502 $4.48 14,265,023 $44.78; CoT-TI WCST 19,257 641,037 $2.08 6,410,367 $20.77; WCST w/o restriction 20,718 675,461 $2.2 6,754,606 $21.96; WCST Goal Maint... | https://arxiv.org/abs/2505.22112v1 |
millions of tokens of context,” arXiv preprint arXiv:2403.05530 , 2024. [12] Anthropic, “Announcements: Claude 3.5 sonnet,” https://www.anthropic. com/news/claude-3-5-sonnet, 2024, accessed: 2024-06-21. [13] E. A. Berg, “A simple objective technique for measuring flexibility in thinking,” The Journal of general psychol... | https://arxiv.org/abs/2505.22112v1 |
“Vision language models are blind,” arXiv preprint arXiv:2407.06581, 2024. [29] R. Loconte, G. Orru, M. Tribastone, P. Pietrini, and G. Sartori, “Challenging ChatGPT ‘intelligence’ with human tools: a neuropsychological investigation on prefrontal functioning of a large language model,” Intelligence, 2023. [30] G. V ... | https://arxiv.org/abs/2505.22112v1 |
Multimodal Forecasting of Sparse Intraoperative Hypotension Events Powered by Language Model. Jintao Zhang 1, Zirui Liu 1, Mingyue Cheng 1, Shilong Zhang 1, Tingyue Pan 1, Qi Liu 1∗, Yanhu Xie 2. 1 University of Science and Technology of China; 2 The First Affiliated Hospital of University of Science and Technology of China. {zjttt, liuzirui}... | https://arxiv.org/abs/2505.22116v1 |
patterns vary with static attributes. Figure 1: (a) IOH events are sparse and exhibit substantial inter-patient variability in onset time, duration, and waveform morphology. (b) MAP series vary significantly across static attributes including age groups, genders, and surgery types. architectures [18,19] demonstrating s... | https://arxiv.org/abs/2505.22116v1 |
features and structured inputs. Meanwhile, the frequency-domain perspective [31] has also been explored. While existing IOH prediction methods have made considerable progress, most are grounded in either biomarker identification or deep learning models that lack the capacity to align patient-specific clinical narrat... | https://arxiv.org/abs/2505.22116v1 |
length $L$, the model predicts a future MAP series of length $T$, as illustrated in Fig. 2. To prevent label leakage and enable realistic forecasting, instances with historical windows overlapping IOH episodes are excluded. To mitigate class imbalance and capture temporal dynamics, we adopt an adaptive slicing strate... | https://arxiv.org/abs/2505.22116v1 |
each patient $p_i$, the personalized clinical description is defined as: $d_i = \phi(a_i, g_i, s_i)$ (2). Here, $\phi$ represents GPT-4o, which generates clinical descriptions $d_i$ based on the static attributes $(a_i, g_i, s_i)$, following a predefined medical template. To enhance the clinical relevance of the language model, the tokenizer is exte... | https://arxiv.org/abs/2505.22116v1 |
to a lightweight denoising decoder composed of stacked linear layers and normalization blocks, which iteratively refine the residual series while reducing computational overhead. A projection layer then maps the refined representation back to the residual space and combines it with the trend component $x_{i,\mathrm{trend}}$ to genera... | https://arxiv.org/abs/2505.22116v1 |
information, thereby providing a stronger language model foundation for subsequent hypotension prediction. 4.4 Task Fine-tuning To adapt the pretrained model to the downstream IOH prediction task, the task fine-tuning stage further refines the representations learned during domain adaptive pretraining, thereby enhancin... | https://arxiv.org/abs/2505.22116v1 |
Full training configurations are provided in Appendix B. 5.2 Results and Discussion Main Results. We conduct comprehensive experiments on the Clinical IOH dataset and VitalDB dataset. Results are summarized in Table 1. Experimental results highlight key differences among baseline models. DLinear’s moderate performance ... | https://arxiv.org/abs/2505.22116v1 |
identify MAP trends and variability, reducing the ability to detect abnormal patterns and generalize across populations and surgery types. Excluding the expanded tokenizer weakens the model’s ability to associate clinical terminology with physiological patterns, diminishing cross-modal representation learning. Using on... | https://arxiv.org/abs/2505.22116v1 |
practical deployability of our framework, we compare its training and inference efficiency with HMF [12], a baseline for IOH prediction. As shown in Figure 5, IOHFuseLM consistently achieves higher runtime efficiency across configurations from the VitalDB and Clinical IOH datasets. The improvements in both training an... | https://arxiv.org/abs/2505.22116v1 |
hypotension using deep learning models based on non-invasive monitoring devices. Journal of Clinical Monitoring and Computing , pages 1–9, 2024. [10] Joon-myoung Kwon, Youngnam Lee, Yeha Lee, Seungwoo Lee, and Jinsik Park. An algorithm based on deep learning for predicting in-hospital cardiac arrest. Journal of the Ame... | https://arxiv.org/abs/2505.22116v1 |
care medicine , 4, 2020. [26] Netsanet Temesgen, Efrem Fenta, Chernet Eshetie, and Moges Gelaw. Early intraoperative hypotension and its associated factors among surgical patients undergoing surgery under general anesthesia: An observational study. Annals of Medicine and Surgery , 71:102835, 2021. [27] Ményssa Cherifa,... | https://arxiv.org/abs/2505.22116v1 |
Wentai Wu, Ruichao Mo, and Haocheng Zhong. Cyclenet: enhancing time series forecasting through modeling periodic patterns. Advances in Neural Information Processing Systems , 37:106315–106345, 2024. [42] Yusuke Tashiro, Jiaming Song, Yang Song, and Stefano Ermon. Csdi: Conditional score-based diffusion models for proba... | https://arxiv.org/abs/2505.22116v1 |
for multidomain learning and predisaster building information extraction from images. Journal of Computing in Civil Engineering , 36(5):04022024, 2022. [58] Qianli Ma, Zhen Liu, Zhenjing Zheng, Ziyang Huang, Siying Zhu, Zhongzhong Yu, and James T Kwok. A survey on time-series pre-trained models. IEEE Transactions on Kn... | https://arxiv.org/abs/2505.22116v1 |
Data Splitting and Forecasting Settings. Both datasets are split into training, validation, and test subsets using a 3:1:1 ratio, preserving temporal consistency without shuffling. Each model ingests a fixed 15-minute historical MAP window and predicts MAP trajectories over future horizons of 5, 10, or 15 minutes. Thes... | https://arxiv.org/abs/2505.22116v1 |
multiscale smoothing and simultaneously introduce fine-grained variations that enrich the temporal structure of the original series. In particular, the augmented outputs retain the essential characteristics of hypotensive episodes while reducing noise, reflecting the ability of MTRDA to reconstruct physiologically mean... | https://arxiv.org/abs/2505.22116v1 |
# LLMs Text: Patient is in the () age group. At this stage, () hormones influence vascular tone. Hemodynamic compliance and compensation are (). It is classified as a () surgery. The estimated blood loss during surgery is (). Figure 9: Illustration of the PCDG Prompt Design framework. To generate patient-specific clinica... | https://arxiv.org/abs/2505.22116v1 |
arXiv:2505.22125v1 [cs.MA] 28 May 2025

SENTIMENT SIMULATION USING GENERATIVE AI AGENTS

Melrose Tia1, Jezreel Sophia Lanuzo1, Lei Rigi Baltazar1, Marie Joy Lopez-Relente2, Diwa Malaya Quiñones3, Jason Albia1∗
1Netopia AI, Inc., Manila, Philippines
2Institute of Statistics, University of the Philippines Los Baños, Laguna
3D...
al. (2020) [13] assessed citizen satisfaction with various government agencies based on social media commentary. Beyond politics, sentiment analysis is widely used in the private sector, where it serves as a critical tool in marketing, advertising, and customer experience strategies. Rathore et al. (2020) [14], for e...
simulation-based modeling paradigm enabled by generative AI. Behavioral science provides the theoretical foundation for this shift. It conceptualizes sentiment as a dynamic construct shaped by cognition, emotion, and situational context. Social psychology suggests that sentiment reflects attitudes formed from beliefs, ... | https://arxiv.org/abs/2505.22125v1 |
agents were instantiated to embody the psychological profiles derived from a nationally representative survey, and their simulated responses are compared with the ground-truth data. More precisely, the contributions of this work are as follows:
•We demonstrate that AI agents can be effectively instantiated to embody the psy...
psychological frameworks, offering a robust basis for generalizing the findings to the broader adult population. Previous psychological studies on Filipino samples, such as those by Church et al. (1997) [43] (N = 629), Del Pilar (2017) [44] (N = 576), and Wapaño (2021) [45] (N = 828), were conducted with smaller, m...
(Negative, Slightly Negative, Neutral, Slightly Positive, and Positive), along with a brief explanatory rationale for its simulated sentiment. After generating its initial sentiment, the agent was prompted with a self-assessment task, asking whether its response was logically consistent with its psychographic profile a... | https://arxiv.org/abs/2505.22125v1 |
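The generate-then-self-assess loop described above can be sketched as follows, with a stubbed `query_llm` standing in for the real model call. The prompts, function names, and revision logic are illustrative, not the study's exact implementation:

```python
SCALE = ["Negative", "Slightly Negative", "Neutral",
         "Slightly Positive", "Positive"]

def query_llm(prompt):
    """Stand-in for a real LLM call; returns (label, rationale)."""
    return "Slightly Positive", "Profile values optimism about the issue."

def simulate_sentiment(profile, issue, max_revisions=1):
    """Generate a sentiment and rationale, then ask the agent to self-check
    whether the response is consistent with its psychographic profile."""
    label, rationale = query_llm(
        f"Profile: {profile}\nIssue: {issue}\nRespond on the scale: {SCALE}"
    )
    for _ in range(max_revisions):
        verdict, _ = query_llm(
            f"Is the response '{label}' logically consistent with the "
            f"profile {profile}? If not, reply with a revised label."
        )
        if verdict in SCALE and verdict != label:
            label = verdict   # the agent revised its own answer
        else:
            break             # self-assessment confirmed the response
    return label, rationale

label, why = simulate_sentiment({"extraversion": "high"}, "a new transport policy")
```

Bounding the loop with `max_revisions` keeps the self-assessment step from oscillating between labels indefinitely.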
[31] used structured prompts with demographic and background details, similar to our contextualized strategy, to elicit trust behaviors from LLMs. Our study advances these efforts by grounding both encoding strategies in real large-scale survey data, allowing systematic comparisons between encoding levels. Agent align...
and contextualized encodings highlights the benefits of translating psychological variable labels into rich psychographic contexts, enabling agents to respond more accurately in alignment with their profiles—a critical foundation for generating psychologically coherent sentiment simulations. 3.2 Sentiment Simulation Pe... | https://arxiv.org/abs/2505.22125v1 |
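One way to picture the difference between label-level and contextualized encodings is a small translation table from raw trait labels to psychographic descriptions. The mapping text and function names below are invented for illustration and are not the study's actual encoding:

```python
# Hypothetical translations from (trait, level) labels to rich context.
CONTEXT = {
    ("extraversion", "high"): "is outgoing and energized by social interaction",
    ("neuroticism", "low"): "stays calm and emotionally stable under stress",
}

def encode_labels(traits):
    """Label-level encoding: bare variable names and levels."""
    return "; ".join(f"{t}: {lvl}" for t, lvl in traits.items())

def encode_contextualized(traits):
    """Contextualized encoding: traits expanded into natural-language
    psychographic descriptions, falling back to the bare label if unmapped."""
    parts = [CONTEXT.get((t, lvl), f"{t} is {lvl}") for t, lvl in traits.items()]
    return "This person " + ", and ".join(parts) + "."

traits = {"extraversion": "high", "neuroticism": "low"}
```

The contextualized form gives the agent behavioral cues to reason from, rather than opaque variable names.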
trials demonstrates its suitability for replicable and scalable behavioral simulations. Our findings highlight three pillars of effective simulation in behavioral science, particularly the social sciences: (1) psychological grounding through contextualized traits, (2) consistency of performance across divers...
than context-reactive responses. 4 Conclusion This study presents a psychographically grounded framework for sentiment simulation, leveraging language model agents embodied with empirically derived psychological profiles. By integrating validated constructs into structured prompts, we enable AI agents to simulate senti... | https://arxiv.org/abs/2505.22125v1 |
Informatics, Multimedia, Cyber and Information System (ICIMCIS), pages 153–158. IEEE, 2020. [8] Jiri Hradec, Nicole Ostlaender, Alba Bernini, et al. Fables: Framework for autonomous behaviour-rich language-driven emotion-enabled synthetic populations. Technical report, Joint Research Centre, 2023. [9] Jana Flor V Vizma...