text (string) | source (string)
2.2.c) LLM-generated: UGEN (Pal et al., 2024) leverages Large Language Models (LLMs) to generate table pairs, aiming to overcome limitations of previous methods by crafting purposefully challenging scenarios, including hard negatives. However, this strategy introduces the risk of ground truth inconsistency, as LL...
https://arxiv.org/abs/2505.21329v2
from these methodologies: (1) excessive overlap, (2) semantic simplicity, and (3) ground truth inconsistencies, which we detail below: 3.1.a) Excessive Overlap: Benchmarks like TUS Small, TUS Large, SANTOS, and the synthetic query portion of the LAKEBENCH derivatives are created by partitioning seed tables horizontally ...
overall observed overlap suggests that synthetic, partitioning-based queries constitute a large portion of the benchmark. The semantic simplicity evident in PYLON's topics and the public origins of data (see https://github.com/RLGen/LakeBench/issues/9) in both PYLON and LAKEBENCH could favor general-purpose models like BER...
a single table vector. These baselines assess whether general semantic embeddings, without task-specific fine-tuning, suffice for high performance on benchmarks with general vocabulary. 4 Experimental Setup To evaluate our hypotheses about benchmark limitations, we employ both simple baseline methods (Section 3...
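The single-vector baseline idea described above can be sketched in a few lines of pure Python. The cell-level serialization, the toy tables, and the count-based weighting below are illustrative assumptions, not the paper's exact configuration:

```python
from collections import Counter
from math import sqrt

def table_to_bow(table):
    # Serialize all cell values of a table into one lowercase bag of words,
    # i.e., a single "table vector". This serialization is an assumption.
    return Counter(str(cell).lower() for row in table for cell in row)

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy data lake: each table is a list of rows (header row included).
tables = {
    "query":  [["city", "population"], ["Oslo", "700000"]],
    "union1": [["city", "inhabitants"], ["Bergen", "280000"]],
    "other":  [["movie", "year"], ["Alien", "1979"]],
}
bows = {name: table_to_bow(t) for name, t in tables.items()}

def top_k(query, k=1):
    # Rank all other tables by similarity to the query table.
    scores = {n: cosine(bows[query], b) for n, b in bows.items() if n != query}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

The point of such a baseline is exactly what the text states: if general vocabulary overlap alone ranks unionable tables highly, the benchmark may not require task-specific table understanding.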
availability at the time of writing. We provide further implementation details in Appendix B.2. 4.3 Evaluation Procedure We use a consistent evaluation procedure for all baseline and SOTA methods to ensure fair comparison. Table vectors are generated per method (Section 3.2 for baselines; SOTA-specific proce- dures...
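The R@k metric used throughout, and its gap to the best achievable ("IDEAL") recall, can be computed as follows. This is a minimal sketch: the function names are illustrative, and the `min(k, |relevant|)` upper bound assumes IDEAL denotes the best recall attainable when only k results may be returned:

```python
def recall_at_k(retrieved, relevant, k):
    # Fraction of ground-truth unionable tables found in the top-k results.
    hits = sum(1 for t in retrieved[:k] if t in relevant)
    return hits / len(relevant) if relevant else 0.0

def ideal_recall_at_k(relevant, k):
    # Upper bound: even a perfect ranker can return at most k tables,
    # so recall is capped when the ground truth has more than k entries.
    return min(k, len(relevant)) / len(relevant) if relevant else 0.0
```

A persistent gap between measured R@k and this bound across all methods is what motivates re-examining the ground truth itself.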
but still under-performs SBERT variants. 5.2 Ground Truth Reliability Issues A notable observation across UGEN and LAKEBENCH derivatives is the significant gap between the R@k achieved by all methods and the IDEAL recall (Table 2). This discrepancy led us to question the reliability of the benchmarks’ ground truth labe...
TFIDF/COUNT | 0m 53s, 0m 0s | 1m 45s, 0m 1s | 3m 10s, 0m 2s | 0m 22s, 0m 1s | 0m 9s, 0m 0s | 0m 12s, 0m 0s | 22m 22s, 0m 31s | 37m 14s, 0m 21s | 6m 21s, 0m 22s
SBERT | 1m 45s, 0m 0s | 3m 30s, 0m 0s | 9m 21s, 0m 15s | 3m 18s, 0m 0s | 1m 41s, 0m 0s | 2m 20s, 0m 0s | 27m 47s, 0m 4s | 82m 13s, 0m 4s | 30m 45s, 0m 3s
Specialized Table Union Search Methods
STARMIE | 19m 3s, 1m 2s | ...
Table 4. Beyond heuristic metrics, we also conduct a more direct (though still imperfect) assessment of UGEN's ground truth using an LLM-as-a-judge approach. While this method may not capture the same conflicts identified by the cheaper GTFP/GTFN heuristics, it provides a complementary perspective that can offer more pre...
the query table as a valid candidate for itself. Therefore, the top-1 match is always correct by construction, yielding no disagreement @1.
GT Label | LLM Judge | UGEN V1 | UGEN V2
Unionable | Non-unionable | 24.8% | 0.0%
Non-unionable | Unionable | 33.8% | 23.6%
Non-unionable | Non-unionable | 16.2% | 76.4%
Unionable | Non-unionable | 25.2% | 0....
by introducing NL conditions for union and join searches on column values and table size constraints. However, its predicate-style conditions may be better addressed via post-retrieval filtering (e.g., translating NL to SQL predicates with an LLM), avoiding early discard of unionable candidates and unnecessary retrie...
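The post-retrieval filtering idea can be illustrated with a toy sketch. The candidate records and the hand-written predicate below (standing in for one translated from an NL condition, e.g. by an LLM) are hypothetical:

```python
# Hypothetical candidate set returned by a union-search retriever.
candidates = [
    {"name": "sales_2021", "n_rows": 1200},
    {"name": "sales_2022", "n_rows": 90},
    {"name": "sales_2023", "n_rows": 4500},
]

def post_filter(cands, predicate):
    # Applying the size/value condition AFTER retrieval means unionable
    # tables are never discarded before they can be ranked.
    return [c["name"] for c in cands if predicate(c)]

# Predicate a system might derive from "tables with at least 1000 rows".
large_only = post_filter(candidates, lambda c: c["n_rows"] >= 1000)
```

The design point is ordering: retrieval optimizes for unionability alone, and the predicate only prunes the already-retrieved list.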
the VLDB Endowment, 14(12):2791–2794. Pei Chen, Soumajyoti Sarkar, Leonard Lausen, Balasubramaniam Srinivasan, Sheng Zha, Ruihong Huang, and George Karypis. 2023. Hytrel: Hypergraph-enhanced tabular data representation learning. Advances in Neural Information Processing Systems, 36:32173–32193. Tianji Cong, Fa...
based Semantic Table Union Search. Proceedings of the ACM on Management of Data, 1(1):1–25. Aamod Khatiwada, Harsha Kokel, Ibrahim Abdelaziz, Subhajit Chaudhury, Julian Dolby, Oktie Hassanzadeh, Zhenhan Huang, Tejaswini Pedapati, Horst Samulowitz, and Kavitha Srinivas. 2025. TabSketchFM: Sketch-based tabular repre...
used in our experiments, complementing the core methodology described in Sections 3.2 and 4.3. B.1 Lexical Baselines (Hashing, TF-IDF, Count) Implementation Details Vectorizers: We used implementations from scikit-learn. All vectorizers were configured with lowercase=True. • TfidfVectorizer and CountVectorizer: Us...
compared to naive matching, while remaining more precise than approximate search approaches.
Benchmark | Sampling | Augmentation
SANTOS | tfidf_entity | drop_col
TUS (Small) | alphaHead | drop_cell
TUS Large | tfidf_entity | drop_cell
PYLON | tfidf_entity | drop_col
UGEN V1 | tfidf_entity | drop_col
UGEN V2 | tfidf_entity | drop_col
LB-OPENDATA ...
suggesting non-unionability. D LLM Adjudicator D.1 Prompt Details To systematically re-evaluate potential ground truth inconsistencies in the UGEN benchmarks, we employed an LLM-based adjudicator. This process targeted disagreements identified during our analysis, specifically Ground Truth False Positives (GTFPs, ...
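The bookkeeping behind such disagreement counts can be sketched as follows. The dictionary shapes and pair names are illustrative; the actual GTFP/GTFN heuristics and the adjudicator prompt are defined in the paper's appendix:

```python
def disagreements(gt_labels, judge_labels):
    # gt_labels / judge_labels map a (query, candidate) pair to True when
    # that pair is considered unionable.
    # GTFP: labeled unionable in the ground truth but judged non-unionable.
    gtfp = [p for p, gt in gt_labels.items() if gt and not judge_labels[p]]
    # GTFN: labeled non-unionable in the ground truth but judged unionable.
    gtfn = [p for p, gt in gt_labels.items() if not gt and judge_labels[p]]
    return gtfp, gtfn

gt = {("q1", "c1"): True, ("q1", "c2"): True, ("q1", "c3"): False}
judge = {("q1", "c1"): True, ("q1", "c2"): False, ("q1", "c3"): True}
gtfp, gtfn = disagreements(gt, judge)
```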
unionable but not labeled as such. Figure 5: Examples of LB-WEBTABLE Ground Truth Incompleteness. Source: OpenData (Canada) Query: CAN_CSV0000000000000659.csv Candidate: CAN_CSV0000000000000562.csv REF_DATE GEO Age group Sex ... VALUE 2003 Canada Total, 12 years and over Both sexes ... 20723896.0 2003 Canada Total, 12...
rows from one table to the other should result in a dataset that makes logical sense. 2. Meaningful Column Alignment: There must be a reasonable set of columns across the two tables that represent the same underlying attributes or concepts. * These columns can have DIFFERENT NAMES (e.g., "Cust_ID" vs. "ClientIdentifier...
arXiv:2505.21335v1 [cs.GR] 27 May 2025
Structure from Collision
Takuhiro Kaneko, NTT Corporation
Abstract: Recent advancements in neural 3D representations, such as neural radiance fields (NeRF) and 3D Gaussian splatting (3DGS), have enabled the accurate estimation of 3D structures from multiview images. However, this c...
https://arxiv.org/abs/2505.21335v1
has succeeded in capturing the bias in the location of the holes (b)(d). For example, in Figure 1, the two objects have different internal structures, as shown in Figure 1(3)(b) and (3)(d). However, they are identical in the static images, as shown in Figure 1(1)(a) and (1)(c). Consequently, a standard static neural 3D...
is searched for through an annealing process that repeatedly reduces and expands the volume. We comprehensively evaluated the proposed method using a dataset containing 115 objects with diverse structures (i.e., various cavity shapes, locations, and sizes) and material properties. Our results reveal the properties o...
However, they lose flexibility and are difficult to apply to scenes or objects that cannot be explained by physics. This study adopts a physics-informed model (the second-category strategy) because SfC is an ill-posed problem, and physics plays an important role in narrowing the solution space. However, in the fu...
in the training dataset. t ∈ {t0, . . . , tN−1} represents the time, where N is the total number of frames. Given these data, we aim to estimate the 3D structure (both external and internal) of the object P(t0), which corresponds to the ground truth P̂(t0). Here, we represent the 3D structures as particle sets,...
(DiffMPM) [26]. Particle–grid interconverter. DiffMPM is a particle-based method that conducts simulations in a Lagrangian space. However, these particles do not necessarily lie on the ray, which makes rendering difficult. Considering this, PAC-NeRF renders in an Eulerian grid space with voxel-based NeRF [63] and brid...
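The kind of Lagrangian-to-Eulerian bridging described above can be illustrated with a minimal NumPy sketch: splatting per-particle values onto a regular voxel grid with trilinear weights. This is only a conceptual sketch; PAC-NeRF's actual interconverter is more involved:

```python
import numpy as np

def particles_to_grid(points, values, res, lo=0.0, hi=1.0):
    # Splat per-particle scalar values onto a res^3 grid with trilinear
    # weights, then normalize by the accumulated weights. Domain bounds
    # (lo, hi) and the normalization scheme are illustrative assumptions.
    grid = np.zeros((res, res, res))
    wsum = np.zeros_like(grid)
    x = (points - lo) / (hi - lo) * (res - 1)  # continuous grid coords (N, 3)
    base = np.floor(x).astype(int)
    frac = x - base
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                idx = base + np.array([dx, dy, dz])
                np.clip(idx, 0, res - 1, out=idx)
                # Trilinear weight of this corner for every particle.
                w = (np.where(dx, frac[:, 0], 1 - frac[:, 0])
                     * np.where(dy, frac[:, 1], 1 - frac[:, 1])
                     * np.where(dz, frac[:, 2], 1 - frac[:, 2]))
                np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), w * values)
                np.add.at(wsum, (idx[:, 0], idx[:, 1], idx[:, 2]), w)
    return np.where(wsum > 0, grid / np.maximum(wsum, 1e-12), 0.0)
```

Once particle quantities live on the grid, voxel-based volume rendering can proceed along rays as usual.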
with volume annealing. Physical constraints. As discussed in Section 3.1, we assume that the physical properties related to the material (e.g., Young's modulus Ê, Poisson's ratio ν̂, and density ρ̂) and mass m̂ are known. We utilize them to narrow the solution space of SfC. Physical constraints on material propertie...
Hence, we employ a pixel-preserving loss that preserves the appearance of the initial frame. L_pixel0 = (1/|R̂|) Σ_{r∈R̂} ‖C(r, t0) − Ĉ(r, t0)‖²₂. (9) This is a variant of the pixel loss (Equation 3) when N = 1. Because the constraints on the 2D projection plane alone are insufficient for preserving the 3D structure (e.g., objects wit...
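Equation 9 amounts to averaging the squared color error over the rays of the first frame. A minimal sketch, assuming the rendered and ground-truth colors are stacked as (num_rays, 3) arrays:

```python
import numpy as np

def pixel_preserving_loss(pred_rgb, gt_rgb):
    # L_pixel0 = (1/|R|) * sum_r ||C(r, t0) - C_hat(r, t0)||_2^2,
    # averaged over the sampled ray set of the initial frame (Eq. 9 sketch).
    diff = pred_rgb - gt_rgb              # (num_rays, 3) color residuals
    return float(np.mean(np.sum(diff ** 2, axis=-1)))
```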
structure, focusing on the cavity sizes (Experiment I in Section 4.2) and locations (Experiment II in Section 4.3). We then explored the effect of the material properties in Experiment III (Section 4.4). The main results are summarized here, with the detailed results and implementation details provided in the Appen...
SfC, we adapted previous methods to make them [Figure 4 panel scores omitted; panels: (a) GT, (b) Static, (c) GO, (d) GO mass, (e) LPO, (f) LPO mass, (g) SfC-NeRF −mass, (h) SfC-NeRF −APL, (i) SfC-NeRF −APT, (j) SfC-NeRF −key, (k) SfC-NeRF −VA, (l) SfC-NeRF] Figure 4. Comparison of learned str...
0.332 0.661 0.335
SfC-NeRF −key | 0.082 | 0.127 | 0.211 | 0.325 | 0.186
SfC-NeRF −VA | 0.146 | 0.293 | 0.370 | 0.456 | 0.316
SfC-NeRF | 0.081 | 0.122 | 0.195 | 0.262 | 0.165
Table 1. Comparison of CD (×10³, ↓) when varying the cavity size sc. The scores were averaged over five external shapes.
lc | left | right | up | down | Avg.
Static | 0.841 | 0.842 | 0.815 | 0.8...
material properties. Table 3 summarizes the quantitative results for elastic materials when Ê and ν̂ were varied. Table 4 summarizes the quantitative results for other materials. Appendix B.2 and the project page present the qualitative results. These results demonstrate that SfC-NeRF improves the structure estima...
impact on volume estimation in Lmass. However, in other cases, the degradation is moderate. All the scores exceed those of the baselines listed in Table 1 (e.g., 0.841 by LPO). These results indicate that the proposed method is robust against inaccurate physical properties. Additional challenges associated with real ...
Quan Zheng, Erik Franz, Hans-Peter Seidel, Christian Theobalt, and Rhaleb Zayer. Physics informed neural fields for smoke reconstruction with sparse data. ACM Trans. Graph., 41(4), 2022. 2, 3 [14] Yu Deng, Jiaolong Yang, Jianfeng Xiang, and Xin Tong. GRAM: Generative radiance manifolds for 3D-aware image generation. ...
Kaneko. Improving physics-augmented continuum neural radiance field-based geometry-agnostic system identification with Lagrangian particle optimization. In CVPR, 2024. 2, 3, 4, 7, 17 [32] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3D Gaussian splatting for real-time radiance field...
Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph., 41(4), 2022. 2 [50] Thomas Neff, Pascal Stadlbauer, Mathias Parger, Andreas Kurz, Joerg H. Mueller, Chakravarty R. Alla Chaitanya, Anton Kaplanyan, and ...
Menglei Chai, Yun Fu, and Sergey Tulyakov. R2L: Distilling neural radiance field to neural light field for efficient novel view synthesis. In ECCV, 2022. 2 [69] Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Ima...
5. Discussion
6. Conclusion
A. Detailed analyses and discussions
A.1. Detailed ablation studies
A.1.1. Effect of each appearance-preserving loss
A.1.2. Effect of keyframe selection
A.1.3. Effect of background loss ...
loss L_pixel0 (Equation 9) and depth-preserving loss L_depth0 (Equation 10). These losses help prevent the degradation of the external structure, which is effectively learned from the first frame of the video sequence during the fitting process across the entire video sequence. In the ablation study presented in Section...
sc is varied. The score indicates CD (×10³, ↓). When k = None, the keyframe pixel loss L_pixelk was not used. In contrast, when k ∈ {6, 9}, L_pixelk was used.
k | left | right | up | down | Avg.
None | 0.308 | 0.296 | 0.307 | 0.313 | 0.306
6 | 0.303 | 0.258 | 0.274 | 0.291 | 0.281
9 | 0.296 | 0.296 | 0.313 | 0.303 | 0.302
Table 10. Analysis of the effect of keyframe sele...
proposed method is effective for improving the performance of SfC, even without the use of advanced techniques, such as background loss. A.2. Extended experiments A.2.1. Experiment IV: Influence of collision angle In the above experiments, the collision angle was fixed, as shown in Figures 6–13, regardless of the inter...
after the collision became asymmetrical. Consequently, the ease of estimating the internal structure also became asymmetrical. A.3. Evaluation from multiple perspectives A.3.1. Evaluation through video sequences In the main experiments, we evaluated the models using the chamfer distance between the ground-truth parti...
(1.811)
SfC-NeRF | 0.303 | 0.308 | 0.258 | 0.313 | 0.274 | 0.273 | 0.291 | 0.307 | 0.281 | 0.300
(ACD) | (0.367) | (1.821) | (0.431) | (1.647) | (0.448) | (1.262) | (0.417) | (1.204) | (0.416) | (1.483)
Table 15. Comparison of CD and ACD (×10³, ↓) when the cavity location lc is varied. This is an extended version of Table 2. For each condition, the left score indicat...
video, ACD static is smaller than ACD video. This is because the difference in location gradually increased after the collision when the cavity was located on the opposite side. As the objective of this study was to correctly predict the shape rather than the location, ACD static is a more valid evaluation than ACD vi...
outperformed both the baseline and ablated models in most cases. A.4. Possible challenges with real data As discussed in Section 5, because SfC is a novel task, this study focused on evaluating its fundamental performance using simulation data, leaving validation with real data a challenge for future research. However, ...
[Figure residue: per-panel scores omitted; panels: (a) before collision, (b) after collision, (c) ground truth, (d) Static, (e) GO, (f) GO mass, (g) LPO, (h) LPO mass, (i) SfC-NeRF −mass, (j) SfC-NeRF −APL, (k) SfC-NeRF −APT, (l) SfC-NeRF −key, (m) SfC-NeRF −VA, (n) SfC-NeRF; conditions (1)–(8) vary the cavity size sc and location lc]
[Figure residue: per-panel scores omitted] Figure 8. Comparison of learned internal structures for bicone objects. The view in the figure is the same as that of Figure 6. [Further panel labels omitted: (e) GO, (f) GO mass, (g) LPO, (h) LPO mass, (j) SfC-N...]
in all cases. As a result, the same internal structure was learned across all variations. In contrast, in SfC-NeRF (e), the internal structure was learned using video sequences with different appearances. In this example, the same internal structure is expected to be learned in all cases. However, the varying appearanc...
setting (Young's modulus Ê = 2×10⁶, Poisson's ratio ν̂ = 0.3, and yield stress τ̂Y = 1.54×10⁴). (6) Plasticine with the "Cat" setting (Ê = 10⁶, ν̂ = 0.3, and τ̂Y = 3.85×10³). (7) Sand with the "Trophy" setting (θ̂fric = 40°). These results demonstrate that SfC-NeRF ((e) and (j)) improves structure estimation compared to St...
materials: those with four different Young's moduli Ê ∈ {2.5×10⁵, 5×10⁵, 2×10⁶, 4×10⁶} and four different Poisson's ratios ν̂ ∈ {0.2, 0.25, 0.35, 0.4}. (c-2) Seven different materials: two Newtonian fluids, two non-Newtonian fluids, two plasticines, and one sand. Their physical properties were derived from the PAC-NeRF da...
set to 0.9 and 0.999, respectively. We found that a high learning rate is useful for efficiently reducing the volume density; however, this is not necessary when the estimated mass m sufficiently approaches the ground-truth mass m̂. Therefore, we divided the learning rate by 2 (with a minimum of 0.1) as long as the esti...
arXiv:2505.21339v1 [cs.LG] 27 May 2025
An Uncertainty-Aware ED-LSTM for Probabilistic Suffix Prediction
Henryk Mustroph, Michel Kunkler, Stefanie Rinderle-Ma
Technical University of Munich, TUM School of Computation, Information and Technology, Garching, Germany
{henryk.mustroph, michel.kunkler, stefanie.rinderle-ma}@tum....
https://arxiv.org/abs/2505.21339v1
as delays in deliveries from external stakeholders or the involvement of humans in process execution. In this work, instead of predicting a single most likely suffix, we consider epistemic and aleatoric uncertainties to predict a probability distribution of suffixes. In line with the term probabilistic learning, whic...
can henceforth be learned in a probabilistic model. Uncertainty-Aware Neural Networks (NN). For NNs, two common approaches for estimating a model’s uncertainty in its prediction are Bayesian approximation and ensemble learning-based techniques [1]. Bayesian approximation can be conducted with Bayesian Neural Networks (...
learned on the predicted logits. Since the logits are passed through the Softmax function, MC integration has to be applied, i.e., averaging the cross-entropy loss over multiple draws from the logit distributions. We denote the number of MC trials by T, the categorical classes by C, and the ground truth class by c: ˆzi...
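The MC integration step can be sketched in NumPy: draw T logit vectors, apply a numerically stable log-softmax to each, and average the cross-entropy of the ground-truth class. The diagonal-Gaussian logit assumption and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_cross_entropy(mu, sigma, c, T=1000):
    # Draw T logit vectors z_t ~ N(mu, diag(sigma^2)).
    z = rng.normal(mu, sigma, size=(T, len(mu)))
    # Numerically stable log-softmax per draw.
    z = z - z.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Average the per-draw cross-entropy of ground-truth class c.
    return float(-log_probs[:, c].mean())
```

With sigma driven to zero, the estimate collapses to the ordinary cross-entropy of the mean logits, which is a useful sanity check.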
and output continuous event attributes: when assuming that the observation noise follows a Log-Normal distribution, we first transform the attributes into log-space by applying the natural logarithm as ln(1 + x), ensuring that only positive values are passed to the logarithm. After this step, we apply standard s...
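The log-space transformation pairs naturally with NumPy's `log1p`/`expm1`; the standard-scaling details below (fitting mean and standard deviation in log-space) are a minimal sketch of the described preprocessing:

```python
import numpy as np

def fit_log_scaler(x):
    # Map to log-space with ln(1 + x) so zeros and small positives are safe,
    # then record the mean/std needed for standard scaling.
    z = np.log1p(x)
    return z.mean(), z.std()

def transform(x, mu, sd):
    return (np.log1p(x) - mu) / sd

def inverse_transform(z, mu, sd):
    # Undo the scaling, then undo ln(1 + x) with expm1.
    return np.expm1(z * sd + mu)
```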
state in the encoder. The decoder receives the latent vector tuple from the encoder along with the last event from the prefix. At each subsequent timestep, the model uses the previously updated latent vector tuple and a previous event. During training, teacher forcing is applied, selecting either the event from the ta...
to sample the resulting event individually. Event attributes are sampled differently depending on whether they are continuous or categorical. For continuous event attributes, the event attribute values are directly drawn from a Normal distribution with the predicted means and variances. For categorical event attributes...
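The per-attribute sampling step can be sketched as follows. Attribute names are illustrative, and the degenerate distributions in the example are chosen only so the output is deterministic:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_event(cont_means, cont_vars, cat_probs):
    # Continuous attributes: draw from Normal(mean, var).
    cont = {k: rng.normal(m, np.sqrt(cont_vars[k])) for k, m in cont_means.items()}
    # Categorical attributes: draw a class index from the predicted distribution.
    cats = {k: rng.choice(len(p), p=p) for k, p in cat_probs.items()}
    return cont, cats

cont, cats = sample_event(
    cont_means={"duration": 5.0},
    cont_vars={"duration": 0.0},             # zero variance -> deterministic draw
    cat_probs={"activity": [0.0, 1.0, 0.0]}, # degenerate -> always class 1
)
```

Repeating this draw per timestep, and re-feeding the sampled event into the decoder, yields one sampled suffix; many such runs approximate the suffix distribution.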
Table 1: Dataset Properties
Dataset | Cases | Events | Variants | Activities | Mean–SD Case Length | Mean–SD Case Duration | Cat. Event Attr. | Con. Event Attr.
Helpdesk | 4580 | 21348 | 226 | 14 | 4.66 – 1.18 | 40.86 – 8.39 (days) | 12 | 4
Sepsis | 1049 | 15214 | 845 | 16 | 14.48 – 11.47 | 28.48 – 60.54 (days) | 26 | 8
BPIC17 | 31509 | 1202267 | 15930 | 26 | 38.16 – 16.72 | 21.90 – ...
LSTMs, along with an FC layer in the decoder containing separate mean and variance heads for each output event attribute. We assumed normally distributed continuous event attributes and noise. All event attributes were used as input features for the encoder, while only the activity and time attributes were used as inpu...
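A mean/variance head for a continuous attribute is typically trained with the Gaussian negative log-likelihood. The sketch below (predicting log-variance so the variance stays positive) is a standard formulation of such an aleatoric-uncertainty objective, not necessarily the authors' exact loss:

```python
import numpy as np

def gaussian_nll(mu, log_var, y):
    # NLL of targets y under N(mu, exp(log_var)), averaged over samples.
    # Predicting log-variance keeps the variance strictly positive.
    var = np.exp(log_var)
    return float(np.mean(0.5 * (np.log(2 * np.pi * var) + (y - mu) ** 2 / var)))
```

Minimizing this loss pushes the mean head toward the target while letting the variance head absorb irreducible noise, which is what allows the model to report aleatoric uncertainty per attribute.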
works [3, 9, 15, 24, 25, 28]. Additionally, we report results from other models in the literature without re-implementation and with different hyperparameter settings. These results are not intended for direct performance comparison but rather to indicate that the predictive performance of our models is reasonable. I...
a dropout rate of 0.2 for regularization during training. Additionally, MM-Pred includes a component called the “Modulator”, which learns the significance of each event label and lifecycle transition attribute and passes this information as an additional feature to the decoder. ED-LSTM-GAN from [25] was evaluated on the Helpde...
the MAE of suffix length predictions for the Helpdesk dataset when comparing Setting 1 to Setting 2. One possible reason Setting 2 provided the best results on the Helpdesk dataset could be the deeper architecture with a 4-layer encoder and decoder LSTMs. This enhanced the U-ED-LSTM’s ability to abstract higher-level p...
Table 3: Predictive Performance (Categorical): Suffix Length MAE and Suffix Event Labels DLS
Method | Suffix Length MAE (Helpdesk, Sepsis, BPIC17, PCR) | Suffix Event Labels DLS (Helpdesk, Sepsis, BPIC17, PCR)
Own Results:
Most likely - Setting 1 | 0.96 | 27.59 | 13.74 | 1.48 | 0.53 | 0.1 | 0.35 | 0.83
Probabilistic - Setting 1 | 0.74 | 6.84 | 14.29 | 1.98 | 0.44 | 0...
15262002.78 | 557.32 | 24.87 | 4448.17 | 412893.69
ED-LSTM from Lit.:
MM-Pred [15] | - | - | - | - | - | - | - | -
ED-LSTM-GAN [25] | 6.21 | - | 13.95 | - | - | - | - | -
AE [13] | 3.83 | 735.04 | 69.51 | - | - | - | - | -
AE-GAN [13] | 3.88 | 187.12 | 100.19 | - | - | - | - | -
ED-LSTM [28] | - | - | 8.44 | - | - | - | - | -
Transformer from Lit.:
SuTraN [28] | - | - | 5.5 | - | - | - | - | -
probabilistic approach can ob...
[Plot residue omitted: remaining time (event sum) MAE (days) over prefix/suffix length for BPIC17] Figure 2: Predictive Performance - Setting 1 [Plot residue omitted: suffix length MAE over prefix length, most-likely suffix vs. mean probabilistic suffix, with instance counts] ...
[Plot residue omitted: suffix length MAE, DLS, and remaining time MAE (days) over prefix/suffix length for BPIC17, most-likely suffix vs. mean probabilistic suffix with IQR range and instance counts] Figure 4: Predictive Perf...
a systematic bias (slope at 0) is visible. The model tends to predict values that are too large. Nevertheless, for the Helpdesk dataset, the U-ED-LSTM, especially in Setting 2, can capture the variability in the remaining time (sum and last) from the test dataset quite well. For the Sepsis dataset, the U-ED-LSTM consi...
[Plot residue omitted: PIT histograms (probability density vs. PIT; reference line y = 1) for last case elapsed time and sum of event processing times on Sepsis, PCR, and BPIC17] ...
construct prediction intervals for the next activity prediction to improve interpretability. However, they do not evaluate their approach on open-source, real-world datasets. In [19, 22], Bayesian Networks are used to predict the sequence of activities, but Bayesian networks cannot handle large and complex data. 6 C...
Trans. Serv. Comput. , 16(4):2330–2342, 2023. doi: 10.1109/TSC.2023.3245726. [10] Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation , 9(8):1735–1780, 1997. doi: 10.1162/neco.1997.9.8.1735. [11] Eyke Hüllermeier and Willem Waegeman. Aleatoric and epistemic uncertainty in machine learning...
ACM Trans. Intell. Syst. Technol., 10(4):34:1–34:34, 2019. doi: 10.1145/3331449. [27] Hans Weytjens and Jochen De Weerdt. Learning uncertainty with artificial neural networks for predictive process monitoring. Appl. Soft Comput., 125:109134, 2022. doi: 10.1016/J.ASOC.2022.109134. [28] Brecht Wuyts, Seppe K. L. M...
arXiv:2505.21344v1 [cs.AI] 27 May 2025
The Multilingual Divide and Its Impact on Global AI Safety
Aidan Peppin*2, Julia Kreutzer*1, Alice Schoenauer Sebag*2, Kelly Marchisio*2, Beyza Ermis1, John Dang1, Samuel Cahyawijaya2, Shivalika Singh1, Seraphina Goldfarb-Tarrant2, Viraat Aryabumi2, Aakanksha2, Wei-Yin Ko2, Ahmet Ü...
https://arxiv.org/abs/2505.21344v1
this paper, we articulate how these approaches have addressed language disparity and global safety gaps in AI models. This paper is written for both research and policy experts to help provide an overview of the key challenges that remain in bridging the language gap and minimizing safety risks across languages. We pro...
technologies and preserving cultural representation in the digital age. Large language models are finding beneficial applications in a range of contexts across societies and economies around the world. However, the vast majority of language models are currently optimized for a small handful of languages, and the Englis...
in terms of available resources with the example of textual datasets hosted on HuggingFace and the number of Wikipedia pages in each language (stats from Ranathunga & de Silva (2022)), for a set of high- and lower-resource languages. These represent popular sources of textual data for training current LLMs, and highl...
because of ongoing warfare, creating challenges for organizers when mailing out Aya gifts to thank committed volunteers. Ultimately, organizers were not able to send gifts to thank researchers who participated from Somalia, Yemen, and Palestine. For Somalia and Yemen, Canada Post, DHL, and FedEx were all unable to ...
synthetic data and advanced evaluation methods, while development in low-resource languages is hindered by limited data and unreliable assessments, leading to a widening divide in model capabilities and access. ➤ This gap results in higher costs and poorer performance for non-English languages, leaving many communiti...
English than to other languages (Cahyawijaya et al., 2023b; Yong et al., 2023a; Wendler et al., 2024; Aakanksha et al., 2024a;b), and introduces biases against languages and cultural perspectives seen rarely in model training (Schwartz et al., 2022; Kunchukuttan et al., 2021; Kotek et al., 2023; Khandelwal et al., 2023...
— predominantly English — or overfit to types of harm common in Western-centric datasets (Sambasivan et al., 2021; Shen et al., 2024). Approaches to remedying the generation of violent, biased, false, or toxic content (Weidinger et al., 2021) are largely oriented towards English or monolingual settings, and there is a ...
initiative to date, involving 3000 independent collaborators across 119 countries. The inaugural Aya 101 release doubled the number of languages covered by existing AI models and released the largest ever collection of multilingual instruction fine-tuning data, with 513 million prompts and completions covering 114 languages (...
is a language-parallel evaluation set: the same questions are asked across languages. This allows for control of question difficulty and topic, and results can be interpreted apples-to-apples across languages. However, it means that the quality of the benchmark rests on the quality of translation by human annotators (i...
harmful worldwide) or “local” (harm is tied to specific cultural or historical contexts). Evaluations should reflect relevant generative use cases across modalities. Language models have historically been evaluated on discriminative tasks, in which models have to answer multiple-choice questions (such as MMLU (Hendrycks et al., 2020)). A...
Cahyawijaya et al., 2025). Aya 101 was organized as a global open science project dedicated to collecting high-quality, human-annotated instruction-style data and building a model to serve 101 languages. The Aya initiative adopted a decentralized approach, empowering contributors, regardless of academic or professional backgroun...
A core safety guardrail for language models is the ability to refuse to respond to potentially harmful prompts. For example, when a model is prompted to produce hate speech, it will refuse to do so. To develop the Aya 101 model and ensure its ability to refuse harmful prompts across different languages, we used ‘safety...
download in certain regions of the world. 6 Conclusion and recommendations for policy makers The language gap in AI is a significant issue that risks excluding communities from the benefits of language models, undermining model safety, and exacerbating existing social, linguistic, and cultural inequalities, particularly for s...
IrokoBench: A new benchmark for African languages in the age of large language models, 2024. URL http://arxiv.org/abs/2406.03368. Muhammad Farid Adilazuarda, Samuel Cahyawijaya, Genta Indra Winata, Pascale Fung, and Ayu Purwarianti. IndoRobusta: Towards robustness against diverse code-mixed Indonesian local langu...
Dash, David Cairuz, Hangyu Lin, Bharat Venkitesh, Madeline Smith, Jon Ander Campos, Yi Chern Tan, Kelly Marchisio, Max Bartolo, Sebastian Ruder, Acyr Locatelli, Julia Kreutzer, Nick Frosst, Aidan Gomez, Phil Blunsom, Marzieh Fadaee, Ahmet Üstün, and Sara Hooker. Aya 23: Open weight releases to further multilingual pr...
and James Zou. Safety-tuned LLaMAs: Lessons from improving the safety of large language models that follow instructions. In The Twelfth International Conference on Learning Representations , 2024. URL https://openreview.net/forum?id=gT5hALch9z . Steven Bird. Local languages, third spaces, and other high-resource scenar...
Ravi Shulthan Habibi, Muhammad Reza Qorib, Amit Agarwal, Joseph Marvin Imperial, Hitesh Laxmichand Patel, Vicky Feliren, Bahrul Ilmi Nasution, Manuel Antonio Rufino, Genta Indra Winata, Rian Adam Rajagede, Carlos Rafael Catalan, Mohamed Fazli Imam, Priyaranjan Pattnayak, Salsabila Zahirah Pranida, Kevin Pratama, Yeshil Bange...
John Dang, William Darling, Omar Darwiche Domingues, Saurabh Dash, Antoine Debugne, Théo Dehaze, Shaan Desai, Joan Devassy, Rishit Dholakia, Kyle Duffy, Ali Edalati, Ace Eldeib, Abdullah Elkady, Sarah Elsharkawy, Irem Ergün, Beyza Ermis, Marzieh Fadaee, Boyu Fan, Lucas Fayoux, Yannis Flet-Berliac, Nick Frosst, Matthi...
pp. 13134–13156, Miami, Florida, USA, November 2024a. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.729. URL https://aclanthology.org/2024.emnlp-main.729/. John Dang, Shivalika Singh, Daniel D’souza, Arash Ahmadian, Alejandro Salamanca, Madeline Smith, Aidan Peppin, Sungjin Hong, Manoj G...
cle/pii/S0048733313001212 . Lea Frermann and Mirella Lapata. A Bayesian model of diachronic meaning change. Transactions of the Association for Computational Linguistics , 4:31–45, 2016. doi: 10.1162/tacl_a_00081. URLhttps://aclanthology.org/Q16-1003/ . Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao...
Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou (eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 588–602. Association for Compu...
Setyawan, Supheakmungkol Sarin, Sokhar Samb, Benoît Sagot, Clara Rivera, Annette Rios, Isabel Papadimitriou, Salomey Osei, Pedro Ortiz Suarez, Iroro Orife, Kelechi Ogueji, Andre Niyongabo Rubungo, Toan Q. Nguyen, Mathias Müller, André Müller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyakeni, Jamshidbek Mi...
Gupta, Vivek Sharma, Xuhui Zhou, Caiming Xiong, Luis Villa, Stella Biderman, Alex Pentland, Sara Hooker, and Jad Kabbara. Bridging the data provenance gap across text, speech and video, 2024. Holy Lovenia, Rahmad Mahendra, Salsabil Maulana Akbar, Lester James Validad Miranda, Jennifer Santoso, Elyanah Aco, Akhdan ...
in large language models. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.),Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pp. 16366–16393, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-long.8...
Eldar Khalilov, Christopher Klamm, Fajri Koto, Dominik Krzemiński, Gabriel Adriano de Melo, Syrielle Montariol, Yiyang Nan, Joel Niklaus, Jekaterina Novikova, Johan Samir Obando Ceron, Debjit Paul, Esther Ploeger, Jebish Purbey, Swati Rajwal, Selvan Sunitha Ravi, Sara Rydell, Roshan Santhosh, Drishti Sharma, Marjana Prifti Sk...
Ferrante, Marzieh Fadaee, Beyza Ermis, and Sara Hooker. Global mmlu: Understanding and addressing cultural and linguistic biases in multilingual evaluation, 2025. URL https://arxiv.org/abs/2412.03304 . Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, and Ari S. Morcos. Beyond neural scaling laws: beating ...
Open foundation and fine-tuned chat models. arXiv, abs/2307.09288, 2023b. Marcos Treviso, Ji-Ung Lee, Tianchu Ji, Betty van Aken, Qingqing Cao, Manuel R. Ciosici, Michael Hassid, Kenneth Heafield, Sara Hooker, Colin Raffel, Pedro H. Martins, André F. T. Martins, Jessica Zosa Forde, Peter Milder, Edwin Simpson, Noam Slon...
Phan, Rowena Garcia, Thamar Solorio, and Alham Fikri Aji. Prompting multilingual large language models to generate code-mixed texts: The case of South East Asian languages. In Genta Winata, Sudipta Kar, Marina Zhukova, Thamar Solorio, Mona Diab, Sunayana Sitaram, Monojit Choudhury, and Kalika Bali (eds.), Proceedin...
Prostate Cancer Screening with Artificial Intelligence–Enhanced Micro-Ultrasound: A Comparative Study with Traditional Methods
Muhammad Imran (a), Wayne G. Brisbane (b), Li-Ming Su (c), Jason P. Joseph (c), Wei Shao (a,*)
(a) Department of Medicine, University of Florida, Gainesville, FL 32611, USA
(b) Department of Urology, University ...
https://arxiv.org/abs/2505.21355v1
is commonly used to guide targeted biopsies (Ahmed et al., 2017). However, its role in routine screening is limited by high cost, long acquisition times, limited availability, and the need for specialized radiological expertise. These constraints make mpMRI impractical for large-scale or point-of-care screening. Micro-...