2.2.c) LLM-generated: UGEN (Pal et al., 2024) leverages Large Language Models (LLMs) to generate table pairs, aiming to overcome limitations of previous methods by crafting purposefully challenging scenarios, including hard negatives. However, this strategy introduces the risk of ground truth inconsistency, as LLMs may interpret the criteria for unionability differently across generations, affecting label reliability.

2.2.d) Hybrid approaches: LAKEBENCH (Deng et al., 2024) uses tables from the OpenData (https://data.gov/) and WebTable (https://webdatacommons.org/webtables/) corpora alongside both partitioning-based synthetic queries and real queries sampled from the corpus. However, such hybrid approaches can inherit the limitations of their constituent methods: partitioning still risks high overlap, candidate-based labeling may yield incomplete ground truth, and the large scale of these benchmarks can introduce practical evaluation challenges.

Benchmark    Type  Files   Rows         Cols     Avg Shape   Missing%  Str    Int    Float  Other  Size (MB)
SANTOS       NQ    500     2,736,673    5,707    5473 × 11   9.96      65.39  17.00  11.46  6.15   ~422
SANTOS       Q     50      1,070,085    615      21402 × 12  5.79      73.17  15.93  8.46   2.44
TUS Small    NQ    1,401   5,293,327    13,196   3778 × 9    6.77      85.43  5.93   4.77   3.86   ~1162
TUS Small    Q     125     577,900      1,610    4623 × 13   6.86      82.05  7.08   5.84   5.03
TUS Large    NQ    4,944   8,416,415    53,133   1702 × 11   12.53     90.12  5.10   3.57   1.21   ~1459
TUS Large    Q     100     213,229      1,792    2132 × 18   14.87     90.46  3.68   4.13   1.73
PYLON        NQ    1,622   85,282       16,802   53 × 10     0.00      58.74  25.36  15.90  0.00   ~22
PYLON        Q     124     11,207       880      90 × 7      0.00      75.68  22.95  1.36   0.00
UGEN V1      NQ    1,000   7,609        10,315   8 × 10      5.79      91.68  3.27   4.29   0.76   ~4
UGEN V1      Q     50      405          546      8 × 11      5.87      90.48  4.58   4.21   0.73
UGEN V2      NQ    1,000   18,738       13,360   19 × 13     8.16      82.40  11.71  5.50   0.39   ~8
UGEN V2      Q     50      5,363        665      107 × 13    4.14      84.96  10.23  2.41   2.41
LB-OpenData  NQ    4,832   351,067,113  89,757   72655 × 19  3.44      52.50  22.56  22.37  2.57   ~80834
LB-OpenData  Q     3,138   238,576,481  61,815   76028 × 20  2.90      40.60  26.28  27.60  5.53
LB-Webtable  NQ    29,686  1,039,347    387,432  35 × 13     0.01      61.07  26.28  12.64  0.01   ~170
LB-Webtable  Q     5,488   335,187      56,174   61 × 10     0.00      40.43  43.06  16.51  0.01

Table 1: Table Union Search Benchmarks Summary. NQ = non-query tables, Q = query tables; Str/Int/Float/Other give the column type distribution in percent.

3 Methodology

As TUS methods become increasingly sophisticated, the benchmarks used for their evaluation may contain inherent characteristics that hinder the accurate assessment of progress in semantic understanding. This section outlines our approach to examining prominent TUS benchmarks through analysis of their construction methods and strategic use of simple baselines as diagnostic tools. The goal of advanced TUS methods is to capture deep semantic compatibility between tables, beyond simple lexical or structural similarity. Our investigation first analyzes the various benchmark construction processes to identify potential structural weaknesses, then employs computationally inexpensive baseline methods to reveal how these characteristics enable alternative pathways to high performance, thereby influencing evaluation outcomes.

3.1 Analyzing Benchmark Construction

We examine five prominent families of TUS benchmarks and formulate hypotheses about their potential limitations based on their construction methodologies (Table 1). We identify three issues stemming from these methodologies: (1) excessive overlap, (2) semantic simplicity, and (3) ground truth inconsistencies, which we detail below.
3.1.a) Excessive Overlap: Benchmarks like TUS Small, TUS Large, SANTOS, and the synthetic query portion of the LAKEBENCH derivatives are created by partitioning seed tables horizontally and vertically, with tables derived from the same original seed designated as unionable pairs. We hypothesize that this methodology inherently leads to significant overlap in both schema (column names) and content (data values) between query tables and their ground truth unionable candidates. To quantify this, we measure overlap using the Szymkiewicz–Simpson coefficient for exact column names (Overlap_c, Eq. 1) and for values of a given data type d (Overlap_v, Eq. 2) between ground truth pairs:

Overlap_c(Q, C) = |Cols_Q ∩ Cols_C| / min(|Cols_Q|, |Cols_C|)    (1)

Overlap_v(Q, C) = |V_Q^d ∩ V_C^d| / min(|V_Q^d|, |V_C^d|)    (2)

where Cols_Q and Cols_C denote the sets of column names in the query table Q and candidate table C respectively, and V_Q^d, V_C^d represent the sets of unique values of data type d in each table. The coefficient equals 1.0 when one set is fully contained within the other. Figure 1 shows the distribution of overlap coefficients, with values ≥ 50% indicating substantial overlap. As expected, partitioning-based benchmarks exhibit high overlap: over 90% of ground truth pairs share ≥ 50% of exact column names. For value overlap, we focus on string data types, which dominate the benchmarks (Table 1). Here too, 45–60% of query-candidate pairs share ≥ 50% of string tokens. LAKEBENCH derivatives (LB-OPENDATA, LB-WEBTABLE) show similar trends. Appendix A provides a detailed breakdown by data type.
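For concreteness, the coefficient reduces to a few set operations per pair. Below is a minimal Python sketch (our own illustrative code, not the paper's released implementation):

```python
def szymkiewicz_simpson(a: set, b: set) -> float:
    """Overlap coefficient: |a ∩ b| / min(|a|, |b|); equals 1.0 when one
    set is fully contained in the other."""
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

# Eq. 1: exact column-name overlap between query table Q and candidate C.
cols_q = {"name", "city", "population"}
cols_c = {"name", "city", "area_km2"}
print(szymkiewicz_simpson(cols_q, cols_c))  # 2 / min(3, 3) ≈ 0.67

# Eq. 2: value overlap restricted to one data type d (here, strings).
vals_q = {"paris", "berlin", "rome"}
vals_c = {"paris", "madrid"}
print(szymkiewicz_simpson(vals_q, vals_c))  # 1 / min(3, 2) = 0.50
```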
[Figure 1: Distribution of Exact Column Name Overlap (Top) and String Value Overlap (Bottom) Coefficients for Ground Truth Unionable Pairs Across Benchmarks. Colored circles represent mean values; numbers on the right indicate total pairwise relationships considered.]

This high surface similarity favors simple lexical methods and also influences advanced models by introducing repeated patterns in serialized inputs (Starmie), data sketches (TabSketchFM), and graph structures (HEARTS). Though designed for deeper semantics, these models are affected by strong benchmark-induced surface signals, making it hard to attribute performance gains purely to nuanced reasoning.

3.1.b) Semantic Simplicity: Benchmarks derived directly from large corpora, such as PYLON (Cong et al., 2023) using GitTables (Hulsebos et al., 2023) or the real query portions of LAKEBENCH derivatives using diverse public datasets, avoid the systematic overlap introduced by partitioning. However, we hypothesize that this construction method introduces other limitations since (1) it often focuses on relatively common topics with simpler semantics, reducing the need for specialized domain knowledge, and (2) it generally draws from public data sources likely included in the pre-training corpora of large foundation models. Evidence from specific benchmarks supports this concern. PYLON's construction indeed avoids high overlap (Figure 1 shows lower overlap than partitioning-based benchmarks). For LAKEBENCH, while the distinction between real and synthetic queries was unavailable during our analysis (see https://github.com/RLGen/LakeBench/issues/9), the significant
overall observed overlap suggests that synthetic, partitioning-based queries constitute a large portion of the benchmark. The semantic simplicity evident in PYLON's topics and the public origins of data in both PYLON and LAKEBENCH could favor general-purpose models like BERT (Devlin et al., 2019) or SBERT (Reimers and Gurevych, 2019), which have likely, though unverifiably, encountered similar content during pre-training. Consequently, the semantic challenge presented by these benchmarks might be relatively low for models with strong general language understanding, in contrast to documented LLM struggles with non-public, enterprise-specific data characteristics (Bodensohn et al., 2025), potentially allowing off-the-shelf embedding models to achieve high performance without fine-tuning.

3.1.c) Noisy Ground Truths: Ensuring accurate and complete ground truth labels is challenging, especially with automated generation or large-scale human labeling efforts as used in LLM-generated benchmarks (UGEN) and large human-labeled ones (LAKEBENCH derivatives). We hypothesize that ground truth in these benchmarks may suffer from reliability issues, including incorrect labels (false positives/negatives) or incompleteness (missed true positives). For UGEN, generating consistent, accurate positive and negative pairs (especially hard negatives) is difficult. LLMs might interpret unionability rules inconsistently across generations, leading to noisy labels. For large-scale human labeling with LB-OPENDATA and LB-WEBTABLE, the process introduces two risks: incompleteness, if the initial retrieval misses true unionable tables; and incorrectness, if human judgments vary or contain errors despite validation efforts. Evaluating performance on UGEN and LAKEBENCH derivatives thus requires caution. Scores are affected by label noise or incompleteness; low scores may reflect ground truth issues and are therefore not solely attributable to benchmark difficulty, while the maximum achievable recall is capped by unlabeled true positives.

3.2 Baseline Methods for Benchmark Analysis

Based on the hypothesized benchmark issues identified above, we select simple baseline methods to test benchmark sensitivity to different information types. While the (1) overlap and (2) general semantics limitations can be directly examined through baseline performance, (3) the ground truth integrity issue requires separate validation of labels, which we address in Section 5.2. Detailed implementation choices for all baseline methods are in Appendix B.1.

3.2.a) Bag-of-Words Vectorizers: To test whether the Excessive Overlap enables methods sensitive to token frequency to perform well on partitioning-based benchmarks, we employ standard lexical vectorizers (HashingVectorizer, TfidfVectorizer, and CountVectorizer) from scikit-learn (see the scikit-learn vectorizer documentation). These generate column embeddings based on sampled string values, with a single table vector obtained via max pooling across column vectors. These baselines test whether high performance can be achieved primarily by exploiting surface signals without semantic reasoning.
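A minimal sketch of this pipeline on a toy corpus (illustrative code; the paper's exact configuration is given in Appendix B.1):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus: each table is a list of columns; each column is a list of
# sampled string values.
tables = {
    "t1": [["paris", "berlin", "rome"], ["france", "germany", "italy"]],
    "t2": [["paris", "madrid"], ["seine", "manzanares"]],
}

# One "document" per column; fit a shared vocabulary over the whole corpus
# so that all table vectors live in the same feature space.
col_docs = {t: [" ".join(col) for col in cols] for t, cols in tables.items()}
vectorizer = TfidfVectorizer(lowercase=True)
vectorizer.fit([doc for docs in col_docs.values() for doc in docs])

# Embed each column, then max-pool across columns into one table vector.
table_vecs = {
    t: vectorizer.transform(docs).toarray().max(axis=0)
    for t, docs in col_docs.items()
}
```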
3.2.b) Pre-trained Sentence Transformers: To examine whether the Semantic Simplicity allows benchmarks from broad corpora to be effectively processed by pre-trained language models, we use a Sentence-BERT model (all-mpnet-base-v2, available on Hugging Face) with three column-to-text serializations: (1) SBERT (V+C): input includes column name and sampled values; (2) SBERT (C): input is only the column name; and (3) SBERT (V): input is only concatenated sampled values. Column embeddings are aggregated using mean pooling to produce a single table vector. These baselines assess whether general semantic embeddings, without task-specific fine-tuning, suffice for high performance on benchmarks with general vocabulary.
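A sketch of the three serializations with the sentence-transformers library (our own illustrative wrapper; value sampling and other details follow Appendix B.1):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")

def table_vector(columns: dict, mode: str = "V+C") -> np.ndarray:
    """Serialize each column to text, embed it, then mean-pool the column
    embeddings into a single table vector."""
    texts = []
    for name, values in columns.items():
        if mode == "C":        # column name only
            texts.append(name)
        elif mode == "V":      # concatenated sampled values only
            texts.append(" ".join(values))
        else:                  # "V+C": column name plus sampled values
            texts.append(name + " " + " ".join(values))
    col_embs = model.encode(texts)            # one embedding per column
    return np.asarray(col_embs).mean(axis=0)  # mean pooling across columns

vec = table_vector({"city": ["paris", "berlin"], "country": ["france", "germany"]})
```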
4 Experimental Setup

To evaluate our hypotheses about benchmark limitations, we employ both simple baseline methods (Section 3.2) and advanced SOTA methods in a controlled experimental framework. This section details the benchmark datasets used, any necessary preprocessing, the comparative methods, and our standardized evaluation approach.

4.1 Benchmarks

Our analysis uses the benchmarks described in Section 2.2, with post-preprocessing statistics summarized in Table 1. Most benchmarks were used as-is, but the large-scale LAKEBENCH derivatives (LB-OPENDATA and LB-WEBTABLE) required additional preprocessing for feasibility and reproducibility. The original datasets were too large to process directly and included practical issues, such as missing files, as well as characteristics that complicated evaluation, such as many unreferenced tables. We removed ground truth entries pointing to missing files (58 in LB-WEBTABLE), and excluded unreferenced tables from the retrieval corpus (removing ~5,300 and >2.7M files from LB-OPENDATA and LB-WEBTABLE, respectively). This latter step was done purely for computational feasibility; as a side effect, it simplifies the benchmark by eliminating tables that would otherwise be false positives if retrieved. We also ensured that each query table was listed as a candidate for itself. These steps substantially reduced corpus size while preserving evaluation integrity. The LAKEBENCH variants considered in our study are those available as of May 20, 2025 (LakeBench commit df7559d); future updates to the original repository may modify dataset contents, which could yield different evaluation results. Additionally, for LB-OPENDATA, we created a smaller variant with tables truncated to 1,000 rows, which we use in experiments alongside the original version (Table 2). For TUS Small and TUS Large, we followed prior work (Fan et al., 2023b; Hu et al., 2023), sampling 125 and 100 queries, respectively. For the other benchmarks, all queries were used.
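The cleanup amounts to a small filtering pass over the ground truth and corpus; the following sketch is our reconstruction (function names and file layout are hypothetical):

```python
import os

def clean_benchmark(gt_pairs, corpus_dir, query_ids):
    """gt_pairs: (query_id, candidate_id) ground truth entries, where ids
    are file names inside corpus_dir."""
    available = set(os.listdir(corpus_dir))

    # 1) Drop ground truth entries that point to missing files.
    gt = [(q, c) for q, c in gt_pairs if q in available and c in available]

    # 2) Ensure each query table is listed as a candidate for itself.
    existing = set(gt)
    gt += [(q, q) for q in query_ids if (q, q) not in existing and q in available]

    # 3) Restrict the retrieval corpus to tables referenced in the ground
    #    truth (done purely for computational feasibility, as noted above).
    referenced = {t for pair in gt for t in pair}
    return gt, available & referenced
```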
4.2 Comparative Methods

To evaluate our baseline methods (Section 3.2), we compare them against key TUS models previously discussed in Section 2.1, focusing on SOTA methods. For each method, we optimize the implementation using publicly available code for fairness:

• Starmie (Fan et al., 2023b): We retrained the RoBERTa-based model for 10 epochs on each benchmark using recommended hyperparameters and their "Pruning" bipartite matching search strategy for generating rankings, which achieves optimal results according to the original paper.

• HEARTS (Boutaleb et al., 2025): We utilized pre-trained HyTrel embeddings (Chen et al., 2023) with a contrastively-trained checkpoint. For each benchmark, we adopted the best-performing search strategy from the HEARTS repository: Cluster Search for the SANTOS, PYLON, and UGEN benchmarks, and ANN index search with max pooling for the TUS and LAKEBENCH benchmarks.

• TabSketchFM (Khatiwada et al., 2025): Results for TUS Small and SANTOS were reported directly from the original paper, as the pretrained checkpoint was unavailable at the time of our experiments.

These methods represent significant advancements in table representation learning. AutoTUS (Hu et al., 2023) was not included due to code unavailability at the time of writing. We provide further implementation details in Appendix B.2.
4.3 Evaluation Procedure

We use a consistent evaluation procedure for all baseline and SOTA methods to ensure fair comparison. Table vectors are generated per method (Section 3.2 for baselines; SOTA-specific procedures otherwise) and L2-normalized for similarity via inner product. For similarity search, baseline methods use the FAISS library (Douze et al., 2024) with an exact inner product index (IndexFlatIP); each query ranks all candidate tables by similarity. SOTA methods use FAISS or alternative search strategies (Appendix B.2). Following prior work (Fan et al., 2023b; Hu et al., 2023), we report Precision@k (P@k) and Recall@k (R@k), averaged across queries. Values of k follow prior works and are shown in the results tables (e.g., Table 2). We also evaluate computational efficiency via offline (training, vector extraction, indexing) and online (query search) runtimes, with hardware details in Appendix B.3.
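The baseline search-and-scoring loop can be reproduced with a few FAISS calls (an illustrative sketch; function names are ours):

```python
import faiss
import numpy as np

def rank_candidates(query_vecs, cand_vecs, k):
    """L2-normalize vectors and rank candidates by inner product with an
    exact FAISS index (IndexFlatIP), as described above."""
    q = np.ascontiguousarray(query_vecs, dtype="float32")
    c = np.ascontiguousarray(cand_vecs, dtype="float32")
    faiss.normalize_L2(q)  # in-place; inner product now equals cosine
    faiss.normalize_L2(c)
    index = faiss.IndexFlatIP(c.shape[1])
    index.add(c)
    _, ids = index.search(q, k)  # top-k candidate indices per query
    return ids

def precision_recall_at_k(ids, ground_truth, k):
    """Average P@k and R@k over queries; ground_truth[i] is the set of
    candidate indices labeled unionable with query i."""
    p, r = [], []
    for i, ranked in enumerate(ids):
        hits = len(set(ranked[:k]) & ground_truth[i])
        p.append(hits / k)
        r.append(hits / len(ground_truth[i]))
    return float(np.mean(p)), float(np.mean(r))
```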
5 Results and Discussion

Our empirical evaluation revealed significant patterns across benchmarks that expose fundamental limitations in their ability to measure progress in semantic understanding. Tables 2 and 3 present effectiveness and efficiency metrics respectively.

              SANTOS      TUS Small   TUS Large   PYLON       UGEN V1     UGEN V2     LB-OD 1k    LB-OD       LB-WT
Method        P@10 R@10   P@60 R@60   P@60 R@60   P@10 R@10   P@10 R@10   P@10 R@10   P@50 R@50   P@50 R@50   P@20 R@20
IDEAL         1.00 0.75   1.00 0.34   1.00 0.23   1.00 0.24   1.00 1.00   1.00 1.00   0.39 1.00   0.39 1.00   0.81 0.95
Non-specialized embedding methods:
HASH          0.98 0.74   0.99 0.33   0.99 0.23   0.64 0.15   0.59 0.59   0.43 0.43   0.21 0.60   0.21 0.60   0.21 0.25
TFIDF         0.99 0.74   1.00 0.34   0.99 0.23   0.70 0.17   0.58 0.58   0.50 0.50   0.21 0.61   0.21 0.61   0.23 0.27
COUNT         0.99 0.74   1.00 0.34   0.99 0.23   0.68 0.17   0.58 0.58   0.50 0.50   0.21 0.60   0.21 0.60   0.23 0.27
SBERT (V+C)   0.98 0.74   1.00 0.34   0.99 0.23   0.91 0.22   0.61 0.61   0.68 0.68   0.23 0.66   0.23 0.66   0.26 0.31
SBERT (V)     0.94 0.71   1.00 0.34   0.99 0.23   0.84 0.20   0.58 0.58   0.58 0.58   0.22 0.62   0.22 0.62   0.25 0.29
SBERT (C)     0.98 0.74   1.00 0.34   0.98 0.23   0.85 0.21   0.60 0.60   0.65 0.65   0.22 0.64   0.22 0.64   0.16 0.20
Specialized table union search methods:
Starmie       0.98 0.73   0.96 0.31   0.93 0.21   0.81 0.20   0.57 0.57   0.58 0.58   0.18 0.51   ‡    ‡      0.25 0.30
HEARTS        0.98 0.74   1.00 0.34   0.99 0.23   0.65 0.16   0.56 0.56   0.37 0.37   0.19 0.61   0.19 0.60   0.23 0.28
TabSketchFM   0.92 0.69   0.97 0.32   *    *      *    *      *    *      *    *      *    *      *    *      *    *

Table 2: Precision and Recall across benchmarks (LB-OD = LB-OPENDATA, LB-WT = LB-WEBTABLE). IDEAL represents the maximum possible P@k and R@k achievable for each benchmark at the specified k. *: Results unavailable as checkpoint was not publicly accessible. ‡: Not reported due to excessive computational requirements.

Method        SANTOS        TUS Small     TUS Large       PYLON         UGEN V1       UGEN V2       LB-OD 1k           LB-OD          LB-WT
HASH          0m15s/0m0s    0m43s/0m1s    1m45s/0m2s      0m19s/0m1s    0m12s/0m0s    0m14s/0m0s    7m56s/0m31s        12m4s/0m22s    6m3s/0m21s
TFIDF/COUNT   0m53s/0m0s    1m45s/0m1s    3m10s/0m2s      0m22s/0m1s    0m9s/0m0s     0m12s/0m0s    22m22s/0m31s       37m14s/0m21s   6m21s/0m22s
SBERT         1m45s/0m0s    3m30s/0m0s    9m21s/0m15s     3m18s/0m0s    1m41s/0m0s    2m20s/0m0s    27m47s/0m4s        82m13s/0m4s    30m45s/0m3s
STARMIE       19m3s/1m2s    4m24s/8m59s   14m43s/20m29s   7m56s/3m27s   2m8s/1m0s     2m45s/1m45s   131m48s/1220m53s   –/–            48m11s/1311m43s
HEARTS        0m21s/0m34s   1m1s/0m0s     3m10s/0m0s      0m57s/0m36s   0m23s/0m40s   0m30s/0m35s   21m33s/0m3s        76m12s/0m5s    29m28s/0m3s

Table 3: Computational efficiency across benchmarks, given as offline/online time. Times are averaged over 5 runs due to runtime variability. Offline includes vector generation, indexing, and training times where applicable; Online is total query search time.

5.1 Evidence of Benchmark Limitations

The most compelling evidence for our benchmark limitation hypotheses emerges from the unexpectedly strong performance of simple baselines. On partitioning-based benchmarks (TUS Small, TUS Large, SANTOS), lexical methods achieve near-perfect precision, matching or exceeding sophisticated models at a fraction of the cost. This directly validates our overlap hypothesis: the high schema and value overlap (Figure 1) creates trivial signals that simple lexical matching can exploit. While advanced methods like Starmie or HEARTS also achieve high scores here, the fact that much simpler, non-semantic methods perform nearly identically leads us to conclude that the benchmark itself does not effectively differentiate methods based on deep semantic understanding. This phenomenon, where simpler approaches achieve comparable or even better results than more complex counterparts, especially when computational costs are considered, has also been observed in related data lake tasks such as table augmentation via join search (Cappuzzo et al., 2024).

For PYLON, a different pattern emerges: lexical methods perform considerably worse due to the much lower exact overlap, but general-purpose semantic embeddings excel. SBERT variants, particularly SBERT (V+C) combining column and value information, outperform specialized SOTA models like Starmie. This confirms our general semantics hypothesis that these benchmarks employ vocabulary well-represented in standard pre-trained embeddings, diminishing the advantage of specialized tabular architectures for the TUS task.

LB-OPENDATA and LB-WEBTABLE exhibit both limitations despite their scale. Simple lexical methods remain surprisingly competitive, while SBERT variants consistently outperform specialized models. The computational demands of sophisticated models create additional practical barriers: Starmie requires substantial offline costs (training and inference) plus over 16 hours to process the queries on the truncated LB-OPENDATA, and over 21 hours to evaluate the queries of LB-WEBTABLE. HEARTS performs better computationally by leveraging a pre-trained checkpoint without additional training, resulting in a shorter offline processing time, but it still under-performs the SBERT variants.
5.2 Ground Truth Reliability Issues

A notable observation across UGEN and LAKEBENCH derivatives is the significant gap between the R@k achieved by all methods and the IDEAL recall (Table 2). This discrepancy led us to question the reliability of the benchmarks' ground truth labels. We hypothesized that such gaps might indicate not only limitations of the search methods or the inherent difficulty of the benchmarks, but also potential incompleteness or inaccuracies within the ground truth itself. Examining discrepancies at small values of k is particularly revealing, as this scrutinizes the highest-confidence predictions of a system. If a high-performing method frequently disagrees with the ground truth at these top ranks, it may signal issues with the ground truth labels. To investigate this, we defined two heuristic metrics designed to help identify potential ground truth flaws.
Let Q = {Q_1, ..., Q_N} be N query tables. For Q_i ∈ Q, C_{Q_i,k} is the set of top-k candidates retrieved by a search method for Q_i, and G_{Q_i} is the set of ground truth candidates labeled unionable with Q_i.

1. GTFP@k (Ground Truth False Positive Rate): This measures the fraction of top-k candidates retrieved by a search method that are not labeled as unionable in the original ground truth. A high GTFP@k, especially at small k, suggests the method might be identifying valid unionable tables missing from the ground truth, thereby helping us pinpoint its possible incompleteness. It is calculated as:

GTFP@k = ( Σ_{i=1}^{N} |C_{Q_i,k} \ G_{Q_i}| ) / (N · k)

Here, |C_{Q_i,k} \ G_{Q_i}| counts retrieved candidates for Q_i that are absent from its ground truth set G_{Q_i}. The denominator is the total number of top-k slots considered across all queries.

2. GTFN@k (Ground Truth False Negative Rate): This quantifies the fraction of items labeled as positives in the ground truth that a well-performing search method fails to retrieve within its top-k results (considering a capped expectation of up to k items per query). It is calculated as:

GTFN@k = ( Σ_{i=1}^{N} ( min(k, |G_{Q_i}|) − |G_{Q_i} ∩ C_{Q_i,k}| ) ) / ( Σ_{i=1}^{N} min(k, |G_{Q_i}|) )

The term min(k, |G_{Q_i}|) represents the capped ideal number of ground truth items we would expect to find in the top k for Q_i. The numerator sums the "misses" for each query: the difference between this capped ideal and the number of ground truth items actually retrieved. The denominator sums this capped ideal across all queries. A high GTFN@k at small k is particularly insightful when investigating ground truth integrity. If we trust the method's ability to discern relevance, a high GTFN@k implies that the method correctly deprioritizes items that, despite being in the ground truth, might be less relevant or even incorrectly labeled as positive. Thus, it can signal potential incorrectness within the ground truth. GTFN@k is equivalent to 1 − CappedRecall@k (Thakur et al., 2021).

These metrics assume discrepancies between a strong search method and the ground truth may indicate flaws in the latter. While not highly accurate, they helped us identify a smaller, focused subset of query-candidate pairs with disagreements for deeper manual or LLM-based inspection. Results are shown in Table 4.
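Both metrics reduce to simple set arithmetic per query; a direct transcription of the two formulas (our own code):

```python
def gtfp_at_k(retrieved, ground_truth, k):
    """Fraction of top-k retrieved candidates absent from the ground truth.
    retrieved[i] is a ranked list for query i; ground_truth[i] is a set."""
    misses = sum(len(set(r[:k]) - gt) for r, gt in zip(retrieved, ground_truth))
    return misses / (len(retrieved) * k)

def gtfn_at_k(retrieved, ground_truth, k):
    """Fraction of (capped) ground truth positives missing from the top-k;
    equivalent to 1 - CappedRecall@k."""
    num = den = 0
    for r, gt in zip(retrieved, ground_truth):
        capped = min(k, len(gt))
        num += capped - len(gt & set(r[:k]))
        den += capped
    return num / den
```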
Beyond heuristic metrics, we also conduct a more direct, though still imperfect, assessment of UGEN's ground truth using an LLM-as-a-judge approach. While this method may not capture the same conflicts identified by the cheaper GTFP/GTFN heuristics, it provides a complementary perspective that can offer more precise insights in certain cases. We use gemini-2.0-flash-thinking-exp-01-21, chosen for its 1M-token context window, built-in reasoning abilities, and low hallucination rate (per the Gemini 2.0 Flash Thinking model card and the Vectara Hallucination Leaderboard). This LLM-as-a-judge approach has become increasingly common in recent works (Gu et al., 2024; Wolff and Hulsebos, 2025). We gave the LLM both tables in each query-candidate pair, along with a detailed prompt including curated unionable and non-unionable examples from UGEN (see Appendix D) to condition the LLM's understanding of unionability based on the benchmark. Each pair was evaluated in 5 independent runs with temperature=0.1. A sample of 20 LLM outputs was manually validated and showed strong alignment with human judgment.
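A sketch of the adjudication loop, assuming the google-generativeai Python client (prompt wording, answer parsing, and majority-vote aggregation here are our own simplifications; the paper's full prompt is in its Appendix D):

```python
from collections import Counter
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical key placeholder
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp-01-21")

def judge_pair(prompt_prefix, query_csv, candidate_csv, runs=5):
    """Ask the LLM whether two tables are unionable; aggregate several
    independent low-temperature runs by majority vote."""
    prompt = (
        f"{prompt_prefix}\n\nQuery table:\n{query_csv}\n\n"
        f"Candidate table:\n{candidate_csv}\n\n"
        "Answer 'unionable' or 'non-unionable', then justify briefly."
    )
    votes = []
    for _ in range(runs):
        resp = model.generate_content(
            prompt, generation_config={"temperature": 0.1}
        )
        text = resp.text.lower()
        votes.append("non-unionable" if "non-unionable" in text else "unionable")
    return Counter(votes).most_common(1)[0][0]
```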
Comparison with original UGEN labels (Table 5) revealed substantial inconsistencies. Our manual inspection (Appendix C.1) suggested the LLM often provided more accurate assessments, indicating notable noise in the original ground truth.

GT Label        LLM Judge       UGEN V1   UGEN V2
Unionable       Unionable       24.8%     0.0%
Non-unionable   Unionable       33.8%     23.6%
Non-unionable   Non-unionable   16.2%     76.4%
Unionable       Non-unionable   25.2%     0.0%

Table 5: Breakdown of agreement and disagreement between ground truth labels and LLM-based judgments.

Given the scale of LB-OPENDATA and LB-WEBTABLE, full LLM adjudication was impractical. Instead, we used SBERT (V+C) as our reference search method to compute GTFP@k, focusing on top-ranked pairs not labeled as unionable in the ground truth. As shown in Table 4, such cases were frequent even at top ranks (2 < k < 5). To assess ground truth completeness, we manually inspected 20 randomly sampled top-2 and top-3 disagreements. Of these, 19 were genuinely unionable but missing from the ground truth; the remaining pair was correctly non-unionable, with SBERT likely misled by its numeric-only columns. These results suggest non-negligible incompleteness in the LAKEBENCH ground truth. Example cases are shown in Appendix C.2.

Benchmark (Metric)     @1     @2     @3     @4     @5
UGEN V1 (GTFP)         0.160  0.210  0.247  0.275  0.308
UGEN V1 (GTFN)         0.160  0.210  0.247  0.275  0.308
UGEN V2 (GTFP)         0.060  0.080  0.093  0.140  0.156
UGEN V2 (GTFN)         0.060  0.080  0.093  0.140  0.156
LB-OPENDATA (GTFP)     0.000  0.059  0.092  0.123  0.154
LB-OPENDATA (GTFN)     0.000  0.054  0.080  0.105  0.132
LB-WEBTABLE (GTFP)     0.000  0.110  0.198  0.296  0.377
LB-WEBTABLE (GTFN)     0.000  0.110  0.197  0.295  0.376

Table 4: Disagreement rates of top-k retrieved results between SBERT and the ground truth across different benchmarks. For UGEN, the query table is not considered a candidate to itself, so values at @1 reflect actual disagreement. For LAKEBENCH variants, the ground truth is normalized to include the query table as a valid candidate for itself; the top-1 match is therefore always correct by construction, yielding no disagreement @1.

In summary, our investigations, combining heuristic metrics, LLM-based adjudication, and manual inspection, reveal the presence of non-negligible noise and incompleteness within the original benchmark labels for both UGEN and LAKEBENCH. Consequently, performance metrics reported on these benchmarks may be influenced by these underlying ground truth issues, potentially misrepresenting true task difficulty or method capabilities.

5.3 Implications for Measuring Progress

Our experiments reveal several critical issues. Benchmark scores often fail to measure true semantic capabilities, as simple lexical or general embedding methods can match or outperform specialized models by exploiting excessive domain overlap, semantic simplicity, or ground truth inconsistency. This suggests that current benchmarks may inadvertently reward adaptation to these characteristics, making it difficult to quantify the practical benefits of progress on sophisticated TUS method capabilities within these settings. These persistent issues also point to a fundamental challenge: the lack of a precise, operational definition for unionability, mirroring broader difficulties in dataset search (Hulsebos et al., 2024) and highlighting the need to address the subjective, context-dependent nature of table compatibility in practice.

6 Towards Better TUS Benchmarks

In industry practice, unionability judgments are inherently subjective, depending on analytical goals, domain contexts, data accessibility constraints (Martorana et al., 2025), and user preferences (Mirzaei and Rafiei, 2023). Yet current benchmarks impose fixed definitions, creating a disconnect with practical utility: methods excelling on benchmarks often falter in real-world scenarios demanding different compatibility thresholds. Addressing this requires benchmark designs that embrace contextual variability and provide a stable foundation for evaluation, lest even advanced methods fall short in practice.

Rethinking Benchmark Design Principles: Overcoming current benchmark limitations requires a shift in design focusing on three key principles: (1) actively reducing artifactual overlap while introducing controlled semantic heterogeneity to better reflect real-world schema and value divergence; (2) incorporating realistic domain complexity beyond general vocabularies, addressing challenges like non-descriptive schemas and proprietary terms where LLMs struggle (Bodensohn et al., 2025), thus emphasizing domain-specific training that may require industry collaboration; and (3) rethinking ground truth representation by replacing brittle binary labels with richer, nuanced formats validated through multi-stage adjudication to improve completeness and consistency.

Exploring Implementation Pathways: Translating these principles into practice requires concrete strategies for benchmark design and evaluation. One approach is to develop (1) scenario-driven micro-benchmarks targeting specific challenges such as schema drift simulation or value representation noise, enabling more granular analysis than coarse end-to-end metrics. Another is (2) advancing controllable synthetic data generation, following LLM-based methods like UGEN (Pal et al., 2024), to verifiably embed semantic constraints or domain knowledge, supporting diverse testbeds when real data is unavailable or sensitive. Equally important is (3) exploring adaptive, interactive evaluation frameworks such as human-in-the-loop systems, which would dynamically adjust relevance criteria based on user feedback to better capture the subjective nature of unionability.
Tools like LakeVisage (Hu et al., 2025) further enhance usability and trust by recommending visualizations that help users interpret relationships among returned tables, improving transparency and interpretability in union search systems. Incorporating natural language preferences is also key. The recent NLCTABLES benchmark (Cui et al., 2025) advances this
by introducing NL conditions for union and join searches on column values and table size constraints. However, its predicate-style conditions may be better addressed via post-retrieval filtering (e.g., translating NL to SQL predicates with an LLM), avoiding early discard of unionable candidates and unnecessary retrieval model complexity. To drive further advancement, benchmarks should incorporate (4) natural language conditions that capture key aspects of unionability and joinability, including specifications about the characteristics of the final integrated table or conditional integration logic. For example, a challenging predicate might require identifying tables that can be "joined with a query table on column A, unioned on columns B and C, and also contain an additional column D providing specific contextual information about a particular attribute." Such conditions would demand deeper reasoning capabilities from data integration systems and encourage the development of more sophisticated methods for Table Union and Join Search. Finally, moving beyond binary success metrics, future benchmarks could adopt (5) multi-faceted evaluation frameworks using richer ground truth representations to assess unionability across dimensions like schema compatibility, semantic type alignment, value distribution similarity, and task-specific relevance, offering a more holistic evaluation than current standards.

7 Conclusion

Our analysis of TUS benchmarks highlights three major limitations: excessive overlap in partitioning-based datasets, semantics easily captured by pre-trained embeddings, and non-negligible ground-truth inconsistencies. The first two allow simple baselines to rival sophisticated models at far lower computational cost, showing that high performance is not necessarily tied to advanced semantic reasoning. The third undermines evaluation validity, as scores may reflect misalignment with flawed ground truth rather than actual benchmark difficulty. This gap between benchmark performance and true semantic capability suggests current evaluations often reward adaptation to benchmark-specific artifacts. To address this, we propose design principles that better reflect the complex, subjective nature of real-world table union search.

Limitations: Our study examined selected benchmarks and methods; broader evaluation could reveal further insights. Our investigation of ground truth issues in UGEN and LAKEBENCH, while systematic, identifies certain patterns without exhaustive quantification.

Future Work: Developing benchmarks aligned with our proposed criteria represents the next step towards ensuring that measured progress translates to meaningful real-world utility.

References

Jan-Micha Bodensohn, Ulf Brackmann, Liane Vogel, Anupam Sanghi, and Carsten Binnig. 2025. Unveiling challenges for LLMs in enterprise data engineering. Preprint, arXiv:2504.10950.

Alex Bogatu, Alvaro A. A. Fernandes, Norman W. Paton, and Nikolaos Konstantinou. 2020. D3L: Dataset discovery in data lakes. In 2020 IEEE 36th International Conference on Data Engineering (ICDE), pages 709–720. ArXiv:2011.10427 [cs].

Allaa Boutaleb, Alaa Almutawa, Bernd Amann, Rafael Angarita, and Hubert Naacke. 2025. HEARTS: Hypergraph-based related table search. In ELLIS Workshop on Representation Learning and Generative Models for Structured Data.

Riccardo Cappuzzo, Gaël Varoquaux, Aimee Coelho, and Paolo Papotti. 2024. Retrieve, merge, predict: Augmenting tables with data lakes.
CoRR, abs/2402.06282.

Sonia Castelo, Rémi Rampin, Aécio Santos, Aline Bessa, Fernando Chirigati, and Juliana Freire. 2021. Auctus: A dataset search engine for data discovery and augmentation.
Proceedings of the VLDB Endowment, 14(12):2791–2794.

Pei Chen, Soumajyoti Sarkar, Leonard Lausen, Balasubramaniam Srinivasan, Sheng Zha, Ruihong Huang, and George Karypis. 2023. HyTrel: Hypergraph-enhanced tabular data representation learning. Advances in Neural Information Processing Systems, 36:32173–32193.

Tianji Cong, Fatemeh Nargesian, and H. V. Jagadish. 2023. Pylon: Semantic table union search in data lakes. CoRR, abs/2301.04901.

Lingxi Cui, Huan Li, Ke Chen, Lidan Shou, and Gang Chen. 2025. NLCTables: A dataset for marrying natural language conditions with table discovery. CoRR, abs/2504.15849.

Yuhao Deng, Chengliang Chai, Lei Cao, Qin Yuan, Siyuan Chen, Yanrui Yu, Zhaoze Sun, Junyi Wang, Jiajun Li, Ziqi Cao, Kaisen Jin, Chi Zhang, Yuqing Jiang, Yuanfang Zhang, Yuping Wang, Ye Yuan, Guoren Wang, and Nan Tang. 2024. LakeBench: A benchmark for discovering joinable and unionable tables in data lakes. Proceedings of the VLDB Endowment, 17(8):1925–1938.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Matthijs Douze, Alexandr Guzhva, Chengqi Deng, Jeff Johnson, Gergely Szilvasy, Pierre-Emmanuel Mazaré, Maria Lomeli, Lucas Hosseini, and Hervé Jégou. 2024. The FAISS library.

Grace Fan, Jin Wang, Yuliang Li, and Renée J. Miller. 2023a. Table discovery in data lakes: State-of-the-art and future directions. In Companion of the 2023 International Conference on Management of Data, pages 69–75, Seattle, WA, USA. ACM.

Grace Fan, Jin Wang, Yuliang Li, Dan Zhang, and Renée J. Miller. 2023b. Semantics-aware dataset discovery from data lakes with contextualized column-based representation learning. Proc. VLDB Endow., 16(7):1726–1739.

Daniel Gomm and Madelon Hulsebos. 2025. Metadata matters in dense table retrieval. In ELLIS Workshop on Representation Learning and Generative Models for Structured Data.

Jiawei Gu, Xuhui Jiang, Zhichao Shi, Hexiang Tan, Xuehao Zhai, Chengjin Xu, Wei Li, Yinghan Shen, Shengjie Ma, Honghao Liu, Yuanzhuo Wang, and Jian Guo. 2024. A survey on LLM-as-a-judge. CoRR, abs/2411.15594.

Xuming Hu, Shen Wang, Xiao Qin, Chuan Lei, Zhengyuan Shen, Christos Faloutsos, Asterios Katsifodimos, George Karypis, Lijie Wen, and Philip S. Yu. 2023. AutoTUS: Automatic table union search with tabular representation learning. In Findings of the Association for Computational Linguistics: ACL 2023, pages 3786–3800, Toronto, Canada. Association for Computational Linguistics.

Yihao Hu, Jin Wang, and Sajjadur Rahman. 2025. LakeVisage: Towards scalable, flexible and interactive visualization recommendation for data discovery over data lakes. CoRR, abs/2504.02150.

Madelon Hulsebos, Çağatay Demiralp, and Paul Groth. 2023. GitTables: A large-scale corpus of relational tables. Proc. ACM Manag. Data, 1(1):30:1–30:17.

Madelon Hulsebos, Wenjing Lin, Shreya Shankar, and Aditya Parameswaran. 2024. "It took longer than I was expecting": Why is dataset search still so hard? In Proceedings of the 2024 Workshop on Human-In-the-Loop Data Analytics, pages 1–4, Santiago, Chile. ACM.

Aamod Khatiwada, Grace Fan, Roee Shraga, Zixuan Chen, Wolfgang Gatterbauer, Renée J. Miller, and Mirek Riedewald. 2023.
SANTOS: Relationship-based semantic table union search. Proceedings of the ACM on Management of Data, 1(1):1–25.

Aamod Khatiwada, Harsha Kokel, Ibrahim Abdelaziz, Subhajit Chaudhury, Julian Dolby, Oktie Hassanzadeh, Zhenhan Huang, Tejaswini Pedapati, Horst Samulowitz, and Kavitha Srinivas. 2025. TabSketchFM: Sketch-based tabular representation learning for data discovery over data lakes. IEEE ICDE.

Margherita Martorana, Tobias Kuhn, and Jacco van Ossenbruggen. 2025. Metadata-driven table union search: Leveraging semantics for restricted access data integration. CoRR, abs/2502.20945.

Leland McInnes, John Healy, Steve Astels, and others. 2017. hdbscan: Hierarchical density based clustering. J. Open Source Softw., 2(11):205.

Leland McInnes, John Healy, Nathaniel Saul, and Lukas Großberger. 2018. UMAP: Uniform manifold approximation and projection. J. Open Source Softw., 3(29):861.

Hamed Mirzaei and Davood Rafiei. 2023. Table union search with preferences. In Joint Proceedings of Workshops at the 49th International Conference on Very Large Data Bases (VLDB 2023), Vancouver, Canada, August 28 - September 1, 2023, volume 3462 of CEUR Workshop Proceedings. CEUR-WS.org.

Fatemeh Nargesian, Erkang Zhu, Ken Q. Pu, and Renée J. Miller. 2018. Table union search on open data. Proceedings of the VLDB Endowment, 11(7):813–825.

Koyena Pal, Aamod Khatiwada, Roee Shraga, and Renée J. Miller. 2024. ALT-GEN: Benchmarking table union search using large language models. Proceedings of the VLDB Endowment, ISSN 2150-8097.

Thomas Pellissier Tanon, Gerhard Weikum, and Fabian Suchanek. 2020. YAGO 4: A reason-able knowledge base. In The Semantic Web, pages 583–596, Cham. Springer International Publishing.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.

Anish Das Sarma, Lujun Fang, Nitin Gupta, Alon Y. Halevy, Hongrae Lee, Fei Wu, Reynold Xin, and Cong Yu. 2012. Finding related tables. In Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data.

Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).

Cornelius Wolff and Madelon Hulsebos. 2025. How well do LLMs reason over tabular data, really? arXiv preprint arXiv:2505.07453.

Erkang Zhu, Fatemeh Nargesian, Ken Q. Pu, and Renée J. Miller. 2016. LSH Ensemble: Internet-scale domain search. Proc. VLDB Endow., 9(12):1185–1196.

A Benchmark Overlap

As discussed in Section 3.1.a), the degree of lexical overlap (both in column names and values) between query and candidate tables in benchmark ground truths can significantly influence model performance. Methods sensitive to surface-level similarity might perform well on benchmarks with high overlap without necessarily capturing deeper semantic relationships. This section provides a more detailed breakdown of overlap coefficients by data type across the different benchmarks evaluated. Figure 2 presents these distributions.

[Figure 2: Per-benchmark distributions of column name overlap and of tuple overlap by data type (integer, float, string, other), binned by overlap range (0–25%, 25–50%, 50–75%, 75–100%).]
B Implementation and Evaluation Details

This appendix provides supplementary details regarding the implementation of baseline methods, SOTA models, and the evaluation procedure
used in our experiments, complementing the core methodology described in Sections 3.2 and 4.3.

B.1 Lexical Baselines (Hashing, TF-IDF, Count) Implementation Details

Vectorizers: We used implementations from scikit-learn (https://scikit-learn.org/stable/api/sklearn.feature_extraction.html). All vectorizers were configured with lowercase=True.

• TfidfVectorizer and CountVectorizer: Used an ngram_range=(1, 2). Their vocabulary was constructed by first collecting unique tokens from all columns across the entire corpus (query tables included), ensuring a consistent feature space.

• HashingVectorizer: Used an ngram_range=(1, 1) and alternate_sign=False.

Input Data: For each table, we randomly sampled up to 1000 unique non-null cell values per column.

Vectorization: Each column's sampled values were treated as a document and vectorized into a 4096-dimensional vector using the appropriately fitted or configured vectorizer.
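The stated configuration corresponds roughly to the following scikit-learn setup (a sketch; we assume the fixed 4096-dimensional space is obtained via max_features / n_features, which the text does not spell out):

```python
from sklearn.feature_extraction.text import (
    CountVectorizer,
    HashingVectorizer,
    TfidfVectorizer,
)

DIM = 4096  # 4096-dimensional column vectors, per the text above

# Assumption: the fixed dimensionality comes from max_features / n_features.
tfidf = TfidfVectorizer(lowercase=True, ngram_range=(1, 2), max_features=DIM)
count = CountVectorizer(lowercase=True, ngram_range=(1, 2), max_features=DIM)
hashing = HashingVectorizer(
    lowercase=True, ngram_range=(1, 1), alternate_sign=False, n_features=DIM
)

# TF-IDF/Count vocabularies are fit over column documents from the entire
# corpus (query tables included); HashingVectorizer is stateless.
```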
B.2 SOTA Method Implementation Details

B.2.a) Starmie: We utilized the implementation and recommendations from the original Starmie paper (Fan et al., 2023b); see the Starmie GitHub repository.
/uni00000032/uni00000059/uni00000048/uni00000055/uni0000004f/uni00000044/uni00000053/uni00000003/uni00000035/uni00000044/uni00000051/uni0000004a/uni00000048/uni00000003/uni0000000b/uni00000008/uni0000000c/uni00000013/uni00000008/uni00000015/uni00000013/uni00000008/uni00000017/uni00000013/uni00000008/uni00000019/uni00000013/uni00000008/uni0000001b/uni00000013/uni00000008/uni00000014/uni00000013/uni00000013/uni00000008/uni00000008/uni00000003/uni00000052/uni00000049/uni00000003/uni00000035/uni00000048/uni0000004f/uni00000044/uni00000057/uni0000004c/uni00000052/uni00000051/uni00000056/uni0000004b/uni0000004c/uni00000053/uni00000056/uni00000038/uni0000002a/uni00000028/uni00000031/uni00000010/uni00000039/uni00000015/uni00000003/uni00000010/uni00000003/uni00000037/uni00000058/uni00000053/uni0000004f/uni00000048/uni00000003/uni00000032/uni00000059/uni00000048/uni00000055/uni0000004f/uni00000044/uni00000053/uni00000003/uni00000045/uni0000005c/uni00000003/uni00000027/uni00000044/uni00000057/uni00000044/uni00000003/uni00000037/uni0000005c/uni00000053/uni00000048 /uni00000027/uni00000044/uni00000057/uni00000044/uni00000003/uni00000037/uni0000005c/uni00000053/uni00000048 /uni0000004c/uni00000051/uni00000057/uni00000048/uni0000004a/uni00000048/uni00000055 /uni00000049/uni0000004f/uni00000052/uni00000044/uni00000057 /uni00000056/uni00000057/uni00000055/uni0000004c/uni00000051/uni0000004a /uni00000052/uni00000057/uni0000004b/uni00000048/uni00000055 /uni00000013/uni00000010/uni00000015/uni00000018 /uni00000015/uni00000018/uni00000010/uni00000018/uni00000013 /uni00000018/uni00000013/uni00000010/uni0000001a/uni00000018 /uni0000001a/uni00000018/uni00000010/uni00000014/uni00000013/uni00000013 /uni00000032/uni00000059/uni00000048/uni00000055/uni0000004f/uni00000044/uni00000053/uni00000003/uni00000035/uni00000044/uni00000051/uni0000004a/uni00000048/uni00000003/uni0000000b/uni00000008/uni0000000c/uni00000013/uni00000008/uni00000014/uni00000013/uni00000008/uni00000015/uni00000013/uni00000008/uni00000016/uni00000013/uni00000008/uni00000017/uni00000013/uni00000008/uni00000018/uni00000013/uni00000008/uni00000008/uni00000003/uni00000052/uni00000049/uni00000003/uni00000035/uni00000048/uni0000004f/uni00000044/uni00000057/uni0000004c/uni00000052/uni00000051/uni00000056/uni0000004b/uni0000004c/uni00000053/uni00000056/uni0000002f/uni00000025/uni00000010/uni00000032/uni00000053/uni00000048/uni00000051/uni00000027/uni00000044/uni00000057/uni00000044/uni00000003/uni00000010/uni00000003/uni00000026/uni00000052/uni0000004f/uni00000058/uni00000050/uni00000051/uni00000003/uni00000031/uni00000044/uni00000050/uni00000048/uni00000003/uni00000032/uni00000059/uni00000048/uni00000055/uni0000004f/uni00000044/uni00000053/uni00000003/uni00000027/uni0000004c/uni00000056/uni00000057/uni00000055/uni0000004c/uni00000045/uni00000058/uni00000057/uni0000004c/uni00000052/uni00000051 /uni00000013/uni00000010/uni00000015/uni00000018 /uni00000015/uni00000018/uni00000010/uni00000018/uni00000013 /uni00000018/uni00000013/uni00000010/uni0000001a/uni00000018 /uni0000001a/uni00000018/uni00000010/uni00000014/uni00000013/uni00000013 
/uni00000032/uni00000059/uni00000048/uni00000055/uni0000004f/uni00000044/uni00000053/uni00000003/uni00000035/uni00000044/uni00000051/uni0000004a/uni00000048/uni00000003/uni0000000b/uni00000008/uni0000000c/uni00000013/uni00000008/uni00000015/uni00000013/uni00000008/uni00000017/uni00000013/uni00000008/uni00000019/uni00000013/uni00000008/uni00000008/uni00000003/uni00000052/uni00000049/uni00000003/uni00000035/uni00000048/uni0000004f/uni00000044/uni00000057/uni0000004c/uni00000052/uni00000051/uni00000056/uni0000004b/uni0000004c/uni00000053/uni00000056/uni0000002f/uni00000025/uni00000010/uni00000032/uni00000053/uni00000048/uni00000051/uni00000027/uni00000044/uni00000057/uni00000044/uni00000003/uni00000010/uni00000003/uni00000037/uni00000058/uni00000053/uni0000004f/uni00000048/uni00000003/uni00000032/uni00000059/uni00000048/uni00000055/uni0000004f/uni00000044/uni00000053/uni00000003/uni00000045/uni0000005c/uni00000003/uni00000027/uni00000044/uni00000057/uni00000044/uni00000003/uni00000037/uni0000005c/uni00000053/uni00000048 /uni00000027/uni00000044/uni00000057/uni00000044/uni00000003/uni00000037/uni0000005c/uni00000053/uni00000048 /uni0000004c/uni00000051/uni00000057/uni00000048/uni0000004a/uni00000048/uni00000055 /uni00000049/uni0000004f/uni00000052/uni00000044/uni00000057 /uni00000056/uni00000057/uni00000055/uni0000004c/uni00000051/uni0000004a /uni00000052/uni00000057/uni0000004b/uni00000048/uni00000055 /uni00000013/uni00000010/uni00000015/uni00000018 /uni00000015/uni00000018/uni00000010/uni00000018/uni00000013 /uni00000018/uni00000013/uni00000010/uni0000001a/uni00000018 /uni0000001a/uni00000018/uni00000010/uni00000014/uni00000013/uni00000013 /uni00000032/uni00000059/uni00000048/uni00000055/uni0000004f/uni00000044/uni00000053/uni00000003/uni00000035/uni00000044/uni00000051/uni0000004a/uni00000048/uni00000003/uni0000000b/uni00000008/uni0000000c/uni00000013/uni00000008/uni00000014/uni00000013/uni00000008/uni00000015/uni00000013/uni00000008/uni00000016/uni00000013/uni00000008/uni00000017/uni00000013/uni00000008/uni00000018/uni00000013/uni00000008/uni00000008/uni00000003/uni00000052/uni00000049/uni00000003/uni00000035/uni00000048/uni0000004f/uni00000044/uni00000057/uni0000004c/uni00000052/uni00000051/uni00000056/uni0000004b/uni0000004c/uni00000053/uni00000056/uni0000002f/uni00000025/uni00000010/uni0000003a/uni00000048/uni00000045/uni00000037/uni00000044/uni00000045/uni0000004f/uni00000048/uni00000003/uni00000010/uni00000003/uni00000026/uni00000052/uni0000004f/uni00000058/uni00000050/uni00000051/uni00000003/uni00000031/uni00000044/uni00000050/uni00000048/uni00000003/uni00000032/uni00000059/uni00000048/uni00000055/uni0000004f/uni00000044/uni00000053/uni00000003/uni00000027/uni0000004c/uni00000056/uni00000057/uni00000055/uni0000004c/uni00000045/uni00000058/uni00000057/uni0000004c/uni00000052/uni00000051 /uni00000013/uni00000010/uni00000015/uni00000018 /uni00000015/uni00000018/uni00000010/uni00000018/uni00000013 /uni00000018/uni00000013/uni00000010/uni0000001a/uni00000018 /uni0000001a/uni00000018/uni00000010/uni00000014/uni00000013/uni00000013 
[Figure 2: Distribution of exact column name and tuple overlap across different benchmarks, broken down by data type (String, Numeric, Datetime, Other). Each subplot represents a benchmark, showing the percentage of ground truth pairs falling into different overlap ranges (0-25%, 25-50%, 50-75%, 75-100%).]

Training Setup: The provided RoBERTa-based model was retrained for 10 epochs on each benchmark. Key hyperparameters: batch size 32, projection dimension 768, learning rate 5e-5, maximum sequence length 256, and fp16 precision.

Sampling and Augmentation Strategies: Starmie employs specific strategies during contrastive pre-training to generate positive pairs (views of the same column). The strategies, based on the definitions in the original paper, are as follows (a sketch of the two augmentation operators follows at the end of this subsection):
• TF-IDF Entity Sampling (tfidf_entity): samples cells in columns that have the highest average TF-IDF scores calculated over their tokens.
• Alpha Head Sampling (alphaHead): samples the first N tokens sorted alphabetically.
• Column Dropping Augmentation (drop_col): creates augmented views by dropping a random subset of columns from the table.
• Drop Cell Augmentation (drop_cell): creates augmented views by dropping random cells within the table.
We followed the paper's recommendations for each benchmark, detailed in Table 6. For benchmarks not explicitly mentioned in the original paper (PYLON, UGEN, LAKEBENCH derivatives), we applied the same strategies recommended for the SANTOS benchmark.

Evaluation: We used the "Pruning" search strategy described in the Starmie paper, also referred to as "bounds" in the original implementation. This involves a maximum bipartite matching approach on a pruned set of candidate column pairs to calculate table similarity, offering higher efficiency compared to naive matching, while remaining more precise than approximate search approaches.
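To make the two augmentation operators concrete, the following is a minimal sketch assuming tables are held as pandas DataFrames; the function names and the drop fraction are illustrative and not taken from Starmie's codebase:

```python
import random
import pandas as pd

def drop_col(table: pd.DataFrame, frac: float = 0.2) -> pd.DataFrame:
    """Augmented view: drop a random subset of columns (drop_col)."""
    n_drop = max(1, int(len(table.columns) * frac))
    dropped = random.sample(list(table.columns), n_drop)
    return table.drop(columns=dropped)

def drop_cell(table: pd.DataFrame, frac: float = 0.2) -> pd.DataFrame:
    """Augmented view: blank out random cells within the table (drop_cell)."""
    view = table.copy()
    n_rows, n_cols = view.shape
    for _ in range(int(n_rows * n_cols * frac)):
        view.iat[random.randrange(n_rows), random.randrange(n_cols)] = None
    return view
```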
Benchmark       Sampling       Augmentation
SANTOS          tfidf_entity   drop_col
TUS Small       alphaHead      drop_cell
TUS Large       tfidf_entity   drop_cell
PYLON           tfidf_entity   drop_col
UGEN V1         tfidf_entity   drop_col
UGEN V2         tfidf_entity   drop_col
LB-OpenData     tfidf_entity   drop_col
LB-WebTable     tfidf_entity   drop_col

Table 6: Starmie sampling and augmentation strategies applied per benchmark.

B.2.b) HEARTS:
Model: Employs pre-trained HyTrel embeddings (Chen et al., 2023), using a publicly available checkpoint trained with a contrastive learning objective (https://github.com/awslabs/hypergraph-tabular-lm/tree/main/checkpoints; see also the HEARTS GitHub repository). No further finetuning was performed.

Evaluation Strategy: We adopted the best-performing search strategy reported in the HEARTS repository for each benchmark (a sketch of the second strategy follows this list):
• Cluster Search (for SANTOS, PYLON, UGEN V1, UGEN V2): first reduces the dimensionality of the pre-trained HyTrel column embeddings using UMAP (McInnes et al., 2018), then performs clustering using HDBSCAN (McInnes et al., 2017). Default parameters provided in the HEARTS repository were used for both UMAP and HDBSCAN within this search method. Table similarity is derived from the cluster assignments.
• FAISS + Max Pooling (for TUS Small, TUS Large, LB-OpenData, LB-WebTable): uses FAISS (Douze et al., 2024) for efficient similarity search. Table vectors are computed by max-pooling the embeddings of their constituent columns before indexing and searching.
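The following is a minimal sketch of the FAISS + max-pooling strategy, assuming each table is already represented as a (num_columns x d) array of HyTrel column embeddings; the cosine-style L2 normalization is our illustrative choice, not necessarily what the HEARTS repository does:

```python
import numpy as np
import faiss  # pip install faiss-cpu

def table_vector(col_embeddings: np.ndarray) -> np.ndarray:
    """Max-pool a (num_columns, d) array of column embeddings into one table vector."""
    return col_embeddings.max(axis=0)

def build_index(corpus: list[np.ndarray]) -> faiss.Index:
    vecs = np.stack([table_vector(t) for t in corpus]).astype("float32")
    faiss.normalize_L2(vecs)                  # cosine similarity via inner product
    index = faiss.IndexFlatIP(vecs.shape[1])
    index.add(vecs)
    return index

def search(index: faiss.Index, query_cols: np.ndarray, k: int = 10):
    q = table_vector(query_cols).astype("float32")[None, :]
    faiss.normalize_L2(q)
    scores, ids = index.search(q, k)          # top-k candidate tables
    return list(zip(ids[0].tolist(), scores[0].tolist()))
```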
B.3 Hardware
Our experiments were conducted using the following setup:
• CPU: Intel Xeon Gold 6330, 4 cores / 8 threads @ 2.00 GHz.
• GPU: 40 GB MIG partition of an NVIDIA A100 (used for SBERT embedding generation and SOTA model training/inference).
• RAM: 64 GB DDR4.

C Inconsistent Ground Truth Examples
This section provides illustrative examples of the ground truth inconsistencies identified in the UGEN and LAKEBENCH benchmarks during our analysis (Section 5.2). We categorize these into False Positives (pairs incorrectly labeled as unionable) and False Negatives (pairs incorrectly labeled as non-unionable or missed).

C.1 UGEN Benchmark Inconsistencies
Figures 3 and 4 showcase examples from the UGEN variants.

C.2 Lakebench Benchmark Inconsistencies
This subsection presents examples of GTFPs from the LAKEBENCH benchmarks, where semantically and structurally compatible tables were not labeled as unionable in the ground truth but were correctly retrieved by search methods. Figures 5 and 6 show such cases from the WebTable and OpenData subsets, respectively.

Query: Anthropology_FGTNBDWF.csv
  Age  Culture  Arena   Domain  Meaning  Origin  Activity
  1    Neo.     Arch.   Past    Prim.    Africa  Hunt.
  2    Islam.   Artif.  Hist.   Cplx.    Asia    Farm.
Candidate: Anthropology_N30U114M.csv
  Artifact  Language  Technology  Education   Society
  1         English   GPS         Political   Communal
  2         Latin     Smartphone  Scientific  Global
(a) UGEN V1 example: tables discussing structurally and semantically distinct aspects of Anthropology (historical cultures vs. social technology), originally labeled unionable despite conceptual incompatibility.

Query: Anthropology_N7BS08I4.csv
  Site Name      Location          Period         Culture
  Olduvai Gorge  Tanzania, Africa  Pliocene       Hominin
  Teotihuacan    Central Mexico    Early Classic  Teotihuacanos
Candidate: Anthropology_VS4SJ2VH.csv
  Age Group          Clothing         Food             Housing
  Children (0-12)    Tunics, hides    Porridge, roots  Huts (branch)
  Teenagers (13-19)  Garments, beads  Grains, stews    Huts (woven)
(b) UGEN V2 example: tables about archaeological sites versus demographic lifestyles, representing fundamentally different entity types despite the shared Anthropology topic.

Figure 3: Examples of UGEN pairs labeled unionable in the original ground truth that exhibit significant semantic/structural divergence, suggesting non-unionability.
Query: Archeology_2LWSQ5A2.csv
  Discovery     Item    Artifact    Date      Culture     Region
  Giza Pyramid  Scroll  Diamond     ~2500 BC  Anc. Egypt  N. Africa
  Tut. Tomb     Knife   Stone Tab.  1323 BC   Anc. Egypt  N. Africa
Candidate: Archeology_3ML53C0M.csv
  Item    Discovery     Artifact    Date      Culture     Region
  Scroll  Giza Pyramid  Diamond     ~2500 BC  Anc. Egypt  N. Africa
  Knife   Tut. Tomb     Stone Tab.  1323 BC   Anc. Egypt  N. Africa
(a) UGEN V1 example: two archaeology tables with identical information and permuted but perfectly alignable columns, incorrectly labeled non-unionable despite clear semantic compatibility.

Query: Veterinary-Science_YP1NJGLN.csv
  Animal Type  Breed           Age      Health Status  Symptoms     Diagnosis
  Dog          Labrador Retr.  3 years  Healthy        No symptoms  Routine check-up
  Cat          Domestic SH     5 years  Overweight     Lethargy...  Obesity
Candidate: Veterinary-Medicine_GVNM098Q.csv
  Animal Type  Breed     Age      Gender  Symptoms       Diagnosis
  Dog          Labrador  3 years  Male    Aggression...  Rabies
  Cat          Siamese   8 years  Female  Limping...     Arthritis
(b) UGEN V2 example: two veterinary case tables with highly alignable core columns (Animal Type, Breed, Age, Symptoms, Diagnosis) representing the same fundamental entity type (animal patients).

Figure 4: Examples of UGEN pairs explicitly labeled as non-unionable in the original ground truth that exhibit strong compatibility, suggesting unionability.

Query: csvData10212811.csv
  Player    Team  POS  G    AB   H    HR  ...  OPS
  B Dean    GL    1B   96   350  83   7   ...  0.657
  Y Arbelo  SB    1B   134  461  114  31  ...  0.877
Candidate: csvData1066748.csv
  Player    Team  POS  G   AB   H   HR  ...  OPS
  J Colina  WS    2B   59  216  66  3   ...  0.832
  B Friday  LYN   SS   85  341  98  2   ...  0.752
(a) WebTable Example 1: baseball player statistics tables with identical, rich schemas (including Player, Team, POS, G, AB, H, HR, OPS, etc.). These tables represent the same entity type (player season stats) and are highly unionable, but were not labeled as such in the ground truth.

Query: csvData10025189.csv
  Player     Team  POS  AVG    G    AB   R   ...  OPS
  A Ramirez  MIL   3B   0.285  133  494  47  ...  0.757
  E Chavez   ARI   3B   0.246  44   69   6   ...  0.795
Candidate: csvData20099586.csv
  Player      Team  POS  AVG    G    AB   R   ...  OPS
  L Castillo  NYM   2B   0.245  87   298  46  ...  0.660
  R Durham    MIL   2B   0.289  128  370  64  ...  0.813
(b) WebTable Example 2: more baseball player statistics tables with identical schemas, clearly unionable but not labeled as such.

Figure 5: Examples of LB-WebTable ground truth incompleteness.
Source: OpenData (Canada)
Query: CAN_CSV0000000000000659.csv
  REF_DATE  GEO     Age group                 Sex         ...  VALUE
  2003      Canada  Total, 12 years and over  Both sexes  ...  20723896.0
  2003      Canada  Total, 12 years and over  Both sexes  ...  20632799.0
Candidate: CAN_CSV0000000000000562.csv
  REF_DATE  GEO     Age group                 Sex         ...  VALUE
  2003      Canada  Total, 12 years and over  Both sexes  ...  26567928.0
  2003      Canada  Total, 12 years and over  Both sexes  ...  26567928.0
(a) OpenData Example 1: Canadian health survey tables sharing key demographic columns (REF_DATE, GEO, Age group, Sex) for the same population. This pair represents unionable statistics about that population but was not labeled as unionable in the ground truth.

Source: OpenData (Canada)
Query: CAN_CSV0000000000000686.csv
  Sex    Type of work            Hourly wages                UOM      UOM_ID  ...  VALUE
  Both   Both full- and part...  Total employees, all wages  Persons  249     ...  10921.0
  Males  Both full- and part...  Total employees, all wages  Persons  249     ...  5645.4
Candidate: CAN_CSV0000000000005304.csv
  Sex    Type of work            Weekly wages                UOM      UOM_ID  ...  VALUE
  Both   Both full- and part...  Total employees, all wages  Persons  249     ...  11364.5
  Males  Both full- and part...  Total employees, all wages  Persons  249     ...  5954.5
(b) OpenData Example 2: Canadian employment statistics. The query table (data related to 'Hourly wages') and candidate table (data related to 'Weekly wages') share key dimensions such as Sex, Type of work, and UOM, and the cell values within their respective 'Hourly wages'/'Weekly wages' columns (e.g., 'Total employees, all wages') describe similar employee groups. This pair, differing mainly in wage aggregation period (hourly vs. weekly) and slightly in REF_DATE format (YYYY vs. YYYY-MM), is potentially unionable for comprehensive wage analysis but was not labeled as such in the ground truth.

Figure 6: Examples of LB-OpenData ground truth incompleteness.

D LLM Adjudicator
D.1 Prompt Details
To systematically re-evaluate potential ground truth inconsistencies in the UGEN benchmarks, we employed an LLM-based adjudicator. This process targeted disagreements identified during our analysis, specifically Ground Truth False Positives (GTFPs: pairs retrieved as potentially unionable within a rank threshold k′ < k but not labeled as unionable in the ground truth) and Ground Truth False Negatives (GTFNs: pairs labeled as unionable in the ground truth but retrieved only at a rank k′ > k, or not retrieved at all).

For each query-candidate pair under review, we provided the LLM with the full content of both tables. The table data was serialized into a Markdown format using the MarkdownRawTableSerializer recipe from the Table Serialization Kitchen library (Gomm and Hulsebos, 2025; see the Table Serialization Kitchen GitHub repository). This serialized data was inserted into specific placeholders (<Query Table Data>, <Candidate Table Data>) within the prompt detailed below. Crucially, the original table names were not included in the prompt. This decision avoids potentially biasing the LLM with explicit hints about a table's topic, thereby ensuring that the adjudication relies solely on the semantic and structural information present in the table content itself.

The prompt utilizes few-shot learning, incorporating hand-selected positive and negative examples of unionability from the UGEN benchmarks themselves to guide the LLM's judgment (these examples are represented by a placeholder in the verbatim prompt below for brevity). The prompt defines the LLM's role, outlines core principles for assessing conceptual coherence and semantic column alignment, and specifies the required output format. The complete prompt structure provided to the LLM adjudicator is shown below:

You are an experienced data curator evaluating if two database tables can be meaningfully combined vertically (unioned). The goal of unioning is to create a single, larger dataset containing the same kind of information or describing the same type of entity/event.

Your task is to determine if TABLE 1 and TABLE 2 are conceptually compatible enough for a union operation.

CORE PRINCIPLES FOR UNIONABILITY:
1. Conceptual Coherence: Do both tables fundamentally describe the same type of entity (e.g., customers, products, logs) or record the same type of event (e.g., sales, website visits)? Appending rows from one table to the other should result in a dataset that makes logical sense.
2. Meaningful Column Alignment: There must be a reasonable set of columns across the two tables that represent the same underlying attributes or concepts.
   * These columns can have DIFFERENT NAMES (e.g., "Cust_ID" vs. "ClientIdentifier").
   * They can have DIFFERENT FORMATS (e.g., "2023-01-15" vs. "1/15/2023").
   * They may have LITTLE TO NO OVERLAP in actual data values.
   * Focus on the semantic meaning of the columns in the context of their respective tables.
3. Sufficient Column Matching: The alignment shouldn't rely on just one incidental or minor column. There should be enough matching among key columns to confidently conclude that the tables represent the same underlying information. More aligned columns representing core attributes increase confidence.
4. Distinction from Joins: We are NOT looking for keys to join tables horizontally. We are assessing if they can be stacked vertically.

[EXAMPLES OF UNIONABILITY/NON-UNIONABILITY:]
(Hand-selected positive/negative examples from the UGEN benchmarks were inserted here in the actual prompt)

YOUR TASK:
Examine the following two tables based on the principles and examples above:
TABLE 1: <Query Table Data (Serialized)>
TABLE 2: <Candidate Table Data (Serialized)>

PROVIDE YOUR ANSWER IN THE FOLLOWING FORMAT:
1. First line: `UNIONABLE: Yes` or `UNIONABLE: No`
2. Second line: `EXPLANATION:` followed by a brief justification focusing on the conceptual coherence and the sufficiency of semantic column alignment. Explain why they are or are not the same kind of data.

D.2 Example LLM Outputs
Below are illustrative examples of the outputs generated by the LLM adjudicator, based on the prompt above, for some of the table pairs showcased in Appendix C.1.

Example 1: Archeology_2LWSQ5A2.csv vs. Archeology_3ML53C0M.csv (UGEN V1)
UNIONABLE: Yes
EXPLANATION: Both tables describe archeological artifacts and their attributes (discovery location, item type, date, culture, material, etc.). They represent the same kind of data - records of archeological findings. Unioning them would create a consolidated dataset of such findings. There is strong semantic alignment across almost all columns, even though the column order differs.

Example 2: Anthropology_FGTNBDWF.csv vs. Anthropology_N30U114M.csv (UGEN V1)
UNIONABLE: No
EXPLANATION: TABLE 1 categorizes anthropological concepts based on dimensions like 'Age', 'Culture', 'Arena', 'Domain', 'Meaning', 'Origin', and 'Activity'. TABLE 2, on the other hand, lists categories such as 'Artifact', 'Language', 'Technology', 'Education', and 'Society' as columns. While both tables are related to Anthropology, they represent different perspectives and categorizations within the field. There is insufficient semantic overlap in the columns to suggest they describe the same type of entity or event. Unioning them would likely result in a table that is not conceptually coherent, as the rows from each table represent different kinds of anthropological classifications.
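As a sketch of how such an adjudication loop can be wired up: `call_llm` below is a hypothetical stand-in for whatever chat-completion client is used, and `serialize_markdown` only approximates the MarkdownRawTableSerializer recipe rather than reproducing it:

```python
import re
import pandas as pd

def serialize_markdown(df: pd.DataFrame) -> str:
    """Approximate Markdown serialization (requires the 'tabulate' package)."""
    return df.to_markdown(index=False)

def build_prompt(template: str, query: pd.DataFrame, cand: pd.DataFrame) -> str:
    # Table names are deliberately omitted to avoid topical hints.
    return (template
            .replace("<Query Table Data (Serialized)>", serialize_markdown(query))
            .replace("<Candidate Table Data (Serialized)>", serialize_markdown(cand)))

def parse_verdict(response: str) -> tuple[bool, str]:
    """Extract 'UNIONABLE: Yes/No' and the explanation from the LLM output."""
    verdict = re.search(r"UNIONABLE:\s*(Yes|No)", response, re.IGNORECASE)
    if verdict is None:
        raise ValueError("Malformed adjudicator response")
    expl = re.search(r"EXPLANATION:\s*(.*)", response, re.DOTALL)
    return verdict.group(1).lower() == "yes", (expl.group(1).strip() if expl else "")

# Usage (call_llm is a hypothetical LLM client):
# unionable, why = parse_verdict(call_llm(build_prompt(TEMPLATE, q_df, c_df)))
```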
arXiv:2505.21335v1 [cs.GR] 27 May 2025

Structure from Collision
Takuhiro Kaneko, NTT Corporation

Abstract
Recent advancements in neural 3D representations, such as neural radiance fields (NeRF) and 3D Gaussian splatting (3DGS), have enabled the accurate estimation of 3D structures from multiview images. However, this capability is limited to estimating the visible external structure, and identifying the invisible internal structure hidden behind the surface is difficult. To overcome this limitation, we address a new task called Structure from Collision (SfC), which aims to estimate the structure (including the invisible internal structure) of an object from appearance changes during collision. To solve this problem, we propose a novel model called SfC-NeRF that optimizes the invisible internal structure of an object through a video sequence under physical, appearance (i.e., visible external structure)-preserving, and keyframe constraints. In particular, to avoid falling into undesirable local optima owing to its ill-posed nature, we propose volume annealing, that is, searching for global optima by repeatedly reducing and expanding the volume. Extensive experiments on 115 objects involving diverse structures (i.e., various cavity shapes, locations, and sizes) and material properties revealed the properties of SfC and demonstrated the effectiveness of the proposed SfC-NeRF.[1]

1. Introduction
Learning 3D representations from multiview images is a fundamental problem in computer vision and graphics, with applications across various domains, including augmented and virtual reality, gaming, robotics, and autonomous driving. Recent advancements in neural 3D representations, such as neural radiance fields (NeRF) [47] and 3D Gaussian splatting (3DGS) [32], have enabled the accurate estimation of 3D structures from multiview images and yielded impressive results in novel view synthesis. However, this benefit is limited to the estimation of the visible external structure, and it remains difficult to estimate the invisible internal structure hidden behind the surface.[2]

[1] The project page is available at https://www.kecl.ntt.co.jp/people/kaneko.takuhiro/projects/sfc/.
[2] More strictly, when an object is transparent or translucent, it is possible to estimate the internal structure hidden behind the surface using a volume rendering-based 3D representation learning model (e.g., NeRF [47]) because it represents appearance on the basis of cumulative volume densities. However, this effect is limited when an object is not transparent. This study aims to identify the internal structure even in the latter case.

[Figure 1: Concept of Structure from Collision (SfC). (a) and (c) Examples of training images taken from a certain viewpoint. (b) and (d) Cross-sectional views of the internal structures cut perpendicular to the viewpoint. The score indicates the chamfer distance (×10³, lower is better) between the ground-truth and estimated particles. Here, two objects appear to be identical in static images (1) but actually have different internal structures (3). (1) A static 3D representation learning model cannot distinguish the difference in internal structures (b)(d) because there is no difference in appearance in static images (a)(c). (2) To overcome this limitation, we address SfC. As shown in (a) and (c), changes in shape and appearance during collision are influenced by the internal structure. We utilize this property to identify the internal structure of the object. Although it is still difficult to identify perfectly owing to its ill-posed nature, the proposed method
has succeeded in capturing the bias in the location of the holes (b)(d).]

For example, in Figure 1, the two objects have different internal structures, as shown in Figure 1(3)(b) and (3)(d). However, they are identical in the static images, as shown in Figure 1(1)(a) and (1)(c). Consequently, a standard static neural 3D representation learning model (e.g., the voxel-based NeRF [63] used in this example) learns the same internal structures (Figure 1(1)(b) and (1)(d)) and ignores the differences in the internal structures. This misestimation of the internal structure can cause issues in practical applications, such as reproducing and simulating objects in virtual and augmented reality and controlling forces during interactions with objects in robotics.

To overcome this limitation, we address a novel task called Structure from Collision (SfC), the objective of which is to identify the structure (including the invisible internal structures) of an object based on observations at collision. This is motivated by the observation that changes in appearance and shape during collisions are influenced by the internal structures. For example, as shown in Figure 1(2)(a) and (2)(c), when a hole exists inside the sphere on the left side (Figure 1(3)(b)) or on the upper side (Figure 1(3)(d)), the sphere crumples when it hits the ground. We use this property to identify the internal structure of the object.

We formulated SfC as optimizing the invisible internal structures of an object under physical, appearance (i.e., visible external structure)-preserving, and keyframe constraints. Specifically, we implemented this approach using SfC-NeRF, which consists of four components.

(1) Physical constraints. SfC is ill-posed because the observable data represent just one of many possible solutions. To address this issue, we narrow the solution space by incorporating physical constraints, specifically by using physics-augmented continuum NeRF (PAC-NeRF) [36].

(2) Appearance-preserving constraints. Owing to the recent advancements in neural 3D representations, learning visible external structures is easier than learning invisible internal structures. Accordingly, we first learn the external structures using a standard static neural 3D representation learning model (voxel-based NeRF [63] in practice) on the first frame (Figure 1(1)). We then optimize the internal structures using a video sequence (Figure 1(2)). In the second step, to avoid damaging the external structures learned in the first step when fitting the entire video, we introduce appearance-preserving constraints that optimize the internal structures while preserving the external structures.

(3) Keyframe constraints. In a collision video, a specific frame (e.g., immediately after a collision) is effective for explaining the shape change caused by the collision. Accordingly, we incorporate keyframe constraints to strengthen shape learning in the keyframe.

(4) Volume annealing. To avoid becoming stuck in undesirable local optima owing to the existence of multiple solutions, we developed volume annealing, in which the global optimum is searched for through an annealing process that repeatedly reduces and expands the volume.

We comprehensively evaluated the proposed method using a dataset containing 115 objects with diverse structures (i.e., various cavity shapes, locations, and sizes) and material properties. Our results reveal the properties of SfC and demonstrate the effectiveness of SfC-NeRF. Figure 1(2)(b) and (d) show examples of the results obtained using SfC-NeRF. Although it is challenging to perfectly match the internal structures to the ground truth owing to the high degrees of freedom in the solution, SfC-NeRF successfully identified the deviation of the hole inside the sphere.

The contributions of this study are threefold:
• We address a novel task called SfC, whose aim is to identify structures (including the internal structures) from the appearance changes at collision.
• To solve SfC, we propose SfC-NeRF, which consists of four components: physical, appearance-preserving, and keyframe constraints, and volume annealing.
• Through extensive experiments on 115 objects, we demonstrate the effectiveness of SfC-NeRF while clarifying the properties of SfC.
We also provide detailed results and implementation details in the Appendices. Video samples are available at the project page.

2. Related work
Neural 3D representations. Learning 3D representations is a fundamental problem in computer vision and graphics. Recent advancements in neural 3D representations, such as NeRF [47] and 3DGS [32], have led to significant breakthroughs, and various derivative models have been proposed. These models can be roughly divided into three categories, based on their objectives: (1) improvement of the quality of rendered images or reconstructed 3D data [4-6, 24, 27, 37, 39, 43, 48, 67, 74, 79, 80]; (2) improvement of efficiency, i.e., speeding up and reducing memory usage in training or inference [3, 10, 12, 16, 19, 22, 23, 30, 34, 35, 42, 44, 49-51, 57, 58, 60, 62, 63, 68, 71, 78]; and (3) incorporation of other modules or functionalities, such as generative models [7-9, 11, 14, 18, 20, 29, 40, 52, 54, 59, 61, 64, 65, 70, 73, 77, 81] and physics/dynamics [1, 2, 13, 15, 17, 21, 28, 31, 36, 38, 45, 46, 53, 55, 56, 66, 72, 75, 76]. This study focuses on the third category, aiming to discover internal structures based on dynamic observations under physical constraints. Because these models are mutually developed, applying the proposed approach to other models presents an interesting direction for future research.

Dynamic neural 3D representations. Dynamic neural 3D representations can be classified into two categories, based on whether they incorporate physics: (1) non- (or weakly) physics-informed models [17, 38, 45, 46, 53, 55, 66, 75, 76] and (2) physics-informed models [1, 2, 13, 15, 21, 28, 31, 36, 56, 72]. The first category offers flexibility and can be applied to scenes or objects that are difficult to describe physically. However, it requires a large amount of training data and lacks interpretability because of its fully data-driven, black-box nature. By introducing physics, the second category provides better interpretability and narrows the solution space.
is searched for through an annealing process that repeatedly reduces and expands the volume. We comprehensively evaluated the proposed method us- ing a dataset containing 115objects with diverse structures (i.e., various cavity shapes, locations, and sizes) and mate- rial properties. Our results reveal the properties of SfCand demonstrate the effectiveness of SfC-NeRF . Figure 1(2)(b) and (d) show examples of the results obtained using SfC-NeRF . Although it is challenging to perfectly match the in- ternal structures to the ground truth owing to the high de- grees of freedom in the solution, SfC-NeRF successfully identified the deviation of the hole inside the sphere. The contributions of this study are threefold: • We address a novel task called SfC, whose aim is to iden- tify structures (including the internal structures) from the appearance changes at collision. • To solve SfC, we propose SfC-NeRF , which consists of four components: physical ,appearance-preserving , and keyframe constraints , and volume annealing . • Through extensive experiments on 115 objects, we demonstrate the effectiveness of SfC-NeRF while clarify- ing the properties of SfC. We also provide detailed results and implementation details in the Appendices. Video samples are available at the project page. 2. Related work Neural 3D representations. Learning 3D representations is a fundamental problem in computer vision and graphics. Recent advancements in neural 3D representations, such as NeRF [47] and 3DGS [32], have lead to significant break- throughs, and various derivative models have been pro- posed. These models can be roughly divided into three cate- gories, based on their objectives. (1) Improvement of quality of rendered images or reconstructed 3D data [4–6, 24, 27, 37, 39, 43, 48, 67, 74, 79, 80], (2) improvement of efficiency , i.e., speeding up and reducing memory usage in training or inference [3, 10, 12, 16, 19, 22, 23, 30, 34, 35, 42, 44, 49– 51, 57, 58, 60, 62, 63, 68, 71, 78], and (3) incorporation of other modules or functionalities , such as generative mod- els [7–9, 11, 14, 18, 20, 29, 40, 52, 54, 59, 61, 64, 65, 70, 73, 77, 81] and physics/dynamics [1, 2, 13, 15, 17, 21, 28, 31, 36, 38, 45, 46, 53, 55, 56, 66, 72, 75, 76]. This study focuses on the third category, aiming to discover internal structures based on dynamic observations under physical constraints. Because these models are mutually developed, applying the proposed approach to other models presents an interesting direction for future research. Dynamic neural 3D representations. Dynamic neural 3D representations can be classified into two categories, based on whether they incorporate physics. (1) Non- (or weak) physics-informed models [17, 38, 45, 46, 53, 55, 66, 75, 76] and(2) physics-informed models [1, 2, 13, 15, 21, 28, 31, 36, 56, 72]. The first category offers flexibility, and can be applied to scenes or objects that are difficult to describe physically. However, it requires a large amount of train- ing data and lacks interpretability because of its fully data- driven black-box nature. By introducing physics, the sec- ond category provides a better interpretability and narrows the solution space.
However, they lose flexibility and are difficult to apply to scenes or objects that cannot be ex- plained by physics. This study adopts a physics-informed model (the second-category strategy) because SfCis an ill- posed problem, and physics plays an important role in nar- 2 rowing the solution space. However, in the future, it would be interesting to explore how the first-category strategy can be used by expanding data and developing new theories. Physics-informed neural 3D representations. Physics- informed neural 3D representations can be divided into two categories based on the problem setting. (1) Forward engi- neering [15, 28, 56, 72], where a physics-informed model is optimized to fit static scenes or objects, and then physics- informed dynamic simulations or interactive manipulations are performed. In most cases, the inside of the object is assumed to be filled , and internal factors, such as physical properties, are manually adjusted to achieve visually plau- sible results. (2) Reverse engineering [1, 2, 13, 21, 31, 36], which focuses on system identification—identifying inter- nal factors (e.g., physical properties) from dynamic obser- vations (i.e., video sequences). This study falls into the sec- ond category because it aims to reverse engineer the inter- nal structure , which is hidden but essential for describing the system, from collision videos. Reverse engineering is generally ill-posed because the observable data represent only one of the many possible so- lutions. To address this issue, the methods in this category typically impose assumptions on internal factors that are not optimized. Previous studies have made various assumptions regarding the internal structure, which is the main focus of this study. For example, [13] assumes that an object, such as smoke, is translucent , allowing part of the internal struc- ture to be visible . Other studies [1, 2, 21, 31, 36] consid- ered non-transparent objects but assumeed that the interior isfilled . Consequently, non-transparent and unfilled ob- jects have not been sufficiently explored. Therefore, this study focused on such objects. It is important to note that, as with conventional problems, solving SfCis challenging without making any assumptions. In this study, we assumed that certain internal factors, such as physical properties, are known in advance. Even with this assumption, as shown in Figure 1 (where physical properties, such as mass, Young’s modulus, and density, are identical), multiple solutions still exist, making SfCa challenging problem. Details of the problem settings are discussed in Section 3.1. 3. Method 3.1. Problem statement First, we define the SfC problem. Given a set of multi- view videos in which objects collide (e.g., Figure 1(2)(a) and (2)(c)), the objective of SfCis to identify the structure of the object, including its invisible internal structure, based on the appearance changes before and after the collision. Formally, the training data, i.e., a set of multiview videos, are defined as a collection of ground-truth color observa- tions ˆC(r, t). Here, r∈R3is a camera ray defined as r(s) =o+sd, where o∈R3is the camera origin, d∈S2 is the view direction, and s∈[sn, sf]is the distance from o. During training, ris sampled from ˆR, which is a collectionof camera rays
in the training dataset. t∈ {t0, . . . , t N−1} represents the time, where Nis the total number of frames. Given these data, we aim to estimate the 3D structure (both external and internal ones) of the object PP(t0), which cor- responds to the ground truth ˆPP(t0). Here, we represent the 3D structures as particle sets, PP(t0)andˆPP(t0), as shown in Figure 1(b) and (d). During training, only the external ap- pearance ˆC(r, t)is observed; ˆPP(t0), which includes the internal structure, is not observable. As discussed in Sections 1 and 2, SfC is an ill-posed problem with multiple solutions. Internal structures and physical properties, such as Young’s modulus, have a mutu- ally dependent relationship because both can explain the re- lationship between strain and stress. For example, a highly elastic object can be created either by making it hollow or by using soft materials. To address this issue, PAC- NeRF [36] optimizes physical properties by assuming that the inside of the object is filled . In contrast, we address a complementary problem, namely optimizing the internal structure based on the assumption that the physical prop- erties are known . Specifically, we assume that the phys- ical properties related to the material (e.g., Young’s mod- ulus ˆE, Poisson’s ratio ˆν, and density ˆρ) and mass ˆmare known. Even with this assumption, SfCremains a challeng- ing problem because multiple internal structures can satisfy the same set of physical properties, as shown in Figure 1. 3.2. Preliminary: PAC-NeRF As explained in the previous subsection, the problem set- tings differ between the PAC-NeRF study [36] and this study. However, because the proposed model uses PAC- NeRF to describe the physics, we briefly review PAC-NeRF here. PAC-NeRF is a variant of NeRF that bridges the Eule- rian grid-based scene representation [63] with a Lagrangian particle-based differentiable physical simulation [26] for continuum materials, such as elastic materials, plasticine, sand, and fluids. PAC-NeRF obtains this functionality us- ing three components: a continuum NeRF, a particle–grid interconverter, and a Lagrangian field. Continuum NeRF. Continuum NeRF is built on dynamic NeRF (NeRF for a dynamic scene) [55]. In the dynamic NeRF, the volume density and color fields for position x, view direction d, and time tare defined as σ(x, t)and c(x,d, t), respectively. On this basis, the color of each pixel C(r, t)is rendered using volume rendering [47]: C(r, t) =Zsf snTr(s, t)σ(r(s), t)c(r(s),d, t)ds, (1) Tr(s, t) = exp −Zs snσ(r(u), t)du . (2) This model can be trained using a pixel loss. Lpixel=1 NN−1X i=01 |ˆR|X r∈ˆR∥C(r, ti)−ˆC(r, ti)∥2 2.(3) 3 Dynamic NeRF is extended to continuum NeRF to describe the dynamics of continuum materials. This is achieved by applying the conservation laws to σ(x, t)andc(x,d, t): Dσ Dt= 0,Dc Dt=0, (4) whereDϕ Dt=∂ϕ ∂t+v· ∇ϕfor an arbitrary time-dependent fieldϕ(x, t). Here, vis the velocity field and obeys mo- mentum conservation for continuum materials: ρDv Dt=∇ ·T+ρg, (5) where ρis the physical density field, Tis the internal Cauchy stress tensor, and gis the gravitational accelera- tion. This equation can be solved differentially using the differentiable material point method
(DiffMPM) [26]. Particle–grid interconverter. DiffMPM is a particle-based method that conducts simulations in a Lagrangian space. However, these particles do not necessarily lie on the ray, which makes rendering difficult. Considering this, PAC- NeRF renders in an Eulerian grid space with voxel-based NeRF [63] and bridges these two spaces using grid-to- particle (G2P) and particle-to-grid (P2G) conversions: FP p≈X iwipFG i,FG i≈P pwipFP pP pwip, (6) where FX={σX(x, t),cX(x,d, t)}forX∈ {G, P}. Here, GandPrepresent the Eulerian and Lagrangian views, respectively. When FXis used with a subscript, that is, FX x(x∈ {i, p}), the subscripts iandpindicate the grid node and particle index, respectively. wipdenotes the weight of the trilinear shape function defined at iand evaluated at p. Lagrangian field. The physical simulation and rendering pipeline in PAC-NeRF proceeds as follows: (1) V olume densities and colors are initialized over the first frame of the video sequence in an Eulerian grid field, FG′(t0). Here, we use the superscript G′to distinguish FG′fromFGused in Step (4). (2) Using the G2P process, FG′(t0)is converted into a Lagrangian particle field, FP(t0). In this step, parti- clesPP(t0)are sampled at intervals of half the grid, that is, ∆x 2(where ∆xis the grid size), with random fluctuations. The alpha value (or amount of opacity) αP pis calculated for each particle using αP p= 1−exp(−softplus (σP p)), and a particle is removed if αP p< ϵ (ϵ= 10−3in practice). (3) The particle field in the next step, FP(t1), is calculated fromFP(t0)using DiffMPM [26], where t1=t0+δt, and δtis the duration of the time step. Similarly, the particle field at t,FP(t), is calculated for t∈ {t0, . . . , t N−1}. (4) Using the P2G process, FP(t)is converted into an Eulerian grid field, FG(t). (5)C(r, t)is rendered based on FG(t) by using voxel-based volume rendering [63].During training, two-step optimization is conducted. (i) FG′(t0)is initially optimized using the first frame of the video sequence by conducting processes (1)–(5) for t= t0. (ii) Physical properties, such as the Young’s mod- ulusEand Poisson’s ratio ν, are optimized for the en- tire video sequence by conducting processes (1)–(5) for t∈ {t0, . . . , t N−1}. In both optimizations, Lpixel(Equa- tion 3) is used as the objective function. 3.3. Proposal: SfC-NeRF Similar to PAC-NeRF, SfC-NeRF performs two-step opti- mization, as shown in Figure 2. The first-step optimiza- tion (Figure 2(i)) is the same as that in PAC-NeRF, that is, FG′(t0)is initially optimized using the first frame of the video sequence. In this step, the filled object is learned, as shown in Figure 1(1). In contrast, the second step of the optimization (Figure 2(ii)) differs because of the dif- ference in the optimization target. In the PAC-NeRF, the physical properties are optimized in this step, whereas in theSfC-NeRF , the internal structure is optimized. Specif- ically, as explained in the previous section, we obtain par- ticlesPP(t0)based on σP(t0), which is calculated from σG′(t0)(Steps (1) and (2)). Therefore, we select σG′(t0) as the optimization target.3In particular, we formulate SfC as a problem of optimizing σG′(t0)under physical ,appear- ance (i.e., external structure)-preserving , and keyframe con- straints , along
with volume annealing . Physical constraints. As discussed in Section 3.1, we as- sume that the physical properties related to the material (e.g., Young’s modulus ˆE, Poisson’s ratio ˆν, and density ˆρ) and mass ˆmare known. We utilize them to narrow the solution space of SfC. Physical constraints on material properties. We can reflect material-specific physical properties (e.g., Young’s modulus ˆE, Poisson’s ratio ˆν, and density ˆρ) explicitly when con- structing DiffMPM [26]. Motivated by this fact, we opti- mize σG′(t0)under the explicit material-specific physical constraints imposed by DiffMPM . Physical constraints on mass. Unlike physical material properties, mass is not determined only by the material and varies depending on the individual objects. Therefore, in- stead of explicitly representing the mass in DiffMPM, we constrain the mass using a mass loss . Lmass=∥log10(m)−log10( ˆm)∥2 2, (7) m=X p∈PP(t0)ˆρ·∆x 23 ·αP p, (8) 3Note that Lagrangian particle optimization (LPO) [31] also consid- ers a similar optimization (i.e., optimizing FP(t0)orFG′(t0)through a video sequence) for few-shot (sparse view) learning. However, it aims to compensate for the external structure where the viewpoint is missing and has not sufficiently considered the components necessary for estimating the internal structures, which are discussed in the following paragraphs. We demonstrate the limitations of LPO in our experiments (Section 4). 4 FG(tk) FP(tk) G2P P2GrenderingV oxel volume G2P P2GrenderingV oxel volume P2G renderingV oxel volume Ground truth DiffMPM(i) Static optimization (ii) Dynamic optimizationFG(t0) FP(t0) FG(t0) FP(t0)FG(t0) FG(t0)samplingRandom samplingRandom ˆmm LmassLpixel Lpixel LpixelLpixel0 LpixelkLdepth 0 ×λmass×λpres ×λkey×λpreswdepth Figure 2. Optimization pipelines of SfC-NeRF. (i) The grid field FG′(t0)is initially optimized using the first frame of the video sequence. (ii) Subsequently, the structure (i.e., volume density σG′(t0)∈ FG′(t0)) of the object is optimized through the entire video sequence with physical constraints ( Lmassand DiffMPM), appearance-preserving constraints (i.e., Lpixel0andLdepth0), and keyframe constraints ( Lpixelk) along with a standard pixel loss ( Lpixel). where mandˆmare the estimated and ground-truth masses, respectively. In Equation 8, mis computed by summarizing the mass of each particle indexed by p∈ PP(t0), where the mass of each particle is given by the product of the physical density ˆρ, the unit volume of a particle ∆x 23, and the alpha value αP p. In Equation 7, we employ a logarithmic scale to prioritize scale matching. Appearance-preserving constraints. As mentioned above, we use two-step optimization: (i) FG′is initially optimized using the first frame of the video sequence (Fig- ure 2(i)). (ii) σG′is optimized through a video sequence (Figure 2(ii)). In Step (ii), the external structure (or surface) learned in Step (i) does not need to be changed, consider- ing that learning the external structure is easier than learning the internal structure. However, the physical constraints dis- cussed above are not sufficient to satisfy this requirement. Hence, we introduced appearance-preserving constraints at both the loss and training scheme levels. Appearance-preserving loss. The standard pixel loss (Equa- tion 3) treats the loss for each frame equally. This is insuffi- cient to prevent the external structure, which is well-learned in Step (i), from changing as a result of the fitting of the en- tire video sequence.
Hence, we employ a pixel-preserving loss that preserves the appearance of the initial frame:

$$L_{\mathrm{pixel}_0} = \frac{1}{|\hat{\mathcal{R}}|} \sum_{\mathbf{r} \in \hat{\mathcal{R}}} \|C(\mathbf{r}, t_0) - \hat{C}(\mathbf{r}, t_0)\|_2^2. \tag{9}$$

This is the variant of the pixel loss (Equation 3) obtained when N = 1. Because constraints on the 2D projection plane alone are insufficient for preserving the 3D structure (e.g., objects with reversed concavity may be learned), we also incorporate a depth-preserving loss to encourage preservation of the depth of the initial frame:

$$L_{\mathrm{depth}_0} = \frac{1}{|\hat{\mathcal{R}}|} \sum_{\mathbf{r} \in \hat{\mathcal{R}}} \left( \|\Delta_h Z(\mathbf{r}, t_0) - \Delta_h \tilde{Z}(\mathbf{r}, t_0)\|_2^2 + \|\Delta_v Z(\mathbf{r}, t_0) - \Delta_v \tilde{Z}(\mathbf{r}, t_0)\|_2^2 \right), \tag{10}$$

where $Z(\mathbf{r}, t_0)$ and $\tilde{Z}(\mathbf{r}, t_0)$ are the depths predicted by the current model and by the model before Step (ii), respectively. We use $\tilde{Z}(\mathbf{r}, t_0)$ because the ground-truth depth is not observable. $Z(\mathbf{r}, t_0)$ is calculated by $Z(\mathbf{r}, t_0) = \int_{s_n}^{s_f} T_r(s, t)\,\sigma(\mathbf{r}(s), t)\,s\,ds$, and $\tilde{Z}(\mathbf{r}, t_0)$ is calculated in a similar manner. $\Delta_h$ and $\Delta_v$ are operations that compute the horizontal and vertical differences between adjacent pixels, respectively. We compare these differences rather than the raw depths to mitigate the negative effects of depth estimation errors.

Appearance-preserving training. Ideally, when an object is non-transparent, its appearance should not change even if the internal volume density changes. However, in preliminary experiments, we found it difficult to retain the appearance learned in Step (i) through a simple application of the appearance-preserving losses. This motivated us to employ appearance-preserving training, that is, reoptimizing $F^{G'}(t_0)$ using the first frame of the video sequence every time after optimizing $\sigma^{G'}(t_0)$ over the entire video sequence.
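A minimal sketch of the depth-preserving term in Equation (10), operating on (H, W) depth maps; the mean-based normalization is our simplification:

```python
import numpy as np

def depth_preserving_loss(Z: np.ndarray, Z_ref: np.ndarray) -> float:
    """Eq. (10): match horizontal/vertical finite differences of the current
    depth map Z against the reference Z_ref rendered before Step (ii)."""
    dh, dh_ref = np.diff(Z, axis=1), np.diff(Z_ref, axis=1)   # Delta_h
    dv, dv_ref = np.diff(Z, axis=0), np.diff(Z_ref, axis=0)   # Delta_v
    return float(((dh - dh_ref) ** 2).mean() + ((dv - dv_ref) ** 2).mean())
```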
structure, focusing on the cav- ity sizes (Experiment I in Section 4.2) and locations (Ex- periment II in Section 4.3). We then explored the effect of the material properties in Experiment III (Section 4.4). The main results are summarized here, with the detailed results and implementation details provided in the Appen- dices. Video samples are available at the project page. Dataset. Because SfCis a new task and there is no es- tablished dataset, we created a new dataset called the SfC dataset based on the protocol of the PAC-NeRF study [36]. We prepared 115objects by changing their external shapes, internal structures, and materials. Figure 3 shows examples (a) Sphere (b) Cube (c) Bicone (d) Cylinder (e) Diamond Figure 3. Examples of the data in the SfC dataset. of the data in this dataset. First, we prepared five exter- nal shapes: sphere ,cube ,bicone ,cylinder , and diamond . Regarding the internal structure and material, we set the de- fault values as follows: the cavity size rate for filled ob- ject,sc, was set to (2 3)3, the cavity location, lc, was set at the center, and the material was defined as an elastic ma- terial with Young’s modulus ˆE= 106and Poisson’s ra- tioˆν= 0.3. Under these default properties, one of them was changed as follows: (a) Three differently sized cavities: sc∈ {0,(1 2)3,(3 4)3}. (b) Four different cavity locations: center lcis moved {up,down ,left,right}. (c) Eight different elastic materials: those with four different Young’s moduli ˆE∈ {2.5×105,5×105,2×106,4×106}and four different Poisson’s ratios ˆν∈ {0.2,0.25,0.35,0.4}. Seven different materials: two Newtonian fluids, two non-Newtonian flu- ids, two plasticines, and one sand. Their physical properties were derived from the PAC-NeRF dataset [36]. Thus, we created 5external shapes ×(1default + 3sizes + 4loca- tions + (8 + 7) materials) = 115 objects. Following the PAC-NeRF study [36], ground-truth data were generated using the MLS-MPM simulator [25], where each object fell freely under the influence of gravity and col- lided with the ground plane. Images were rendered under various environmental lighting conditions and ground tex- tures using a photorealistic renderer. Each scene was cap- tured from 11 viewpoints, including an object, using cam- eras spaced in the upper hemisphere. Preprocessing. Following the PAC-NeRF study [36], we made two assumptions and performed preprocessing to fo- cus on solving SfC. (1) The intrinsic and extrinsic parame- ters of the cameras are known. (2) Collision objects, such as the ground plane, are known. As mentioned in [36], the latter can be easily estimated from observed images. For preprocessing, we applied video matting [41] to exclude static background objects, and concentrated the computa- tion on the object of interest. This process provides a back- ground segmentation mask ˆB(r, t). NeRF can estimate a background segmentation mask B(r, t)using B(r, t) = 1− Tr(sf, t). Taking advantage of this property, we also used a background loss Lbg=∥B(r, t)−ˆB(r, t)∥2 2when calculat- ing the pixel-related losses ( Lpixel,Lpixel0, andLpixelk) with a weighting hyperparameter of wbg. In the experiments, this technique was applied to all the models. Comparison models. Because there is no established method for
Comparison models. Because there is no established method for SfC, we adapted previous methods to make them suitable for SfC. Specifically, we used grid optimization (GO) and Lagrangian particle optimization (LPO) [31] as baselines. GO and LPO are improved variants of PAC-NeRF that optimize F_G′(t0) and F_P(t0), respectively, using L_pixel across a video sequence for few-shot learning. For a fair comparison with SfC-NeRF, GO and LPO were trained using the ground-truth physical properties. Although the original GO and LPO do not use the mass information for training, it may not be fair to apply it solely to the proposed method. Therefore, we also examined GO_mass and LPO_mass, extensions of GO and LPO that incorporate L_mass. Furthermore, as an ablation study, we compared SfC-NeRF with various variants: SfC-NeRF−mass, SfC-NeRF−APL, SfC-NeRF−APT, SfC-NeRF−key, and SfC-NeRF−VA, in which the mass loss (L_mass),⁴ the appearance-preserving losses (L_pixel0 and L_depth0), the appearance-preserving training, the keyframe loss (L_pixelk), and the volume annealing were ablated, respectively. We also examined Static, a model trained using only the first frame of a video sequence, to assess the effect of optimization across videos.

⁴As explained in Appendix C.3, the mass information is not only used in the loss but also in adjusting the learning rate. In this experiment, we ablated both to simulate a case in which the mass is unknown.

Figure 4. Comparison of learned structures for sphere objects with sc = (2/3)³. The score under the particles indicates the CD (×10³ ↓). (c)–(f) GO/LPO failed to determine optimal learning directions. (g)–(k) The ablated models failed to avoid improper solutions. (l) The full model overcomes these issues and achieves the best CD.

Evaluation metric. As mentioned in Section 3.1, we use particles P_P(t0) to represent the structure (including the internal structure) of an object and estimate P_P(t0) so that it matches the ground truth P̂_P(t0). Therefore, we evaluate the model by measuring the distance between P_P(t0) and P̂_P(t0) using the chamfer distance (CD). The smaller the value, the better the match.
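For reference, a minimal NumPy sketch of a symmetric chamfer distance between two particle sets is shown below. The paper does not spell out its exact averaging and squaring convention, so mean squared nearest-neighbor distances in both directions are assumed; the function name is hypothetical.

```python
import numpy as np

def chamfer_distance(p, q):
    # Symmetric chamfer distance between particle sets p (N, 3) and q (M, 3).
    # O(N*M) memory, fine for small sets; mean squared nearest-neighbor
    # distances in both directions are assumed as the convention.
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((100, 3))
    print(chamfer_distance(pts, pts))  # identical sets -> 0.0
```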
4.2. Experiment I: Influence of cavity size

First, we investigated the influence of the cavity size inside the object. Table 1 summarizes the quantitative results, and the qualitative results are presented in Figure 4, Appendix B.1, and the project page. Our findings are threefold.

(1) Limitations of GO and LPO [31]. GO, a simple voxel grid optimization using L_pixel, failed to determine an appropriate optimization direction, which led to the deterioration of P_P(t0) as it fit the video. LPO showed a slight improvement by moving particles within the physical constraints via DiffMPM. However, its effectiveness was limited because significant particle movement could alter the unit volume density, making it difficult to find the optimal internal structure. Furthermore, in both GO and LPO, using mass knowledge with L_mass did not improve the performance, possibly because they lack appearance-preserving mechanisms, and forcing m close to m̂ can damage the overall structure.

(2) Effectiveness of each component. The ablation study confirms the importance of each model component.

(3) Increased difficulty with increased cavity size. Because optimization begins in the filled state, large cavity sizes require significant volume changes. We believe that this is the key reason for the deterioration in performance as the cavity size increases.

sc             0      (1/2)³  (2/3)³  (3/4)³  Avg.
Static         0.093  0.294   0.920   1.574   0.720
GO             0.091  0.301   0.941   1.586   0.730
GO_mass        0.081  0.319   1.244   2.291   0.984
LPO            0.092  0.284   0.841   1.406   0.656
LPO_mass       0.087  0.284   0.876   1.477   0.681
SfC-NeRF−mass  0.089  0.226   0.550   1.148   0.503
SfC-NeRF−APL   0.106  0.423   0.898   1.326   0.688
SfC-NeRF−APT   0.085  0.261   0.332   0.661   0.335
SfC-NeRF−key   0.082  0.127   0.211   0.325   0.186
SfC-NeRF−VA    0.146  0.293   0.370   0.456   0.316
SfC-NeRF       0.081  0.122   0.195   0.262   0.165

Table 1. Comparison of CD (×10³ ↓) when varying the cavity size sc. The scores were averaged over the five external shapes.

4.3. Experiment II: Influence of cavity location

Next, we examined the influence of the cavity location. Table 2 summarizes the quantitative results, and the qualitative results are presented in Appendix B.1 and the project page. Similar to Experiment I, we observed two main findings: (1) limitations of GO and LPO, and (2) effectiveness of each component. In addition, we discuss (3) how well SfC-NeRF captured the cavity location. A simple CD is insufficient for this evaluation because it does not account for positional deviations. Therefore, we calculated the anti-chamfer distance (ACD), which measures the chamfer distance between the predicted particles P_P(t0) and the ground-truth particles P̃_P(t0) in which the cavity is placed on the opposite side. This distance is expected to be longer than the original CD. The results confirm that the original CD is smaller than the ACD. These findings suggest that SfC-NeRF can capture the positional deviation of a cavity.

lc             left   right  up     down   Avg.
Static         0.841  0.842  0.815  0.813  0.828
GO             0.874  0.853  0.878  0.870  0.869
GO_mass        1.349  1.334  1.104  1.001  1.197
LPO            0.791  0.787  0.796  0.743  0.779
LPO_mass       0.824  0.817  0.828  0.775  0.811
SfC-NeRF−mass  0.513  0.485  0.705  0.479  0.545
SfC-NeRF−APL   0.845  0.783  0.805  0.583  0.754
SfC-NeRF−APT   0.624  0.428  0.384  0.464  0.475
SfC-NeRF−key   0.308  0.296  0.307  0.313  0.306
SfC-NeRF−VA    0.542  0.596  0.333  0.385  0.464
SfC-NeRF       0.303  0.258  0.274  0.291  0.281
  (ACD)        (0.367) (0.431) (0.448) (0.417) (0.416)

Table 2. Comparison of CD (×10³ ↓) when varying the cavity location lc. The score in parentheses indicates the ACD (×10³).
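A hedged sketch of this CD-versus-ACD check is shown below, reusing chamfer_distance from the earlier sketch. How the opposite-cavity ground truth is generated is left to the benchmark and is not specified here.

```python
def cavity_location_captured(p_pred, p_gt, p_gt_opposite):
    # p_gt_opposite: ground-truth particles whose cavity is placed on the
    # opposite side (produced by the benchmark; generation not shown).
    # The location is considered captured when CD < ACD.
    cd = chamfer_distance(p_pred, p_gt)
    acd = chamfer_distance(p_pred, p_gt_opposite)
    return cd < acd, cd, acd
```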
4.4. Experiment III: Influence of material

Finally, we investigated the influence of the material properties. Table 3 summarizes the quantitative results for elastic materials when Ê and ν̂ were varied. Table 4 summarizes the quantitative results for the other materials. Appendix B.2 and the project page present the qualitative results. These results demonstrate that SfC-NeRF improves the structure estimation compared with the initial state, regardless of the material. However, the rate of improvement depends on the material used. For example, when an object is soft, its shape changes significantly, making it difficult to capture the dynamic changes. In contrast, when the object is hard, there are fewer shape changes, which provides limited cues for estimating the internal structure and makes learning more difficult. Thus, the proposed method is most effective when the object is moderately soft or hard. As an initial approach to address SfC, we proposed a general-purpose method in this study. However, in future studies, it would be interesting to develop methods that are specifically tailored to individual materials.

Ê         2.5×10⁵  5.0×10⁵  1.0×10⁶  2.0×10⁶  4.0×10⁶
Static    0.920    0.921    0.920    0.920    0.920
SfC-NeRF  0.289    0.254    0.195    0.314    0.374

ν̂         0.2      0.25     0.3      0.35     0.4
Static    0.920    0.919    0.920    0.920    0.921
SfC-NeRF  0.196    0.198    0.195    0.207    0.224

Table 3. Comparison of CD (×10³ ↓) when varying Young's modulus Ê and Poisson's ratio ν̂.

          Newtonian  Non-Newtonian  Plasticine  Sand
Static    0.921      0.919          0.920       0.920
SfC-NeRF  0.196      0.218          0.230       0.222

Table 4. Comparison of CD (×10³ ↓) for various materials.

4.5. Application to future prediction

To demonstrate the practical importance of SfC, we investigated the effectiveness of SfC-NeRF for future prediction. Specifically, the first 14 frames were used for training and the subsequent 14 frames were used for evaluation. We compared SfC-NeRF, which optimizes the internal structures with fixed physical properties, with PAC-NeRF [36], which optimizes the physical properties with a fixed (filled) internal structure. Table 5 summarizes the results. SfC-NeRF outperformed PAC-NeRF in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) [69]. These results indicate that the optimization of the internal structure is crucial in practical scenarios.

           Internal structure  PSNR ↑  SSIM ↑
PAC-NeRF   Fixed (filled)      23.44   0.975
SfC-NeRF   Optimized           26.60   0.981

Table 5. Results of future prediction. The scores were averaged over all cavity sizes and locations for the 40 objects examined in Experiments I and II.
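A minimal evaluation sketch for this protocol is given below, assuming scikit-image is available and that frames are float RGB arrays in [0, 1]; the function name and frame format are our assumptions, not the paper's implementation.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def future_prediction_scores(pred_frames, gt_frames):
    # Average PSNR/SSIM of rendered future frames against ground truth;
    # each frame is assumed to be a float (H, W, 3) array in [0, 1].
    psnr = np.mean([peak_signal_noise_ratio(g, p, data_range=1.0)
                    for p, g in zip(pred_frames, gt_frames)])
    ssim = np.mean([structural_similarity(g, p, data_range=1.0,
                                          channel_axis=-1)
                    for p, g in zip(pred_frames, gt_frames)])
    return psnr, ssim
```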
5. Discussion

Based on the above experiments, we obtained promising results for SfC. However, the proposed method has some limitations. (1) Our approach assumes that the objects deform during collisions. Therefore, its performance depends on the type of material used. For example, it may be difficult to apply this method to metallic objects that do not deform. However, detecting small changes may help to overcome this issue. (2) Since SfC is a novel task, this study focused on evaluating its fundamental performance using simulation data, leaving validation with real data as a challenge for future research. To explore its potential use with real data, we examined its robustness against inaccurate physical properties. Table 6 presents the results when errors exist in the physical properties. A significant error (e.g., −30%) in ρ̂ causes a notable degradation owing to its negative impact on volume estimation in L_mass. However, in the other cases, the degradation is moderate. All the scores remain better than those of the baselines listed in Table 1 (e.g., 0.841 by LPO). These results indicate that the proposed method is robust against inaccurate physical properties. Additional challenges associated with real data are discussed in Appendix A.4.

Error rate           −30%   −20%   −10%   0%     10%    20%    30%
Young's modulus Ê    0.363  0.242  0.216  0.195  0.213  0.231  0.244
Poisson's ratio ν̂    0.240  0.231  0.208  0.195  0.200  0.214  0.236
Density ρ̂            0.798  0.533  0.289  0.195  0.207  0.259  0.308

Table 6. Comparison of CD (×10³ ↓) for inaccurate physical properties. In the 0% case, an elastic material with the default settings (sc = (2/3)³, lc = center, Ê = 10⁶, and ν̂ = 0.3) was used.

6. Conclusion

We introduced SfC to identify the invisible internal structure of an object, a task that remains challenging even with the latest neural 3D representations. We proposed SfC-NeRF as an initial model to address this challenge. SfC-NeRF solves SfC by optimizing the internal structures under physical, appearance-preserving, and keyframe constraints, along with volume annealing. As discussed in Section 5, the proposed method has certain limitations. Nonetheless, this study suggests a new direction for the development of neural 3D representations, and we believe that future developments in this field will overcome these limitations.

References

[1] Jad Abou-Chakra, Feras Dayoub, and Niko Sünderhauf. ParticleNeRF: A particle-based encoding for online neural radiance fields. In WACV, 2024.
[2] Jad Abou-Chakra, Krishan Rana, Feras Dayoub, and Niko Sünderhauf. Physically embodied Gaussian splatting: Embedding physical priors into a visual 3D world model for robotics. In CoRL, 2024.
[3] Benjamin Attal, Jia-Bin Huang, Michael Zollhoefer, Johannes Kopf, and Changil Kim. Learning neural light fields with ray-space embedding networks. In CVPR, 2022.
[4] Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P. Srinivasan. Mip-NeRF: A multiscale representation for anti-aliasing neural radiance fields. In ICCV, 2021.
[5] Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. Mip-NeRF 360: Unbounded anti-aliased neural radiance fields. In CVPR, 2022.
[6] Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. Zip-NeRF: Anti-aliased grid-based neural radiance fields. In ICCV, 2023.
[7] Eric R. Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-GAN: Periodic implicit generative adversarial networks for 3D-aware image synthesis. In CVPR, 2021.
[8] Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J. Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, and Gordon Wetzstein. Efficient geometry-aware 3D generative adversarial networks. In CVPR, 2022.
[9] Eric R. Chan, Koki Nagano, Matthew A. Chan, Alexander W. Bergman, Jeong Joon Park, Axel Levy, Miika Aittala, Shalini De Mello, Tero Karras, and Gordon Wetzstein. Generative novel view synthesis with 3D-aware diffusion models. In ICCV, 2023.
[10] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. TensoRF: Tensorial radiance fields. In ECCV, 2022.
[11] Rui Chen, Yongwei Chen, Ningxin Jiao, and Kui Jia. Fantasia3D: Disentangling geometry and appearance for high-quality text-to-3D content creation. In ICCV, 2023.
[12] Yihang Chen, Qianyi Wu, Weiyao Lin, Mehrtash Harandi, and Jianfei Cai. HAC: Hash-grid assisted context for 3D Gaussian splatting compression. In ECCV, 2024.
[13] Mengyu Chu, Lingjie Liu, Quan Zheng, Erik Franz, Hans-Peter Seidel, Christian Theobalt, and Rhaleb Zayer. Physics informed neural fields for smoke reconstruction with sparse data. ACM Trans. Graph., 41(4), 2022.
[14] Yu Deng, Jiaolong Yang, Jianfeng Xiang, and Xin Tong. GRAM: Generative radiance manifolds for 3D-aware image generation. In CVPR, 2022.
[15] Yutao Feng, Yintong Shang, Xuan Li, Tianjia Shao, Chenfanfu Jiang, and Yin Yang. PIE-NeRF: Physics-based interactive elastodynamics with NeRF. In CVPR, 2023.
[16] Sara Fridovich-Keil, Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plenoxels: Radiance fields without neural networks. In CVPR, 2022.
[17] Guy Gafni, Justus Thies, Michael Zollhöfer, and Matthias Nießner. Dynamic neural radiance fields for monocular 4D facial avatar reconstruction. In CVPR, 2021.
[18] Ruiqi Gao, Aleksander Holynski, Philipp Henzler, Arthur Brussee, Ricardo Martin-Brualla, Pratul Srinivasan, Jonathan T. Barron, and Ben Poole. CAT3D: Create anything in 3D with multi-view diffusion models. In NeurIPS, 2024.
[19] Stephan J. Garbin, Marek Kowalski, Matthew Johnson, Jamie Shotton, and Julien Valentin. FastNeRF: High-fidelity neural rendering at 200FPS. In ICCV, 2021.
[20] Jiatao Gu, Lingjie Liu, Peng Wang, and Christian Theobalt. StyleNeRF: A style-based 3D-aware generator for high-resolution image synthesis. In ICLR, 2022.
[21] Shanyan Guan, Huayu Deng, Yunbo Wang, and Xiaokang Yang. NeuroFluid: Fluid dynamics grounding with particle-driven neural radiance fields. In ICML, 2022.
[22] Peter Hedman, Pratul P. Srinivasan, Ben Mildenhall, Jonathan T. Barron, and Paul Debevec. Baking neural radiance fields for real-time view synthesis. In ICCV, 2021.
[23] Tao Hu, Shu Liu, Yilun Chen, Tiancheng Shen, and Jiaya Jia. EfficientNeRF: Efficient neural radiance fields. In CVPR, 2022.
[24] Wenbo Hu, Yuling Wang, Lin Ma, Bangbang Yang, Lin Gao, Xiao Liu, and Yuewen Ma. Tri-MipRF: Tri-Mip representation for efficient anti-aliasing neural radiance fields. In ICCV, 2023.
[25] Yuanming Hu, Yu Fang, Ziheng Ge, Ziyin Qu, Yixin Zhu, Andre Pradhana, and Chenfanfu Jiang. A moving least squares material point method with displacement discontinuity and two-way rigid body coupling. ACM Trans. Graph., 37(4), 2018.
[26] Yuanming Hu, Luke Anderson, Tzu-Mao Li, Qi Sun, Nathan Carr, Jonathan Ragan-Kelley, and Frédo Durand. DiffTaichi: Differentiable programming for physical simulation. In ICLR, 2020.
[27] Yingwenqi Jiang, Jiadong Tu, Yuan Liu, Xifeng Gao, Xiaoxiao Long, Wenping Wang, and Yuexin Ma. GaussianShader: 3D Gaussian splatting with shading functions for reflective surfaces. In CVPR, 2024.
[28] Ying Jiang, Chang Yu, Tianyi Xie, Xuan Li, Yutao Feng, Huamin Wang, Minchen Li, Henry Lau, Feng Gao, Yin Yang, and Chenfanfu Jiang. VR-GS: A physical dynamics-aware interactive Gaussian splatting system in virtual reality. ACM Trans. Graph., 78, 2024.
[29] Takuhiro Kaneko. AR-NeRF: Unsupervised learning of depth and defocus effects from natural images with aperture rendering neural radiance fields. In CVPR, 2022.
[30] Takuhiro Kaneko. MIMO-NeRF: Fast neural rendering with multi-input multi-output neural radiance fields. In ICCV, 2023.
[31] Takuhiro Kaneko. Improving physics-augmented continuum neural radiance field-based geometry-agnostic system identification with Lagrangian particle optimization. In CVPR, 2024.
[32] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3D Gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4), 2023.
[33] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[34] Andreas Kurz, Thomas Neff, Zhaoyang Lv, Michael Zollhöfer, and Markus Steinberger. AdaNeRF: Adaptive sampling for real-time rendering of neural radiance fields. In ECCV, 2022.
[35] Joo Chan Lee, Daniel Rho, Xiangyu Sun, Jong Hwan Ko, and Eunbyung Park. Compact 3D Gaussian representation for radiance field. In CVPR, 2024.
[36] Xuan Li, Yi-Ling Qiao, Peter Yichen Chen, Krishna Murthy Jatavallabhula, Ming Lin, Chenfanfu Jiang, and Chuang Gan. PAC-NeRF: Physics augmented continuum neural radiance fields for geometry-agnostic system identification. In ICLR, 2023.
[37] Yanyan Li, Chenyu Lyu, Yan Di, Guangyao Zhai, Gim Hee Lee, and Federico Tombari. GeoGaussian: Geometry-aware Gaussian splatting for scene rendering. In ECCV, 2024.
[38] Zhengqi Li, Simon Niklaus, Noah Snavely, and Oliver Wang. Neural scene flow fields for space-time view synthesis of dynamic scenes. In CVPR, 2021.
[39] Zhihao Liang, Qi Zhang, Wenbo Hu, Ying Feng, Lei Zhu, and Kui Jia. Analytic-Splatting: Anti-aliased 3D Gaussian splatting via analytic integration. In ECCV, 2024.
[40] Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3D: High-resolution text-to-3D content creation. In CVPR, 2023.
[41] Shanchuan Lin, Andrey Ryabtsev, Soumyadip Sengupta, Brian L. Curless, Steven M. Seitz, and Ira Kemelmacher-Shlizerman. Real-time high-resolution background matting. In CVPR, 2021.
[42] David B. Lindell, Julien N. P. Martel, and Gordon Wetzstein. AutoInt: Automatic integration for fast neural volume rendering. In CVPR, 2021.
[43] Jiayue Liu, Xiao Tang, Freeman Cheng, Roy Yang, Zhihao Li, Jianzhuang Liu, Yi Huang, Jiaqi Lin, Shiyong Liu, Xiaofei Wu, Songcen Xu, and Chun Yuan. MirrorGaussian: Reflecting 3D Gaussians for reconstructing mirror reflections. In ECCV, 2024.
[44] Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, and Christian Theobalt. Neural sparse voxel fields. In NeurIPS, 2020.
[45] Zhicheng Lu, Xiang Guo, Le Hui, Tianrui Chen, Min Yang, Xiao Tang, Feng Zhu, and Yuchao Dai. 3D geometry-aware deformable Gaussian splatting for dynamic view synthesis. In CVPR, 2024.
[46] Jonathon Luiten, Georgios Kopanas, Bastian Leibe, and Deva Ramanan. Dynamic 3D Gaussians: Tracking by persistent dynamic view synthesis. In 3DV, 2024.
[47] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020.
[48] Ben Mildenhall, Peter Hedman, Ricardo Martin-Brualla, Pratul P. Srinivasan, and Jonathan T. Barron. NeRF in the dark: High dynamic range view synthesis from noisy raw images. In CVPR, 2022.
[49] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph., 41(4), 2022.
[50] Thomas Neff, Pascal Stadlbauer, Mathias Parger, Andreas Kurz, Joerg H. Mueller, Chakravarty R. Alla Chaitanya, Anton Kaplanyan, and Markus Steinberger. DONeRF: Towards real-time rendering of compact neural radiance fields using depth oracle networks. Comput. Graph. Forum, 40(4), 2021.
[51] Simon Niedermayr, Josef Stumpfegger, and Rüdiger Westermann. Compressed 3D Gaussian splatting for accelerated novel view synthesis. In CVPR, 2024.
[52] Michael Niemeyer and Andreas Geiger. GIRAFFE: Representing scenes as compositional generative neural feature fields. In CVPR, 2021.
[53] Keunhong Park, Utkarsh Sinha, Jonathan T. Barron, Sofien Bouaziz, Dan B. Goldman, Steven M. Seitz, and Ricardo Martin-Brualla. Nerfies: Deformable neural radiance fields. In ICCV, 2021.
[54] Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall. DreamFusion: Text-to-3D using 2D diffusion. In ICLR, 2023.
[55] Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. D-NeRF: Neural radiance fields for dynamic scenes. In CVPR, 2021.
[56] Ri-Zhao Qiu, Ge Yang, Weijia Zeng, and Xiaolong Wang. Feature Splatting: Language-driven physics-based scene synthesis and editing. In ECCV, 2024.
[57] Daniel Rebain, Wei Jiang, Soroosh Yazdani, Ke Li, Kwang Moo Yi, and Andrea Tagliasacchi. DeRF: Decomposed radiance fields. In CVPR, 2021.
[58] Christian Reiser, Songyou Peng, Yiyi Liao, and Andreas Geiger. KiloNeRF: Speeding up neural radiance fields with thousands of tiny MLPs. In ICCV, 2021.
[59] Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. GRAF: Generative radiance fields for 3D-aware image synthesis. In NeurIPS, 2020.
[60] Vincent Sitzmann, Semon Rezchikov, Bill Freeman, Josh Tenenbaum, and Frédo Durand. Light field networks: Neural scene representations with single-evaluation rendering. In NeurIPS, 2021.
[61] Ivan Skorokhodov, Sergey Tulyakov, Yiqun Wang, and Peter Wonka. EpiGRAF: Rethinking training of 3D GANs. In NeurIPS, 2022.
[62] Mohammed Suhail, Carlos Esteves, Leonid Sigal, and Ameesh Makadia. Light field neural rendering. In CVPR, 2022.
[63] Cheng Sun, Min Sun, and Hwann-Tzong Chen. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In CVPR, 2022.
[64] Jiaxiang Tang, Zhaoxi Chen, Xiaokang Chen, Tengfei Wang, Gang Zeng, and Ziwei Liu. LGM: Large multi-view Gaussian model for high-resolution 3D content creation. In ECCV, 2024.
[65] Jiaxiang Tang, Jiawei Ren, Hang Zhou, Ziwei Liu, and Gang Zeng. DreamGaussian: Generative Gaussian splatting for efficient 3D content creation. In ICLR, 2024.
[66] Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer, Christoph Lassner, and Christian Theobalt. Non-rigid neural radiance fields: Reconstruction and novel view synthesis of a dynamic scene from monocular video. In ICCV, 2021.
[67] Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T. Barron, and Pratul P. Srinivasan. Ref-NeRF: Structured view-dependent appearance for neural radiance fields. In CVPR, 2022.
[68] Huan Wang, Jian Ren, Zeng Huang, Kyle Olszewski, Menglei Chai, Yun Fu, and Sergey Tulyakov. R2L: Distilling neural radiance field to neural light field for efficient novel view synthesis. In ECCV, 2022.
[69] Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process., 13(4), 2004.
[70] Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu. ProlificDreamer: High-fidelity and diverse text-to-3D generation with variational score distillation. In NeurIPS, 2023.
[71] Suttisak Wizadwongsa, Pakkapon Phongthawee, Jiraphon Yenphraphai, and Supasorn Suwajanakorn. NeX: Real-time view synthesis with neural basis expansion. In CVPR, 2021.
[72] Tianyi Xie, Zeshun Zong, Yuxing Qiu, Xuan Li, Yutao Feng, Yin Yang, and Chenfanfu Jiang. PhysGaussian: Physics-integrated 3D Gaussians for generative dynamics. In CVPR, 2024.
[73] Yang Xue, Yuheng Li, Krishna Kumar Singh, and Yong Jae Lee. GIRAFFE HD: A high-resolution 3D-aware generative model. In CVPR, 2022.
[74] Zhiwen Yan, Weng Fei Low, Yu Chen, and Gim Hee Lee. Multi-scale 3D Gaussian splatting for anti-aliased rendering. In CVPR, 2024.
[75] Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. Deformable 3D Gaussians for high-fidelity monocular dynamic scene reconstruction. In CVPR, 2024.
[76] Zeyu Yang, Hongye Yang, Zijie Pan, and Li Zhang. Real-time photorealistic dynamic scene representation and rendering with 4D Gaussian splatting. In ICLR, 2024.
[77] Taoran Yi, Jiemin Fang, Junjie Wang, Guanjun Wu, Lingxi Xie, Xiaopeng Zhang, Wenyu Liu, Qi Tian, and Xinggang Wang. GaussianDreamer: Fast generation from text to 3D Gaussians by bridging 2D and 3D diffusion models. In CVPR, 2024.
[78] Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, and Angjoo Kanazawa. PlenOctrees for real-time rendering of neural radiance fields. In ICCV, 2021.
[79] Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, and Andreas Geiger. Mip-Splatting: Alias-free 3D Gaussian splatting. In CVPR, 2024.
[80] Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. NeRF++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492, 2020.
[81] Shijie Zhou, Zhiwen Fan, Dejia Xu, Haoran Chang, Pradyumna Chari, Tejas Bharadwaj, Suya You, Zhangyang Wang, and Achuta Kadambi. DreamScene360: Unconstrained text-to-3D scene generation with panoramic Gaussian splatting. In ECCV, 2024.

Contents
1. Introduction
2. Related work
3. Method
  3.1. Problem statement
  3.2. Preliminary: PAC-NeRF
  3.3. Proposal: SfC-NeRF
4. Experiments
  4.1. Experimental setup
  4.2. Experiment I: Influence of cavity size
  4.3. Experiment II: Influence of cavity location
  4.4. Experiment III: Influence of material
  4.5. Application to future prediction
5. Discussion
6. Conclusion
A. Detailed analyses and discussions
  A.1. Detailed ablation studies
    A.1.1. Effect of each appearance-preserving loss
    A.1.2. Effect of keyframe selection
    A.1.3. Effect of background loss
  A.2. Extended experiments
    A.2.1. Experiment IV: Influence of collision angle
  A.3. Evaluation from multiple perspectives
    A.3.1. Evaluation through video sequences
    A.3.2. Evaluation per external shape
  A.4. Possible challenges with real data
B. Qualitative results
  B.1. Qualitative results of Experiments I and II
  B.2. Qualitative results of Experiment III
  B.3. Qualitative results of Experiment IV
C. Implementation details
  C.1. Dataset
  C.2. Model
  C.3. Training settings
  C.4. Evaluation metrics
A. Detailed analyses and discussions

A.1. Detailed ablation studies

Owing to space limitations in the main text, we conducted an ablation study that focused only on selected key components. In this appendix, we present detailed ablation studies that further assess the effectiveness of the proposed method from multiple perspectives. Specifically, we examine the effects of each appearance-preserving loss (Appendix A.1.1), keyframe selection (Appendix A.1.2), and background loss (Appendix A.1.3).

Lpixel0  Ldepth0  0      (1/2)³  (2/3)³  (3/4)³  Avg.
—        —        0.106  0.423   0.898   1.326   0.688
✓        —        0.105  0.142   0.334   0.342   0.231
—        ✓        0.079  0.313   0.314   0.287   0.248
✓        ✓        0.081  0.122   0.195   0.262   0.165

Table 7. Results of the detailed ablation study of APLs when the cavity size sc is varied. The score indicates CD (×10³ ↓). A checkmark (✓) indicates that the corresponding loss was used.

Lpixel0  Ldepth0  left   right  up     down   Avg.
—        —        0.845  0.783  0.805  0.583  0.754
✓        —        0.295  0.451  0.325  0.311  0.345
—        ✓        0.362  0.299  0.348  0.389  0.349
✓        ✓        0.303  0.258  0.274  0.291  0.281

Table 8. Results of the detailed ablation study of APLs when the cavity location lc is varied. The score indicates CD (×10³ ↓). A checkmark (✓) indicates that the corresponding loss was used.

A.1.1. Effect of each appearance-preserving loss

As explained in Section 3.3, regarding the appearance-preserving constraints, we adopted two appearance-preserving losses (APLs): the pixel-preserving
loss Lpixel0 (Equation 9) and the depth-preserving loss Ldepth0 (Equation 10). These losses help prevent the degradation of the external structure, which is effectively learned from the first frame of the video sequence, during the fitting process across the entire video sequence. In the ablation study presented in Sections 4.2 and 4.3, we ablated both losses simultaneously to examine the overall effect of the APLs. For a more detailed ablation study, we assessed the performance when each APL was individually ablated.

Results. Table 7 summarizes the results when the cavity size sc is varied, and Table 8 summarizes the results when the cavity location lc is varied. Our findings are threefold:

(1) No APL vs. either Lpixel0 or Ldepth0. Both SfC-NeRF with only Lpixel0 and SfC-NeRF with only Ldepth0 outperformed SfC-NeRF without APLs in all cases. These results indicate that both Lpixel0 and Ldepth0 effectively enhance the performance of SfC.

(2) Full APLs vs. either Lpixel0 or Ldepth0. SfC-NeRF with both Lpixel0 and Ldepth0 outperformed SfC-NeRF with only Lpixel0 and SfC-NeRF with only Ldepth0 in most cases. These results indicate that Lpixel0 and Ldepth0 contribute to improving the performance of SfC from different perspectives and are most effective when used together.

(3) Lpixel0 vs. Ldepth0. The superiority or inferiority of each loss depends on the cavity settings. This is related to the learnability of the 3D appearance, and further detailed analyses will be an interesting direction for future research.

Figure 5. Comparison of appearances for objects with different internal structures when t is varied within {t0, t6, t9}. (Rows: (a) sc = 0, (b) sc = (3/4)³, (c) lc = left, (d) lc = up; columns: (1) t0, (2) t6, (3) t9, (4) structure.)

A.1.2. Effect of keyframe selection

As discussed in Section 3.3, regarding the keyframe constraints, we employed a keyframe pixel loss Lpixelk (Equation 11) to effectively capture the shape changes caused by the internal structures. Specifically, we selected the frame immediately after the collision as the keyframe (k = 6, where k is the keyframe index) for the experiments described in the main text. An important question is whether this choice of k is optimal. To investigate this, we evaluated the change in performance when varying the value of k, specifically within {6, 9}. Figure 5 compares the appearances of objects with different internal structures in these keyframes. For reference, we also provide scores for the model without the keyframe pixel loss (denoted by k = None).

Results. Table 9 summarizes the results when the cavity size sc is varied, and Table 10 summarizes the results when the cavity location lc is varied. Our findings are twofold:

(1) Lpixel6 vs. Lpixel9. SfC-NeRF with Lpixel6 outperformed that with Lpixel9 in most cases. As shown in Figure 5, immediately after the collision (at t6), the difference in the shapes of the objects is noticeable. However, as time progressed after the collision (at t9), the difference in the shapes of the objects decreased, whereas the difference in their positions became more pronounced. We consider this to be the main reason why SfC-NeRF with Lpixel6 performed better than that with Lpixel9.

(2) Lpixel6/Lpixel9 vs. none. We found that SfC-NeRF with Lpixel6 or Lpixel9 outperformed SfC-NeRF without the keyframe
pixel loss in most cases. These results indicate that strategically weighting frames is more effective than treating all frames equally.

k     0      (1/2)³  (2/3)³  (3/4)³  Avg.
None  0.082  0.127   0.211   0.325   0.186
6     0.081  0.122   0.195   0.262   0.165
9     0.082  0.120   0.208   0.290   0.175

Table 9. Analysis of the effect of keyframe selection when the cavity size sc is varied. The score indicates CD (×10³ ↓). When k = None, the keyframe pixel loss Lpixelk was not used; when k ∈ {6, 9}, Lpixelk was used.

k     left   right  up     down   Avg.
None  0.308  0.296  0.307  0.313  0.306
6     0.303  0.258  0.274  0.291  0.281
9     0.296  0.296  0.313  0.303  0.302

Table 10. Analysis of the effect of keyframe selection when the cavity location lc is varied. The score indicates CD (×10³ ↓). When k = None, the keyframe pixel loss Lpixelk was not used; when k ∈ {6, 9}, Lpixelk was used.

A.1.3. Effect of background loss

As mentioned in the explanation of the preprocessing in Section 4.1, we use a background loss Lbg by leveraging the fact that a background segmentation has been obtained. For example, when an image with a white background is given, this background loss is useful for distinguishing whether a white part belongs to the background or to a foreground object. We used a background segmentation that is not created manually but is predicted from a given image using a DNN-based image matting model [41]. Therefore, this setting is not unrealistic. However, it is important to investigate the effectiveness of the background loss. To this end, we investigated the performance of SfC-NeRF−bg, in which the background loss (Lbg) was ablated. In this setting, the performance of a model trained using only the first frame of the video sequence (Step (i) in Figure 2(a)) also changes, because the background loss is ablated in this step as well. We refer to this model as Static−bg. We compared the scores of these models with those of the original models (i.e., SfC-NeRF and Static).

Results. Table 11 summarizes the results when the cavity size sc is varied, and Table 12 summarizes the results when the cavity location lc is varied. Our findings are twofold:

              0      (1/2)³  (2/3)³  (3/4)³  Avg.
Static        0.093  0.294   0.920   1.574   0.720
SfC-NeRF      0.081  0.122   0.195   0.262   0.165
Static−bg     0.093  0.290   0.906   1.545   0.708
SfC-NeRF−bg   0.101  0.149   0.222   0.279   0.188

Table 11. Results of the ablation study of background loss when the cavity size sc is varied. The score indicates CD (×10³ ↓).

              left   right  up     down   Avg.
Static        0.841  0.842  0.815  0.813  0.828
SfC-NeRF      0.303  0.258  0.274  0.291  0.281
Static−bg     0.831  0.830  0.799  0.800  0.815
SfC-NeRF−bg   0.324  0.210  0.361  0.277  0.293

Table 12. Results of the ablation study of background loss when the cavity location lc is varied. The score indicates CD (×10³ ↓).

(1) SfC-NeRF vs. SfC-NeRF−bg. SfC-NeRF outperformed SfC-NeRF−bg in most cases. As mentioned above, the background loss is useful for distinguishing between background and foreground objects, allowing a more accurate capture of the external structures. The movements of an object are affected by both its external and internal structures. Therefore, if the external structure can be estimated more accurately, the internal structure can also be estimated more accurately.

(2) SfC-NeRF−bg vs. Static−bg. SfC-NeRF−bg outperformed Static−bg except when dealing with filled objects (sc = 0 in Table 11).⁵

⁵When handling a filled object, an inaccurate estimation of the external structure is problematic because it causes a difference between the actual and estimated masses. In this situation, if the estimated mass is encouraged to approach the ground-truth mass using the mass loss while maintaining the external appearance using the APLs, the internal structure must be changed unnecessarily. Consequently, SfC-NeRF−bg degrades the performance of SfC when handling filled objects. An accurate estimation of the external structure using a background loss is effective for addressing this issue.
These results indicate that the proposed method is effective for improving the performance of SfC, even without the use of advanced techniques such as the background loss.

A.2. Extended experiments

A.2.1. Experiment IV: Influence of collision angle

In the above experiments, the collision angle was fixed, as shown in Figures 6–13, regardless of the internal structure and physical properties, in order to focus on comparisons related to the internal structures and physical properties. For completeness, we investigated the influence of the collision angle θc on the performance of SfC. Specifically, we selected objects with the default settings (sc = (2/3)³, lc = center, and an elastic material defined by Ê = 1.0×10⁶ and ν̂ = 0.3) as the objects of investigation and examined their performance when only the collision angle was altered. The objects were rotated in the depth direction, as shown in Figure 14. The collision angle θc was chosen from {0°, 22.5°, 45°, 67.5°, 90°}. We compared the performance of Static and SfC-NeRF.
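As a rough illustration of this setup, the sketch below tilts a particle set by the collision angle. The choice of rotation axis used to realize "rotated in the depth direction" is our assumption and is not taken from the paper.

```python
import numpy as np

def tilt_object(points, theta_deg):
    # Rotate particle positions (N, 3) about the x axis by the collision
    # angle; the axis choice is an assumption for illustration only.
    t = np.deg2rad(theta_deg)
    c, s = np.cos(t), np.sin(t)
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0, c, -s],
                    [0.0, s, c]])
    return points @ rot.T
```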
Results. Table 13 summarizes the quantitative results, and Figure 14 shows the qualitative results. Our findings are twofold:

Shape     Model     0°     22.5°  45°    67.5°  90°
Sphere    Static    1.164  1.163  1.163  1.162  1.160
          SfC-NeRF  0.067  0.068  0.066  0.067  0.066
Cube      Static    0.775  0.776  0.848  0.768  0.776
          SfC-NeRF  0.201  0.173  0.627  0.201  0.201
Bicone    Static    0.933  0.925  0.918  0.921  0.926
          SfC-NeRF  0.144  0.194  0.187  0.146  0.154
Cylinder  Static    0.891  0.905  0.915  0.905  0.964
          SfC-NeRF  0.342  0.288  0.311  0.209  0.639
Diamond   Static    0.837  0.830  0.833  0.819  0.838
          SfC-NeRF  0.220  0.300  0.222  0.163  0.209

Table 13. Comparison of CD (×10³ ↓) when the collision angle θc is varied.

(1) SfC-NeRF vs. Static. SfC-NeRF outperformed Static in all cases. These results indicate that optimizing the internal structure through a video sequence using the proposed method is beneficial, regardless of the collision angle.

(2) Effect of collision angle. We found that the collision angle influenced the performance of SfC, and the strength of this effect depends on the object shape. There are three possible reasons for this performance variation: (i) Changes in the estimation accuracy of the external structures. The internal structure was optimized under the constraint that the external structure, learned from the first frame, should be maintained. Therefore, when the accuracy of the external structure estimation changed, the accuracy of the internal structure estimation also changed. (ii) Difference in the amount of deformation. The amount of deformation varied depending on the collision angle. This factor also affected the ease of estimating the internal structure. (iii) Asymmetry. When an object was not symmetrical relative to the collision angle, its behavior after the collision became asymmetrical. Consequently, the ease of estimating the internal structure also became asymmetrical.

A.3. Evaluation from multiple perspectives

A.3.1. Evaluation through video sequences

In the main experiments, we evaluated the models using the chamfer distance between the ground-truth particles P̂_P(t0) and the estimated particles P_P(t0) in the first frame of the video sequence, i.e., at t = t0. For a multidimensional analysis, we also investigated the chamfer distance between the ground-truth particles P̂_P(t) and the estimated particles P_P(t) averaged over the entire video sequence, i.e., over t ∈ {t0, ..., t_{N−1}}. For clarity, we refer to the former (the chamfer distance for the first static frame) as CD_static and to the latter (the chamfer distance for the entire video sequence) as CD_video.
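The distinction between the two metrics can be stated compactly. The sketch below reuses chamfer_distance from the earlier sketch and assumes each sequence is a list of per-frame (N, 3) particle arrays; the function name is hypothetical.

```python
def cd_static_and_video(pred_seq, gt_seq):
    # CD_static: chamfer distance at t0 only; CD_video: average over all
    # frames t0..t_{N-1} of the sequence.
    cds = [chamfer_distance(p, g) for p, g in zip(pred_seq, gt_seq)]
    return cds[0], sum(cds) / len(cds)
```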
In the evaluation of the influence of the cavity location (Section 4.3), we introduced the anti-chamfer distance, i.e., the chamfer distance between the predicted particles P_P(t0) and the ground-truth particles P̃_P(t0) in which the cavity is placed on the opposite side, computed in the first frame of the video sequence to evaluate how well the cavity location is captured. For further analysis, we calculated and averaged similar scores over the entire video sequence. For clarity, we refer to the former (the anti-chamfer distance for the first static frame) as ACD_static and to the latter (the anti-chamfer distance for the entire video sequence) as ACD_video.

Results. Table 14 summarizes the results when the cavity size sc is varied, and Table 15 summarizes the results when the cavity location lc is varied.

sc             0              (1/2)³         (2/3)³         (3/4)³         Avg.
Static         0.093 / 0.104  0.294 / 0.309  0.920 / 1.057  1.574 / 1.964  0.720 / 0.859
GO             0.091 / 0.092  0.301 / 0.301  0.941 / 0.944  1.586 / 1.612  0.730 / 0.737
GO_mass        0.081 / 0.083  0.319 / 0.325  1.244 / 1.266  2.291 / 2.367  0.984 / 1.010
LPO            0.092 / 0.091  0.284 / 0.282  0.841 / 0.833  1.406 / 1.380  0.656 / 0.646
LPO_mass       0.087 / 0.087  0.284 / 0.283  0.876 / 0.868  1.477 / 1.451  0.681 / 0.672
SfC-NeRF−mass  0.089 / 0.090  0.226 / 0.225  0.550 / 0.544  1.148 / 1.112  0.503 / 0.493
SfC-NeRF−APL   0.106 / 0.108  0.423 / 0.421  0.898 / 0.886  1.326 / 1.307  0.688 / 0.680
SfC-NeRF−APT   0.085 / 0.101  0.261 / 0.279  0.332 / 0.337  0.661 / 0.680  0.335 / 0.349
SfC-NeRF−key   0.082 / 0.086  0.127 / 0.131  0.211 / 0.213  0.325 / 0.325  0.186 / 0.189
SfC-NeRF−VA    0.146 / 0.269  0.293 / 0.338  0.370 / 0.407  0.456 / 0.485  0.316 / 0.375
SfC-NeRF       0.081 / 0.085  0.122 / 0.126  0.195 / 0.196  0.262 / 0.258  0.165 / 0.166

Table 14. Comparison of CD (×10³ ↓) when the cavity size sc is varied. This is an extended version of Table 1. For each condition, the left score indicates CD_static, the chamfer distance between P(t0) and P̂(t0) at the first frame (t = t0), and the right score indicates CD_video, the chamfer distance between P(t) and P̂(t) averaged over the entire video sequence (t ∈ {t0, ..., t_{N−1}}).

lc             left           right          up             down           Avg.
Static         0.841 / 1.159  0.842 / 1.306  0.815 / 1.731  0.813 / 1.241  0.828 / 1.359
  (ACD)        0.841 / 1.294  0.843 / 1.154  0.814 / 1.246  0.813 / 1.727  0.828 / 1.355
GO             0.874 / 0.879  0.853 / 0.870  0.878 / 0.875  0.870 / 1.035  0.869 / 0.915
  (ACD)        0.872 / 2.606  0.856 / 2.549  0.881 / 1.471  0.870 / 1.673  0.870 / 2.075
GO_mass        1.349 / 1.386  1.334 / 1.375  1.104 / 1.141  1.001 / 1.370  1.197 / 1.318
  (ACD)        1.340 / 3.134  1.344 / 3.126  1.127 / 1.866  1.004 / 1.805  1.204 / 2.483
LPO            0.791 / 0.789  0.787 / 0.787  0.796 / 0.776  0.743 / 0.721  0.779 / 0.768
  (ACD)        0.802 / 2.493  0.800 / 2.507  0.819 / 1.468  0.737 / 1.471  0.790 / 1.985
LPO_mass       0.824 / 0.822  0.817 / 0.818  0.828 / 0.806  0.775 / 0.753  0.811 / 0.800
  (ACD)        0.833 / 2.529  0.832 / 2.556  0.847 / 1.497  0.771 / 1.538  0.821 / 2.030
SfC-NeRF−mass  0.513 / 0.520  0.485 / 0.491  0.705 / 0.689  0.479 / 0.457  0.545 / 0.539
  (ACD)        0.858 / 2.502  0.878 / 2.661  0.747 / 1.506  0.956 / 1.762  0.860 / 2.108
SfC-NeRF−APL   0.845 / 0.840  0.783 / 0.788  0.805 / 0.786  0.583 / 0.580  0.754 / 0.749
  (ACD)        1.069 / 2.885  1.083 / 2.943  0.934 / 1.764  0.883 / 1.750  0.992 / 2.335
SfC-NeRF−APT   0.624 / 0.631  0.428 / 0.604  0.384 / 0.461  0.464 / 0.514  0.475 / 0.553
  (ACD)        0.588 / 1.920  0.586 / 1.486  0.579 / 1.196  0.646 / 1.305  0.600 / 1.477
SfC-NeRF−key   0.308 / 0.307  0.296 / 0.326  0.307 / 0.306  0.313 / 0.343  0.306 / 0.321
  (ACD)        0.372 / 1.854  0.396 / 1.746  0.387 / 1.291  0.389 / 1.105  0.386 / 1.499
SfC-NeRF−VA    0.542 / 0.611  0.596 / 0.767  0.333 / 0.389  0.385 / 0.421  0.464 / 0.547
  (ACD)        0.639 / 2.304  0.757 / 2.265  0.445 / 1.338  0.549 / 1.339  0.597 / 1.811
SfC-NeRF       0.303 / 0.308  0.258 / 0.313  0.274 / 0.273  0.291 / 0.307  0.281 / 0.300
  (ACD)        0.367 / 1.821  0.431 / 1.647  0.448 / 1.262  0.417 / 1.204  0.416 / 1.483

Table 15. Comparison of CD and ACD (×10³ ↓) when the cavity location lc is varied. This is an extended version of Table 2. For each condition, the left score indicates the static value (at the first frame, t = t0) and the right score indicates the video value (averaged over the entire video sequence, t ∈ {t0, ..., t_{N−1}}). The (ACD) rows indicate ACD_static and ACD_video. Each original CD is expected to be smaller than the corresponding ACD.

Our findings are fourfold:

(1) CD_static vs. CD_video. The relative values of CD_static and CD_video vary across cases. When calculating CD_static in the first frame, the locations of the ground-truth and synthesized objects were well aligned, allowing us to focus on the differences in shapes. In contrast, when calculating CD_video for the entire video sequence, we must consider not only the differences in shapes but also the differences in absolute locations. Misalignments accumulate over time because the locations can vary only within the allowance of the physical constraints via DiffMPM [26]. Because the objective of this study was to correctly predict the shape rather than the location, CD_static is a more valid evaluation than CD_video for this purpose.

(2) Comparison of CD_static and CD_video among models. Although there was some variation in the superiority of the models depending on the metric used, the general trend remained consistent: SfC-NeRF achieved the best score in most cases. The two exceptions are CD_video for sc = 0 in Table 14 and CD_video for lc = left in Table 15. However, the difference from the best score is small (less than 0.002). These results validate the effectiveness of the proposed method compared with the baseline and ablated models according to both metrics.

(3) ACD_static vs. ACD_video. Comparing ACD_static with ACD
_video, ACD_static is smaller than ACD_video. This is because the difference in location gradually increased after the collision when the cavity was located on the opposite side. As the objective of this study was to correctly predict the shape rather than the location, ACD_static is a more valid evaluation than ACD_video for this purpose.

(4) Comparison of CD_static and ACD_static among models. When comparing the models, the baselines (i.e., the GO- and LPO-based models) tended to obtain similar CD_static and ACD_static values because they struggled to determine the optimization direction, as shown in Figures 6–10. In contrast, the proposed models (i.e., the SfC-NeRF-based models, including the ablated models) tended to obtain a smaller CD_static than ACD_static. These results indicate that the proposed models effectively capture the positional bias of the cavity. Notably, a larger ACD_static does not indicate better performance unless CD_static is adequately small, because it is possible to increase ACD_static while sacrificing CD_static.

A.3.2. Evaluation per external shape

In Experiments I (Section 4.2) and II (Section 4.3), we reported the scores averaged over the external shapes (i.e., the sphere, cube, bicone, cylinder, and diamond objects). For a different evaluation perspective, this appendix presents the scores for each external shape, averaged over the other conditions, i.e., either sc ∈ {0, (1/2)³, (2/3)³, (3/4)³} or lc ∈ {left, right, up, down}.

Results. Table 16 summarizes the results when the cavity size sc is varied (related to the results in Table 1), and Table 17 summarizes the results when the cavity location lc is varied (related to the results in Table 2).

               Sphere  Cube   Bicone  Cylinder  Diamond
Static         0.897   0.612  0.724   0.697     0.671
GO             0.889   0.637  0.704   0.756     0.663
GO_mass        0.934   1.345  0.760   1.218     0.663
LPO            0.774   0.564  0.639   0.678     0.622
LPO_mass       0.796   0.605  0.656   0.726     0.622
SfC-NeRF−mass  0.561   0.500  0.455   0.447     0.553
SfC-NeRF−APL   0.303   1.082  0.579   0.885     0.591
SfC-NeRF−APT   0.178   0.375  0.286   0.502     0.331
SfC-NeRF−key   0.081   0.173  0.159   0.288     0.230
SfC-NeRF−VA    0.113   0.279  0.363   0.558     0.268
SfC-NeRF       0.067   0.163  0.138   0.264     0.193

Table 16. Comparison of CD (×10³ ↓) when the cavity size sc is varied. The scores were averaged over cavity sizes.

               Sphere  Cube   Bicone  Cylinder  Diamond
Static         1.006   0.719  0.824   0.818     0.772
GO             0.991   0.809  0.847   0.898     0.799
GO_mass        1.065   1.528  0.934   1.332     1.125
LPO            0.954   0.673  0.764   0.804     0.701
LPO_mass       0.980   0.723  0.796   0.845     0.711
SfC-NeRF−mass  0.695   0.480  0.424   0.595     0.533
SfC-NeRF−APL   0.548   1.064  0.373   1.194     0.592
SfC-NeRF−APT   0.318   0.502  0.374   0.730     0.451
SfC-NeRF−key   0.189   0.371  0.235   0.448     0.286
SfC-NeRF−VA    0.240   0.418  0.790   0.534     0.338
SfC-NeRF       0.152   0.342  0.231   0.393     0.289
  (ACD)        (0.417) (0.386) (0.365) (0.491)  (0.420)

Table 17. Comparison of CD (×10³ ↓) when the cavity location lc is varied. The scores were averaged over cavity locations. The score in parentheses indicates the ACD (×10³); the original CD is expected to be smaller than this.

Although the scores were affected by the external shape, the same trends observed previously regarding the superiority or inferiority of the models were maintained. In particular, SfC-NeRF
outperformed both the baseline and ablated models in most cases.

A.4. Possible challenges with real data

As discussed in Section 5, because SfC is a novel task, this study focused on evaluating its fundamental performance using simulation data, leaving validation with real data as a challenge for future research. However, it is both feasible and important to discuss the potential challenges associated with real data, which we address in this appendix. Three potential challenges are outlined below:

(1) Difficulty in accurately estimating external structures. Although significant progress has been made in the estimation of 3D external structures in recent years, it is not yet possible to accurately estimate them for all objects in all situations. The proposed method assumes that the external structure learned in the first frame of the video sequence is accurate. Therefore, if this estimation fails, the overall performance is degraded. We believe that incorporating the concept of a physics-informed model, particularly in challenging scenarios (e.g., sparse views), such as Lagrangian particle optimization [31], could provide a solution to this issue.

(2) Gap between real physics and the physics used in the simulation. Despite recent advancements in physical simulation models, discrepancies between real-world physics and the physics underlying the simulation still persist. We believe that refining the proposed method alongside physics-informed models (e.g., those discussed in Section 2) could help alleviate this problem.

(3) Difficulty in accurately estimating physical properties. As mentioned in Section 3.1, we address SfC under the assumption that the ground-truth physical properties are available in advance, to mitigate the chicken-and-egg problem between the physical properties and the internal structures. This assumption is reasonable if the material can be identified; however, obtaining perfectly accurate values for physical properties in real-world scenarios is challenging. Although the issue of solving the chicken-and-egg problem remains, an appearance-based physical property estimation method has already been proposed (e.g., PAC-NeRF [36]). Combining the proposed approach with previous methods for the simultaneous optimization of physical properties and internal structures is an exciting direction for future research.

B. Qualitative results

This appendix presents the qualitative results. The corresponding demonstration videos are available at https://www.kecl.ntt.co.jp/people/kaneko.takuhiro/projects/sfc/.

B.1. Qualitative results of Experiments I and II

We provide the qualitative results of Experiments I (Section 4.2) and II (Section 4.3) in Figures 6–10.

B.2. Qualitative results of Experiment III

We provide the qualitative results of Experiment III (Section 4.4) in Figures 11–13.

B.3. Qualitative results of Experiment IV

We provide the qualitative results of Experiment IV (Appendix A.2.1) in Figure 14.
Figure 6. Comparison of learned internal structures for sphere objects. (a) and (b) Examples of training images (zoomed in for easy viewing). (a) Examples of training images before collision: as shown in this column, the appearances of the objects are the same across all scenes (1)–(8); consequently, it is difficult to distinguish the internal structures based solely on these appearances. (b) Examples of training images after collision: to overcome the difficulty mentioned above, we address SfC, in which we aim to identify the internal structures based on the appearance changes before and after collision, as shown in (a) and (b). (c)–(n) Internal structures visualized through cross-sectional views perpendicular to the ground; in (d)–(n), the score below each image indicates the CD (×10³ ↓). (c) Ground-truth internal structures: as shown in this column, although the external appearances in (a) are the same, the internal structures are different. (d) Internal structures learned from the first frames of the video sequences: the same internal structures (i.e., filled objects) were learned because the appearances before the collision (a) were the same. (e)–(h) Internal structures learned using the baselines (GO- and LPO-based models): these models struggled to determine optimal learning directions. (i)–(m) Internal structures learned using the ablated models: the ablated models are insufficient to prevent convergence to improper solutions. (n) Internal structures learned using SfC-NeRF (full model): the full model overcomes the above drawbacks and achieved the best CD.

Figure 7. Comparison of learned internal structures for cube objects. The view in the figure is the same as that of Figure 6.
Figure 8. Comparison of learned internal structures for bicone objects. The view in the figure is the same as that of Figure 6.

Figure 9. Comparison of learned internal structures for cylinder objects. The view in the figure is the same as that of Figure 6.

Figure 10. Comparison of learned internal structures for diamond objects. The view in the figure is the same as that of Figure 6.

Figure 11. Comparison of learned internal structures for sphere objects (left) and bicone objects (right) when Young's modulus Ê is varied. Young's modulus is a measure of elasticity and quantifies tensile or compressive stiffness when force is applied. Here, we discuss the results for the sphere objects because the same tendencies were observed for the bicone objects. As shown in (a) and (c), the external appearances before collision (a) and the internal structures (c) are the same in all cases (1)–(5). However, as shown in (b), the shapes after collision differ because of variations in Young's modulus Ê ∈ {2.5×10⁵, 5.0×10⁵, 1.0×10⁶, 2.0×10⁶, 4.0×10⁶}. In particular, as Young's modulus increases from top to bottom, the object becomes stiffer and the amount of shape change decreases. In the Static model (d), the internal structure was learned from the first frame, which looks the same in all cases; as a result, the same internal structure was learned across all variations. In contrast, in SfC-NeRF (e), the internal structure was learned using video sequences with different appearances. In this example, the same internal structure is expected to be learned in all cases. However, the varying appearances after collision (b), which provide a clue for solving the problem, lead to different outcomes. As shown in (1)(b) and (2)(b), when the object is soft, it deforms significantly after collision; this makes it difficult to capture the internal structure consistently, as shown in (1)(e) and (2)(e). In contrast, as shown in (4)(b) and (5)(b), when the object is stiffer, the shape change is limited; this narrows the range within which internal structures can be estimated, as shown in (4)(e) and (5)(e). Because SfC is an ill-posed problem with multiple possible solutions, the obtained results are considered reasonable. However, further improvement remains a topic for future work.
|
https://arxiv.org/abs/2505.21335v1
|
in all cases. As a result, the same internal structure was learned across all variations. In contrast, in SfC-NeRF (e), the internal structure was learned using video sequences with different appearances. In this example, the same internal structure is expected to be learned in all cases. However, the varying appearances after collision (b), which provide a clue for solving the problem, lead to different outcomes. As shown in (1)(b) and (2)(b), when the object is soft, it deforms significantly after collision. This makes it difficult to capture the internal structure consistently, as shown in (1)(e) and (2)(e). In contrast, as shown in (4)(b) and (5)(b), when the object is stiffer, the shape change is limited. This narrows the range within which internal structures can be estimated, as shown in (4)(e) and (5)(e). Because SfC is an ill-posed problem with multiple possible solutions, the obtained results are considered reasonable. However, further improvement remains a topic for future work.

[Figure 12 panel grid: columns (a)–(e) for sphere objects and (f)–(j) for cube objects; rows (1)–(5) vary Poisson's ratio $\hat{\nu}$; per-panel CD scores omitted.]

Figure 12. Comparison of learned internal structures for sphere objects (left) and cube objects (right) when Poisson's ratio $\hat{\nu}$ is varied. Poisson's ratio is a measure of the Poisson effect and quantifies how much a material deforms in a direction perpendicular to the direction in which force is applied. We varied Poisson's ratio $\hat{\nu}$ within the range of values commonly observed in real materials, i.e., $\hat{\nu} \in \{0.2, 0.25, 0.3, 0.35, 0.4\}$. As shown in (b) and (g), this physical property does not significantly affect the appearance after the collision compared to the results when Young's modulus is varied (Figure 11). As a result, the learned internal structures are almost identical, as shown in (e) and (j).

[Figure 13 panel grid: columns (a)–(e) for sphere objects and (f)–(j) for diamond objects; rows (1)–(7) vary the material (Newtonian fluid: Droplet, Letter; non-Newtonian fluid: Cream, Toothpaste; plasticine: Playdoh, Cat; sand: Trophy); per-panel CD scores omitted.]

Figure 13. Comparison of learned internal structures for sphere objects (left) and diamond objects (right) with varying materials. The physical properties were based on the PAC-NeRF dataset [36]. Specifically: (1) Newtonian fluid with the “Droplet” setting (fluid viscosity $\hat{\mu} = 200$ and bulk modulus $\hat{\kappa} = 10^5$). (2) Newtonian fluid with the “Letter” setting ($\hat{\mu} = 100$ and $\hat{\kappa} = 10^5$). (3) Non-Newtonian fluid with the “Cream” setting (shear modulus $\hat{\mu} = 10^4$, bulk modulus $\hat{\kappa} = 10^6$, yield stress $\hat{\tau}_Y = 3\times10^3$, and plasticity viscosity $\hat{\eta} = 10$). (4) Non-Newtonian fluid with the “Toothpaste” setting ($\hat{\mu} = 5\times10^3$, $\hat{\kappa} = 10^5$, $\hat{\tau}_Y = 200$, and $\hat{\eta} = 10$). (5) Plasticine with the “Playdoh”
|
https://arxiv.org/abs/2505.21335v1
|
setting (Young's modulus $\hat{E} = 2\times10^6$, Poisson's ratio $\hat{\nu} = 0.3$, and yield stress $\hat{\tau}_Y = 1.54\times10^4$). (6) Plasticine with the “Cat” setting ($\hat{E} = 10^6$, $\hat{\nu} = 0.3$, and $\hat{\tau}_Y = 3.85\times10^3$). (7) Sand with the “Trophy” setting ($\hat{\theta}_{fric} = 40°$). These results demonstrate that SfC-NeRF ((e) and (j)) improves structure estimation compared to Static ((d) and (i)), regardless of the material. However, the improvement rate depends on the material. As an initial approach to address SfC, we proposed a general-purpose method. However, it would be interesting to develop methods specifically tailored to individual materials in future work.

[Figure 14 panel grid: columns (a)–(e) for bicone objects and (f)–(j) for cylinder objects; rows (1)–(5) vary collision angle $\theta_c$; per-panel CD scores omitted.]

Figure 14. Comparison of learned internal structures for bicone objects (left) and cylinder objects (right) when collision angle $\theta_c$ is varied. We varied collision angle $\theta_c \in \{0°, 22.5°, 45°, 67.5°, 90°\}$. We found that the effect of collision angle on the estimation of the internal structure depends on the object shape. (a)–(e) In the case of an object such as bicone, where the object is entirely visible regardless of the collision angle, the estimation performance remains relatively stable across different collision angles. (f)–(j) In contrast, in the case of an object such as cylinder, where the visible area varies greatly depending on the collision angle, the estimation performance also changes with the collision angle. For example, in (5)(g), the bottom of the object is not visible when it collides with the ground. As a result, a hole is generated at the bottom of the object in (5)(j). This issue may be alleviated by improving camera placement. Other possible factors that affect estimation performance are discussed in Appendix A.2.1.

C. Implementation details

C.1. Dataset

Because SfC is a new task and no established dataset is available, we created a new dataset called the SfC dataset based on the protocol of PAC-NeRF [36], which is a pioneering study on geometry-agnostic system identification. In the main experiments presented in Section 4, we prepared 115 objects by changing their external shapes, internal structures, and materials. Figure 3 shows examples of the data in this dataset. First, we prepared five external shapes: sphere, cube, bicone, cylinder, and diamond. Regarding the internal structure and material, we set the default values as follows: the cavity size rate for the filled object, $s_c$, was set to $(2/3)^3$, the cavity location, $l_c$, was set to the center, and the material was defined as an elastic material with Young's modulus $\hat{E} = 10^6$ and Poisson's ratio $\hat{\nu} = 0.3$. Under these default properties, one of them was changed as follows: (a) Three different sized cavities: $s_c \in \{0, (1/2)^3, (3/4)^3\}$. (b) Four different locations of cavities: the center $l_c$ is moved {up, down, left, right}. (c-1) Eight different elastic
|
https://arxiv.org/abs/2505.21335v1
|
materials: those with four different Young's moduli $\hat{E} \in \{2.5\times10^5, 5\times10^5, 2\times10^6, 4\times10^6\}$ and four different Poisson's ratios $\hat{\nu} \in \{0.2, 0.25, 0.35, 0.4\}$. (c-2) Seven different materials: two Newtonian fluids, two non-Newtonian fluids, two plasticines, and one sand. Their physical properties were derived from the PAC-NeRF dataset [36]. Specifically, the two Newtonian fluids included one with the “Droplet” setting (fluid viscosity $\hat{\mu} = 200$ and bulk modulus $\hat{\kappa} = 10^5$) and one with the “Letter” setting ($\hat{\mu} = 100$ and $\hat{\kappa} = 10^5$). The two non-Newtonian fluids included one with the “Cream” setting (shear modulus $\hat{\mu} = 10^4$, bulk modulus $\hat{\kappa} = 10^6$, yield stress $\hat{\tau}_Y = 3\times10^3$, and plasticity viscosity $\hat{\eta} = 10$) and one with the “Toothpaste” setting ($\hat{\mu} = 5\times10^3$, $\hat{\kappa} = 10^5$, $\hat{\tau}_Y = 200$, and $\hat{\eta} = 10$). The two plasticines included one with the “Playdoh” setting (Young's modulus $\hat{E} = 2\times10^6$, Poisson's ratio $\hat{\nu} = 0.3$, and yield stress $\hat{\tau}_Y = 1.54\times10^4$) and one with the “Cat” setting ($\hat{E} = 10^6$, $\hat{\nu} = 0.3$, and $\hat{\tau}_Y = 3.85\times10^3$). The sand had the “Trophy” setting ($\hat{\theta}_{fric} = 40°$). Thus, we created 5 external shapes × (1 default + 3 sizes + 4 locations + (8 + 7) materials) = 115 objects. We also prepared 20 objects for the extended experiments described in Appendix A.2. Specifically, we considered four collision angles: $\theta_c \in \{22.5°, 45°, 67.5°, 90°\}$. Thus, in this appendix, we created 5 external shapes × 4 collision angles = 20 objects. The total number of objects created in the main text and this appendix is 115 + 20 = 135. Following the PAC-NeRF study [36], the ground-truth data were generated using the MLS-MPM simulator [25], where each object fell freely under the influence of gravity and collided with the ground plane. Images were rendered under various environmental lighting conditions and ground textures using a photorealistic renderer. Each scene was captured from 11 viewpoints using cameras spaced in the upper hemisphere including an object.

C.2. Model

We implemented the models based on the official PAC-NeRF code [36] (https://github.com/xuan-li/PAC-NeRF). PAC-NeRF represents an Eulerian grid-based scene representation using voxel-based NeRF (specifically, direct voxel grid optimization (DVGO) [63]) and conducts a Lagrangian particle-based differentiable physical simulation using a differentiable MPM simulator (specifically, DiffTaichi [26]). More specifically, DVGO represents a volume density field $\sigma_{G'}$ using a 3D dense voxel grid and represents a color field $c_{G'}$ using a combination of a 4D dense voxel grid and a two-layer multi-layer perceptron (MLP) with a hidden dimension of 128. When the MLP is employed, positional embedding in the viewing direction $d$ is used as an additional input. We set the resolutions of $\sigma_{G'}$ and $c_{G'}$ to match those in PAC-NeRF [36].

C.3. Training settings

We performed static optimization (Figure 2(i)) using the same settings as those used for PAC-NeRF. Specifically, we trained the model for 6000 iterations using the Adam optimizer [33] with learning rates of 0.1 for the volume density and color grids and a learning rate of 0.001 for the MLP. The momentum terms $\beta_1$ and $\beta_2$ were set to 0.9 and 0.999, respectively. In the dynamic optimization (Figure 2(ii)), we trained the model for 1000 iterations using the Adam optimizer [33] with a default learning rate of 6.4 for the volume density grid. The momentum terms $\beta_1$ and $\beta_2$ were
|
https://arxiv.org/abs/2505.21335v1
|
set to 0.9 and 0.999, respectively. We found that a high learning rate is useful for efficiently reducing the volume density; however, this is not necessary when the estimated mass $m$ sufficiently approaches the ground-truth mass $\hat{m}$. Therefore, we divided the learning rate by 2 (with a minimum of 0.1) as long as the estimated mass $m$ was below the ground-truth mass $\hat{m}$. Conversely, we multiplied the learning rate by 2 (with a maximum of 6.4) as long as the estimated mass $m$ exceeded the ground-truth mass $\hat{m}$. We conducted volume annealing every 100 iterations during the dynamic optimization. When the estimated mass $m$ was significantly larger than the ground-truth mass $\hat{m}$ (specifically, when the difference exceeded 10 in practice), the expansion process was skipped to prevent $m$ from deviating further from $\hat{m}$. In appearance-preserving training, static optimization was performed using settings similar to those mentioned above (i.e., static optimization in Step (i) (Figure 2(i))), but the number of iterations was reduced to 10. We empirically set the hyperparameters for the full objective $\mathcal{L}_{full}$ (Equation 12) to $\lambda_{mass} = 1$, $\lambda_{pres} = 100$, $w_{depth} = 0.01$, and $\lambda_{key} = 10$. The hyperparameter for the background loss $\mathcal{L}_{bg}$ was set to $w_{bg} = 0.2$.

C.4. Evaluation metrics

As mentioned in Section 3.1, we use particles $\mathcal{P}_P(t_0)$ to represent the structure (including the internal structure) of an object and estimate $\mathcal{P}_P(t_0)$ to match the ground-truth particles $\hat{\mathcal{P}}_P(t_0)$. Therefore, we evaluated the model by measuring the distance between $\mathcal{P}_P(t_0)$ and $\hat{\mathcal{P}}_P(t_0)$ using the chamfer distance (CD). The smaller the value, the higher the degree of matching. As mentioned in Section 4.3, we also used the anti-chamfer distance (ACD), which is the chamfer distance between the predicted particles $\mathcal{P}_P(t_0)$ and ground-truth particles $\tilde{\mathcal{P}}_P(t_0)$, where the cavity was placed on the opposite side, to evaluate the capture of the cavity location.
|
https://arxiv.org/abs/2505.21335v1
|
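The mass-guided learning-rate rule described in C.3 is simple enough to state in code. A minimal sketch, assuming the rule is applied once per dynamic-optimization step; the function name and signature are ours, not from the released implementation:

```python
def update_density_lr(lr: float, est_mass: float, gt_mass: float,
                      lr_min: float = 0.1, lr_max: float = 6.4) -> float:
    """Adapt the volume-density learning rate from the mass estimate.

    Halve the rate (down to lr_min) while the estimated mass is below the
    ground-truth mass, and double it (up to lr_max) while it is above,
    as described in Appendix C.3.
    """
    if est_mass < gt_mass:
        return max(lr / 2.0, lr_min)   # density low enough: slow down
    if est_mass > gt_mass:
        return min(lr * 2.0, lr_max)   # density still too high: speed up
    return lr
```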
arXiv:2505.21339v1 [cs.LG] 27 May 2025

An Uncertainty-Aware ED-LSTM for Probabilistic Suffix Prediction

Henryk Mustroph, Michel Kunkler, Stefanie Rinderle-Ma
Technical University of Munich, TUM School of Computation, Information and Technology, Garching, Germany
{henryk.mustroph, michel.kunkler, stefanie.rinderle-ma}@tum.de

Abstract

Suffix prediction of business processes forecasts the remaining sequence of events until process completion. Current approaches focus on predicting a single, most likely suffix. However, if the future course of a process is exposed to uncertainty or has high variability, the expressiveness of a single suffix prediction can be limited. To address this limitation, we propose probabilistic suffix prediction, a novel approach that approximates a probability distribution of suffixes. The proposed approach is based on an Uncertainty-Aware Encoder-Decoder LSTM (U-ED-LSTM) and a Monte Carlo (MC) suffix sampling algorithm. We capture epistemic uncertainties via MC dropout and aleatoric uncertainties as learned loss attenuation. This technical report provides a detailed evaluation of the U-ED-LSTM's predictive performance and assesses its calibration on four real-life event logs with three different hyperparameter settings. The results show that i) the U-ED-LSTM has reasonable predictive performance across various datasets, ii) aggregating probabilistic suffix predictions into mean values can outperform most likely predictions, particularly for rare prefixes or longer suffixes, and iii) the approach effectively captures uncertainties present in event logs.

Keywords: Probabilistic Suffix Prediction, Epistemic and Aleatoric Uncertainties, Encoder-Decoder LSTM

1 Introduction

In recent years, predicting the future course of a running business process (BP) using machine learning models has gained considerable attention in the field of Predictive Process Monitoring (PPM) [16]. Many well-performing PPM approaches use neural network (NN) architectures and have been criticized for acting as black-box approaches, lacking interpretability of their predictions [26]. Developing approaches that combine high accuracy and interpretability has therefore been acknowledged as a primary challenge in PPM [4]. Recent works have contributed towards more transparency in predictions by developing approaches that predict an entire sequence of remaining events, known as suffix prediction [3, 6, 9, 15, 21, 24, 25, 28]. Current suffix prediction approaches have focused on predicting a single most likely suffix. In certain domains, the future course of a business process is often subjected to uncertainties and high variability, making it unlikely that it will match exactly with the predicted most likely suffix. Consider a drug development process in a pharmaceutical company: the process involves risks and uncertainties. Unforeseen events and unpredictable human influences can affect the remaining sequence of events. The company aims to predict the suffix, e.g., to plan resources, estimate the remaining time, and gauge the potential market approval of the drug. However, focusing solely on the most likely suffix may overlook alternative, plausible event sequences. By considering other possible suffixes, the pharmaceutical company can account for uncertainty in decision-making and improve its risk management. Machine learning (ML) distinguishes epistemic and aleatoric uncertainties [11]. Epistemic uncertainties are reducible and stem from a lack of knowledge, e.g., training data. Aleatoric uncertainties, conversely, are irreducible. For instance, in the context of a business process, they can stem from external factors beyond the control of the organization running the process, such
|
https://arxiv.org/abs/2505.21339v1
|
as delays in deliveries from external stakeholders or the involvement of humans in process execution. In this work, instead of predicting a single most likely suffix, we consider epistemic and aleatoric uncertainties to predict a probability distribution of suffixes. In line with the term probabilistic learning, which has been used in machine learning and statistics to emphasize that not a single target is learned, but a target distribution [14], we refer to our approach as probabilistic suffix prediction. We achieve probabilistic suffix prediction by training an Uncertainty-Aware Encoder-Decoder Long Short-Term Memory (U-ED-LSTM) NN and an MC suffix sampling algorithm. The U-ED-LSTM captures epistemic uncertainties by using MC dropout and aleatoric uncertainties as learned loss attenuation [7, 8, 12]. The MC suffix sampling algorithm can be outlined as follows: multiple MC trials are conducted, where in each trial the U-ED-LSTM is used to sample a suffix. The suffix sampling in one MC trial is performed auto-regressively, i.e., the suffix is generated iteratively by sampling one event after another until an end-of-sequence (EOS) token is sampled. We sample all event attributes from probability distributions obtained from the U-ED-LSTM for each event. Using three different hyperparameter settings, we evaluate the U-ED-LSTM's predictive performance on four real-life datasets. Additionally, we added results from comparable models from the literature to demonstrate that the U-ED-LSTM has reasonable predictive performance. Furthermore, the probabilistic suffix prediction results are evaluated more thoroughly by comparing them with the most likely suffix prediction and assessing the model's calibration. The results show that i) the U-ED-LSTM exhibits reasonable predictive performance across various datasets, ii) aggregating probabilistic suffix predictions into mean values can outperform most likely predictions, particularly for rare prefixes or longer suffixes, and iii) assessing the calibration on the predicted remaining time (continuous event attributes) shows that our approach can capture temporal uncertainties given in the training data. The technical report is outlined as follows: Sec. 2 covers preliminaries, Sec. 3 describes our probabilistic suffix prediction framework, Sec. 4 presents the evaluation, Sec. 5 discusses related approaches, and Sec. 6 concludes the work.

2 Preliminaries

This section introduces general uncertainty concepts, how to model uncertainty in NNs, and a definition of suffix and remaining time prediction of BPs.

2.1 Uncertainty in Machine Learning

ML distinguishes epistemic and aleatoric uncertainties; [11] defines and describes both types. Epistemic uncertainty is referred to as “uncertainty due to a lack of knowledge about the perfect predictor” [11] and is reducible. Epistemic uncertainty can be further divided into approximation uncertainty and model uncertainty. Approximation uncertainty refers to a lack of data for selecting appropriate parameters for a predictor model and can generally be reduced by obtaining more training samples. Model uncertainty refers to a model's insufficient approximation capabilities and can be reduced by training models with a higher capacity. There is ongoing debate regarding how epistemic uncertainty should be captured, with one possibility being the use of probability distributions [11]. Aleatoric uncertainty is irreducible as it stems from inherently random effects in the underlying data. Aleatoric uncertainty is “appropriately modeled in terms of probability distributions” [11] and
|
https://arxiv.org/abs/2505.21339v1
|
can henceforth be learned in a probabilistic model.

Uncertainty-Aware Neural Networks (NNs). For NNs, two common approaches for estimating a model's uncertainty in its prediction are Bayesian approximation and ensemble learning-based techniques [1]. Bayesian approximation can be conducted with Bayesian Neural Networks (BNNs). BNNs assume their weights follow probability distributions, allowing a posterior distribution to be inferred, which can be used to quantify uncertainty. In most cases, obtaining an analytical solution for the posterior distribution is intractable due to neural networks' high non-linearity and dimensionality. Even techniques for approximating the posterior distribution, such as Markov Chain Monte Carlo or Variational Inference (VI) methods, can still be computationally expensive [1, 7]. Ensemble techniques, on the other hand, achieve uncertainty quantification by aggregating the predictions of multiple models. This can also become computationally expensive, especially when numerous complex models are involved [1].

Epistemic Uncertainty using Dropout as a Bayesian Approximation. Using dropout during training and inference at every weight layer in an NN can be a simple and computationally efficient variational inference method for Bayesian approximation of a posterior distribution [7]. This approach is referred to as Monte Carlo (MC) dropout because the posterior distribution $p(W \mid X, Y)$ is approximated with a variational distribution $q_\theta(W)$. Masked weights are sampled from the variational distribution $\hat{W} \sim q_\theta(W)$, where $\theta$ denotes the set of the variational distribution's parameters (weights and bias terms) to be optimized. In practice, a dropout mask is often sampled from a Bernoulli distribution $z \sim \mathrm{Bernoulli}(p)$, where $p$ denotes the dropout probability. The dropout mask is then applied on the NN's weight matrices $W$ such that $\hat{W} = W\,\mathrm{diag}(z)$. During training, the $L_2$ regularization on the NN parameters $\theta$ ensures the method aligns with a probabilistic framework.

Heteroscedastic Aleatoric Uncertainty as Learned Loss Attenuation. Heteroscedastic models assume that observation noise, which follows a probability distribution, can vary with the input data $x$. This input-dependent observation noise, denoted as $\sigma(x)$, captures aleatoric uncertainty arising from inherent randomness in the data-generating process. To explicitly model this irreducible uncertainty, NNs are extended to directly learn $\sigma(x)$. This is commonly done assuming that the observation noise follows a Normal distribution. To learn this observation noise using an NN $f^W(\cdot)$ parameterized by weights $W$, an additional output neuron $f^W_\sigma(x)$ is added to the (mean) output neuron $f^W_y(x)$. However, in cases where both the inputs $x$ and the predicted outputs $f^W_y(x)$ are constrained to be strictly positive (e.g., time durations), assuming that the observation noise follows a Log-Normal distribution may be more appropriate. This is equivalent to assuming that the observation noise is normally distributed over the input data in the log-transformed space, i.e., $\ln(x)$. Training the standard deviation of a probability distribution by including it in the loss function is referred to as learned loss attenuation [12]. In the regression case, the adapted loss function over $N$ training samples with target $y$ can be written as the negative log-likelihood of the underlying probability density function (for a detailed derivation of the loss function, see [2]):

$$\mathcal{L}_{con} = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{2} \left( \frac{(y_i - f^W_y(x_i))^2}{f^W_\sigma(x_i)^2} + \log\left(f^W_\sigma(x_i)^2\right) \right) \quad (1)$$

In the case of classification, NNs typically employ the Softmax function, which already outputs a categorical probability distribution. However, this probability distribution might not capture model uncertainties [12].
|
https://arxiv.org/abs/2505.21339v1
|
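The Bernoulli-mask formulation of MC dropout above, $\hat{W} = W\,\mathrm{diag}(z)$, can be illustrated with a few lines of NumPy. This is a minimal sketch, assuming $z_j = 1$ means input unit $j$ is kept (so units survive with probability $1-p$ for dropout rate $p$); the function name and the toy forward passes are ours:

```python
import numpy as np

def mc_dropout_weights(W, p, rng):
    """Sample masked weights W_hat = W diag(z), one mask entry per input unit."""
    z = rng.binomial(1, 1.0 - p, size=W.shape[1])
    return W * z  # scales column j of W by z_j, i.e., W @ diag(z)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # toy layer: 8 inputs -> 4 outputs
x = rng.normal(size=8)

# Keeping dropout active across many forward passes yields a predictive
# distribution whose spread reflects epistemic uncertainty.
samples = np.stack([mc_dropout_weights(W, 0.1, rng) @ x for _ in range(1000)])
print(samples.mean(axis=0), samples.std(axis=0))
```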
Therefore, means and variances can also be learned on the predicted logits. Since the logits are passed into the Softmax function, MC integration has to be applied, i.e., averaging the cross-entropy loss of multiple draws from the logits distributions. We denote the number of MC trials with $T$, the categorical classes with $C$, and the ground truth class with $c$:

$$\hat{z}_{i,t} = f^W_y(x_i) + f^W_\sigma(x_i)\,\epsilon_t, \quad \epsilon_t \sim \mathcal{N}(0, I)$$

$$\mathcal{L}_{cat} = \frac{1}{N} \sum_{i=1}^{N} -\log\left( \frac{1}{T} \sum_{t=1}^{T} \exp\left( \hat{z}_{i,t,c} - \log \sum_{c'}^{C} \exp(\hat{z}_{i,t,c'}) \right) \right) \quad (2)$$

The combination of epistemic uncertainty quantification using MC dropout and aleatoric uncertainty quantification via learned loss attenuation was first proposed by [12], and this approach can be applied to any NN architecture.

2.2 Suffix Prediction

We define an event log $EL := \{t^{(1)}, t^{(2)}, \ldots, t^{(L)}\}$ as a set of cases, where $L$ denotes the total number of cases. A case is a sequence of events denoted by $t^{(l)} := \langle e_1, e_2, \ldots, e_M \rangle$, where $M$ is the number of events in case $l$. An event is a tuple of event attributes, denoted $e_m := (a_m, t_m, (d_{m1}, \ldots, d_{mk}))$. In this work, we assume that an event has at least two attributes: i) an event label $a_m$, which links the event to a class of event types, and ii) a timestamp attribute $t_m$, which expresses the time an event happened. Additional event attributes are denoted as $(d_{m1}, \ldots, d_{mk})$. We assume that event attributes are either categorical or continuous. A case can be split into several prefix and suffix pairs. A prefix is defined as $p_{\leq k} := \langle e_1, e_2, \ldots, e_k \rangle$, with $1 \leq k < M$. A suffix is defined as $s_{>k} := \langle e_{k+1}, \ldots, e_M \rangle$. Suffix prediction involves predicting a suffix $\hat{s}$ based on an input prefix $p_{\leq k}$. The remaining time of a case, given a prefix $p_{\leq k}$, can be defined as $t := \sum_{j=1}^{M-k} t_{k+j}$, which represents the sum of the durations of all events in the suffix $s_{>k}$ until case completion.

3 Probabilistic Suffix Prediction Framework

This section presents the probabilistic suffix prediction framework consisting of the U-ED-LSTM model and the MC suffix sampling algorithm.

3.1 Uncertainty-Aware Encoder-Decoder LSTM

The U-ED-LSTM implementation comprises the data preparation, model architecture, and loss functions for training.

Data Pre-processing and Embedding. Given an event log, we first apply feature engineering techniques to the events' timestamp attribute to derive additional features for the U-ED-LSTM. We introduce a case elapsed time attribute, representing the time elapsed since the first event in the case, an event elapsed time attribute, representing the time since the last event within the same case (with the value set to 0 for the first event), a day of the week attribute, and a time of day attribute. The latter two features are incorporated due to the potential influence of periodic trends on the future course of a process. For instance, in a company that operates only on weekdays, when an activity is completed on Friday evening, the next activity is unlikely to occur before Monday. Missing values for continuous event attributes are encoded as 0. For all encoder, decoder input, and decoder output continuous event attributes, when assuming that the observation noise over these attributes (as modeled through learned loss attenuation) follows a Normal distribution, we apply standard scaling, excluding the raw timestamp.
|
https://arxiv.org/abs/2505.21339v1
|
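As an illustration of the prefix/suffix pairs defined in Sec. 2.2, the following sketch enumerates all $(p_{\leq k}, s_{>k})$ pairs of a case together with the remaining time; the dict-based event representation and the `event_elapsed` field are our assumptions, not the paper's data format:

```python
def prefix_suffix_pairs(case):
    """Yield (prefix p_<=k, suffix s_>k, remaining time) for 1 <= k < M."""
    M = len(case)
    for k in range(1, M):
        prefix, suffix = case[:k], case[k:]
        remaining = sum(e["event_elapsed"] for e in suffix)  # sum of suffix durations
        yield prefix, suffix, remaining

case = [
    {"label": "register", "event_elapsed": 0.0},
    {"label": "review",   "event_elapsed": 2.0},
    {"label": "approve",  "event_elapsed": 1.5},
]
for p, s, t in prefix_suffix_pairs(case):
    print(len(p), [e["label"] for e in s], t)   # k, suffix labels, remaining time
```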
For all decoder input and output continuous event attributes, when assuming that the observation noise follows a Log-Normal distribution, we first transform the attributes into log-space by applying the natural logarithm function as $\ln(1+x)$, ensuring that only positive values are passed to the logarithm. After this step, we apply standard scaling to the log-transformed values. Following [28], we also apply input padding to facilitate batch training: each case is padded with zeros at the beginning to a fixed length, determined by the maximum case length in the event log, excluding the top 1.5% of the longest cases. This allows multiple prefixes, regardless of the actual prefix length, to be concatenated into a single batch tensor. After the data pre-processing, all categorical event attributes are embedded using an embedding layer stack that maps each categorical event attribute into a vector of fixed dimensionality. For every event attribute with $K$ unique category classes, we add an additional NaN class and an unknown class (a category class not present in the training data). The embedding layer is defined as a learnable weight matrix of size $(K+2) \times D$, where $D = \min(600, \mathrm{round}(1.6\,(K+2)^{0.56}))$ is the chosen embedding dimension, following a common heuristic (see [28]).

Model Architecture. The U-ED-LSTM employs an encoder-decoder architecture of LSTMs [10], the same as in [25]. LSTMs are well-suited for handling sequential data and have been proven effective for suffix prediction [4]. Additionally, ED architectures offer flexibility by decoupling tasks between the encoder and decoder and by handling different input and output event features: the encoder can focus on summarizing the prefix and can take all event attributes as input, while the decoder leverages these representations to predict target event attributes, e.g., only activity and time (see [15, 25]). Since the encoder-decoder LSTM is aware of epistemic uncertainty, both LSTMs consist of stochastic LSTM cells, which are stochastic since they apply MC dropout for Bayesian approximation, adopted from [27]. Additionally, to enable the encoder-decoder LSTM to model aleatoric uncertainty, each output is represented by two neurons instead of a single one: one neuron predicts the mean (or logit), and the other predicts the associated standard deviation. Fig. 1 illustrates the U-ED-LSTM architecture with two-layer LSTM cells and one fully connected (FC) layer with two output neurons. [8] proposed a different dropout variant in which the same dropout mask is applied across all time steps in an RNN, since naive dropout has been shown to be ineffective in recurrent NNs. This variant, called variational (MC) dropout, is applied during training to the U-ED-LSTM. This is illustrated by the colored arrows in Fig. 1, where identical colors indicate the use of the same MC dropout mask across time steps.

[Figure 1: U-ED-LSTM Architecture and Training Pipeline]

The encoder processes input prefixes to compress the event sequence information into a fixed-length representation, known as a latent vector. More formally, we define the encoder as a function $f^{\hat{W}_{enc}}(\cdot)$, with masked weights sampled from the encoder's variational distribution $\hat{W}_{enc} \sim q_{\theta_{enc}}(W_{enc})$. For a given input prefix $p_{\leq k}$, a latent vector tuple is predicted: $f^{\hat{W}_{enc}}(p_{\leq k}) = (h_k, c_k)$.
|
https://arxiv.org/abs/2505.21339v1
|
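The embedding-size heuristic above reads directly as code; a minimal sketch (the function name is ours):

```python
def embedding_dim(num_classes: int) -> int:
    """D = min(600, round(1.6 * (K + 2)**0.56)), with two extra classes
    reserved for NaN and unknown values (heuristic cited from [28])."""
    k = num_classes + 2
    return min(600, round(1.6 * k ** 0.56))

print(embedding_dim(14))   # e.g., a 14-activity log -> embedding dimension 8
```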
Thereby, $h_k$ and $c_k$ represent the last hidden and cell state in the encoder.

The decoder receives the latent vector tuple from the encoder along with the last event from the prefix. At each subsequent timestep, the model uses the previously updated latent vector tuple and a previous event. During training, teacher forcing is applied, selecting either the event from the target suffix or the last predicted event based on a predefined probability. Then the decoder autoregressively predicts $S$ events. For the event log attributes, the decoder has a fully connected output layer. A predicted event consists of the concatenation of all its predicted event attributes. Similar to the encoder, we sample the decoder's masked weights from its variational distribution $\hat{W}_{dec} \sim q_{\theta_{dec}}(W_{dec})$. For a given time step $s = 0, 1, \ldots, S-1$, where $e_{k+s}$ denotes the current event and $(h_{k+s}, c_{k+s})$ the current latent vector tuple, the next event and updated latent vector tuple are predicted as follows: $f^{\hat{W}_{dec}}(e_{k+s}, (h_{k+s}, c_{k+s})) = (\hat{e}_{k+(s+1)}, (h_{k+(s+1)}, c_{k+(s+1)}))$.

Loss Functions. To train the U-ED-LSTM, we use two distinct attenuated loss functions, one for continuous and another for categorical event attributes. The loss is calculated for a batch of $N$ prefix-suffix pairs, $\{p^{(i)}_{\leq k}, s^{(i)}_{>k}\}_{i=1}^{N}$, where each predicted suffix has a fixed sequence length $S$. For continuous event attributes, the decoder predicts a mean value $\hat{y} := f^{\hat{W}_{dec}}_{con_y}$ and the log-variance $\hat{v} := \log(\hat{\sigma}^2) := f^{\hat{W}_{dec}}_{con_\sigma}$, which is common in practice for numerical stability (see [12]). The loss function based on Eq. 1 is implemented as follows:

$$\mathcal{L}_{con} = \frac{1}{N \times S} \sum_{i=1}^{N} \sum_{s=0}^{S-1} \frac{1}{2} \left( \frac{(y^{(i)}_{k+(s+1)} - \hat{y}^{(i)}_{k+(s+1)})^2}{\hat{\sigma}^{2\,(i)}_{k+(s+1)}} + \log(\hat{\sigma}^{2\,(i)}_{k+(s+1)}) \right) = \frac{1}{N \times S} \sum_{i=1}^{N} \sum_{s=0}^{S-1} \frac{1}{2} \left( \exp(-\hat{v}^{(i)}_{k+(s+1)})\,(y^{(i)}_{k+(s+1)} - \hat{y}^{(i)}_{k+(s+1)})^2 + \hat{v}^{(i)}_{k+(s+1)} \right) \quad (3)$$

For categorical event attributes, the decoder predicts a mean logit vector with a logit value for each category class $\hat{l} := f^{\hat{W}_{dec}}_{cat_y}$ and the variance vector with a variance for each logit value $\log(\hat{\sigma}^2) := f^{\hat{W}_{dec}}_{cat_\sigma}$. Then we apply MC integration and average the cross-entropy loss (CEL) of multiple draws from the logits distribution: $\hat{z} = \hat{l} + \hat{\sigma}\epsilon_t$, where $\epsilon_t \sim \mathcal{N}(0, I)$. The loss function based on Eq. 2 is implemented as follows:

$$\mathcal{L}_{cat} = \frac{1}{N \times S} \sum_{i=1}^{N} \sum_{s=0}^{S-1} -\log\left( \frac{1}{T} \sum_{t=1}^{T} \exp\left( \hat{z}^{(i)}_{k+(s+1),t,c} - \log \sum_{c'}^{C} \exp(\hat{z}^{(i)}_{k+(s+1),t,c'}) \right) \right) = \frac{1}{N \times S} \sum_{i=1}^{N} \sum_{s=0}^{S-1} \left( \frac{1}{T} \sum_{t=1}^{T} \mathrm{CEL}(y^{(i)}_{k+(s+1)}, \hat{z}^{(i)}_{k+(s+1),t}) \right) \quad (4)$$

The total loss consists of a weighted sum of losses for continuous event attributes and losses for categorical event attributes, weighted by weight coefficient vectors $w_{con}$ and $w_{cat}$, and the $L_2$ regularization term of the encoder's and decoder's parameters weighted by $\lambda$. The total loss is implemented as follows:

$$\mathcal{L}_{total}(\theta_{enc}, \theta_{dec}) = \sum w_{con}\mathcal{L}_{con} + \sum w_{cat}\mathcal{L}_{cat} + \lambda\left(\|\theta_{enc}\|_2^2 + \|\theta_{dec}\|_2^2\right) \quad (5)$$
|
https://arxiv.org/abs/2505.21339v1
|
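Eqs. (3) and (4) translate almost line-for-line into PyTorch. A minimal sketch; the tensor names and the (N, S) / (N, S, C) shapes are our assumptions:

```python
import torch
import torch.nn.functional as F

def attenuated_regression_loss(y, y_hat, v_hat):
    """Eq. (3): Gaussian NLL with predicted log-variance v_hat = log(sigma^2).
    y, y_hat, v_hat: tensors of shape (N, S)."""
    return (0.5 * (torch.exp(-v_hat) * (y - y_hat) ** 2 + v_hat)).mean()

def attenuated_classification_loss(y, logits, log_var, T=25):
    """Eq. (4): cross-entropy averaged over T draws from the logit distribution.
    y: (N, S) class indices; logits, log_var: (N, S, C)."""
    sigma = torch.exp(0.5 * log_var)
    losses = []
    for _ in range(T):
        z = logits + sigma * torch.randn_like(logits)   # z = l + sigma * eps
        losses.append(F.cross_entropy(z.transpose(1, 2), y))
    return torch.stack(losses).mean()
```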
3.2 MC Suffix Sampling Algorithm

The MC suffix sampling algorithm (Alg. 1), which is similar to other MC sampling approaches for sequence predictions [23, 29], approximates a posterior distribution of suffixes. In particular, Alg. 1 uses MC dropout as a Bayesian approximation to sample epistemic uncertainty, similar to [29], and draws samples from probability distributions learned via loss attenuation, following the approach of [23]. A suffix is sampled in each MC trial. Similar to [29], we employ variational MC dropout on the encoder. First, the prefix is passed into the encoder to obtain a latent vector tuple. Then, the decoder samples a suffix auto-regressively. Unlike during training, naive MC dropout is applied to the decoder during inference, as each event is predicted and used to sample the resulting event individually. Event attributes are sampled differently depending on whether they are continuous or categorical. For continuous event attributes, the event attribute values are directly drawn from a Normal distribution with the predicted means and variances. For categorical event attributes, the logit values are first drawn from Normal distributions and then passed through a Softmax function to obtain a categorical distribution. In a subsequent step, a value for the categorical attribute is drawn from this distribution. The auto-regressive prediction of the next event continues until either EOS is predicted or the predefined maximum sequence length $M$ is reached. The algorithm returns $\tilde{S}$, a set of sampled suffixes with size $T$.

Algorithm 1 MC Suffix Sampling - Probabilistic Suffix Prediction
Require: $T \in \mathbb{N}$: number of MC samples, $M \in \mathbb{N}$: max. suffix length to be sampled, $p \in [0,1]$: dropout probability, $p_{\leq k} = \langle e_1, e_2, \ldots, e_k \rangle$: prefix
1:  function MCSuffixSampling($T$, $M$, $p$, $p_{\leq k}$)
2:    $\tilde{S} \leftarrow \emptyset$  ▷ Set of MC sampled probabilistic suffixes.
3:    for $t = 1$ to $T$ do
4:      $\tilde{s}_{>k} \leftarrow \langle\rangle$  ▷ Sampled events of one probabilistic suffix.
5:      $\hat{W}_{enc} \leftarrow$ VariationalDropout($W_{enc}$, $p$)
6:      $(h_{enc}, c_{enc}) \leftarrow f^{\hat{W}_{enc}}_{enc}(p_{\leq k})$
7:      $\tilde{e} \leftarrow e_k$
8:      $i \leftarrow 1$
9:      repeat
10:       $\hat{W}_{dec} \leftarrow$ NaiveDropout($W_{dec}$, $p$)
11:       $(\hat{y}, \hat{\sigma}^2), (\hat{l}, \hat{\sigma}^2), (h, c) = f^{\hat{W}_{dec}}_{dec}(\tilde{e}, (h_{enc}, c_{enc}))$
12:       $\tilde{y}_{con} \sim \mathcal{N}(\hat{y}, \hat{\sigma}^2)$
13:       for $j = 1$ to $|\hat{l}|$ do
14:         $\tilde{v}_j \sim$ Categorical(Softmax($\hat{l}_j$))
15:         $\tilde{y}^{(j)}_{cat} \leftarrow \tilde{v}_j$
16:       end for
17:       $\tilde{e} \leftarrow \tilde{y}_{con} \cup \tilde{y}_{cat}$
18:       $\tilde{s}_{>k} \leftarrow \tilde{s}_{>k} \circ \tilde{e}$
19:       $i \leftarrow i + 1$
20:     until $i = M$ or GetActivity($\tilde{e}$) = EOS
21:     $\tilde{S} \leftarrow \tilde{S} \cup \{\tilde{s}_{>k}\}$
22:   end for
23:   return $\tilde{S}$
24: end function
|
https://arxiv.org/abs/2505.21339v1
|
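A compact Python rendering of Algorithm 1's control flow. This is a structural sketch, not the released implementation: `encoder` and `decoder` stand for the U-ED-LSTM components with dropout left active at inference, and, following the prose description, the categorical logits are perturbed before the Softmax draw:

```python
import torch

def mc_suffix_sampling(encoder, decoder, prefix, eos_id, T=1000, M=100):
    """Approximate the suffix distribution with T MC trials (Alg. 1 sketch)."""
    sampled_suffixes = []
    for _ in range(T):
        h, c = encoder(prefix)                 # fresh dropout mask per trial
        event, suffix = prefix[-1], []
        for _ in range(M):
            (mu, var), (logits, logit_var), (h, c) = decoder(event, (h, c))
            y_con = torch.normal(mu, var.sqrt())          # continuous attributes
            z = torch.normal(logits, logit_var.sqrt())    # sampled logits
            y_cat = torch.distributions.Categorical(logits=z).sample()
            event = (y_con, y_cat)
            suffix.append(event)
            if int(y_cat) == eos_id:           # stop at the EOS token
                break
        sampled_suffixes.append(suffix)
    return sampled_suffixes
```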
4 Evaluation

The evaluation presents the predictive performance and calibration results of the U-ED-LSTM under three different hyperparameter settings, using standard suffix prediction and calibration metrics on four real-life datasets. We compare the most likely prediction generated by the U-ED-LSTM with the aggregated mean prediction of all sampled suffixes. This comparison demonstrates that, in certain cases, the aggregated mean prediction exceeds that of the most likely prediction. Additionally, we report predictive performance results from other ED-LSTMs and Transformers used for suffix prediction in the existing literature, evaluated on the same datasets. Since the reported models were not re-implemented, the comparison is not meant to be direct but to provide an intuition of how other approaches perform, demonstrating that the U-ED-LSTM achieves reasonable predictive performance. Furthermore, we assessed the calibration of the U-ED-LSTM to show that the model can capture uncertainties in the event logs. Our implementation and evaluation are publicly available (repository: https://github.com/ProbabilisticSuffixPredictionLab/Probabilistic_Suffix_Prediction_U-ED-LSTM_pub).

Datasets. We evaluated the U-ED-LSTM's predictive performance on four real-life datasets. The Helpdesk dataset (https://doi.org/10.4121/uuid:0c60edf1-6f83-4e75-9367-4c63b3e9d5bb) is an event log from a ticket management system of an Italian software company. The Sepsis dataset (https://doi.org/10.4121/uuid:915d2bfb-7e84-49ad-a286-dc35f063a460) represents the pathway of patients diagnosed with sepsis through a hospital. The BPIC-2017 dataset (https://doi.org/10.4121/uuid:5f3067df-f10b-45da-b98b-86ae4c7a310b) is a loan application process from a Dutch bank and has been investigated in the Business Process Intelligence Competition (BPIC) 2017. The PCR dataset (https://doi.org/10.5281/zenodo.11617408) contains process logs from laboratory SARS-CoV-2 RT-PCR tests over one year. Properties for each dataset are presented in Tab. 1. The datasets were split at the case level into training and testing sets using an 80%-20% ratio, following the approach in [13]. The training set was further divided into a training and validation subset: 65% of the original data was used for model training, and 15% was used for validation during training, resulting in a 65%-15%-20% training-validation-testing data split. This splitting of the training data into training and validation data is also common in other suffix prediction approaches [15, 25, 28].

Table 1: Dataset Properties

| Dataset | Cases | Events | Variants | Activities | Mean–SD Case Length | Mean–SD Case Duration | Cat. Event Attr. | Con. Event Attr. |
|---|---|---|---|---|---|---|---|---|
| Helpdesk | 4580 | 21348 | 226 | 14 | 4.66 – 1.18 | 40.86 – 8.39 (days) | 12 | 4 |
| Sepsis | 1049 | 15214 | 845 | 16 | 14.48 – 11.47 | 28.48 – 60.54 (days) | 26 | 8 |
| BPIC17 | 31509 | 1202267 | 15930 | 26 | 38.16 – 16.72 | 21.90 – 13.17 (days) | 9 | 9 |
| PCR | 6166 | 117703 | 1213 | 8 | 19.09 – 3.37 | 19872 – 27864 (sec.) | 2 | 4 |

Training and Sampling. We trained our U-ED-LSTM model on an NVIDIA RTX 4090 GPU. The implementation allows users to flexibly select the encoder's input event attributes and the decoder's input and output (prediction) event attributes. Several training optimization techniques were implemented to achieve the best possible predictive performance. During training, the U-ED-LSTM predicts $S$ events for each prefix in the batch, from which the loss is calculated. This approach optimizes its sequence-to-sequence predictions, similar to, e.g., the Complete Remaining Trace Prediction (CRTP) method [9, 28]. We set $S = 5$, optimizing the U-ED-LSTM for sequence-to-sequence predictions while reducing error propagation in cases where the model predicts incorrect events early in the suffix. However, in the original CRTP method, $S$ is set flexibly since a suffix of the same length as the target suffix is predicted for each prefix. We applied probabilistic teacher forcing, where the last predicted or target suffix event is randomly taken as input for the next event prediction [25]. The initial teacher forcing probability was set to 0.8, meaning that 80% of the input events came from the target suffix. This ratio gradually decreased starting from 20% of the training epochs onward. Since our loss function consists of the sum of multiple attribute losses, referred to as Multi-Task Learning, we implemented a task-balancing algorithm called GradNorm [5]. GradNorm dynamically adjusts the gradient magnitudes and tunes the weight coefficient vectors $w_{con}$ and $w_{cat}$ after each optimization step based on the relative importance of each event attribute on the overall loss. To balance an appropriate level of Bayesian variational approximation for measuring epistemic uncertainty during inference while also maintaining good predictive performance, we set the MC dropout rate to a constant $p = 0.1$ during training, following the recommendation in [27]. The MC sampling algorithm ran on an AMD Ryzen 9 7950 CPU. For each prefix in our testing datasets, we conducted 1000 MC sampling trials with an MC dropout probability of, again, $p = 0.1$ [27]. We allow only the predicted continuous event attribute values to be greater than zero and the case elapsed time to increase. When a lower case elapsed time value than the one from the previous event is sampled, the value from the last event is taken.
|
https://arxiv.org/abs/2505.21339v1
|
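The sampling constraints just described (non-negative continuous attributes, monotonically non-decreasing case elapsed time) amount to a small post-processing step. A sketch with our own function and field names:

```python
def constrain_sampled_event(event: dict, prev_case_elapsed: float) -> dict:
    """Clamp a sampled event as described in Sec. 4: continuous values must
    not be negative, and 'case_elapsed' must not fall below its predecessor."""
    out = {k: (max(v, 0.0) if isinstance(v, float) else v)
           for k, v in event.items()}
    if out["case_elapsed"] < prev_case_elapsed:
        out["case_elapsed"] = prev_case_elapsed   # reuse the last event's value
    return out
```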
Hyperparameter Settings. Three different U-ED-LSTMs with varying hyperparameter settings were trained and compared for each dataset. In the first setting, we trained the U-ED-LSTM with 2-layer encoder and decoder LSTMs, along with a fully connected (FC) layer in the decoder containing separate mean and variance heads for each output event attribute. We assumed normally distributed continuous event attributes and noise and used all event attributes as input features for the encoder and as input and output features for the decoder. In the second setting, we trained the U-ED-LSTM with 4-layer encoder and decoder LSTMs, along with an FC layer in the decoder containing separate mean and variance heads for each output event attribute. We assumed normally distributed continuous event attributes and noise. All event attributes were used as input features for the encoder, while only the activity and time attributes were used as input and output features for the decoder. In the third setting, we trained the U-ED-LSTM with 4-layer encoder and decoder LSTMs, along with an FC layer in the decoder containing separate mean and variance heads for each output event attribute. We assumed log-normally distributed continuous event attributes and noise. All event attributes were used as input features for the encoder, while only the activity and time attributes were used as input and output features for the decoder. The remaining hyperparameters were set as follows: in each hyperparameter setting, the encoder and decoder LSTMs have a hidden size of 128. We used the standard Adam optimizer in the first setting, while the more advanced AdamW optimizer was applied in the second and third settings. Learning rates were set the same in each setting: $1\times10^{-4}$ for the smaller Helpdesk and PCR datasets, $1\times10^{-5}$ for the medium-sized Sepsis dataset, and $1\times10^{-6}$ for the larger BPIC-17 dataset. After extensive testing, we observed that larger datasets benefited from lower learning rates. A batch size of 128 was used for all datasets except for BPIC-17, which required a batch size of 256 during training. Although a batch size of 128 might also produce good results for BPIC-17, computational constraints necessitated using a larger batch size to reduce training time. All models were trained for 200 epochs in the first setting and 100 epochs in the second and third settings, without early stopping but with continuous monitoring of validation set performance. Across all settings and datasets, the learning curves decreased during the first 100 epochs and then stagnated. Based on this observation, and to reduce training time, we consistently trained models for 100 epochs in the second and third settings. Additionally, we applied an $L_2$ regularization parameter of $\lambda = 1\times10^{-4}$. Table 2 summarizes the hyperparameters in each setting.

Table 2: Hyperparameter Settings

| Hyperparameter | Setting 1 | Setting 2 | Setting 3 |
|---|---|---|---|
| U-ED-LSTM layers | 2 | 4 | 4 |
| FC layers in decoder | 1 | 1 | 1 |
| Assumed distribution for con. event attr. | Normal | Normal | Log-Normal |
| Encoder features | All | All | All |
| Decoder features | All | Activity & Time | Activity & Time |
| Hidden size | 128 | 128 | 128 |
| Optimizer | Adam | AdamW | AdamW |
| Learning rate | $1\times10^{-4}$ – $1\times10^{-6}$ | $1\times10^{-4}$ – $1\times10^{-6}$ | $1\times10^{-4}$ – $1\times10^{-6}$ |
| Batch size | 128 (BPIC17: 256) | 128 (BPIC17: 256) | 128 (BPIC17: 256) |
| Epochs | 200 | 100 | 100 |
| MC dropout probability (train/test) | 0.1 | 0.1 | 0.1 |
| Weight decay / regularization | $1\times10^{-4}$ | $1\times10^{-4}$ | $1\times10^{-4}$ |

4.1 Predictive Performance

The evaluation demonstrates the predictive performance of our U-ED-LSTM under different hyperparameter settings across multiple datasets. Therefore, the most likely prediction generated by the U-ED-LSTM is compared with the aggregated mean prediction of all sampled suffixes from the probabilistic approach. The most likely suffix prediction is obtained by auto-regressively sampling the most probable event label at each step until the EOS token is reached, following the approach used in previous
|
https://arxiv.org/abs/2505.21339v1
|
works [3, 9, 15, 24, 25, 28]. Additionally, we report results from other models in the literature without re-implementation and with different hyperparameter settings. These results are not intended for direct performance comparison but rather to indicate that the predictive performance of our models is reasonable. In future work, we plan to re-implement the approaches proposed in the literature to enable a thoroughly reliable and transparent comparison of results using the best-performing hyperparameter settings.

Metrics. We adopted three commonly used evaluation metrics for suffix prediction: the Damerau-Levenshtein Similarity (DLS) metric to assess the activity sequence prediction [3, 18, 21, 25, 28], the Mean Average Error (MAE) of the remaining time predictions [3, 24, 25, 28], and the Mean Average Error (MAE) of the suffix length prediction. Here, we used a holdout test set, did not prune the datasets, started the evaluation from a prefix length of 1, i.e., $p_{\leq k}, \forall k \geq 1$, and implemented the metrics in the following manner. The DLS on the event labels is defined as a normalized DL distance $DLS(\hat{s}, s) := 1 - \frac{DL(s, \hat{s})}{\max(|s|, |\hat{s}|)}$, where $s$ and $\hat{s}$ denote the actual and predicted sequences of event labels. Informally, $DLS = 1$ expresses that the two sequences are identical, while $DLS = 0$ expresses that the two are entirely dissimilar. We obtain the DLS for the most likely suffix prediction by comparing the predicted suffix with the ground truth suffix. For the probabilistic suffix prediction, we calculate the DLS for all MC samples and take the mean. Since we preprocessed the event logs by adding the case elapsed time and event elapsed time attributes, we can obtain a remaining time prediction in two ways: by summing the predicted event elapsed times of a case or by taking the case elapsed time value of the last event in a case. We implemented both. We calculated the mean remaining time for the probabilistic suffix prediction and compared that mean aggregation with the ground truth value. Other works [3, 24] observed that some suffix prediction approaches fail at predicting the right suffix length. Therefore, we also measured the MAE between the true and the predicted suffix length, the suffix length MAE. We took the mean suffix length of the MC samples for probabilistic suffix prediction.
|
https://arxiv.org/abs/2505.21339v1
|
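The DLS metric defined above is straightforward to implement. A sketch using the optimal string alignment variant of the Damerau-Levenshtein distance (the paper does not state which DL variant is used; the function names are ours):

```python
def dl_distance(s, t):
    """Damerau-Levenshtein distance (optimal string alignment variant)."""
    d = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(len(s) + 1):
        d[i][0] = i
    for j in range(len(t) + 1):
        d[0][j] = j
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and s[i - 1] == t[j - 2] and s[i - 2] == t[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(s)][len(t)]

def dls(pred, truth):
    """Normalized similarity: 1 = identical sequences, 0 = entirely dissimilar."""
    if not pred and not truth:
        return 1.0
    return 1.0 - dl_distance(truth, pred) / max(len(truth), len(pred))

print(dls(["A", "B", "C"], ["A", "C", "B"]))  # one transposition -> ~0.67
```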
Predictive Performance Results. The results of the most likely prediction and of the aggregated mean of all sampled suffixes from the probabilistic approach using the U-ED-LSTM with different hyperparameter settings, as well as results from existing ED-LSTM and Transformer-based approaches from the literature, are presented in Tab. 3 for suffix length and suffix event label predictions and in Tab. 4 for remaining time predictions. The results, detailed by prefix and suffix lengths for each hyperparameter setting, comparing the most likely predictions to the aggregated mean of all sampled suffixes from the probabilistic approach, are shown in Figure 2 for Setting 1, Figure 3 for Setting 2, and Figure 4 for Setting 3. The following existing models from the literature and their predictive performance results are selected: MM-Pred from [15], an ED-LSTM model evaluated on the Helpdesk and BPIC-17 datasets. The authors used a 70%-10%-20% train-validation-test split. The model consists of 2-layer encoder and decoder LSTMs with a hidden size of 32 and a dropout rate of 0.2 for regularization during training. Additionally, MM-Pred includes a component called the “Modulator”, which learns the significance of each event label and lifecycle transition attribute and passes this information as an additional feature to the decoder. ED-LSTM-GAN from [25] was evaluated on the Helpdesk and BPIC-17 datasets. The authors used a 70%-10%-20% train-validation-test split. The model consists of 5-layer encoder and decoder LSTMs with a hidden size of 32 as a generator and an FC layer as the discriminator as part of the GAN architecture. Training is conducted for 500 epochs, with early stopping applied after 30 epochs. RMSprop is the optimizer, with a $5\times10^{-5}$ learning rate. Probabilistic teacher forcing is used with a ratio of 0.1. AE (inspired by [15, 25]) and AE-GAN (inspired by [25]) from [13] were evaluated on the Helpdesk, Sepsis, and BPIC-17 datasets. An 80%-20% train-test split was used. Both models consist of 4-layer encoder and decoder LSTMs with a hidden size of 128 and are trained for 400 epochs, with early stopping applied after 50 epochs. The Adam optimizer is used with a learning rate of $1\times10^{-4}$, and a dropout rate of 0.3 is applied for regularization. SuTraN, along with an ED-LSTM re-implementation of [25], from [28], was evaluated on the BPIC-17 dataset. The authors used a 55%-35%-25% train-validation-test split. The ED-LSTM architecture consists of 4-layer encoder and decoder LSTMs with a hidden size of 64. A batch size of 128 is used, and models are trained for 200 epochs with early stopping based on the validation set performance. The Adam optimizer is applied with an initial learning rate of $2\times10^{-4}$, adjusted dynamically via an exponential learning rate scheduler with a decay factor of 0.96. A $1\times10^{-4}$ weight decay is used, and teacher forcing is employed during training. Similarly to [4], we observe that results from other works are difficult to compare directly, as they often differ not only in hyperparameter settings but also in several other aspects, such as varying data preprocessing strategies (e.g., pruning of event labels), the exclusion of short prefix lengths, differences in evaluation methodology (e.g., holdout test sets versus cross-validation), and inconsistent metric implementations (e.g., comparing the predicted suffix to a single ground truth versus a set of possible suffixes). However, we report the predictive performance results from the abovementioned models to provide an intuition of whether the U-ED-LSTM demonstrates reasonable and competitive predictive performance. Based on the results for the suffix length MAE and the DLS of suffix event labels reported in Table 3, it can be concluded that the second hyperparameter setting yielded the best performance for the U-ED-LSTM on the Helpdesk and BPIC-17 datasets, especially for the results of the probabilistic approach. Notably, for the Helpdesk dataset, the DLS for suffix event label prediction improved significantly, from 0.53 to 0.82 for the most likely prediction and from 0.44 to 0.65 for the aggregated mean of sampled suffixes. These results are comparable to those reported in the literature, where DLS values for most likely predictions range from 0.84 (-0.02) (ED-LSTM-GAN [25]) to 0.87 (-0.05) (MM-Pred [15]). Similar improvements are observed in the MAE of suffix length predictions for the Helpdesk dataset when comparing Setting 1 to Setting 2. One possible reason Setting 2 provided the best results on the Helpdesk dataset could be the deeper architecture with 4-layer encoder and decoder LSTMs. This enhanced the U-ED-LSTM's ability to abstract higher-level patterns, capture more complex dependencies, and generalize better to longer suffixes. This is particularly evident when comparing the DLS for suffix event label prediction at a rare suffix length of 10: in Setting 2 (Fig. 3), the DLS is 0.4, representing a 100% improvement over Setting 1 (Fig. 2), which is 0.2. Moreover, comparing Setting 2 with Setting 3 on all datasets, apart from some results on the Sepsis dataset, shows that using a normal distribution instead of a log-normal distribution for loss attenuation also improved the predictive performance for the suffix length and suffix event label prediction. The model's learning process was more stable in the normal distribution settings, resulting in smoother convergence across all output event attributes during training. The third setting yielded the best suffix length MAE and the best DLS for suffix event label predictions of the U-ED-LSTM on the Sepsis dataset, achieving a DLS of 0.18 based on the aggregated mean of all sampled suffixes from the probabilistic approach, surprisingly outperforming the most likely prediction. However, while this is still a relatively poor result, it is comparable to those reported in the literature, such as 0.14 (+0.04) for AE-GAN [13] and 0.22 (-0.04) for AE [13]. These results suggest that suffix prediction for the Sepsis dataset is challenging,
|
https://arxiv.org/abs/2505.21339v1
|
the MAE of suffix length predictions for the Helpdesk dataset when comparing Setting 1 to Setting 2. One possible reason Setting 2 provided the best results on the Helpdesk dataset could be the deeper architecture with a 4-layer encoder and decoder LSTMs. This enhanced the U-ED-LSTM’s ability to abstract higher-level patterns, capture more complex dependencies, and generalize better to longer suffixes. This is particularly evident when comparing the DLS for suffix event label prediction at a rare suffix length of 10: In Setting 2 (Fig. 3), the DLS is 0.4, representing a 100% improvement over Setting 1 (Figure 2), which is 0.2. Moreover, comparing Setting 2 with Setting 3 on all datasets instead of some results from the Sepsis dataset, it can be seen that using a normal distribution compared to a log-normal distribution for loss attenuation also outperformed the predictive performance for the suffix length and suffix event label prediction. The model’s learning process was more stable in the normal distribution settings, resulting in a smoother convergence across all output event attributes during training. The third setting yielded the best suffix length MAE and the best DLS for suffix event label predictions of the U-ED-LSTM on the Sepsis dataset, achieving a DLS of 0.18 based on the aggregated mean of all sampled suffixes from the probabilistic approach, surprisingly, outperforming the most likely prediction. However, while this is still a relatively poor result, it is comparable to those reported in the literature, such as 0.14 (+0.04) for AE-GAN [13] and 0.22 (-0.04) for AE [13]. These results suggest that suffix prediction for the Sepsis dataset is challenging, 13 likely due to high variability and many unique cases that do not follow a general pattern. In the Sepsis dataset, wide interquantile ranges (IQRs) can be observed across all settings in Figure 2, Figure 3, and Figure 4. The DLS of the aggregated mean consistently falls near the center of the distribution, probably indicating that a diverse set of sampled suffixes with different suffix lengths contribute equally to DLS. For the BPIC-17 dataset, the probabilistic approach achieved the best results regarding suffix length MAE and DLS for suffix event labels in Setting 2, even outperform- ing the most likely prediction. This improvement can be attributed to the potentially long suffixes present in BPIC-17 cases. The probabilistic method performs better in predicting suffix lengths, contributing to a lower DLS score due to more accurate sampling of the EOS token. This result demonstrates the advantage of using a probabilistic approach for predicting long suffixes, where uncertainty and variability naturally increase with each additional (false) predicted event. For the PCR dataset, the first setting provided the best performance of the U-ED-LSTM in terms of both suffix length MAE and DLS for suffix event label predictions. One possible reason why the 2-layer setting outperformed the 4-layer could be the relatively small size and low complexity of the PCR dataset. The PCR dataset represents a highly automated process managed by a workflow engine, resulting in less variability than the other datasets. Since the most likely suffix was obtained Table
|
https://arxiv.org/abs/2505.21339v1
|
from auto-regressively sampling the mode activity with the highest Softmax probability, it can be expected to have the best DLS. Interestingly, this is not true for small prefix sizes in the Sepsis and BPIC-17 datasets in Setting 2.

Table 3: Predictive Performance (Categorical): Suffix Length MAE and Suffix Event Labels DLS

| Method | Len MAE Helpdesk | Len MAE Sepsis | Len MAE BPIC17 | Len MAE PCR | DLS Helpdesk | DLS Sepsis | DLS BPIC17 | DLS PCR |
|---|---|---|---|---|---|---|---|---|
| *Own Results* | | | | | | | | |
| Most likely - Setting 1 | 0.96 | 27.59 | 13.74 | 1.48 | 0.53 | 0.1 | 0.35 | 0.83 |
| Probabilistic - Setting 1 | 0.74 | 6.84 | 14.29 | 1.98 | 0.44 | 0.14 | 0.28 | 0.59 |
| Most likely - Setting 2 | 0.36 | 8.8 | 40.83 | 3.49 | 0.82 | 0.11 | 0.21 | 0.67 |
| Probabilistic - Setting 2 | 0.38 | 6.83 | 11.42 | 3.63 | 0.65 | 0.12 | 0.31 | 0.54 |
| Most likely - Setting 3 | 0.54 | 26.96 | 40.78 | 4.37 | 0.82 | 0.09 | 0.16 | 0.62 |
| Probabilistic - Setting 3 | 0.53 | 6.16 | 33.71 | 4.43 | 0.52 | 0.18 | 0.2 | 0.54 |
| *ED-LSTM from Lit.* | | | | | | | | |
| MM-Pred [15] | - | - | - | - | 0.87 | - | 0.3 | - |
| ED-LSTM-GAN [25] | - | - | - | - | 0.84 | - | 0.34 | - |
| AE [13] | - | - | - | - | 0.86 | 0.22 | 0.14 | - |
| AE-GAN [13] | - | - | - | - | 0.86 | 0.14 | 0.07 | - |
| ED-LSTM [28] | - | - | - | - | - | - | 0.32 | - |
| *Transformer from Lit.* | | | | | | | | |
| SuTraN [28] | - | - | - | - | - | - | 0.38 | - |

A similar conclusion regarding the optimal hyperparameter setting per dataset for the U-ED-LSTM can be drawn for the continuous event time attributes, as shown in Table 4. Interestingly, the assumed log-normal distribution returned significantly worse results than the normal distribution for all datasets except for the remaining-time-of-the-last-event MAE in the Sepsis dataset. This observation suggests that while the log-normal distribution offers certain theoretical benefits (e.g., modeling non-negative values), it may not be suitable for suffix prediction. Our observations indicate that the log-normal distribution performs well when the target values are close to each other (i.e., exhibit low variance) but performs poorly when the distribution is broader. This issue was particularly relevant in our case. Although the remaining times (sum and last) across datasets were originally measured in days (except for the PCR dataset), we trained the model using the logarithm of time values in seconds, followed by standard normalization. Consequently, after destandardizing and exponentiating the outputs for evaluation, the predictions became highly sensitive to sampled outliers, often resulting in extreme and unrealistic values. This outcome is surprising, as we expected the log-normal distribution to provide improved precision due to its ability to model non-negative continuous values, which is the case for time event attributes.

Table 4: Predictive Performance (Continuous): Remaining Time MAE (values in days, except PCR in seconds)

| Method | Sum MAE Helpdesk | Sum MAE Sepsis | Sum MAE BPIC17 | Sum MAE PCR (sec.) | Last MAE Helpdesk | Last MAE Sepsis | Last MAE BPIC17 | Last MAE PCR (sec.) |
|---|---|---|---|---|---|---|---|---|
| *Own Results* | | | | | | | | |
| Most likely Setting 1 | 11.21 | 38.09 | 11.76 | 159.19 | 18.35 | 29.21 | 9.95 | 9340.1 |
| Probabilistic Setting 1 | 8.74 | 31.21 | 12.45 | 165.91 | 14.02 | 34.3 | 10.83 | 19237.2 |
| Most likely Setting 2 | 11.76 | 34.5 | 10.73 | 170.87 | 9.12 | 29.88 | 10.75 | 8871.74 |
| Probabilistic Setting 2 | 9.58 | 31.18 | 10.62 | 170.44 | 10.99 | 31.41 | 14.4 | 12281.35 |
| Most likely Setting 3 | 261.64 | 147.15 | 10.68 | 180.18 | 553.82 | 24.54 | 2887.58 | 411704.04 |
| Probabilistic Setting 3 | 5789.51 | 135.05 | 1387.15 | 15262002.78 | 557.32 | 24.87 | 4448.17 | 412893.69 |
| *ED-LSTM from Lit.* | | | | | | | | |
| MM-Pred [15] | - | - | - | - | - | - | - | - |
| ED-LSTM-GAN [25] | 6.21 | - | 13.95 | - | - | - | - | - |
| AE [13] | 3.83 | 735.04 | 69.51 | - | - | - | - | - |
| AE-GAN [13] | 3.88 | 187.12 | 100.19 | - | - | - | - | - |
| ED-LSTM [28] | - | - | 8.44 | - | - | - | - | - |
| *Transformer from Lit.* | | | | | | | | |
| SuTraN [28] | - | - | 5.5 | - | - | - | - | - |
|
https://arxiv.org/abs/2505.21339v1
|
Our probabilistic approach can obtain better remaining time (sum) predictions for smaller prefixes and, overall, on the Helpdesk and Sepsis datasets. This might be because suffixes from small prefix lengths have higher uncertainty. Interestingly, the opposite is the case with the BPIC-17 dataset. Since the BPIC-17 dataset has the longest case lengths, obtaining only 1000 MC samples might not have been sufficient. We noticed that the algorithm often moves into highly uncertain regions in our MC sampling approach. This usually leads to sampling from high variances, resulting in considerable event elapsed time and case elapsed time values, inflating the mean aggregations. Furthermore, as Fig. 4 shows, the values of the learned log-normal loss attenuation are sensitive to outlier predictions, which greatly influence the aggregated mean of all sampled suffixes.

Overall, Setting 2 proved to be the best hyperparameter configuration. The probabilistic approach's aggregated mean of all sampled suffixes often outperforms the most likely prediction, particularly for short prefixes and long suffixes (high variability, high uncertainty), and especially for continuous event attribute predictions. Assuming a normal distribution-based loss attenuation for noise on continuous input data yielded better results than a log-normal distribution.
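For intuition about the MC suffix sampling referred to here, a minimal sketch follows. The decoder interface (`predict_step`) and its outputs are hypothetical stand-ins for the U-ED-LSTM decoder, which combines MC dropout with learned aleatoric variances; only the sampling and aggregation logic is meant to be representative:

```python
import numpy as np

rng = np.random.default_rng(0)
EOS, MAX_LEN, T = 0, 50, 1000  # EOS token id, max suffix length, MC samples

def predict_step(state):
    # Hypothetical stand-in for one decoder step under MC dropout: returns
    # activity probabilities plus mean/std of the (normalized) time attribute.
    probs = rng.dirichlet(np.ones(5))            # categorical over 5 activities
    mu, sigma = rng.normal(), abs(rng.normal())  # aleatoric time distribution
    return probs, mu, sigma

def sample_suffix(state):
    # One MC sample: autoregressively draw activities and times until EOS.
    events, elapsed = [], 0.0
    for _ in range(MAX_LEN):
        probs, mu, sigma = predict_step(state)
        act = rng.choice(len(probs), p=probs)  # sample, not arg-max
        elapsed += rng.normal(mu, sigma)       # sample time from N(mu, sigma)
        events.append(act)
        if act == EOS:
            break
    return events, elapsed

# Aggregate T sampled suffixes; this "aggregated mean" is what is compared
# against the most-likely (arg-max) suffix in Tables 3 and 4.
samples = [sample_suffix(state=None) for _ in range(T)]
mean_len = np.mean([len(ev) for ev, _ in samples])
mean_time = np.mean([t for _, t in samples])
print(mean_len, mean_time)
```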
|
https://arxiv.org/abs/2505.21339v1
|
[Figure 2: Predictive Performance - Setting 1. For each dataset (Helpdesk, Sepsis, PCR, BPIC17): suffix length MAE, DLS, and remaining time (event sum) MAE (days; seconds for PCR) over prefix and suffix lengths, comparing the most-likely suffix with the mean probabilistic suffix, with IQR ranges and instance counts.]

[Figure 3: Predictive Performance - Setting 2. Same panel layout as Figure 2.]
|
https://arxiv.org/abs/2505.21339v1
|
[Figure 4: Predictive Performance - Setting 3. Same panel layout as Figure 2.]

4.2 Calibration Results

We evaluated the calibration of the U-ED-LSTM to demonstrate that the model can capture the variability in the respective event logs.

Metrics. To evaluate the calibration of remaining time predictions (sum and last), we used the Probability Integral Transform (PIT). Given the predicted distribution of remaining times generated by the U-ED-LSTM for each test case, obtained by sampling multiple suffixes, we constructed an empirical cumulative distribution function (CDF) for each prediction. For a given test case $i$, let $\hat{t}_{i,j}$ be the remaining time prediction of the $j$-th sampled suffix, and let $t_i$ be the ground-truth remaining time. The normalized PIT value per test case, $u_i$, is computed as

$$u_i = \frac{1}{T}\sum_{j=1}^{T} \mathbf{1}\{\hat{t}_{i,j} \le t_i\},$$

where $T = 1000$ is the total number of MC samples and $\mathbf{1}\{\cdot\}$ is the indicator function. The calculation is the same for the remaining time sum and last. After computing the set of PIT values $u := \{u_1, \dots, u_{|D_{\text{test}}|}\}$, PIT plots were constructed: the x-axis represents the PIT values in $[0, 1]$, and the y-axis shows the probability density. PIT plots can take different shapes, each indicating a different kind of model calibration:

- A uniform distribution (i.e., a flat line at density 1) indicates perfect calibration: the predicted variance matches the true variability in the ground truth data exactly.
- A U-shape suggests that the model predicts too little variance, so outliers are underestimated. For instance, if the true variability of remaining times spans ±5 days around the mean, but the model only predicts a variance of ±3 days, it will consistently fail to capture outliers.
- A bell shape indicates that the model predicts too much variance. For instance, if the true variability of remaining times spans ±5 days around the mean, and the model predicts a much wider spread of ±10 days, it will consistently predict, across all samples per case, remaining times both below and above the ground truth.
- A slope-shaped PIT plot indicates a systematic bias in the model's predictions. In such cases, no reliable conclusions about the predicted variance can be drawn.

The PIT plots of the U-ED-LSTM, across all hyperparameter settings and datasets, are depicted in Fig. 5 (Setting 1), Fig. 6 (Setting 2), and Fig. 7 (Setting 3).

Calibration Results. For the Helpdesk dataset, the model is quite well calibrated for predicting the remaining time, both sum and last, in Setting 2, demonstrating its ability to predict a wide range of remaining times across all test cases. In contrast, Setting 1 shows that the U-ED-LSTM tends to predict values with somewhat more variance across test cases. However, the model tends to predict smaller remaining time values (sum and last) than the ground truths, as the PIT values tend to be greater than 0.5 on the x-axis.
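Returning to the Metrics paragraph above: the PIT computation amounts to a few lines of array code. A minimal sketch with synthetic predictions (not actual U-ED-LSTM outputs), constructed so the model underestimates the true variance, which yields the U-shape described above:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_cases = 1000, 500  # MC samples per case, number of test cases

# Synthetic ground-truth remaining times and per-case MC samples.
# True spread is sigma=5; the "model" here underestimates it (sigma=3),
# so the PIT histogram should come out U-shaped.
t_true = rng.normal(20.0, 5.0, size=n_cases)
t_samples = rng.normal(20.0, 3.0, size=(n_cases, T))

# u_i = (1/T) * sum_j 1{ t_hat_{i,j} <= t_i }: one PIT value per test case.
u = (t_samples <= t_true[:, None]).mean(axis=1)

# Histogram as a poor man's PIT plot: mass piles up at 0 and 1 (U-shape).
hist, _ = np.histogram(u, bins=10, range=(0.0, 1.0), density=True)
print(np.round(hist, 2))
```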
|
https://arxiv.org/abs/2505.21339v1
|
In Setting 3, a systematic bias (a sloped PIT plot with mass concentrated near 0) is visible: the model tends to predict values that are too large. Nevertheless, for the Helpdesk dataset the U-ED-LSTM, especially in Setting 2, can capture the variability of the remaining time (sum and last) in the test dataset quite well.

For the Sepsis dataset, the U-ED-LSTM consistently exhibits U-shaped PIT plots across all settings. This suggests that the model underestimates the variance, predicting much less variability than is present in the ground truth values across all test cases. This contrasts with the Helpdesk dataset, where the model tends to overestimate variance. These observations align closely with the predictive performance results and are consistent with the characteristics of the datasets: the Helpdesk dataset contains lower inherent variability and shorter suffix lengths, so the model tends to predict too much variance, whereas for the Sepsis dataset, which contains high variability, the model underestimates it. Nevertheless, among the evaluated settings, Setting 2 yielded the best calibration for Sepsis.

The calibration performance for the BPIC-17 dataset is generally poor: a clear systematic bias is observed in the remaining time predictions (sum and last). Among all configurations, Setting 2 provides the most balanced calibration, although a minor bias remains. Specifically, in Setting 2 the remaining time sum is slightly underestimated, while the remaining time last is somewhat overestimated.

In Setting 1 of the U-ED-LSTM on the PCR dataset, the PIT plot exhibits a bell-shaped distribution for the remaining time sum, indicating that the model's predicted variance is much higher than the true variability of remaining time values. The sloped PIT plot for the remaining time last reveals a systematic bias, with the U-ED-LSTM consistently overestimating the true remaining times. However, calibration for both remaining time types improves in Setting 2. Overall, the results suggest that in Setting 2 the model's calibration for the PCR dataset is reasonably well aligned.

[Figure 5: Model Calibration of Remaining Time Predictions - Setting 1. PIT histograms per dataset (Helpdesk, Sepsis, PCR, BPIC17) for the sum of event processing times and the last case elapsed time, against the ideal uniform density y = 1.]
|
https://arxiv.org/abs/2505.21339v1
|
[Figure 6: Model Calibration of Remaining Time Predictions - Setting 2. Same layout as Figure 5.]

[Figure 7: Model Calibration of Remaining Time Predictions - Setting 3. Same layout as Figure 5.]

5 Related Work

Suffix Prediction. Current suffix prediction approaches focus on predicting the most likely suffix and improving predictive performance. The methods differ in the models used, the predicted event attributes, and the strategies employed to enhance training. Early suffix prediction approaches use LSTMs [3, 6, 9, 24]. Predictive performance is improved by using encoder-decoder LSTMs [15, 25]. More recent encoder-decoders are enriched with more complex NN architectures, such as combinations of Gated Recurrent Units, Graph NNs, and attention [21], or transformers [28]. Recently, LLMs have been used for suffix prediction [18], facing challenges such as a lack of interpretability and the inability to pass all prefixes into a prompt simultaneously. In addition, existing approaches can be categorized based on the predicted event attributes. Some approaches predict only the sequence of activities [6, 18, 21] and lifecycle transitions [15]. Others predict the sequence of activities together with time attributes [9, 24, 25, 28] and resource information [3]. Special training considerations are applied to improve predictive performance: [25], for example, introduce teacher forcing and enhance their encoder-decoder LSTM with adversarial training to improve performance and robustness. [9, 28] proposed CRTP, demonstrating that models trained this way outperform those optimized for single-event prediction. For testing, [3] compare random sampling from categorical distributions against an arg-max strategy to derive the best matching activities in a suffix. Similar to our approach, they observe better performance in suffix length prediction.

Uncertainty in PPM. For remaining time and next activity predictions, combined epistemic and aleatoric uncertainty for NNs is applied to PPM by [27]. [20] applies and compares deep ensembles and MC dropout in attention-based NNs for next activity prediction. Both approaches aim to improve single-event prediction performance and show how uncertainty and prediction accuracy correlate. Most recently, [17] introduces conformalized MC dropout, leveraging uncertainty and conformal predictions to
|
https://arxiv.org/abs/2505.21339v1
|
construct prediction intervals for next activity prediction to improve interpretability. However, they do not evaluate their approach on open-source, real-world datasets. In [19, 22], Bayesian networks are used to predict the sequence of activities, but Bayesian networks cannot handle large and complex data.

6 Conclusion

In this technical report, we presented an approach for probabilistic suffix prediction that leverages our U-ED-LSTM and MC suffix sampling algorithm, together with an extensive performance evaluation of the proposed model. Our approach captures epistemic uncertainty via MC dropout and aleatoric uncertainty via learned loss attenuation. No other work has yet addressed incorporating epistemic and aleatoric uncertainties into suffix predictions of business processes. Probabilistic suffix prediction can offer enhanced reliability and transparency by generating a distribution over possible future sequences rather than a single deterministic outcome. For instance, instead of predicting a single remaining time or a fixed number of activity loop executions, the model can provide a range of possible values and their associated probabilities.

Future Work. We demonstrated the predictive performance and calibration of our U-ED-LSTM. However, to further improve the performance and calibration of our approach, we will i) experiment further with different hyperparameters, ii) try to improve the log-normal distribution of assumed observation noise for loss attenuation to obtain results such as those in [23], iii) try different methods to measure epistemic uncertainty, such as deep ensembles instead of MC dropout, and iv) explore different NN architectures for sequence prediction, especially for long-range sequences, such as transformers. In the evaluation, we only assessed activity sequence and remaining time predictions. Since our approach can predict all event attributes, evaluating additional attributes could yield further insights into the model's predictive performance.

References

[1] Moloud Abdar, Farhad Pourpanah, Sadiq Hussain, Dana Rezazadegan, Li Liu, Mohammad Ghavamzadeh, Paul W. Fieguth, Xiaochun Cao, Abbas Khosravi, U. Rajendra Acharya, Vladimir Makarenkov, and Saeid Nahavandi. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Inf. Fusion, 76:243–297, 2021. doi: 10.1016/J.INFFUS.2021.05.008.

[2] Christopher M Bishop. Mixture density networks. 1994.

[3] Manuel Camargo, Marlon Dumas, and Oscar González Rojas. Learning accurate LSTM models of business processes. In Business Process Management - BPM, pages 286–302, 2019. doi: 10.1007/978-3-030-26619-6_19.

[4] Paolo Ceravolo, Marco Comuzzi, Jochen De Weerdt, Chiara Di Francescomarino, and Fabrizio Maria Maggi. Predictive process monitoring: concepts, challenges, and future research directions. Process Science, 1(1):2, 2024.

[5] Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. GradNorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In International Conference on Machine Learning - ICML, pages 794–803, 2018.

[6] Joerg Evermann, Jana-Rebecca Rehse, and Peter Fettke. Predicting process behaviour using deep learning. Decis. Support Syst., 100:129–140, 2017. doi: 10.1016/J.DSS.2017.04.003.

[7] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning - ICML, pages 1050–1059, 2016. URL http://proceedings.mlr.press/v48/gal16.html.
[8] Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems - NeurIPS, pages 1019–1027, 2016. URL https://proceedings.neurips.cc/paper/2016/hash/076a0c97d09cf1a0ec3e19c7f2529f2b-Abstract.html.

[9] Björn Rafn Gunnarsson, Seppe vanden Broucke, and Jochen De Weerdt. A direct data aware LSTM neural network architecture for complete remaining trace and runtime prediction. IEEE
|
https://arxiv.org/abs/2505.21339v1
|
Trans. Serv. Comput., 16(4):2330–2342, 2023. doi: 10.1109/TSC.2023.3245726.

[10] Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, 1997. doi: 10.1162/neco.1997.9.8.1735.

[11] Eyke Hüllermeier and Willem Waegeman. Aleatoric and epistemic uncertainty in machine learning: an introduction to concepts and methods. Mach. Learn., 110(3):457–506, 2021. doi: 10.1007/S10994-021-05946-3.

[12] Alex Kendall and Yarin Gal. What uncertainties do we need in Bayesian deep learning for computer vision? In Advances in Neural Information Processing Systems - NeurIPS, pages 5574–5584, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/2650d6089a6d640c5e85b2b88265dc2b-Abstract.html.

[13] István Ketykó, Felix Mannhardt, Marwan Hassani, and Boudewijn F. van Dongen. What averages do not tell: predicting real life processes with sequential deep learning. In ACM/SIGAPP Symposium on Applied Computing - SAC, pages 1128–1131, 2022. doi: 10.1145/3477314.3507179.

[14] Nadja Klein. Distributional regression for data analysis. Annual Review of Statistics and Its Application, 11, 2024. doi: 10.1146/annurev-statistics-040722-053607.

[15] Li Lin, Lijie Wen, and Jianmin Wang. MM-Pred: A deep predictive model for multi-attribute event sequence. In SIAM International Conference on Data Mining - SDM, pages 118–126, 2019. doi: 10.1137/1.9781611975673.14.

[16] Fabrizio Maria Maggi, Chiara Di Francescomarino, Marlon Dumas, and Chiara Ghidini. Predictive monitoring of business processes. In Advanced Information Systems Engineering - CAiSE, pages 457–472. Springer, 2014. doi: 10.1007/978-3-319-07881-6_31.

[17] Nijat Mehdiyev, Maxim Majlatow, and Peter Fettke. Augmenting post-hoc explanations for predictive process monitoring with uncertainty quantification via conformalized monte carlo dropout. Data Knowl. Eng., 156:102402, 2025. doi: 10.1016/J.DATAK.2024.102402.

[18] Vincenzo Pasquadibisceglie, Annalisa Appice, and Donato Malerba. LUPIN: A LLM approach for activity suffix prediction in business process event logs. In International Conference on Process Mining - ICPM, pages 1–8, 2024. doi: 10.1109/ICPM63005.2024.10680620.

[19] Stephen Pauwels and Toon Calders. Bayesian network based predictions of business processes. In Business Process Management Forum - BPM Forum, pages 159–175. Springer, 2020. doi: 10.1007/978-3-030-58638-6_10.

[20] Pietro Portolani, Alessandro Brusaferri, Andrea Ballarino, and Matteo Matteucci. Uncertainty in predictive process monitoring. In Information Processing and Management of Uncertainty in Knowledge-Based Systems - IPMU, pages 547–559, 2022. doi: 10.1007/978-3-031-08974-9_44.

[21] Efrén Rama-Maneiro, Juan Carlos Vidal, Manuel Lama, and Pablo Monteagudo-Lago. Exploiting recurrent graph neural networks for suffix prediction in predictive monitoring. Computing, 106(9):3085–3111, 2024. doi: 10.1007/S00607-024-01315-9.

[22] Simon Rauch, Christian M. M. Frey, Ludwig Zellner, and Thomas Seidl. Process-aware bayesian networks for sequential event log queries. In International Conference on Process Mining - ICPM, pages 161–168, 2024. doi: 10.1109/ICPM63005.2024.10680678.

[23] David Salinas, Valentin Flunkert, Jan Gasthaus, and Tim Januschowski. DeepAR: Probabilistic forecasting with autoregressive recurrent networks. International Journal of Forecasting, 36(3):1181–1191, 2020. doi: 10.1016/j.ijforecast.2019.07.001.

[24] Niek Tax, Ilya Verenich, Marcello La Rosa, and Marlon Dumas.
Predictive business process monitoring with LSTM neural networks. In Advanced Information Systems Engineering - CAiSE, pages 477–492, 2017. doi: 10.1007/978-3-319-59536-8_30.

[25] Farbod Taymouri, Marcello La Rosa, and Sarah M. Erfani. A deep adversarial model for suffix and remaining time prediction of event sequences. In SIAM International Conference on Data Mining - SDM, pages 522–530, 2021. doi: 10.1137/1.9781611976700.59.

[26] Ilya Verenich, Marlon Dumas, Marcello La Rosa, Fabrizio Maria Maggi, and Irene Teinemaa. Survey and cross-benchmark comparison of remaining time prediction methods in business process monitoring.
|
https://arxiv.org/abs/2505.21339v1
|
ACM Trans. Intell. Syst. Technol., 10(4):34:1–34:34, 2019. doi: 10.1145/3331449.

[27] Hans Weytjens and Jochen De Weerdt. Learning uncertainty with artificial neural networks for predictive process monitoring. Appl. Soft Comput., 125:109134, 2022. doi: 10.1016/J.ASOC.2022.109134.

[28] Brecht Wuyts, Seppe K. L. M. vanden Broucke, and Jochen De Weerdt. SuTraN: an encoder-decoder transformer for full-context-aware suffix prediction of business processes. In International Conference on Process Mining - ICPM, pages 17–24, 2024. doi: 10.1109/ICPM63005.2024.10680671.

[29] Lingxue Zhu and Nikolay Laptev. Deep and confident prediction for time series at Uber. In IEEE International Conference on Data Mining Workshops - ICDM, pages 103–110, 2017. doi: 10.1109/ICDMW.2017.19.
|
https://arxiv.org/abs/2505.21339v1
|
arXiv:2505.21344v1 [cs.AI] 27 May 2025

The Multilingual Divide and Its Impact on Global AI Safety

Aidan Peppin*2, Julia Kreutzer*1, Alice Schoenauer Sebag*2, Kelly Marchisio*2, Beyza Ermis1, John Dang1, Samuel Cahyawijaya2, Shivalika Singh1, Seraphina Goldfarb-Tarrant2, Viraat Aryabumi2, Aakanksha2, Wei-Yin Ko2, Ahmet Üstün1, Matthias Gallé2, Marzieh Fadaee1, and Sara Hooker*1

1 Cohere Labs, 2 Cohere

Corresponding authors: {aidanpeppin, juliakreutzer, alice, kelly, sarahooker}@cohere.com

Released as a preprint on May 28, 2025.

Abstract

Despite advances in large language model capabilities in recent years, a large gap remains in their capabilities and safety performance for many languages beyond a relatively small handful of globally dominant languages. This paper provides researchers, policymakers and governance experts with an overview of key challenges to bridging the "language gap" in AI and minimizing safety risks across languages. We provide an analysis of why the language gap in AI exists and grows, and how it creates disparities in global AI safety. We identify barriers to addressing these challenges, and recommend how those working in policy and governance can help address safety concerns associated with the language gap by supporting multilingual dataset creation, transparency, and research.

1 Introduction

The limits of my language means the limits of my world. — Ludwig Wittgenstein

More than 7000 languages are spoken around the world today,¹ but current state-of-the-art large language models (LLMs) cover only a relatively small fraction of them. This "language gap" has far-reaching implications which ultimately leave certain language communities around the globe marginalized. AI models offer limited language support, and biases that reflect Western-centric viewpoints are introduced, undermining other cultural perspectives.

This language gap crucially affects the safety of AI models. Though various efforts have gained traction and momentum around the world to improve the general safety of AI models, a critical challenge remains: how to ensure safety across diverse languages and cultures. This challenge is often widely overlooked or completely absent in efforts to advance AI safety, which primarily focus on English or monolingual settings, leading to potential safety and security flaws for other languages. Part of the problem is the scarcity of reliable datasets for safety evaluation beyond a few languages. Such evaluations are complex and need to reconcile global harms and unique local contexts.

*First authors. ¹ Eberhard, David M., Gary F. Simons, and Charles D. Fennig. (2024) Ethnologue: Languages of the World. Twenty-seventh edition.

[Figure 1: Bridging the Multilingual Divide: We scrutinize the reasons for the language gap in AI, and review and recommend concrete steps to bridging it. We highlight that the language gap must involve safety mitigation across languages, and that open challenges remain.]

Several research groups have set out to reduce the language and safety gap in AI across diverse linguistic and cultural contexts. One such effort is Cohere Labs's Aya project² — a global initiative that has developed and publicly released multilingual language models, instruction datasets, and evaluation datasets expanding language coverage (Üstün et al., 2024; Dang et al., 2024a; Aakanksha et al., 2024a;b; Gureja et al., 2024; Singh et al., 2025; Romanou et al., 2025; Dash et al., 2025).
In the course of this work, many of the challenges and opportunities around expanding the worlds AI serves have become apparent. In
|
https://arxiv.org/abs/2505.21344v1
|
this paper, we articulate how these approaches have addressed language disparity and global safety gaps in AI models. This paper is written for both research and policy experts to help provide an overview of the key challenges that remain in bridging the language gap and minimizing safety risks across languages. We provide an analysis of why the language gap exists (Section 2) and grows (Section 3), how it creates gaps in global AI safety (Section 4), and an overview of our lab's efforts in the context of the Aya initiative (Section 5), along with technical and fundamental lessons we have learned through this work about how to address the language gap (Section 6). Through this primer, we articulate three overarching barriers that must be overcome to effectively and efficiently close the language disparity and global AI safety gaps for everyone:

1. Building high-quality datasets and curating evaluations using human-curated data from fluent speakers is resource-intensive, but critical for reducing the global AI safety gap.

2. Access to compute is uneven throughout the world, and is reinforced by disparities in access to digital tools for developing and using large language models.

3. We must capture not only language but nuances in culture and dialect. Languages are diverse and heterogeneous; while often treated as monoliths, dialects are abundant and regional/cultural nuance must be considered.

In order to overcome these barriers for enhanced global AI safety, we offer the following considerations for policy makers, outlined in the box below.

Recommendations for Policy and Research

1. Support multilingual dataset creation:
   1.1. Incentivize and facilitate the creation of open access evaluation sets, which reflect relevant generative use cases and safety-relevant use cases across modalities, by both translating existing datasets ("language-parallel") and creating localized ones ("language-specific").
   1.2. Fund long-term annotation efforts in endangered languages. This enables human annotators from diverse backgrounds with multilingual and multicultural expertise to engage in the curation of high-quality, inclusive datasets.

2. Support multilingual transparency from model providers:
   2.1. Encourage model providers to articulate the coverage of languages served by each model family, for example by reporting languages supported and performance in each language in technical or evaluation reports.
   2.2. Conduct analyses of language coverage across safety research, for example by assessing the presence or absence of safety mitigations across languages in published reports.

3. Support multilingual research and development:
   3.1. Support multilingual and non-English research that aims to close the language gap through funding and other programs.
   3.2. Enable access to (more) compute for multilingual safety research, especially for projects and in regions where it is disproportionately inaccessible.

² https://cohere.com/research/aya

2 State of the Current Language Gap in AI

Section Findings

➤ There is a significant language gap in AI development, where the majority of language models are optimized for English and a few other high-resource languages, while many other languages worldwide remain underrepresented.

➤ This gap is due to resource disparities, data availability, global inequities, and socio-economic factors, which lead to higher costs and limited access for speakers of low-resource languages.
➤ English-centric development and research introduces cultural biases in model outputs, and potential safety risks, exacerbating inequalities and threatening linguistic diversity.

➤ Addressing these issues is crucial for ensuring equitable access to AI
|
https://arxiv.org/abs/2505.21344v1
|
technologies and preserving cultural representation in the digital age.

Large language models are finding beneficial applications in a range of contexts across societies and economies around the world. However, the vast majority of language models are currently optimized for a small handful of languages, and the English language and North American socio-cultural preferences are dominant across their design, outputs, and behavior (Yong et al., 2023b; Naous et al., 2024; Cahyawijaya et al., 2024).

There are several efforts around the world to develop the multilingual capabilities of AI language models, including Cohere Labs's Aya models (Üstün et al., 2024; Aryabumi et al., 2024b; Dang et al., 2024b; Dash et al., 2025) and datasets (Singh et al., 2024) — a family of open source, massively multilingual language models that cover 101 languages — as well as Cohere's Command A (Cohere et al., 2025) and the Llama (Dubey et al., 2024), Qwen (Qwen Team, 2024), Gemma (Gemini Team et al., 2024) and Mistral families (Mistral Team, 2024a;b). Despite concerted research effort, the language gap remains pervasive and models still underperform on languages outside of English (Li et al., 2024). This language gap in the development, capabilities, and applications of AI language models is the result of several factors, which we discuss below.

Resources for AI language model development are biased towards English, and many non-English languages are "low-resourced" (Ranathunga & de Silva, 2022; Joshi et al., 2020). Recent breakthroughs in language models largely depend on the availability of high-quality text-based datasets (Lee et al., 2023; Touvron et al., 2023a), but the most widely used datasets in natural language processing currently represent only a handful of data-rich languages. Datasets used for instruction fine-tuning — a key step in improving language model capability — are almost entirely focused on English (Muennighoff et al., 2023a; Longpre et al., 2024; Singh et al., 2024). Of the 7000 languages spoken around the world today, easily available data covers only around 1500 (Bapna et al., 2022), and acquiring data of high enough quality for use in training language models is even more challenging, especially for low-resource languages (Adda et al., 2016; Adilazuarda et al., 2022; Winata et al., 2023; Kabra et al., 2023; Khan et al., 2024; Purwarianti et al., 2025). Additionally, the availability of resources for a language is not proportionate to its number of speakers: which languages are favored is often a symptom of historical technological use and access to resources (Bird, 2022; Ranathunga & de Silva, 2022; Üstün et al., 2024). This means that the language gap in AI is already wide, and affects a large proportion of the world's population.

[Figure 2: The language gap is clearly visible in the availability of textual datasets across two popular sources: HuggingFace and Wikipedia. Circles represent the number of HuggingFace datasets including text per size tag and mentioning a given language. Color indicates the number of Wikipedia pages in the same language, for the six most frequent languages and a diverse selection of lower-resource languages (source: Ranathunga & de Silva (2022)).]

Figure 2 illustrates the gap between languages
|
https://arxiv.org/abs/2505.21344v1
|
in terms of available resources, using the example of textual datasets hosted on HuggingFace (https://huggingface.co/datasets, accessed on March 26, 2025) and the number of Wikipedia pages in each language (stats from Ranathunga & de Silva (2022)), for a set of high- and lower-resource languages. These represent popular sources of textual data for training current LLMs, and highlight the disparity in resources between languages.

Access to resources for model development and evaluation. A lot of focus has been placed on the availability of data. However, low-resourcedness goes beyond mere data availability and reflects systemic issues in society (Martinus & Abbott, 2019; Hooker, 2024). The co-occurrence of compute constraints and low-resource languages has been called the low-resource double-bind, and amplifies challenges for progress (Ahia et al., 2021). This is particularly true given how compute-heavy recent breakthroughs have been (Treviso et al., 2023). There are global inequities in access to the compute resources required for language model research and development, largely due to the cost and availability of hardware and infrastructure (OECD, 2023). In some regions, such as Africa (Ojo et al., 2025) and Southeast Asia (Aji et al., 2022; Lovenia et al., 2024), even the less costly process of evaluating existing LLMs poses a huge resource challenge, let alone the far more expensive goal of training LLMs for regional languages from the ground up.

Disparity in participation of researchers. "Low-resourcedness" goes beyond the mere availability of data, and is rooted in societal structures and "socio-econo-linguistic" factors (∀ et al., 2020a; Grützner-Zahn et al., 2024; Ahia et al., 2021; Aji et al., 2022; Bird, 2022; OECD, 2023; Singh et al., 2024; Romanou et al., 2025; Salazar et al., 2025). Many languages are less well studied or privileged globally because there are, for example, fewer economic incentives, little institutional support, restrictions due to present or past political oppression, high burdens for participation, or few entry paths into research. As a result, the availability of robust datasets required for including these languages in machine learning research and computer science is scarce (Magueresse et al., 2020; Nicholas & Bhatia, 2023; Ranathunga & de Silva, 2022), because the vast majority of the people, organizations, and teams working to develop these datasets originate from a few countries (Longpre et al., 2023; Maslej et al., 2024; Lovenia et al., 2024).

In the Aya 101 project (Singh et al., 2024), the organizers documented the challenges of collaborating across the globe to expand language coverage. For example, Zoom meetings were cut short for some volunteers due to power outages in their countries or lack of access to a stable internet connection. Burmese, a language spoken in Myanmar, started out strong in the project with a group of 35 motivated volunteers, but saw a sudden pause in contributions as civil war broke out in the country, resulting in the withdrawal of the volunteers from the project (Petty, 2023). The Language Ambassador for Armenian also had to drop out of the project because of a conflict in that country (Reuters, 2023). In some countries, postal services only functioned a few days per month
|
https://arxiv.org/abs/2505.21344v1
|
because of ongoing warfare, creating challenges for organizers when mailing out Aya gifts to thank committed volunteers. Ultimately, organizers were not able to send gifts to thank researchers who participated from Somalia, Yemen, and Palestine; for Somalia and Yemen, Canada Post, DHL, and FedEx were all unable to support shipments. These geo-political realities shaped both the Aya initiative to expand language coverage and the progress of the project.

Data quality limitations. A key hurdle is not just the volume of data available for a language, but the quality of the data. Models trained on better data do not require as much compute (Hooker, 2024). A large body of work shows that efforts to better curate training corpora, including de-duplication (Taylor et al., 2022; Kocetkov et al., 2022), data pruning (Marion et al., 2023; Ankner et al., 2025; Sorscher et al., 2023) or data prioritization (Boubdir et al., 2023; Thakkar et al., 2023), can compensate for larger models. This means there are many benefits to gains in data quality. However, the current state of progress is challenging for low-resource languages. Where datasets are available for low-resource languages, quality is often insufficient for use in language model research and development (Kreutzer et al., 2022; Cahyawijaya et al., 2023b). Recent studies show that pruning training datasets using different metrics or heuristics to remove low-quality samples improves model performance (Marion et al., 2023; Ankner et al., 2025), but pruning techniques might not generalize equally to all languages and domains (Chimoto et al., 2024).

Limited transparency around language coverage. It is not a standard practice for AI model developers to list the languages supported by an LLM. What counts as a supported language is a nuanced question (Hulagadri et al., 2025): many "monolingual" datasets sourced from the web include other languages (Blevins & Zettlemoyer, 2022; Briakou et al., 2023), so by default, most models include training data for many languages, which might equip them for cross-lingual generalization (Blevins & Zettlemoyer, 2022). However, it is more relevant to understand how much dedicated effort various languages have received during model development and evaluation, and to see results from evaluations of model performance and safety across languages. Sharing these details enables more reliable performance evaluations and fairer cross-model comparisons, contributing to research efforts that aim to overcome the language gap. It also allows governments to run language-specific evaluations on models that disclose support for that language. However, consistent disclosure practices remain lacking amongst model providers. For example, Mistral only claims to support a handful of languages, yet in practice it is heavily relied upon by multilingual users relative to explicitly multilingual models like mT0 (Muennighoff et al., 2023b) and BLOOMZ (Lai et al., 2023).

[Figure 3: ChatGPT requires a greater number of tokens to encode the same contents across language scripts that are less well resourced (FLORES datasets (Goyal et al., 2021), data from Ahia et al. (2023)). The number in brackets indicates the count of languages encoded in each script.]

3 Why Multilingual Matters: Consequences of the Language Gap

Section Findings

➤ The language gap is perpetuated in a vicious cycle where high-resource languages benefit from
|
https://arxiv.org/abs/2505.21344v1
|
synthetic data and advanced evaluation methods, while development in low-resource languages is hindered by limited data and unreliable assessments, leading to a widening divide in model capabilities and access.

➤ This gap results in higher costs and poorer performance for non-English languages, leaving many communities behind as language models become integral to economies and societies, and exacerbating cultural biases and inequities.

➤ Global safety initiatives neglect language diversity, posing challenges for ensuring AI safety across all languages.

The language gap in a vicious cycle. The language gap risks widening and deepening if not addressed. For instance, the increased use of synthetic data, which is generated by language models and commonly used for training and tuning other models (Anaby-Tavor et al., 2019; Odumakinde et al., 2024), favors those languages that already have highly capable models. Such synthetic data will be less available, and of lower quality, for lower-resource languages, which risks deepening the existing gap. In addition, the generative capabilities of LLMs are commonly evaluated with other LLMs as judges (Zheng et al., 2023). For lower-resource languages, these judges are likely less reliable due to a lack of data and evaluations (Gureja et al., 2024), which, as a consequence, leads to less reliable measurement of advances for each language. This divide is larger in multimodal domains, where data often needs to exist across several modalities, such as audio, vision and language (Dash et al., 2025).

Widening cost in access to technology. The language gap results in higher costs of using language model-based technologies for some non-English languages, as they may require more tokens and incur higher latency for generations (Ahia et al., 2023; Ji et al., 2023b). Figure 3 illustrates this effect: for non-Latin scripts, many more tokens are needed to encode the same text for ChatGPT, thereby incurring a higher processing cost (a sketch reproducing this measurement follows below). Speakers of low-resource languages often do not have the resources to improve NLP technology for their language due to limited access to compute, data, and opportunity (Ahia et al., 2021; OECD, 2023; ∀ et al., 2020a).

Many language speakers and communities risk being left behind. The obvious consequence of the language gap is that, as language models become increasingly integral across our economies and societies, the people and communities whose languages are not covered will be left behind. An extensive body of research demonstrates how poorly existing language models perform for low-resource languages in comparison to high-resource languages (e.g. Adelani et al., 2024; Üstün et al., 2024; Singh et al., 2024; 2025; Romanou et al., 2025; Arora et al., 2024), and as language models become more embedded in the provision of services and products, this performance gap could worsen existing inequities across global communities (e.g. Laurito et al., 2024).

Diversity across cultures, societies, and communities could be reduced. As machine learning models' outputs can only reflect the world based on the data on which they have been trained and given access, the majority of LLMs reflect an Anglo-centric and predominantly North American viewpoint.

[Figure 4: Results from Shen et al. (2024): Lower-resource languages have a higher rate of harmful and irrelevant generations by GPT-4 than higher-resource languages (Zou et al., 2023).]
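The token-count disparity behind the cost gap discussed above (Figure 3) is easy to reproduce. A minimal sketch using the open-source tiktoken library with its cl100k_base encoding (used by several OpenAI chat models); the sample sentences are our own rough translations, not FLORES data:

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# Roughly parallel sentences (illustrative translations, not FLORES).
samples = {
    "English (Latin)": "The weather is nice today.",
    "Russian (Cyrillic)": "Сегодня хорошая погода.",
    "Hindi (Devanagari)": "आज मौसम अच्छा है।",
    "Amharic (Ge'ez)": "ዛሬ አየሩ ጥሩ ነው።",
}

# Scripts that are poorly represented in the tokenizer's training data
# tend to be split into many more (often byte-level) tokens.
for name, text in samples.items():
    print(f"{name:>20}: {len(enc.encode(text))} tokens")
```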
|
https://arxiv.org/abs/2505.21344v1
|
This lack of linguistic diversity means that the abstract "concept space" that underpins model functionality is more oriented towards English than to other languages (Cahyawijaya et al., 2023b; Yong et al., 2023a; Wendler et al., 2024; Aakanksha et al., 2024a;b), and introduces biases against languages and cultural perspectives seen rarely in model training (Schwartz et al., 2022; Kunchukuttan et al., 2021; Kotek et al., 2023; Khandelwal et al., 2023; Naous et al., 2024). Many existing language models fail to account for social factors, such as speaker perspective or sociocultural norms, and this problem is amplified for low-resource languages (Hovy & Yang, 2021). This means that users may receive responses from LLMs that do not reflect their cultural experience or social history.

4 Challenges of AI Safety in a Global World

Section Findings

➤ Addressing multilingual safety in AI is challenging due to the focus on English and Western-centric datasets, leading to a lack of reliable safety evaluations and mitigation strategies for most languages.

➤ This gap results in models producing harmful or biased outputs in non-English languages, disproportionately affecting non-English speakers and creating security risks.

➤ Both intentional exploitation of language-related vulnerabilities and unintentional exposure to harm due to insufficient safeguards pose significant concerns, highlighting the urgent need for inclusive safety measures across all languages.

Overall, addressing safety and performance issues in a multilingual context involves navigating complex challenges. There are many ongoing commitments to address the safety risks posed by AI models, many of them high-profile, international efforts. Examples include the Seoul Frontier AI Safety Commitments, which were signed by 16 companies who collectively operate in almost every country and territory around the world;⁵ the inaugural meeting of the international network of AI Safety Institutes, representing 11 countries and regions;⁶ the enshrinement of the EU's AI Act and the process to draft the Act's Code of Practice for General Purpose AI model providers, focused on models that pose 'systemic risk';⁷ efforts led by Singapore to build capacity for AI safety testing across South East Asia;⁸ and many more. However, ensuring safety across languages, or representation of multilingual and global context, is not explicitly or prominently mentioned in any of these efforts. This is a huge oversight, given that a lack of care for multilingual settings undermines access, performance and safety for global users.

Lack of multilingual safeguards undermines safety for all users. A dearth of multilingual safety testing and mitigation means that language models can produce harmful outputs when prompted in languages for which they are not optimized or safety-tested (Anwar et al., 2024), creating a sharp performance cliff which disproportionately amplifies risk for non-English speakers (Khandelwal et al., 2023; Yong et al., 2023a; Üstün et al., 2024). For instance, models can show stereotypical gender biases when translating into Bengali and Turkish (Ghosh & Caliskan, 2023), and may exhibit unsafe behavior when prompted in low-resource languages (Yong et al., 2023a). There can also be critical security and safety flaws for all users of languages outside of English, where multilingual prompts can be used to subvert safety guardrails (Yong et al., 2023a; Deng et al., 2024).

Efforts on safety overly focused on English. Successful mitigation of multilingual harms involves reconciling differing global and local preferences.
To date, efforts to ensure safety alignment are primarily focused on homogeneous monolingual settings
|
https://arxiv.org/abs/2505.21344v1
|
— predominantly English — or overfit to types of harm common in Western-centric datasets (Sambasivan et al., 2021; Shen et al., 2024). Approaches to remedying the generation of violent, biased, false, or toxic content (Weidinger et al., 2021) are largely oriented towards English or monolingual settings, and there is a lack of reliable datasets for safety evaluation outside a small fraction of languages (Gehman et al., 2020; Talat et al., 2022; Pozzobon et al., 2024). This includes the vast majority of work on language model alignment (Stiennon et al., 2020; Christiano et al., 2017; Dai et al., 2024; Bai et al., 2022; Tunstall et al., 2024), a core component of improving model safety.

Many multilingual safety harms do not require active intent to subvert guardrails. Harms arising from gaps in multilingual safety might be intentional — e.g. malicious actors find and exploit language gap-related "backdoors" to generate harmful output. Or harms may be unintentional — e.g. users from underserved language communities being unknowingly exposed to harm due to the lack of effective safeguards for their language (Shen et al., 2024; Deng et al., 2024; Yong et al., 2023a). Such unintentional harms are elucidated by Figure 4, which summarizes the findings of Shen et al. (2024): GPT-4 tends to produce more harmful content in low-resource languages, while also following instructions less faithfully than in high-resource languages.

⁵ Frontier AI Safety Commitments, AI Seoul Summit (2024), https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024
⁶ US Department of Commerce (2024), International Network of AI Safety Institutes at Inaugural Convening, https://www.commerce.gov/news/fact-sheets/2024/11/fact-sheet-us-department-commerce-us-department-state-launch-international
⁷ European Commission (2024), General-Purpose AI Code of Practice, https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice
⁸ IMDA (2024), Singapore AI Safety Red Teaming Challenge, https://www.imda.gov.sg/activities/activities-catalogue/singapore-ai-safety-red-teaming-challenge

5 Closing the AI Language Gap & Extending Safety Guardrails

Section Findings

➤ Key lessons from Cohere Labs' Aya initiative, a global collaborative effort towards multilingual AI, include the importance of combining human-curated and synthetically generated data to increase volume and language coverage.

➤ Building comprehensive evaluation sets alongside models is crucial, especially for open-ended use. Cross-institutional collaboration, involving local communities and multidisciplinary experts, is essential for preserving cultural and linguistic nuances.

➤ Technical innovations such as multilingual preference training, model merging, and safety context distillation have improved model performance and safety across languages, reducing harmful generations while maintaining output quality.

➤ Addressing harmful content requires continuous adaptation as language modeling and language model use evolve. Inclusive data collection, robust evaluation, and collaborative innovation are key components in advancing multilingual AI capabilities and safety.

Despite the challenges, there are clear levers of progress for reducing the multilingual divide and safety disparities. To center the discussion, we pull upon the concrete lessons we have learned as a lab in our efforts to extend coverage of languages used in AI.
5.1 Background: Cohere Labs's Aya Initiative

Aya⁹ is a multi-year initiative leveraging best practices from open-source and crowd-sourced science projects (Beck et al., 2022; Lenart-Gansiniec et al., 2023; Franzoni & Sauermann, 2014; Muennighoff et al., 2023a), with the goal of increasing access to state-of-the-art AI models, regardless of language. To our knowledge, Aya is the largest participatory machine learning research
|
https://arxiv.org/abs/2505.21344v1
|
initiative to date, involving 3000 independent collaborators across 119 countries. The inaugural Aya 101 release doubled the number of languages covered by existing AI models and released the largest ever collection of multilingual instruction fine-tuning data, with 513 million prompts and completions covering 114 languages (Singh et al., 2024; Üstün et al., 2024). The Aya dataset includes over 200,000 rare, human-curated annotations in 65 languages, providing researchers around the world with high-quality data for instruction fine-tuning. Following Aya 101, we released state-of-the-art models that outperform proprietary options for a subset of languages — Aya Expanse (Dang et al., 2024b) is a family of multilingual models covering 23 languages that combines research breakthroughs from Cohere and Cohere Labs: strong multilingual base models,¹⁰ multilingual instruction-tuning, synthetic data generation (Aryabumi et al., 2024a), multilingual arbitrage (Odumakinde et al., 2024), multilingual preference training (Dang et al., 2024a), and model merging (Aakanksha et al., 2024b). Aya Vision (Dash et al., 2025), a family of vision-language models covering 23 languages based on Aya Expanse, expands multimodal capabilities to languages spoken by over half the world's population. It incorporates robust multilingual multimodal evaluation, multilingual multimodal synthetic annotation, and merging to outperform models more than 2x its size. The Aya models and dataset have been released publicly,¹¹ and are intended to contribute to closing the language gap by providing resources for researchers and developers to further advance multilingual capabilities and safety. Through these model releases, we have learned a considerable amount about the challenges in mitigating the curse of multilinguality (Conneau et al., 2020), and tractable directions to improve coverage. We share more context on these learnings below.

⁹ https://cohere.com/research/aya
¹⁰ https://cohere.com/blog/command-series-0824

5.2 Lesson #1: Data availability is one of the most potent levers of progress.

Different sources of data can be beneficial for improving coverage. One of the most formidable challenges is the quality and quantity of available data. We have found it is better to increase coverage by including human, synthetic, and translated data rather than solely prioritizing human annotations. This contradicts some views within the field, where translation is thought of as insufficiently high quality. While we also observe translationese in practice, the volume added to rare languages outweighs trade-offs in quality. Aya 101 brought generative AI to languages previously unseen, in major part by leveraging many sources of data, including both a human-curated dataset and multilingual instructions generated synthetically through templates or machine translation. Combining and carefully weighing multiple sources of varying quantity, quality, and language coverage increased data volume. Combining human-curated datasets and automatically translated datasets facilitated wider evaluation, as relying solely on human annotation can be expensive.

5.3 Lesson #2: Build evaluation sets alongside models.

To motivate and quantify progress on a given capability, it is crucial to have trusted benchmarks and evaluation suites. This is especially critical for multilingual research, where there are many languages with no evaluation set available. Accordingly, there is a need to build evaluation sets.
5.3 Lesson #2: Build evaluation sets alongside models.

To motivate and quantify progress on a given capability, it is crucial to have trusted benchmarks and evaluation suites. This is especially critical for multilingual research, where there are many languages with no evaluation set available. Accordingly, there is a need to build evaluation sets.

Language-parallel evaluation sets have benefits yet should be used with an understanding of their limitations. Global-MMLU (Singh et al., 2025)
is a language-parallel evaluation set: the same questions are asked across languages. This allows for control of question difficulty and topic, and results can be interpreted apples-to-apples across languages. However, it means that the quality of the benchmark rests on the quality of translation by human annotators (in the case of Global-MMLU) or automatic translation tools. Translation can introduce erroneous artifacts, and nuances in the original language of the questions might not have direct equivalents in other languages (Vanmassenhove et al., 2021; Hartung et al., 2023; Savoldi et al., 2021; Ji et al., 2023a; Chen et al., 2024; Choenni et al., 2024). This is particularly true for translated prompts used in safety evaluations, where they can lose their harmful intent or become meaningless through translation errors (Agrawal et al., 2024). In general, we recommend that heavily relied-upon evaluation sets not be automatically translated alone, but also undergo human post-edits. While this is more expensive, it ensures evaluations do not exhibit translationese. We invest in these human post-edits for both Global-MMLU (Singh et al., 2025) and aya-human-annotated (Singh et al., 2024).

Ensure evaluation sets capture local nuances. While translation enables parallel comparisons across languages, relying on translating an evaluation from a single language can fail to adequately capture regional nuances and knowledge. Cultural biases in multitask datasets limit their utility as global benchmarks. Biases arise not only from differences in language but also from the cultural knowledge required to interpret and understand questions effectively. We analyzed the Massive Multitask Language Understanding (MMLU) benchmark (Hendrycks et al., 2020), a commonly used benchmark for assessing LLM capability, and found significant Western-centric biases (Singh et al., 2025); 28% of questions require culture-specific knowledge, while 84.9% of the geography-related subset focuses exclusively on North American or European regions (see Figure 5). Our findings underscore how existing benchmarks prioritize Western concepts, distorting evaluations of multilingual models. In response, we developed Global-MMLU (G-MMLU), an enhanced multilingual test set covering 42 languages which annotates both global and locally sensitive questions (Singh et al., 2025); a minimal sketch of how such annotations support per-language analysis appears at the end of this lesson.

Another approach is to complement parallel evaluations with in-language evaluation sets that capture concepts specific to a region. An example of a complementary in-language evaluation set is INCLUDE, which focuses on capturing regional and cultural knowledge across 44 languages (Romanou et al., 2025). While the exams are not directly comparable, since each is from a different region and covers different questions, this benchmark provides context on how model performance reflects local nuance and knowledge.

Furthermore, it is critical that safety evaluations do not just evaluate for global concepts of safety, but account for local context. To construct the Aya Red-teaming dataset (Aakanksha et al., 2024a), we worked with compensated annotators with native language skills in 8 languages (English, Hindi, French, Spanish, Russian, Arabic, Serbian, Filipino) to craft prompts around a list of harmful categories, provide corresponding English translations, identify categories of harm, and label whether the harm is “global” (understood and recognized as
harmful worldwide) or “local” (harm is tied to specific cultural or historical contexts).

Evaluations should reflect relevant generative use cases across modalities. Language models have historically been evaluated on discriminative tasks, in which models have to answer multiple-choice questions (such as MMLU (Hendrycks et al., 2020)). As model capabilities have improved, models have started to be used and evaluated for generative tasks (e.g. creative writing, translation, summarization, coding, mathematical reasoning) (Tamkin et al., 2024). In the latter case, models are asked to generate diverse and longer responses: contrast answering “tell me if these two sentences are different” with “write me a story about a princess in a tower.” In fact, models that are best at discriminative tasks are not usually the ones that humans prefer to interact with; this tension has been observed in multiple works (Üstün et al., 2024; Muennighoff et al., 2023a). Extending this to multiple modalities, current multilingual and multimodal benchmarks (Liu et al., 2021; Pfeiffer et al., 2022; Romero et al., 2025; Tang et al., 2024; Yue et al., 2024; Lovenia et al., 2024) lack critical evaluation domains such as open-ended generations based on multimodal input.

Figure 5: Of examples in MMLU requiring cultural or regionally-specific knowledge to answer correctly, the majority are geographically tied to North America and dominated by Western culture (from Singh et al. (2025)).

One of our core recommendations is that evaluations should always include some open-ended tasks as well as more traditional academic classification tasks. For critical areas such as multilingual and multimodal capabilities, there are limited open-ended evaluations. To help address this gap, together with the Aya Vision models (Dash et al., 2025), we also released Aya Vision Bench (https://huggingface.co/datasets/CohereForAI/AyaVisionBench), constructed for evaluating Vision-Language Model (VLM) performance on real-world applications from distinct task categories and covering 23 languages. In contrast to discriminative benchmarks, this benchmark enables evaluation of VLMs in a setting that is more aligned with human interaction in the wild.
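As referenced earlier in this lesson, here is a minimal sketch of how per-question annotations on a language-parallel benchmark support apples-to-apples analysis across languages. The record fields and toy results are hypothetical, loosely modeled on the global/locally-sensitive distinction in Global-MMLU, not its actual schema.

```python
from collections import defaultdict

# Hypothetical per-question results on a language-parallel benchmark.
# Each record: question id, language, whether the annotation marks the
# question culturally sensitive, and whether the model answered correctly.
results = [
    {"qid": 1, "lang": "en", "culturally_sensitive": False, "correct": True},
    {"qid": 1, "lang": "sw", "culturally_sensitive": False, "correct": False},
    {"qid": 2, "lang": "en", "culturally_sensitive": True,  "correct": True},
    {"qid": 2, "lang": "sw", "culturally_sensitive": True,  "correct": False},
]

def accuracy_by_language(records):
    """Per-language accuracy, split by cultural-sensitivity annotation.

    Because every language answers the same questions, differences
    between languages can be read apples-to-apples, and the sensitive
    subset isolates where cultural knowledge (not language) dominates.
    """
    buckets = defaultdict(lambda: [0, 0])  # key -> [correct, total]
    for r in records:
        key = (r["lang"], r["culturally_sensitive"])
        buckets[key][0] += int(r["correct"])
        buckets[key][1] += 1
    return {key: correct / total for key, (correct, total) in buckets.items()}

for (lang, sensitive), acc in sorted(accuracy_by_language(results).items()):
    subset = "culturally sensitive" if sensitive else "culturally agnostic"
    print(f"{lang} / {subset}: {acc:.2f}")
```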
5.4 Lesson #3: Collaborate cross-institutionally.

Languages are not monolithic: they contain dialect, regional, and cultural nuances. Many languages are spoken across multiple regions of the world, resulting in cultural or regional dialects. Languages in existing multilingual datasets, including our Aya dataset, have limited representation of regional nuance, as often only a few human annotators are responsible for annotating the majority of any one language dataset (Singh et al., 2024). This might mean that data for a particular language is annotated in a way that represents the perspective of a particular contributor or cultural viewpoint. For example, annotations in French might center on historical and cultural references of France, but neglect other French-speaking communities in Québec or Senegal (Vigouroux, 2013).

Designing and delivering high-quality, diverse, and locally relevant datasets demands substantial resources: the cultural and linguistic knowledge itself, as well as the scaffolding to support it, namely knowing how to engage communities, which data to acquire, and which infrastructure to build. Cross-institutional collaborations are crucial to join such diverse forces and preserve local contexts. There have recently been multiple successful open science collaborations. Examples include Masakhane, a grassroots organization that has been working to strengthen natural language processing research in African languages since 2020 (∀ et al., 2020b), and NusaCrowd, a “collaborative initiative to collect and unify existing resources for Indonesian languages” (Cahyawijaya et al., 2023a), with connections to a collaboration of South-East Asian researchers (Lovenia et al., 2024;
Cahyawijaya et al., 2025). Aya 101 was organized as a global open science project dedicated to collecting high-quality, human-annotated instruction-style data and building a model to serve 101 languages (see the Cohere Labs Aya Initiative overview). The Aya initiative adopted a decentralized approach, empowering contributors, regardless of academic or professional background, to lead as Language Ambassadors. This collaborative effort prioritized the preservation and integration of underrepresented languages, such as Malagasy and Telugu’s Sathakam poetry, setting a new standard for inclusive AI development.

These and many other ongoing efforts around the world, many of them grassroots community initiatives, are working to broaden the capabilities of language models across a wider range of languages. Successful efforts showcase the importance of (1) planning around local community involvement, (2) involving multidisciplinary experts, ranging from community engagement to NLP, and (3) delivering open-source assets that can be shared and re-used widely. Many governments and public bodies are creating initiatives to address the language gap in AI, such as the European Commission’s “Common European Language Data Space” or the South African Government’s digital centre to promote Indigenous languages (Government of South Africa, 2019). Given the challenges associated with building localized and high-quality assets in low-resource languages, more incentives are needed to kickstart and, importantly, support cross-institutional collaborations over time to ensure sustainable community building and asset delivery.

5.5 Lesson #4: Focus on improving multilingual performance.

Major gains in multilingual language processing were achieved throughout the Aya Initiative due to technical breakthroughs. Supporting technical innovation, including multilingual learning efficiency, is critical to bringing AI to the world. We detail some examples below.

Multilingual Preference Training. Preference optimization techniques have become a standard final stage for training state-of-the-art LLMs, providing models with human or AI feedback on their outputs so they can learn to mimic high-quality output. To date, the vast majority of preference optimization work has focused on globally dominant languages like English and Chinese. Recent work has brought more focus to the multilingual setting (Dang et al., 2024a; Aakanksha et al., 2024a); however, this requires investment both in the type of feedback collected and in differing optimization protocols to make sure the models are aligned with global and local nuances.

Model Merging. Model merging combines the strengths of different specialized models to create a more capable and balanced system, particularly for handling multiple languages. We explored merging specialized models in a diverse multi-task setting, combining safety and general-purpose tasks within a multilingual context (Aakanksha et al., 2024b; Cohere et al., 2025). Our findings illustrate an important take-away for policymakers: merging can help build stronger and safer multilingual systems, offering clear advantages for handling complex tasks in diverse languages.
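As a concrete illustration, below is a minimal sketch of the simplest possible merge: linear averaging of parameter tensors from checkpoints that share an architecture. The merging recipes studied in the cited works are more sophisticated; this sketch (using PyTorch, with toy two-parameter “models”) only conveys the basic mechanism.

```python
import torch

def merge_state_dicts(state_dicts, weights):
    """Linearly average parameter tensors from several checkpoints.

    A deliberately simple stand-in for the merging recipes studied in the
    cited works; assumes all checkpoints share the same architecture and
    parameter names.
    """
    assert abs(sum(weights) - 1.0) < 1e-6, "weights should sum to 1"
    merged = {}
    for name in state_dicts[0]:
        # Weighted sum of the corresponding tensor from each checkpoint.
        merged[name] = sum(w * sd[name].float() for w, sd in zip(weights, state_dicts))
    return merged

# Toy example: a "safety-tuned" and a "general-purpose" checkpoint.
safety = {"linear.weight": torch.tensor([[1.0, 0.0]]), "linear.bias": torch.tensor([0.5])}
general = {"linear.weight": torch.tensor([[0.0, 1.0]]), "linear.bias": torch.tensor([-0.5])}
print(merge_state_dicts([safety, general], weights=[0.5, 0.5]))
```

Because merging only averages existing checkpoints, it avoids any additional gradient computation, which is why it is so much cheaper than finetuning or continued training.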
This is an important achievement compared to earlier works, where the assumption was that safety improvements would always incur a cost and thereby be less attractive. Merging also has the benefit that it is a far cheaper optimization step than alternatives like finetuning or continued training.

Figure 6: Human ratings of harmfulness in model generations, before (Aya) and after safety mitigation (Aya Safe) (Üstün et al., 2024). Safety context distillation drastically reduces the ratio of harmful generations for harmful prompts across languages.

Safety Context Distillation. A core safety guardrail for language models is the ability to refuse to respond to potentially harmful prompts. For example, when a model is prompted to produce hate speech, it will refuse to do so. To develop the Aya 101 model and ensure its ability to refuse harmful prompts across different languages, we used ‘safety context distillation’ (Askell et al., 2021; Ganguli et al., 2022; Touvron et al., 2023b; Bianchi et al., 2024) to teach the model in which contexts refusals are appropriate (Üstün et al., 2024). The core idea is to teach a model to generate safe responses for harmful prompts as demonstrated by a teacher. We found this step reduced harmful generations from adversarial prompts by 78–89% as judged by human experts, as illustrated in Figure 6; it is a relatively straightforward protocol that yields large immediate benefits.
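Below is a schematic sketch of the distillation loop just described: a teacher, conditioned on a safety preamble, demonstrates refusals; the student is then fine-tuned on the resulting prompt-response pairs without the preamble. The preamble wording and the `teacher_generate`/`finetune` hooks are hypothetical placeholders, not the actual Aya recipe.

```python
# Schematic of safety context distillation. The preamble is only shown to
# the teacher at generation time; the student never sees it, so the refusal
# behavior is distilled into the student's weights.

SAFETY_PREAMBLE = (
    "You are a helpful assistant. If a request is harmful, refuse politely "
    "and explain why, in the language of the request."
)

def build_distillation_set(harmful_prompts, teacher_generate):
    """Collect (harmful prompt, safe teacher response) training pairs.

    `teacher_generate` is a placeholder for any callable that maps a full
    prompt string to a model completion.
    """
    pairs = []
    for prompt in harmful_prompts:
        safe_response = teacher_generate(f"{SAFETY_PREAMBLE}\n\nUser: {prompt}")
        pairs.append({"prompt": prompt, "completion": safe_response})
    return pairs

# Usage sketch (both hooks are hypothetical):
# pairs = build_distillation_set(red_team_prompts, teacher_generate=my_teacher)
# finetune(student_model, pairs)  # standard instruction fine-tuning step
```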
5.6 Lesson #5: Tackle harmful content as it evolves.

Languages evolve naturally over time (Frermann & Lapata, 2016; Jaidka et al., 2018; Horn, 2021). Considerable effort has been dedicated to mitigating toxicity, the generation of offensive or harmful text content, but existing methods often require drastic modifications to model parameters or the use of computationally intensive methods. This means that keeping toxicity safeguards up-to-date as language evolves is onerous. For example, work on continual learning allows for state-of-the-art toxicity mitigation while the distribution is changing (Pozzobon et al., 2023). Building on this, toxicity mitigation has to expand to techniques beyond traditional English-centric approaches (Pozzobon et al., 2024). Recent work has expanded the languages covered, while establishing some of the complexities of multilingual toxicity mitigation (Pozzobon et al., 2024). Policymakers should ask researchers what they are doing to ensure their models are up-to-date and evolve alongside languages and cultural references.

5.7 Lesson #6: Access to technology matters and is as important as performance.

Expanding the languages covered by AI language models will rely on the input of language speakers around the world. Fortunately, the global availability of internet-connected devices means that it is possible to connect with, engage, and collaborate with people across continents and time zones in real time. This was a key enabler for our Aya project, as it meant we could use online chat platforms to coordinate input across our global community. Unfortunately, the availability of devices and internet access is not equitable across the world (Avle et al., 2020). Desktop and laptop computers with wired, high-speed internet are commonplace across households in more economically developed nations, but in many other parts of the world, particularly the Global South, mobile devices and cellular or satellite internet are more common. In our Aya project, approximately 54% of users accessed our data collection platform via desktop browsers while 46% used mobile browsers (Singh et al., 2024). To enable participation of a wide range of language speakers, language model and dataset development requires the creation of tools that are accessible across different devices, operating systems, and internet connectivities. We have also spent considerable time making our models available in more accessible ways, such as releasing models at a lower, more efficient parameter count of 8 billion parameters (which fits on a single GPU), or making models available via WhatsApp, given this is often the most memory-efficient app to
download in certain regions of the world.

6 Conclusion and recommendations for policy makers

The language gap in AI is a significant issue that risks excluding communities from the benefits of language models, undermining model safety, and exacerbating existing social, linguistic, and cultural inequalities, particularly for speakers of low-resource languages. Despite efforts across the machine learning research community and global government initiatives, several barriers still exist that must be addressed to close the AI language gap. We complete this primer with some recommendations for policy makers to ensure progress continues on multilingual inclusion.

1. Support multilingual dataset creation:
1.1. Incentivize and facilitate the creation of open access evaluation sets, which reflect relevant generative use cases and safety-relevant use cases across modalities, both by translating existing datasets ("language-parallel") and creating localized ones ("language-specific").
1.2. Enable human annotators from diverse backgrounds with multilingual and multicultural expertise to engage in the curation of high-quality, inclusive datasets.

2. Support multilingual transparency from model providers:
2.1. Encourage model providers to articulate the coverage of languages served by each model family, for example through technical or evaluation reports.
2.2. Conduct analyses of language coverage across safety research, for example by assessing the presence or absence of safety mitigations across languages in published reports.

3. Support multilingual research and development:
3.1. Ensure that diverse languages are represented across training programs that expand skill sets for efficient community engagement, data collection and model training.
3.2. Support multilingual and non-English research that aims to close the language gap through funding and other programs.
3.3. Enable access to (more) compute for multilingual safety research, especially for projects and in regions where it is disproportionately inaccessible.

Acknowledgments

We thank Thomas Euyang for the visualization of the figures and diagrams, Madeline Smith for the coordination, and Oreva Ahia for providing the raw data for Figure 3.

References

Aakanksha, Arash Ahmadian, Beyza Ermis, Seraphina Goldfarb-Tarrant, Julia Kreutzer, Marzieh Fadaee, and Sara Hooker. The multilingual alignment prism: Aligning global and local preferences to reduce harm. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 12027–12049, Miami, Florida, USA, November 2024a. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.671. URL https://aclanthology.org/2024.emnlp-main.671/.

Aakanksha, Arash Ahmadian, Seraphina Goldfarb-Tarrant, Beyza Ermis, Marzieh Fadaee, and Sara Hooker. Mix data or merge models? Optimizing for diverse multi-task learning, 2024b. URL http://arxiv.org/abs/2410.10801.

Gilles Adda, Sebastian Stüker, Martine Adda-Decker, Odette Ambouroue, Laurent Besacier, David Blachon, Hélène Bonneau-Maynard, Pierre Godard, Fatima Hamlaoui, Dmitry Idiatov, Guy-Noël Kouarata, Lori Lamel, Emmanuel-Moselly Makasso, Annie Rialland, Mark Van de Velde, François Yvon, and Sabine Zerbian. Breaking the unwritten language barrier: The BULB project. Procedia Computer Science, 81:8–14, 2016. ISSN 1877-0509. doi: 10.1016/j.procs.2016.04.023. URL https://www.sciencedirect.com/science/article/pii/S1877050916300370.
SLTU-2016: 5th Workshop on Spoken Language Technologies for Under-resourced Languages, 09–12 May 2016, Yogyakarta, Indonesia.

David Ifeoluwa Adelani, Jessica Ojo, Israel Abebe Azime, Jian Yun Zhuang, Jesujoba O. Alabi, Xuanli He, Millicent Ochieng, Sara Hooker, Andiswa Bukula, En-Shiun Annie Lee, Chiamaka Chukwuneke, Happy Buzaaba, Blessing Sibanda, Godson Kalipe, Jonathan Mukiibi, Salomon Kabongo, Foutse Yuehgoh, Mmasibidi Setaka, Lolwethu Ndolela, Nkiruka Odu, Rooweither Mabuya, Shamsuddeen Hassan Muhammad, Salomey Osei, Sokhar Samb, Tadesse Kebede Guge, and Pontus Stenetorp.
IrokoBench: A new benchmark for African languages in the age of large language models, 2024. URL http://arxiv.org/abs/2406.03368.

Muhammad Farid Adilazuarda, Samuel Cahyawijaya, Genta Indra Winata, Pascale Fung, and Ayu Purwarianti. IndoRobusta: Towards robustness against diverse code-mixed Indonesian local languages. In Kabir Ahuja, Antonios Anastasopoulos, Barun Patra, Graham Neubig, Monojit Choudhury, Sandipan Dandapat, Sunayana Sitaram, and Vishrav Chaudhary (eds.), Proceedings of the First Workshop on Scaling Up Multilingual Evaluation, pp. 25–34, Online, November 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.sumeval-1.5. URL https://aclanthology.org/2022.sumeval-1.5/.

Ashish Sunil Agrawal, Barah Fazili, and Preethi Jyothi. Translation errors significantly impact low-resource languages in cross-lingual learning, 2024. URL http://arxiv.org/abs/2402.02080.

Orevaoghene Ahia, Julia Kreutzer, and Sara Hooker. The low-resource double bind: An empirical study of pruning for low-resource machine translation. In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (eds.), Findings of the Association for Computational Linguistics: EMNLP 2021, pp. 3316–3333. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.findings-emnlp.282. URL https://aclanthology.org/2021.findings-emnlp.282.

Orevaoghene Ahia, Sachin Kumar, Hila Gonen, Jungo Kasai, David Mortensen, Noah Smith, and Yulia Tsvetkov. Do all languages cost the same? Tokenization in the era of commercial language models. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 9904–9923. Association for Computational Linguistics, 2023. doi: 10.18653/v1/2023.emnlp-main.614. URL https://aclanthology.org/2023.emnlp-main.614.

Alham Fikri Aji, Genta Indra Winata, Fajri Koto, Samuel Cahyawijaya, Ade Romadhony, Rahmad Mahendra, Kemal Kurniawan, David Moeljadi, Radityo Eko Prasojo, Timothy Baldwin, Jey Han Lau, and Sebastian Ruder. One country, 700+ languages: NLP challenges for underrepresented languages and dialects in Indonesia. In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio (eds.), Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 7226–7249, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.500. URL https://aclanthology.org/2022.acl-long.500/.

Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, Naama Tepper, and Naama Zwerdling. Not enough data? Deep learning to the rescue!, 2019. URL https://arxiv.org/abs/1911.03118.

Zachary Ankner, Cody Blakeney, Kartik Sreenivasan, Max Marion, Matthew L Leavitt, and Mansheej Paul. Perplexed by perplexity: Perplexity-based data pruning with small reference models. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=1GTARJhxtq.

Usman Anwar, Abulhair Saparov, Javier Rando, Daniel Paleka, Miles Turpin, Peter Hase, Ekdeep Singh Lubana, Erik Jenner, Stephen Casper, Oliver Sourbut, Benjamin L.
Edelman, Zhaowei Zhang, Mario Günther, Anton Korinek, Jose Hernandez-Orallo, Lewis Hammond, Eric Bigelow, Alexander Pan, Lauro Langosco, Tomasz Korbak, Heidi Zhang, Ruiqi Zhong, Seán Ó hÉigeartaigh, Gabriel Recchia, Giulio Corsi, Alan Chan, Markus Anderljung, Lilian Edwards, Yoshua Bengio, Danqi Chen, Samuel Albanie, Tegan Maharaj, Jakob Foerster, Florian Tramer, He He, Atoosa Kasirzadeh, Yejin Choi, and David Krueger. Foundational challenges in assuring alignment and safety of large language models, 2024. URL http://arxiv.org/abs/2404.09932.

Shane Arora, Marzena Karpinska, Hung-Ting Chen, Ipsita Bhattacharjee, Mohit Iyyer, and Eunsol Choi. Calmqa: Exploring culturally specific long-form question answering across 23 languages, 2024. URL https://arxiv.org/abs/2406.17761.

Viraat Aryabumi, John Dang, Dwarak Talupuru, Saurabh
Dash, David Cairuz, Hangyu Lin, Bharat Venkitesh, Madeline Smith, Jon Ander Campos, Yi Chern Tan, Kelly Marchisio, Max Bartolo, Sebastian Ruder, Acyr Locatelli, Julia Kreutzer, Nick Frosst, Aidan Gomez, Phil Blunsom, Marzieh Fadaee, Ahmet Üstün, and Sara Hooker. Aya 23: Open weight releases to further multilingual progress, 2024a. URL https://arxiv.org/abs/2405.15032.

Viraat Aryabumi, John Dang, Dwarak Talupuru, Saurabh Dash, David Cairuz, Hangyu Lin, Bharat Venkitesh, Madeline Smith, Jon Ander Campos, Yi Chern Tan, Kelly Marchisio, Max Bartolo, Sebastian Ruder, Acyr Locatelli, Julia Kreutzer, Nick Frosst, Aidan Gomez, Phil Blunsom, Marzieh Fadaee, Ahmet Üstün, and Sara Hooker. Aya 23: Open weight releases to further multilingual progress, 2024b. URL http://arxiv.org/abs/2405.15032.

Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021.

Seyram Avle, Emmanuel Quartey, and David Hutchful. Research on mobile phone data in the Global South: Opportunities and challenges, 2020. URL https://academic.oup.com/edited-volume/34286/chapter/290662354. In The Oxford Handbook of Networked Communication. Oxford University Press. ISBN 9780190460518.

Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. Constitutional AI: Harmlessness from AI feedback, 2022. URL https://arxiv.org/abs/2212.08073.

Ankur Bapna, Isaac Caswell, Julia Kreutzer, Orhan Firat, Daan van Esch, Aditya Siddhant, Mengmeng Niu, Pallavi Baljekar, Xavier Garcia, Wolfgang Macherey, Theresa Breiner, Vera Axelrod, Jason Riesa, Yuan Cao, Mia Xu Chen, Klaus Macherey, Maxim Krikun, Pidong Wang, Alexander Gutkin, Apurva Shah, Yanping Huang, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. Building machine translation systems for the next thousand languages, 2022. URL http://arxiv.org/abs/2205.03983.

Susanne Beck, Carsten Bergenholtz, Marcel Bogers, Tiare Maria Brasseur, Marie Louise Conradsen, Diletta Di Marco, Andreas P. Distel, Leonhard Dobusch, Daniel Dörler, Agnes Effert, Benedikt Fecher, Despoina Filiou, Lars Frederiksen, Thomas Gillier, Christoph Grimpe, Marc Gruber, Carolin Haeussler, Florian Heigl, Karin Hoisl, Katie Hyslop, Olga Kokshagina, Marcel LaFlamme, Cornelia Lawson, Hila Lifshitz-Assaf, Wolfgang Lukas, Markus Nordberg, Maria Theresa Norn, Marion Poetz, Marisa Ponti, Gernot Pruschak, Laia Pujol Priego, Agnieszka Radziwon, Janet Rafner, Gergana Romanova, Alexander Ruser, Henry Sauermann, Sonali K. Shah, Jacob F. Sherson, Julia Suess-Reyes, Christopher L.
Tucci, Philipp Tuertscher, Jane Bjørn Vedel, Theresa Velden, Roberto Verganti, Jonathan Wareham, Andrea Wiggins, and Sunny Mosangzi Xu. The open innovation in science research field: a collaborative conceptualisation approach. Industry and Innovation, 29(2):136–185, 2022. ISSN 1366-2716. doi: 10.1080/13662716.2020.1792274.

Federico Bianchi, Mirac Suzgun, Giuseppe Attanasio, Paul Rottger, Dan Jurafsky, Tatsunori Hashimoto,
and James Zou. Safety-tuned LLaMAs: Lessons from improving the safety of large language models that follow instructions. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=gT5hALch9z.

Steven Bird. Local languages, third spaces, and other high-resource scenarios. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 7817–7829. Association for Computational Linguistics, 2022. doi: 10.18653/v1/2022.acl-long.539. URL https://aclanthology.org/2022.acl-long.539.

Terra Blevins and Luke Zettlemoyer. Language contamination helps explain the cross-lingual capabilities of English pretrained models. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang (eds.), Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 3563–3574, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.233. URL https://aclanthology.org/2022.emnlp-main.233/.

Meriem Boubdir, Edward Kim, Beyza Ermis, Marzieh Fadaee, and Sara Hooker. Which prompts make the difference? Data prioritization for efficient human LLM evaluation, 2023.

Eleftheria Briakou, Colin Cherry, and George Foster. Searching for needles in a haystack: On the role of incidental bilingualism in PaLM's translation capability. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 9432–9452, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.524. URL https://aclanthology.org/2023.acl-long.524/.

Samuel Cahyawijaya, Holy Lovenia, Alham Fikri Aji, Genta Indra Winata, Bryan Wilie, Rahmad Mahendra, Christian Wibisono, Ade Romadhony, Karissa Vincentio, Fajri Koto, Jennifer Santoso, David Moeljadi, Cahya Wirawan, Frederikus Hudi, Ivan Halim Parmonangan, Ika Alfina, Muhammad Satrio Wicaksono, Ilham Firdausi Putra, Samsul Rahmadani, Yulianti Oenang, Ali Akbar Septiandri, James Jaya, Kaustubh D. Dhole, Arie Ardiyanti Suryani, Rifki Afina Putri, Dan Su, Keith Stevens, Made Nindyatama Nityasya, Muhammad Farid Adilazuarda, Ryan Ignatius, Ryandito Diandaru, Tiezheng Yu, Vito Ghifari, Wenliang Dai, Yan Xu, Dyah Damapuspita, Cuk Tho, Ichwanul Muslim Karo Karo, Tirana Noor Fatyanosa, Ziwei Ji, Pascale Fung, Graham Neubig, Timothy Baldwin, Sebastian Ruder, Herry Sujaini, Sakriani Sakti, and Ayu Purwarianti. NusaCrowd: Open source initiative for Indonesian NLP resources, 2023a. URL http://arxiv.org/abs/2212.09648.

Samuel Cahyawijaya, Holy Lovenia, Fajri Koto, Dea Adhista, Emmanuel Dave, Sarah Oktavianti, Salsabil Akbar, Jhonson Lee, Nuur Shadieq, Tjeng Wawan Cenggoro, Hanung Linuwih, Bryan Wilie, Galih Muridan, Genta Winata, David Moeljadi, Alham Fikri Aji, Ayu Purwarianti, and Pascale Fung. NusaWrites: Constructing high-quality corpora for underrepresented and extremely low-resource languages. In Jong C. Park, Yuki Arase, Baotian Hu, Wei Lu, Derry Wijaya, Ayu Purwarianti, and Adila Alfa Krisnadhi (eds.), Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 921–945, Nusa Dua, Bali, November 2023b. Association for Computational Linguistics. doi: 10.18653/v1/2023.ijcnlp-main.60.
URL https://aclanthology.org/2023.ijcnlp-main.60/.

Samuel Cahyawijaya, Delong Chen, Yejin Bang, Leila Khalatbari, Bryan Wilie, Ziwei Ji, Etsuko Ishii, and Pascale Fung. High-dimension human value representation in large language models, 2024. URL https://arxiv.org/abs/2404.07900.

Samuel Cahyawijaya, Holy Lovenia, Joel Ruben Antony Moniz, Tack Hwa Wong, Mohammad Rifqi Farhansyah, Thant Thiri Maung, Frederikus Hudi, David Anugraha, Muhammad
Ravi Shulthan Habibi, Muhammad Reza Qorib, Amit Agarwal, Joseph Marvin Imperial, Hitesh Laxmichand Patel, Vicky Feliren, Bahrul Ilmi Nasution, Manuel Antonio Rufino, Genta Indra Winata, Rian Adam Rajagede, Carlos Rafael Catalan, Mohamed Fazli Imam, Priyaranjan Pattnayak, Salsabila Zahirah Pranida, Kevin Pratama, Yeshil Bangera, Adisai Na-Thalang, Patricia Nicole Monderin, Yueqi Song, Christian Simon, Lynnette Hui Xian Ng, Richardy Lobo’ Sapan, Taki Hasan Rafi, Bin Wang, Supryadi, Kanyakorn Veerakanjana, Piyalitt Ittichaiwong, Matthew Theodore Roque, Karissa Vincentio, Takdanai Kreangphet, Phakphum Artkaew, Kadek Hendrawan Palgunadi, Yanzhi Yu, Rochana Prih Hastuti, William Nixon, Mithil Bangera, Adrian Xuan Wei Lim, Aye Hninn Khine, Hanif Muhammad Zhafran, Teddy Ferdinan, Audra Aurora Izzani, Ayushman Singh, Evan, Jauza Akbar Krito, Michael Anugraha, Fenal Ashokbhai Ilasariya, Haochen Li, John Amadeo Daniswara, Filbert Aurelian Tjiaranata, Eryawan Presma Yulianrifat, Can Udomcharoenchaikit, Fadil Risdian Ansori, Mahardika Krisna Ihsani, Giang Nguyen, Anab Maulana Barik, Dan John Velasco, Rifo Ahmad Genadi, Saptarshi Saha, Chengwei Wei, Isaiah Flores, Kenneth Ko Han Chen, Anjela Gail Santos, Wan Shen Lim, Kaung Si Phyo, Tim Santos, Meisyarah Dwiastuti, Jiayun Luo, Jan Christian Blaise Cruz, Ming Shan Hee, Ikhlasul Akmal Hanif, M. Alif Al Hakim, Muhammad Rizky Sya’ban, Kun Kerdthaisong, Lester James V. Miranda, Fajri Koto, Tirana Noor Fatyanosa, Alham Fikri Aji, Jostin Jerico Rosal, Jun Kevin, Robert Wijaya, Onno P. Kampman, Ruochen Zhang, Börje F. Karlsson, and Peerat Limkonchotiwat. Crowdsource, crawl, or generate? Creating SEA-VL, a multicultural vision-language dataset for Southeast Asia, 2025. URL https://arxiv.org/abs/2503.07920.

Pinzhen Chen, Simon Yu, Zhicheng Guo, and Barry Haddow. Is it good data for multilingual instruction tuning or just bad multilingual evaluation for large language models?, 2024. URL http://arxiv.org/abs/2406.12822.

Everlyn Asiko Chimoto, Jay Gala, Orevaoghene Ahia, Julia Kreutzer, Bruce A. Bassett, and Sara Hooker. Critical learning periods: Leveraging early training dynamics for efficient data pruning. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Findings of the Association for Computational Linguistics: ACL 2024, pp. 9407–9426, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-acl.560. URL https://aclanthology.org/2024.findings-acl.560/.

Rochelle Choenni, Sara Rajaee, Christof Monz, and Ekaterina Shutova. On the evaluation practices in multilingual NLP: Can machine translation offer an alternative to human translations?, 2024. URL http://arxiv.org/abs/2406.14267.

Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/d5e2c0adad503c91f91df240d0cd4e49-Paper.pdf.
Team Cohere, Aakanksha, Arash Ahmadian, Marwan Ahmed, Jay Alammar, Yazeed Alnumay, Sophia Althammer, Arkady Arkhangorodsky, Viraat Aryabumi, Dennis Aumiller, Raphaël Avalos, Zahara Aviv, Sammie Bae, Saurabh Baji, Alexandre Barbet, Max Bartolo, Björn Bebensee, Neeral Beladia, Walter Beller-Morales, Alexandre Bérard, Andrew Berneshawi, Anna Bialas, Phil Blunsom, Matt Bobkin, Adi Bongale, Sam Braun, Maxime Brunet, Samuel Cahyawijaya, David Cairuz, Jon Ander Campos, Cassie Cao, Kris Cao, Roman Castagné, Julián Cendrero, Leila Chan Currie, Yash Chandak, Diane Chang, Giannis Chatziveroglou, Hongyu Chen, Claire Cheng, Alexis Chevalier, Justin T. Chiu, Eugene Cho, Eugene Choi, Eujeong Choi, Tim Chung, Volkan Cirik, Ana Cismaru, Pierre Clavier, Henry Conklin, Lucas Crawhall-Stein, Devon Crouse, Andres Felipe Cruz-Salinas, Ben Cyrus, Daniel D’souza, Hugo Dalla-Torre,
John Dang, William Darling, Omar Darwiche Domingues, Saurabh Dash, Antoine Debugne, Théo Dehaze, Shaan Desai, Joan Devassy, Rishit Dholakia, Kyle Duffy, Ali Edalati, Ace Eldeib, Abdullah Elkady, Sarah Elsharkawy, Irem Ergün, Beyza Ermis, Marzieh Fadaee, Boyu Fan, Lucas Fayoux, Yannis Flet-Berliac, Nick Frosst, Matthias Gallé, Wojciech Galuba, Utsav Garg, Matthieu Geist, Mohammad Gheshlaghi Azar, Seraphina Goldfarb-Tarrant, Tomas Goldsack, Aidan Gomez, Victor Machado Gonzaga, Nithya Govindarajan, Manoj Govindassamy, Nathan Grinsztajn, Nikolas Gritsch, Patrick Gu, Shangmin Guo, Kilian Haefeli, Rod Hajjar, Tim Hawes, Jingyi He, Sebastian Hofstätter, Sungjin Hong, Sara Hooker, Tom Hosking, Stephanie Howe, Eric Hu, Renjie Huang, Hemant Jain, Ritika Jain, Nick Jakobi, Madeline Jenkins, JJ Jordan, Dhruti Joshi, Jason Jung, Trushant Kalyanpur, Siddhartha Rao Kamalakara, Julia Kedrzycki, Gokce Keskin, Edward Kim, Joon Kim, Wei-Yin Ko, Tom Kocmi, Michael Kozakov, Wojciech Kryściński, Arnav Kumar Jain, Komal Kumar Teru, Sander Land, Michael Lasby, Olivia Lasche, Justin Lee, Patrick Lewis, Jeffrey Li, Jonathan Li, Hangyu Lin, Acyr Locatelli, Kevin Luong, Raymond Ma, Lukas Mach, Marina Machado, Joanne Magbitang, Brenda Malacara Lopez, Aryan Mann, Kelly Marchisio, Olivia Markham, Alexandre Matton, Alex McKinney, Dominic McLoughlin, Jozef Mokry, Adrien Morisot, Autumn Moulder, Harry Moynehan, Maximilian Mozes, Vivek Muppalla, Lidiya Murakhovska, Hemangani Nagarajan, Alekhya Nandula, Hisham Nasir, Shauna Nehra, Josh Netto-Rosen, Daniel Ohashi, James Owers-Bardsley, Jason Ozuzu, Dennis Padilla, Gloria Park, Sam Passaglia, Jeremy Pekmez, Laura Penstone, Aleksandra Piktus, Case Ploeg, Andrew Poulton, Youran Qi, Shubha Raghvendra, Miguel Ramos, Ekagra Ranjan, Pierre Richemond, Cécile Robert-Michon, Aurélien Rodriguez, Sudip Roy, Laura Ruis, Louise Rust, Anubhav Sachan, Alejandro Salamanca, Kailash Karthik Saravanakumar, Isha Satyakam, Alice Schoenauer Sebag, Priyanka Sen, Sholeh Sepehri, Preethi Seshadri, Ye Shen, Tom Sherborne, Sylvie Chang Shi, Sanal Shivaprasad, Vladyslav Shmyhlo, Anirudh Shrinivason, Inna Shteinbuk, Amir Shukayev, Mathieu Simard, Ella Snyder, Ava Spataru, Victoria Spooner, Trisha Starostina, Florian Strub, Yixuan Su, Jimin Sun, Dwarak Talupuru, Eugene Tarassov, Elena Tommasone, Jennifer Tracey, Billy Trend, Evren Tumer, Ahmet Üstün, Bharat Venkitesh, David Venuto, Pat Verga, Maxime Voisin, Alex Wang, Donglu Wang, Shijian Wang, Edmond Wen, Naomi White, Jesse Willman, Marysia Winkels, Chen Xia, Jessica Xie, Minjie Xu, Bowen Yang, Tan Yi-Chern, Ivan Zhang, Zhenyu Zhao, and Zhoujie Zhao. Command A: An enterprise-ready large language model, 2025. URL https://arxiv.org/abs/2504.00698.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. Unsupervised cross-lingual representation learning at scale. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (eds.), Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 8440–8451, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.747. URL https://aclanthology.org/2020.acl-main.747/.

Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, and Yaodong Yang. Safe RLHF: Safe reinforcement learning from human feedback. In The Twelfth International Conference on Learning Representations, 2024.
URL https://openreview.net/forum?id=TyFrPOKYXw.

John Dang, Arash Ahmadian, Kelly Marchisio, Julia Kreutzer, Ahmet Üstün, and Sara Hooker. RLHF can speak many languages: Unlocking multilingual preference optimization for LLMs. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing,
pp. 13134–13156, Miami, Florida, USA, November 2024a. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.729. URL https://aclanthology.org/2024.emnlp-main.729/.

John Dang, Shivalika Singh, Daniel D’souza, Arash Ahmadian, Alejandro Salamanca, Madeline Smith, Aidan Peppin, Sungjin Hong, Manoj Govindassamy, Terrence Zhao, Sandra Kublik, Meor Amer, Viraat Aryabumi, Jon Ander Campos, Yi-Chern Tan, Tom Kocmi, Florian Strub, Nathan Grinsztajn, Yannis Flet-Berliac, Acyr Locatelli, Hangyu Lin, Dwarak Talupuru, Bharat Venkitesh, David Cairuz, Bowen Yang, Tim Chung, Wei-Yin Ko, Sylvie Shang Shi, Amir Shukayev, Sammie Bae, Aleksandra Piktus, Roman Castagné, Felipe Cruz-Salinas, Eddie Kim, Lucas Crawhall-Stein, Adrien Morisot, Sudip Roy, Phil Blunsom, Ivan Zhang, Aidan Gomez, Nick Frosst, Marzieh Fadaee, Beyza Ermis, Ahmet Üstün, and Sara Hooker. Aya Expanse: Combining research breakthroughs for a new multilingual frontier, 2024b. URL https://arxiv.org/abs/2412.04261.

Saurabh Dash, Yiyang Nan, John Dang, Arash Ahmadian, Shivalika Singh, Madeline Smith, Bharat Venkitesh, Vlad Shmyhlo, Viraat Aryabumi, Walter Beller-Morales, Jeremy Pekmez, Jason Ozuzu, Pierre Richemond, Acyr Locatelli, Nick Frosst, Phil Blunsom, Aidan Gomez, Ivan Zhang, Marzieh Fadaee, Manoj Govindassamy, Sudip Roy, Matthias Gallé, Beyza Ermis, Ahmet Üstün, and Sara Hooker. Aya Vision: Advancing the frontier of multilingual multimodality, 2025.

Yue Deng, Wenxuan Zhang, Sinno Jialin Pan, and Lidong Bing. Multilingual jailbreak challenges in large language models. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=vESNKdEMGp.

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.

∀, Wilhelmina Nekoto, Vukosi Marivate, Tshinondiwa Matsila, Timi Fasubaa, Taiwo Fagbohungbe, Solomon Oluwole Akinola, Shamsuddeen Muhammad, Salomon Kabongo Kabenamualu, Salomey Osei, Freshia Sackey, Rubungo Andre Niyongabo, Ricky Macharm, Perez Ogayo, Orevaoghene Ahia, Musie Meressa Berhe, Mofetoluwa Adeyemi, Masabata Mokgesi-Selinga, Lawrence Okegbemi, Laura Martinus, Kolawole Tajudeen, Kevin Degila, Kelechi Ogueji, Kathleen Siminyu, Julia Kreutzer, Jason Webster, Jamiil Toure Ali, Jade Abbott, Iroro Orife, Ignatius Ezeani, Idris Abdulkadir Dangana, Herman Kamper, Hady Elsahar, Goodness Duru, Ghollah Kioko, Murhabazi Espoir, Elan van Biljon, Daniel Whitenack, Christopher Onyefuluchi, Chris Chinenye Emezue, Bonaventure F. P. Dossou, Blessing Sibanda, Blessing Bassey, Ayodele Olabiyi, Arshath Ramkilowan, Alp Öktem, Adewale Akinfaderin, and Abdallah Bashir. Participatory research for low-resourced machine translation: A case study in African languages. In Trevor Cohn, Yulan He, and Yang Liu (eds.), Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 2144–2160, Online, November 2020a. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.195. URL https://aclanthology.org/2020.findings-emnlp.195/.

∀, Iroro Orife, Julia Kreutzer, Blessing K. Sibanda, Daniel Whitenack, Kathleen Siminyu, Laura Martinus, Jamiil Toure Ali, Jade Z.
Abbott, Vukosi Marivate, Salomon Kabongo, Musie Meressa, Espoir Murhabazi, Orevaoghene Ahia, Elan Van Biljon, Arshath Ramkilowan, Adewale Akinfaderin, Alp Öktem, Wole Akin, Ghollah Kioko, Kevin Degila, Herman Kamper, Bonaventure Dossou, Chris Emezue, Kelechi Ogueji, and Abdallah Bashir. Masakhane - machine translation for Africa. In Kathleen Siminyu, Laura Martinus, and Vukosi Marivate (eds.), 1st AfricaNLP Workshop Proceedings, AfricaNLP@ICLR 2020, Virtual Conference, Formerly Addis Ababa Ethiopia, 26th April 2020, 2020b. URL https://arxiv.org/abs/2003.11529.

Chiara Franzoni and Henry Sauermann. Crowd science: The organization of scientific research in open collaborative projects. Research Policy, 43(1):1–20, 2014. ISSN 0048-7333. doi: 10.1016/j.respol.2013.07.005. URL https://www.sciencedirect.com/science/article/pii/S0048733313001212.
Lea Frermann and Mirella Lapata. A Bayesian model of diachronic meaning change. Transactions of the Association for Computational Linguistics, 4:31–45, 2016. doi: 10.1162/tacl_a_00081. URL https://aclanthology.org/Q16-1003/.

Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, Andy Jones, Sam Bowman, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Nelson Elhage, Sheer El-Showk, Stanislav Fort, Zac Hatfield-Dodds, Tom Henighan, Danny Hernandez, Tristan Hume, Josh Jacobson, Scott Johnston, Shauna Kravec, Catherine Olsson, Sam Ringer, Eli Tran-Johnson, Dario Amodei, Tom Brown, Nicholas Joseph, Sam McCandlish, Chris Olah, Jared Kaplan, and Jack Clark. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv, abs/2209.07858, 2022.

Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In Trevor Cohn, Yulan He, and Yang Liu (eds.), Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 3356–3369, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.301. URL https://aclanthology.org/2020.findings-emnlp.301/.

Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on Gemini research and technology. arXiv preprint arXiv:2403.08295, 2024.

Sourojit Ghosh and Aylin Caliskan. ChatGPT perpetuates gender bias in machine translation and ignores non-gendered pronouns: Findings across Bengali and five other low-resource languages, 2023. URL http://arxiv.org/abs/2305.10510.

Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc’Aurelio Ranzato, Francisco Guzmán, and Angela Fan. The FLORES-101 evaluation benchmark for low-resource and multilingual machine translation. 2021.

Annika Grützner-Zahn, Federico Gaspari, Maria Giagkou, Stefanie Hegele, Andy Way, and Georg Rehm. Surveying the technology support of languages. In Federico Gaspari, Joss Moorkens, Itziar Aldabe, Aritz Farwell, Begona Altuna, Stelios Piperidis, Georg Rehm, and German Rigau (eds.), Proceedings of the Second International Workshop Towards Digital Language Equality (TDLE): Focusing on Sustainability @ LREC-COLING 2024, pp. 1–17, Torino, Italia, May 2024. ELRA and ICCL. URL https://aclanthology.org/2024.tdle-1.1/.

Srishti Gureja, Lester James V. Miranda, Shayekh Bin Islam, Rishabh Maheshwary, Drishti Sharma, Gusti Winata, Nathan Lambert, Sebastian Ruder, Sara Hooker, and Marzieh Fadaee. M-RewardBench: Evaluating reward models in multilingual settings, 2024. URL https://arxiv.org/abs/2410.15522.

Kai Hartung, Aaricia Herygers, Shubham Kurlekar, Khabbab Zakaria, Taylan Volkan, Sören Gröttrup, and Munir Georges. Measuring sentiment bias in machine translation, 2023. URL http://arxiv.org/abs/2306.07152.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. CoRR, abs/2009.03300, 2020. URL https://arxiv.org/abs/2009.03300.

Sara Hooker. On the limitations of compute thresholds as a governance strategy, 2024. URL https://arxiv.org/abs/2407.05694.

Franziska Horn.
Exploring word usage change with continuously evolving embeddings. In Heng Ji, Jong C. Park, and Rui Xia (eds.), Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pp. 290–297. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.acl-demo.35. URL https://aclanthology.org/2021.acl-demo.35.

Dirk Hovy and Diyi Yang. The importance of modeling social factors of language: Theory and practice. In Kristina
Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou (eds.), Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 588–602. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.naacl-main.49. URL https://aclanthology.org/2021.naacl-main.49.

Adithya Venkatadri Hulagadri, Julia Kreutzer, Jian Gang Ngui, and Xian Bin Yong. Towards fair and comprehensive multilingual LLM benchmarking, 2025. URL https://cohere.com/blog/towards-fair-and-comprehensive-multilingual-and-multicultural-llm-benchmarking.

Kokil Jaidka, Niyati Chhaya, and Lyle Ungar. Diachronic degradation of language models: Insights from social media. In Iryna Gurevych and Yusuke Miyao (eds.), Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 195–200. Association for Computational Linguistics, 2018. doi: 10.18653/v1/P18-2032. URL https://aclanthology.org/P18-2032.

Meng Ji, Pierrette Bouillon, and Mark Seligman. Cultural and Linguistic Bias of Neural Machine Translation Technology, pp. 100–128. Studies in Natural Language Processing. Cambridge University Press, 2023a.

Yunjie Ji, Yan Gong, Yong Deng, Yiping Peng, Qiang Niu, Baochang Ma, and Xiangang Li. Towards better instruction following language models for Chinese: Investigating the impact of training data and evaluation, 2023b. URL http://arxiv.org/abs/2304.07854.

Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. The state and fate of linguistic diversity and inclusion in the NLP world. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (eds.), Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 6282–6293, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.560. URL https://aclanthology.org/2020.acl-main.560/.

Anubha Kabra, Emmy Liu, Simran Khanuja, Alham Fikri Aji, Genta Winata, Samuel Cahyawijaya, Anuoluwapo Aremu, Perez Ogayo, and Graham Neubig. Multi-lingual and multi-cultural figurative language understanding. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Findings of the Association for Computational Linguistics: ACL 2023, pp. 8269–8284, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.525. URL https://aclanthology.org/2023.findings-acl.525/.

Mohammed Safi Ur Rahman Khan, Priyam Mehta, Ananth Sankar, Umashankar Kumaravelan, Sumanth Doddapaneni, Suriyaprasaad B, Varun G, Sparsh Jain, Anoop Kunchukuttan, Pratyush Kumar, Raj Dabre, and Mitesh M. Khapra. IndicLLMSuite: A blueprint for creating pre-training and fine-tuning datasets for Indian languages. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 15831–15879, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-long.843. URL https://aclanthology.org/2024.acl-long.843/.

Khyati Khandelwal, Manuel Tonneau, Andrew M. Bean, Hannah Rose Kirk, and Scott A. Hale. Casteist but not racist? Quantifying disparities in large language model bias between India and the West. CoRR, abs/2309.08573, 2023. URL https://doi.org/10.48550/arXiv.2309.08573.
Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra, and Harm de Vries. The Stack: 3 TB of permissively licensed source code, 2022.

Hadas Kotek, Rikker Dockum, and David Sun. Gender bias and stereotypes in large language models. In Proceedings of The ACM Collective Intelligence Conference, CI ’23, pp. 12–24. ACM, November 2023. doi: 10.1145/3582269.3615599. URL http://dx.doi.org/10.1145/3582269.3615599.

Julia Kreutzer, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, Monang
Setyawan, Supheakmungkol Sarin, Sokhar Samb, Benoît Sagot, Clara Rivera, Annette Rios, Isabel Papadimitriou, Salomey Osei, Pedro Ortiz Suarez, Iroro Orife, Kelechi Ogueji, Andre Niyongabo Rubungo, Toan Q. Nguyen, Mathias Müller, André Müller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov, Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaventure F. P. Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine Çabuk Ballı, Stella Biderman, Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, and Mofetoluwa Adeyemi. Quality at a glance: An audit of web-crawled multilingual datasets. Transactions of the Association for Computational Linguistics, 10:50–72, 2022. doi: 10.1162/tacl_a_00447. URL https://aclanthology.org/2022.tacl-1.4/.

Anoop Kunchukuttan, Siddharth Jain, and Rahul Kejriwal. A large-scale evaluation of neural machine transliteration for Indic languages. In Paola Merlo, Jorg Tiedemann, and Reut Tsarfaty (eds.), Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pp. 3469–3475. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.eacl-main.303. URL https://aclanthology.org/2021.eacl-main.303.

Viet Dac Lai, Chien Van Nguyen, Nghia Trung Ngo, Thuat Nguyen, Franck Dernoncourt, Ryan A. Rossi, and Thien Huu Nguyen. Okapi: Instruction-tuned large language models in multiple languages with reinforcement learning from human feedback, 2023. URL https://arxiv.org/abs/2307.16039.

Walter Laurito, Benjamin Davis, Peli Grietzer, Tomáš Gavenčiak, Ada Böhm, and Jan Kulveit. AI AI bias: Large language models favor their own generated content, 2024. URL https://arxiv.org/abs/2407.12856.

Alycia Lee, Brando Miranda, Sudharsan Sundar, and Sanmi Koyejo. Beyond scale: the diversity coefficient as a data quality metric demonstrates LLMs are pre-trained on formally diverse data, 2023. URL http://arxiv.org/abs/2306.13840.

Regina Lenart-Gansiniec, Wojciech Czakon, Łukasz Sułkowski, and Jasna Pocek. Understanding crowdsourcing in science, 2023. ISSN 1863-6691. URL https://doi.org/10.1007/s11846-022-00602-z.

Zihao Li, Yucheng Shi, Zirui Liu, Fan Yang, Ali Payani, Ninghao Liu, and Mengnan Du. Quantifying multilingual performance of large language models across languages, 2024. URL http://arxiv.org/abs/2404.11553.

Fangyu Liu, Emanuele Bugliarello, Edoardo Maria Ponti, Siva Reddy, Nigel Collier, and Desmond Elliott. Visually grounded reasoning across languages and cultures. In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (eds.), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 10467–10485, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.818. URL https://aclanthology.org/2021.emnlp-main.818/.

Shayne Longpre, Robert Mahari, Anthony Chen, Naana Obeng-Marnu, Damien Sileo, William Brannon, Niklas Muennighoff, Nathan Khazam, Jad Kabbara, Kartik Perisetla, Xinyi Wu, Enrico Shippole, Kurt Bollacker, Tongshuang Wu, Luis Villa, Sandy Pentland, and Sara Hooker. The data provenance initiative: A large scale audit of dataset licensing & attribution in AI, 2023. URL http://arxiv.org/abs/2310.16787.
Shayne Longpre, Nikhil Singh, Manuel Cherep, Kushagra Tiwary, Joanna Materzynska, William Brannon, Robert Mahari, Naana Obeng-Marnu, Manan Dey, Mohammed Hamdy, Nayan Saxena, Ahmad Mustafa Anis, Emad A. Alghamdi, Vu Minh Chien, Da Yin, Kun Qian, Yizhi Li, Minnie Liang, An Dinh, Shrestha Mohanty, Deividas Mataciunas, Tobin South, Jianguo Zhang, Ariel N. Lee, Campbell S. Lund, Christopher Klamm, Damien Sileo, Diganta Misra, Enrico Shippole, Kevin Klyman, Lester JV Miranda, Niklas Muennighoff, Seonghyeon Ye, Seungone Kim, Vipul
Gupta, Vivek Sharma, Xuhui Zhou, Caiming Xiong, Luis Villa, Stella Biderman, Alex Pentland, Sara Hooker, and Jad Kabbara. Bridging the data provenance gap across text, speech and video, 2024.

Holy Lovenia, Rahmad Mahendra, Salsabil Maulana Akbar, Lester James Validad Miranda, Jennifer Santoso, Elyanah Aco, Akhdan Fadhilah, Jonibek Mansurov, Joseph Marvin Imperial, Onno P. Kampman, Joel Ruben Antony Moniz, Muhammad Ravi Shulthan Habibi, Frederikus Hudi, Jann Railey Montalan, Ryan Ignatius Hadiwijaya, Joanito Agili Lopo, William Nixon, Börje F. Karlsson, James Jaya, Ryandito Diandaru, Yuze Gao, Patrick Amadeus Irawan, Bin Wang, Jan Christian Blaise Cruz, Chenxi Whitehouse, Ivan Halim Parmonangan, Maria Khelli, Wenyu Zhang, Lucky Susanto, Reynard Adha Ryanda, Sonny Lazuardi Hermawan, Dan John Velasco, Muhammad Dehan Al Kautsar, Willy Fitra Hendria, Yasmin Moslem, Noah Flynn, Muhammad Farid Adilazuarda, Haochen Li, Johanes Lee, R. Damanhuri, Shuo Sun, Muhammad Reza Qorib, Amirbek Djanibekov, Wei Qi Leong, Quyet V. Do, Niklas Muennighoff, Tanrada Pansuwan, Ilham Firdausi Putra, Yan Xu, Tai Ngee Chia, Ayu Purwarianti, Sebastian Ruder, William Chandra Tjhi, Peerat Limkonchotiwat, Alham Fikri Aji, Sedrick Keh, Genta Indra Winata, Ruochen Zhang, Fajri Koto, Zheng Xin Yong, and Samuel Cahyawijaya. SEACrowd: A multilingual multimodal data hub and benchmark suite for Southeast Asian languages. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen (eds.), Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 5155–5203, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.296. URL https://aclanthology.org/2024.emnlp-main.296/.

Alexandre Magueresse, Vincent Carles, and Evan Heetderks. Low-resource languages: A review of past work and future challenges, 2020. URL http://arxiv.org/abs/2006.07264.

Max Marion, Ahmet Üstün, Luiza Pozzobon, Alex Wang, Marzieh Fadaee, and Sara Hooker. When less is more: Investigating data pruning for pretraining LLMs at scale, 2023. URL http://arxiv.org/abs/2309.04564.

Laura Martinus and Jade Z. Abbott. A focus on neural machine translation for African languages. CoRR, abs/1906.05685, 2019. URL http://arxiv.org/abs/1906.05685.

Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, and Erik Brynjolfsson. AI index report 2024, 2024. URL https://aiindex.stanford.edu/report/#individual-chapters.

Mistral Team. Mistral.ai news: Ministraux, 2024a. URL https://mistral.ai/news/ministraux/.

Mistral Team. Mistral.ai news: Ministraux, 2024b. URL https://mistral.ai/news/mixtral-8x22b/.

Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng Xin Yong, Hailey Schoelkopf, Xiangru Tang, Dragomir Radev, Alham Fikri Aji, Khalid Almubarak, Samuel Albanie, Zaid Alyafeai, Albert Webson, Edward Raff, and Colin Raffel. Crosslingual generalization through multitask finetuning. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (eds.), Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 15991–16111. Association for Computational Linguistics, 2023a. doi: 10.18653/v1/2023.acl-long.891. URL https://aclanthology.org/2023.acl-long.891.
29 Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, Xiangru Tang, Dragomir Radev, Alham Fikri Aji, Khalid Almubarak, Samuel Albanie, Zaid Alyafeai, Albert Webson, Edward Raff, and Colin Raffel. Crosslingual generalization through multitask finetuning, 2023b. URLhttps://arxiv.org/abs/2211.01786 . Tarek Naous, Michael J Ryan, Alan Ritter, and Wei Xu. Having beer after prayer? measuring cultural bias
|
https://arxiv.org/abs/2505.21344v1
|
in large language models. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.),Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pp. 16366–16393, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-long.862. URL https://aclanthology.o rg/2024.acl-long.862/ . Gabriel Nicholas and Aliya Bhatia. Lost in translation: Large language models in non-english content analysis, 2023. URL https://arxiv.org/abs/2306.07377 . Ayomide Odumakinde, Daniel D’souza, Pat Verga, Beyza Ermis, and Sara Hooker. Multilingual arbitrage: Optimizing data pools to accelerate multilingual progress, 2024. URL https://arxi v.org/abs/2408.14960 . OECD. AI language models: Technological, socio-economic and policy considerations, 2023. URL https://www.oecd-ilibrary.org/science-and-technology/ai-language-models_13d38f92 -en. Series: OECD Digital Economy Papers Volume: 352. Jessica Ojo, Odunayo Ogundepo, Akintunde Oladipo, Kelechi Ogueji, Jimmy Lin, Pontus Stene- torp, and David Ifeoluwa Adelani. Afrobench: How good are large language models on african languages?, 2025. URL https://arxiv.org/abs/2311.07978 . Martin Petty. Explainer: Why is Myanmar’s military holding an election?, 2023. URL https: //www.reuters.com/world/asia-pacific/why-is-myanmars-military-holding-an-electio n-2023-03-29/ . Accessed on Jan. 17, 2024. Jonas Pfeiffer, Gregor Geigle, Aishwarya Kamath, Jan-Martin O. Steitz, Stefan Roth, Ivan Vulić, and Iryna Gurevych. xGQA: Cross-lingual visual question answering. In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio (eds.), Findings of the Association for Computational Linguistics: ACL 2022 , pp.2497–2511, Dublin, Ireland, May2022.AssociationforComputational Linguistics. doi: 10.18653/v1/2022.findings-acl.196. URL https://aclanthology.org/2022.fi ndings-acl.196/ . Luiza Pozzobon, Beyza Ermis, Patrick Lewis, and Sara Hooker. Goodtriever: Adaptive toxicity mitigation with retrieval-augmented models, 2023. URL http://arxiv.org/abs/2310.07589 . Luiza Pozzobon, Patrick Lewis, Sara Hooker, and Beyza Ermis. From one to many: Expanding the scope of toxicity mitigation in language models, 2024. URL http://arxiv.org/abs/2403.03893 . Ayu Purwarianti, Dea Adhista, Agung Baptiso, Miftahul Mahfuzh, Yusrina Sabila, Aulia Adila, Samuel Cahyawijaya, and Alham Fikri Aji. NusaDialogue: Dialogue summarization and genera- tion for underrepresented and extremely low-resource languages. In Derry Wijaya, Alham Fikri Aji, Clara Vania, Genta Indra Winata, and Ayu Purwarianti (eds.), Proceedings of the Second Workshop in South East Asian Language Processing , pp. 82–100, Online, January 2025. Associa- tion for Computational Linguistics. URL https://aclanthology.org/2025.sealp-1.8/ . 30 qwen team Qwen Team. Qwen2.5: A party of foundation models, September 2024. URL https: //qwenlm.github.io/blog/qwen2.5/ . Surangika Ranathunga and Nisansa de Silva. Some languages are more equal than others: Probing deeper into the linguistic disparity in the NLP world. In Yulan He, Heng Ji, Sujian Li, Yang Liu, and Chua-Hui Chang (eds.), Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) , pp. 823–848, Online only, November 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.aacl-main.62. URL https://aclanthology.org/2022.aacl-main.62/ . Reuters. 
Explainer: What is happening between Armenia and Azerbaijan over Nagorno-Karabakh?, 2023. URL https://www.reuters.com/world/what-is-happening-between-armenia-azerb aijan-over-nagorno-karabakh-2023-09-19/ . Accessed on Jan. 17, 2024. Angelika Romanou, Negar Foroutan, Anna Sotnikova, Sree Harsha Nelaturu, Shivalika Singh, Rishabh Maheshwary, Micol Altomare, Zeming Chen, Mohamed A. Haggag, Snegha A, Al- fonso Amayuelas, Azril Hafizi Amirudin, Danylo Boiko, Michael Chang, Jenny Chim, Gal Cohen, Aditya Kumar Dalmia, Abraham Diress, Sharad Duwal, Daniil Dzenhaliou, Daniel Fernando Er- azo Florez, Fabian Farestam, Joseph Marvin Imperial, Shayekh Bin Islam, Perttu Isotalo, Maral Jabbarishiviari, Börje F. Karlsson,
|
https://arxiv.org/abs/2505.21344v1
|
Eldar Khalilov, Christopher Klamm, Fajri Koto, Dominik Krzemiński, Gabriel Adriano de Melo, Syrielle Montariol, Yiyang Nan, Joel Niklaus, Jekaterina Novikova, JohanSamirObandoCeron, DebjitPaul, EstherPloeger, JebishPurbey, SwatiRajwal, Selvan Sunitha Ravi, Sara Rydell, Roshan Santhosh, Drishti Sharma, Marjana Prifti Skenduli, Arshia Soltani Moakhar, Bardia soltani moakhar, Ayush Kumar Tarun, Azmine Toushik Wasi, Thenuka Ovin Weerasinghe, Serhan Yilmaz, Mike Zhang, Imanol Schlag, Marzieh Fadaee, Sara Hooker, and Antoine Bosselut. INCLUDE: Evaluating multilingual language understanding with regional knowledge. In The Thirteenth International Conference on Learning Representations , 2025. URL https://openreview.net/forum?id=k3gCieTXeY . David Romero, Chenyang Lyu, Haryo Wibowo, Santiago Góngora, Aishik Mandal, Sukannya Purkayastha, Jesus-German Ortiz-Barajas, Emilio Cueva, Jinheon Baek, Soyeong Jeong, et al. Cvqa: Culturally-diverse multilingual visual question answering benchmark. Advances in Neural Information Processing Systems , 37:11479–11505, 2025. Israfel Salazar, Manuel Fernández Burda, Shayekh Bin Islam, Arshia Soltani Moakhar, Shivalika Singh, Fabian Farestam, Angelika Romanou, Danylo Boiko, Dipika Khullar, Mike Zhang, Do- minik Krzemiński, Jekaterina Novikova, Luísa Shimabucoro, Joseph Marvin Imperial, Rishabh Maheshwary, Sharad Duwal, Alfonso Amayuelas, Swati Rajwal, Jebish Purbey, Ahmed Ruby, Nicholas Popovič, Marek Suppa, Azmine Toushik Wasi, Ram Mohan Rao Kadiyala, Olga Tsym- boi, Maksim Kostritsya, Bardia Soltani Moakhar, Gabriel da Costa Merlin, Otávio Ferracioli Co- letti, Maral Jabbari Shiviari, MohammadAmin farahani fard, Silvia Fernandez, María Grandury, Dmitry Abulkhanov, Drishti Sharma, Andre Guarnier De Mitri, Leticia Bossatto Marchezi, Se- tayesh Heydari, Johan Obando-Ceron, Nazar Kohut, Beyza Ermis, Desmond Elliott, Enzo Fer- rante, Sara Hooker, and Marzieh Fadaee. Kaleidoscope: In-language exams for massively multi- lingual vision evaluation, 2025. URL https://arxiv.org/abs/2504.07072 . Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, and Vinodkumar Prabhakaran. Re-imagining algorithmic fairness in india and beyond, 2021. URL http://arxiv.org/abs/21 01.09995 . 31 Beatrice Savoldi, Marco Gaido, Luisa Bentivogli, Matteo Negri, and Marco Turchi. Gender bias in machine translation. Transactions of the Association for Computational Linguistics , 9:845–874, 2021. doi: 10.1162/tacl_a_00401. URL https://aclanthology.org/2021.tacl-1.51/ . Reva Schwartz, Apostol Vassilev, Kristen K. Greene, Lori Perine, Andrew Burt, and Patrick Hall. Towardsastandardforidentifyingandmanagingbiasinartificialintelligence, 2022-03-1504:03:00 2022. URL https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=934464 . Lingfeng Shen, Weiting Tan, Sihao Chen, Yunmo Chen, Jingyu Zhang, Haoran Xu, Boyuan Zheng, Philipp Koehn, and Daniel Khashabi. The language barrier: Dissecting safety challenges of LLMs in multilingual contexts. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Findings of the Association for Computational Linguistics: ACL 2024 , pp. 2668–2680, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-acl.156. URLhttps://aclanthology.org/2024.findings-acl.156/ . Shivalika Singh, Freddie Vargus, Daniel Dsouza, Börje F. 
Karlsson, Abinaya Mahendiran, Wei-Yin Ko, Herumb Shandilya, Jay Patel, Deividas Mataciunas, Laura OMahony, Mike Zhang, Ramith Hettiarachchi, Joseph Wilson, Marina Machado, Luisa Souza Moura, Dominik Krzemiński, Hakimeh Fadaei, Irem Ergün, Ifeoma Okoh, Aisha Alaagib, Oshan Mudannayake, Zaid Alyafeai, Vu Minh Chien, Sebastian Ruder, Surya Guthikonda, Emad A. Alghamdi, Sebastian Gehrmann, Niklas Muennighoff, Max Bartolo, Julia Kreutzer, Ahmet Üstün, Marzieh Fadaee, and Sara Hooker. Aya dataset: An open-access collection for multilingual instruction tuning, 2024. URL http://arxiv.org/abs/2402.06619 . Shivalika Singh, Angelika Romanou, Clémentine Fourrier, David I. Adelani, Jian Gang Ngui, Daniel Vila-Suero, PeeratLimkonchotiwat, KellyMarchisio, WeiQiLeong, YosephineSusanto, Raymond Ng, Shayne Longpre, Wei-Yin Ko, Sebastian Ruder, Madeline Smith, Antoine Bosselut, Alice Oh, Andre F. T. Martins, Leshem Choshen, Daphne Ippolito, Enzo
|
https://arxiv.org/abs/2505.21344v1
|
Ferrante, Marzieh Fadaee, Beyza Ermis, and Sara Hooker. Global mmlu: Understanding and addressing cultural and linguistic biases in multilingual evaluation, 2025. URL https://arxiv.org/abs/2412.03304 . Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, and Ari S. Morcos. Beyond neural scaling laws: beating power law scaling via data pruning, 2023. Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Rad- ford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural In- formation Processing Systems , volume 33, pp. 3008–3021. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1f89885d556929e98d3ef 9b86448f951-Paper.pdf . Zeerak Talat, Aurélie Névéol, Stella Biderman, Miruna Clinciu, Manan Dey, Shayne Longpre, Sasha Luccioni, Maraim Masoud, Margaret Mitchell, Dragomir Radev, Shanya Sharma, Arjun Subra- monian, Jaesung Tae, Samson Tan, Deepak Tunuguntla, and Oskar Van Der Wal. You reap what you sow: On the challenges of bias evaluation under multilingual settings. In Angela Fan, Suzana Ilic, Thomas Wolf, and Matthias Gallé (eds.), Proceedings of BigScience Episode #5 – Workshop on Challenges & Perspectives in Creating Large Language Models , pp. 26–41, virtual+Dublin, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.bigscience-1.3. URLhttps://aclanthology.org/2022.bigscience-1.3/ . 32 Alex Tamkin, Miles McCain, Kunal Handa, Esin Durmus, Liane Lovitt, Ankur Rathi, Saffron Huang, Alfred Mountfield, Jerry Hong, Stuart Ritchie, Michael Stern, Brian Clarke, Landon Goldberg, Theodore R. Sumers, Jared Mueller, William McEachen, Wes Mitchell, Shan Carter, Jack Clark, Jared Kaplan, and Deep Ganguli. Clio: Privacy-preserving insights into real-world ai use, 2024. URL https://arxiv.org/abs/2412.13678 . Jingqun Tang, Qi Liu, Yongjie Ye, Jinghui Lu, Shu Wei, Chunhui Lin, Wanqing Li, Mohamad Fitri Faiz Bin Mahmood, Hao Feng, Zhen Zhao, et al. Mtvqa: Benchmarking multilingual text-centric visual question answering. CoRR, 2024. RossTaylor, MarcinKardas, GuillemCucurull, ThomasScialom, AnthonyHartshorn, ElvisSaravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language model for science, 2022. Megh Thakkar, Tolga Bolukbasi, Sriram Ganapathy, Shikhar Vashishth, Sarath Chandar, and Partha Talukdar. Self-influence guided data reweighting for language model pre-training, 2023. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Ar- mand Joulin, Edouard Grave, and Guillaume Lample. LLaMA: Open and efficient foundation language models, 2023a. URL http://arxiv.org/abs/2302.13971 . 
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko- lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, WenyinFu, BrianFuller, CynthiaGao, VedanujGoswami, NamanGoyal, AnthonyHartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2:
|
https://arxiv.org/abs/2505.21344v1
|
Open foundation and fine-tuned chat models.arXiv, abs/2307.09288, 2023b. Marcos Treviso, Ji-Ung Lee, Tianchu Ji, Betty van Aken, Qingqing Cao, Manuel R. Ciosici, Michael Hassid, Kenneth Heafield, Sara Hooker, Colin Raffel, Pedro H. Martins, André F. T. Martins, Jessica Zosa Forde, Peter Milder, Edwin Simpson, Noam Slonim, Jesse Dodge, Emma Strubell, Niranjan Balasubramanian, Leon Derczynski, Iryna Gurevych, and Roy Schwartz. Efficient meth- ods for natural language processing: A survey. Transactions of the Association for Computa- tional Linguistics , 11:826–860, 07 2023. ISSN 2307-387X. doi: 10.1162/tacl_a_00577. URL https://doi.org/10.1162/tacl_a_00577 . Lewis Tunstall, Edward Emanuel Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro Von Werra, Clémentine Fourrier, Nathan Habib, Nathan Sarrazin, Omar Sanseviero, Alexander M Rush, and Thomas Wolf. Zephyr: Di- rect distillation of LM alignment. In First Conference on Language Modeling , 2024. URL https://openreview.net/forum?id=aKkAwZB6JV . 33 Ahmet Üstün, Viraat Aryabumi, Zheng Yong, Wei-Yin Ko, Daniel D’souza, Gbemileke Onilude, Neel Bhandari, Shivalika Singh, Hui-Lee Ooi, Amr Kayid, Freddie Vargus, Phil Blunsom, Shayne Longpre, Niklas Muennighoff, Marzieh Fadaee, Julia Kreutzer, and Sara Hooker. Aya model: An instruction finetuned open-access multilingual language model. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Proceedings of the 62nd Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers) , pp. 15894–15939, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-long.845. URL https://aclanthology.org/2024.acl-long.845/ . Eva Vanmassenhove, Dimitar Shterionov, and Matthew Gwilliam. Machine translationese: Ef- fects of algorithmic bias on linguistic complexity in machine translation. In Paola Merlo, Jorg Tiedemann, and Reut Tsarfaty (eds.), Proceedings of the 16th Conference of the European Chap- ter of the Association for Computational Linguistics: Main Volume , pp. 2203–2213. Associa- tion for Computational Linguistics, 2021. doi: 10.18653/v1/2021.eacl-main.188. URL https://aclanthology.org/2021.eacl-main.188 . Cécile B. Vigouroux. Francophonie, 2013. ISSN 0084-6570, 1545-4290. URL https://www.annual reviews.org/doi/10.1146/annurev-anthro-092611-145804 . Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, Zac Kenton, Sasha Brown, Will Hawkins, Tom Stepleton, Courtney Biles, Abeba Birhane, Julia Haas, Laura Rimell, Lisa Anne Hendricks, William Isaac, Sean Legassick, Geoffrey Irving, and Iason Gabriel. Ethical and social risks of harm from language models, 2021. URL https://arxiv.org/abs/2112.04359 . ChrisWendler, VeniaminVeselovsky, GiovanniMonea, andRobertWest. Dollamasworkinenglish? on the latent language of multilingual transformers, 2024. URL http://arxiv.org/abs/2402.1 0588. Genta Indra Winata, Alham Fikri Aji, Samuel Cahyawijaya, Rahmad Mahendra, Fajri Koto, Ade Romadhony, Kemal Kurniawan, David Moeljadi, Radityo Eko Prasojo, Pascale Fung, Timo- thy Baldwin, Jey Han Lau, Rico Sennrich, and Sebastian Ruder. NusaX: Multilingual par- allel sentiment dataset for 10 Indonesian local languages. In Andreas Vlachos and Isabelle Augenstein (eds.), Proceedings of the 17th Conference of the European Chapter of the Asso- ciation for Computational Linguistics , pp. 815–834, Dubrovnik, Croatia, May 2023. 
Associa- tion for Computational Linguistics. doi: 10.18653/v1/2023.eacl-main.57. URL https: //aclanthology.org/2023.eacl-main.57/ . Zheng Xin Yong, Cristina Menghini, and Stephen Bach. Low-resource languages jailbreak GPT-4. InWorkshopSocially Responsible Language Modelling Research , 2023a. URL https://openrevi ew.net/forum?id=pn83r8V2sv . Zheng Xin Yong, Ruochen Zhang, Jessica Forde, Skyler Wang, Arjun Subramonian, Holy Lovenia, Samuel Cahyawijaya, Genta Winata, Lintang Sutawika, Jan Christian Blaise Cruz, Yin Lin Tan, Long Phan, Long
|
https://arxiv.org/abs/2505.21344v1
|
Phan, Rowena Garcia, Thamar Solorio, and Alham Fikri Aji. Prompting mul- tilingual large language models to generate code-mixed texts: The case of south East Asian lan- guages. In Genta Winata, Sudipta Kar, Marina Zhukova, Thamar Solorio, Mona Diab, Sunayana Sitaram, Monojit Choudhury, and Kalika Bali (eds.), Proceedings of the 6th Workshop on Com- putational Approaches to Linguistic Code-Switching , pp. 43–63, Singapore, December 2023b. As- sociation for Computational Linguistics. URL https://aclanthology.org/2023.calcs-1.5/ . 34 Xiang Yue, Yueqi Song, Akari Asai, Seungone Kim, Jean de Dieu Nyandwi, Simran Khanuja, Anjali Kantharuban, Lintang Sutawika, Sathyanarayanan Ramamoorthy, and Graham Neubig. Pangea: A fully open multilingual multimodal llm for 39 languages. arXiv preprint arXiv:2410.16153 , 2024. URL https://arxiv.org/abs/2410.16153 . Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging LLM-as-a-judge with MT-bench and chatbot arena. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track , 2023. URL https: //openreview.net/forum?id=uccHPGDlao . AndyZou, ZifanWang, NicholasCarlini, MiladNasr, J.ZicoKolter, andMattFredrikson. Universal and transferable adversarial attacks on aligned language models, 2023. URL https://arxiv.or g/abs/2307.15043 . 35
|
https://arxiv.org/abs/2505.21344v1
|
Prostate Cancer Screening with Artificial Intelligence–Enhanced Micro-Ultrasound: A Comparative Study with Traditional Methods

Muhammad Imran (a), Wayne G. Brisbane (b), Li-Ming Su (c), Jason P. Joseph (c), Wei Shao (a,∗)

(a) Department of Medicine, University of Florida, Gainesville, FL 32611, USA
(b) Department of Urology, University of California, Los Angeles, CA 90095, USA
(c) Department of Urology, University of Florida, Gainesville, FL 32611, USA
∗ Corresponding author. E-mail address: weishao@ufl.edu (W. Shao). ORCID: 0000-0003-4931-4839 (W. Shao).

Abstract

Background and objective: Micro-ultrasound (micro-US) is a novel ultrasound modality with diagnostic accuracy comparable to magnetic resonance imaging (MRI) for detecting clinically significant prostate cancer (csPCa). This study investigated whether interpretation of micro-US by artificial intelligence (AI) can outperform clinical screening methods using prostate-specific antigen (PSA) and digital rectal examination (DRE).

Methods: We retrospectively studied 145 men who underwent micro-US guided biopsy (79 with csPCa and 66 without). A self-supervised convolutional autoencoder was trained to extract deep image features from 2D micro-US slices. Random forest classifiers were developed using five-fold cross-validation to predict csPCa at the slice level. A patient was classified as csPCa-positive if ≥8 consecutive slices were predicted positive. Model performance was compared with a classifier trained on common clinical screening variables (PSA, DRE, prostate volume, and age).

Key findings and limitations: The AI-enhanced micro-US model and the clinical screening model achieved AUROCs of 0.871 and 0.753, respectively. Using a fixed classification threshold for both models, the micro-US model achieved a sensitivity of 92.5% and specificity of 68.1%, while the clinical model achieved a sensitivity of 96.2% but with a lower specificity of 27.3%. Limitations of this study include its retrospective single-center design and lack of external validation.

Conclusions and clinical implications: AI-interpreted micro-US significantly improves specificity for csPCa while maintaining high sensitivity. This approach may reduce unnecessary biopsies and offers a low-cost, point-of-care alternative to PSA-based screening. Future prospective studies are needed to validate these findings.

Patient summary: We developed an artificial intelligence system to analyze micro-ultrasound images of the prostate. In this study, it detected aggressive prostate cancer more accurately than traditional screening methods such as PSA blood tests and digital rectal exams. This approach may help reduce unnecessary biopsies in the future.

Keywords: Prostate cancer screening, Micro-ultrasound, Artificial intelligence

1. Introduction

Prostate cancer is one of the most commonly diagnosed malignancies and a leading cause of cancer-related death worldwide (James et al., 2024). Early detection of clinically significant prostate cancer (csPCa) is critical, as it increases the 5-year survival rate from 37% to nearly 100% (Society, 2025). Clinically, screening often relies on prostate-specific antigen (PSA) testing and digital rectal examination (DRE), but both methods have notable limitations. PSA lacks specificity and may be elevated in benign conditions such as benign prostatic hyperplasia or prostatitis. DRE, while inexpensive and easy to perform, suffers from poor sensitivity and high inter-observer variability. As a result, these traditional tools can lead to both overdiagnosis and missed diagnoses, contributing to unnecessary biopsies or delayed detection of aggressive disease. Multiparametric magnetic resonance imaging (mpMRI) improves the detection of csPCa and
|
https://arxiv.org/abs/2505.21355v1
|
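The abstract in the row above specifies a concrete patient-level decision rule: a patient is called csPCa-positive when at least eight consecutive slices receive a positive slice-level prediction. The snippet below is a minimal sketch of that rule, assuming binary slice predictions ordered along the probe sweep; the function and variable names are ours and the default threshold of 8 is taken from the paper, so this is an illustration rather than the authors' implementation.

```python
from typing import Sequence


def patient_is_positive(slice_preds: Sequence[int], min_run: int = 8) -> bool:
    """Aggregate slice-level csPCa predictions into a patient-level call.

    Flags a patient as positive when at least `min_run` *consecutive*
    slices are predicted positive (the paper uses min_run = 8).
    """
    run = 0
    for pred in slice_preds:
        # Extend the current run of positive slices, or reset it.
        run = run + 1 if pred == 1 else 0
        if run >= min_run:
            return True
    return False


# Seven consecutive positives are not enough; eight are.
assert not patient_is_positive([1] * 7 + [0, 1, 1])
assert patient_is_positive([0, 0] + [1] * 8 + [0])
```

Requiring a run of consecutive positives, rather than a simple count, makes isolated false-positive slices less likely to flip the patient-level decision.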
is commonly used to guide targeted biopsies (Ahmed et al., 2017). However, its role in routine screening is limited by high cost, long acquisition times, limited availability, and the need for specialized radiological expertise. These constraints make mpMRI impractical for large-scale or point-of-care screening. Micro-ultrasound (micro-US), a high-resolution (29 MHz) imaging modality, provides real-time visualization of prostate microarchitecture with spatial resolution three to four times greater than conventional transrectal ultrasound (Klotz et al., 2020). The OPTIMUM randomized trial confirmed that micro-US–guided biopsy is noninferior to MRI-targeted biopsy for detecting csPCa (Kinnaird et al., 2025). With its portability, lower cost, and suitability for outpatient use, micro-US is well positioned as a potential screening tool. However, interpretation remains challenging and highly operator-dependent (Zhou et al., 2024), limiting consistent performance and widespread adoption.

To address the interpretive limitations of micro-ultrasound, we developed an artificial intelligence (AI) model to automatically detect csPCa from micro-US images. Using a retrospective cohort of 145 men who underwent micro-US–guided prostate biopsy, we trained a self-supervised convolutional autoencoder to extract deep imaging features from 2D micro-US slices. These features were used to train a random forest classifier for slice-level prediction, and patient-level prediction was determined by aggregating predictions across consecutive slices. We compared this model to a classifier trained on standard clinical screening variables including PSA, DRE, prostate volume, and age. To our knowledge, this is the first study to align micro-US imaging with biopsy-confirmed pathology at the slice level and to perform patient-level csPCa screening predictions using AI. This work evaluates the potential of AI-enhanced micro-US as a practical and accurate tool for prostate cancer screening.

2. Patients and methods

2.1. Patient population and data description

This retrospective study was approved by the University of Florida Institutional Review Board and included 145 men who underwent micro-ultrasound (micro-US)–guided prostate biopsy. All patients had clinical indications for biopsy, such as elevated PSA and/or an abnormal digital rectal examination (DRE). Most patients also underwent a systematic 12-core biopsy, including those without visible micro-US lesions. During biopsy, the operator recorded needle trajectories and target locations using micro-US images, enabling retrospective mapping of cores to corresponding regions. All patients provided informed consent. Ground truth for csPCa was established by histopathological analysis of all biopsy cores. Patients were classified as csPCa-positive if any core contained a Gleason score ≥3+4. Baseline patient characteristics are summarized in Table 1.

2.1.1. Micro-ultrasound imaging

Pre-biopsy micro-US scans were acquired using a 29 MHz transrectal system (ExactVu, Exact Imaging, Markham, Canada) by an experienced urologist (WGB) with four years of micro-US interpretation experience. Scans were recorded at 10 frames per second for up to 30 seconds, producing 200 to 300 2D micro-US slices per scan.

2.1.2. Clinical biomarkers

For each patient, we collected the following clinical biomarkers: PSA, DRE findings, age, and prostate volume. Prostate volume was estimated from the pre-biopsy micro-US scan. We used the MicroSegNet model (Jiang et al., 2024) to segment the prostate capsule on
|
https://arxiv.org/abs/2505.21355v1
|
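The methods text above compares two random forest arms: one trained on self-supervised autoencoder features of 2D micro-US slices, and one trained on the four clinical variables (PSA, DRE, prostate volume, age), each evaluated with five-fold cross-validation. The sketch below is our illustration of that setup, not the authors' code: the autoencoder is elided and its features are assumed precomputed, and the array shapes, latent dimensionality, and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Stand-ins for real data: hypothetical 128-d autoencoder features per
# 2D micro-US slice, and the four clinical variables per patient.
X_slices = rng.normal(size=(3000, 128))
y_slices = rng.integers(0, 2, size=3000)   # slice-level csPCa labels
X_clinical = rng.normal(size=(145, 4))     # PSA, DRE, volume, age
y_patients = rng.integers(0, 2, size=145)  # patient-level csPCa labels

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Slice-level model on deep imaging features (the AI-enhanced arm).
micro_us_auc = cross_val_score(
    RandomForestClassifier(n_estimators=500, random_state=0),
    X_slices, y_slices, cv=cv, scoring="roc_auc",
).mean()

# Baseline model on clinical screening variables.
clinical_auc = cross_val_score(
    RandomForestClassifier(n_estimators=500, random_state=0),
    X_clinical, y_patients, cv=cv, scoring="roc_auc",
).mean()

print(f"micro-US features AUROC: {micro_us_auc:.3f}")
print(f"clinical variables AUROC: {clinical_auc:.3f}")
```

A faithful reproduction would likely also group slices by patient when splitting folds (e.g., scikit-learn's GroupKFold) so that slices from one patient never appear in both training and test sets; the excerpt does not spell out that detail, so the sketch keeps a plain stratified split.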