text string | source string |
|---|---|
that would elevate the DRG weight beyond the primary diagnosis of colitis. Given these conditions, the DRG code is assigned based on the primary diagnosis of digestive disorders, specifically colitis, without any significant complications or comorbidities that would elevate it to a higher complexity category. Therefore... | https://arxiv.org/abs/2505.21908v1 |
criteria for an MCC. The presence of chest pain, ST elevations, treatment complexity, and follow-up needs justify a CC classification due to increased resource utilization and care complexity. </think> <answer> OTHER CIRCULATORY SYSTEM DIAGNOSES WITH CC </answer> Example from CoT-First <think> To determine the most ... | https://arxiv.org/abs/2505.21908v1 |
care and resource utilization. Why it might not be appropriate: •Condition managed with ibuprofen and colchicine—does not indicate severe complications. •Absence of heart failure or severe circulatory system condition to warrant a more specific DRG. 3. Other circulatory system diagnoses with MCC Why this DRG code mig... | https://arxiv.org/abs/2505.21908v1 |
arXiv:2505.21918v1 [cs.LG] 28 May 2025 IJABC: International Journal of Activity and Behavior Computing Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing Haruki Kai, Tsuyoshi Okita (Kyushu Institute of Technology) Abstract We developed a deep learning algorithm for human a... | https://arxiv.org/abs/2505.21918v1 |
modeling into HAR tasks, we explore the feasibility of building a HAR model based on NLP-oriented Transformers. However, the direct application of these efficient Transformer architectures to sensor data remains relatively unexplored, and their effectiveness requires empirical validation. The contributions of this st... | https://arxiv.org/abs/2505.21918v1 |
univariate time series that share the same embedding across all the series. It also uses a subseries-level patch design, segmenting the time series into subseries-level patches that serve as input tokens to the Transformer. Crossformer [32] uses an input which is embedded into a 2D vector array through the dime... | https://arxiv.org/abs/2505.21918v1 |
consists of the following components: Embedding with Linear Layer: A method that takes n-dimensional numerical data as input and obtains an embedded representation through a linear layer. In our approach, multi-dimensional sensor data is processed in a manner similar to natural language processing (NLP) models, where s... | https://arxiv.org/abs/2505.21918v1 |
input data into embedding vectors. In contrast, our method replaces the embedding layer with a linear layer to transform the input data x ∈ R^n into an embedding vector h ∈ R^d. The proposed embedding using a linear layer is defined as follows: h = Wx + b, where x ∈ R^n is the input n-dimensional numerical data, ... | https://arxiv.org/abs/2505.21918v1 |
vector H as input and generating the output for the respective dimension. During Pre-Training, cross-entropy loss is computed using the labels generated from the binning process. The loss L_i for each dimension i is expressed as: L_i = −Σ_{j=1}^{c} y_{i,j} log ŷ_{i,j}, where y_{i,j} ∈ {0,1} is the ground-truth label for class j in the i-th dimension... | https://arxiv.org/abs/2505.21918v1 |
Learning: As the downstream task, activity recognition is performed on five datasets: ADL [2], Opportunity [22], PAMAP2 [21], REALWORLD [24], and WISDM [11]. For each dataset, the data is segmented into sequences of length 300, and the task involves predicting a single activity label for each sequence. The label cor... | https://arxiv.org/abs/2505.21918v1 |
the linear layers in the output layer are adjusted to match the number of labels in the data. Training Loop: The prepared data is divided into batches of size 25 and input into the model. Cross-entropy loss is computed between the model output and the label data. The loss is averaged across sequences and dimensions to... | https://arxiv.org/abs/2505.21918v1 |
this experiment, the decoder model GPT-2 was employed. The objective of the next-token prediction task is to enable the model to learn the ability to predict the next data point by referencing only the past information up to any given point. In this experiment, the decoder version of the n-dimensional numerical process... | https://arxiv.org/abs/2505.21918v1 |
using only 1-dimensional acceleration data from the X-axis for training. Both models are based on DistilBERT and were pre-trained on the capture24 dataset. After Pre-Training, downstream tasks were conducted on respective datasets, and performance was evaluated. The evaluation metrics used were accuracy and weighted ... | https://arxiv.org/abs/2505.21918v1 |
data is converted into discrete token IDs, which are then transformed into representation vectors through an embedding layer. This process can potentially disrupt the continuity and relationships inherent in the numerical data. Furthermore, due to the architectural design of the Vanilla Transformer, the embedding l... | https://arxiv.org/abs/2505.21918v1 |
using the MLM task demonstrated superior performance on the Opportunity dataset, the reasons for this high performance require further investigation. Specifically, comparative analyses with various other tasks represent an important direction for future research. | https://arxiv.org/abs/2505.21918v1 |
neither overly simplistic nor excessively complex is crucial when employing n-dimensional numerical data with Transformer models. 6 Model Comparison Table 4: Model comparison of parameters Model Model Parameters (Pre-training) Mode... | https://arxiv.org/abs/2505.21918v1 |
exhibit challenges in terms of training cost, CPU inference speed, and memory consumption. However, their ability to achieve superior accuracy through pretraining represents a significant advantage. On the other hand, ResNet18 requires no pretraining, has a lower training cost, and outperforms Transformer-based model... | https://arxiv.org/abs/2505.21918v1 |
and Chenshu Wu. HARGPT: Are LLMs zero-shot human activity recognizers?, 2024. [10] Im Y Jung. A review of privacy-preserving human and human activity recognition. International Journal on Smart Sensing and Intelligent Systems, 13(1):1–13, 2020. [11] Jennifer R Kwapisz, Gary M Weiss, and Samuel A Moore. Activity re... | https://arxiv.org/abs/2505.21918v1 |
imu-based human activity recognition datasets across varied configurations using mig har dataset. International Journal of Activity and Behavior Computing, 2024(2):1–21, 2024. | https://arxiv.org/abs/2505.21918v1 |
the capture-24 dataset, used for pretraining, is included in the table. Since this dataset is not used for the activity recognition task, the "Class" column is not applicable in this case. Only the "Train Samples" and "Validation Samples" are shown for this dataset. A.2 Downstream Algorithm Algorithm 3 Downstream Learn... | https://arxiv.org/abs/2505.21918v1 |
arXiv:2505.21919v1 [cs.ET] 28 May 2025 Towards Efficient Key-Value Cache Management for Prefix Prefilling in LLM Inference Yue Zhu, Hao Yu, Chen Wang, Zhuoran Liu, Eun Kyung Lee IBM T. J. Watson Research Center, Yorktown Heights, NY, USA yue.zhu@ibm.com, yuh@us.ibm.com, chen.wang1@ibm.com, zhuoran.liu@ibm.com, eunkyung... | https://arxiv.org/abs/2505.21919v1 |
pattern from published traces for real-world LLM-serving [9]. Our analysis reveals fundamentally different access patterns associated with KVC-usage from traditional key-value store workloads: (1) high temporal locality for recent tokens, (2) substantial initial token reusability across requests, and (3) the need for a... | https://arxiv.org/abs/2505.21919v1 |
patterns, we categorize accesses as sequential (two or more contiguous blocks) or non-sequential within each request. Fig. 2a shows the fraction of sequential blocks in a request over the one-hour span. On average, 86.8% of blocks within each request are sequential, enabling efficient key retrieval via range queries. B... | https://arxiv.org/abs/2505.21919v1 |
Latency Based on Real Trace (Redis=1). benchmarks, our experiments show minimal differences be- tween the two when evaluated on KVC workloads. For range queries in Fig. 3a, SHERMAN outperforms CHIME by 10.3% on average, excluding the first 10 minutes for warm up. Conversely, for search latency in Fig. 3b, CHIME slightl... | https://arxiv.org/abs/2505.21919v1 |
random access patterns in KVC prefix prefill workloads. To evaluate the efficiency of metadata management in existing KVC staging solutions, including Redis and state-of-the-art key-value stores, we developed a benchmark that measures metadata efficiency using real-world application traces. Our evaluation demonstrates ... | https://arxiv.org/abs/2505.21919v1 |
FALCON: An ML Framework for Fully Automated Layout-Constrained Analog Circuit Design Asal Mehradfar, Xuzhe Zhao, Yilun Huang, Emir Ceyani, Yankai Yang, Shihao Han, Hamidreza Aghasi, Salman Avestimehr (University of Southern California; University of California, Irvine) mehradfa@usc.edu Abstract Designing analog circuits from ... | https://arxiv.org/abs/2505.21923v1 |
step [17], missing the opportunity to guide optimization with layout constraints. Finally, many available benchmarks are built on symbolic or synthetic simulations [18], lacking the fidelity and realism of commercial-grade design flows. As a result, current ML pipelines do not allow fully generalizable... | https://arxiv.org/abs/2505.21923v1 |
topologies. In contrast, FALCON unifies forward modeling and parameter inference in a single differentiable architecture that generalizes to unseen netlists. Layout-aware sizing and parasitic modeling have been explored to mitigate schematic-to-layout mismatch. Parasitic-aware methods [ 25] integrate pre-trained parasi... | https://arxiv.org/abs/2505.21923v1 |
circuit elements such as transistors, resistors, capacitors, or sources. Multi-terminal devices—such as transistors and baluns—are decomposed into multiple edges, and multiple components may connect the same node pair, resulting in heterogeneous, multi-edged graphs that preserve structural and functional diversity. 3 R... | https://arxiv.org/abs/2505.21923v1 |
a differentiable surrogate that enables gradient-based inference in the next stage. Stage 3: Layout-Aware Gradient Reasoning. Given y_target and a selected topology T*, we infer a parameter vector x* by minimizing a loss over the learned forward model f_θ. Specifically, we solve: x* = arg min_x L_perf(f_θ(T*, x), y_target) + λL_l... | https://arxiv.org/abs/2505.21923v1 |
circuit topologies. Micro F1 (identical to accuracy in the multiclass setting) reaches 99.57%, while macro metrics—averaged equally across classes—highlight robustness to class imbalance. These trends are reinforced by the per-class accuracy plot in Figure 3(c), where most topologies reach 100% accuracy. The confusion ... | https://arxiv.org/abs/2505.21923v1 |
we train the model using a masked mean squared error loss: L_masked = (1 / Σ_i m_i) Σ_{i=1}^{d} m_i · (ŷ_i − y_i)², (2) where m_i = 1 if the i-th metric is defined for the current sample, and zero otherwise. Model Architecture and Training. Each circuit is represented as an undirected multi-edge graph with voltage nets as nodes and circui... | https://arxiv.org/abs/2505.21923v1 |
to unseen architectures. Prediction quality is measured using standard regression metrics: coefficient of determination (R²), root mean squared error (RMSE), and mean absolute error (MAE), computed independently for each of the 16 performance metrics. We also report the mean relative error per metric, computed as ... | https://arxiv.org/abs/2505.21923v1 |
prioritize functionality, layout loss is softly gated by: g(L_perf) = 1 − σ(γ(L_perf − τ)), which attenuates layout penalties when performance error exceeds a threshold τ, encouraging the model to first achieve functionality before optimizing for layout compactness. We set τ = 0.05, γ = 50, and normalize layout area by 1 mm² to ... | https://arxiv.org/abs/2505.21923v1 |
per instance. An overview is shown in Figure 6. Evaluation. We evaluate Stage 3 on 9,500 test instances (500 per topology) using our gradient-based optimization pipeline. A design is considered converged if it meets both: (i) a predicted mean relative error below 10%, and (ii) a layout area under a topology-specific bo... | https://arxiv.org/abs/2505.21923v1 |
system-on-chip. In 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC) , volume 1, pages 38–42, 2020. [4]Xuyang Liu, Md. Hedayatullah Maktoomi, Mahdi Alesheikh, Payam Heydari, and Hamidreza Aghasi. A cmos 49–63-ghz phase-locked stepped-chirp fmcw radar transceiver. IEE... | https://arxiv.org/abs/2505.21923v1 |
on Computer Aided Design , 2024. [19] Chen-Chia Chang, Yikang Shen, Shaoze Fan, Jing Li, Shun Zhang, Ningyuan Cao, Yiran Chen, and Xin Zhang. Lamagic: Language-model-based topology generation for analog integrated circuits. arXiv preprint arXiv:2407.18269 , 2024. [20] Yao Lai, Sungyoung Lee, Guojin Chen, Souradip Podda... | https://arxiv.org/abs/2505.21923v1 |
and scaling of a cmos passive double-balanced mixer. In 2008 Joint 6th International IEEE Northeast Workshop on Circuits and Systems and TAISA Conference , pages 297–300, 2008. 11 [38] S. Chehrazi, R. Bagheri, and A.A. Abidi. Noise in passive fet mixers: a simple physical model. In Proceedings of the IEEE 2004 Custom I... | https://arxiv.org/abs/2505.21923v1 |
preprint arXiv:1412.6980 , 2014. [56] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research , 9(11), 2008. [57] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International c... | https://arxiv.org/abs/2505.21923v1 |
✔ (Cadence) ✔ ✔ ✔ LayoutCopilot [27] ✘ ✘ ✘ ✔ ✔ (Cadence) ✘ ✘ ✘ AnalogGym [18] ✘ ✔ ✘ ✘ ✘ (SPICE) ✘ ✔ ✔ AutoCkt [13] ✘ ✔ ✘ ✘ ✔ (Cadence) ✔ ✘ ✘ (incomplete) L2DC [12] ✘ ✔ ✘ ✘ ✔ (Cadence) ✘ ✘ ✘ CAN-RL [14] ✘ ✔ ✘ ✔ ✔ (Cadence) ✘ ✘ ✘ AnGeL. [17] ✔ ✔ ✔ ✘ ✘ (SPICE) ✘ ✘ ✘ FALCON (This work) ✔ ✔ ✔ ✔ ✔ (Cadence) ✔ ✔ ✔ Table 4: De... | https://arxiv.org/abs/2505.21923v1 |
specified performance characteristics Oscillation Frequency (OscF) Steady-state frequency at which the oscillator generates a periodic signal Tuning Range (TR) Range of achievable oscillation frequencies through variation of control voltages Output Power (OutP) Power delivered to the load PSAT Maximum output power leve... | https://arxiv.org/abs/2505.21923v1 |
and limited spurious rejection [ 30]. The SBPMixer employs a minimalist switching structure to perform frequency translation without active gain, enabling low power operation in applications with relaxed performance demands [ 38]. The parameters and performance metrics for these mixer topologies are listed in Table 7. ... | https://arxiv.org/abs/2505.21923v1 |
80k Lip [100–500] pH Lis [300–700] pH Lop [0.8–1.2] nH Los [400–800] pH Lm [50–250] pH WN1 [6–31] µm WN2 [10–35] µm B.4 Voltage Amplifiers (VAs) Voltage amplifiers (VAs) are fundamental components in analog circuit design, responsible for increasing signal amplitude while preserving waveform integrity. Effective V... | https://arxiv.org/abs/2505.21923v1 |
adopted in frequency synthesizers and PLLs [ 51]. The ColVCO uses an LC tank and capacitive feedback to achieve high frequency stability and low phase noise, making it ideal for precision RF communication and instrumentation [ 52]. The RVCO consists of cascaded delay stages forming a feedback loop, offering low power c... | https://arxiv.org/abs/2505.21923v1 |
here matches that of the Stage 1 experiments (Section 4), enabling consistent cross-stage evaluation. We fine-tune the GNN by freezing all encoder and message-passing layers and updating only the final output head ( output_mlp ). Fine-tuning is performed on the RVCO training set, which contains approximately 30,000 ins... | https://arxiv.org/abs/2505.21923v1 |
VV VV Square Size VV_SIZE 4.0 µm VV VV Spacing VV_SPACE 2.0 µm VV VV to Edge Spacing VV_EDGE_MIN 1.0 µm Resistor (RX, CA, M1)RX Minimum Width WMIN 0.462 µm RX Maximum Width WMAX 5.0 µm RX Minimum Length LMIN 0.4 µm RX Maximum Length LMAX 5.0 µm CA Contact Size CA_SIZE 0.06 µm CA Contact Spacing CA_SPACE 0.10 µm CA CA t... | https://arxiv.org/abs/2505.21923v1 |
Physical Concept: This structure uses heavily doped N+ polysilicon overlaid with a silicide layer to reduce resistance. Current flows laterally through the poly-silicide film (see Figure 14(b)), and resistance is shaped by the aspect ratio of the layout as well as process-dependent corrections. Layer Physics Explanation... | https://arxiv.org/abs/2505.21923v1 |
GHz, the skin depth δ is approximately 0.41 µm; thus, using a metal layer thicker than 4δ (i.e., 1.6 µm) ensures efficient current flow. However, increasing thickness beyond this threshold yields diminishing returns in Q due to saturation in current penetration. Turn-to-turn spacing (S) affects both inductance and quality f... | https://arxiv.org/abs/2505.21923v1 |
layout (b) is DRC-compliant and physically realizable. The final design achieves a mean relative error of 1.3% compared to the target performance. (a) Designed DLNA schematic (b) Layout of designed DLNA Figure 16: Stage 3 results for a synthesized DLNA. The schematic (a) reflects optimized parameters to meet the target... | https://arxiv.org/abs/2505.21923v1 |
arXiv:2505.21926v1 [cs.CL] 28 May 2025Beyond Completion: A Foundation Model for General Knowledge Graph Reasoning Yin Hua♠, Zhiqiang Liu♠, Mingyang Chen♠, Zheng Fang♣, Chi Man Wong♣ ♢, Lingxiao Li♣,Chi Man VONG♢,Huajun Chen♠ ♡,Wen Zhang♠† ♠Zhejiang University ♣Shopee Pte.Ltd., ♢University of Macau ♡Zhejiang Key Laborat... | https://arxiv.org/abs/2505.21926v1 |
2022). In addition, prior work has largely been restricted to in-KG reasoning tasks, such as KG Completion (KGC), and has not adequately addressed the challenges posed by out-of-KG reasoning tasks, such as KGQA. Out-of-KG tasks require models to generalize beyond the explicit structure of KGs, incorporating both ... | https://arxiv.org/abs/2505.21926v1 |
are critical for contextual reasoning. Moreover, it focuses exclusively on in-KG reasoning tasks, neglecting out-of-KG tasks. Text-aware Knowledge Graph Completion While earlier studies emphasized KG structures, recent work explores textual information for improved reasoning. BLP and StAR enhance representation l... | https://arxiv.org/abs/2505.21926v1 |
whole KG is represented as G_sub = {E_sub, R_sub, T_sub, D_sub} with entities E_sub = {E_topic, E_option, E_other}, where E_topic represents the entity mentioned in the question q, E_option represents the entity mentioned in the options, and E_other encompasses entities within the subgraph that do not carry particular contextual significa... | https://arxiv.org/abs/2505.21926v1 |
function that initializes node representations conditioned on query q. It can be flexibly adapted for specific scenarios, as demonstrated in subsequent sections. H_node represents the node representations, H_edge is a learnable matrix for edge representations, and G denotes the graph structure. Detailed descriptions of ... | https://arxiv.org/abs/2505.21926v1 |
obtained via the parameter-free strategy. Specifically, each relation is initialized as an all-ones embedding, while the entity graph uses the textual embeddings X_e as the initial representations. By applying this sequential CMP update process, we generate the global semantic embeddings for relations R_g and entitie... | https://arxiv.org/abs/2505.21926v1 |
n log(1 − p(q, neg_ans)), (18) where q is the query prefix of the triple (h, r, ?), and ans is the tail entity t that makes (h, r, t) valid in the knowledge graph. Negative samples are generated by randomly selecting tail entities. MERRY is pre-trained on multiple hybrid KG datasets, which equips it with generalizable transfe... | https://arxiv.org/abs/2505.21926v1 |
of 12,102 multiple- choice questions. We follow the in-house split method from (Lin et al., 2019) for experiments and compare our results with several baseline models. We report Accuracy (Acc) on the CSQA dataset. For detailed information on datasets and metric computation formulas, refer to Appendix C and Appendix D, ... | https://arxiv.org/abs/2505.21926v1 |
(%) RoBERTa-Large 73.1 68.7; LLaMA-3-8b-instruct 72.9 71.9; RGCN 72.7 68.4; GconAttn 72.6 68.6; KagNet 73.5 69.0; RN 74.6 69.1; MHGRN 74.5 71.1; QA-GNN 76.5 73.4; GreaseLM 78.5 74.2; MERRY 78.6 74.9. Table 2: Performance comparison on CommonsenseQA in-house split (controlled experiments). 5.4 Main Results (RQ1) We compare MERRY ... | https://arxiv.org/abs/2505.21926v1 |
DTAF primarily enhances the understanding of structural information. Similarly, we conducted ablation experiments on the CSQA dataset, as shown in Table 3. An additional variant, "w/o Edge Scoring", sets all edge Edge Scoring DTAF IHdev-Acc. (%) IHtest-Acc. (%) ✓ ✓ 78.6 74.9 ✓ 77.7 75.0 71.4 70.7 Table 3: Ablation res... | https://arxiv.org/abs/2505.21926v1 |
fusion mechanisms. Additionally, we proposed a flexible edge scoring mechanism to adapt to diverse downstream tasks. Experiments across 28 datasets demonstrate MERRY’s strong generalization capabilities in in-KG tasks, such as zero-shot KGC, and its adaptability to out-of-KG tasks, such as KGQA, highlighting its p... | https://arxiv.org/abs/2505.21926v1 |
question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1295–1309, Online. Association for Computational Linguistics. Mikhail Galkin, Max Berrendorf, and Charles Tapley Hoyt. 2022a. An open challenge for inductive link prediction on knowled... | https://arxiv.org/abs/2505.21926v1 |
Punit Singh Koura, Puxin Xu, Qing He, Qingxiao Dong, Ragavan Srinivasan, Raj Ganapathy, Ramon Calderer, Ricardo Silveira Cabral, Robert Stojnic, Roberta Raileanu, Rohan Maheswari, Rohit Girdhar, Rohit Patel, Romain Sauvestre, Ronnie Polidoro, Roshan Sumbaly, Ross Taylor, Ruan Silva, Rui Hou, Rui Wang, Saghar Hosseini... | https://arxiv.org/abs/2505.21926v1 |
Chawla, Kyle Huang, Lailin Chen, Lakshya Garg, Lavender A, Leandro Silva, Lee Bell, Lei Zhang, Liangpeng Guo, Licheng Yu, Liron Moshkovich, Luca Wehrstedt, Madian Khabsa, Manav Avalani, Manish Bhatt, Martynas Mankus, Matan Hasson, Matthew Lennie, Matthias Reso, Maxim Groshev, Maxim Naumov, Maya Lathi, Meghan Keneally... | https://arxiv.org/abs/2505.21926v1 |
Lee, Chanyoung Chung, and Joyce Jiyoung Whang. 2023. Ingram: Inductive knowledge graph embedding via relation graphs. Preprint , arXiv:2305.19987. Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and Chunyuan Li. 2024. Llava-next-interleave: Tackling multi-image, video, and 3d in large multimod... | https://arxiv.org/abs/2505.21926v1 |
David G. T. Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. 2017. A simple neural network module for relational reasoning. Preprint, arXiv:1706.01427. Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2017. Modeling relational... | https://arxiv.org/abs/2505.21926v1 |
comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2346–2357, Florence, Italy. Association for Computational Linguistics. Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. 2021. QA-GNN: Reasoning with language models and ... | https://arxiv.org/abs/2505.21926v1 |
and NELL-995 (Xiong et al., 2018), and 2 datasets are derived from the ILPC (Galkin et al., 2022a). These datasets are designed such that the training and testing graphs maintain consistent relation types. Additionally, we incorporate 13 datasets from the InGram framework (Lee et al., 2023) to further assess inductiv... | https://arxiv.org/abs/2505.21926v1 |
(25) where Q represents the set of all questions. A higher Accuracy score indicates the model’s effectiveness in selecting the correct answer from the set of options. E Full Results The full, per-dataset results of MRR and Hits@10 of the zero-shot inference of the pre-trained MERRY model, the pre-trained ULTRA model, ... | https://arxiv.org/abs/2505.21926v1 |
in the graph used for training, validation, or testing. "Valid" and "Test" refer to the triples that need to be predicted in the validation and test sets, respectively, within the corresponding graphs. Group Dataset Supervised SOTA ULTRA(3g) MERRY MRR Hits@10 MRR Hits@10 MRR Hits@10 IndE(WN) WN:v1 0.741 0.826 0.593 0.7... | https://arxiv.org/abs/2505.21926v1 |
Subspecialty-Specific Foundation Model for Intelligent Gastrointestinal Pathology Lianghui Zhu, Xitong Ling, Minxi Ouyang, Xiaoping Liu, Mingxi Fu, Tian Guan, Fanglei Fu, Xuanyu Wang, Maomao Zeng, Mingxi Zhu, Yibo Jin, Liming Liu, Song Duan, Qiming He, Yizhi Wang, Luxi Xie, Houqiang Li, Yonghong He, Sufang Tian 1. Institute of Biophar... | https://arxiv.org/abs/2505.21928v1 |
assessment that requires integration of both local morphological features and global architectural patterns. Current computational approaches including graph neural networks and vision transformers present promising solutions to these limitations through their ability to explicitly encode spatial dependencies between distant tissue regions. Clinically, ou... | https://arxiv.org/abs/2505.21928v1 |
classification Differentiation between poorly differentiated adenocarcinoma and poorly differentiated squamous cell carcinoma (2 classes, LA-LS). Differentiating between poorly differentiated adenocarcinoma and poorly differentiated squamous cell carcinoma in the gastrointestinal tract presents a major diagnostic challenge. This dataset comprised 384 W... | https://arxiv.org/abs/2505.21928v1 |
From Reasoning to Learning: A Survey on Hypothesis Discovery and Rule Learning with Large Language Models Kaiyu He kaiyu.he@utdallas.edu Department of Computer Science University of Texas at Dallas Zhiyu Chen Zhiyu.Chen2@utdallas.edu Department of Computer Science University of Texas at Dallas Abstract Since the adve... | https://arxiv.org/abs/2505.21935v1 |
al. (2024a) RAG Method Hu et al. (2024); Yang et al. (2025; 2024b); Xiong et al. (2024) Chai et al. (2024) Human supported Zhao et al. (2024); Si et al. (2024); Pu et al. (2025) Formal Language Hypothesis W/ Formal observation Young et al. (2022); Nguyen et al. (2023) W/ NL observation Cheng et al. (2024); Wang et al. (20... | https://arxiv.org/abs/2505.21935v1 |
deduction, and induction can be iteratively used to refine more robust hypotheses. For each stage, we discuss methods, benchmarks, evaluations, and identify limitations and future directions. A high-level taxonomy guiding this survey is shown in Figure 1. 2 Background Before LLMs, most AI systems stored knowledge as ha... | https://arxiv.org/abs/2505.21935v1 |
instruments, researchers continue to iterate this loop in new domains, validating and refining theories as new data emerges. Today, there is growing interest in whether LLMs can autonomously generate, apply, and validate hypotheses from natural language represented observations, mirroring this iterative process to achi... | https://arxiv.org/abs/2505.21935v1 |
abduction can lead us to a fresh explanatory hypothesis with new observations, e.g., “Swans’ color depends on their habitat,” or “All swans in Texas are white,” which introduces new ideas and is not a case of induction. Thus, inductive reasoning verifies or refines existing hypotheses (in terms of confidence) based on ... | https://arxiv.org/abs/2505.21935v1 |
rule-bound. After real-world entities are encoded as explicit literals, precise inference rules yield provably correct and sound conclusions, making these systems well suited to deductive reasoning. Yet the encoding process strips away many subtle semantic relationships and commonsense knowledge, limiting the system’... | https://arxiv.org/abs/2505.21935v1 |
n}, that we aim to explain. Let h represent the generated explanation or hypothesis. The hypothesis generation task can be defined as generating an h such that: h |= (o1 ∧ o2 ∧ ··· ∧ on) This notation means that h logically entails the observations. In other words, assuming h holds, it guarantees that all observations o1 ∧ o2 ∧ ··· ∧... | https://arxiv.org/abs/2505.21935v1 |
chemistry papers from 2024: experts first segment each paper into background, inspiration, and hypothesis; an LLM-based multi-agent system (MOOSE-Chem) then retrieves relevant snippets, drafts hypotheses, and scores them for originality. A similar pipeline appears in Yang et al. (2024b), where 50 conference papers ar... | https://arxiv.org/abs/2505.21935v1 |
When observations are encoded in a formal language, dedicated formal language solvers typically yield clear, white-box solutions that outperform language models. Consequently, using an LLM for these tasks is generally not preferred. Nevertheless, a few early studies in the LLM era have explored this approach. For examp... | https://arxiv.org/abs/2505.21935v1 |
entirely convincing. Therefore, alternative evaluation strategies are needed. Implicit Prediction-based Evaluation : Early benchmarks often relied on question-answering (QA) tasks that required the model to implicitly form a hypothesis to answer a question (Sinha et al., 2019; Weston et al., 2015). For example, conside... | https://arxiv.org/abs/2505.21935v1 |
allows us to verify correctness deterministically. For example, Bowen et al. (2024) designed formal representations for synthetic grouping tasks to evaluate formal language hypothesis generation. Hua et al. (2025) constructed their benchmark based on deterministic regular functions, providing a procedural framework for evaluating ... | https://arxiv.org/abs/2505.21935v1 |
structured feedback mechanisms, or hybrid evaluation frameworks integrating automated and expert evaluations, merit exploration. Secondly, bridging the gap between formal and natural language hypothesis generation is crucial. Leveraging code as an intermediate representation offers a promising path forward, combining evaluative... | https://arxiv.org/abs/2505.21935v1 |
more complex ones, ultimately arriving at the correct answer. In another approach, Ling et al. (2023) design a pipeline that supervises the correctness of each reasoning step during hypothesis application. First, the LLM indexes all premises; then it is asked to label the minimal set of premises required to derive new... | https://arxiv.org/abs/2505.21935v1 |
studied, the capability for hypothesis application remains significantly underexplored. According to Sun et al. (2024), hypothesis application involves inferential rule-following, requiring models to consistently apply given hypotheses to derive novel knowledge in unfamiliar domains. Robust hypothesis application is cr... | https://arxiv.org/abs/2505.21935v1 |
Hypothesis Validation Prompt-Based Method: Lampinen et al. (2022); Sun et al. (2024) employ a few-shot prompting approach for hypothesis validation. In this method, case triplets, consisting of an observation, a hypothesis, and its corresponding validity, are provided to the model, which then answers a hypothesis valid... | https://arxiv.org/abs/2505.21935v1 |
models to choose the best explanatory hypothesis and enabling adaptation to hypothesis-generation tasks evaluated against ground-truth explanations. Similarly, Jiang et al. (2023) present the BRAINTEASER benchmark of about 1.1k lateral-thinking puzzles, each offering a question with multiple-choice answers, one that de... | https://arxiv.org/abs/2505.21935v1 |
and induction into a unified learning loop remains both challenging and largely understudied, yet it is the ultimate goal for constructing end-to-end agents capable of scientific discovery. Despite a few studies that acknowledge the interdependence among reasoning types and allow models to refine hypotheses iteratively, they still overlook two dec... | https://arxiv.org/abs/2505.21935v1 |
observations that continuously propose new insights. Instead, once an initial hypothesis is formed, we proactively recall our memories or explore further to gather new observations that either strengthen or weaken the hypothesis, allowing us to verify and refine our ideas. Given a hypothesis, Li et al. (2024) and Jung ... | https://arxiv.org/abs/2505.21935v1 |
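The verify-and-refine cycle described here can be sketched as a loop that gathers fresh observations and revises the hypothesis whenever the evidence weakens it. All names below (`gather`, `supports`, `revise`) are hypothetical placeholders for an environment probe, a validity check, and a hypothesis-rewriting step, not functions from the cited works.

```python
# Sketch of iterative hypothesis refinement: actively collect new observations
# and rewrite the current hypothesis whenever an observation contradicts it.

def refine(hypothesis, gather, supports, revise, max_rounds=5):
    for _ in range(max_rounds):
        observation = gather(hypothesis)
        if observation is None:
            break  # nothing new to learn; the hypothesis stands
        if not supports(observation, hypothesis):
            # Weakening evidence triggers a revision rather than a restart.
            hypothesis = revise(hypothesis, observation)
    return hypothesis
```

The loop mirrors the point made in the text: the hypothesis is not regenerated from scratch on every observation, but retained and refined as evidence accumulates.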
and evidence collection. For example, Xu et al. (2023) construct a Minecraft-like world in which a “vandal” agent performs up to 26 types of actions (e.g., moving, eating, crafting) to achieve a hidden goal (such as collecting lava or crafting a particular item) and leaves behind tracks as evidence. A detective agent—d... | https://arxiv.org/abs/2505.21935v1
introduced by Wang et al. (2022a) often rely on relatively straightforward tasks that do not genuinely necessitate novel hypothesis formation. Conversely, tasks proposed by He et al. (2024), despite aiming to encourage creative hypothesis formation, often yield simplistic, toy-like hypotheses with limited applicability... | https://arxiv.org/abs/2505.21935v1 |
of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pp. 65–72, Ann Arbor, Michigan, June 2005. Association for Computational Linguistics. URL https://aclanthology.org/W05-0909/. Adib Bazgir, Ramachandra Praneeth Madugula, and Yuwen Zhang. Agentic hypothesis: As... | https://arxiv.org/abs/2505.21935v1
reasoning abilities of LLMs, 2024. URL https://arxiv.org/abs/2408.00114. François Chollet. On the measure of intelligence, 2019. URL https://arxiv.org/abs/1911.01547. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, ... | https://arxiv.org/abs/2505.21935v1
Boyd-Graber, and Naoaki Okazaki (eds.), Findings of the Association for Computational Linguistics: ACL 2023, pp. 1049–1065, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.67. URL https://aclanthology.org/2023.findings-acl.67/. Kang il Lee, Hyukhun Koh, Dong... | https://arxiv.org/abs/2505.21935v1
//aclanthology.org/W04-1013/. Zhan Ling, Yunhao Fang, Xuanlin Li, Zhiao Huang, Mingu Lee, Roland Memisevic, and Hao Su. Deductive verification of chain-of-thought reasoning. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (eds.), Advances in Neural Information Processing Systems, volume 36, pp. 36407–36... | https://arxiv.org/abs/2505.21935v1
. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Pierre Isabelle, Eugene Charniak, and Dekang Lin (eds.), Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311–318, Philadelphia, Pennsylvania, USA, J... | https://arxiv.org/abs/2505.21935v1
, pp. 8614–8630, Mexico City, Mexico, June 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.naacl-long.476. URL https://aclanthology.org/2024.naacl-long.476/ . Xiaoming Shi, Siqiao Xue, Kangrui Wang, Fan Zhou, James Zhang, Jun Zhou, Chenhao Tan, and Hongyuan Mei. Language models can improve event ... | https://arxiv.org/abs/2505.21935v1 |
Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks, 2015. URL https://arxiv.org/abs/1502.05698. Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Mark Riedl, and Yejin Choi. Reframing human-AI collaboration for generating free-text explanations. In Marine Carpuat, Marie-... | https://arxiv.org/abs/2505.21935v1
2023.findings-emnlp.160. URL https://aclanthology.org/2023.findings-emnlp.160/ . Hongming Zhang, Xinran Zhao, and Yangqiu Song. WinoWhy: A deep diagnosis of essential commonsense knowledge for answering Winograd schema challenge. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (eds.), Proceedings of t... | https://arxiv.org/abs/2505.21935v1 |