arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2404.02544 | Semi-Supervised Unconstrained Head Pose Estimation in the Wild | ['Huayi Zhou', 'Fei Jiang', 'Jin Yuan', 'Yong Rui', 'Hongtao Lu', 'Kui Jia'] | ['cs.CV'] | Existing research on unconstrained in-the-wild head pose estimation suffers from the flaws of its datasets, which consist of either numerous samples by non-realistic synthesis or constrained collection, or small-scale natural images yet with plausible manual annotations. This makes fully-supervised solutions compromise... | 2024-04-03T08:01:00Z | under review. Semi-Supervised Unconstrained Head Pose Estimation | null | null | null | null | null | null | null | null | null |
2404.02684 | Cross-Architecture Transfer Learning for Linear-Cost Inference Transformers | ['Sehyun Choi'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Recently, multiple architectures has been proposed to improve the efficiency of the Transformer Language Models through changing the design of the self-attention block to have a linear-cost inference (LCI). A notable approach in this realm is the State-Space Machines (SSMs) architecture, which showed on-par performance... | 2024-04-03T12:27:36Z | Preprint | null | null | null | null | null | null | null | null | null |
2404.02822 | Identifying Climate Targets in National Laws and Policies using Machine Learning | ['Matyas Juhasz', 'Tina Marchand', 'Roshan Melwani', 'Kalyan Dutia', 'Sarah Goodenough', 'Harrison Pim', 'Henry Franks'] | ['cs.CY', 'cs.CL', 'cs.LG'] | Quantified policy targets are a fundamental element of climate policy, typically characterised by domain-specific and technical language. Current methods for curating comprehensive views of global climate policy targets entail significant manual effort. At present there are few scalable methods for extracting climate t... | 2024-04-03T15:55:27Z | null | null | null | Identifying Climate Targets in National Laws and Policies using Machine Learning | ['Matyas Juhasz', 'Tina Marchand', 'Roshan Melwani', 'Kalyan Dutia', 'Sarah Goodenough', 'Harrison Pim', 'Henry Franks'] | 2024 | arXiv.org | 0 | 26 | ['Computer Science'] |
2404.02827 | BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models | ['Qijun Luo', 'Hengxu Yu', 'Xiao Li'] | ['cs.LG'] | This work presents BAdam, an optimization method that leverages the block coordinate descent (BCD) framework with Adam's update rule. BAdam offers a memory efficient approach to the full parameter finetuning of large language models. We conduct a theoretical convergence analysis for BAdam in the deterministic case. Exp... | 2024-04-03T15:59:42Z | Accepted for Publication in Conference on Neural Information Processing Systems, 2024 | null | null | null | null | null | null | null | null | null |
2404.02882 | Linear Attention Sequence Parallelism | ['Weigao Sun', 'Zhen Qin', 'Dong Li', 'Xuyang Shen', 'Yu Qiao', 'Yiran Zhong'] | ['cs.LG', 'cs.CL'] | Sequence parallelism (SP) serves as a prevalent strategy to handle long sequences that exceed the memory limit of a single device. However, for linear sequence modeling methods like linear attention, existing SP approaches do not take advantage of their right-product-first feature, resulting in sub-optimal communicatio... | 2024-04-03T17:33:21Z | Accepted by TMLR, 23 pages | null | null | null | null | null | null | null | null | null |
2404.02883 | On the Scalability of Diffusion-based Text-to-Image Generation | ['Hao Li', 'Yang Zou', 'Ying Wang', 'Orchid Majumder', 'Yusheng Xie', 'R. Manmatha', 'Ashwin Swaminathan', 'Zhuowen Tu', 'Stefano Ermon', 'Stefano Soatto'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Scaling up model and data size has been quite successful for the evolution of LLMs. However, the scaling law for the diffusion based text-to-image (T2I) models is not fully explored. It is also unclear how to efficiently scale the model for better performance at reduced cost. The different training settings and expensi... | 2024-04-03T17:34:28Z | CVPR2024 | null | null | null | null | null | null | null | null | null |
2404.02905 | Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction | ['Keyu Tian', 'Yi Jiang', 'Zehuan Yuan', 'Bingyue Peng', 'Liwei Wang'] | ['cs.CV', 'cs.AI'] | We present Visual AutoRegressive modeling (VAR), a new generation paradigm that redefines the autoregressive learning on images as coarse-to-fine "next-scale prediction" or "next-resolution prediction", diverging from the standard raster-scan "next-token prediction". This simple, intuitive methodology allows autoregres... | 2024-04-03T17:59:53Z | Demo website: https://var.vision/ | null | null | null | null | null | null | null | null | null |
2404.02948 | PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models | ['Fanxu Meng', 'Zhaohui Wang', 'Muhan Zhang'] | ['cs.LG', 'cs.AI'] | To parameter-efficiently fine-tune (PEFT) large language models (LLMs), the low-rank adaptation (LoRA) method approximates the model changes $\Delta W \in \mathbb{R}^{m \times n}$ through the product of two matrices $A \in \mathbb{R}^{m \times r}$ and $B \in \mathbb{R}^{r \times n}$, where $r \ll \min(m, n)$, $A$ is in... | 2024-04-03T15:06:43Z | NeurIPS 2024 spotlight | null | null | null | null | null | null | null | null | null |
2404.03022 | BCAmirs at SemEval-2024 Task 4: Beyond Words: A Multimodal and Multilingual Exploration of Persuasion in Memes | ['Amirhossein Abaskohi', 'Amirhossein Dabiriaghdam', 'Lele Wang', 'Giuseppe Carenini'] | ['cs.CL', 'cs.CV', 'cs.IT', 'cs.LG', 'math.IT'] | Memes, combining text and images, frequently use metaphors to convey persuasive messages, shaping public opinion. Motivated by this, our team engaged in SemEval-2024 Task 4, a hierarchical multi-label classification task designed to identify rhetorical and psychological persuasion techniques embedded within memes. To t... | 2024-04-03T19:17:43Z | 12 pages, 5 tables, 2 figures, Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024) @ NAACL 2024 | null | null | null | null | null | null | null | null | null |
2404.03361 | nicolay-r at SemEval-2024 Task 3: Using Flan-T5 for Reasoning Emotion Cause in Conversations with Chain-of-Thought on Emotion States | ['Nicolay Rusnachenko', 'Huizhi Liang'] | ['cs.CL'] | Emotion expression is one of the essential traits of conversations. It may be self-related or caused by another speaker. The variety of reasons may serve as a source of the further emotion causes: conversation history, speaker's emotional state, etc. Inspired by the most recent advances in Chain-of-Thought, in this wor... | 2024-04-04T11:03:33Z | Ranked 3rd-4th place (F1-proportional) and 5th place (F1-strict) in SemEval'24 Task 3, Subtask 1, to appear in SemEval-2024 proceedings | null | null | nicolay-r at SemEval-2024 Task 3: Using Flan-T5 for Reasoning Emotion Cause in Conversations with Chain-of-Thought on Emotion States | ['Nicolay Rusnachenko', 'Huizhi Liang'] | 2024 | International Workshop on Semantic Evaluation | 2 | 8 | ['Computer Science'] |
2404.03413 | MiniGPT4-Video: Advancing Multimodal LLMs for Video Understanding with Interleaved Visual-Textual Tokens | ['Kirolos Ataallah', 'Xiaoqian Shen', 'Eslam Abdelrahman', 'Essam Sleiman', 'Deyao Zhu', 'Jian Ding', 'Mohamed Elhoseiny'] | ['cs.CV'] | This paper introduces MiniGPT4-Video, a multimodal Large Language Model (LLM) designed specifically for video understanding. The model is capable of processing both temporal visual and textual data, making it adept at understanding the complexities of videos. Building upon the success of MiniGPT-v2, which excelled in t... | 2024-04-04T12:46:01Z | 6 pages, 8 figures | null | null | MiniGPT4-Video: Advancing Multimodal LLMs for Video Understanding with Interleaved Visual-Textual Tokens | ['Kirolos Ataallah', 'Xiaoqian Shen', 'Eslam Abdelrahman', 'Essam Sleiman', 'Deyao Zhu', 'Jian Ding', 'Mohamed Elhoseiny'] | 2024 | arXiv.org | 79 | 33 | ['Computer Science'] |
2404.03428 | Edisum: Summarizing and Explaining Wikipedia Edits at Scale | ['Marija Šakota', 'Isaac Johnson', 'Guosheng Feng', 'Robert West'] | ['cs.CL'] | An edit summary is a succinct comment written by a Wikipedia editor explaining the nature of, and reasons for, an edit to a Wikipedia page. Edit summaries are crucial for maintaining the encyclopedia: they are the first thing seen by content moderators and they help them decide whether to accept or reject an edit. Addi... | 2024-04-04T13:15:28Z | null | null | null | null | null | null | null | null | null | null |
2404.03482 | AdaGlimpse: Active Visual Exploration with Arbitrary Glimpse Position and Scale | ['Adam Pardyl', 'Michał Wronka', 'Maciej Wołczyk', 'Kamil Adamczewski', 'Tomasz Trzciński', 'Bartosz Zieliński'] | ['cs.CV'] | Active Visual Exploration (AVE) is a task that involves dynamically selecting observations (glimpses), which is critical to facilitate comprehension and navigation within an environment. While modern AVE methods have demonstrated impressive performance, they are constrained to fixed-scale glimpses from rigid grids. In ... | 2024-04-04T14:35:49Z | ECCV 2024 | null | 10.1007/978-3-031-72664-4_7 | null | null | null | null | null | null | null |
2404.03528 | BanglaAutoKG: Automatic Bangla Knowledge Graph Construction with Semantic Neural Graph Filtering | ['Azmine Toushik Wasi', 'Taki Hasan Rafi', 'Raima Islam', 'Dong-Kyu Chae'] | ['cs.CL', 'cs.IR', 'cs.LG', 'cs.NE', 'cs.SI'] | Knowledge Graphs (KGs) have proven essential in information processing and reasoning applications because they link related entities and give context-rich information, supporting efficient information retrieval and knowledge discovery; presenting information flow in a very effective manner. Despite being widely used gl... | 2024-04-04T15:31:21Z | 7 pages, 3 figures. Accepted to LREC-COLING 2024. Read in ACL Anthology: https://aclanthology.org/2024.lrec-main.189/ | The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) | null | null | null | null | null | null | null | null |
2404.03592 | ReFT: Representation Finetuning for Language Models | ['Zhengxuan Wu', 'Aryaman Arora', 'Zheng Wang', 'Atticus Geiger', 'Dan Jurafsky', 'Christopher D. Manning', 'Christopher Potts'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Parameter-efficient finetuning (PEFT) methods seek to adapt large neural models via updates to a small number of weights. However, much prior interpretability work has shown that representations encode rich semantic information, suggesting that editing representations might be a more powerful alternative. We pursue thi... | 2024-04-04T17:00:37Z | preprint | null | null | ReFT: Representation Finetuning for Language Models | ['Zhengxuan Wu', 'Aryaman Arora', 'Zheng Wang', 'Atticus Geiger', 'Daniel Jurafsky', 'Christopher D. Manning', 'Christopher Potts'] | 2024 | Neural Information Processing Systems | 72 | 109 | ['Computer Science'] |
2404.03608 | Sailor: Open Language Models for South-East Asia | ['Longxu Dou', 'Qian Liu', 'Guangtao Zeng', 'Jia Guo', 'Jiahui Zhou', 'Wei Lu', 'Min Lin'] | ['cs.CL', 'cs.AI'] | We present Sailor, a family of open language models ranging from 0.5B to 7B parameters, tailored for South-East Asian (SEA) languages. These models are continually pre-trained from Qwen1.5, a great language model for multilingual use cases. From Qwen1.5, Sailor models accept 200B to 400B tokens, primarily covering the ... | 2024-04-04T17:31:32Z | Code is available at https://github.com/sail-sg/sailor-llm | null | null | null | null | null | null | null | null | null |
2404.03820 | CantTalkAboutThis: Aligning Language Models to Stay on Topic in Dialogues | ['Makesh Narsimhan Sreedhar', 'Traian Rebedea', 'Shaona Ghosh', 'Jiaqi Zeng', 'Christopher Parisien'] | ['cs.CL'] | Recent advancements in instruction-tuning datasets have predominantly focused on specific tasks like mathematical or logical reasoning. There has been a notable gap in data designed for aligning language models to maintain topic relevance in conversations - a critical aspect for deploying chatbots to production. We int... | 2024-04-04T22:31:58Z | null | null | null | null | null | null | null | null | null | null |
2404.03828 | Outlier-Efficient Hopfield Layers for Large Transformer-Based Models | ['Jerry Yao-Chieh Hu', 'Pei-Hsuan Chang', 'Robin Luo', 'Hong-Yu Chen', 'Weijian Li', 'Wei-Po Wang', 'Han Liu'] | ['cs.LG', 'cs.AI', 'stat.ML'] | We introduce an Outlier-Efficient Modern Hopfield Model (termed $\mathrm{OutEffHop}$) and use it to address the outlier inefficiency problem of {training} gigantic transformer-based models. Our main contribution is a novel associative memory model facilitating \textit{outlier-efficient} associative memory retrievals. I... | 2024-04-04T23:08:43Z | Accepted at ICML 2024; v2 updated to camera-ready version; Code available at https://github.com/MAGICS-LAB/OutEffHop; Models are on Hugging Face: https://huggingface.co/collections/magicslabnu/outeffhop-6610fcede8d2cda23009a98f | null | null | Outlier-Efficient Hopfield Layers for Large Transformer-Based Models | ['Jerry Yao-Chieh Hu', 'Pei-Hsuan Chang', 'Haozheng Luo', 'Hong-Yu Chen', 'Weijian Li', 'Wei-Po Wang', 'Han Liu'] | 2024 | International Conference on Machine Learning | 29 | 74 | ['Computer Science', 'Mathematics'] |
2404.04042 | Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer | ['Hele-Andra Kuulmets', 'Taido Purason', 'Agnes Luhtaru', 'Mark Fishel'] | ['cs.CL'] | This paper explores cost-efficient methods to adapt pretrained Large Language Models (LLMs) to new lower-resource languages, with a specific focus on Estonian. Leveraging the Llama 2 model, we investigate the impact of combining cross-lingual instruction-tuning with additional monolingual pretraining. Our results demon... | 2024-04-05T11:52:02Z | null | Findings of the Association for Computational Linguistics: NAACL 2024, pages 3309-3325 | null | null | null | null | null | null | null | null |
2404.04167 | Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model | ['Xinrun Du', 'Zhouliang Yu', 'Songyang Gao', 'Ding Pan', 'Yuyang Cheng', 'Ziyang Ma', 'Ruibin Yuan', 'Xingwei Qu', 'Jiaheng Liu', 'Tianyu Zheng', 'Xinchen Luo', 'Guorui Zhou', 'Wenhu Chen', 'Ge Zhang'] | ['cs.CL', 'cs.AI'] | In this study, we introduce CT-LLM, a 2B large language model (LLM) that illustrates a pivotal shift towards prioritizing the Chinese language in developing LLMs. Uniquely initiated from scratch, CT-LLM diverges from the conventional methodology by primarily incorporating Chinese textual data, utilizing an extensive co... | 2024-04-05T15:20:02Z | null | null | null | null | null | null | null | null | null | null |
2404.04316 | Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation | ['Xinyu Ma', 'Xu Chu', 'Zhibang Yang', 'Yang Lin', 'Xin Gao', 'Junfeng Zhao'] | ['cs.LG', 'cs.AI', 'cs.CL'] | With the increasingly powerful performances and enormous scales of pretrained models, promoting parameter efficiency in fine-tuning has become a crucial need for effective and efficient adaptation to various downstream tasks. One representative line of fine-tuning methods is Orthogonal Fine-tuning (OFT), which rigorous... | 2024-04-05T15:28:44Z | Appeared at ICML 2024 | null | null | null | null | null | null | null | null | null |
2404.04363 | Idea23D: Collaborative LMM Agents Enable 3D Model Generation from Interleaved Multimodal Inputs | ['Junhao Chen', 'Xiang Li', 'Xiaojun Ye', 'Chao Li', 'Zhaoxin Fan', 'Hao Zhao'] | ['cs.CV'] | With the success of 2D diffusion models, 2D AIGC content has already transformed our lives. Recently, this success has been extended to 3D AIGC, with state-of-the-art methods generating textured 3D models from single images or text. However, we argue that current 3D AIGC methods still do not fully unleash human creativ... | 2024-04-05T19:16:30Z | Accepted by COLING 2025 (The 31st International Conference on Computational Linguistics) Project Page: https://idea23d.github.io/ Code: https://github.com/yisuanwang/Idea23D | null | null | null | null | null | null | null | null | null |
2404.04465 | Aligning Diffusion Models by Optimizing Human Utility | ['Shufan Li', 'Konstantinos Kallidromitis', 'Akash Gokul', 'Yusuke Kato', 'Kazuki Kozuka'] | ['cs.CV'] | We present Diffusion-KTO, a novel approach for aligning text-to-image diffusion models by formulating the alignment objective as the maximization of expected human utility. Since this objective applies to each generation independently, Diffusion-KTO does not require collecting costly pairwise preference data nor traini... | 2024-04-06T01:23:23Z | 22 pages, 13 figures | null | null | null | null | null | null | null | null | null |
2404.04475 | Length-Controlled AlpacaEval: A Simple Way to Debias Automatic Evaluators | ['Yann Dubois', 'Balázs Galambosi', 'Percy Liang', 'Tatsunori B. Hashimoto'] | ['cs.LG', 'cs.AI', 'cs.CL', 'stat.ML'] | LLM-based auto-annotators have become a key component of the LLM development process due to their cost-effectiveness and scalability compared to human-based evaluation. However, these auto-annotators can introduce biases that are hard to remove. Even simple, known confounders such as preference for longer outputs remai... | 2024-04-06T02:29:02Z | COLM 2024 | null | null | Length-Controlled AlpacaEval: A Simple Way to Debias Automatic Evaluators | ['Yann Dubois', 'Balázs Galambosi', 'Percy Liang', 'Tatsunori Hashimoto'] | 2024 | arXiv.org | 403 | 27 | ['Computer Science', 'Mathematics'] |
2404.04575 | To Cool or not to Cool? Temperature Network Meets Large Foundation Models via DRO | ['Zi-Hao Qiu', 'Siqi Guo', 'Mao Xu', 'Tuo Zhao', 'Lijun Zhang', 'Tianbao Yang'] | ['cs.LG', 'cs.AI', 'math.OC'] | The temperature parameter plays a profound role during training and/or inference with large foundation models (LFMs) such as large language models (LLMs) and CLIP models. Particularly, it adjusts the logits in the softmax function in LLMs, which is crucial for next token generation, and it scales the similarities in th... | 2024-04-06T09:55:03Z | 41 pages, 10 figures, accepted by ICML2024 | null | null | null | null | null | null | null | null | null |
2404.04656 | Binary Classifier Optimization for Large Language Model Alignment | ['Seungjae Jung', 'Gunsoo Han', 'Daniel Wontae Nam', 'Kyoung-Woon On'] | ['cs.LG', 'cs.AI', 'cs.CL'] | In real-world services such as ChatGPT, aligning models based on user feedback is crucial for improving model performance. However, due to the simplicity and convenience of providing feedback, users typically offer only basic binary signals, such as 'thumbs-up' or 'thumbs-down'. Most existing alignment research, on the... | 2024-04-06T15:20:59Z | ACL 2025 main | null | null | Binary Classifier Optimization for Large Language Model Alignment | ['Seungjae Jung', 'Gunsoo Han', 'D. W. Nam', 'Kyoung-Woon On'] | 2024 | arXiv.org | 25 | 37 | ['Computer Science'] |
2404.04850 | How Many Languages Make Good Multilingual Instruction Tuning? A Case Study on BLOOM | ['Shaoxiong Ji', 'Pinzhen Chen'] | ['cs.CL'] | Instruction tuning a large language model with multiple languages can prepare it for multilingual downstream tasks. Nonetheless, it is yet to be determined whether having a handful of languages is sufficient, or whether the benefits increase with the inclusion of more. By fine-tuning large multilingual models on 1 to 5... | 2024-04-07T07:44:33Z | COLING 2025 | null | null | How Many Languages Make Good Multilingual Instruction Tuning? A Case Study on BLOOM | ['Shaoxiong Ji', 'Pinzhen Chen'] | 2024 | null | 0 | 0 | ['Computer Science'] |
2404.04991 | An Analysis of Malicious Packages in Open-Source Software in the Wild | ['Xiaoyan Zhou', 'Ying Zhang', 'Wenjia Niu', 'Jiqiang Liu', 'Haining Wang', 'Qiang Li'] | ['cs.CR', 'cs.SE'] | The open-source software (OSS) ecosystem suffers from security threats caused by malware. However, OSS malware research has three limitations: a lack of high-quality datasets, a lack of malware diversity, and a lack of attack campaign contexts. In this paper, we first build the largest dataset of 24,356 malicious packag... | 2024-04-07T15:25:13Z | null | the 55th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 2025 | null | An Analysis of Malicious Packages in Open-Source Software in the Wild | ['Xiaoyan Zhou', 'Ying Zhang', 'Wenjia Niu', 'Jiqiang Liu', 'Haining Wang', 'Qiang Li'] | 2024 | Dependable Systems and Networks | 1 | 66 | ['Computer Science'] |
2404.05014 | MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators | ['Shenghai Yuan', 'Jinfa Huang', 'Yujun Shi', 'Yongqi Xu', 'Ruijie Zhu', 'Bin Lin', 'Xinhua Cheng', 'Li Yuan', 'Jiebo Luo'] | ['cs.CV'] | Recent advances in Text-to-Video generation (T2V) have achieved remarkable success in synthesizing high-quality general videos from textual descriptions. A largely overlooked problem in T2V is that existing models have not adequately encoded physical knowledge of the real world, thus generated videos tend to have limit... | 2024-04-07T16:49:07Z | TPAMI 2025 | null | null | null | null | null | null | null | null | null |
2404.05022 | DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology | ['Valentin Koch', 'Sophia J. Wagner', 'Salome Kazeminia', 'Ece Sancar', 'Matthias Hehr', 'Julia Schnabel', 'Tingying Peng', 'Carsten Marr'] | ['cs.CV', 'cs.LG'] | In hematology, computational models offer significant potential to improve diagnostic accuracy, streamline workflows, and reduce the tedious work of analyzing single cells in peripheral blood or bone marrow smears. However, clinical adoption of computational models has been hampered by the lack of generalization due to... | 2024-04-07T17:25:52Z | null | null | null | DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology | ['Valentin Koch', 'S. Wagner', 'Salome Kazeminia', 'E. Sancar', 'Matthias Hehr', 'Julia A. Schnabel', 'Tingying Peng', 'Carsten Marr'] | 2024 | International Conference on Medical Image Computing and Computer-Assisted Intervention | 8 | 30 | ['Computer Science'] |
2404.05405 | Physics of Language Models: Part 3.3, Knowledge Capacity Scaling Laws | ['Zeyuan Allen-Zhu', 'Yuanzhi Li'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Scaling laws describe the relationship between the size of language models and their capabilities. Unlike prior studies that evaluate a model's capability via loss or benchmarks, we estimate the number of knowledge bits a model stores. We focus on factual knowledge represented as tuples, such as (USA, capital, Washingt... | 2024-04-08T11:11:31Z | null | null | null | null | null | null | null | null | null | null |
2404.05428 | Language Models on a Diet: Cost-Efficient Development of Encoders for Closely-Related Languages via Additional Pretraining | ['Nikola Ljubešić', 'Vít Suchomel', 'Peter Rupnik', 'Taja Kuzman', 'Rik van Noord'] | ['cs.CL'] | The world of language models is going through turbulent times, better and ever larger models are coming out at an unprecedented speed. However, we argue that, especially for the scientific community, encoder models of up to 1 billion parameters are still very much needed, their primary usage being in enriching large co... | 2024-04-08T11:55:44Z | null | null | null | Language Models on a Diet: Cost-Efficient Development of Encoders for Closely-Related Languages via Additional Pretraining | ['Nikola Ljubešić', 'Vít Suchomel', 'Peter Rupnik', 'Taja Kuzman', 'Rik van Noord'] | 2024 | SIGUL | 5 | 42 | ['Computer Science'] |
2404.05567 | Dense Training, Sparse Inference: Rethinking Training of Mixture-of-Experts Language Models | ['Bowen Pan', 'Yikang Shen', 'Haokun Liu', 'Mayank Mishra', 'Gaoyuan Zhang', 'Aude Oliva', 'Colin Raffel', 'Rameswar Panda'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Mixture-of-Experts (MoE) language models can reduce computational costs by 2-4$\times$ compared to dense models without sacrificing performance, making them more efficient in computation-bounded scenarios. However, MoE models generally require 2-4$\times$ times more parameters to achieve comparable performance to a den... | 2024-04-08T14:39:49Z | null | null | null | null | null | null | null | null | null | null |
2404.05590 | MedExpQA: Multilingual Benchmarking of Large Language Models for Medical Question Answering | ['Iñigo Alonso', 'Maite Oronoz', 'Rodrigo Agerri'] | ['cs.CL'] | Large Language Models (LLMs) have the potential of facilitating the development of Artificial Intelligence technology to assist medical experts for interactive decision support, which has been demonstrated by their competitive performances in Medical QA. However, while impressive, the required quality bar for medical a... | 2024-04-08T15:03:57Z | null | Artificial Intelligence in Medicine Volume 155, September 2024, 102938 | 10.1016/j.artmed.2024.102938 | null | null | null | null | null | null | null |
2404.05673 | CoReS: Orchestrating the Dance of Reasoning and Segmentation | ['Xiaoyi Bao', 'Siyang Sun', 'Shuailei Ma', 'Kecheng Zheng', 'Yuxin Guo', 'Guosheng Zhao', 'Yun Zheng', 'Xingang Wang'] | ['cs.CV'] | The reasoning segmentation task, which demands a nuanced comprehension of intricate queries to accurately pinpoint object regions, is attracting increasing attention. However, Multi-modal Large Language Models (MLLM) often find it difficult to accurately localize the objects described in complex reasoning contexts. We ... | 2024-04-08T16:55:39Z | Accepted at ECCV 2024 | null | null | null | null | null | null | null | null | null |
2404.05674 | MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation | ['Kunpeng Song', 'Yizhe Zhu', 'Bingchen Liu', 'Qing Yan', 'Ahmed Elgammal', 'Xiao Yang'] | ['cs.CV'] | In this paper, we present MoMA: an open-vocabulary, training-free personalized image model that boasts flexible zero-shot capabilities. As foundational text-to-image models rapidly evolve, the demand for robust image-to-image translation grows. Addressing this need, MoMA specializes in subject-driven personalized image... | 2024-04-08T16:55:49Z | null | null | null | null | null | null | null | null | null | null |
2404.05692 | Evaluating Mathematical Reasoning Beyond Accuracy | ['Shijie Xia', 'Xuefeng Li', 'Yixin Liu', 'Tongshuang Wu', 'Pengfei Liu'] | ['cs.CL'] | The leaderboard of Large Language Models (LLMs) in mathematical tasks has been continuously updated. However, the majority of evaluations focus solely on the final results, neglecting the quality of the intermediate steps. This oversight can mask underlying problems, such as logical errors or unnecessary steps in the r... | 2024-04-08T17:18:04Z | v2 is the AAAI 2025 camera ready version. Project site with code: https://github.com/GAIR-NLP/ReasonEval | null | null | null | null | null | null | null | null | null |
2404.05694 | Comprehensive Study on German Language Models for Clinical and Biomedical Text Understanding | ['Ahmad Idrissi-Yaghir', 'Amin Dada', 'Henning Schäfer', 'Kamyar Arzideh', 'Giulia Baldini', 'Jan Trienes', 'Max Hasin', 'Jeanette Bewersdorff', 'Cynthia S. Schmidt', 'Marie Bauer', 'Kaleb E. Smith', 'Jiang Bian', 'Yonghui Wu', 'Jörg Schlötterer', 'Torsten Zesch', 'Peter A. Horn', 'Christin Seifert', 'Felix Nensa', 'Je... | ['cs.CL', 'cs.AI', 'cs.LG'] | Recent advances in natural language processing (NLP) can be largely attributed to the advent of pre-trained language models such as BERT and RoBERTa. While these models demonstrate remarkable performance on general datasets, they can struggle in specialized domains such as medicine, where unique domain-specific termino... | 2024-04-08T17:24:04Z | Accepted at LREC-COLING 2024 | null | null | null | null | null | null | null | null | null |
2404.05719 | Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs | ['Keen You', 'Haotian Zhang', 'Eldon Schoop', 'Floris Weers', 'Amanda Swearngin', 'Jeffrey Nichols', 'Yinfei Yang', 'Zhe Gan'] | ['cs.CV', 'cs.CL', 'cs.HC'] | Recent advancements in multimodal large language models (MLLMs) have been noteworthy, yet, these general-domain MLLMs often fall short in their ability to comprehend and interact effectively with user interface (UI) screens. In this paper, we present Ferret-UI, a new MLLM tailored for enhanced understanding of mobile U... | 2024-04-08T17:55:44Z | null | null | null | null | null | null | null | null | null | null |
2404.05829 | SambaLingo: Teaching Large Language Models New Languages | ['Zoltan Csaki', 'Bo Li', 'Jonathan Li', 'Qiantong Xu', 'Pian Pawakapan', 'Leon Zhang', 'Yun Du', 'Hengyu Zhao', 'Changran Hu', 'Urmish Thakker'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Despite the widespread availability of LLMs, there remains a substantial gap in their capabilities and availability across diverse languages. One approach to address these issues has been to take an existing pre-trained LLM and continue to train it on new languages. While prior works have experimented with language ada... | 2024-04-08T19:48:36Z | 23 pages | null | null | null | null | null | null | null | null | null |
2404.05892 | Eagle and Finch: RWKV with Matrix-Valued States and Dynamic Recurrence | ['Bo Peng', 'Daniel Goldstein', 'Quentin Anthony', 'Alon Albalak', 'Eric Alcaide', 'Stella Biderman', 'Eugene Cheah', 'Xingjian Du', 'Teddy Ferdinan', 'Haowen Hou', 'Przemysław Kazienko', 'Kranthi Kiran GV', 'Jan Kocoń', 'Bartłomiej Koptyra', 'Satyapriya Krishna', 'Ronald McClelland Jr.', 'Jiaju Lin', 'Niklas Muennigho... | ['cs.CL', 'cs.AI'] | We present Eagle (RWKV-5) and Finch (RWKV-6), sequence models improving upon the RWKV (RWKV-4) architecture. Our architectural design advancements include multi-headed matrix-valued states and a dynamic recurrence mechanism that improve expressivity while maintaining the inference efficiency characteristics of RNNs. We... | 2024-04-08T22:20:59Z | null | null | null | null | null | null | null | null | null | null |
2404.05961 | LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders | ['Parishad BehnamGhader', 'Vaibhav Adlakha', 'Marius Mosbach', 'Dzmitry Bahdanau', 'Nicolas Chapados', 'Siva Reddy'] | ['cs.CL', 'cs.AI'] | Large decoder-only language models (LLMs) are the state-of-the-art models on most of today's NLP tasks and benchmarks. Yet, the community is only slowly adopting these models for text embedding tasks, which require rich contextualized representations. In this work, we introduce LLM2Vec, a simple unsupervised approach t... | 2024-04-09T02:51:05Z | Accepted to COLM 2024 | null | null | null | null | null | null | null | null | null |
2404.05993 | AEGIS: Online Adaptive AI Content Safety Moderation with Ensemble of LLM Experts | ['Shaona Ghosh', 'Prasoon Varshney', 'Erick Galinkin', 'Christopher Parisien'] | ['cs.LG', 'cs.CL', 'cs.CY'] | As Large Language Models (LLMs) and generative AI become more widespread, the content safety risks associated with their use also increase. We find a notable deficiency in high-quality content safety datasets and benchmarks that comprehensively cover a wide range of critical safety areas. To address this, we define a b... | 2024-04-09T03:54:28Z | null | null | null | AEGIS: Online Adaptive AI Content Safety Moderation with Ensemble of LLM Experts | ['Shaona Ghosh', 'Prasoon Varshney', 'Erick Galinkin', 'Christopher Parisien'] | 2024 | arXiv.org | 52 | 32 | ['Computer Science'] |
2404.06138 | Cendol: Open Instruction-tuned Generative Large Language Models for
Indonesian Languages | ['Samuel Cahyawijaya', 'Holy Lovenia', 'Fajri Koto', 'Rifki Afina Putri', 'Emmanuel Dave', 'Jhonson Lee', 'Nuur Shadieq', 'Wawan Cenggoro', 'Salsabil Maulana Akbar', 'Muhammad Ihza Mahendra', 'Dea Annisayanti Putri', 'Bryan Wilie', 'Genta Indra Winata', 'Alham Fikri Aji', 'Ayu Purwarianti', 'Pascale Fung'] | ['cs.CL'] | Large language models (LLMs) show remarkable human-like capability in various
domains and languages. However, a notable quality gap arises in low-resource
languages, e.g., Indonesian indigenous languages, rendering them ineffective
and inefficient in such linguistic contexts. To bridge this quality gap, we
introduce Ce... | 2024-04-09T09:04:30Z | Cendol models are released under Apache 2.0 license and will be made
publicly available soon | null | null | Cendol: Open Instruction-tuned Generative Large Language Models for Indonesian Languages | ['Samuel Cahyawijaya', 'Holy Lovenia', 'Fajri Koto', 'Rifki Afina Putri', 'Emmanuel Dave', 'Jhonson Lee', 'Nuur Shadieq', 'Wawan Cenggoro', 'Salsabil Maulana Akbar', 'Muhammad Ihza Mahendra', 'Dea Annisayanti Putri', 'Bryan Wilie', 'Genta Indra Winata', 'Alham Fikri Aji', 'Ayu Purwarianti', 'Pascale Fung'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 18 | 58 | ['Computer Science']
2404.06186 | Clue-Instruct: Text-Based Clue Generation for Educational Crossword
Puzzles | ['Andrea Zugarini', 'Kamyar Zeinalipour', 'Surya Sai Kadali', 'Marco Maggini', 'Marco Gori', 'Leonardo Rigutini'] | ['cs.CL', 'cs.AI'] | Crossword puzzles are popular linguistic games often used as tools to engage
students in learning. Educational crosswords are characterized by less cryptic
and more factual clues that distinguish them from traditional crossword
puzzles. Despite there exist several publicly available clue-answer pair
databases for tradi... | 2024-04-09T10:12:34Z | null | null | null | Clue-Instruct: Text-Based Clue Generation for Educational Crossword Puzzles | ['Andrea Zugarini', 'Kamyar Zeinalipour', 'Surya Sai Kadali', 'Marco Maggini', 'Marco Gori', 'Leonardo Rigutini'] | 2024 | International Conference on Language Resources and Evaluation | 6 | 38 | ['Computer Science']
2404.06212 | OmniFusion Technical Report | ['Elizaveta Goncharova', 'Anton Razzhigaev', 'Matvey Mikhalchuk', 'Maxim Kurkin', 'Irina Abdullaeva', 'Matvey Skripkin', 'Ivan Oseledets', 'Denis Dimitrov', 'Andrey Kuznetsov'] | ['cs.CV', 'cs.AI', 'cs.LG', '6804, 68T50 (Primary)', 'I.2.7; I.2.10; I.4.9'] | Last year, multimodal architectures served up a revolution in AI-based
approaches and solutions, extending the capabilities of large language models
(LLM). We propose an \textit{OmniFusion} model based on a pretrained LLM and
adapters for visual modality. We evaluated and compared several architecture
design principles... | 2024-04-09T11:00:19Z | 17 pages, 4 figures, 9 tables, 2 appendices | null | null | null | null | null | null | null | null | null |
2404.06392 | Event Extraction in Basque: Typologically motivated Cross-Lingual
Transfer-Learning Analysis | ['Mikel Zubillaga', 'Oscar Sainz', 'Ainara Estarrona', 'Oier Lopez de Lacalle', 'Eneko Agirre'] | ['cs.CL', 'cs.AI'] | Cross-lingual transfer-learning is widely used in Event Extraction for
low-resource languages and involves a Multilingual Language Model that is
trained in a source language and applied to the target language. This paper
studies whether the typological similarity between source and target languages
impacts the performa... | 2024-04-09T15:35:41Z | Accepted at LREC-Coling 2024 | null | null | null | null | null | null | null | null | null |
2404.06395 | MiniCPM: Unveiling the Potential of Small Language Models with Scalable
Training Strategies | ['Shengding Hu', 'Yuge Tu', 'Xu Han', 'Chaoqun He', 'Ganqu Cui', 'Xiang Long', 'Zhi Zheng', 'Yewei Fang', 'Yuxiang Huang', 'Weilin Zhao', 'Xinrong Zhang', 'Zheng Leng Thai', 'Kaihuo Zhang', 'Chongyi Wang', 'Yuan Yao', 'Chenyang Zhao', 'Jie Zhou', 'Jie Cai', 'Zhongwu Zhai', 'Ning Ding', 'Chao Jia', 'Guoyang Zeng', 'Daha... | ['cs.CL', 'cs.LG'] | The burgeoning interest in developing Large Language Models (LLMs) with up to
trillion parameters has been met with concerns regarding resource efficiency
and practical expense, particularly given the immense cost of experimentation.
This scenario underscores the importance of exploring the potential of Small
Language ... | 2024-04-09T15:36:50Z | revise according to peer review | null | null | MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies | ['Shengding Hu', 'Yuge Tu', 'Xu Han', 'Chaoqun He', 'Ganqu Cui', 'Xiang Long', 'Zhi Zheng', 'Yewei Fang', 'Yuxiang Huang', 'Weilin Zhao', 'Xinrong Zhang', 'Z. Thai', 'Kaihuo Zhang', 'Chongyi Wang', 'Yuan Yao', 'Chenyang Zhao', 'Jie Zhou', 'Jie Cai', 'Zhongwu Zhai', 'Ning Ding', 'Chaochao Jia', 'Guoyang Zeng', 'Dahai Li... | 2024 | arXiv.org | 347 | 73 | ['Computer Science']
2404.06429 | Magic-Boost: Boost 3D Generation with Multi-View Conditioned Diffusion | ['Fan Yang', 'Jianfeng Zhang', 'Yichun Shi', 'Bowen Chen', 'Chenxu Zhang', 'Huichao Zhang', 'Xiaofeng Yang', 'Xiu Li', 'Jiashi Feng', 'Guosheng Lin'] | ['cs.CV', 'cs.AI'] | Benefiting from the rapid development of 2D diffusion models, 3D content
generation has witnessed significant progress. One promising solution is to
finetune the pre-trained 2D diffusion models to produce multi-view images and
then reconstruct them into 3D assets via feed-forward sparse-view
reconstruction models. Howe... | 2024-04-09T16:20:03Z | null | null | null | null | null | null | null | null | null | null |
2404.06479 | Visually Descriptive Language Model for Vector Graphics Reasoning | ['Zhenhailong Wang', 'Joy Hsu', 'Xingyao Wang', 'Kuan-Hao Huang', 'Manling Li', 'Jiajun Wu', 'Heng Ji'] | ['cs.CL', 'cs.AI', 'cs.CV'] | Despite significant advancements, large multimodal models (LMMs) still
struggle to bridge the gap between low-level visual perception -- focusing on
shapes, sizes, and layouts -- and high-level language reasoning, such as
semantics and logic. This limitation is evident in tasks that require precise
visual perception, l... | 2024-04-09T17:30:18Z | Project page: https://mikewangwzhl.github.io/VDLM/ | TMLR 2025 | null | null | null | null | null | null | null | null |
2404.06542 | Training-Free Open-Vocabulary Segmentation with Offline
Diffusion-Augmented Prototype Generation | ['Luca Barsellotti', 'Roberto Amoroso', 'Marcella Cornia', 'Lorenzo Baraldi', 'Rita Cucchiara'] | ['cs.CV'] | Open-vocabulary semantic segmentation aims at segmenting arbitrary categories
expressed in textual form. Previous works have trained over large amounts of
image-caption pairs to enforce pixel-level multimodal alignments. However,
captions provide global information about the semantics of a given image but
lack direct l... | 2024-04-09T18:00:25Z | CVPR 2024. Project page: https://aimagelab.github.io/freeda/ | null | null | null | null | null | null | null | null | null |
2404.06564 | MambaAD: Exploring State Space Models for Multi-class Unsupervised
Anomaly Detection | ['Haoyang He', 'Yuhu Bai', 'Jiangning Zhang', 'Qingdong He', 'Hongxu Chen', 'Zhenye Gan', 'Chengjie Wang', 'Xiangtai Li', 'Guanzhong Tian', 'Lei Xie'] | ['cs.CV'] | Recent advancements in anomaly detection have seen the efficacy of CNN- and
transformer-based approaches. However, CNNs struggle with long-range
dependencies, while transformers are burdened by quadratic computational
complexity. Mamba-based models, with their superior long-range modeling and
linear efficiency, have ga... | 2024-04-09T18:28:55Z | NeurIPS'24 | null | null | null | null | null | null | null | null | null |
2404.06666 | SafeGen: Mitigating Sexually Explicit Content Generation in
Text-to-Image Models | ['Xinfeng Li', 'Yuchen Yang', 'Jiangyi Deng', 'Chen Yan', 'Yanjiao Chen', 'Xiaoyu Ji', 'Wenyuan Xu'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.CR'] | Text-to-image (T2I) models, such as Stable Diffusion, have exhibited
remarkable performance in generating high-quality images from text descriptions
in recent years. However, text-to-image models may be tricked into generating
not-safe-for-work (NSFW) content, particularly in sexually explicit scenarios.
Existing count... | 2024-04-10T00:26:08Z | Accepted by ACM CCS 2024. Please cite this paper as "Xinfeng Li,
Yuchen Yang, Jiangyi Deng, Chen Yan, Yanjiao Chen, Xiaoyu Ji, Wenyuan Xu.
SafeGen: Mitigating Sexually Explicit Content Generation in Text-to-Image
Models. In Proceedings of ACM Conference on Computer and Communications
Security (CCS), 2024." | null | 10.1145/3658644.3670295 | null | null | null | null | null | null | null
2404.06670 | What's Mine becomes Yours: Defining, Annotating and Detecting
Context-Dependent Paraphrases in News Interview Dialogs | ['Anna Wegmann', 'Tijs van den Broek', 'Dong Nguyen'] | ['cs.CL'] | Best practices for high conflict conversations like counseling or customer
support almost always include recommendations to paraphrase the previous
speaker. Although paraphrase classification has received widespread attention
in NLP, paraphrases are usually considered independent from context, and common
models and dat... | 2024-04-10T01:14:12Z | Accepted as main conference paper to EMNLP 2024 | null | null | null | null | null | null | null | null | null |
2404.06809 | Not All Contexts Are Equal: Teaching LLMs Credibility-aware Generation | ['Ruotong Pan', 'Boxi Cao', 'Hongyu Lin', 'Xianpei Han', 'Jia Zheng', 'Sirui Wang', 'Xunliang Cai', 'Le Sun'] | ['cs.CL'] | The rapid development of large language models has led to the widespread
adoption of Retrieval-Augmented Generation (RAG), which integrates external
knowledge to alleviate knowledge bottlenecks and mitigate hallucinations.
However, the existing RAG paradigm inevitably suffers from the impact of flawed
information intro... | 2024-04-10T07:56:26Z | Accepted to EMNLP 2024 Main Conference. Our code, benchmark, and
models are available at https://github.com/panruotong/CAG | null | null | null | null | null | null | null | null | null |
2404.06912 | Set-Encoder: Permutation-Invariant Inter-Passage Attention for Listwise
Passage Re-Ranking with Cross-Encoders | ['Ferdinand Schlatt', 'Maik Fröbe', 'Harrisen Scells', 'Shengyao Zhuang', 'Bevan Koopman', 'Guido Zuccon', 'Benno Stein', 'Martin Potthast', 'Matthias Hagen'] | ['cs.IR'] | Existing cross-encoder models can be categorized as pointwise, pairwise, or
listwise. Pairwise and listwise models allow passage interactions, which
typically makes them more effective than pointwise models but less efficient
and less robust to input passage order permutations. To enable efficient
permutation-invariant... | 2024-04-10T11:04:24Z | Accepted at ECIR'25 | null | 10.1007/978-3-031-88711-6_1 | null | null | null | null | null | null | null |
2404.07031 | ORacle: Large Vision-Language Models for Knowledge-Guided Holistic OR
Domain Modeling | ['Ege Özsoy', 'Chantal Pellegrini', 'Matthias Keicher', 'Nassir Navab'] | ['cs.CV'] | Every day, countless surgeries are performed worldwide, each within the
distinct settings of operating rooms (ORs) that vary not only in their setups
but also in the personnel, tools, and equipment used. This inherent diversity
poses a substantial challenge for achieving a holistic understanding of the OR,
as it requir... | 2024-04-10T14:24:10Z | 11 pages, 3 figures, 7 tables | null | null | null | null | null | null | null | null | null |
2404.07053 | Meta4XNLI: A Crosslingual Parallel Corpus for Metaphor Detection and
Interpretation | ['Elisa Sanchez-Bayona', 'Rodrigo Agerri'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Metaphors, although occasionally unperceived, are ubiquitous in our everyday
language. Thus, it is crucial for Language Models to be able to grasp the
underlying meaning of this kind of figurative language. In this work, we
present Meta4XNLI, a novel parallel dataset for the tasks of metaphor detection
and interpretati... | 2024-04-10T14:44:48Z | null | null | null | null | null | null | null | null | null | null |
2404.07084 | Dynamic Generation of Personalities with Large Language Models | ['Jianzhi Liu', 'Hexiang Gu', 'Tianyu Zheng', 'Liuyu Xiang', 'Huijia Wu', 'Jie Fu', 'Zhaofeng He'] | ['cs.CL', 'cs.AI'] | In the realm of mimicking human deliberation, large language models (LLMs)
show promising performance, thereby amplifying the importance of this research
area. Deliberation is influenced by both logic and personality. However,
previous studies predominantly focused on the logic of LLMs, neglecting the
exploration of pe... | 2024-04-10T15:17:17Z | null | null | null | Dynamic Generation of Personalities with Large Language Models | ['Jianzhi Liu', 'Hexiang Gu', 'Tianyu Zheng', 'Liuyu Xiang', 'Huijia Wu', 'Jie Fu', 'Zhaofeng He'] | 2024 | arXiv.org | 3 | 46 | ['Computer Science']
2404.07143 | Leave No Context Behind: Efficient Infinite Context Transformers with
Infini-attention | ['Tsendsuren Munkhdalai', 'Manaal Faruqui', 'Siddharth Gopal'] | ['cs.CL', 'cs.AI', 'cs.LG', 'cs.NE'] | This work introduces an efficient method to scale Transformer-based Large
Language Models (LLMs) to infinitely long inputs with bounded memory and
computation. A key component in our proposed approach is a new attention
technique dubbed Infini-attention. The Infini-attention incorporates a
compressive memory into the v... | 2024-04-10T16:18:42Z | 9 pages, 4 figures, 4 tables (v2 adds: background, implementation
details, recent citations and acknowledgments) | null | null | Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention | ['Tsendsuren Munkhdalai', 'Manaal Faruqui', 'Siddharth Gopal'] | 2024 | arXiv.org | 124 | 64 | ['Computer Science']
2404.07191 | InstantMesh: Efficient 3D Mesh Generation from a Single Image with
Sparse-view Large Reconstruction Models | ['Jiale Xu', 'Weihao Cheng', 'Yiming Gao', 'Xintao Wang', 'Shenghua Gao', 'Ying Shan'] | ['cs.CV'] | We present InstantMesh, a feed-forward framework for instant 3D mesh
generation from a single image, featuring state-of-the-art generation quality
and significant training scalability. By synergizing the strengths of an
off-the-shelf multiview diffusion model and a sparse-view reconstruction model
based on the LRM arch... | 2024-04-10T17:48:37Z | Technical report. Project: https://github.com/TencentARC/InstantMesh | null | null | InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models | ['Jiale Xu', 'Weihao Cheng', 'Yiming Gao', 'Xintao Wang', 'Shenghua Gao', 'Ying Shan'] | 2024 | arXiv.org | 208 | 67 | ['Computer Science']
2404.07202 | UMBRAE: Unified Multimodal Brain Decoding | ['Weihao Xia', 'Raoul de Charette', 'Cengiz Öztireli', 'Jing-Hao Xue'] | ['cs.CV', 'cs.AI', 'cs.CL'] | We address prevailing challenges of the brain-powered research, departing
from the observation that the literature hardly recover accurate spatial
information and require subject-specific models. To address these challenges,
we propose UMBRAE, a unified multimodal decoding of brain signals. First, to
extract instance-l... | 2024-04-10T17:59:20Z | ECCV 2024. Project: https://weihaox.github.io/UMBRAE | null | null | null | null | null | null | null | null | null |
2404.07413 | JetMoE: Reaching Llama2 Performance with 0.1M Dollars | ['Yikang Shen', 'Zhen Guo', 'Tianle Cai', 'Zengyi Qin'] | ['cs.CL', 'cs.AI'] | Large Language Models (LLMs) have achieved remarkable results, but their
increasing resource demand has become a major obstacle to the development of
powerful and accessible super-human intelligence. This report introduces
JetMoE-8B, a new LLM trained with less than $0.1 million, using 1.25T tokens
from carefully mixed... | 2024-04-11T00:52:39Z | null | null | null | null | null | null | null | null | null | null |
2404.07445 | Multi-view Aggregation Network for Dichotomous Image Segmentation | ['Qian Yu', 'Xiaoqi Zhao', 'Youwei Pang', 'Lihe Zhang', 'Huchuan Lu'] | ['cs.CV'] | Dichotomous Image Segmentation (DIS) has recently emerged towards
high-precision object segmentation from high-resolution natural images.
When designing an effective DIS model, the main challenge is how to balance
the semantic dispersion of high-resolution targets in the small receptive field
and the loss of high-pre... | 2024-04-11T03:00:00Z | Accepted by CVPR2024 as Highlight | null | null | Multi-View Aggregation Network for Dichotomous Image Segmentation | ['Qian Yu', 'Xiaoqi Zhao', 'Youwei Pang', 'Lihe Zhang', 'Huchuan Lu'] | 2024 | Computer Vision and Pattern Recognition | 17 | 48 | ['Computer Science']
2404.07611 | NoticIA: A Clickbait Article Summarization Dataset in Spanish | ['Iker García-Ferrero', 'Begoña Altuna'] | ['cs.CL', 'cs.AI'] | We present NoticIA, a dataset consisting of 850 Spanish news articles
featuring prominent clickbait headlines, each paired with high-quality,
single-sentence generative summarizations written by humans. This task demands
advanced text understanding and summarization abilities, challenging the
models' capacity to infer ... | 2024-04-11T09:59:01Z | Accepted in the journal Procesamiento del Lenguaje Natural | null | null | null | null | null | null | null | null | null |
2404.07613 | Medical mT5: An Open-Source Multilingual Text-to-Text LLM for The
Medical Domain | ['Iker García-Ferrero', 'Rodrigo Agerri', 'Aitziber Atutxa Salazar', 'Elena Cabrio', 'Iker de la Iglesia', 'Alberto Lavelli', 'Bernardo Magnini', 'Benjamin Molinet', 'Johana Ramirez-Romero', 'German Rigau', 'Jose Maria Villa-Gonzalez', 'Serena Villata', 'Andrea Zaninello'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Research on language technology for the development of medical applications
is currently a hot topic in Natural Language Understanding and Generation.
Thus, a number of large language models (LLMs) have recently been adapted to
the medical domain, so that they can be used as a tool for mediating in
human-AI interaction... | 2024-04-11T10:01:32Z | LREC-COLING 2024 | null | null | null | null | null | null | null | null | null |
2404.07724 | Applying Guidance in a Limited Interval Improves Sample and Distribution
Quality in Diffusion Models | ['Tuomas Kynkäänniemi', 'Miika Aittala', 'Tero Karras', 'Samuli Laine', 'Timo Aila', 'Jaakko Lehtinen'] | ['cs.CV', 'cs.AI', 'cs.LG', 'cs.NE', 'stat.ML'] | Guidance is a crucial technique for extracting the best performance out of
image-generating diffusion models. Traditionally, a constant guidance weight
has been applied throughout the sampling chain of an image. We show that
guidance is clearly harmful toward the beginning of the chain (high noise
levels), largely unne... | 2024-04-11T13:16:47Z | NeurIPS 2024 | null | null | Applying Guidance in a Limited Interval Improves Sample and Distribution Quality in Diffusion Models | ['T. Kynkäänniemi', 'M. Aittala', 'Tero Karras', 'S. Laine', 'Timo Aila', 'J. Lehtinen'] | 2024 | Neural Information Processing Systems | 80 | 42 | ['Computer Science', 'Mathematics']
2404.07824 | Heron-Bench: A Benchmark for Evaluating Vision Language Models in
Japanese | ['Yuichi Inoue', 'Kento Sasaki', 'Yuma Ochi', 'Kazuki Fujii', 'Kotaro Tanahashi', 'Yu Yamaguchi'] | ['cs.CV', 'cs.CL'] | Vision Language Models (VLMs) have undergone a rapid evolution, giving rise
to significant advancements in the realm of multimodal understanding tasks.
However, the majority of these models are trained and evaluated on
English-centric datasets, leaving a gap in the development and evaluation of
VLMs for other languages... | 2024-04-11T15:09:22Z | null | null | null | null | null | null | null | null | null | null |
2404.07904 | HGRN2: Gated Linear RNNs with State Expansion | ['Zhen Qin', 'Songlin Yang', 'Weixuan Sun', 'Xuyang Shen', 'Dong Li', 'Weigao Sun', 'Yiran Zhong'] | ['cs.CL'] | Hierarchically gated linear RNN (HGRN, \citealt{HGRN}) has demonstrated
competitive training speed and performance in language modeling while offering
efficient inference. However, the recurrent state size of HGRN remains
relatively small, limiting its expressiveness. To address this issue, we
introduce a simple outer ... | 2024-04-11T16:43:03Z | Accept to COLM 2024. Yiran Zhong is the corresponding author. Zhen
Qin and Songlin Yang contributed equally to this work. The source code is
available at https://github.com/OpenNLPLab/HGRN2 | null | null | null | null | null | null | null | null | null |
2404.07921 | AmpleGCG: Learning a Universal and Transferable Generative Model of
Adversarial Suffixes for Jailbreaking Both Open and Closed LLMs | ['Zeyi Liao', 'Huan Sun'] | ['cs.CL'] | As large language models (LLMs) become increasingly prevalent and integrated
into autonomous systems, ensuring their safety is imperative. Despite
significant strides toward safety alignment, recent work
GCG~\citep{zou2023universal} proposes a discrete token optimization algorithm
and selects the single suffix with the... | 2024-04-11T17:05:50Z | Published as a conference paper at COLM 2024
(https://colmweb.org/index.html) | null | null | null | null | null | null | null | null | null |
2404.07922 | LaVy: Vietnamese Multimodal Large Language Model | ['Chi Tran', 'Huong Le Thanh'] | ['cs.CL', 'cs.CV', 'cs.LG'] | Large Language Models (LLMs) and Multimodal Large language models (MLLMs)
have taken the world by storm with impressive abilities in complex reasoning
and linguistic comprehension. Meanwhile there are plethora of works related to
Vietnamese Large Language Models, the lack of high-quality resources in
multimodality limi... | 2024-04-11T17:09:28Z | 5 pages | null | null | null | null | null | null | null | null | null |
2404.07965 | Rho-1: Not All Tokens Are What You Need | ['Zhenghao Lin', 'Zhibin Gou', 'Yeyun Gong', 'Xiao Liu', 'Yelong Shen', 'Ruochen Xu', 'Chen Lin', 'Yujiu Yang', 'Jian Jiao', 'Nan Duan', 'Weizhu Chen'] | ['cs.CL', 'cs.AI'] | Previous language model pre-training methods have uniformly applied a
next-token prediction loss to all training tokens. Challenging this norm, we
posit that "9l training". Our initial analysis examines token-level training
dynamics of language model, revealing distinct loss patterns for different
tokens. Leveraging th... | 2024-04-11T17:52:01Z | First two authors equal contribution | null | null | null | null | null | null | null | null | null |
2404.07972 | OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real
Computer Environments | ['Tianbao Xie', 'Danyang Zhang', 'Jixuan Chen', 'Xiaochuan Li', 'Siheng Zhao', 'Ruisheng Cao', 'Toh Jing Hua', 'Zhoujun Cheng', 'Dongchan Shin', 'Fangyu Lei', 'Yitao Liu', 'Yiheng Xu', 'Shuyan Zhou', 'Silvio Savarese', 'Caiming Xiong', 'Victor Zhong', 'Tao Yu'] | ['cs.AI', 'cs.CL'] | Autonomous agents that accomplish complex computer tasks with minimal human
interventions have the potential to transform human-computer interaction,
significantly enhancing accessibility and productivity. However, existing
benchmarks either lack an interactive environment or are limited to
environments specific to cer... | 2024-04-11T17:56:05Z | 51 pages, 21 figures | null | null | null | null | null | null | null | null | null |
2404.07979 | LLoCO: Learning Long Contexts Offline | ['Sijun Tan', 'Xiuyu Li', 'Shishir Patil', 'Ziyang Wu', 'Tianjun Zhang', 'Kurt Keutzer', 'Joseph E. Gonzalez', 'Raluca Ada Popa'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Processing long contexts remains a challenge for large language models (LLMs)
due to the quadratic computational and memory overhead of the self-attention
mechanism and the substantial KV cache sizes during generation. We propose
LLoCO, a novel approach to address this problem by learning contexts offline
through conte... | 2024-04-11T17:57:22Z | EMNLP 2024. The first two authors contributed equally to this work | null | null | LLoCO: Learning Long Contexts Offline | ['Sijun Tan', 'Xiuyu Li', 'Shishir G. Patil', 'Ziyang Wu', 'Tianjun Zhang', 'Kurt Keutzer', 'Joseph E. Gonzalez', 'Raluca A. Popa'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 8 | 54 | ['Computer Science']
2404.08382 | Look at the Text: Instruction-Tuned Language Models are More Robust
Multiple Choice Selectors than You Think | ['Xinpeng Wang', 'Chengzhi Hu', 'Bolei Ma', 'Paul Röttger', 'Barbara Plank'] | ['cs.CL', 'cs.AI'] | Multiple choice questions (MCQs) are commonly used to evaluate the
capabilities of large language models (LLMs). One common way to evaluate the
model response is to rank the candidate answers based on the log probability of
the first token prediction. An alternative way is to examine the text output.
Prior work has sho... | 2024-04-12T10:36:15Z | COLM 2024 | null | null | null | null | null | null | null | null | null |
2404.08535 | Generalized Contrastive Learning for Multi-Modal Retrieval and Ranking | ['Tianyu Zhu', 'Myong Chol Jung', 'Jesse Clark'] | ['cs.IR', 'cs.CV', 'cs.LG'] | Contrastive learning has gained widespread adoption for retrieval tasks due
to its minimal requirement for manual annotations. However, popular training
frameworks typically learn from binary (positive/negative) relevance, making
them ineffective at incorporating desired rankings. As a result, the poor
ranking performa... | 2024-04-12T15:30:03Z | null | The ACM Web Conference 2025 (WWW2025) Industry Track | 10.1145/3701716.3715227 | Generalized Contrastive Learning for Multi-Modal Retrieval and Ranking | ['Tianyu Zhu', 'M. Jung', 'Jesse Clark'] | 2024 | The Web Conference | 1 | 78 | ['Computer Science']
2404.08582 | FashionFail: Addressing Failure Cases in Fashion Object Detection and
Segmentation | ['Riza Velioglu', 'Robin Chan', 'Barbara Hammer'] | ['cs.CV', 'cs.AI'] | In the realm of fashion object detection and segmentation for online shopping
images, existing state-of-the-art fashion parsing models encounter limitations,
particularly when exposed to non-model-worn apparel and close-up shots. To
address these failures, we introduce FashionFail; a new fashion dataset with
e-commerce... | 2024-04-12T16:28:30Z | to be published in 2024 International Joint Conference on Neural
Networks (IJCNN) | null | 10.1109/IJCNN60899.2024.10651287 | null | null | null | null | null | null | null |
2404.08634 | When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller,
Stronger Models | ['Sunny Sanyal', 'Ravid Shwartz-Ziv', 'Alexandros G. Dimakis', 'Sujay Sanghavi'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Large Language Models (LLMs) rely on the transformer architecture and its
self-attention mechanism to deliver strong performance across tasks. However,
we uncover a structural inefficiency in standard pre-trained decoder-style
LLMs: in many of the deeper layers, attention matrices frequently collapse to
near rank-one, ... | 2024-04-12T17:53:34Z | 29 pages, 22 figures, 11 tables | null | null | null | null | null | null | null | null | null |
2404.09158 | StreakNet-Arch: An Anti-scattering Network-based Architecture for
Underwater Carrier LiDAR-Radar Imaging | ['Xuelong Li', 'Hongjun An', 'Haofei Zhao', 'Guangying Li', 'Bo Liu', 'Xing Wang', 'Guanghua Cheng', 'Guojun Wu', 'Zhe Sun'] | ['cs.CV', 'cs.AI'] | In this paper, we introduce StreakNet-Arch, a real-time, end-to-end
binary-classification framework based on our self-developed Underwater Carrier
LiDAR-Radar (UCLR) that embeds Self-Attention and our novel Double Branch Cross
Attention (DBC-Attention) to enhance scatter suppression. Under controlled
water tank validat... | 2024-04-14T06:19:46Z | Accepted by IEEE Transactions on Image Processing (T-IP) | null | 10.1109/TIP.2025.3586431 | null | null | null | null | null | null | null |
2404.09512 | Magic Clothing: Controllable Garment-Driven Image Synthesis | ['Weifeng Chen', 'Tao Gu', 'Yuhao Xu', 'Chengcai Chen'] | ['cs.CV'] | We propose Magic Clothing, a latent diffusion model (LDM)-based network
architecture for an unexplored garment-driven image synthesis task. Aiming at
generating customized characters wearing the target garments with diverse text
prompts, the image controllability is the most critical issue, i.e., to
preserve the garmen... | 2024-04-15T07:15:39Z | null | null | null | null | null | null | null | null | null | null |
2404.09556 | nnU-Net Revisited: A Call for Rigorous Validation in 3D Medical Image
Segmentation | ['Fabian Isensee', 'Tassilo Wald', 'Constantin Ulrich', 'Michael Baumgartner', 'Saikat Roy', 'Klaus Maier-Hein', 'Paul F. Jaeger'] | ['cs.CV'] | The release of nnU-Net marked a paradigm shift in 3D medical image
segmentation, demonstrating that a properly configured U-Net architecture could
still achieve state-of-the-art results. Despite this, the pursuit of novel
architectures, and the respective claims of superior performance over the U-Net
baseline, continue... | 2024-04-15T08:19:08Z | Accepted at MICCAI 2024 | null | null | null | null | null | null | null | null | null |
2404.09610 | LoRA Dropout as a Sparsity Regularizer for Overfitting Control | ['Yang Lin', 'Xinyu Ma', 'Xu Chu', 'Yujie Jin', 'Zhibang Yang', 'Yasha Wang', 'Hong Mei'] | ['cs.LG', 'cs.AI'] | Parameter-efficient fine-tuning methods, represented by LoRA, play an
essential role in adapting large-scale pre-trained models to downstream tasks.
However, fine-tuning LoRA-series models also faces the risk of overfitting on
the training dataset, and yet there's still a lack of theoretical guidance and
practical mech... | 2024-04-15T09:32:12Z | null | null | null | null | null | null | null | null | null | null |
2404.09836 | How Far Have We Gone in Binary Code Understanding Using Large Language
Models | ['Xiuwei Shang', 'Shaoyin Cheng', 'Guoqiang Chen', 'Yanming Zhang', 'Li Hu', 'Xiao Yu', 'Gangyang Li', 'Weiming Zhang', 'Nenghai Yu'] | ['cs.SE', 'cs.CR'] | Binary code analysis plays a pivotal role in various software security
applications, such as software maintenance, malware detection, software
vulnerability discovery, patch analysis, etc. However, unlike source code,
understanding binary code is challenging for reverse engineers due to the
absence of semantic informat... | 2024-04-15T14:44:08Z | 12 pages, 8 figures, to be published in ICSME 2024 | null | null | How Far Have We Gone in Binary Code Understanding Using Large Language Models | ['Xiuwei Shang', 'Shaoyin Cheng', 'Guoqiang Chen', 'Yanming Zhang', 'Li Hu', 'Xiao Yu', 'Gangyang Li', 'Weiming Zhang', 'Neng H. Yu'] | 2024 | IEEE International Conference on Software Maintenance and Evolution | 3 | 56 | ['Computer Science']
2404.09956 | Tango 2: Aligning Diffusion-based Text-to-Audio Generations through
Direct Preference Optimization | ['Navonil Majumder', 'Chia-Yu Hung', 'Deepanway Ghosal', 'Wei-Ning Hsu', 'Rada Mihalcea', 'Soujanya Poria'] | ['cs.SD', 'cs.AI', 'cs.CL', 'eess.AS'] | Generative multimodal content is increasingly prevalent in much of the
content creation arena, as it has the potential to allow artists and media
personnel to create pre-production mockups by quickly bringing their ideas to
life. The generation of audio from text prompts is an important aspect of such
processes in the ... | 2024-04-15T17:31:22Z | Accepted at ACM MM 2024 | null | null | Tango 2: Aligning Diffusion-based Text-to-Audio Generations through Direct Preference Optimization | ['Navonil Majumder', 'Chia-Yu Hung', 'Deepanway Ghosal', 'Wei-Ning Hsu', 'Rada Mihalcea', 'Soujanya Poria'] | 2024 | ACM Multimedia | 61 | 36 | ['Computer Science', 'Engineering']
2404.09987 | OneChart: Purify the Chart Structural Extraction via One Auxiliary Token | ['Jinyue Chen', 'Lingyu Kong', 'Haoran Wei', 'Chenglong Liu', 'Zheng Ge', 'Liang Zhao', 'Jianjian Sun', 'Chunrui Han', 'Xiangyu Zhang'] | ['cs.CV'] | Chart parsing poses a significant challenge due to the diversity of styles,
values, texts, and so forth. Even advanced large vision-language models (LVLMs)
with billions of parameters struggle to handle such tasks satisfactorily. To
address this, we propose OneChart: a reliable agent specifically devised for
the struct... | 2024-04-15T17:58:57Z | 14 pages, 9 figures and 6 tables | null | null | null | null | null | null | null | null | null |
2404.09988 | in2IN: Leveraging individual Information to Generate Human INteractions | ['Pablo Ruiz Ponce', 'German Barquero', 'Cristina Palmero', 'Sergio Escalera', 'Jose Garcia-Rodriguez'] | ['cs.CV'] | Generating human-human motion interactions conditioned on textual
descriptions is a very useful application in many areas such as robotics,
gaming, animation, and the metaverse. Alongside this utility also comes a great
difficulty in modeling the highly dimensional inter-personal dynamics. In
addition, properly capturi... | 2024-04-15T17:59:04Z | Project page: https://pabloruizponce.github.io/in2IN/ | null | null | null | null | null | null | null | null | null |
2404.10157 | Salient Object-Aware Background Generation using Text-Guided Diffusion
Models | ['Amir Erfan Eshratifar', 'Joao V. B. Soares', 'Kapil Thadani', 'Shaunak Mishra', 'Mikhail Kuznetsov', 'Yueh-Ning Ku', 'Paloma de Juan'] | ['cs.CV', 'cs.LG'] | Generating background scenes for salient objects plays a crucial role across
various domains including creative design and e-commerce, as it enhances the
presentation and context of subjects by integrating them into tailored
environments. Background generation can be framed as a task of text-conditioned
outpainting, wh... | 2024-04-15T22:13:35Z | Accepted for publication at CVPR 2024's Generative Models for
Computer Vision workshop | null | null | null | null | null | null | null | null | null |
2404.10518 | MobileNetV4 -- Universal Models for the Mobile Ecosystem | ['Danfeng Qin', 'Chas Leichner', 'Manolis Delakis', 'Marco Fornoni', 'Shixin Luo', 'Fan Yang', 'Weijun Wang', 'Colby Banbury', 'Chengxi Ye', 'Berkin Akin', 'Vaibhav Aggarwal', 'Tenghui Zhu', 'Daniele Moro', 'Andrew Howard'] | ['cs.CV'] | We present the latest generation of MobileNets, known as MobileNetV4 (MNv4),
featuring universally efficient architecture designs for mobile devices. At its
core, we introduce the Universal Inverted Bottleneck (UIB) search block, a
unified and flexible structure that merges Inverted Bottleneck (IB), ConvNext,
Feed Forw... | 2024-04-16T12:41:25Z | null | null | null | null | null | null | null | null | null | null |
2404.10555 | Construction of Domain-specified Japanese Large Language Model for
Finance through Continual Pre-training | ['Masanori Hirano', 'Kentaro Imajo'] | ['cs.CL', 'q-fin.CP'] | Large language models (LLMs) are now widely used in various fields, including
finance. However, Japanese financial-specific LLMs have not been proposed yet.
Hence, this study aims to construct a Japanese financial-specific LLM through
continual pre-training. Before tuning, we constructed Japanese
financial-focused data... | 2024-04-16T13:26:32Z | 7 pages | null | null | Construction of Domain-Specified Japanese Large Language Model for Finance Through Continual Pre-Training | ['Masanori Hirano', 'Kentaro Imajo'] | 2024 | IIAI International Conference on Advanced Applied Informatics | 1 | 43 | ['Computer Science', 'Economics']
2404.10710 | Autoregressive Pre-Training on Pixels and Texts | ['Yekun Chai', 'Qingyi Liu', 'Jingwu Xiao', 'Shuohuan Wang', 'Yu Sun', 'Hua Wu'] | ['cs.CL', 'cs.CV'] | The integration of visual and textual information represents a promising
direction in the advancement of language models. In this paper, we explore the
dual modality of language--both visual and textual--within an autoregressive
framework, pre-trained on both document images and texts. Our method employs a
multimodal t... | 2024-04-16T16:36:50Z | EMNLP 2024 | null | null | null | null | null | null | null | null | null |
2404.10774 | MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents | ['Liyan Tang', 'Philippe Laban', 'Greg Durrett'] | ['cs.CL', 'cs.AI'] | Recognizing if LLM output can be grounded in evidence is central to many
tasks in NLP: retrieval-augmented generation, summarization, document-grounded
dialogue, and more. Current approaches to this kind of fact-checking are based
on verifying each piece of a model generation against potential evidence using
an LLM. Ho... | 2024-04-16T17:59:10Z | EMNLP 2024 | null | null | MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents | ['Liyan Tang', 'Philippe Laban', 'Greg Durrett'] | 2024 | Conference on Empirical Methods in Natural Language Processing | 103 | 76 | ['Computer Science']
2404.10830 | Fewer Truncations Improve Language Modeling | ['Hantian Ding', 'Zijian Wang', 'Giovanni Paolini', 'Varun Kumar', 'Anoop Deoras', 'Dan Roth', 'Stefano Soatto'] | ['cs.CL', 'cs.AI', 'cs.LG'] | In large language model training, input documents are typically concatenated
together and then split into sequences of equal length to avoid padding tokens.
Despite its efficiency, the concatenation approach compromises data integrity
-- it inevitably breaks many documents into incomplete pieces, leading to
excessive t... | 2024-04-16T18:08:29Z | ICML 2024 | null | null | null | null | null | null | null | null | null |
2404.10934 | Shears: Unstructured Sparsity with Neural Low-rank Adapter Search | ['J. Pablo Muñoz', 'Jinjie Yuan', 'Nilesh Jain'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Recently, several approaches successfully demonstrated that weight-sharing
Neural Architecture Search (NAS) can effectively explore a search space of
elastic low-rank adapters (LoRA), allowing the parameter-efficient fine-tuning
(PEFT) and compression of large language models. In this paper, we introduce a
novel approa... | 2024-04-16T22:12:36Z | 2024 Annual Conference of the North American Chapter of the
Association for Computational Linguistics (Industry Track) | null | null | Shears: Unstructured Sparsity with Neural Low-rank Adapter Search | ['J. P. Munoz', 'Jinjie Yuan', 'Nilesh Jain'] | 2024 | North American Chapter of the Association for Computational Linguistics | 7 | 30 | ['Computer Science']
2404.11049 | Stepwise Alignment for Constrained Language Model Policy Optimization | ['Akifumi Wachi', 'Thien Q. Tran', 'Rei Sato', 'Takumi Tanabe', 'Youhei Akimoto'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Safety and trustworthiness are indispensable requirements for real-world
applications of AI systems using large language models (LLMs). This paper
formulates human value alignment as an optimization problem of the language
model policy to maximize reward under a safety constraint, and then proposes an
algorithm, Stepwi... | 2024-04-17T03:44:58Z | Accepted at NeurIPS 2024. Code and models are available at
https://github.com/line/sacpo | null | null | Stepwise Alignment for Constrained Language Model Policy Optimization | ['Akifumi Wachi', 'Thien Q. Tran', 'Rei Sato', 'Takumi Tanabe', 'Yohei Akimoto'] | 2024 | Neural Information Processing Systems | 10 | 59 | ['Computer Science']
2404.11202 | GhostNetV3: Exploring the Training Strategies for Compact Models | ['Zhenhua Liu', 'Zhiwei Hao', 'Kai Han', 'Yehui Tang', 'Yunhe Wang'] | ['cs.CV'] | Compact neural networks are specially designed for applications on edge
devices with faster inference speed yet modest performance. However, training
strategies of compact models are borrowed from that of conventional models at
present, which ignores their difference in model capacity and thus may impede
the performanc... | 2024-04-17T09:33:31Z | null | null | null | null | null | null | null | null | null | null |
2404.11317 | Improving Composed Image Retrieval via Contrastive Learning with Scaling
Positives and Negatives | ['Zhangchi Feng', 'Richong Zhang', 'Zhijie Nie'] | ['cs.CV', 'cs.AI'] | The Composed Image Retrieval (CIR) task aims to retrieve target images using
a composed query consisting of a reference image and a modified text. Advanced
methods often utilize contrastive learning as the optimization objective, which
benefits from adequate positive and negative examples. However, the triplet for
CIR ... | 2024-04-17T12:30:54Z | Accepted to ACM MM 2024 Regular Papers | null | null | null | null | null | null | null | null | null |
2404.11459 | Octopus v3: Technical Report for On-device Sub-billion Multimodal AI
Agent | ['Wei Chen', 'Zhiyuan Li'] | ['cs.CL', 'cs.CV'] | A multimodal AI agent is characterized by its ability to process and learn
from various types of data, including natural language, visual, and audio
inputs, to inform its actions. Despite advancements in large language models
that incorporate visual data, such as GPT-4V, effectively translating
image-based data into ac... | 2024-04-17T15:07:06Z | null | null | null | null | null | null | null | null | null | null |
2404.11581 | E2ETune: End-to-End Knob Tuning via Fine-tuned Generative Language Model | ['Xinmei Huang', 'Haoyang Li', 'Jing Zhang', 'Xinxin Zhao', 'Zhiming Yao', 'Yiyan Li', 'Tieying Zhang', 'Jianjun Chen', 'Hong Chen', 'Cuiping Li'] | ['cs.AI', 'cs.DB'] | Database knob tuning is a significant challenge for database administrators,
as it involves tuning a large number of configuration knobs with continuous or
discrete values to achieve optimal database performance. Traditional methods,
such as manual tuning or learning-based approaches, typically require numerous
workloa... | 2024-04-17T17:28:05Z | Accepted by VLDB 2025 | null | null | null | null | null | null | null | null | null |
2404.12096 | LongEmbed: Extending Embedding Models for Long Context Retrieval | ['Dawei Zhu', 'Liang Wang', 'Nan Yang', 'Yifan Song', 'Wenhao Wu', 'Furu Wei', 'Sujian Li'] | ['cs.CL', 'cs.LG'] | Embedding models play a pivot role in modern NLP applications such as IR and
RAG. While the context limit of LLMs has been pushed beyond 1 million tokens,
embedding models are still confined to a narrow context window not exceeding 8k
tokens, refrained from application scenarios requiring long inputs such as
legal cont... | 2024-04-18T11:29:23Z | EMNLP 2024 Camera Ready | null | null | null | null | null | null | null | null | null |
2404.12104 | Ethical-Lens: Curbing Malicious Usages of Open-Source Text-to-Image
Models | ['Yuzhu Cai', 'Sheng Yin', 'Yuxi Wei', 'Chenxin Xu', 'Weibo Mao', 'Felix Juefei-Xu', 'Siheng Chen', 'Yanfeng Wang'] | ['cs.CV', 'cs.CL', 'cs.LG'] | The burgeoning landscape of text-to-image models, exemplified by innovations
such as Midjourney and DALLE 3, has revolutionized content creation across
diverse sectors. However, these advancements bring forth critical ethical
concerns, particularly with the misuse of open-source models to generate
content that violates... | 2024-04-18T11:38:25Z | 51 pages, 15 figures, 32 tables | null | null | null | null | null | null | null | null | null |