Dataset column schema:

| Column | Type | Range / values |
| --- | --- | --- |
| arxiv_id | float64 | 1.5k – 2.51k |
| title | stringlengths | 9 – 178 |
| authors | stringlengths | 2 – 22.8k |
| categories | stringlengths | 4 – 146 |
| summary | stringlengths | 103 – 1.92k |
| published | stringdate | 2015-02-06 10:44:00 – 2025-07-10 17:59:58 |
| comments | stringlengths | 2 – 417 |
| journal_ref | stringclasses | 321 values |
| doi | stringclasses | 398 values |
| ss_title | stringlengths | 8 – 159 |
| ss_authors | stringlengths | 11 – 8.38k |
| ss_year | float64 | 2.02k – 2.03k |
| ss_venue | stringclasses | 281 values |
| ss_citationCount | float64 | 0 – 134k |
| ss_referenceCount | float64 | 0 – 429 |
| ss_fieldsOfStudy | stringclasses | 47 values |
2410.14309
LoGU: Long-form Generation with Uncertainty Expressions
['Ruihan Yang', 'Caiqi Zhang', 'Zhisong Zhang', 'Xinting Huang', 'Sen Yang', 'Nigel Collier', 'Dong Yu', 'Deqing Yang']
['cs.CL', 'cs.AI']
While Large Language Models (LLMs) demonstrate impressive capabilities, they still struggle with generating factually incorrect content (i.e., hallucinations). A promising approach to mitigate this issue is enabling models to express uncertainty when unsure. Previous research on uncertainty modeling has primarily focus...
2024-10-18T09:15:35Z
ACL 2025 Main
null
null
LoGU: Long-form Generation with Uncertainty Expressions
['Ruihan Yang', 'Caiqi Zhang', 'Zhisong Zhang', 'Xinting Huang', 'Sen Yang', 'Nigel Collier', 'Dong Yu', 'Deqing Yang']
2024
arXiv.org
9
48
['Computer Science']
2410.14324
HiCo: Hierarchical Controllable Diffusion Model for Layout-to-image Generation
['Bo Cheng', 'Yuhang Ma', 'Liebucha Wu', 'Shanyuan Liu', 'Ao Ma', 'Xiaoyu Wu', 'Dawei Leng', 'Yuhui Yin']
['cs.CV']
The task of layout-to-image generation involves synthesizing images based on the captions of objects and their spatial positions. Existing methods still struggle in complex layout generation, where common bad cases include object missing, inconsistent lighting, conflicting view angles, etc. To effectively address these...
2024-10-18T09:36:10Z
NeurIPS2024
null
null
null
null
null
null
null
null
null
2410.14464
Electrocardiogram-Language Model for Few-Shot Question Answering with Meta Learning
['Jialu Tang', 'Tong Xia', 'Yuan Lu', 'Cecilia Mascolo', 'Aaqib Saeed']
['cs.LG']
Electrocardiogram (ECG) interpretation requires specialized expertise, often involving synthesizing insights from ECG signals with complex clinical queries posed in natural language. The scarcity of labeled ECG data coupled with the diverse nature of clinical inquiries presents a significant challenge for developing ro...
2024-10-18T13:48:01Z
Accepted at AHLI CHIL 2025
null
null
null
null
null
null
null
null
null
2410.14596
Teaching Models to Balance Resisting and Accepting Persuasion
['Elias Stengel-Eskin', 'Peter Hase', 'Mohit Bansal']
['cs.CL', 'cs.AI']
Large language models (LLMs) are susceptible to persuasion, which can pose risks when models are faced with an adversarial interlocutor. We take a first step towards defending models against persuasion while also arguing that defense against adversarial (i.e. negative) persuasion is only half of the equation: models sh...
2024-10-18T16:49:36Z
NAACL Camera-Ready. Code: https://github.com/esteng/persuasion_balanced_training
null
null
Teaching Models to Balance Resisting and Accepting Persuasion
['Elias Stengel-Eskin', 'Peter Hase', 'Mohit Bansal']
2024
North American Chapter of the Association for Computational Linguistics
5
45
['Computer Science']
2410.14609
DiSCo: LLM Knowledge Distillation for Efficient Sparse Retrieval in Conversational Search
['Simon Lupart', 'Mohammad Aliannejadi', 'Evangelos Kanoulas']
['cs.IR', 'cs.CL']
Conversational Search (CS) involves retrieving relevant documents from a corpus while considering the conversational context, integrating retrieval with context modeling. Recent advancements in Large Language Models (LLMs) have significantly enhanced CS by enabling query rewriting based on conversational context. Howev...
2024-10-18T17:03:17Z
11 pages, 6 figures. SIGIR '25 Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval July 13--18, 2025 Padua, Italy
null
null
null
null
null
null
null
null
null
2410.14633
Swiss Army Knife: Synergizing Biases in Knowledge from Vision Foundation Models for Multi-Task Learning
['Yuxiang Lu', 'Shengcao Cao', 'Yu-Xiong Wang']
['cs.CV']
Vision Foundation Models (VFMs) have demonstrated outstanding performance on numerous downstream tasks. However, due to their inherent representation biases originating from different training paradigms, VFMs exhibit advantages and disadvantages across distinct vision tasks. Although amalgamating the strengths of multi...
2024-10-18T17:32:39Z
Accepted by ICLR2025
null
null
null
null
null
null
null
null
null
2410.14672
BiGR: Harnessing Binary Latent Codes for Image Generation and Improved Visual Representation Capabilities
['Shaozhe Hao', 'Xuantong Liu', 'Xianbiao Qi', 'Shihao Zhao', 'Bojia Zi', 'Rong Xiao', 'Kai Han', 'Kwan-Yee K. Wong']
['cs.CV', 'cs.AI']
We introduce BiGR, a novel conditional image generation model using compact binary latent codes for generative training, focusing on enhancing both generation and representation capabilities. BiGR is the first conditional generative model that unifies generation and discrimination within the same framework. BiGR featur...
2024-10-18T17:59:04Z
Updated with additional T2I results; Project page: https://haoosz.github.io/BiGR
null
null
null
null
null
null
null
null
null
2410.14675
To Trust or Not to Trust? Enhancing Large Language Models' Situated Faithfulness to External Contexts
['Yukun Huang', 'Sanxing Chen', 'Hongyi Cai', 'Bhuwan Dhingra']
['cs.CL', 'cs.AI']
Large Language Models (LLMs) are often augmented with external contexts, such as those used in retrieval-augmented generation (RAG). However, these contexts can be inaccurate or intentionally misleading, leading to conflicts with the model's internal knowledge. We argue that robust LLMs should demonstrate situated fait...
2024-10-18T17:59:47Z
null
null
null
To Trust or Not to Trust? Enhancing Large Language Models' Situated Faithfulness to External Contexts
['Yukun Huang', 'Sanxing Chen', 'H. Cai', 'Bhuwan Dhingra']
2024
International Conference on Learning Representations
4
34
['Computer Science']
2410.14687
BrainTransformers: SNN-LLM
['Zhengzheng Tang', 'Eva Zhu']
['cs.NE', 'cs.CL', 'cs.LG']
This study introduces BrainTransformers, an innovative Large Language Model (LLM) implemented using Spiking Neural Networks (SNN). Our key contributions include: (1) designing SNN-compatible Transformer components such as SNNMatmul, SNNSoftmax, and SNNSiLU; (2) implementing an SNN approximation of the SiLU activation f...
2024-10-03T14:17:43Z
null
null
null
BrainTransformers: SNN-LLM
['Zhengzheng Tang', 'Eva Zhu']
2024
arXiv.org
1
15
['Computer Science']
2410.14735
Agent Skill Acquisition for Large Language Models via CycleQD
['So Kuroki', 'Taishi Nakamura', 'Takuya Akiba', 'Yujin Tang']
['cs.CL', 'cs.AI', 'cs.NE']
Training large language models to acquire specific skills remains a challenging endeavor. Conventional training approaches often struggle with data distribution imbalances and inadequacies in objective functions that do not align well with task-specific performance. To address these challenges, we introduce CycleQD, a ...
2024-10-16T20:27:15Z
To appear at the 13th International Conference on Learning Representations (ICLR 2025)
null
null
null
null
null
null
null
null
null
2410.14745
Semi-supervised Fine-tuning for Large Language Models
['Junyu Luo', 'Xiao Luo', 'Xiusi Chen', 'Zhiping Xiao', 'Wei Ju', 'Ming Zhang']
['cs.CL', 'cs.AI']
Supervised fine-tuning (SFT) is crucial in adapting large language models (LLMs) to a specific domain or task. However, only a limited amount of labeled data is available in practical applications, which poses a severe challenge for SFT in yielding satisfactory results. Therefore, a data-efficient framework that can ful...
2024-10-17T16:59:46Z
Github Repo: https://github.com/luo-junyu/SemiEvol
NAACL 2025
null
Semi-supervised Fine-tuning for Large Language Models
['Junyu Luo', 'Xiao Luo', 'Xiusi Chen', 'Zhiping Xiao', 'Wei Ju', 'Ming Zhang']
2024
North American Chapter of the Association for Computational Linguistics
1
65
['Computer Science']
2410.14815
Adapting Multilingual LLMs to Low-Resource Languages using Continued Pre-training and Synthetic Corpus
['Raviraj Joshi', 'Kanishk Singla', 'Anusha Kamath', 'Raunak Kalani', 'Rakesh Paul', 'Utkarsh Vaidya', 'Sanjay Singh Chauhan', 'Niranjan Wartikar', 'Eileen Long']
['cs.CL', 'cs.LG']
Multilingual LLMs support a variety of languages; however, their performance is suboptimal for low-resource languages. In this work, we emphasize the importance of continued pre-training of multilingual LLMs and the use of translation-based synthetic pre-training corpora for improving LLMs in low-resource languages. We...
2024-10-18T18:35:19Z
null
null
null
null
null
null
null
null
null
null
2410.15027
Group Diffusion Transformers are Unsupervised Multitask Learners
['Lianghua Huang', 'Wei Wang', 'Zhi-Fan Wu', 'Huanzhang Dou', 'Yupeng Shi', 'Yutong Feng', 'Chen Liang', 'Yu Liu', 'Jingren Zhou']
['cs.CV']
While large language models (LLMs) have revolutionized natural language processing with their task-agnostic capabilities, visual generation tasks such as image translation, style transfer, and character customization still rely heavily on supervised, task-specific datasets. In this work, we introduce Group Diffusion Tr...
2024-10-19T07:53:15Z
null
null
null
null
null
null
null
null
null
null
2410.15148
Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning
['David Schulte', 'Felix Hamborg', 'Alan Akbik']
['cs.CL', 'cs.LG']
Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods produci...
2024-10-19T16:22:04Z
EMNLP 2024 Main Conference
null
null
null
null
null
null
null
null
null
2410.15308
LlamaLens: Specialized Multilingual LLM for Analyzing News and Social Media Content
['Mohamed Bayan Kmainasi', 'Ali Ezzat Shahroor', 'Maram Hasanain', 'Sahinur Rahman Laskar', 'Naeemul Hassan', 'Firoj Alam']
['cs.CL', 'cs.AI', '68T50', 'F.2.2; I.2.7']
Large Language Models (LLMs) have demonstrated remarkable success as general-purpose task solvers across various fields. However, their capabilities remain limited when addressing domain-specific problems, particularly in downstream NLP tasks. Research has shown that models fine-tuned on instruction-based downstream NL...
2024-10-20T06:37:37Z
LLMs, Multilingual, Language Diversity, Large Language Models, Social Media, News Media, Specialized LLMs, Fact-checking, Media Analysis, Arabic, Hindi, English
null
null
null
null
null
null
null
null
null
2410.15316
Ichigo: Mixed-Modal Early-Fusion Realtime Voice Assistant
['Alan Dao', 'Dinh Bach Vu', 'Huy Hoang Ha']
['cs.CL', 'cs.SD', 'eess.AS']
Large Language Models (LLMs) have revolutionized natural language processing, but their application to speech-based tasks remains challenging due to the complexities of integrating audio and text modalities. This paper introduces Ichigo, a mixed-modal model that seamlessly processes interleaved sequences of speech and ...
2024-10-20T07:03:49Z
null
null
null
Ichigo: Mixed-Modal Early-Fusion Realtime Voice Assistant
['Alan Dao', 'Dinh Bach Vu', 'Huy Hoang Ha']
2024
arXiv.org
5
53
['Computer Science', 'Engineering']
2410.15458
Allegro: Open the Black Box of Commercial-Level Video Generation Model
['Yuan Zhou', 'Qiuyue Wang', 'Yuxuan Cai', 'Huan Yang']
['cs.CV']
Significant advancements have been made in the field of video generation, with the open-source community contributing a wealth of research papers and tools for training high-quality models. However, despite these efforts, the available information and resources remain insufficient for achieving commercial-level perform...
2024-10-20T17:51:35Z
null
null
null
null
null
null
null
null
null
null
2410.15608
Moonshine: Speech Recognition for Live Transcription and Voice Commands
['Nat Jeffries', 'Evan King', 'Manjunath Kudlur', 'Guy Nicholson', 'James Wang', 'Pete Warden']
['cs.SD', 'cs.CL', 'cs.LG', 'eess.AS']
This paper introduces Moonshine, a family of speech recognition models optimized for live transcription and voice command processing. Moonshine is based on an encoder-decoder transformer architecture and employs Rotary Position Embedding (RoPE) instead of traditional absolute position embeddings. The model is trained o...
2024-10-21T03:13:20Z
7 pages, 6 figures, 3 tables
null
null
null
null
null
null
null
null
null
2410.15636
LucidFusion: Reconstructing 3D Gaussians with Arbitrary Unposed Images
['Hao He', 'Yixun Liang', 'Luozhou Wang', 'Yuanhao Cai', 'Xinli Xu', 'Hao-Xiang Guo', 'Xiang Wen', 'Yingcong Chen']
['cs.CV']
Recent large reconstruction models have made notable progress in generating high-quality 3D objects from single images. However, current reconstruction methods often rely on explicit camera pose estimation or fixed viewpoints, restricting their flexibility and practical applicability. We reformulate 3D reconstruction a...
2024-10-21T04:47:01Z
11 pages, 10 figures, [project page](https://heye0507.github.io/LucidFusion_page/)
null
null
LucidFusion: Reconstructing 3D Gaussians with Arbitrary Unposed Images
['Hao He', 'Yixun Liang', 'Luozhou Wang', 'Yuanhao Cai', 'Xinli Xu', 'Hao-Xiang Guo', 'Xiang Wen', 'Yingcong Chen']
2024
null
0
51
['Computer Science']
2410.157
InternLM2.5-StepProver: Advancing Automated Theorem Proving via Expert Iteration on Large-Scale LEAN Problems
['Zijian Wu', 'Suozhi Huang', 'Zhejian Zhou', 'Huaiyuan Ying', 'Jiayu Wang', 'Dahua Lin', 'Kai Chen']
['cs.AI', 'cs.CL']
Large Language Models (LLMs) have emerged as powerful tools in mathematical theorem proving, particularly when utilizing formal languages such as LEAN. The major learning paradigm is expert iteration, which necessitates a pre-defined dataset comprising numerous mathematical problems. In this process, LLMs attempt to pr...
2024-10-21T07:18:23Z
null
null
null
null
null
null
null
null
null
null
2410.15735
AutoTrain: No-code training for state-of-the-art models
['Abhishek Thakur']
['cs.AI']
With the advancements in open-source models, training (or finetuning) models on custom datasets has become a crucial part of developing solutions which are tailored to specific industrial or open-source applications. Yet, there is no single tool which simplifies the process of training across different types of modalit...
2024-10-21T07:53:32Z
null
null
null
null
null
null
null
null
null
null
2410.15926
Mitigating Object Hallucination via Concentric Causal Attention
['Yun Xing', 'Yiheng Li', 'Ivan Laptev', 'Shijian Lu']
['cs.CV', 'cs.CL']
Recent Large Vision Language Models (LVLMs) present remarkable zero-shot conversational and reasoning capabilities given multimodal queries. Nevertheless, they suffer from object hallucination, a phenomenon where LVLMs are prone to generate textual responses not factually aligned with image inputs. Our pilot study reve...
2024-10-21T11:54:53Z
To appear at NeurIPS 2024. Code is available at https://github.com/xing0047/cca-llava
null
null
null
null
null
null
null
null
null
2410.15957
CamI2V: Camera-Controlled Image-to-Video Diffusion Model
['Guangcong Zheng', 'Teng Li', 'Rui Jiang', 'Yehao Lu', 'Tao Wu', 'Xi Li']
['cs.CV']
Recent advancements have integrated camera pose as a user-friendly and physics-informed condition in video diffusion models, enabling precise camera control. In this paper, we identify one of the key challenges as effectively modeling noisy cross-frame interactions to enhance geometry consistency and camera controllabi...
2024-10-21T12:36:27Z
null
null
null
CamI2V: Camera-Controlled Image-to-Video Diffusion Model
['Guangcong Zheng', 'Teng Li', 'Rui Jiang', 'Yehao Lu', 'Tao Wu', 'Xi Li']
2024
arXiv.org
27
72
['Computer Science']
2410.16153
Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages
['Xiang Yue', 'Yueqi Song', 'Akari Asai', 'Seungone Kim', 'Jean de Dieu Nyandwi', 'Simran Khanuja', 'Anjali Kantharuban', 'Lintang Sutawika', 'Sathyanarayanan Ramamoorthy', 'Graham Neubig']
['cs.CL', 'cs.CV']
Despite recent advances in multimodal large language models (MLLMs), their development has predominantly focused on English- and western-centric datasets and tasks, leaving most of the world's languages and diverse cultural contexts underrepresented. This paper introduces Pangea, a multilingual multimodal LLM trained o...
2024-10-21T16:19:41Z
54 pages, 27 figures
null
null
null
null
null
null
null
null
null
2410.16166
Beyond Filtering: Adaptive Image-Text Quality Enhancement for MLLM Pretraining
['Han Huang', 'Yuqi Huo', 'Zijia Zhao', 'Haoyu Lu', 'Shu Wu', 'Bingning Wang', 'Qiang Liu', 'Weipeng Chen', 'Liang Wang']
['cs.CV', 'cs.CL']
Multimodal large language models (MLLMs) have made significant strides by integrating visual and textual modalities. A critical factor in training MLLMs is the quality of image-text pairs within multimodal pretraining datasets. However, $\textit {de facto}$ filter-based data quality enhancement paradigms often discard ...
2024-10-21T16:32:41Z
null
null
null
null
null
null
null
null
null
null
2410.16184
RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style
['Yantao Liu', 'Zijun Yao', 'Rui Min', 'Yixin Cao', 'Lei Hou', 'Juanzi Li']
['cs.CL']
Reward models are critical in techniques like Reinforcement Learning from Human Feedback (RLHF) and Inference Scaling Laws, where they guide language model alignment and select optimal responses. Despite their importance, existing reward model benchmarks often evaluate models by asking them to distinguish between respo...
2024-10-21T16:48:26Z
null
null
null
null
null
null
null
null
null
null
2410.16198
Improve Vision Language Model Chain-of-thought Reasoning
['Ruohong Zhang', 'Bowen Zhang', 'Yanghao Li', 'Haotian Zhang', 'Zhiqing Sun', 'Zhe Gan', 'Yinfei Yang', 'Ruoming Pang', 'Yiming Yang']
['cs.AI', 'cs.CV', '68T07']
Chain-of-thought (CoT) reasoning in vision language models (VLMs) is crucial for improving interpretability and trustworthiness. However, current training recipes lack robust CoT reasoning data, relying on datasets dominated by short annotations with minimal rationales. In this work, we show that training VLM on short ...
2024-10-21T17:00:06Z
10 pages + appendix
null
null
null
null
null
null
null
null
null
2410.16256
CompassJudger-1: All-in-one Judge Model Helps Model Evaluation and Evolution
['Maosong Cao', 'Alexander Lam', 'Haodong Duan', 'Hongwei Liu', 'Songyang Zhang', 'Kai Chen']
['cs.CL', 'cs.AI']
Efficient and accurate evaluation is crucial for the continuous improvement of large language models (LLMs). Among various assessment methods, subjective evaluation has garnered significant attention due to its superior alignment with real-world usage scenarios and human preferences. However, human-based evaluations ar...
2024-10-21T17:56:51Z
Technical Report, Code and Models: https://github.com/open-compass/CompassJudger
null
null
CompassJudger-1: All-in-one Judge Model Helps Model Evaluation and Evolution
['Maosong Cao', 'Alexander Lam', 'Haodong Duan', 'Hong-wei Liu', 'Songyang Zhang', 'Kai Chen']
2024
arXiv.org
20
22
['Computer Science']
2410.16257
Elucidating the design space of language models for image generation
['Xuantong Liu', 'Shaozhe Hao', 'Xianbiao Qi', 'Tianyang Hu', 'Jun Wang', 'Rong Xiao', 'Yuan Yao']
['cs.CV']
The success of autoregressive (AR) language models in text generation has inspired the computer vision community to adopt Large Language Models (LLMs) for image generation. However, considering the essential differences between text and image modalities, the design space of language models for image generation remains ...
2024-10-21T17:57:04Z
Project page: https://pepper-lll.github.io/LMforImageGeneration/
null
null
Elucidating the design space of language models for image generation
['Xuantong Liu', 'Shaozhe Hao', 'Xianbiao Qi', 'Tianyang Hu', 'Jun Wang', 'Rong Xiao', 'Yuan Yao']
2024
arXiv.org
3
63
['Computer Science']
2410.16261
Mini-InternVL: A Flexible-Transfer Pocket Multimodal Model with 5% Parameters and 90% Performance
['Zhangwei Gao', 'Zhe Chen', 'Erfei Cui', 'Yiming Ren', 'Weiyun Wang', 'Jinguo Zhu', 'Hao Tian', 'Shenglong Ye', 'Junjun He', 'Xizhou Zhu', 'Lewei Lu', 'Tong Lu', 'Yu Qiao', 'Jifeng Dai', 'Wenhai Wang']
['cs.CV']
Multimodal large language models (MLLMs) have demonstrated impressive performance in vision-language tasks across a broad spectrum of domains. However, the large model scale and associated high computational costs pose significant challenges for training and deploying MLLMs on consumer-grade GPUs or edge devices, there...
2024-10-21T17:58:20Z
Technical report
null
null
null
null
null
null
null
null
null
2410.16267
xGen-MM-Vid (BLIP-3-Video): You Only Need 32 Tokens to Represent a Video Even in VLMs
['Michael S. Ryoo', 'Honglu Zhou', 'Shrikant Kendre', 'Can Qin', 'Le Xue', 'Manli Shu', 'Jongwoo Park', 'Kanchana Ranasinghe', 'Silvio Savarese', 'Ran Xu', 'Caiming Xiong', 'Juan Carlos Niebles']
['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG']
We present xGen-MM-Vid (BLIP-3-Video): a multimodal language model for videos, particularly designed to efficiently capture temporal information over multiple frames. BLIP-3-Video takes advantage of the 'temporal encoder' in addition to the conventional visual tokenizer, which maps a sequence of tokens over multiple fr...
2024-10-21T17:59:11Z
null
null
null
xGen-MM-Vid (BLIP-3-Video): You Only Need 32 Tokens to Represent a Video Even in VLMs
['Michael S Ryoo', 'Honglu Zhou', 'Shrikant B. Kendre', 'Can Qin', 'Le Xue', 'Manli Shu', 'Silvio Savarese', 'Ran Xu', 'Caiming Xiong', 'Juan Carlos Niebles']
2024
arXiv.org
15
51
['Computer Science']
2410.1629
A Unified Model for Compressed Sensing MRI Across Undersampling Patterns
['Armeet Singh Jatyani', 'Jiayun Wang', 'Aditi Chandrashekar', 'Zihui Wu', 'Miguel Liu-Schiaffini', 'Bahareh Tolooshams', 'Anima Anandkumar']
['eess.IV', 'cs.CV']
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled measurements, thereby reducing scan time. Recently, deep learning has shown great potential for reconstructing high-fidelity images from highly undersampled measurements. However, one needs to train multiple models for different...
2024-10-05T20:03:57Z
Accepted at 2025 Conference on Computer Vision and Pattern Recognition
null
null
A Unified Model for Compressed Sensing MRI Across Undersampling Patterns
['Armeet Singh Jatyani', 'Jiayun Wang', 'Zihui Wu', 'Miguel Liu-Schiaffini', 'Bahareh Tolooshams', 'Anima Anandkumar']
2024
null
2
42
['Engineering', 'Computer Science']
2410.16665
SafetyAnalyst: Interpretable, Transparent, and Steerable Safety Moderation for AI Behavior
['Jing-Jing Li', 'Valentina Pyatkin', 'Max Kleiman-Weiner', 'Liwei Jiang', 'Nouha Dziri', 'Anne G. E. Collins', 'Jana Schaich Borg', 'Maarten Sap', 'Yejin Choi', 'Sydney Levine']
['cs.CL', 'cs.CY']
The ideal AI safety moderation system would be both structurally interpretable (so its decisions can be reliably explained) and steerable (to align to safety standards and reflect a community's values), which current systems fall short on. To address this gap, we present SafetyAnalyst, a novel AI safety moderation fram...
2024-10-22T03:38:37Z
Accepted to ICML 2025
null
null
null
null
null
null
null
null
null
2410.16703
PLDR-LLM: Large Language Model from Power Law Decoder Representations
['Burc Gokden']
['cs.CL', 'cs.AI']
We present the Large Language Model from Power Law Decoder Representations (PLDR-LLM), a language model that leverages non-linear and linear transformations through Power Law Graph Attention mechanism to generate well-defined deductive and inductive outputs. We pretrain the PLDR-LLMs of varying layer sizes with a small...
2024-10-22T05:16:19Z
22 pages, 4 figures, 10 tables
null
null
PLDR-LLM: Large Language Model from Power Law Decoder Representations
['Burc Gokden']
2024
arXiv.org
1
45
['Computer Science']
2410.16794
One-Step Diffusion Distillation through Score Implicit Matching
['Weijian Luo', 'Zemin Huang', 'Zhengyang Geng', 'J. Zico Kolter', 'Guo-jun Qi']
['cs.CV', 'cs.AI', 'cs.LG']
Despite their strong performances on many generative tasks, diffusion models require a large number of sampling steps in order to generate realistic samples. This has motivated the community to develop effective methods to distill pre-trained diffusion models into more efficient models, but these methods still typicall...
2024-10-22T08:17:20Z
Accepted by NeurIPS 2024
NeurIPS 2024
null
null
null
null
null
null
null
null
2410.1721
Exploring Possibilities of AI-Powered Legal Assistance in Bangladesh through Large Language Modeling
['Azmine Toushik Wasi', 'Wahid Faisal', 'Mst Rafia Islam', 'Mahathir Mohammad Bappy']
['cs.CL', 'cs.AI', 'cs.CY']
Purpose: Bangladesh's legal system struggles with major challenges like delays, complexity, high costs, and millions of unresolved cases, which deter many from pursuing legal action due to lack of knowledge or financial constraints. This research seeks to develop a specialized Large Language Model (LLM) to assist in th...
2024-10-22T17:34:59Z
In Review
null
null
Exploring Possibilities of AI-Powered Legal Assistance in Bangladesh through Large Language Modeling
['Azmine Toushik Wasi', 'Wahid Faisal', 'Mst Rafia Islam', 'M. Bappy']
2024
arXiv.org
0
29
['Computer Science']
2410.17215
MiniPLM: Knowledge Distillation for Pre-Training Language Models
['Yuxian Gu', 'Hao Zhou', 'Fandong Meng', 'Jie Zhou', 'Minlie Huang']
['cs.CL']
Knowledge distillation (KD) is widely used to train small, high-performing student language models (LMs) using large teacher LMs. While effective in fine-tuning, KD during pre-training faces efficiency, flexibility, and effectiveness issues. Existing methods either incur high computational costs due to online teacher i...
2024-10-22T17:40:32Z
ICLR 2025
null
null
MiniPLM: Knowledge Distillation for Pre-Training Language Models
['Yuxian Gu', 'Hao Zhou', 'Fandong Meng', 'Jie Zhou', 'Minlie Huang']
2024
International Conference on Learning Representations
7
92
['Computer Science']
2410.17225
Dhoroni: Exploring Bengali Climate Change and Environmental Views with a Multi-Perspective News Dataset and Natural Language Processing
['Azmine Toushik Wasi', 'Wahid Faisal', 'Taj Ahmad', 'Abdur Rahman', 'Mst Rafia Islam']
['cs.CL', 'cs.CY', 'cs.LG', 'stat.AP']
Climate change poses critical challenges globally, disproportionately affecting low-income countries that often lack resources and linguistic representation on the international stage. Despite Bangladesh's status as one of the most vulnerable nations to climate impacts, research gaps persist in Bengali-language studies...
2024-10-22T17:47:05Z
In Review
null
null
Dhoroni: Exploring Bengali Climate Change and Environmental Views with a Multi-Perspective News Dataset and Natural Language Processing
['Azmine Toushik Wasi', 'Wahid Faisal', 'Taj Ahmad', 'Abdur Rahman', 'Mst Rafia Islam']
2024
arXiv.org
0
51
['Computer Science', 'Mathematics']
2410.17241
Frontiers in Intelligent Colonoscopy
['Ge-Peng Ji', 'Jingyi Liu', 'Peng Xu', 'Nick Barnes', 'Fahad Shahbaz Khan', 'Salman Khan', 'Deng-Ping Fan']
['eess.IV', 'cs.CV']
Colonoscopy is currently one of the most sensitive screening methods for colorectal cancer. This study investigates the frontiers of intelligent colonoscopy techniques and their prospective implications for multimodal medical applications. With this goal, we begin by assessing the current data-centric and model-centric...
2024-10-22T17:57:12Z
[Work in progress] A comprehensive survey of intelligent colonoscopy in the multimodal era. [Updated Version V2] New training strategy for colonoscopy-specific multimodal language model
null
null
null
null
null
null
null
null
null
2410.17242
LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias
['Haian Jin', 'Hanwen Jiang', 'Hao Tan', 'Kai Zhang', 'Sai Bi', 'Tianyuan Zhang', 'Fujun Luan', 'Noah Snavely', 'Zexiang Xu']
['cs.CV', 'cs.GR', 'cs.LG']
We propose the Large View Synthesis Model (LVSM), a novel transformer-based approach for scalable and generalizable novel view synthesis from sparse-view inputs. We introduce two architectures: (1) an encoder-decoder LVSM, which encodes input image tokens into a fixed number of 1D latent tokens, functioning as a fully ...
2024-10-22T17:58:28Z
project page: https://haian-jin.github.io/projects/LVSM/
null
null
null
null
null
null
null
null
null
2410.17251
Altogether: Image Captioning via Re-aligning Alt-text
['Hu Xu', 'Po-Yao Huang', 'Xiaoqing Ellen Tan', 'Ching-Feng Yeh', 'Jacob Kahn', 'Christine Jou', 'Gargi Ghosh', 'Omer Levy', 'Luke Zettlemoyer', 'Wen-tau Yih', 'Shang-Wen Li', 'Saining Xie', 'Christoph Feichtenhofer']
['cs.CV', 'cs.CL']
This paper focuses on creating synthetic data to improve the quality of image captions. Existing works typically have two shortcomings. First, they caption images from scratch, ignoring existing alt-text metadata, and second, lack transparency if the captioners' training data (e.g. GPT) is unknown. In this paper, we st...
2024-10-22T17:59:57Z
accepted by EMNLP 2024; Meta CLIP 1.2 Data Engine
null
null
null
null
null
null
null
null
null
2410.17337
Captions Speak Louder than Images (CASLIE): Generalizing Foundation Models for E-commerce from High-quality Multimodal Instruction Data
['Xinyi Ling', 'Bo Peng', 'Hanwen Du', 'Zhihui Zhu', 'Xia Ning']
['cs.CL', 'cs.AI', 'cs.IR']
Leveraging multimodal data to drive breakthroughs in e-commerce applications through Multimodal Foundation Models (MFMs) is gaining increasing attention from the research community. However, there are significant challenges that hinder the optimal use of multimodal e-commerce data by foundation models: (1) the scarcity...
2024-10-22T18:11:43Z
Xinyi Ling and Bo Peng contributed equally to this paper
null
null
Captions Speak Louder than Images (CASLIE): Generalizing Foundation Models for E-commerce from High-quality Multimodal Instruction Data
['Xinyi Ling', 'B. Peng', 'Hanwen Du', 'Zhihui Zhu', 'Xia Ning']
2024
arXiv.org
0
50
['Computer Science']
2410.17434
LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding
['Xiaoqian Shen', 'Yunyang Xiong', 'Changsheng Zhao', 'Lemeng Wu', 'Jun Chen', 'Chenchen Zhu', 'Zechun Liu', 'Fanyi Xiao', 'Balakrishnan Varadarajan', 'Florian Bordes', 'Zhuang Liu', 'Hu Xu', 'Hyunwoo J. Kim', 'Bilge Soran', 'Raghuraman Krishnamoorthi', 'Mohamed Elhoseiny', 'Vikas Chandra']
['cs.CV']
Multimodal Large Language Models (MLLMs) have shown promising progress in understanding and analyzing video content. However, processing long videos remains a significant challenge constrained by LLM's context size. To address this limitation, we propose LongVU, a spatiotemporal adaptive compression mechanism thats red...
2024-10-22T21:21:37Z
Project page: https://vision-cair.github.io/LongVU
null
null
null
null
null
null
null
null
null
2410.17437
Improving Automatic Speech Recognition with Decoder-Centric Regularisation in Encoder-Decoder Models
['Alexander Polok', 'Santosh Kesiraju', 'Karel Beneš', 'Lukáš Burget', 'Jan Černocký']
['eess.AS']
This paper proposes a simple yet effective way of regularising the encoder-decoder-based automatic speech recognition (ASR) models that enhance the robustness of the model and improve the generalisation to out-of-domain scenarios. The proposed approach is dubbed as $\textbf{De}$coder-$\textbf{C}$entric $\textbf{R}$egul...
2024-10-22T21:27:30Z
null
null
null
null
null
null
null
null
null
null
2410.17491
X-MOBILITY: End-To-End Generalizable Navigation via World Modeling
['Wei Liu', 'Huihua Zhao', 'Chenran Li', 'Joydeep Biswas', 'Billy Okal', 'Pulkit Goyal', 'Yan Chang', 'Soha Pouya']
['cs.RO']
General-purpose navigation in challenging environments remains a significant problem in robotics, with current state-of-the-art approaches facing myriad limitations. Classical approaches struggle with cluttered settings and require extensive tuning, while learning-based methods face difficulties generalizing to out-of-...
2024-10-23T01:11:29Z
null
null
null
null
null
null
null
null
null
null
2410.17599
Cross-model Control: Improving Multiple Large Language Models in One-time Training
['Jiayi Wu', 'Hao Sun', 'Hengyi Cai', 'Lixin Su', 'Shuaiqiang Wang', 'Dawei Yin', 'Xiang Li', 'Ming Gao']
['cs.CL']
The number of large language models (LLMs) with varying parameter scales and vocabularies is increasing. While they deliver powerful performance, they also face a set of common optimization needs to meet specific requirements or standards, such as instruction following or avoiding the output of sensitive information fr...
2024-10-23T06:52:09Z
Accepted by NeurIPS 2024
null
null
null
null
null
null
null
null
null
2410.17736
MojoBench: Language Modeling and Benchmarks for Mojo
['Nishat Raihan', 'Joanna C. S. Santos', 'Marcos Zampieri']
['cs.CL']
The recently introduced Mojo programming language (PL) by Modular, has received significant attention in the scientific community due to its claimed significant speed boost over Python. Despite advancements in code Large Language Models (LLMs) across various PLs, Mojo remains unexplored in this context. To address this...
2024-10-23T10:11:40Z
null
null
null
null
null
null
null
null
null
null
2410.17856
ROCKET-1: Mastering Open-World Interaction with Visual-Temporal Context Prompting
['Shaofei Cai', 'Zihao Wang', 'Kewei Lian', 'Zhancun Mu', 'Xiaojian Ma', 'Anji Liu', 'Yitao Liang']
['cs.CV', 'cs.AI']
Vision-language models (VLMs) have excelled in multimodal tasks, but adapting them to embodied decision-making in open-world environments presents challenges. One critical issue is bridging the gap between discrete entities in low-level observations and the abstract concepts required for effective planning. A common so...
2024-10-23T13:26:59Z
null
null
null
null
null
null
null
null
null
null
2410.17891
Scaling Diffusion Language Models via Adaptation from Autoregressive Models
['Shansan Gong', 'Shivam Agarwal', 'Yizhe Zhang', 'Jiacheng Ye', 'Lin Zheng', 'Mukai Li', 'Chenxin An', 'Peilin Zhao', 'Wei Bi', 'Jiawei Han', 'Hao Peng', 'Lingpeng Kong']
['cs.CL']
Diffusion Language Models (DLMs) have emerged as a promising new paradigm for text generative modeling, potentially addressing limitations of autoregressive (AR) models. However, current DLMs have been studied at a smaller scale compared to their AR counterparts and lack fair comparison on language modeling benchmarks....
2024-10-23T14:04:22Z
ICLR 2025. (minor updates) Code: https://github.com/HKUNLP/DiffuLLaMA
null
null
null
null
null
null
null
null
null
2410.17897
Value Residual Learning
['Zhanchao Zhou', 'Tianyi Wu', 'Zhiyun Jiang', 'Fares Obeid', 'Zhenzhong Lan']
['cs.CL']
While Transformer models have achieved remarkable success in various domains, the effectiveness of information propagation through deep networks remains a critical challenge. Standard hidden state residuals often fail to adequately preserve initial token-level information in deeper layers. This paper introduces ResForm...
2024-10-23T14:15:07Z
null
null
null
Value Residual Learning
['Zhanchao Zhou', 'Tianyi Wu', 'Zhiyun Jiang', 'Fares Obeid', 'Zhenzhong Lan']
2024
null
1
40
['Computer Science']
2410.18032
GraphTeam: Facilitating Large Language Model-based Graph Analysis via Multi-Agent Collaboration
['Xin Sky Li', 'Qizhi Chu', 'Yubin Chen', 'Yang Liu', 'Yaoqi Liu', 'Zekai Yu', 'Weize Chen', 'Chen Qian', 'Chuan Shi', 'Cheng Yang']
['cs.AI', 'cs.CL', 'cs.MA']
Graphs are widely used for modeling relational data in real-world scenarios, such as social networks and urban computing. Existing LLM-based graph analysis approaches either integrate graph neural networks (GNNs) for specific machine learning tasks, limiting their transferability, or rely solely on LLMs' internal reaso...
2024-10-23T17:02:59Z
null
null
null
null
null
null
null
null
null
null
2410.18105
Improving Embedding Accuracy for Document Retrieval Using Entity Relationship Maps and Model-Aware Contrastive Sampling
['Thea Aviss']
['cs.IR', 'cs.AI', 'cs.CL']
In this paper we present APEX-Embedding-7B (Advanced Processing for Epistemic eXtraction), a 7-billion parameter decoder-only text Feature Extraction Model, specifically designed for Document Retrieval-Augmented Generation (RAG) tasks. Our approach employs two training techniques that yield an emergent improvement in f...
2024-10-08T17:36:48Z
10 Pages, 9 Figures
null
null
Improving Embedding Accuracy for Document Retrieval Using Entity Relationship Maps and Model-Aware Contrastive Sampling
['Thea Aviss']
2024
arXiv.org
0
0
['Computer Science']
2410.18164
TabDPT: Scaling Tabular Foundation Models on Real Data
['Junwei Ma', 'Valentin Thomas', 'Rasa Hosseinzadeh', 'Hamidreza Kamkari', 'Alex Labach', 'Jesse C. Cresswell', 'Keyvan Golestan', 'Guangwei Yu', 'Anthony L. Caterini', 'Maksims Volkovs']
['cs.LG', 'cs.AI', 'stat.ML']
Tabular data is one of the most ubiquitous sources of information worldwide, spanning a wide variety of domains. This inherent heterogeneity has slowed the development of Tabular Foundation Models (TFMs) capable of fast generalization to unseen datasets. In-Context Learning (ICL) has recently emerged as a promising sol...
2024-10-23T18:00:00Z
Inference repo: github.com/layer6ai-labs/TabDPT-inference; Training repo: github.com/layer6ai-labs/TabDPT-training
null
null
null
null
null
null
null
null
null
2410.18362
WAFFLE: Finetuning Multi-Modal Model for Automated Front-End Development
['Shanchao Liang', 'Nan Jiang', 'Shangshu Qian', 'Lin Tan']
['cs.SE', 'cs.CL', 'cs.CV']
Web development involves turning UI designs into functional webpages, which can be difficult for both beginners and experienced developers due to the complexity of HTML's hierarchical structures and styles. While Large Language Models (LLMs) have shown promise in generating source code, two major challenges persist in ...
2024-10-24T01:49:49Z
null
null
null
WAFFLE: Multi-Modal Model for Automated Front-End Development
['Shanchao Liang', 'Nan Jiang', 'Shangshu Qian', 'Lin Tan']
2024
arXiv.org
1
39
['Computer Science']
2410.18387
Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks
['Lehan Wang', 'Haonan Wang', 'Honglong Yang', 'Jiaji Mao', 'Zehong Yang', 'Jun Shen', 'Xiaomeng Li']
['cs.CV']
Several medical Multimodal Large Language Models (MLLMs) have been developed to address tasks involving visual images with textual instructions across various medical modalities, achieving impressive results. Most current medical generalist models are region-agnostic, treating the entire image as a holistic representa...
2024-10-24T02:55:41Z
Accepted in ICLR 2025
null
null
Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks
['Lehan Wang', 'Haonan Wang', 'Honglong Yang', 'Jiaji Mao', 'Zehong Yang', 'Jun Shen', 'Xiaomeng Li']
2024
International Conference on Learning Representations
6
63
['Computer Science']
2410.18417
Large Language Models Reflect the Ideology of their Creators
['Maarten Buyl', 'Alexander Rogiers', 'Sander Noels', 'Guillaume Bied', 'Iris Dominguez-Catena', 'Edith Heiter', 'Iman Johary', 'Alexandru-Cristian Mara', 'Raphaël Romero', 'Jefrey Lijffijt', 'Tijl De Bie']
['cs.CL', 'cs.LG']
Large language models (LLMs) are trained on vast amounts of data to generate natural language, enabling them to perform tasks like text summarization and question answering. These models have become popular in artificial intelligence (AI) assistants like ChatGPT and already play an influential role in how humans access...
2024-10-24T04:02:30Z
null
null
null
Large Language Models Reflect the Ideology of their Creators
['Maarten Buyl', 'Alexander Rogiers', 'Sander Noels', 'Iris Dominguez-Catena', 'Edith Heiter', 'Raphaël Romero', 'Iman Johary', 'A. Mara', 'Jefrey Lijffijt', 'T. D. Bie']
2024
arXiv.org
24
35
['Computer Science']
2410.18451
Skywork-Reward: Bag of Tricks for Reward Modeling in LLMs
['Chris Yuhao Liu', 'Liang Zeng', 'Jiacai Liu', 'Rui Yan', 'Jujie He', 'Chaojie Wang', 'Shuicheng Yan', 'Yang Liu', 'Yahui Zhou']
['cs.AI', 'cs.CL']
In this report, we introduce a collection of methods to enhance reward modeling for LLMs, focusing specifically on data-centric techniques. We propose effective data selection and filtering strategies for curating high-quality open-source preference datasets, culminating in the Skywork-Reward data collection, which con...
2024-10-24T06:06:26Z
null
null
null
Skywork-Reward: Bag of Tricks for Reward Modeling in LLMs
['Chris Liu', 'Liang Zeng', 'Jiacai Liu', 'Rui Yan', 'Jujie He', 'Chaojie Wang', 'Shuicheng Yan', 'Yang Liu', 'Yahui Zhou']
2024
arXiv.org
116
50
['Computer Science']
2410.18469
ADVLLM: Iterative Self-Tuning LLMs for Enhanced Jailbreaking Capabilities
['Chung-En Sun', 'Xiaodong Liu', 'Weiwei Yang', 'Tsui-Wei Weng', 'Hao Cheng', 'Aidan San', 'Michel Galley', 'Jianfeng Gao']
['cs.CL', 'cs.LG']
Recent research has shown that Large Language Models (LLMs) are vulnerable to automated jailbreak attacks, where adversarial suffixes crafted by algorithms appended to harmful queries bypass safety alignment and trigger unintended responses. Current methods for generating these suffixes are computationally expensive an...
2024-10-24T06:36:12Z
Accepted to NAACL 2025 Main (oral)
null
null
null
null
null
null
null
null
null
2410.18481
Dialog2Flow: Pre-training Soft-Contrastive Action-Driven Sentence Embeddings for Automatic Dialog Flow Extraction
['Sergio Burdisso', 'Srikanth Madikeri', 'Petr Motlicek']
['cs.CL', 'cs.AI', 'cs.LG']
Efficiently deriving structured workflows from unannotated dialogs remains an underexplored and formidable challenge in computational linguistics. Automating this process could significantly accelerate the manual design of workflows in new domains and enable the grounding of large language models in domain-specific flo...
2024-10-24T07:10:18Z
Accepted to EMNLP 2024 main conference
https://aclanthology.org/2024.emnlp-main.310/
null
Dialog2Flow: Pre-training Soft-Contrastive Action-Driven Sentence Embeddings for Automatic Dialog Flow Extraction
['Sergio Burdisso', 'S. Madikeri', 'P. Motlícek']
2024
Conference on Empirical Methods in Natural Language Processing
3
63
['Computer Science']
2410.18505
CCI3.0-HQ: a large-scale Chinese dataset of high quality designed for pre-training large language models
['Liangdong Wang', 'Bo-Wen Zhang', 'Chengwei Wu', 'Hanyu Zhao', 'Xiaofeng Shi', 'Shuhao Gu', 'Jijie Li', 'Quanyue Ma', 'TengFei Pan', 'Guang Liu']
['cs.CL']
We present CCI3.0-HQ (https://huggingface.co/datasets/BAAI/CCI3-HQ), a high-quality 500GB subset of the Chinese Corpora Internet 3.0 (CCI3.0)(https://huggingface.co/datasets/BAAI/CCI3-Data), developed using a novel two-stage hybrid filtering pipeline that significantly enhances data quality. To evaluate its effectivene...
2024-10-24T07:50:07Z
null
null
null
CCI3.0-HQ: a large-scale Chinese dataset of high quality designed for pre-training large language models
['Liangdong Wang', 'Bo-wen Zhang', 'Chengwei Wu', 'Hanyu Zhao', 'Xiaofeng Shi', 'Shuhao Gu', 'Jijie Li', 'Quanyue Ma', 'Tengfei Pan', 'Guang Liu']
2024
arXiv.org
4
20
['Computer Science']
2410.18514
Scaling up Masked Diffusion Models on Text
['Shen Nie', 'Fengqi Zhu', 'Chao Du', 'Tianyu Pang', 'Qian Liu', 'Guangtao Zeng', 'Min Lin', 'Chongxuan Li']
['cs.AI', 'cs.CL', 'cs.LG']
Masked diffusion models (MDMs) have shown promise in language modeling, yet their scalability and effectiveness in core language tasks, such as text generation and language understanding, remain underexplored. This paper establishes the first scaling law for MDMs, demonstrating a scaling rate comparable to autoregressi...
2024-10-24T08:01:22Z
null
null
null
Scaling up Masked Diffusion Models on Text
['Shen Nie', 'Fengqi Zhu', 'Chao Du', 'Tianyu Pang', 'Qian Liu', 'Guangtao Zeng', 'Min Lin', 'Chongxuan Li']
2024
International Conference on Learning Representations
30
81
['Computer Science']
2410.18558
Infinity-MM: Scaling Multimodal Performance with Large-Scale and High-Quality Instruction Data
['Shuhao Gu', 'Jialing Zhang', 'Siyuan Zhou', 'Kevin Yu', 'Zhaohu Xing', 'Liangdong Wang', 'Zhou Cao', 'Jintao Jia', 'Zhuoyi Zhang', 'Yixuan Wang', 'Zhenchong Hu', 'Bo-Wen Zhang', 'Jijie Li', 'Dong Liang', 'Yingli Zhao', 'Songjing Wang', 'Yulong Ao', 'Yiming Ju', 'Huanhuan Ma', 'Xiaotong Li', 'Haiwen Diao', 'Yufeng Cui...
['cs.CL']
Recently, Vision-Language Models (VLMs) have achieved remarkable progress in multimodal tasks, and multimodal instruction data serves as the foundation for enhancing VLM capabilities. Despite the availability of several open-source multimodal datasets, limitations in the scale and quality of open-source instruction dat...
2024-10-24T09:03:48Z
null
null
null
null
null
null
null
null
null
null
2410.18565
Bielik 7B v0.1: A Polish Language Model -- Development, Insights, and Evaluation
['Krzysztof Ociepa', 'Łukasz Flis', 'Krzysztof Wróbel', 'Adrian Gwoździej', 'Remigiusz Kinas']
['cs.CL', 'cs.AI', 'I.2.7']
We introduce Bielik 7B v0.1, a 7-billion-parameter generative text model for Polish language processing. Trained on curated Polish corpora, this model addresses key challenges in language model development through innovative techniques. These include Weighted Instruction Cross-Entropy Loss, which balances the learning ...
2024-10-24T09:16:09Z
null
null
null
Bielik 7B v0.1: A Polish Language Model - Development, Insights, and Evaluation
['Krzysztof Ociepa', 'Lukasz Flis', "Krzysztof Wr'obel", "Adrian Gwo'zdziej", 'Remigiusz Kinas']
2024
arXiv.org
4
49
['Computer Science']
2410.18603
AgentStore: Scalable Integration of Heterogeneous Agents As Specialized Generalist Computer Assistant
['Chengyou Jia', 'Minnan Luo', 'Zhuohang Dang', 'Qiushi Sun', 'Fangzhi Xu', 'Junlin Hu', 'Tianbao Xie', 'Zhiyong Wu']
['cs.AI', 'cs.RO']
Digital agents capable of automating complex computer tasks have attracted considerable attention due to their immense potential to enhance human-computer interaction. However, existing agent methods exhibit deficiencies in their generalization and specialization capabilities, especially in handling open-ended computer...
2024-10-24T09:58:40Z
null
null
null
null
null
null
null
null
null
null
2410.18634
Little Giants: Synthesizing High-Quality Embedding Data at Scale
['Haonan Chen', 'Liang Wang', 'Nan Yang', 'Yutao Zhu', 'Ziliang Zhao', 'Furu Wei', 'Zhicheng Dou']
['cs.CL', 'cs.AI', 'cs.IR']
Synthetic data generation has become an increasingly popular way of training models without the need for large, manually labeled datasets. For tasks like text embedding, synthetic data offers diverse and scalable training examples, significantly reducing the cost of human annotation. However, most current approaches re...
2024-10-24T10:47:30Z
null
null
null
null
null
null
null
null
null
null
2410.18666
DreamClear: High-Capacity Real-World Image Restoration with Privacy-Safe Dataset Curation
['Yuang Ai', 'Xiaoqiang Zhou', 'Huaibo Huang', 'Xiaotian Han', 'Zhengyu Chen', 'Quanzeng You', 'Hongxia Yang']
['cs.CV']
Image restoration (IR) in real-world scenarios presents significant challenges due to the lack of high-capacity models and comprehensive datasets. To tackle these issues, we present a dual strategy: GenIR, an innovative data curation pipeline, and DreamClear, a cutting-edge Diffusion Transformer (DiT)-based image resto...
2024-10-24T11:57:20Z
Accepted by NeurIPS 2024
null
null
null
null
null
null
null
null
null
2410.18693
Unleashing LLM Reasoning Capability via Scalable Question Synthesis from Scratch
['Yuyang Ding', 'Xinyu Shi', 'Xiaobo Liang', 'Juntao Li', 'Zhaopeng Tu', 'Qiaoming Zhu', 'Min Zhang']
['cs.CL', 'cs.AI']
Improving the mathematical reasoning capabilities of Large Language Models (LLMs) is critical for advancing artificial intelligence. However, access to extensive, diverse, and high-quality reasoning datasets remains a significant challenge, particularly for the open-source community. In this paper, we propose ScaleQues...
2024-10-24T12:42:04Z
ACL 2025
null
null
null
null
null
null
null
null
null
2410.18775
Robust Watermarking Using Generative Priors Against Image Editing: From Benchmarking to Advances
['Shilin Lu', 'Zihan Zhou', 'Jiayou Lu', 'Yuanzhi Zhu', 'Adams Wai-Kin Kong']
['cs.CV', 'cs.AI', 'cs.CR']
Current image watermarking methods are vulnerable to advanced image editing techniques enabled by large-scale text-to-image models. These models can distort embedded watermarks during editing, posing significant challenges to copyright protection. In this work, we introduce W-Bench, the first comprehensive benchmark de...
2024-10-24T14:28:32Z
Accepted by ICLR 2025
null
null
null
null
null
null
null
null
null
2410.18857
Probabilistic Language-Image Pre-Training
['Sanghyuk Chun', 'Wonjae Kim', 'Song Park', 'Sangdoo Yun']
['cs.CV', 'cs.LG']
Vision-language models (VLMs) embed aligned image-text pairs into a joint space but often rely on deterministic embeddings, assuming a one-to-one correspondence between images and texts. This oversimplifies real-world relationships, which are inherently many-to-many, with multiple captions describing a single image and...
2024-10-24T15:42:25Z
Code: https://github.com/naver-ai/prolip HuggingFace Hub: https://huggingface.co/collections/SanghyukChun/prolip-6712595dfc87fd8597350291 33 pages, 4.8 MB; LongProLIP paper: arXiv:2503.08048
null
null
Probabilistic Language-Image Pre-Training
['Sanghyuk Chun', 'Wonjae Kim', 'Song Park', 'Sangdoo Yun']
2024
International Conference on Learning Representations
6
78
['Computer Science']
2410.18902
LLMs for Extremely Low-Resource Finno-Ugric Languages
['Taido Purason', 'Hele-Andra Kuulmets', 'Mark Fishel']
['cs.CL']
The advancement of large language models (LLMs) has predominantly focused on high-resource languages, leaving low-resource languages, such as those in the Finno-Ugric family, significantly underrepresented. This paper addresses this gap by focusing on V\~oro, Livonian, and Komi. We cover almost the entire cycle of LLM ...
2024-10-24T16:48:12Z
null
Findings of the Association for Computational Linguistics: NAACL 2025, pages 6677-6697
null
null
null
null
null
null
null
null
2410.18977
Pay Attention and Move Better: Harnessing Attention for Interactive Motion Generation and Training-free Editing
['Ling-Hao Chen', 'Shunlin Lu', 'Wenxun Dai', 'Zhiyang Dou', 'Xuan Ju', 'Jingbo Wang', 'Taku Komura', 'Lei Zhang']
['cs.CV']
This research delves into the problem of interactive editing of human motion generation. Previous motion diffusion models lack explicit modeling of the word-level text-motion correspondence and good explainability, hence restricting their fine-grained editing ability. To address this issue, we propose an attention-base...
2024-10-24T17:59:45Z
Updated MotionCLR technical report
null
null
null
null
null
null
null
null
null
2410.18978
Framer: Interactive Frame Interpolation
['Wen Wang', 'Qiuyu Wang', 'Kecheng Zheng', 'Hao Ouyang', 'Zhekai Chen', 'Biao Gong', 'Hao Chen', 'Yujun Shen', 'Chunhua Shen']
['cs.CV']
We propose Framer for interactive frame interpolation, which targets producing smoothly transitioning frames between two images as per user creativity. Concretely, besides taking the start and end frames as inputs, our approach supports customizing the transition process by tailoring the trajectory of some selected key...
2024-10-24T17:59:51Z
Project page: https://aim-uofa.github.io/Framer/
null
null
null
null
null
null
null
null
null
2410.19008
Teach Multimodal LLMs to Comprehend Electrocardiographic Images
['Ruoqi Liu', 'Yuelin Bai', 'Xiang Yue', 'Ping Zhang']
['eess.IV', 'cs.AI', 'cs.CV']
The electrocardiogram (ECG) is an essential non-invasive diagnostic tool for assessing cardiac conditions. Existing automatic interpretation methods suffer from limited generalizability, focusing on a narrow range of cardiac conditions, and typically depend on raw physiological signals, which may not be readily availab...
2024-10-21T20:26:41Z
null
null
null
null
null
null
null
null
null
null
2410.19278
Applying sparse autoencoders to unlearn knowledge in language models
['Eoin Farrell', 'Yeu-Tong Lau', 'Arthur Conmy']
['cs.LG', 'cs.AI']
We investigate whether sparse autoencoders (SAEs) can be used to remove knowledge from language models. We use the biology subset of the Weapons of Mass Destruction Proxy dataset and test on the gemma-2b-it and gemma-2-2b-it language models. We demonstrate that individual interpretable biology-related SAE features can ...
2024-10-25T03:21:14Z
null
null
null
null
null
null
null
null
null
null
2410.1929
Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning
['Yujian Liu', 'Shiyu Chang', 'Tommi Jaakkola', 'Yang Zhang']
['cs.CL']
Recent studies have identified one aggravating factor of LLM hallucinations as the knowledge inconsistency between pre-training and fine-tuning, where unfamiliar fine-tuning data mislead the LLM to fabricate plausible but wrong outputs. In this paper, we propose a novel fine-tuning strategy called Prereq-Tune to addres...
2024-10-25T03:48:51Z
null
null
null
Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning
['Yujian Liu', 'Shiyu Chang', 'T. Jaakkola', 'Yang Zhang']
2024
International Conference on Learning Representations
1
45
['Computer Science']
2410.19324
Simpler Diffusion (SiD2): 1.5 FID on ImageNet512 with pixel-space diffusion
['Emiel Hoogeboom', 'Thomas Mensink', 'Jonathan Heek', 'Kay Lamerigts', 'Ruiqi Gao', 'Tim Salimans']
['cs.CV', 'cs.LG', 'stat.ML']
Latent diffusion models have become the popular choice for scaling up diffusion models for high resolution image synthesis. Compared to pixel-space models that are trained end-to-end, latent models are perceived to be more efficient and to produce higher image quality at high resolution. Here we challenge these notions...
2024-10-25T06:20:06Z
Accepted to CVPR 2025
null
null
null
null
null
null
null
null
null
2410.1959
MonoDGP: Monocular 3D Object Detection with Decoupled-Query and Geometry-Error Priors
['Fanqi Pu', 'Yifan Wang', 'Jiru Deng', 'Wenming Yang']
['cs.CV']
Perspective projection has been extensively utilized in monocular 3D object detection methods. It introduces geometric priors from 2D bounding boxes and 3D object dimensions to reduce the uncertainty of depth estimation. However, due to depth errors originating from the object's visual surface, the height of the boundi...
2024-10-25T14:31:43Z
null
null
null
MonoDGP: Monocular 3D Object Detection with Decoupled-Query and Geometry-Error Priors
['Fanqi Pu', 'Yifan Wang', 'Jiru Deng', 'Wenming Yang']
2024
Computer Vision and Pattern Recognition
3
67
['Computer Science']
2410.19635
Frozen-DETR: Enhancing DETR with Image Understanding from Frozen Foundation Models
['Shenghao Fu', 'Junkai Yan', 'Qize Yang', 'Xihan Wei', 'Xiaohua Xie', 'Wei-Shi Zheng']
['cs.CV']
Recent vision foundation models can extract universal representations and show impressive abilities in various tasks. However, their application on object detection is largely overlooked, especially without fine-tuning them. In this work, we show that frozen foundation models can be a versatile feature enhancer, even t...
2024-10-25T15:38:24Z
Accepted to NeurIPS 2024
null
null
null
null
null
null
null
null
null
2410.19704
Multi-view biomedical foundation models for molecule-target and property prediction
['Parthasarathy Suryanarayanan', 'Yunguang Qiu', 'Shreyans Sethi', 'Diwakar Mahajan', 'Hongyang Li', 'Yuxin Yang', 'Elif Eyigoz', 'Aldo Guzman Saenz', 'Daniel E. Platt', 'Timothy H. Rumbell', 'Kenney Ng', 'Sanjoy Dey', 'Myson Burch', 'Bum Chul Kwon', 'Pablo Meyer', 'Feixiong Cheng', 'Jianying Hu', 'Joseph A. Morrone']
['q-bio.BM', 'cs.AI', 'cs.LG']
Quality molecular representations are key to foundation model development in bio-medical research. Previous efforts have typically focused on a single representation or molecular view, which may have strengths or weaknesses on a given task. We develop Multi-view Molecular Embedding with Late Fusion (MMELON), an approac...
2024-10-25T17:22:33Z
40 pages including supplement. 10 figures, 8 tables
null
null
null
null
null
null
null
null
null
2410.19818
UniMTS: Unified Pre-training for Motion Time Series
['Xiyuan Zhang', 'Diyan Teng', 'Ranak Roy Chowdhury', 'Shuheng Li', 'Dezhi Hong', 'Rajesh K. Gupta', 'Jingbo Shang']
['eess.SP', 'cs.AI', 'cs.LG']
Motion time series collected from mobile and wearable devices such as smartphones and smartwatches offer significant insights into human behavioral patterns, with wide applications in healthcare, automation, IoT, and AR/XR due to their low-power, always-on nature. However, given security and privacy concerns, building ...
2024-10-18T06:39:13Z
NeurIPS 2024. Code: https://github.com/xiyuanzh/UniMTS. Model: https://huggingface.co/xiyuanz/UniMTS
null
null
null
null
null
null
null
null
null
2,410.20088
RARe: Retrieval Augmented Retrieval with In-Context Examples
['Atula Tejaswi', 'Yoonsang Lee', 'Sujay Sanghavi', 'Eunsol Choi']
['cs.CL', 'cs.AI', 'cs.IR']
We investigate whether in-context examples, widely used in decoder-only language models (LLMs), can improve embedding model performance in retrieval tasks. Unlike in LLMs, naively prepending in-context examples (query-document pairs) to the target query at inference time does not work out of the box. We introduce a sim...
2024-10-26T05:46:20Z
null
null
null
null
null
null
null
null
null
null
2,410.20163
UniHGKR: Unified Instruction-aware Heterogeneous Knowledge Retrievers
['Dehai Min', 'Zhiyang Xu', 'Guilin Qi', 'Lifu Huang', 'Chenyu You']
['cs.IR', 'cs.CL']
Existing information retrieval (IR) models often assume a homogeneous structure for knowledge sources and user queries, limiting their applicability in real-world settings where retrieval is inherently heterogeneous and diverse. In this paper, we introduce UniHGKR, a unified instruction-aware heterogeneous knowledge re...
2024-10-26T12:34:07Z
NAACL 2025, Main, Long Paper
null
null
null
null
null
null
null
null
null
2,410.20202
An Efficient Watermarking Method for Latent Diffusion Models via Low-Rank Adaptation
['Dongdong Lin', 'Yue Li', 'Benedetta Tondi', 'Bin Li', 'Mauro Barni']
['cs.CV']
The rapid proliferation of deep neural networks (DNNs) is driving a surge in model watermarking technologies, as the trained deep models themselves serve as intellectual properties. The core of existing model watermarking techniques involves modifying or tuning the models' weights. However, with the emergence of increa...
2024-10-26T15:23:49Z
null
null
null
An Efficient Watermarking Method for Latent Diffusion Models via Low-Rank Adaptation
['Dongdong Lin', 'Yue Li', 'B. Tondi', 'Bin Li', 'Mauro Barni']
2,024
arXiv.org
1
37
['Computer Science']
2,410.20268
Centaur: a foundation model of human cognition
['Marcel Binz', 'Elif Akata', 'Matthias Bethge', 'Franziska Brändle', 'Fred Callaway', 'Julian Coda-Forno', 'Peter Dayan', 'Can Demircan', 'Maria K. Eckstein', 'Noémi Éltető', 'Thomas L. Griffiths', 'Susanne Haridi', 'Akshay K. Jagadish', 'Li Ji-An', 'Alexander Kipnis', 'Sreejan Kumar', 'Tobias Ludwig', 'Marvin Mathony...
['cs.LG']
Establishing a unified theory of cognition has been a major goal of psychology. While there have been previous attempts to instantiate such theories by building computational models, we currently do not have one model that captures the human mind in its entirety. A first step in this direction is to create a model that...
2024-10-26T20:39:41Z
null
null
null
null
null
null
null
null
null
null
2,410.20526
Llama Scope: Extracting Millions of Features from Llama-3.1-8B with Sparse Autoencoders
['Zhengfu He', 'Wentao Shu', 'Xuyang Ge', 'Lingjie Chen', 'Junxuan Wang', 'Yunhua Zhou', 'Frances Liu', 'Qipeng Guo', 'Xuanjing Huang', 'Zuxuan Wu', 'Yu-Gang Jiang', 'Xipeng Qiu']
['cs.LG', 'cs.CL']
Sparse Autoencoders (SAEs) have emerged as a powerful unsupervised method for extracting sparse representations from language models, yet scalable training remains a significant challenge. We introduce a suite of 256 SAEs, trained on each layer and sublayer of the Llama-3.1-8B-Base model, with 32K and 128K features. Mo...
2024-10-27T17:33:49Z
22 pages, 12 figures
null
null
null
null
null
null
null
null
null
2,410.20527
CodeRosetta: Pushing the Boundaries of Unsupervised Code Translation for Parallel Programming
['Ali TehraniJamsaz', 'Arijit Bhattacharjee', 'Le Chen', 'Nesreen K. Ahmed', 'Amir Yazdanbakhsh', 'Ali Jannesari']
['cs.DC', 'cs.AI', 'cs.LG', 'cs.PF', 'cs.PL', 'cs.SE']
Recent advancements in Large Language Models (LLMs) have renewed interest in automatic programming language translation. Encoder-decoder transformer models, in particular, have shown promise in translating between different programming languages. However, translating between a language and its high-performance computin...
2024-10-27T17:34:07Z
null
null
null
null
null
null
null
null
null
null
2,410.20651
SubjECTive-QA: Measuring Subjectivity in Earnings Call Transcripts' QA Through Six-Dimensional Feature Analysis
['Huzaifa Pardawala', 'Siddhant Sukhani', 'Agam Shah', 'Veer Kejriwal', 'Abhishek Pillai', 'Rohan Bhasin', 'Andrew DiBiasio', 'Tarun Mandapati', 'Dhruv Adha', 'Sudheer Chava']
['cs.CL', 'cs.AI']
Fact-checking is extensively studied in the context of misinformation and disinformation, addressing objective inaccuracies. However, a softer form of misinformation involves responses that are factually correct but lack certain features such as clarity and relevance. This challenge is prevalent in formal Question-Answ...
2024-10-28T01:17:34Z
Accepted at NeurIPS 2024
null
null
SubjECTive-QA: Measuring Subjectivity in Earnings Call Transcripts' QA Through Six-Dimensional Feature Analysis
['Huzaifa Pardawala', 'Siddhant Sukhani', 'Agam Shah', 'Veer Kejriwal', 'Abhishek Pillai', 'Rohan Bhasin', 'Andrew DiBiasio', 'Tarun Mandapati', 'Dhruv Adha', 'S. Chava']
2,024
Neural Information Processing Systems
2
64
['Computer Science']
2,410.20722
Interpretable Image Classification with Adaptive Prototype-based Vision Transformers
['Chiyu Ma', 'Jon Donnelly', 'Wenjun Liu', 'Soroush Vosoughi', 'Cynthia Rudin', 'Chaofan Chen']
['cs.CV']
We present ProtoViT, a method for interpretable image classification combining deep learning and case-based reasoning. This method classifies an image by comparing it to a set of learned prototypes, providing explanations of the form "this looks like that." In our model, a prototype consists of parts, which ...
2024-10-28T04:33:28Z
null
null
null
null
null
null
null
null
null
null
2,410.20771
MrT5: Dynamic Token Merging for Efficient Byte-level Language Models
['Julie Kallini', 'Shikhar Murty', 'Christopher D. Manning', 'Christopher Potts', 'Róbert Csordás']
['cs.CL', 'cs.AI', 'cs.LG']
Models that rely on subword tokenization have significant drawbacks, such as sensitivity to character-level noise like spelling errors and inconsistent compression rates across different languages and scripts. While character- or byte-level models like ByT5 attempt to address these concerns, they have not gained widesp...
2024-10-28T06:14:12Z
null
null
null
null
null
null
null
null
null
null
2,410.20898
David and Goliath: Small One-step Model Beats Large Diffusion with Score Post-training
['Weijian Luo', 'Colin Zhang', 'Debing Zhang', 'Zhengyang Geng']
['cs.CV', 'cs.AI', 'cs.LG', 'cs.MM']
We propose Diff-Instruct* (DI*), a data-efficient post-training approach for one-step text-to-image generative models to improve their alignment with human preferences without requiring image data. Our method frames alignment as online reinforcement learning from human feedback (RLHF), which optimizes the one-step model to maximize hum...
2024-10-28T10:26:19Z
Revision: paper accepted by the ICML 2025 main conference
null
null
null
null
null
null
null
null
null
2,410.20964
DeTeCtive: Detecting AI-generated Text via Multi-Level Contrastive Learning
['Xun Guo', 'Shan Zhang', 'Yongxin He', 'Ting Zhang', 'Wanquan Feng', 'Haibin Huang', 'Chongyang Ma']
['cs.CL', 'cs.AI', 'cs.LG']
Current techniques for detecting AI-generated text are largely confined to manual feature crafting and supervised binary classification paradigms. These methodologies typically lead to performance bottlenecks and unsatisfactory generalizability. Consequently, these methods are often inapplicable for out-of-distribution...
2024-10-28T12:34:49Z
To appear in NeurIPS 2024. Code is available at https://github.com/heyongxin233/DeTeCtive
null
null
null
null
null
null
null
null
null
2,410.21035
Beyond Autoregression: Fast LLMs via Self-Distillation Through Time
['Justin Deschenaux', 'Caglar Gulcehre']
['cs.LG', 'cs.CL']
Autoregressive (AR) Large Language Models (LLMs) have demonstrated significant success across numerous tasks. However, the AR modeling paradigm presents certain limitations; for instance, contemporary autoregressive LLMs are trained to generate one token at a time, which can result in noticeable latency. Recent advance...
2024-10-28T13:56:30Z
null
null
null
null
null
null
null
null
null
null
2,410.21139
uOttawa at LegalLens-2024: Transformer-based Classification Experiments
['Nima Meghdadi', 'Diana Inkpen']
['cs.CL']
This paper presents the methods used for LegalLens-2024 shared task, which focused on detecting legal violations within unstructured textual data and associating these violations with potentially affected individuals. The shared task included two subtasks: A) Legal Named Entity Recognition (L-NER) and B) Legal Natural ...
2024-10-28T15:42:45Z
Just accepted at the EMNLP conference
null
null
null
null
null
null
null
null
null
2,410.21228
LoRA vs Full Fine-tuning: An Illusion of Equivalence
['Reece Shuttleworth', 'Jacob Andreas', 'Antonio Torralba', 'Pratyusha Sharma']
['cs.LG', 'cs.CL']
Fine-tuning is a crucial paradigm for adapting pre-trained large language models to downstream tasks. Recently, methods like Low-Rank Adaptation (LoRA) have been shown to effectively fine-tune LLMs with an extreme reduction in trainable parameters. But are their learned solutions really equivalent? We study how...
2024-10-28T17:14:01Z
null
null
null
null
null
null
null
null
null
null
2,410.21252
LongReward: Improving Long-context Large Language Models with AI Feedback
['Jiajie Zhang', 'Zhongni Hou', 'Xin Lv', 'Shulin Cao', 'Zhenyu Hou', 'Yilin Niu', 'Lei Hou', 'Yuxiao Dong', 'Ling Feng', 'Juanzi Li']
['cs.CL', 'cs.LG']
Though significant advancements have been achieved in developing long-context large language models (LLMs), the compromised quality of LLM-synthesized data for supervised fine-tuning (SFT) often affects the long-context performance of SFT models and leads to inherent limitations. In principle, reinforcement learning (R...
2024-10-28T17:50:42Z
null
null
null
null
null
null
null
null
null
null
2,410.21264
LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior
['Hanyu Wang', 'Saksham Suri', 'Yixuan Ren', 'Hao Chen', 'Abhinav Shrivastava']
['cs.CV', 'cs.AI']
We present LARP, a novel video tokenizer designed to overcome limitations in current video tokenization methods for autoregressive (AR) generative models. Unlike traditional patchwise tokenizers that directly encode local visual patches into discrete tokens, LARP introduces a holistic tokenization scheme that gathers i...
2024-10-28T17:57:07Z
ICLR 2025. Project page: https://hywang66.github.io/larp/
null
null
LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior
['Hanyu Wang', 'Saksham Suri', 'Yixuan Ren', 'Hao Chen', 'Abhinav Shrivastava']
2,024
International Conference on Learning Representations
12
72
['Computer Science']
2,410.21485
SpeechQE: Estimating the Quality of Direct Speech Translation
['HyoJung Han', 'Kevin Duh', 'Marine Carpuat']
['cs.CL']
Recent advances in automatic quality estimation for machine translation have exclusively focused on written language, leaving the speech modality underexplored. In this work, we formulate the task of quality estimation for speech translation (SpeechQE), construct a benchmark, and evaluate a family of systems based on c...
2024-10-28T19:50:04Z
EMNLP 2024
null
null
null
null
null
null
null
null
null
2,410.21638
Adapting Diffusion Models for Improved Prompt Compliance and Controllable Image Synthesis
['Deepak Sridhar', 'Abhishek Peri', 'Rohith Rachala', 'Nuno Vasconcelos']
['cs.CV']
Recent advances in generative modeling with diffusion processes (DPs) enabled breakthroughs in image synthesis. Despite impressive image quality, these models have various prompt compliance problems, including low recall in generating multiple objects, difficulty in generating text in images, and meeting constraints li...
2024-10-29T00:54:00Z
Accepted to NeurIPS 2024 conference. Project Page: https://deepaksridhar.github.io/factorgraphdiffusion.github.io/
null
null
Adapting Diffusion Models for Improved Prompt Compliance and Controllable Image Synthesis
['Deepak Sridhar', 'Abhishek Peri', 'Rohith Rachala', 'Nuno Vasconcelos']
2,024
Neural Information Processing Systems
1
60
['Computer Science']
2,410.21723
Fine-tuning Large Language Models for DGA and DNS Exfiltration Detection
['Md Abu Sayed', 'Asif Rahman', 'Christopher Kiekintveld', 'Sebastian Garcia']
['cs.CR']
Domain Generation Algorithms (DGAs) are malicious techniques used by malware to dynamically generate seemingly random domain names for communication with Command & Control (C&C) servers. Due to the fast and simple generation of DGA domains, detection methods must be highly efficient and precise to be effective. Large L...
2024-10-29T04:22:28Z
Accepted in Proceedings of the Workshop at AI for Cyber Threat Intelligence (WAITI), 2024
null
null
null
null
null
null
null
null
null
2,410.21801
PerSRV: Personalized Sticker Retrieval with Vision-Language Model
['Heng Er Metilda Chee', 'Jiayin Wang', 'Zhiqiang Guo', 'Weizhi Ma', 'Min Zhang']
['cs.IR']
Instant Messaging is a popular means for daily communication, allowing users to send text and stickers. As the saying goes, "a picture is worth a thousand words", so developing an effective sticker retrieval technique is crucial for enhancing user experience. However, existing sticker retrieval methods rely on labeled ...
2024-10-29T07:13:47Z
Accepted at WWW '25
null
10.1145/3696410.3714772
null
null
null
null
null
null
null