Columns (name: dtype, observed range):

arxiv_id: float64 (values 1.5k – 2.51k)
title: string (lengths 9 – 178)
authors: string (lengths 2 – 22.8k)
categories: string (lengths 4 – 146)
summary: string (lengths 103 – 1.92k)
published: date string (2015-02-06 10:44:00 – 2025-07-10 17:59:58)
comments: string (lengths 2 – 417)
journal_ref: string (321 classes)
doi: string (398 classes)
ss_title: string (lengths 8 – 159)
ss_authors: string (lengths 11 – 8.38k)
ss_year: float64 (values 2.02k – 2.03k)
ss_venue: string (281 classes)
ss_citationCount: float64 (values 0 – 134k)
ss_referenceCount: float64 (values 0 – 429)
ss_fieldsOfStudy: string (47 classes)
arxiv_id: 2410.21845
title: Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning
authors: ['Jianlan Luo', 'Charles Xu', 'Jeffrey Wu', 'Sergey Levine']
categories: ['cs.RO', 'cs.AI']
summary: Reinforcement learning (RL) holds great promise for enabling autonomous acquisition of complex robotic manipulation skills, but realizing this potential in real-world settings has been challenging. We present a human-in-the-loop vision-based RL system that demonstrates impressive performance on a diverse set of dextero...
published: 2024-10-29T08:12:20Z

arxiv_id: 2410.21966
title: PrefPaint: Aligning Image Inpainting Diffusion Model with Human Preference
authors: ['Kendong Liu', 'Zhiyu Zhu', 'Chuanhao Li', 'Hui Liu', 'Huanqiang Zeng', 'Junhui Hou']
categories: ['cs.CV']
summary: In this paper, we make the first attempt to align diffusion models for image inpainting with human aesthetic standards via a reinforcement learning framework, significantly improving the quality and visual appeal of inpainted images. Specifically, instead of directly measuring the divergence with paired images, we trai...
published: 2024-10-29T11:49:39Z

arxiv_id: 2410.21969
title: BenchX: A Unified Benchmark Framework for Medical Vision-Language Pretraining on Chest X-Rays
authors: ['Yang Zhou', 'Tan Li Hui Faith', 'Yanyu Xu', 'Sicong Leng', 'Xinxing Xu', 'Yong Liu', 'Rick Siow Mong Goh']
categories: ['cs.CV']
summary: Medical Vision-Language Pretraining (MedVLP) shows promise in learning generalizable and transferable visual representations from paired and unpaired medical images and reports. MedVLP can provide useful features to downstream tasks and facilitate adapting task-specific models to new setups using fewer examples. Howeve...
published: 2024-10-29T11:53:18Z
comments: Accepted to NeurIPS24 Datasets and Benchmarks Track

arxiv_id: 2410.22143
title: AmpleGCG-Plus: A Strong Generative Model of Adversarial Suffixes to Jailbreak LLMs with Higher Success Rates in Fewer Attempts
authors: ['Vishal Kumar', 'Zeyi Liao', 'Jaylen Jones', 'Huan Sun']
categories: ['cs.CL']
summary: Although large language models (LLMs) are typically aligned, they remain vulnerable to jailbreaking through either carefully crafted prompts in natural language or, interestingly, gibberish adversarial suffixes. However, gibberish tokens have received relatively less attention despite their success in attacking aligned...
published: 2024-10-29T15:40:07Z

arxiv_id: 2410.22284
title: Embedding-based classifiers can detect prompt injection attacks
authors: ['Md. Ahsan Ayub', 'Subhabrata Majumdar']
categories: ['cs.CR', 'cs.LG']
summary: Large Language Models (LLMs) are seeing significant adoption in every type of organization due to their exceptional generative capabilities. However, LLMs are found to be vulnerable to various adversarial attacks, particularly prompt injection attacks, which trick them into producing harmful or inappropriate content. A...
published: 2024-10-29T17:36:59Z

arxiv_id: 2410.22313
title: Senna: Bridging Large Vision-Language Models and End-to-End Autonomous Driving
authors: ['Bo Jiang', 'Shaoyu Chen', 'Bencheng Liao', 'Xingyu Zhang', 'Wei Yin', 'Qian Zhang', 'Chang Huang', 'Wenyu Liu', 'Xinggang Wang']
categories: ['cs.CV', 'cs.RO']
summary: End-to-end autonomous driving demonstrates strong planning capabilities with large-scale data but still struggles in complex, rare scenarios due to limited commonsense. In contrast, Large Vision-Language Models (LVLMs) excel in scene understanding and reasoning. The path forward lies in merging the strengths of both ap...
published: 2024-10-29T17:53:56Z
comments: Project Page: https://github.com/hustvl/Senna

arxiv_id: 2410.22325
title: Robots Pre-train Robots: Manipulation-Centric Robotic Representation from Large-Scale Robot Datasets
authors: ['Guangqi Jiang', 'Yifei Sun', 'Tao Huang', 'Huanyu Li', 'Yongyuan Liang', 'Huazhe Xu']
categories: ['cs.RO', 'cs.AI', 'cs.CV']
summary: The pre-training of visual representations has enhanced the efficiency of robot learning. Due to the lack of large-scale in-domain robotic datasets, prior works utilize in-the-wild human videos to pre-train robotic visual representation. Despite their promising results, representations from human videos are inevitably ...
published: 2024-10-29T17:58:13Z

arxiv_id: 2410.22332
title: Local Policies Enable Zero-shot Long-horizon Manipulation
authors: ['Murtaza Dalal', 'Min Liu', 'Walter Talbott', 'Chen Chen', 'Deepak Pathak', 'Jian Zhang', 'Ruslan Salakhutdinov']
categories: ['cs.RO', 'cs.CV', 'cs.LG']
summary: Sim2real for robotic manipulation is difficult due to the challenges of simulating complex contacts and generating realistic task distributions. To tackle the latter problem, we introduce ManipGen, which leverages a new class of policies for sim2real transfer: local policies. Locality enables a variety of appealing pro...
published: 2024-10-29T17:59:55Z
comments: ICRA 2025 accepted paper. Main Paper 7 pages, 3 tables, 3 figures. Appendix 6 pages, 2 figures, 6 tables

arxiv_id: 2410.22366
title: One-Step is Enough: Sparse Autoencoders for Text-to-Image Diffusion Models
authors: ['Viacheslav Surkov', 'Chris Wendler', 'Antonio Mari', 'Mikhail Terekhov', 'Justin Deschenaux', 'Robert West', 'Caglar Gulcehre', 'David Bau']
categories: ['cs.LG', 'cs.AI', 'cs.CV']
summary: For large language models (LLMs), sparse autoencoders (SAEs) have been shown to decompose intermediate representations that often are not interpretable directly into sparse sums of interpretable features, facilitating better control and subsequent analysis. However, similar analyses and approaches have been lacking for...
published: 2024-10-28T19:01:18Z
ss_title: One-Step is Enough: Sparse Autoencoders for Text-to-Image Diffusion Models
ss_authors: ['Viacheslav Surkov', 'Chris Wendler', 'Antonio Mari', 'Mikhail Terekhov', 'Justin Deschenaux', 'Robert West', 'Caglar Gulcehre', 'David Bau']
ss_year: 2024
ss_citationCount: 13
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2410.22367
title: MAMMAL -- Molecular Aligned Multi-Modal Architecture and Language
authors: ['Yoel Shoshan', 'Moshiko Raboh', 'Michal Ozery-Flato', 'Vadim Ratner', 'Alex Golts', 'Jeffrey K. Weber', 'Ella Barkan', 'Simona Rabinovici-Cohen', 'Sagi Polaczek', 'Ido Amos', 'Ben Shapira', 'Liam Hazan', 'Matan Ninio', 'Sivan Ravid', 'Michael M. Danziger', 'Yosi Shamay', 'Sharon Kurant', 'Joseph A. Morrone', 'Parthas...
categories: ['q-bio.QM', 'cs.AI', 'cs.LG']
summary: Large language models applied to vast biological datasets have the potential to transform biology by uncovering disease mechanisms and accelerating drug development. However, current models are often siloed, trained separately on small-molecules, proteins, or transcriptomic data, limiting their ability to capture compl...
published: 2024-10-28T20:45:52Z

arxiv_id: 2410.22587
title: Toxicity of the Commons: Curating Open-Source Pre-Training Data
authors: ['Catherine Arnett', 'Eliot Jones', 'Ivan P. Yamshchikov', 'Pierre-Carl Langlais']
categories: ['cs.CL']
summary: Open-source large language models are becoming increasingly available and popular among researchers and practitioners. While significant progress has been made on open-weight models, open training data is a practice yet to be adopted by the leading open-weight models creators. At the same time, there researchers are wo...
published: 2024-10-29T23:00:05Z

arxiv_id: 2410.22655
title: FlowDCN: Exploring DCN-like Architectures for Fast Image Generation with Arbitrary Resolution
authors: ['Shuai Wang', 'Zexian Li', 'Tianhui Song', 'Xubin Li', 'Tiezheng Ge', 'Bo Zheng', 'Limin Wang']
categories: ['cs.CV']
summary: Arbitrary-resolution image generation still remains a challenging task in AIGC, as it requires handling varying resolutions and aspect ratios while maintaining high visual quality. Existing transformer-based diffusion methods suffer from quadratic computation cost and limited resolution extrapolation capabilities, maki...
published: 2024-10-30T02:48:50Z
comments: Accepted on NeurIPS24
ss_title: FlowDCN: Exploring DCN-like Architectures for Fast Image Generation with Arbitrary Resolution
ss_authors: ['Shuai Wang', 'Zexian Li', 'Tian-Shu Song', 'Xubin Li', 'Tiezheng Ge', 'Bo Zheng', 'Limin Wang']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 3
ss_referenceCount: 42
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2410.22770
title: InjecGuard: Benchmarking and Mitigating Over-defense in Prompt Injection Guardrail Models
authors: ['Hao Li', 'Xiaogeng Liu']
categories: ['cs.CL', 'cs.AI', 'cs.CR']
summary: Prompt injection attacks pose a critical threat to large language models (LLMs), enabling goal hijacking and data leakage. Prompt guard models, though effective in defense, suffer from over-defense -- falsely flagging benign inputs as malicious due to trigger word bias. To address this issue, we introduce NotInject, an...
published: 2024-10-30T07:39:42Z

arxiv_id: 2410.22886
title: Less is More: Pre-Training Cross-Lingual Small-Scale Language Models with Cognitively-Plausible Curriculum Learning Strategies
authors: ['Suchir Salhan', 'Richard Diehl Martinez', 'Zébulon Goriely', 'Paula Buttery']
categories: ['cs.CL', 'cs.AI']
summary: Curriculum Learning has been a popular strategy to improve the cognitive plausibility of Small-Scale Language Models (SSLMs) in the BabyLM Challenge. However, it has not led to considerable improvements over non-curriculum models. We assess whether theoretical linguistic acquisition theories can be used to specify more...
published: 2024-10-30T10:31:54Z
comments: BabyLM Shared Task 2024 (Accepted, Poster), co-located in EMNLP 2024

arxiv_id: 2410.22901
title: HelloMeme: Integrating Spatial Knitting Attentions to Embed High-Level and Fidelity-Rich Conditions in Diffusion Models
authors: ['Shengkai Zhang', 'Nianhong Jiao', 'Tian Li', 'Chaojie Yang', 'Chenhui Xue', 'Boya Niu', 'Jun Gao']
categories: ['cs.CV', '68T07 (Primary) 68T10', 'I.4.5; I.5.0']
summary: We propose an effective method for inserting adapters into text-to-image foundation models, which enables the execution of complex downstream tasks while preserving the generalization ability of the base model. The core idea of this method is to optimize the attention mechanism related to 2D feature maps, which enhance...
published: 2024-10-30T11:00:51Z
comments: 11 pages, 7 figures, 2 tables
ss_title: HelloMeme: Integrating Spatial Knitting Attentions to Embed High-Level and Fidelity-Rich Conditions in Diffusion Models
ss_authors: ['Shengkai Zhang', 'Nianhong Jiao', 'Tian Li', 'Chaojie Yang', 'Chenhui Xue', 'Boya Niu', 'Jun Gao']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 3
ss_referenceCount: 26
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2410.22906
title: From Babble to Words: Pre-Training Language Models on Continuous Streams of Phonemes
authors: ['Zébulon Goriely', 'Richard Diehl Martinez', 'Andrew Caines', 'Lisa Beinborn', 'Paula Buttery']
categories: ['cs.CL']
summary: Language models are typically trained on large corpora of text in their default orthographic form. However, this is not the only option; representing data as streams of phonemes can offer unique advantages, from deeper insights into phonological language acquisition to improved performance on sound-based tasks. The cha...
published: 2024-10-30T11:05:01Z

arxiv_id: 2410.23132
title: Revisiting MAE pre-training for 3D medical image segmentation
authors: ['Tassilo Wald', 'Constantin Ulrich', 'Stanislav Lukyanenko', 'Andrei Goncharov', 'Alberto Paderno', 'Maximilian Miller', 'Leander Maerkisch', 'Paul F. Jäger', 'Klaus Maier-Hein']
categories: ['cs.CV', 'cs.AI', 'cs.LG']
summary: Self-Supervised Learning (SSL) presents an exciting opportunity to unlock the potential of vast, untapped clinical datasets, for various downstream applications that suffer from the scarcity of labeled data. While SSL has revolutionized fields like natural language processing and computer vision, its adoption in 3D med...
published: 2024-10-30T15:42:59Z
comments: CVPR 2025. Update to Camera-Ready

arxiv_id: 2410.23168
title: TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters
authors: ['Haiyang Wang', 'Yue Fan', 'Muhammad Ferjad Naeem', 'Yongqin Xian', 'Jan Eric Lenssen', 'Liwei Wang', 'Federico Tombari', 'Bernt Schiele']
categories: ['cs.LG']
summary: Transformers have become the predominant architecture in foundation models due to their excellent performance across various domains. However, the substantial cost of scaling these models remains a significant concern. This problem arises primarily from their dependence on a fixed number of parameters within linear pro...
published: 2024-10-30T16:19:00Z
comments: Accepted by ICLR for a spotlight presentation
ss_title: TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters
ss_authors: ['Haiyang Wang', 'Yue Fan', 'Muhammad Ferjad Naeem', 'Yongqin Xian', 'J. E. Lenssen', 'Liwei Wang', 'Federico Tombari', 'B. Schiele']
ss_year: 2024
ss_venue: International Conference on Learning Representations
ss_citationCount: 2
ss_referenceCount: 49
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2410.23218
title: OS-ATLAS: A Foundation Action Model for Generalist GUI Agents
authors: ['Zhiyong Wu', 'Zhenyu Wu', 'Fangzhi Xu', 'Yian Wang', 'Qiushi Sun', 'Chengyou Jia', 'Kanzhi Cheng', 'Zichen Ding', 'Liheng Chen', 'Paul Pu Liang', 'Yu Qiao']
categories: ['cs.CL', 'cs.CV', 'cs.HC']
summary: Existing efforts in building GUI agents heavily rely on the availability of robust commercial Vision-Language Models (VLMs) such as GPT-4o and GeminiProVision. Practitioners are often reluctant to use open-source VLMs due to their significant performance lag compared to their closed-source counterparts, particularly in...
published: 2024-10-30T17:10:19Z
ss_title: OS-ATLAS: A Foundation Action Model for Generalist GUI Agents
ss_authors: ['Zhiyong Wu', 'Zhenyu Wu', 'Fangzhi Xu', 'Yian Wang', 'Qiushi Sun', 'Chengyou Jia', 'Kanzhi Cheng', 'Zichen Ding', 'Liheng Chen', 'Paul Pu Liang', 'Yu Qiao']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 73
ss_referenceCount: 42
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2410.23332
title: MoLE: Enhancing Human-centric Text-to-image Diffusion via Mixture of Low-rank Experts
authors: ['Jie Zhu', 'Yixiong Chen', 'Mingyu Ding', 'Ping Luo', 'Leye Wang', 'Jingdong Wang']
categories: ['cs.CV', 'cs.AI', 'cs.LG']
summary: Text-to-image diffusion has attracted vast attention due to its impressive image-generation capabilities. However, when it comes to human-centric text-to-image generation, particularly in the context of faces and hands, the results often fall short of naturalness due to insufficient training priors. We alleviate the is...
published: 2024-10-30T17:59:57Z
comments: Published at NeurIPS 2024

arxiv_id: 2410.23370
title: Multilingual Vision-Language Pre-training for the Remote Sensing Domain
authors: ['João Daniel Silva', 'Joao Magalhaes', 'Devis Tuia', 'Bruno Martins']
categories: ['cs.CV']
summary: Methods based on Contrastive Language-Image Pre-training (CLIP) are nowadays extensively used in support of vision-and-language tasks involving remote sensing data, such as cross-modal retrieval. The adaptation of CLIP to this specific domain has relied on model fine-tuning with the standard contrastive objective, usin...
published: 2024-10-30T18:13:11Z
comments: Accepted at ACM SIGSPATIAL 2024 - Research Papers
doi: 10.1145/3678717.3691318

arxiv_id: 2410.23405
title: FlowLLM: Flow Matching for Material Generation with Large Language Models as Base Distributions
authors: ['Anuroop Sriram', 'Benjamin Kurt Miller', 'Ricky T. Q. Chen', 'Brandon M. Wood']
categories: ['cs.LG', 'cond-mat.mtrl-sci', 'cs.AI', 'stat.ML']
summary: Material discovery is a critical area of research with the potential to revolutionize various fields, including carbon capture, renewable energy, and electronics. However, the immense scale of the chemical space makes it challenging to explore all possible materials experimentally. In this paper, we introduce FlowLLM, ...
published: 2024-10-30T19:15:43Z
journal_ref: NeurIPS 2024

arxiv_id: 2410.23463
title: MDCure: A Scalable Pipeline for Multi-Document Instruction-Following
authors: ['Gabrielle Kaili-May Liu', 'Bowen Shi', 'Avi Caciularu', 'Idan Szpektor', 'Arman Cohan']
categories: ['cs.CL', 'cs.LG']
summary: Multi-document (MD) processing is crucial for LLMs to handle real-world tasks such as summarization and question-answering across large sets of documents. While LLMs have improved at processing long inputs, MD contexts still present unique difficulties, including management of inter-document dependencies, redundancy, a...
published: 2024-10-30T21:08:07Z

arxiv_id: 2410.23775
title: In-Context LoRA for Diffusion Transformers
authors: ['Lianghua Huang', 'Wei Wang', 'Zhi-Fan Wu', 'Yupeng Shi', 'Huanzhang Dou', 'Chen Liang', 'Yutong Feng', 'Yu Liu', 'Jingren Zhou']
categories: ['cs.CV', 'cs.GR']
summary: Recent research arXiv:2410.15027 has explored the use of diffusion transformers (DiTs) for task-agnostic image generation by simply concatenating attention tokens across images. However, despite substantial computational resources, the fidelity of the generated images remains suboptimal. In this study, we reevaluate an...
published: 2024-10-31T09:45:00Z
comments: Tech report. Project page: https://ali-vilab.github.io/In-Context-LoRA-Page/
ss_title: In-Context LoRA for Diffusion Transformers
ss_authors: ['Lianghua Huang', 'Wei Wang', 'Zhigang Wu', 'Yupeng Shi', 'Huanzhang Dou', 'Chen Liang', 'Yutong Feng', 'Yu Liu', 'Jingren Zhou']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 35
ss_referenceCount: 45
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2410.23918
title: BitStack: Any-Size Compression of Large Language Models in Variable Memory Environments
authors: ['Xinghao Wang', 'Pengyu Wang', 'Bo Wang', 'Dong Zhang', 'Yunhua Zhou', 'Xipeng Qiu']
categories: ['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG']
summary: Large language models (LLMs) have revolutionized numerous applications, yet their deployment remains challenged by memory constraints on local devices. While scaling laws have enhanced LLM capabilities, the primary bottleneck has shifted from \textit{capability} to \textit{availability}, emphasizing the need for effici...
published: 2024-10-31T13:26:11Z
comments: ICLR 2025

arxiv_id: 2410.24139
title: COSNet: A Novel Semantic Segmentation Network using Enhanced Boundaries in Cluttered Scenes
authors: ['Muhammad Ali', 'Mamoona Javaid', 'Mubashir Noman', 'Mustansar Fiaz', 'Salman Khan']
categories: ['cs.CV']
summary: Automated waste recycling aims to efficiently separate the recyclable objects from the waste by employing vision-based systems. However, the presence of varying shaped objects having different material types makes it a challenging problem, especially in cluttered environments. Existing segmentation methods perform reas...
published: 2024-10-31T17:03:38Z
comments: Accepted at WACV 2025
ss_title: COSNet: A Novel Semantic Segmentation Network using Enhanced Boundaries in Cluttered Scenes
ss_authors: ['Muhammad Ali', 'Mamoona Javaid', 'Mubashir Noman', 'M. Fiaz', 'Salman H. Khan']
ss_year: 2024
ss_venue: IEEE Workshop/Winter Conference on Applications of Computer Vision
ss_citationCount: 0
ss_referenceCount: 53
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2410.24148
title: Exploring Vision Language Models for Facial Attribute Recognition: Emotion, Race, Gender, and Age
authors: ['Nouar AlDahoul', 'Myles Joshua Toledo Tan', 'Harishwar Reddy Kasireddy', 'Yasir Zaki']
categories: ['cs.CV']
summary: Technologies for recognizing facial attributes like race, gender, age, and emotion have several applications, such as surveillance, advertising content, sentiment analysis, and the study of demographic trends and social behaviors. Analyzing demographic characteristics based on images and analyzing facial expressions ha...
published: 2024-10-31T17:09:19Z
comments: 52 pages, 13 figures
ss_title: Exploring Vision Language Models for Facial Attribute Recognition: Emotion, Race, Gender, and Age
ss_authors: ['Nouar Aldahoul', 'M. J. Tan', 'Harishwar Reddy Kasireddy', 'Yasir Zaki']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 3
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2410.24164
title: $π_0$: A Vision-Language-Action Flow Model for General Robot Control
authors: ['Kevin Black', 'Noah Brown', 'Danny Driess', 'Adnan Esmail', 'Michael Equi', 'Chelsea Finn', 'Niccolo Fusai', 'Lachy Groom', 'Karol Hausman', 'Brian Ichter', 'Szymon Jakubczak', 'Tim Jones', 'Liyiming Ke', 'Sergey Levine', 'Adrian Li-Bell', 'Mohith Mothukuri', 'Suraj Nair', 'Karl Pertsch', 'Lucy Xiaoyang Shi', 'James ...
categories: ['cs.LG', 'cs.RO']
summary: Robot learning holds tremendous promise to unlock the full potential of flexible, general, and dexterous robot systems, as well as to address some of the deepest questions in artificial intelligence. However, bringing robot learning to the level of generality required for effective real-world systems faces major obstac...
published: 2024-10-31T17:22:30Z
comments: See project website for videos: https://physicalintelligence.company/blog/pi0
ss_title: π0: A Vision-Language-Action Flow Model for General Robot Control
ss_authors: ['Kevin Black', 'Noah Brown', 'Danny Driess', 'Adnan Esmail', 'Michael Equi', 'Chelsea Finn', 'Niccolo Fusai', 'Lachy Groom', 'Karol Hausman', 'Brian Ichter', 'Szymon Jakubczak', 'Tim Jones', 'Liyiming Ke', 'Sergey Levine', 'Adrian Li-Bell', 'Mohith Mothukuri', 'Suraj Nair', 'Karl Pertsch', 'L. X. Shi', 'James Tanner',...
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 287
ss_referenceCount: 61
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2410.24175
title: Constraint Back-translation Improves Complex Instruction Following of Large Language Models
authors: ['Yunjia Qi', 'Hao Peng', 'Xiaozhi Wang', 'Bin Xu', 'Lei Hou', 'Juanzi Li']
categories: ['cs.CL', 'cs.AI']
summary: Large language models (LLMs) struggle to follow instructions with complex constraints in format, length, etc. Following the conventional instruction-tuning practice, previous works conduct post-training on complex instruction-response pairs generated by feeding complex instructions to advanced LLMs. However, even advan...
published: 2024-10-31T17:42:26Z
comments: 14 pages, 6 figures

arxiv_id: 2410.24198
title: SelfCodeAlign: Self-Alignment for Code Generation
authors: ['Yuxiang Wei', 'Federico Cassano', 'Jiawei Liu', 'Yifeng Ding', 'Naman Jain', 'Zachary Mueller', 'Harm de Vries', 'Leandro von Werra', 'Arjun Guha', 'Lingming Zhang']
categories: ['cs.CL', 'cs.LG', 'cs.SE']
summary: Instruction tuning is a supervised fine-tuning approach that significantly improves the ability of large language models (LLMs) to follow human instructions. We propose SelfCodeAlign, the first fully transparent and permissive pipeline for self-aligning code LLMs without extensive human annotations or distillation. Sel...
published: 2024-10-31T17:55:13Z
comments: Accepted to NeurIPS 2024

arxiv_id: 2411.00508
title: CLIP-RT: Learning Language-Conditioned Robotic Policies from Natural Language Supervision
authors: ['Gi-Cheon Kang', 'Junghyun Kim', 'Kyuhwan Shim', 'Jun Ki Lee', 'Byoung-Tak Zhang']
categories: ['cs.RO']
summary: Teaching robots desired skills in real-world environments remains challenging, especially for non-experts. A key bottleneck is that collecting robotic data often requires expertise or specialized hardware, limiting accessibility and scalability. We posit that natural language offers an intuitive and accessible interfac...
published: 2024-11-01T10:48:03Z
comments: Accepted to RSS 2025. Project website: https://clip-rt.github.io
ss_title: CLIP-RT: Learning Language-Conditioned Robotic Policies from Natural Language Supervision
ss_authors: ['Gi-Cheon Kang', 'Junghyun Kim', 'Kyuhwan Shim', 'Jun Ki Lee', 'Byoung-Tak Zhang']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 2
ss_referenceCount: 68
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2411.00626
title: ZIM: Zero-Shot Image Matting for Anything
authors: ['Beomyoung Kim', 'Chanyong Shin', 'Joonhyun Jeong', 'Hyungsik Jung', 'Se-Yun Lee', 'Sewhan Chun', 'Dong-Hyun Hwang', 'Joonsang Yu']
categories: ['cs.CV']
summary: The recent segmentation foundation model, Segment Anything Model (SAM), exhibits strong zero-shot segmentation capabilities, but it falls short in generating fine-grained precise masks. To address this limitation, we propose a novel zero-shot image matting model, called ZIM, with two key contributions: First, we develo...
published: 2024-11-01T14:34:33Z
comments: preprint (21 pages, 16 figures, and 8 tables)

arxiv_id: 2411.00762
title: Face Anonymization Made Simple
authors: ['Han-Wei Kung', 'Tuomas Varanka', 'Sanjay Saha', 'Terence Sim', 'Nicu Sebe']
categories: ['cs.CV', 'cs.CR']
summary: Current face anonymization techniques often depend on identity loss calculated by face recognition models, which can be inaccurate and unreliable. Additionally, many methods require supplementary data such as facial landmarks and masks to guide the synthesis process. In contrast, our approach uses diffusion models with...
published: 2024-11-01T17:45:21Z
ss_title: Face Anonymization Made Simple
ss_authors: ['Han-Wei Kung', 'Tuomas Varanka', 'Sanjay Saha', 'Terence Sim', 'N. Sebe']
ss_year: 2024
ss_venue: IEEE Workshop/Winter Conference on Applications of Computer Vision
ss_citationCount: 4
ss_referenceCount: 63
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2411.00771
title: CityGaussianV2: Efficient and Geometrically Accurate Reconstruction for Large-Scale Scenes
authors: ['Yang Liu', 'Chuanchen Luo', 'Zhongkai Mao', 'Junran Peng', 'Zhaoxiang Zhang']
categories: ['cs.CV']
summary: Recently, 3D Gaussian Splatting (3DGS) has revolutionized radiance field reconstruction, manifesting efficient and high-fidelity novel view synthesis. However, accurately representing surfaces, especially in large and complex scenarios, remains a significant challenge due to the unstructured nature of 3DGS. In this pap...
published: 2024-11-01T17:59:31Z
comments: Accepted by ICLR2025

arxiv_id: 2411.00776
title: Randomized Autoregressive Visual Generation
authors: ['Qihang Yu', 'Ju He', 'Xueqing Deng', 'Xiaohui Shen', 'Liang-Chieh Chen']
categories: ['cs.CV']
summary: This paper presents Randomized AutoRegressive modeling (RAR) for visual generation, which sets a new state-of-the-art performance on the image generation task while maintaining full compatibility with language modeling frameworks. The proposed RAR is simple: during a standard autoregressive training process with a next...
published: 2024-11-01T17:59:58Z
comments: simple method improving autoregressive image generator to SOTA performance; Project page at https://yucornetto.github.io/projects/rar.html

arxiv_id: 2411.00890
title: Rethinking Scale: The Efficacy of Fine-Tuned Open-Source LLMs in Large-Scale Reproducible Social Science Research
authors: ['Marcello Carammia', 'Stefano Maria Iacus', 'Giuseppe Porro']
categories: ['cs.CL', 'cs.AI', 'stat.ML']
summary: Large Language Models (LLMs) are distinguished by their architecture, which dictates their parameter size and performance capabilities. Social scientists have increasingly adopted LLMs for text classification tasks, which are difficult to scale with human coders. While very large, closed-source models often deliver sup...
published: 2024-10-31T20:26:30Z

arxiv_id: 2411.00918
title: LIBMoE: A Library for comprehensive benchmarking Mixture of Experts in Large Language Models
authors: ['Nam V. Nguyen', 'Thong T. Doan', 'Luong Tran', 'Van Nguyen', 'Quang Pham']
categories: ['cs.CL', 'cs.AI', 'cs.LG']
summary: Mixture of Experts (MoEs) plays an important role in the development of more efficient and effective large language models (LLMs). Due to the enormous resource requirements, studying large scale MoE algorithms remain in-accessible to many researchers. This work develops \emph{LibMoE}, a comprehensive and modular framew...
published: 2024-11-01T14:04:36Z
comments: 15 pages, 9 figures

arxiv_id: 2411.00986
title: Taking AI Welfare Seriously
authors: ['Robert Long', 'Jeff Sebo', 'Patrick Butlin', 'Kathleen Finlinson', 'Kyle Fish', 'Jacqueline Harding', 'Jacob Pfau', 'Toni Sims', 'Jonathan Birch', 'David Chalmers']
categories: ['cs.CY', 'cs.AI', 'q-bio.NC']
summary: In this report, we argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future. That means that the prospect of AI welfare and moral patienthood, i.e. of AI systems with their own interests and moral significance, is no longer an issue only for sci-fi or...
published: 2024-11-04T17:57:57Z

arxiv_id: 2411.01106
title: SV-RAG: LoRA-Contextualizing Adaptation of MLLMs for Long Document Understanding
authors: ['Jian Chen', 'Ruiyi Zhang', 'Yufan Zhou', 'Tong Yu', 'Franck Dernoncourt', 'Jiuxiang Gu', 'Ryan A. Rossi', 'Changyou Chen', 'Tong Sun']
categories: ['cs.CV']
summary: Multimodal large language models (MLLMs) have recently shown great progress in text-rich image understanding, yet they still struggle with complex, multi-page visually-rich documents. Traditional methods using document parsers for retrieval-augmented generation suffer from performance and efficiency limitations, while ...
published: 2024-11-02T02:09:01Z
comments: Accepted to ICLR 2025
ss_title: SV-RAG: LoRA-Contextualizing Adaptation of MLLMs for Long Document Understanding
ss_authors: ['Jian Chen', 'Ruiyi Zhang', 'Yufan Zhou', 'Tong Yu', 'Franck Dernoncourt', 'Jiuxiang Gu', 'Ryan A. Rossi', 'Changyou Chen', 'Tongfei Sun']
ss_year: 2024
ss_venue: International Conference on Learning Representations
ss_citationCount: 1
ss_referenceCount: 69
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2411.01156
title: Fish-Speech: Leveraging Large Language Models for Advanced Multilingual Text-to-Speech Synthesis
authors: ['Shijia Liao', 'Yuxuan Wang', 'Tianyu Li', 'Yifan Cheng', 'Ruoyi Zhang', 'Rongzhi Zhou', 'Yijin Xing']
categories: ['cs.SD', 'eess.AS']
summary: Text-to-Speech (TTS) systems face ongoing challenges in processing complex linguistic features, handling polyphonic expressions, and producing natural-sounding multilingual speech - capabilities that are crucial for future AI applications. In this paper, we present Fish-Speech, a novel framework that implements a seria...
published: 2024-11-02T07:04:02Z
ss_title: Fish-Speech: Leveraging Large Language Models for Advanced Multilingual Text-to-Speech Synthesis
ss_authors: ['Shijia Liao', 'Yuxuan Wang', 'Tianyue Li', 'Yifan Cheng', 'Ruoyi Zhang', 'Rongzhi Zhou', 'Yijin Xing']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 17
ss_referenceCount: 35
ss_fieldsOfStudy: ['Computer Science', 'Engineering']

arxiv_id: 2411.01176
title: CmdCaliper: A Semantic-Aware Command-Line Embedding Model and Dataset for Security Research
authors: ['Sian-Yao Huang', 'Cheng-Lin Yang', 'Che-Yu Lin', 'Chun-Ying Huang']
categories: ['cs.CL']
summary: This research addresses command-line embedding in cybersecurity, a field obstructed by the lack of comprehensive datasets due to privacy and regulation concerns. We propose the first dataset of similar command lines, named CyPHER, for training and unbiased evaluation. The training set is generated using a set of large ...
published: 2024-11-02T08:30:45Z

arxiv_id: 2411.01661
title: Sing-On-Your-Beat: Simple Text-Controllable Accompaniment Generations
authors: ['Quoc-Huy Trinh', 'Minh-Van Nguyen', 'Trong-Hieu Nguyen Mau', 'Khoa Tran', 'Thanh Do']
categories: ['cs.SD', 'cs.AI', 'eess.AS']
summary: Singing is one of the most cherished forms of human entertainment. However, creating a beautiful song requires an accompaniment that complements the vocals and aligns well with the song instruments and genre. With advancements in deep learning, previous research has focused on generating suitable accompaniments but oft...
published: 2024-11-03T19:17:20Z

arxiv_id: 2411.01747
title: DynaSaur: Large Language Agents Beyond Predefined Actions
authors: ['Dang Nguyen', 'Viet Dac Lai', 'Seunghyun Yoon', 'Ryan A. Rossi', 'Handong Zhao', 'Ruiyi Zhang', 'Puneet Mathur', 'Nedim Lipka', 'Yu Wang', 'Trung Bui', 'Franck Dernoncourt', 'Tianyi Zhou']
categories: ['cs.CL']
summary: Existing LLM agent systems typically select actions from a fixed and predefined set at every step. While this approach is effective in closed, narrowly scoped environments, it presents two major challenges for real-world, open-ended scenarios: (1) it significantly restricts the planning and acting capabilities of LLM a...
published: 2024-11-04T02:08:59Z
comments: 19 pages, 10 figures

2,411.02059
TableGPT2: A Large Multimodal Model with Tabular Data Integration
['Aofeng Su', 'Aowen Wang', 'Chao Ye', 'Chen Zhou', 'Ga Zhang', 'Gang Chen', 'Guangcheng Zhu', 'Haobo Wang', 'Haokai Xu', 'Hao Chen', 'Haoze Li', 'Haoxuan Lan', 'Jiaming Tian', 'Jing Yuan', 'Junbo Zhao', 'Junlin Zhou', 'Kaizhe Shou', 'Liangyu Zha', 'Lin Long', 'Liyao Li', 'Pengzuo Wu', 'Qi Zhang', 'Qingyi Huang', 'Sais...
['cs.LG', 'cs.AI', 'cs.DB']
The emergence of models like GPTs, Claude, LLaMA, and Qwen has reshaped AI applications, presenting vast new opportunities across industries. Yet, the integration of tabular data remains notably underdeveloped, despite its foundational role in numerous real-world domains. This gap is critical for three main reasons. ...
2024-11-04T13:03:13Z
null
null
null
TableGPT2: A Large Multimodal Model with Tabular Data Integration
['Aofeng Su', 'Aowen Wang', 'Chaonan Ye', 'Chengcheng Zhou', 'Ga Zhang', 'Gang Chen', 'Guangcheng Zhu', 'Haobo Wang', 'Haokai Xu', 'Hao Chen', 'Haoze Li', 'Haoxuan Lan', 'Jiaming Tian', 'Jing Yuan', 'Junbo Zhao', 'Junlin Zhou', 'Kaizhe Shou', 'Liangyu Zha', 'Lin Long', 'Liyao Li', 'Peng Wu', 'Qi Zhang', 'Qingyi Huang',...
2024
arXiv.org
23
67
['Computer Science']
2411.02265
Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent
['Xingwu Sun', 'Yanfeng Chen', 'Yiqing Huang', 'Ruobing Xie', 'Jiaqi Zhu', 'Kai Zhang', 'Shuaipeng Li', 'Zhen Yang', 'Jonny Han', 'Xiaobo Shu', 'Jiahao Bu', 'Zhongzhi Chen', 'Xuemeng Huang', 'Fengzong Lian', 'Saiyong Yang', 'Jianfeng Yan', 'Yuyuan Zeng', 'Xiaoqin Ren', 'Chao Yu', 'Lulu Wu', 'Yue Mao', 'Jun Xia', 'Tao Y...
['cs.CL', 'cs.AI']
In this paper, we introduce Hunyuan-Large, which is currently the largest open-source Transformer-based mixture of experts model, with a total of 389 billion parameters and 52 billion activation parameters, capable of handling up to 256K tokens. We conduct a thorough evaluation of Hunyuan-Large's superior performance a...
2024-11-04T16:56:26Z
17 pages, 4 Figures
null
null
null
null
null
null
null
null
null
2411.02293
Hunyuan3D 1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation
['Xianghui Yang', 'Huiwen Shi', 'Bowen Zhang', 'Fan Yang', 'Jiacheng Wang', 'Hongxu Zhao', 'Xinhai Liu', 'Xinzhou Wang', 'Qingxiang Lin', 'Jiaao Yu', 'Lifu Wang', 'Jing Xu', 'Zebin He', 'Zhuo Chen', 'Sicong Liu', 'Junta Wu', 'Yihang Lian', 'Shaoxiong Yang', 'Yuhong Liu', 'Yong Yang', 'Di Wang', 'Jie Jiang', 'Chunchao G...
['cs.CV', 'cs.AI']
While 3D generative models have greatly improved artists' workflows, the existing diffusion models for 3D generation suffer from slow generation and poor generalization. To address this issue, we propose a two-stage approach named Hunyuan3D 1.0 including a lite version and a standard version, that both support text- an...
2024-11-04T17:21:42Z
Technical Report; 3D Generation
null
null
Hunyuan3D 1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation
['Xianghui Yang', 'Huiwen Shi', 'Bowen Zhang', 'Fan Yang', 'Jiacheng Wang', 'Hongxu Zhao', 'Xinhai Liu', 'Xinzhou Wang', 'Qin Lin', 'Jiaao Yu', 'Lifu Wang', 'Jing Xu', 'Zebin He', 'Zhuo Chen', 'Si-Ya Liu', 'Junta Wu', 'Yihang Lian', 'Shaoxiong Yang', 'Yuhong Liu', 'Yong Yang', 'Di Wang', 'Jie Jiang', 'Chunchao Guo']
2024
null
25
0
['Computer Science']
2411.02319
GenXD: Generating Any 3D and 4D Scenes
['Yuyang Zhao', 'Chung-Ching Lin', 'Kevin Lin', 'Zhiwen Yan', 'Linjie Li', 'Zhengyuan Yang', 'Jianfeng Wang', 'Gim Hee Lee', 'Lijuan Wang']
['cs.CV', 'cs.AI']
Recent developments in 2D visual generation have been remarkably successful. However, 3D and 4D generation remain challenging in real-world applications due to the lack of large-scale 4D data and effective model design. In this paper, we propose to jointly investigate general 3D and 4D generation by leveraging camera a...
2024-11-04T17:45:44Z
null
null
null
GenXD: Generating Any 3D and 4D Scenes
['Yuyang Zhao', 'Chung-Ching Lin', 'K. Lin', 'Zhiwen Yan', 'Linjie Li', 'Zhengyuan Yang', 'Jianfeng Wang', 'Gim Hee Lee', 'Lijuan Wang']
2024
International Conference on Learning Representations
16
69
['Computer Science']
2411.02335
Sparsing Law: Towards Large Language Models with Greater Activation Sparsity
['Yuqi Luo', 'Chenyang Song', 'Xu Han', 'Yingfa Chen', 'Chaojun Xiao', 'Xiaojun Meng', 'Liqun Deng', 'Jiansheng Wei', 'Zhiyuan Liu', 'Maosong Sun']
['cs.LG', 'cs.CL', 'stat.ML', 'I.2.7']
Activation sparsity denotes the existence of substantial weakly-contributed elements within activation outputs that can be eliminated, benefiting many important applications concerned with large language models (LLMs). Although promoting greater activation sparsity within LLMs deserves deep studies, existing works lack...
2024-11-04T17:59:04Z
23 pages, 13 figures, 6 tables
null
null
null
null
null
null
null
null
null
2411.02337
WebRL: Training LLM Web Agents via Self-Evolving Online Curriculum Reinforcement Learning
['Zehan Qi', 'Xiao Liu', 'Iat Long Iong', 'Hanyu Lai', 'Xueqiao Sun', 'Wenyi Zhao', 'Yu Yang', 'Xinyue Yang', 'Jiadai Sun', 'Shuntian Yao', 'Tianjie Zhang', 'Wei Xu', 'Jie Tang', 'Yuxiao Dong']
['cs.CL']
Large language models (LLMs) have shown remarkable potential as autonomous agents, particularly in web-based tasks. However, existing LLM web agents heavily rely on expensive proprietary LLM APIs, while open LLMs lack the necessary decision-making capabilities. This paper introduces WebRL, a self-evolving online curric...
2024-11-04T17:59:58Z
Published as a conference paper at ICLR 2025
null
null
null
null
null
null
null
null
null
2411.02355
"Give Me BF16 or Give Me Death"? Accuracy-Performance Trade-Offs in LLM Quantization
['Eldar Kurtic', 'Alexandre Marques', 'Shubhra Pandit', 'Mark Kurtz', 'Dan Alistarh']
['cs.LG', 'cs.AI']
Quantization is a powerful tool for accelerating large language model (LLM) inference, but the accuracy-performance trade-offs across different formats remain unclear. In this paper, we conduct the most comprehensive empirical study to date, evaluating FP8, INT8, and INT4 quantization across academic benchmarks and rea...
2024-11-04T18:21:59Z
Accepted to ACL 2025
null
null
null
null
null
null
null
null
null
2411.02359
DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution
['Yang Yue', 'Yulin Wang', 'Bingyi Kang', 'Yizeng Han', 'Shenzhi Wang', 'Shiji Song', 'Jiashi Feng', 'Gao Huang']
['cs.RO', 'cs.AI', 'cs.LG']
MLLMs have demonstrated remarkable comprehension and reasoning capabilities with complex language and visual data. These advances have spurred the vision of establishing a generalist robotic MLLM proficient in understanding complex human instructions and accomplishing various embodied tasks. However, developing MLLMs f...
2024-11-04T18:26:08Z
25 pages, 6 figures, NeurIPS 2024
null
null
null
null
null
null
null
null
null
2411.02372
Learning General-Purpose Biomedical Volume Representations using Randomized Synthesis
['Neel Dey', 'Benjamin Billot', 'Hallee E. Wong', 'Clinton J. Wang', 'Mengwei Ren', 'P. Ellen Grant', 'Adrian V. Dalca', 'Polina Golland']
['cs.CV', 'cs.LG']
Current volumetric biomedical foundation models struggle to generalize as public 3D datasets are small and do not cover the broad diversity of medical procedures, conditions, anatomical regions, and imaging protocols. We address this by creating a representation learning method that instead anticipates strong domain sh...
2024-11-04T18:40:46Z
ICLR 2025: International Conference on Learning Representations. Code and model weights available at https://github.com/neel-dey/anatomix. Keywords: synthetic data, representation learning, medical image analysis, image registration, image segmentation
null
null
Learning General-Purpose Biomedical Volume Representations using Randomized Synthesis
['Neel Dey', 'Benjamin Billot', 'Hallee E. Wong', 'Clinton Wang', 'Mengwei Ren', 'P. E. Grant', 'Adrian V. Dalca', 'Polina Golland']
2024
International Conference on Learning Representations
3
112
['Computer Science']
2411.02441
Cross-D Conv: Cross-Dimensional Transferable Knowledge Base via Fourier Shifting Operation
['Mehmet Can Yavuz', 'Yang Yang']
['cs.CV']
In biomedical imaging analysis, the dichotomy between 2D and 3D data presents a significant challenge. While 3D volumes offer superior real-world applicability, they are less available for each modality and not easy to train in large scale, whereas 2D samples are abundant but less comprehensive. This paper introduces C...
2024-11-02T13:03:44Z
Accepted by IEEE ISBI 2025, 4-page paper
null
null
Cross-D Conv: Cross-Dimensional Transferable Knowledge Base via Fourier Shifting Operation
['Mehmet Can Yavuz', 'Yang Yang']
2024
IEEE International Symposium on Biomedical Imaging
0
25
['Computer Science']
2411.02571
MM-Embed: Universal Multimodal Retrieval with Multimodal LLMs
['Sheng-Chieh Lin', 'Chankyu Lee', 'Mohammad Shoeybi', 'Jimmy Lin', 'Bryan Catanzaro', 'Wei Ping']
['cs.CL', 'cs.AI', 'cs.CV', 'cs.IR', 'cs.LG']
State-of-the-art retrieval models typically address a straightforward search scenario, in which retrieval tasks are fixed (e.g., finding a passage to answer a specific question) and only a single modality is supported for both queries and retrieved results. This paper introduces techniques for advancing information ret...
2024-11-04T20:06:34Z
Accepted at ICLR 2025. We release the model weights at: https://huggingface.co/nvidia/MM-Embed
null
null
MM-Embed: Universal Multimodal Retrieval with Multimodal LLMs
['Sheng-Chieh Lin', 'Chankyu Lee', 'M. Shoeybi', 'Jimmy Lin', 'Bryan Catanzaro', 'Wei Ping']
2024
International Conference on Learning Representations
20
65
['Computer Science']
2411.02657
Zebra-Llama: A Context-Aware Large Language Model for Democratizing Rare Disease Knowledge
['Karthik Soman', 'Andrew Langdon', 'Catalina Villouta', 'Chinmay Agrawal', 'Lashaw Salta', 'Braian Peetoom', 'Gianmarco Bellucci', 'Orion J Buske']
['cs.CL']
Rare diseases present unique challenges in healthcare, often suffering from delayed diagnosis and fragmented information landscapes. The scarcity of reliable knowledge in these conditions poses a distinct challenge for Large Language Models (LLMs) in supporting clinical management and delivering precise patient informa...
2024-11-04T22:45:52Z
26 pages, 4 figures, 1 supplementary figure
null
null
null
null
null
null
null
null
null
2411.02780
How much is a noisy image worth? Data Scaling Laws for Ambient Diffusion
['Giannis Daras', 'Yeshwanth Cherapanamjeri', 'Constantinos Daskalakis']
['cs.LG', 'cs.CV']
The quality of generative models depends on the quality of the data they are trained on. Creating large-scale, high-quality datasets is often expensive and sometimes impossible, e.g. in certain scientific applications where there is no access to clean data due to physical or instrumentation constraints. Ambient Diffusi...
2024-11-05T03:45:17Z
Work in progress
null
null
null
null
null
null
null
null
null
2411.02829
CE-CoLLM: Efficient and Adaptive Large Language Models Through Cloud-Edge Collaboration
['Hongpeng Jin', 'Yanzhao Wu']
['cs.DC', 'cs.LG']
Large Language Models (LLMs) exhibit remarkable human-like predictive capabilities. However, it is challenging to deploy LLMs to provide efficient and adaptive inference services at the edge. This paper proposes a novel Cloud-Edge Collaboration framework for LLMs (CE-CoLLM) to tackle these challenges. First, we identif...
2024-11-05T06:00:27Z
To appear in IEEE ICWS 2025
null
null
null
null
null
null
null
null
null
2411.02853
ADOPT: Modified Adam Can Converge with Any $\beta_2$ with the Optimal Rate
['Shohei Taniguchi', 'Keno Harada', 'Gouki Minegishi', 'Yuta Oshima', 'Seong Cheol Jeong', 'Go Nagahara', 'Tomoshi Iiyama', 'Masahiro Suzuki', 'Yusuke Iwasawa', 'Yutaka Matsuo']
['cs.LG', 'stat.ML']
Adam is one of the most popular optimization algorithms in deep learning. However, it is known that Adam does not converge in theory unless choosing a hyperparameter, i.e., $\beta_2$, in a problem-dependent manner. There have been many attempts to fix the non-convergence (e.g., AMSGrad), but they require an impractical...
2024-11-05T06:57:47Z
Accepted at Neural Information Processing Systems (NeurIPS 2024)
null
null
null
null
null
null
null
null
null
2411.02959
HtmlRAG: HTML is Better Than Plain Text for Modeling Retrieved Knowledge in RAG Systems
['Jiejun Tan', 'Zhicheng Dou', 'Wen Wang', 'Mang Wang', 'Weipeng Chen', 'Ji-Rong Wen']
['cs.IR']
Retrieval-Augmented Generation (RAG) has been shown to improve knowledge capabilities and alleviate the hallucination problem of LLMs. The Web is a major source of external knowledge used in RAG systems, and many commercial RAG systems have used Web search engines as their major retrieval systems. Typically, such RAG s...
2024-11-05T09:58:36Z
Accepted by WWW 2025 main conference. Repo: https://github.com/plageon/HtmlRAG
null
10.1145/3696410.3714546
HtmlRAG: HTML is Better Than Plain Text for Modeling Retrieved Knowledge in RAG Systems
['Jiejun Tan', 'Zhicheng Dou', 'Wen Wang', 'Mang Wang', 'Weipeng Chen', 'Ji-Rong Wen']
2024
The Web Conference
12
85
['Computer Science']
2411.03307
LLMs for Domain Generation Algorithm Detection
['Reynier Leyva La O', 'Carlos A. Catania', 'Tatiana Parlanti']
['cs.CL', 'cs.CR']
This work analyzes the use of large language models (LLMs) for detecting domain generation algorithms (DGAs). We perform a detailed evaluation of two important techniques: In-Context Learning (ICL) and Supervised Fine-Tuning (SFT), showing how they can improve detection. SFT increases performance by using domain-specif...
2024-11-05T18:01:12Z
null
null
null
LLMs for Domain Generation Algorithm Detection
['Reynier Leyva', 'C. A. Catania', 'Tatiana Parlanti']
2024
arXiv.org
0
47
['Computer Science']
2411.03682
LEGATO: Cross-Embodiment Imitation Using a Grasping Tool
['Mingyo Seo', 'H. Andy Park', 'Shenli Yuan', 'Yuke Zhu', 'Luis Sentis']
['cs.RO']
Cross-embodiment imitation learning enables policies trained on specific embodiments to transfer across different robots, unlocking the potential for large-scale imitation learning that is both cost-effective and highly reusable. This paper presents LEGATO, a cross-embodiment imitation learning framework for visuomotor...
2024-11-06T06:06:07Z
Published in RA-L
IEEE Robotics and Automation Letters, vol. 10, no. 3, pp. 2854-2861, 2025
10.1109/LRA.2025.3535182
null
null
null
null
null
null
null
2411.03795
VQA$^2$: Visual Question Answering for Video Quality Assessment
['Ziheng Jia', 'Zicheng Zhang', 'Jiaying Qian', 'Haoning Wu', 'Wei Sun', 'Chunyi Li', 'Xiaohong Liu', 'Weisi Lin', 'Guangtao Zhai', 'Xiongkuo Min']
['cs.CV', 'cs.AI']
The advent and proliferation of large multi-modal models (LMMs) have introduced new paradigms to computer vision, transforming various tasks into a unified visual question answering framework. Video Quality Assessment (VQA), a classic field in low-level visual perception, focused initially on quantitative video quality...
2024-11-06T09:39:52Z
23 pages, 12 figures
null
null
VQA2: Visual Question Answering for Video Quality Assessment
['Ziheng Jia', 'Zicheng Zhang', 'Jiaying Qian', 'Haoning Wu', 'Wei Sun', 'Chunyi Li', 'Xiaohong Liu', 'Weisi Lin', 'Guangtao Zhai', 'Xiongkuo Min']
2024
arXiv.org
1
69
['Computer Science']
2411.03884
Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models
['Zhijian Zhuo', 'Ya Wang', 'Yutao Zeng', 'Xiaoqing Li', 'Xun Zhou', 'Jinwen Ma']
['cs.CL', 'cs.AI', 'cs.LG']
Transformers have found extensive applications across various domains due to the powerful fitting capabilities. This success can be partially attributed to their inherent nonlinearity. Thus, in addition to the ReLU function employed in the original transformer architecture, researchers have explored alternative modules...
2024-11-06T13:00:34Z
Accepted by ICLR 2025
null
null
Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models
['Zhijian Zhuo', 'Ya Wang', 'Yutao Zeng', 'Xiaoqing Li', 'Xun Zhou', 'Jinwen Ma']
2024
International Conference on Learning Representations
3
63
['Computer Science']
2411.03887
Reclaiming "Open AI" -- AI Model Serving Can Be Open Access, Yet Monetizable and Loyal
['Zerui Cheng', 'Edoardo Contente', 'Ben Finch', 'Oleg Golev', 'Jonathan Hayase', 'Andrew Miller', 'Niusha Moshrefi', 'Anshul Nasery', 'Sandeep Nailwal', 'Sewoong Oh', 'Himanshu Tyagi', 'Pramod Viswanath']
['cs.AI', 'cs.CR']
The rapid rise of AI has split model serving between open-weight distribution, which often lacks owner control and monetization, and opaque API-based approaches that risk user privacy and model transparency, forming a dichotomy that hinders an equitable AI ecosystem. This position paper introduces, rigorously formulate...
2024-11-01T18:46:03Z
54 pages
null
null
null
null
null
null
null
null
null
2411.03920
RAGulator: Lightweight Out-of-Context Detectors for Grounded Text Generation
['Ian Poey', 'Jiajun Liu', 'Qishuai Zhong', 'Adrien Chenailler']
['cs.CL']
Real-time detection of out-of-context LLM outputs is crucial for enterprises looking to safely adopt RAG applications. In this work, we train lightweight models to discriminate LLM-generated text that is semantically out-of-context from retrieved text documents. We preprocess a combination of summarisation and semantic...
2024-11-06T13:51:42Z
null
null
null
null
null
null
null
null
null
null
2411.04125
Community Forensics: Using Thousands of Generators to Train Fake Image Detectors
['Jeongsoo Park', 'Andrew Owens']
['cs.CV']
One of the key challenges of detecting AI-generated images is spotting images that have been created by previously unseen generative models. We argue that the limited diversity of the training data is a major obstacle to addressing this problem, and we propose a new dataset that is significantly larger and more diverse...
2024-11-06T18:59:41Z
16 pages; CVPR 2025; Project page: https://jespark.net/projects/2024/community_forensics
In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), pp. 8245-8257, 2025
null
null
null
null
null
null
null
null
2411.04168
DiMSUM: Diffusion Mamba -- A Scalable and Unified Spatial-Frequency Method for Image Generation
['Hao Phung', 'Quan Dao', 'Trung Dao', 'Hoang Phan', 'Dimitris Metaxas', 'Anh Tran']
['cs.CV', 'cs.AI']
We introduce a novel state-space architecture for diffusion models, effectively harnessing spatial and frequency information to enhance the inductive bias towards local features in input images for image generation tasks. While state-space networks, including Mamba, a revolutionary advancement in recurrent neural netwo...
2024-11-06T18:59:17Z
Accepted to NeurIPS 2024. Project page: https://vinairesearch.github.io/DiMSUM/
null
null
null
null
null
null
null
null
null
2411.04403
Towards Competitive Search Relevance For Inference-Free Learned Sparse Retrievers
['Zhichao Geng', 'Yiwen Wang', 'Dongyu Ru', 'Yang Yang']
['cs.IR', 'cs.AI']
Learned sparse retrieval, which can efficiently perform retrieval through mature inverted-index engines, has garnered growing attention in recent years. Particularly, the inference-free sparse retrievers are attractive as they eliminate online model inference in the retrieval phase thereby avoids huge computational cos...
2024-11-07T03:46:43Z
null
null
null
Towards Competitive Search Relevance For Inference-Free Learned Sparse Retrievers
['Zhichao Geng', 'Dongyu Ru', 'Yang Yang']
2024
arXiv.org
2
45
['Computer Science']
2411.04496
Thanos: Enhancing Conversational Agents with Skill-of-Mind-Infused Large Language Model
['Young-Jun Lee', 'Dokyong Lee', 'Junyoung Youn', 'Kyeongjin Oh', 'Ho-Jin Choi']
['cs.CL']
To increase social bonding with interlocutors, humans naturally acquire the ability to respond appropriately in a given situation by considering which conversational skill is most suitable for the response - a process we call skill-of-mind. For large language model (LLM)-based conversational agents, planning appropriat...
2024-11-07T07:46:06Z
Code: https://github.com/passing2961/Thanos
null
null
null
null
null
null
null
null
null
2411.04699
Towards Building Large Scale Datasets and State-of-the-Art Automatic Speech Translation Systems for 14 Indian Languages
['Ashwin Sankar', 'Sparsh Jain', 'Nikhil Narasimhan', 'Devilal Choudhary', 'Dhairya Suman', 'Mohammed Safi Ur Rahman Khan', 'Anoop Kunchukuttan', 'Mitesh M Khapra', 'Raj Dabre']
['cs.CL']
Speech translation for Indian languages remains a challenging task due to the scarcity of large-scale, publicly available datasets that capture the linguistic diversity and domain coverage essential for real-world applications. Existing datasets cover a fraction of Indian languages and lack the breadth needed to train ...
2024-11-07T13:33:34Z
Accepted at ACL (Main) 2025
null
null
null
null
null
null
null
null
null
2411.04863
OneProt: Towards Multi-Modal Protein Foundation Models
['Klemens Flöge', 'Srisruthi Udayakumar', 'Johanna Sommer', 'Marie Piraud', 'Stefan Kesselheim', 'Vincent Fortuin', 'Stephan Günneman', 'Karel J van der Weg', 'Holger Gohlke', 'Erinc Merdivan', 'Alina Bazarova']
['cs.LG', 'q-bio.BM']
Recent advances in Artificial Intelligence have enabled multi-modal systems to model and translate diverse information spaces. Extending beyond text and vision, we introduce OneProt, a multi-modal AI for proteins that integrates structural, sequence, text, and binding site data. Using the ImageBind framework, OneProt a...
2024-11-07T16:54:54Z
34 pages, 7 figures, 11 tables
null
null
null
null
null
null
null
null
null
2411.04905
OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models
['Siming Huang', 'Tianhao Cheng', 'J. K. Liu', 'Jiaran Hao', 'Liuyihan Song', 'Yang Xu', 'J. Yang', 'Jiaheng Liu', 'Chenchen Zhang', 'Linzheng Chai', 'Ruifeng Yuan', 'Zhaoxiang Zhang', 'Jie Fu', 'Qian Liu', 'Ge Zhang', 'Zili Wang', 'Yuan Qi', 'Yinghui Xu', 'Wei Chu']
['cs.CL', 'cs.PL']
Large language models (LLMs) for code have become indispensable in various domains, including code generation, reasoning tasks and agent systems. While open-access code LLMs are increasingly approaching the performance levels of proprietary models, high-quality code LLMs suitable for rigorous scientific investigation, ...
2024-11-07T17:47:25Z
null
null
null
null
null
null
null
null
null
null
2411.04928
DimensionX: Create Any 3D and 4D Scenes from a Single Image with Controllable Video Diffusion
['Wenqiang Sun', 'Shuo Chen', 'Fangfu Liu', 'Zilong Chen', 'Yueqi Duan', 'Jun Zhang', 'Yikai Wang']
['cs.CV', 'cs.AI', 'cs.GR']
In this paper, we introduce \textbf{DimensionX}, a framework designed to generate photorealistic 3D and 4D scenes from just a single image with video diffusion. Our approach begins with the insight that both the spatial structure of a 3D scene and the temporal evolution of a 4D scene can be effectively represented thro...
2024-11-07T18:07:31Z
Project Page: https://chenshuo20.github.io/DimensionX/
null
null
null
null
null
null
null
null
null
2411.04997
LLM2CLIP: Powerful Language Model Unlocks Richer Visual Representation
['Weiquan Huang', 'Aoqi Wu', 'Yifan Yang', 'Xufang Luo', 'Yuqing Yang', 'Liang Hu', 'Qi Dai', 'Chunyu Wang', 'Xiyang Dai', 'Dongdong Chen', 'Chong Luo', 'Lili Qiu']
['cs.CV', 'cs.CL']
CLIP is a foundational multimodal model that aligns image and text features into a shared representation space via contrastive learning on large-scale image-text pairs. Its effectiveness primarily stems from the use of natural language as rich supervision. Motivated by the remarkable advancements in large language mode...
2024-11-07T18:59:16Z
null
null
null
LLM2CLIP: Powerful Language Model Unlocks Richer Visual Representation
['Weiquan Huang', 'Aoqi Wu', 'Yifan Yang', 'Xufang Luo', 'Yuqing Yang', 'Liang Hu', 'Qi Dai', 'Xiyang Dai', 'Dongdong Chen', 'Chong Luo', 'Lili Qiu']
2024
arXiv.org
14
59
['Computer Science']
2411.05007
SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models
['Muyang Li', 'Yujun Lin', 'Zhekai Zhang', 'Tianle Cai', 'Xiuyu Li', 'Junxian Guo', 'Enze Xie', 'Chenlin Meng', 'Jun-Yan Zhu', 'Song Han']
['cs.CV', 'cs.LG']
Diffusion models can effectively generate high-quality images. However, as they scale, rising memory demands and higher latency pose substantial deployment challenges. In this work, we aim to accelerate diffusion models by quantizing their weights and activations to 4 bits. At such an aggressive level, both weights and...
2024-11-07T18:59:58Z
ICLR 2025 Spotlight. Quantization Library: https://github.com/mit-han-lab/deepcompressor; Inference Engine: https://github.com/mit-han-lab/nunchaku; Website: https://hanlab.mit.edu/projects/svdquant; Demo: https://svdquant.mit.edu; Blog: https://hanlab.mit.edu/blog/svdquant
null
null
SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models
['Muyang Li', 'Yujun Lin', 'Zhekai Zhang', 'Tianle Cai', 'Xiuyu Li', 'Junxian Guo', 'Enze Xie', 'Chenlin Meng', 'Jun-Yan Zhu', 'Song Han']
2024
arXiv.org
34
101
['Computer Science']
2411.05046
PhoneLM: an Efficient and Capable Small Language Model Family through Principled Pre-training
['Rongjie Yi', 'Xiang Li', 'Weikai Xie', 'Zhenyan Lu', 'Chenghua Wang', 'Ao Zhou', 'Shangguang Wang', 'Xiwen Zhang', 'Mengwei Xu']
['cs.CL', 'cs.AI', 'cs.LG']
The interest in developing small language models (SLM) for on-device deployment is fast growing. However, the existing SLM design hardly considers the device hardware characteristics. Instead, this work presents a simple yet effective principle for SLM design: architecture searching for (near-)optimal runtime efficienc...
2024-11-07T02:19:00Z
null
null
null
null
null
null
null
null
null
null
2411.05281
Fox-1: Open Small Language Model for Cloud and Edge
['Zijian Hu', 'Jipeng Zhang', 'Rui Pan', 'Zhaozhuo Xu', 'Shanshan Han', 'Han Jin', 'Alay Dilipbhai Shah', 'Dimitris Stripelis', 'Yuhang Yao', 'Salman Avestimehr', 'Tong Zhang', 'Chaoyang He']
['cs.CL', 'cs.AI', 'cs.LG']
We present Fox-1, a series of small language models (SLMs) consisting of Fox-1-1.6B and Fox-1-1.6B-Instruct-v0.1. These models are pre-trained on 3 trillion tokens of web-scraped document data and fine-tuned with 5 billion tokens of instruction-following and multi-turn conversation data. Aiming to improve the pre-train...
2024-11-08T02:24:29Z
Base model is available at https://huggingface.co/tensoropera/Fox-1-1.6B and the instruction-tuned version is available at https://huggingface.co/tensoropera/Fox-1-1.6B-Instruct-v0.1
null
null
null
null
null
null
null
null
null
2411.05508
An Early FIRST Reproduction and Improvements to Single-Token Decoding for Fast Listwise Reranking
['Zijian Chen', 'Ronak Pradeep', 'Jimmy Lin']
['cs.IR', 'cs.CL']
Recent advances have demonstrated that large language models (LLMs) excel as listwise rerankers, but their high computational demands remain a barrier to widespread adoption. Further, the traditional language modeling (LM) objective is not ideally suited for reranking tasks. FIRST is a novel approach that addresses the...
2024-11-08T12:08:17Z
null
null
null
null
null
null
null
null
null
null
2411.05738
StdGEN: Semantic-Decomposed 3D Character Generation from Single Images
['Yuze He', 'Yanning Zhou', 'Wang Zhao', 'Zhongkai Wu', 'Kaiwen Xiao', 'Wei Yang', 'Yong-Jin Liu', 'Xiao Han']
['cs.CV']
We present StdGEN, an innovative pipeline for generating semantically decomposed high-quality 3D characters from single images, enabling broad applications in virtual reality, gaming, and filmmaking, etc. Unlike previous methods which struggle with limited decomposability, unsatisfactory quality, and long optimization ...
2024-11-08T17:54:18Z
CVPR 2025. 13 pages, 10 figures
null
null
null
null
null
null
null
null
null
2411.05823
FlexCAD: Unified and Versatile Controllable CAD Generation with Fine-tuned Large Language Models
['Zhanwei Zhang', 'Shizhao Sun', 'Wenxiao Wang', 'Deng Cai', 'Jiang Bian']
['cs.CV', 'cs.AI', 'cs.GR']
Recently, there is a growing interest in creating computer-aided design (CAD) models based on user intent, known as controllable CAD generation. Existing work offers limited controllability and needs separate models for different types of control, reducing efficiency and practicality. To achieve controllable generation...
2024-11-05T05:45:26Z
Published as a conference paper at ICLR 2025
null
null
null
null
null
null
null
null
null
2411.05872
Dialectal Coverage And Generalization in Arabic Speech Recognition
['Amirbek Djanibekov', 'Hawau Olamide Toyin', 'Raghad Alshalan', 'Abdullah Alitr', 'Hanan Aldarmaki']
['cs.CL', 'cs.SD', 'eess.AS']
Developing robust automatic speech recognition (ASR) systems for Arabic requires effective strategies to manage its diversity. Existing ASR systems mainly cover the modern standard Arabic (MSA) variety and few high-resource dialects, but fall short in coverage and generalization across the multitude of spoken variants....
2024-11-07T22:23:30Z
null
null
null
null
null
null
null
null
null
null
2411.05966
Energy Efficient Protein Language Models: Leveraging Small Language Models with LoRA for Controllable Protein Generation
['Aayush Shah', 'Shankar Jayaratnam']
['q-bio.BM', 'cs.LG']
Large language models (LLMs) have demonstrated significant success in natural language processing (NLP) tasks and have shown promising results in other domains such as protein sequence generation. However, there remain salient differences between LLMs used for NLP, which effectively handle multiple tasks and are availa...
2024-11-08T20:52:06Z
null
null
null
Energy Efficient Protein Language Models: Leveraging Small Language Models with LoRA for Controllable Protein Generation
['Aayush Shah', 'Shankar Jayaratnam']
2024
arXiv.org
0
29
['Biology', 'Computer Science']
2411.06272
Golden Touchstone: A Comprehensive Bilingual Benchmark for Evaluating Financial Large Language Models
['Xiaojun Wu', 'Junxi Liu', 'Huanyi Su', 'Zhouchi Lin', 'Yiyan Qi', 'Chengjin Xu', 'Jiajun Su', 'Jiajie Zhong', 'Fuwei Wang', 'Saizhuo Wang', 'Fengrui Hua', 'Jia Li', 'Jian Guo']
['cs.CL', 'cs.CE']
As large language models become increasingly prevalent in the financial sector, there is a pressing need for a standardized method to comprehensively assess their performance. However, existing finance benchmarks often suffer from limited language and task coverage, as well as challenges such as low-quality datasets an...
2024-11-09T20:09:11Z
26 pages, 9 tables, 3 figures
null
null
Golden Touchstone: A Comprehensive Bilingual Benchmark for Evaluating Financial Large Language Models
['Xiaojun Wu', 'Junxi Liu', 'Huanyi Su', 'Zhouchi Lin', 'Yiyan Qi', 'Chengjin Xu', 'Jiajun Su', 'Jiajie Zhong', 'Fuwei Wang', 'Sai Wang', 'Fengrui Hua', 'Jia Li', 'Jian Guo']
2024
arXiv.org
2
74
['Computer Science']
2411.06441
Detecting AutoEncoder is Enough to Catch LDM Generated Images
['Dmitry Vesnin', 'Dmitry Levshun', 'Andrey Chechulin']
['cs.CV', 'cs.CR', 'cs.LG']
In recent years, diffusion models have become one of the main methods for generating images. However, detecting images generated by these models remains a challenging task. This paper proposes a novel method for detecting images generated by Latent Diffusion Models (LDM) by identifying artifacts introduced by their aut...
2024-11-10T12:17:32Z
null
null
null
null
null
null
null
null
null
null
2411.06559
Is Your LLM Secretly a World Model of the Internet? Model-Based Planning for Web Agents
['Yu Gu', 'Kai Zhang', 'Yuting Ning', 'Boyuan Zheng', 'Boyu Gou', 'Tianci Xue', 'Cheng Chang', 'Sanjari Srivastava', 'Yanan Xie', 'Peng Qi', 'Huan Sun', 'Yu Su']
['cs.AI']
Language agents based on large language models (LLMs) have demonstrated great promise in automating web-based tasks. Recent work has shown that incorporating advanced planning algorithms, e.g., tree search, is advantageous over reactive planning for web agents. However, unlike simulated sandbox environments, real-world...
2024-11-10T18:50:51Z
22 pages, 11 figures, 6 tables
null
null
null
null
null
null
null
null
null
2411.06839
LLM-NEO: Parameter Efficient Knowledge Distillation for Large Language Models
['Runming Yang', 'Taiqiang Wu', 'Jiahao Wang', 'Pengfei Hu', 'Yik-Chung Wu', 'Ngai Wong', 'Yujiu Yang']
['cs.CL', 'cs.AI', 'cs.LG']
Knowledge distillation (KD) has been a predominant method for compressing Large Language Models (LLMs). In this paper, we first revisit KD and Low-Rank Adaption (LoRA) and demonstrate that they follow the same paradigm. Inspired by this observation, we propose a parameter-efficient knowledge distillation method, LLM-NE...
2024-11-11T10:07:51Z
ARR under review
null
null
LLM-Neo: Parameter Efficient Knowledge Distillation for Large Language Models
['Runming Yang', 'Taiqiang Wu', 'Jiahao Wang', 'Pengfei Hu', 'Ngai Wong', 'Yujiu Yang']
2024
arXiv.org
1
33
['Computer Science']
2411.07121
Decoding Visual Experience and Mapping Semantics through Whole-Brain Analysis Using fMRI Foundation Models
['Yanchen Wang', 'Adam Turnbull', 'Tiange Xiang', 'Yunlong Xu', 'Sa Zhou', 'Adnan Masoud', 'Shekoofeh Azizi', 'Feng Vankee Lin', 'Ehsan Adeli']
['cs.CV']
Neural decoding, the process of understanding how brain activity corresponds to different stimuli, has been a primary objective in cognitive sciences. Over the past three decades, advancements in functional Magnetic Resonance Imaging and machine learning have greatly improved our ability to map visual stimuli to brain ...
2024-11-11T16:51:17Z
null
null
null
Decoding Visual Experience and Mapping Semantics through Whole-Brain Analysis Using fMRI Foundation Models
['Yanchen Wang', 'Adam Turnbull', 'Tiange Xiang', 'Yunlong Xu', 'Sa Zhou', 'Adnan Masoud', 'Shekoofeh Azizi', 'F. Lin', 'Ehsan Adeli']
2,024
arXiv.org
1
64
['Computer Science']
2,411.07122
SCAR: Sparse Conditioned Autoencoders for Concept Detection and Steering in LLMs
['Ruben Härle', 'Felix Friedrich', 'Manuel Brack', 'Björn Deiseroth', 'Patrick Schramowski', 'Kristian Kersting']
['cs.CL']
Large Language Models (LLMs) have demonstrated remarkable capabilities in generating human-like text, but their output may not be aligned with the user or even produce harmful content. This paper presents a novel approach to detect and steer concepts such as toxicity before generation. We introduce the Sparse Condition...
2024-11-11T16:51:39Z
Accepted at Socially Responsible Language Modelling Research (SoLaR) Workshop at NeurIPS 2024
null
null
null
null
null
null
null
null
null
2,411.07133
Stronger Models are NOT Stronger Teachers for Instruction Tuning
['Zhangchen Xu', 'Fengqing Jiang', 'Luyao Niu', 'Bill Yuchen Lin', 'Radha Poovendran']
['cs.AI', 'cs.CL']
Instruction tuning has been widely adopted to ensure large language models (LLMs) follow user instructions effectively. The resulting instruction-following capabilities of LLMs heavily rely on the instruction datasets used for tuning. Recently, synthetic instruction datasets have emerged as an economically viable solut...
2024-11-11T17:06:48Z
This paper is accepted at NAACL 2025
null
null
null
null
null
null
null
null
null
2,411.07186
NatureLM-audio: an Audio-Language Foundation Model for Bioacoustics
['David Robinson', 'Marius Miron', 'Masato Hagiwara', 'Benno Weck', 'Sara Keen', 'Milad Alizadeh', 'Gagan Narula', 'Matthieu Geist', 'Olivier Pietquin']
['cs.SD', 'cs.AI', 'cs.LG', 'eess.AS']
Large language models (LLMs) prompted with text and audio have achieved state-of-the-art performance across various auditory tasks, including speech, music, and general audio, showing emergent abilities on unseen tasks. However, their potential has yet to be fully demonstrated in bioacoustics tasks, such as detecting a...
2024-11-11T18:01:45Z
Demo page: https://earthspecies.github.io/naturelm-audio-demo/
null
null
null
null
null
null
null
null
null
2,411.07231
Watermark Anything with Localized Messages
['Tom Sander', 'Pierre Fernandez', 'Alain Durmus', 'Teddy Furon', 'Matthijs Douze']
['cs.CV', 'cs.CR']
Image watermarking methods are not tailored to handle small watermarked areas. This restricts applications in real-world scenarios where parts of the image may come from different sources or have been edited. We introduce a deep-learning model for localized image watermarking, dubbed the Watermark Anything Model (WAM)....
2024-11-11T18:49:58Z
Under review. Code at https://github.com/facebookresearch/watermark-anything
null
null
null
null
null
null
null
null
null
2,411.07238
OpenThaiGPT 1.5: A Thai-Centric Open Source Large Language Model
['Sumeth Yuenyong', 'Kobkrit Viriyayudhakorn', 'Apivadee Piyatumrong', 'Jillaphat Jaroenkantasima']
['cs.CL']
OpenThaiGPT 1.5 is an advanced Thai language chat model based on Qwen v2.5, finetuned on over 2,000,000 Thai instruction pairs. This report provides an engineering perspective on the model's development, capabilities, and performance. We discuss the model's architecture, training process, and key features, including mu...
2024-11-11T18:58:46Z
8 pages, 4 tables
null
null
null
null
null
null
null
null
null
2,411.07404
Controllable Context Sensitivity and the Knob Behind It
['Julian Minder', 'Kevin Du', 'Niklas Stoehr', 'Giovanni Monea', 'Chris Wendler', 'Robert West', 'Ryan Cotterell']
['cs.CL', 'cs.AI']
When making predictions, a language model must trade off how much it relies on its context vs. its prior knowledge. Choosing how sensitive the model is to its context is a fundamental functionality, as it enables the model to excel at tasks like retrieval-augmented generation and question-answering. In this paper, we s...
2024-11-11T22:22:21Z
Published as a conference paper at ICLR 2025
null
null
null
null
null
null
null
null
null
2,411.07635
Breaking the Low-Rank Dilemma of Linear Attention
['Qihang Fan', 'Huaibo Huang', 'Ran He']
['cs.CV']
The Softmax attention mechanism in Transformer models is notoriously computationally expensive, particularly due to its quadratic complexity, posing significant challenges in vision applications. In contrast, linear attention provides a far more efficient solution by reducing the complexity to linear levels. However, c...
2024-11-12T08:30:59Z
The paper is accepted by CVPR2025
null
null
null
null
null
null
null
null
null
2,411.07688
ImageRAG: Enhancing Ultra High Resolution Remote Sensing Imagery Analysis with ImageRAG
['Zilun Zhang', 'Haozhan Shen', 'Tiancheng Zhao', 'Zian Guan', 'Bin Chen', 'Yuhao Wang', 'Xu Jia', 'Yuxiang Cai', 'Yongheng Shang', 'Jianwei Yin']
['cs.CV', 'cs.AI']
Ultra High Resolution (UHR) remote sensing imagery (RSI) (e.g. 100,000 $\times$ 100,000 pixels or more) poses a significant challenge for current Remote Sensing Multimodal Large Language Models (RSMLLMs). If choosing to resize the UHR image to standard input image size, the extensive spatial and contextual information th...
2024-11-12T10:12:12Z
Accepted by IEEE Geoscience and Remote Sensing Magazine
null
10.1109/MGRS.2025.3574742
Enhancing Ultra High Resolution Remote Sensing Imagery Analysis with ImageRAG
['Zilun Zhang', 'Haozhan Shen', 'Tiancheng Zhao', 'Yuhao Wang', 'Bin Chen', 'Yuxiang Cai', 'Yongheng Shang', 'Jianwei Yin']
2,024
IEEE Geoscience and Remote Sensing Magazine
3
127
['Computer Science']
2,411.07814
Community Research Earth Digital Intelligence Twin (CREDIT)
['John Schreck', 'Yingkai Sha', 'William Chapman', 'Dhamma Kimpara', 'Judith Berner', 'Seth McGinnis', 'Arnold Kazadi', 'Negin Sobhani', 'Ben Kirk', 'David John Gagne II']
['cs.AI', 'physics.ao-ph']
Recent advancements in artificial intelligence (AI) for numerical weather prediction (NWP) have significantly transformed atmospheric modeling. AI NWP models outperform traditional physics-based systems, such as the Integrated Forecast System (IFS), across several global metrics while requiring fewer computational reso...
2024-11-09T03:08:03Z
null
null
null
null
null
null
null
null
null
null
2,411.07854
Tucano: Advancing Neural Text Generation for Portuguese
['Nicholas Kluge Corrêa', 'Aniket Sen', 'Sophia Falk', 'Shiza Fatimah']
['cs.CL', 'cs.AI', 'cs.LG']
Significant advances have been made in natural language processing in recent years. However, our current deep learning approach to language modeling requires substantial resources in terms of data and computation. One of the side effects of this data-hungry paradigm is the current schism between languages, separating t...
2024-11-12T15:06:06Z
null
null
null
Tucano: Advancing Neural Text Generation for Portuguese
['Nicholas Kluge Corrêa', 'Aniket Sen', 'Sophia Falk', 'Shiza Fatimah']
2,024
arXiv.org
1
0
['Computer Science']
2,411.07975
JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation
['Yiyang Ma', 'Xingchao Liu', 'Xiaokang Chen', 'Wen Liu', 'Chengyue Wu', 'Zhiyu Wu', 'Zizheng Pan', 'Zhenda Xie', 'Haowei Zhang', 'Xingkai Yu', 'Liang Zhao', 'Yisong Wang', 'Jiaying Liu', 'Chong Ruan']
['cs.CV', 'cs.AI', 'cs.CL']
We present JanusFlow, a powerful framework that unifies image understanding and generation in a single model. JanusFlow introduces a minimalist architecture that integrates autoregressive language models with rectified flow, a state-of-the-art method in generative modeling. Our key finding demonstrates that rectified f...
2024-11-12T17:55:10Z
Accepted by CVPR 2025
null
null
JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation
['Yiyang Ma', 'Xingchao Liu', 'Xiaokang Chen', 'Wen Liu', 'Chengyue Wu', 'Zhiyu Wu', 'Zizheng Pan', 'Zhenda Xie', 'Haowei Zhang', 'Xingkai Yu', 'Liang Zhao', 'Yisong Wang', 'Jiaying Liu', 'C. Ruan']
2,024
Computer Vision and Pattern Recognition
39
104
['Computer Science']
2,411.0799
Derivational Morphology Reveals Analogical Generalization in Large Language Models
['Valentin Hofmann', 'Leonie Weissweiler', 'David Mortensen', 'Hinrich Schütze', 'Janet Pierrehumbert']
['cs.CL', 'cs.AI', 'cs.LG']
What mechanisms underlie linguistic generalization in large language models (LLMs)? This question has attracted considerable attention, with most studies analyzing the extent to which the language skills of LLMs resemble rules. As of yet, it is not known whether linguistic generalization in LLMs could equally well be e...
2024-11-12T18:15:19Z
null
null
null
Derivational Morphology Reveals Analogical Generalization in Large Language Models
['Valentin Hofmann', 'Leonie Weissweiler', 'David R. Mortensen', 'Hinrich Schütze', 'J. Pierrehumbert']
2,024
arXiv.org
1
0
['Computer Science']
2,411.08017
Wavelet Latent Diffusion (WaLa): Billion-Parameter 3D Generative Model with Compact Wavelet Encodings
['Aditya Sanghi', 'Aliasghar Khani', 'Pradyumna Reddy', 'Arianna Rampini', 'Derek Cheung', 'Kamal Rahimi Malekshan', 'Kanika Madan', 'Hooman Shayani']
['cs.CV', 'cs.AI', 'cs.LG']
Large-scale 3D generative models require substantial computational resources yet often fall short in capturing fine details and complex geometries at high resolutions. We attribute this limitation to the inefficiency of current representations, which lack the compactness required to model the generative models effectiv...
2024-11-12T18:49:06Z
null
null
null
null
null
null
null
null
null
null