Dataset schema (field name, storage type, and the value statistics reported by the dataset viewer):

- arxiv_id: float64, range 1.5k–2.51k (arXiv IDs stored as numbers)
- title: string, lengths 9–178
- authors: string, lengths 2–22.8k
- categories: string, lengths 4–146
- summary: string, lengths 103–1.92k
- published: date string, 2015-02-06 10:44:00 to 2025-07-10 17:59:58
- comments: string, lengths 2–417
- journal_ref: string, 321 distinct values
- doi: string, 398 distinct values
- ss_title: string, lengths 8–159
- ss_authors: string, lengths 11–8.38k
- ss_year: float64, range 2.02k–2.03k (years stored as numbers)
- ss_venue: string, 281 distinct values
- ss_citationCount: float64, range 0–134k
- ss_referenceCount: float64, range 0–429
- ss_fieldsOfStudy: string, 47 distinct values

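Because arxiv_id and ss_year are stored as float64, the viewer renders IDs with digit grouping and drops trailing zeros (e.g. 2403.14790 appears as "2,403.1479"). A minimal sketch of restoring canonical new-style IDs (YYMM.NNNNN); plain Python, no external dependencies, and the helper name is my own:

```python
def clean_arxiv_id(raw) -> str:
    """Restore a float-mangled new-style arXiv ID (YYMM.NNNNN).

    float64 storage drops trailing zeros (2403.14790 -> 2403.1479) and the
    viewer adds digit grouping ("2,403.1479"); strip the commas, then
    fixed-point formatting zero-pads the 5-digit suffix back.
    """
    value = float(str(raw).replace(",", ""))
    yymm, _, suffix = f"{value:.5f}".partition(".")
    return f"{yymm}.{suffix}"

# clean_arxiv_id("2,403.1479")  -> "2403.14790"
# clean_arxiv_id("2,403.14613") -> "2403.14613"
```

The same comma-stripping step (followed by `int(...)`) recovers ss_year values such as "2,024" as 2024.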
arxiv_id: 2403.14613
title: DreamReward: Text-to-3D Generation with Human Preference
authors: ['Junliang Ye', 'Fangfu Liu', 'Qixiu Li', 'Zhengyi Wang', 'Yikai Wang', 'Xinzhou Wang', 'Yueqi Duan', 'Jun Zhu']
categories: ['cs.CV', 'cs.CL', 'cs.LG']
summary: 3D content creation from text prompts has shown remarkable success recently. However, current text-to-3D methods often generate 3D results that do not align well with human preferences. In this paper, we present a comprehensive framework, coined DreamReward, to learn and improve text-to-3D models from human preference ...
published: 2024-03-21T17:58:04Z
comments: Project page: https://jamesyjl.github.io/DreamReward
journal_ref / doi / ss_* fields: null

arxiv_id: 2403.14627
title: MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images
authors: ['Yuedong Chen', 'Haofei Xu', 'Chuanxia Zheng', 'Bohan Zhuang', 'Marc Pollefeys', 'Andreas Geiger', 'Tat-Jen Cham', 'Jianfei Cai']
categories: ['cs.CV']
summary: We introduce MVSplat, an efficient model that, given sparse multi-view images as input, predicts clean feed-forward 3D Gaussians. To accurately localize the Gaussian centers, we build a cost volume representation via plane sweeping, where the cross-view feature similarities stored in the cost volume can provide valuabl...
published: 2024-03-21T17:59:58Z
comments: ECCV2024, Project page: https://donydchen.github.io/mvsplat, Code: https://github.com/donydchen/mvsplat
journal_ref: null
doi: 10.1007/978-3-031-72664-4_21
ss_* fields: null

arxiv_id: 2403.14645
title: Designing Multi-Step Action Models for Enterprise AI Adoption
authors: ['Shreyash Mishra', 'Shrey Shah', 'Rex Pereira']
categories: ['cs.CY', 'cs.AI', '68T42', 'I.2.1; I.2.8']
summary: This paper introduces the Multi-Step Action Model (MSAM), a closed-source AI model designed by Empsing to address challenges hindering AI adoption in enterprises. Through a holistic examination, this paper explores MSAM's foundational principles, design architecture, and future trajectory. It evaluates MSAM's performan...
published: 2024-02-21T18:37:13Z
comments: 8 pages, 5 figures
journal_ref / doi / ss_* fields: null

arxiv_id: 2403.14715
title: Towards Understanding Why Label Smoothing Degrades Selective Classification and How to Fix It
authors: ['Guoxuan Xia', 'Olivier Laurent', 'Gianni Franchi', 'Christos-Savvas Bouganis']
categories: ['cs.LG', 'cs.AI', 'cs.CV']
summary: Label smoothing (LS) is a popular regularisation method for training neural networks as it is effective in improving test accuracy and is simple to implement. ``Hard'' one-hot labels are ``smoothed'' by uniformly distributing probability mass to other classes, reducing overfitting. Prior work has suggested that in some...
published: 2024-03-19T06:46:24Z
comments: Published as a conference paper at ICLR 2025
journal_ref / doi / ss_* fields: null

arxiv_id: 2403.14773
title: StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text
authors: ['Roberto Henschel', 'Levon Khachatryan', 'Hayk Poghosyan', 'Daniil Hayrapetyan', 'Vahram Tadevosyan', 'Zhangyang Wang', 'Shant Navasardyan', 'Humphrey Shi']
categories: ['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG', 'cs.MM', 'eess.IV']
summary: Text-to-video diffusion models enable the generation of high-quality videos that follow text instructions, making it easy to create diverse and individual content. However, existing approaches mostly focus on high-quality short video generation (typically 16 or 24 frames), ending up with hard-cuts when naively extended...
published: 2024-03-21T18:27:29Z
comments: https://github.com/Picsart-AI-Research/StreamingT2V
journal_ref / doi / ss_* fields: null

arxiv_id: 2403.14781
title: Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance
authors: ['Shenhao Zhu', 'Junming Leo Chen', 'Zuozhuo Dai', 'Qingkun Su', 'Yinghui Xu', 'Xun Cao', 'Yao Yao', 'Hao Zhu', 'Siyu Zhu']
categories: ['cs.CV']
summary: In this study, we introduce a methodology for human image animation by leveraging a 3D human parametric model within a latent diffusion framework to enhance shape alignment and motion guidance in current human generative techniques. The methodology utilizes the SMPL (Skinned Multi-Person Linear) model as the 3D human pa...
published: 2024-03-21T18:52:58Z
comments / journal_ref / doi / ss_* fields: null

arxiv_id: 2403.14790
title: Latent Diffusion Models for Attribute-Preserving Image Anonymization
authors: ['Luca Piano', 'Pietro Basci', 'Fabrizio Lamberti', 'Lia Morra']
categories: ['cs.CV', 'cs.AI']
summary: Generative techniques for image anonymization have great potential to generate datasets that protect the privacy of those depicted in the images, while achieving high data fidelity and utility. Existing methods have focused extensively on preserving facial attributes, but failed to embrace a more comprehensive perspect...
published: 2024-03-21T19:09:21Z
comments / journal_ref / doi / ss_* fields: null

arxiv_id: 2403.14852
title: KeyPoint Relative Position Encoding for Face Recognition
authors: ['Minchul Kim', 'Yiyang Su', 'Feng Liu', 'Anil Jain', 'Xiaoming Liu']
categories: ['cs.CV']
summary: In this paper, we address the challenge of making ViT models more robust to unseen affine transformations. Such robustness becomes useful in various recognition tasks such as face recognition when image alignment failures occur. We propose a novel method called KP-RPE, which leverages key points (e.g.~facial landmarks)...
published: 2024-03-21T21:56:09Z
comments: To appear in CVPR2024
journal_ref / doi: null
ss_title: KeyPoint Relative Position Encoding for Face Recognition
ss_authors: ['Minchul Kim', 'Yiyang Su', 'Feng Liu', 'Anil Jain', 'Xiaoming Liu']
ss_year: 2024
ss_venue: Computer Vision and Pattern Recognition
ss_citationCount: 10
ss_referenceCount: 91
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2403.15245
title: Reasoning-Enhanced Object-Centric Learning for Videos
authors: ['Jian Li', 'Pu Ren', 'Yang Liu', 'Hao Sun']
categories: ['cs.CV', 'cs.AI', 'cs.LG']
summary: Object-centric learning aims to break down complex visual scenes into more manageable object representations, enhancing the understanding and reasoning abilities of machine learning systems toward the physical world. Recently, slot-based video models have demonstrated remarkable proficiency in segmenting and tracking o...
published: 2024-03-22T14:41:55Z
comments / journal_ref / doi: null
ss_title: Reasoning-Enhanced Object-Centric Learning for Videos
ss_authors: ['Jian Li', 'Pu Ren', 'Yang Liu', 'Hao Sun']
ss_year: 2024
ss_venue: Knowledge Discovery and Data Mining
ss_citationCount: 2
ss_referenceCount: 84
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2403.15246
title: FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions
authors: ['Orion Weller', 'Benjamin Chang', 'Sean MacAvaney', 'Kyle Lo', 'Arman Cohan', 'Benjamin Van Durme', 'Dawn Lawrie', 'Luca Soldaini']
categories: ['cs.IR', 'cs.CL', 'cs.LG']
summary: Modern Language Models (LMs) are capable of following long and complex instructions that enable a large and diverse set of user requests. While Information Retrieval (IR) models use these LMs as the backbone of their architectures, virtually none of them allow users to provide detailed instructions alongside queries, t...
published: 2024-03-22T14:42:29Z
comments / journal_ref / doi / ss_* fields: null

arxiv_id: 2403.15279
title: Fundus: A Simple-to-Use News Scraper Optimized for High Quality Extractions
authors: ['Max Dallabetta', 'Conrad Dobberstein', 'Adrian Breiding', 'Alan Akbik']
categories: ['cs.CL', 'cs.IR']
summary: This paper introduces Fundus, a user-friendly news scraper that enables users to obtain millions of high-quality news articles with just a few lines of code. Unlike existing news scrapers, we use manually crafted, bespoke content extractors that are specifically tailored to the formatting guidelines of each supported o...
published: 2024-03-22T15:22:06Z
comments: 10 pages, 4 figures, ACL 2024, for a screencast see https://www.youtube.com/watch?v=9GJExMelhdI
journal_ref / doi / ss_* fields: null

arxiv_id: 2403.15322
title: CO-Fun: A German Dataset on Company Outsourcing in Fund Prospectuses for Named Entity Recognition and Relation Extraction
authors: ['Neda Foroutan', 'Markus Schröder', 'Andreas Dengel']
categories: ['cs.CL']
summary: The process of cyber mapping gives insights in relationships among financial entities and service providers. Centered around the outsourcing practices of companies within fund prospectuses in Germany, we introduce a dataset specifically designed for named entity recognition and relation extraction tasks. The labeling p...
published: 2024-03-22T16:17:55Z
comments / journal_ref / doi / ss_* fields: null

arxiv_id: 2403.15356
title: Neural Plasticity-Inspired Multimodal Foundation Model for Earth Observation
authors: ['Zhitong Xiong', 'Yi Wang', 'Fahong Zhang', 'Adam J. Stewart', 'Joëlle Hanna', 'Damian Borth', 'Ioannis Papoutsis', 'Bertrand Le Saux', 'Gustau Camps-Valls', 'Xiao Xiang Zhu']
categories: ['cs.CV']
summary: The development of foundation models has revolutionized our ability to interpret the Earth's surface using satellite observational data. Traditional models have been siloed, tailored to specific sensors or data types like optical, radar, and hyperspectral, each with its own unique characteristics. This specialization h...
published: 2024-03-22T17:11:47Z
comments: 36 pages, 7 figures
journal_ref / doi: null
ss_title: Neural Plasticity-Inspired Multimodal Foundation Model for Earth Observation
ss_authors: ['Zhitong Xiong', 'Yi Wang', 'Fahong Zhang', 'Adam J. Stewart', 'Joelle Hanna', 'Damian Borth', 'Ioannis Papoutsis', 'B. L. Saux', 'G. Camps-Valls', 'Xiao Xiang Zhu']
ss_year: 2024
ss_venue: null
ss_citationCount: 18
ss_referenceCount: 88
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2403.15377
title: InternVideo2: Scaling Foundation Models for Multimodal Video Understanding
authors: ['Yi Wang', 'Kunchang Li', 'Xinhao Li', 'Jiashuo Yu', 'Yinan He', 'Chenting Wang', 'Guo Chen', 'Baoqi Pei', 'Ziang Yan', 'Rongkun Zheng', 'Jilan Xu', 'Zun Wang', 'Yansong Shi', 'Tianxiang Jiang', 'Songze Li', 'Hongjie Zhang', 'Yifei Huang', 'Yu Qiao', 'Yali Wang', 'Limin Wang']
categories: ['cs.CV']
summary: We introduce InternVideo2, a new family of video foundation models (ViFM) that achieve the state-of-the-art results in video recognition, video-text tasks, and video-centric dialogue. Our core design is a progressive training approach that unifies the masked video modeling, crossmodal contrastive learning, and next tok...
published: 2024-03-22T17:57:42Z
comments: a technical report about video understanding (accepted to ECCV2024)
journal_ref / doi / ss_* fields: null

arxiv_id: 2403.15378
title: Long-CLIP: Unlocking the Long-Text Capability of CLIP
authors: ['Beichen Zhang', 'Pan Zhang', 'Xiaoyi Dong', 'Yuhang Zang', 'Jiaqi Wang']
categories: ['cs.CV']
summary: Contrastive Language-Image Pre-training (CLIP) has been the cornerstone for zero-shot classification, text-image retrieval, and text-image generation by aligning image and text modalities. Despite its widespread adoption, a significant limitation of CLIP lies in the inadequate length of text input. The length of the te...
published: 2024-03-22T17:58:16Z
comments: ECCV 2024. All codes and models are publicly available at https://github.com/beichenzbc/Long-CLIP
journal_ref / doi / ss_* fields: null

arxiv_id: 2403.15484
title: RakutenAI-7B: Extending Large Language Models for Japanese
authors: ['Rakuten Group', 'Aaron Levine', 'Connie Huang', 'Chenguang Wang', 'Eduardo Batista', 'Ewa Szymanska', 'Hongyi Ding', 'Hou Wei Chou', 'Jean-François Pessiot', 'Johanes Effendi', 'Justin Chiu', 'Kai Torben Ohlhus', 'Karan Chopra', 'Keiji Shinzato', 'Koji Murakami', 'Lee Xiong', 'Lei Chen', 'Maki Kubota', 'Maksim Tkache...
categories: ['cs.CL', 'cs.LG']
summary: We introduce RakutenAI-7B, a suite of Japanese-oriented large language models that achieve the best performance on the Japanese LM Harness benchmarks among the open 7B models. Along with the foundation model, we release instruction- and chat-tuned models, RakutenAI-7B-instruct and RakutenAI-7B-chat respectively, under ...
published: 2024-03-21T06:56:07Z
comments / journal_ref / doi / ss_* fields: null

arxiv_id: 2403.15705
title: SUP-NeRF: A Streamlined Unification of Pose Estimation and NeRF for Monocular 3D Object Reconstruction
authors: ['Yuliang Guo', 'Abhinav Kumar', 'Cheng Zhao', 'Ruoyu Wang', 'Xinyu Huang', 'Liu Ren']
categories: ['cs.CV']
summary: Monocular 3D reconstruction for categorical objects heavily relies on accurately perceiving each object's pose. While gradient-based optimization in a NeRF framework updates the initial pose, this paper highlights that scale-depth ambiguity in monocular object reconstruction causes failures when the initial pose deviat...
published: 2024-03-23T03:56:25Z
comments / journal_ref / doi / ss_* fields: null

arxiv_id: 2403.15882
title: VLUE: A New Benchmark and Multi-task Knowledge Transfer Learning for Vietnamese Natural Language Understanding
authors: ['Phong Nguyen-Thuan Do', 'Son Quoc Tran', 'Phu Gia Hoang', 'Kiet Van Nguyen', 'Ngan Luu-Thuy Nguyen']
categories: ['cs.CL']
summary: The success of Natural Language Understanding (NLU) benchmarks in various languages, such as GLUE for English, CLUE for Chinese, KLUE for Korean, and IndoNLU for Indonesian, has facilitated the evaluation of new NLU models across a wide range of tasks. To establish a standardized set of benchmarks for Vietnamese NLU, w...
published: 2024-03-23T16:26:49Z
comments: Accepted at NAACL 2024 (Findings)
journal_ref / doi: null
ss_title: VLUE: A New Benchmark and Multi-task Knowledge Transfer Learning for Vietnamese Natural Language Understanding
ss_authors: ['Phong Nguyen-Thuan Do', 'Son Quoc Tran', 'Phu Gia Hoang', 'Kiet Van Nguyen', 'N. Nguyen']
ss_year: 2024
ss_venue: NAACL-HLT
ss_citationCount: 5
ss_referenceCount: 50
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2403.16008
title: CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering
authors: ['Hongbin Na']
categories: ['cs.CL']
summary: The recent advancements in artificial intelligence highlight the potential of language models in psychological health support. While models trained on data from mental health service platform have achieved preliminary success, challenges persist in areas such as data scarcity, quality, and ensuring a solid foundation i...
published: 2024-03-24T04:34:34Z
comments: Accepted at COLING 2024
journal_ref / doi: null
ss_title: CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering
ss_authors: ['Hongbin Na']
ss_year: 2024
ss_venue: International Conference on Language Resources and Evaluation
ss_citationCount: 17
ss_referenceCount: 45
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2403.16023
title: RPMArt: Towards Robust Perception and Manipulation for Articulated Objects
authors: ['Junbo Wang', 'Wenhai Liu', 'Qiaojun Yu', 'Yang You', 'Liu Liu', 'Weiming Wang', 'Cewu Lu']
categories: ['cs.RO', 'cs.AI', 'cs.CV']
summary: Articulated objects are commonly found in daily life. It is essential that robots can exhibit robust perception and manipulation skills for articulated objects in real-world robotic applications. However, existing methods for articulated objects insufficiently address noise in point clouds and struggle to bridge the ga...
published: 2024-03-24T05:55:39Z
comments: 8 pages, 7 figures, accepted by 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), project website at https://r-pmart.github.io
journal_ref / doi / ss_* fields: null

arxiv_id: 2403.16051
title: Segment Anything Model for Road Network Graph Extraction
authors: ['Congrui Hetang', 'Haoru Xue', 'Cindy Le', 'Tianwei Yue', 'Wenping Wang', 'Yihui He']
categories: ['cs.CV']
summary: We propose SAM-Road, an adaptation of the Segment Anything Model (SAM) for extracting large-scale, vectorized road network graphs from satellite imagery. To predict graph geometry, we formulate it as a dense semantic segmentation task, leveraging the inherent strengths of SAM. The image encoder of SAM is fine-tuned to ...
published: 2024-03-24T07:36:38Z
comments: Accepted by IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR) 2024, 2nd Workshop on Scene Graphs and Graph Representation Learning
journal_ref / doi / ss_* fields: null

arxiv_id: 2403.16158
title: Korean Bio-Medical Corpus (KBMC) for Medical Named Entity Recognition
authors: ['Sungjoo Byun', 'Jiseung Hong', 'Sumin Park', 'Dongjun Jang', 'Jean Seo', 'Minseok Kim', 'Chaeyoung Oh', 'Hyopil Shin']
categories: ['cs.CL']
summary: Named Entity Recognition (NER) plays a pivotal role in medical Natural Language Processing (NLP). Yet, there has not been an open-source medical NER dataset specifically for the Korean language. To address this, we utilized ChatGPT to assist in constructing the KBMC (Korean Bio-Medical Corpus), which we are now present...
published: 2024-03-24T13:51:05Z
comments: null
journal_ref: LREC-COLING 2024
doi / ss_* fields: null

arxiv_id: 2403.16443
title: CodeS: Natural Language to Code Repository via Multi-Layer Sketch
authors: ['Daoguang Zan', 'Ailun Yu', 'Wei Liu', 'Dong Chen', 'Bo Shen', 'Wei Li', 'Yafen Yao', 'Yongshun Gong', 'Xiaolin Chen', 'Bei Guan', 'Zhiguang Yang', 'Yongji Wang', 'Qianxiang Wang', 'Lizhen Cui']
categories: ['cs.CL', 'cs.AI', 'cs.SE']
summary: The impressive performance of large language models (LLMs) on code-related tasks has shown the potential of fully automated software development. In light of this, we introduce a new software engineering task, namely Natural Language to code Repository (NL2Repo). This task aims to generate an entire code repository fro...
published: 2024-03-25T06:09:55Z
comments: https://github.com/NL2Code/CodeS
journal_ref / doi / ss_* fields: null

arxiv_id: 2403.16516
title: Visually Guided Generative Text-Layout Pre-training for Document Intelligence
authors: ['Zhiming Mao', 'Haoli Bai', 'Lu Hou', 'Jiansheng Wei', 'Xin Jiang', 'Qun Liu', 'Kam-Fai Wong']
categories: ['cs.CL', 'cs.CV']
summary: Prior study shows that pre-training techniques can boost the performance of visual document understanding (VDU), which typically requires models to gain abilities to perceive and reason both document texts and layouts (e.g., locations of texts and table-cells). To this end, we propose visually guided generative text-la...
published: 2024-03-25T08:00:43Z
comments: Accepted to NAACL 2024 main conference. The first version of this paper was submitted to OpenReview (https://openreview.net/forum?id=ARtBIBAmNR) in June 2023
journal_ref / doi / ss_* fields: null

arxiv_id: 2403.16558
title: Elysium: Exploring Object-level Perception in Videos via MLLM
authors: ['Han Wang', 'Yanjie Wang', 'Yongjie Ye', 'Yuxiang Nie', 'Can Huang']
categories: ['cs.CV']
summary: Multi-modal Large Language Models (MLLMs) have demonstrated their ability to perceive objects in still images, but their application in video-related tasks, such as object tracking, remains understudied. This lack of exploration is primarily due to two key challenges. Firstly, extensive pretraining on large-scale video...
published: 2024-03-25T09:17:15Z
comments / journal_ref / doi: null
ss_title: Elysium: Exploring Object-level Perception in Videos via MLLM
ss_authors: ['Hang Wang', 'Yanjie Wang', 'Yongjie Ye', 'Yuxiang Nie', 'Can Huang']
ss_year: 2024
ss_venue: European Conference on Computer Vision
ss_citationCount: 23
ss_referenceCount: 80
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2403.16614
title: Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts
authors: ['Rabindra Lamsal', 'Maria Rodriguez Read', 'Shanika Karunasekera']
categories: ['cs.CL']
summary: Tasks such as semantic search and clustering on crisis-related social media texts enhance our comprehension of crisis discourse, aiding decision-making and targeted interventions. Pre-trained language models have advanced performance in crisis informatics, but their contextual embeddings lack semantic meaningfulness. A...
published: 2024-03-25T10:44:38Z
comments: Accepted to ISCRAM 2024
journal_ref / doi: null
ss_title: Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts
ss_authors: ['Rabindra Lamsal', 'M. Read', 'S. Karunasekera']
ss_year: 2024
ss_venue: Proceedings of the International ISCRAM Conference
ss_citationCount: 2
ss_referenceCount: 59
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2403.16627
title: SDXS: Real-Time One-Step Latent Diffusion Models with Image Conditions
authors: ['Yuda Song', 'Zehao Sun', 'Xuanwu Yin']
categories: ['cs.CV']
summary: Recent advancements in diffusion models have positioned them at the forefront of image generation. Despite their superior performance, diffusion models are not without drawbacks; they are characterized by complex architectures and substantial computational demands, resulting in significant latency due to their iterativ...
published: 2024-03-25T11:16:23Z
comments / journal_ref / doi: null
ss_title: SDXS: Real-Time One-Step Latent Diffusion Models with Image Conditions
ss_authors: ['Yuda Song', 'Zehao Sun', 'Xuanwu Yin']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 18
ss_referenceCount: 62
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2403.17068
title: Semantic Ranking for Automated Adversarial Technique Annotation in Security Text
authors: ['Udesh Kumarasinghe', 'Ahmed Lekssays', 'Husrev Taha Sencar', 'Sabri Boughorbel', 'Charitha Elvitigala', 'Preslav Nakov']
categories: ['cs.CR']
summary: We introduce a new method for extracting structured threat behaviors from threat intelligence text. Our method is based on a multi-stage ranking architecture that allows jointly optimizing for efficiency and effectiveness. Therefore, we believe this problem formulation better aligns with the real-world nature of the ta...
published: 2024-03-25T18:03:58Z
comments / journal_ref / doi: null
ss_title: Semantic Ranking for Automated Adversarial Technique Annotation in Security Text
ss_authors: ['Udesh Kumarasinghe', 'Ahmed Lekssays', 'H. Sencar', 'Sabri Boughorbel', 'Charitha Elvitigala', 'Preslav Nakov']
ss_year: 2024
ss_venue: ACM Asia Conference on Computer and Communications Security
ss_citationCount: 7
ss_referenceCount: 48
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2403.17297
title: InternLM2 Technical Report
authors: ['Zheng Cai', 'Maosong Cao', 'Haojiong Chen', 'Kai Chen', 'Keyu Chen', 'Xin Chen', 'Xun Chen', 'Zehui Chen', 'Zhi Chen', 'Pei Chu', 'Xiaoyi Dong', 'Haodong Duan', 'Qi Fan', 'Zhaoye Fei', 'Yang Gao', 'Jiaye Ge', 'Chenya Gu', 'Yuzhe Gu', 'Tao Gui', 'Aijia Guo', 'Qipeng Guo', 'Conghui He', 'Yingfan Hu', 'Ting Huang', 'Tao...
categories: ['cs.CL', 'cs.AI']
summary: The evolution of Large Language Models (LLMs) like ChatGPT and GPT-4 has sparked discussions on the advent of Artificial General Intelligence (AGI). However, replicating such advancements in open-source models has been challenging. This paper introduces InternLM2, an open-source LLM that outperforms its predecessors in...
published: 2024-03-26T00:53:24Z
comments / journal_ref / doi: null
ss_title: InternLM2 Technical Report
ss_authors: ['Zheng Cai', 'Maosong Cao', 'Haojiong Chen', 'Kai Chen', 'Keyu Chen', 'Xin Chen', 'Xun Chen', 'Zehui Chen', 'Zhi Chen', 'Pei Chu', 'Xiao-wen Dong', 'Haodong Duan', 'Qi Fan', 'Zhaoye Fei', 'Yang Gao', 'Jiaye Ge', 'Chenya Gu', 'Yuzhe Gu', 'Tao Gui', 'Aijia Guo', 'Qipeng Guo', 'Conghui He', 'Yingfan Hu', 'Ting Huang', 'T...
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 209
ss_referenceCount: 0
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2403.17377
title: Self-Rectifying Diffusion Sampling with Perturbed-Attention Guidance
authors: ['Donghoon Ahn', 'Hyoungwon Cho', 'Jaewon Min', 'Wooseok Jang', 'Jungwoo Kim', 'SeonHwa Kim', 'Hyun Hee Park', 'Kyong Hwan Jin', 'Seungryong Kim']
categories: ['cs.CV', 'cs.AI', 'cs.LG']
summary: Recent studies have demonstrated that diffusion models are capable of generating high-quality samples, but their quality heavily depends on sampling guidance techniques, such as classifier guidance (CG) and classifier-free guidance (CFG). These techniques are often not applicable in unconditional generation or in vario...
published: 2024-03-26T04:49:11Z
comments: Project page is available at https://ku-cvlab.github.io/Perturbed-Attention-Guidance. This version reflects the ECCV 2024 camera-ready submission
journal_ref / doi / ss_* fields: null

arxiv_id: 2403.17528
title: Multilingual Sentence-T5: Scalable Sentence Encoders for Multilingual Applications
authors: ['Chihiro Yano', 'Akihiko Fukuchi', 'Shoko Fukasawa', 'Hideyuki Tachibana', 'Yotaro Watanabe']
categories: ['cs.CL']
summary: Prior work on multilingual sentence embedding has demonstrated that the efficient use of natural language inference (NLI) data to build high-performance models can outperform conventional methods. However, the potential benefits from the recent ``exponential'' growth of language models with billions of parameters have ...
published: 2024-03-26T09:31:55Z
comments: Accepted in LREC-COLING 2024
journal_ref / doi / ss_* fields: null

arxiv_id: 2403.17694
title: AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation
authors: ['Huawei Wei', 'Zejun Yang', 'Zhisheng Wang']
categories: ['cs.CV', 'cs.GR', 'eess.IV']
summary: In this study, we propose AniPortrait, a novel framework for generating high-quality animation driven by audio and a reference portrait image. Our methodology is divided into two stages. Initially, we extract 3D intermediate representations from audio and project them into a sequence of 2D facial landmarks. Subsequentl...
published: 2024-03-26T13:35:02Z
comments / journal_ref / doi / ss_* fields: null

arxiv_id: 2403.17834
title: Developing Generalist Foundation Models from a Multimodal Dataset for 3D Computed Tomography
authors: ['Ibrahim Ethem Hamamci', 'Sezgin Er', 'Chenyu Wang', 'Furkan Almas', 'Ayse Gulnihan Simsek', 'Sevval Nil Esirgun', 'Irem Doga', 'Omer Faruk Durugol', 'Weicheng Dai', 'Murong Xu', 'Muhammed Furkan Dasdelen', 'Bastian Wittmann', 'Tamaz Amiranashvili', 'Enis Simsar', 'Mehmet Simsar', 'Emine Bensu Erdemir', 'Abdullah Alan...
categories: ['cs.CV']
summary: While computer vision has achieved tremendous success with multimodal encoding and direct textual interaction with images via chat-based large language models, similar advancements in medical imaging AI, particularly in 3D imaging, have been limited due to the scarcity of comprehensive datasets. To address this critica...
published: 2024-03-26T16:19:56Z
comments / journal_ref / doi / ss_* fields: null

arxiv_id: 2403.17848
title: ArabicaQA: A Comprehensive Dataset for Arabic Question Answering
authors: ['Abdelrahman Abdallah', 'Mahmoud Kasem', 'Mahmoud Abdalla', 'Mohamed Mahmoud', 'Mohamed Elkasaby', 'Yasser Elbendary', 'Adam Jatowt']
categories: ['cs.CL', 'cs.IR']
summary: In this paper, we address the significant gap in Arabic natural language processing (NLP) resources by introducing ArabicaQA, the first large-scale dataset for machine reading comprehension and open-domain question answering in Arabic. This comprehensive dataset, consisting of 89,095 answerable and 3,701 unanswerable q...
published: 2024-03-26T16:37:54Z
comments: Accepted at SIGIR 2024
journal_ref / doi: null
ss_title: ArabicaQA: A Comprehensive Dataset for Arabic Question Answering
ss_authors: ['Abdelrahman Abdallah', 'M. Kasem', 'Mahmoud Abdalla', 'Mohamed Mahmoud', 'Mohamed Elkasaby', 'Yasser Elbendary', 'Adam Jatowt']
ss_year: 2024
ss_venue: Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
ss_citationCount: 16
ss_referenceCount: 63
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2403.17887
title: The Unreasonable Ineffectiveness of the Deeper Layers
authors: ['Andrey Gromov', 'Kushal Tirumala', 'Hassan Shapourian', 'Paolo Glorioso', 'Daniel A. Roberts']
categories: ['cs.CL', 'cs.LG', 'stat.ML']
summary: How is knowledge stored in an LLM's weights? We study this via layer pruning: if removing a certain layer does not affect model performance in common question-answering benchmarks, then the weights in that layer are not necessary for storing the knowledge needed to answer those questions. To find these unnecessary para...
published: 2024-03-26T17:20:04Z
comments: 10 + 14 pages, 6 + 5 figures. v2: ICLR camera-ready version; additional experiments in an extended discussion
journal_ref / doi / ss_* fields: null

arxiv_id: 2403.17889
title: Large scale paired antibody language models
authors: ['Henry Kenlay', 'Frédéric A. Dreyer', 'Aleksandr Kovaltsuk', 'Dom Miketa', 'Douglas Pires', 'Charlotte M. Deane']
categories: ['q-bio.BM', 'cs.LG']
summary: Antibodies are proteins produced by the immune system that can identify and neutralise a wide variety of antigens with high specificity and affinity, and constitute the most successful class of biotherapeutics. With the advent of next-generation sequencing, billions of antibody sequences have been collected in recent y...
published: 2024-03-26T17:21:54Z
comments: 14 pages, 2 figures, 6 tables, model weights available at https://zenodo.org/doi/10.5281/zenodo.10876908
journal_ref / doi: null
ss_title: Large scale paired antibody language models
ss_authors: ['Henry Kenlay', 'Frédéric A. Dreyer', 'Aleksandr Kovaltsuk', 'Dom Miketa', 'Douglas E. V. Pires', 'Charlotte M. Deane']
ss_year: 2024
ss_venue: PLoS Comput. Biol.
ss_citationCount: 24
ss_referenceCount: 54
ss_fieldsOfStudy: ['Medicine', 'Computer Science', 'Biology']

arxiv_id: 2403.17902
title: Serpent: Scalable and Efficient Image Restoration via Multi-scale Structured State Space Models
authors: ['Mohammad Shahab Sepehri', 'Zalan Fabian', 'Mahdi Soltanolkotabi']
categories: ['eess.IV', 'cs.CV', 'cs.LG', 'I.4.4; I.4.5']
summary: The landscape of computational building blocks of efficient image restoration architectures is dominated by a combination of convolutional processing and various attention mechanisms. However, convolutional filters, while efficient, are inherently local and therefore struggle with modeling long-range dependencies in im...
published: 2024-03-26T17:43:15Z
comments / journal_ref / doi: null
ss_title: Serpent: Scalable and Efficient Image Restoration via Multi-scale Structured State Space Models
ss_authors: ['Mohammad Shahab Sepehri', 'Zalan Fabian', 'M. Soltanolkotabi']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 5
ss_referenceCount: 28
ss_fieldsOfStudy: ['Computer Science', 'Engineering']

arxiv_id: 2403.17921
title: The Need for Speed: Pruning Transformers with One Recipe
authors: ['Samir Khaki', 'Konstantinos N. Plataniotis']
categories: ['cs.LG']
summary: We introduce the $\textbf{O}$ne-shot $\textbf{P}$runing $\textbf{T}$echnique for $\textbf{I}$nterchangeable $\textbf{N}$etworks ($\textbf{OPTIN}$) framework as a tool to increase the efficiency of pre-trained transformer architectures $\textit{without requiring re-training}$. Recent works have explored improving transf...
published: 2024-03-26T17:55:58Z
comments: Accepted in the International Conference on Learning Representations (ICLR) 2024
journal_ref / doi / ss_* fields: null

arxiv_id: 2403.18025
title: Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical NER
authors: ['Micheal Abaho', 'Danushka Bollegala', 'Gary Leeming', 'Dan Joyce', 'Iain E Buchan']
categories: ['cs.CL', 'cs.AI', 'cs.IR', 'cs.LG']
summary: Adapting language models (LMs) to novel domains is often achieved through fine-tuning a pre-trained LM (PLM) on domain-specific data. Fine-tuning introduces new knowledge into an LM, enabling it to comprehend and efficiently perform a target domain task. Fine-tuning can however be inadvertently insensitive if it ignore...
published: 2024-03-26T18:23:16Z
comments: Paper already accepted for publication by the NAACL 2024 conference (main conference paper)
journal_ref / doi / ss_* fields: null

arxiv_id: 2403.18140
title: Juru: Legal Brazilian Large Language Model from Reputable Sources
authors: ['Roseval Malaquias Junior', 'Ramon Pires', 'Roseli Romero', 'Rodrigo Nogueira']
categories: ['cs.CL', 'cs.AI']
summary: The high computational cost associated with pretraining large language models limits their research. Two strategies have emerged to address this issue: domain specialization and pretraining with high-quality data. To explore these strategies, we specialized the Sabi\'a-2 Small model with 1.9 billion unique tokens from ...
published: 2024-03-26T22:54:12Z
comments / journal_ref / doi: null
ss_title: Juru: Legal Brazilian Large Language Model from Reputable Sources
ss_authors: ['Roseval Malaquias Junior', 'Ramon Pires', 'R. Romero', 'Rodrigo Nogueira']
ss_year: 2024
ss_venue: arXiv.org
ss_citationCount: 0
ss_referenceCount: 24
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2403.18187
title: LayoutFlow: Flow Matching for Layout Generation
authors: ['Julian Jorge Andrade Guerreiro', 'Naoto Inoue', 'Kento Masui', 'Mayu Otani', 'Hideki Nakayama']
categories: ['cs.CV']
summary: Finding a suitable layout represents a crucial task for diverse applications in graphic design. Motivated by simpler and smoother sampling trajectories, we explore the use of Flow Matching as an alternative to current diffusion-based layout generation models. Specifically, we propose LayoutFlow, an efficient flow-based...
published: 2024-03-27T01:40:21Z
comments: Accepted to ECCV 2024, Project Page: https://julianguerreiro.github.io/layoutflow/
journal_ref / doi / ss_* fields: null

arxiv_id: 2403.18277
title: BlendX: Complex Multi-Intent Detection with Blended Patterns
authors: ['Yejin Yoon', 'Jungyeon Lee', 'Kangsan Kim', 'Chanhee Park', 'Taeuk Kim']
categories: ['cs.CL']
summary: Task-oriented dialogue (TOD) systems are commonly designed with the presumption that each utterance represents a single intent. However, this assumption may not accurately reflect real-world situations, where users frequently express multiple intents within a single utterance. While there is an emerging interest in mul...
published: 2024-03-27T06:13:04Z
comments: Accepted to LREC-COLING2024
journal_ref / doi: null
ss_title: BlendX: Complex Multi-Intent Detection with Blended Patterns
ss_authors: ['Yejin Yoon', 'Jungyeon Lee', 'Kangsan Kim', 'Chanhee Park', 'Taeuk Kim']
ss_year: 2024
ss_venue: International Conference on Language Resources and Evaluation
ss_citationCount: 4
ss_referenceCount: 21
ss_fieldsOfStudy: ['Computer Science']

arxiv_id: 2403.18338
title: mALBERT: Is a Compact Multilingual BERT Model Still Worth It?
authors: ['Christophe Servan', 'Sahar Ghannay', 'Sophie Rosset']
categories: ['cs.AI']
summary: Within the current trend of Pretrained Language Models (PLM), more and more criticisms emerge about the ethical and ecological impact of such models. In this article, considering these critical remarks, we propose to focus on smaller models, such as compact models like ALBERT, which are more ecologically virtuous than the...
published: 2024-03-27T08:25:28Z
comments: The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, May 2024, Torino, Italy
journal_ref / doi: null
ss_title: mALBERT: Is a Compact Multilingual BERT Model Still Worth It?
ss_authors: ['Christophe Servan', 'Sahar Ghannay', 'Sophie Rosset']
ss_year: 2024
ss_venue: International Conference on Language Resources and Evaluation
ss_citationCount: 1
ss_referenceCount: 35
ss_fieldsOfStudy: ['Computer Science']

2,403.18421
BioMedLM: A 2.7B Parameter Language Model Trained On Biomedical Text
['Elliot Bolton', 'Abhinav Venigalla', 'Michihiro Yasunaga', 'David Hall', 'Betty Xiong', 'Tony Lee', 'Roxana Daneshjou', 'Jonathan Frankle', 'Percy Liang', 'Michael Carbin', 'Christopher D. Manning']
['cs.CL', 'cs.AI']
Models such as GPT-4 and Med-PaLM 2 have demonstrated impressive performance on a wide variety of biomedical NLP tasks. However, these models have hundreds of billions of parameters, are computationally expensive to run, require users to send their input data over the internet, and are trained on unknown data sources. ...
2024-03-27T10:18:21Z
23 pages
null
null
BioMedLM: A 2.7B Parameter Language Model Trained On Biomedical Text
['Elliot Bolton', 'Abhinav Venigalla', 'Michihiro Yasunaga', 'David Hall', 'Betty Xiong', 'Tony Lee', 'R. Daneshjou', 'Jonathan Frankle', 'Percy Liang', 'Michael Carbin', 'Christopher D. Manning']
2024
arXiv.org
64
66
['Computer Science']
2403.18647
SDSAT: Accelerating LLM Inference through Speculative Decoding with Semantic Adaptive Tokens
['Chengbo Liu', 'Yong Zhu']
['cs.CL']
We propose an acceleration scheme for large language models (LLMs) through Speculative Decoding with Semantic Adaptive Tokens (SDSAT). The primary objective of this design is to enhance the LLM model's ability to generate draft tokens more accurately without compromising the model's accuracy. The core strategies involv...
2024-03-27T14:54:27Z
12 pages, 7 figures
null
null
SDSAT: Accelerating LLM Inference through Speculative Decoding with Semantic Adaptive Tokens
['Chengbo Liu', 'Yong Zhu']
2024
arXiv.org
0
20
['Computer Science']
2403.18769
Improved Neural Protoform Reconstruction via Reflex Prediction
['Liang Lu', 'Jingzhi Wang', 'David R. Mortensen']
['cs.CL']
Protolanguage reconstruction is central to historical linguistics. The comparative method, one of the most influential theoretical and methodological frameworks in the history of the language sciences, allows linguists to infer protoforms (reconstructed ancestral words) from their reflexes (related modern words) based ...
2024-03-27T17:13:38Z
Accepted to LREC-COLING 2024
null
null
null
null
null
null
null
null
null
2403.18814
Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models
['Yanwei Li', 'Yuechen Zhang', 'Chengyao Wang', 'Zhisheng Zhong', 'Yixin Chen', 'Ruihang Chu', 'Shaoteng Liu', 'Jiaya Jia']
['cs.CV', 'cs.AI', 'cs.CL']
In this work, we introduce Mini-Gemini, a simple and effective framework enhancing multi-modality Vision Language Models (VLMs). Despite the advancements in VLMs facilitating basic visual dialog and reasoning, a performance gap persists compared to advanced models like GPT-4 and Gemini. We try to narrow the gap by mini...
2024-03-27T17:59:04Z
Code and models are available at https://github.com/dvlab-research/MiniGemini
null
null
null
null
null
null
null
null
null
2403.19154
STaR-GATE: Teaching Language Models to Ask Clarifying Questions
['Chinmaya Andukuri', 'Jan-Philipp Fränken', 'Tobias Gerstenberg', 'Noah D. Goodman']
['cs.CL', 'cs.AI']
When prompting language models to complete a task, users often leave important aspects unsaid. While asking questions could resolve this ambiguity (GATE; Li et al., 2023), models often struggle to ask good questions. We explore a language model's ability to self-improve (STaR; Zelikman et al., 2022) by rewarding the mo...
2024-03-28T05:35:22Z
null
null
null
null
null
null
null
null
null
null
2403.19270
sDPO: Don't Use Your Data All at Once
['Dahyun Kim', 'Yungi Kim', 'Wonho Song', 'Hyeonwoo Kim', 'Yunsu Kim', 'Sanghoon Kim', 'Chanjun Park']
['cs.CL', 'cs.AI']
As development of large language models (LLM) progresses, aligning them with human preferences has become increasingly important. We propose stepwise DPO (sDPO), an extension of the recently popularized direct preference optimization (DPO) for alignment tuning. This approach involves dividing the available preference d...
2024-03-28T09:56:04Z
null
null
null
sDPO: Don't Use Your Data All at Once
['Dahyun Kim', 'Yungi Kim', 'Wonho Song', 'Hyeonwoo Kim', 'Yunsu Kim', 'Sanghoon Kim', 'Chanjun Park']
2024
arXiv.org
35
31
['Computer Science']
2403.19318
TableLLM: Enabling Tabular Data Manipulation by LLMs in Real Office Usage Scenarios
['Xiaokang Zhang', 'Sijia Luo', 'Bohan Zhang', 'Zeyao Ma', 'Jing Zhang', 'Yang Li', 'Guanlin Li', 'Zijun Yao', 'Kangli Xu', 'Jinchang Zhou', 'Daniel Zhang-Li', 'Jifan Yu', 'Shu Zhao', 'Juanzi Li', 'Jie Tang']
['cs.CL']
We introduce TableLLM, a robust large language model (LLM) with 8 billion parameters, purpose-built for proficiently handling tabular data manipulation tasks, whether they are embedded within documents or spreadsheets, catering to real-world office scenarios. We propose a distant supervision method for training, which ...
2024-03-28T11:21:12Z
https://tablellm.github.io/
null
null
TableLLM: Enabling Tabular Data Manipulation by LLMs in Real Office Usage Scenarios
['Xiaokang Zhang', 'Jing Zhang', 'Zeyao Ma', 'Yang Li', 'Bohan Zhang', 'Guanlin Li', 'Zijun Yao', 'Kangli Xu', 'Jinchang Zhou', 'Daniel Zhang-Li', 'Jifan Yu', 'Shu Zhao', 'Juan-Zi Li', 'Jie Tang']
2024
arXiv.org
37
53
['Computer Science']
2403.19522
Model Stock: All we need is just a few fine-tuned models
['Dong-Hwan Jang', 'Sangdoo Yun', 'Dongyoon Han']
['cs.LG', 'cs.CV']
This paper introduces an efficient fine-tuning method for large pre-trained models, offering strong in-distribution (ID) and out-of-distribution (OOD) performance. Breaking away from traditional practices that need a multitude of fine-tuned models for averaging, our approach employs significantly fewer models to achiev...
2024-03-28T15:57:20Z
Code at https://github.com/naver-ai/model-stock
null
null
Model Stock: All we need is just a few fine-tuned models
['Dong-Hwan Jang', 'Sangdoo Yun', 'Dongyoon Han']
2024
European Conference on Computer Vision
45
35
['Computer Science']
2403.19559
Improving Adversarial Data Collection by Supporting Annotators: Lessons from GAHD, a German Hate Speech Dataset
['Janis Goldzycher', 'Paul Röttger', 'Gerold Schneider']
['cs.CL']
Hate speech detection models are only as good as the data they are trained on. Datasets sourced from social media suffer from systematic gaps and biases, leading to unreliable models with simplistic decision boundaries. Adversarial datasets, collected by exploiting model weaknesses, promise to fix this problem. However...
2024-03-28T16:44:14Z
Accepted at NAACL 2024 (main conference)
null
null
null
null
null
null
null
null
null
2403.19588
DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs
['Donghyun Kim', 'Byeongho Heo', 'Dongyoon Han']
['cs.CV', 'cs.LG', 'cs.NE']
This paper revives Densely Connected Convolutional Networks (DenseNets) and reveals the underrated effectiveness over predominant ResNet-style architectures. We believe DenseNets' potential was overlooked due to untouched training methods and traditional design elements not fully revealing their capabilities. Our pilot...
2024-03-28T17:12:39Z
ECCV 2024. Code at https://github.com/naver-ai/rdnet
null
null
null
null
null
null
null
null
null
2403.19654
RSMamba: Remote Sensing Image Classification with State Space Model
['Keyan Chen', 'Bowen Chen', 'Chenyang Liu', 'Wenyuan Li', 'Zhengxia Zou', 'Zhenwei Shi']
['cs.CV']
Remote sensing image classification forms the foundation of various understanding tasks, serving a crucial function in remote sensing image interpretation. The recent advancements of Convolutional Neural Networks (CNNs) and Transformers have markedly enhanced classification accuracy. Nonetheless, remote sensing scene c...
2024-03-28T17:59:49Z
null
null
null
null
null
null
null
null
null
null
2403.19655
GaussianCube: A Structured and Explicit Radiance Representation for 3D Generative Modeling
['Bowen Zhang', 'Yiji Cheng', 'Jiaolong Yang', 'Chunyu Wang', 'Feng Zhao', 'Yansong Tang', 'Dong Chen', 'Baining Guo']
['cs.CV']
We introduce a radiance representation that is both structured and fully explicit and thus greatly facilitates 3D generative modeling. Existing radiance representations either require an implicit feature decoder, which significantly degrades the modeling power of the representation, or are spatially unstructured, makin...
2024-03-28T17:59:50Z
NIPS 2024 camera-ready version; Project page: https://gaussiancube.github.io/
null
null
null
null
null
null
null
null
null
2403.19887
Jamba: A Hybrid Transformer-Mamba Language Model
['Opher Lieber', 'Barak Lenz', 'Hofit Bata', 'Gal Cohen', 'Jhonathan Osin', 'Itay Dalmedigos', 'Erez Safahi', 'Shaked Meirom', 'Yonatan Belinkov', 'Shai Shalev-Shwartz', 'Omri Abend', 'Raz Alon', 'Tomer Asida', 'Amir Bergman', 'Roman Glozman', 'Michael Gokhman', 'Avashalom Manevich', 'Nir Ratner', 'Noam Rozen', 'Erez S...
['cs.CL', 'cs.LG']
We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while k...
2024-03-28T23:55:06Z
Webpage: https://www.ai21.com/jamba
null
null
Jamba: A Hybrid Transformer-Mamba Language Model
['Opher Lieber', 'Barak Lenz', 'Hofit Bata', 'Gal Cohen', 'Jhonathan Osin', 'Itay Dalmedigos', 'Erez Safahi', 'S. Meirom', 'Yonatan Belinkov', 'Shai Shalev-Shwartz', 'Omri Abend', 'Raz Alon', 'Tomer Asida', 'Amir Bergman', 'Roman Glozman', 'Michael Gokhman', 'Avshalom Manevich', 'Nir Ratner', 'N. Rozen', 'Erez Shwartz'...
2024
arXiv.org
229
56
['Computer Science']
2403.19924
SceneTracker: Long-term Scene Flow Estimation Network
['Bo Wang', 'Jian Li', 'Yang Yu', 'Li Liu', 'Zhenping Sun', 'Dewen Hu']
['cs.CV']
Considering that scene flow estimation has the capability of the spatial domain to focus but lacks the coherence of the temporal domain, this study proposes long-term scene flow estimation (LSFE), a comprehensive task that can simultaneously capture the fine-grained and long-term 3D motion in an online manner. We intro...
2024-03-29T02:22:54Z
null
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2025
10.1109/TPAMI.2025.3572489
null
null
null
null
null
null
null
2403.19967
Rewrite the Stars
['Xu Ma', 'Xiyang Dai', 'Yue Bai', 'Yizhou Wang', 'Yun Fu']
['cs.CV']
Recent studies have drawn attention to the untapped potential of the "star operation" (element-wise multiplication) in network design. While intuitive explanations abound, the foundational rationale behind its application remains largely unexplored. Our study attempts to reveal the star operation's ability to map input...
2024-03-29T04:10:07Z
Accepted by CVPR 2024. Codes are made publically available at https://github.com/ma-xu/Rewrite-the-Stars
null
null
null
null
null
null
null
null
null
2403.20180
Measuring Taiwanese Mandarin Language Understanding
['Po-Heng Chen', 'Sijia Cheng', 'Wei-Lin Chen', 'Yen-Ting Lin', 'Yun-Nung Chen']
['cs.CL']
The evaluation of large language models (LLMs) has drawn substantial attention in the field recently. This work focuses on evaluating LLMs in a Chinese context, specifically, for Traditional Chinese which has been largely underrepresented in existing benchmarks. We present TMLU, a holistic evaluation suite tailored for ...
2024-03-29T13:56:21Z
Preprint. Under review
null
null
null
null
null
null
null
null
null
2403.20208
Unleashing the Potential of Large Language Models for Predictive Tabular Tasks in Data Science
['Yazheng Yang', 'Yuqi Wang', 'Yaxuan Li', 'Sankalok Sen', 'Lei Li', 'Qi Liu']
['cs.LG', 'cs.AI']
In the domain of data science, the predictive tasks of classification, regression, and imputation of missing values are commonly encountered challenges associated with tabular data. This research endeavors to apply Large Language Models (LLMs) towards addressing these predictive tasks. Despite their proficiency in comp...
2024-03-29T14:41:21Z
10 pages
null
null
null
null
null
null
null
null
null
2403.20266
Latxa: An Open Language Model and Evaluation Suite for Basque
['Julen Etxaniz', 'Oscar Sainz', 'Naiara Perez', 'Itziar Aldabe', 'German Rigau', 'Eneko Agirre', 'Aitor Ormazabal', 'Mikel Artetxe', 'Aitor Soroa']
['cs.CL', 'cs.AI', 'cs.LG']
We introduce Latxa, a family of large language models for Basque ranging from 7 to 70 billion parameters. Latxa is based on Llama 2, which we continue pretraining on a new Basque corpus comprising 4.3M documents and 4.2B tokens. Addressing the scarcity of high-quality benchmarks for Basque, we further introduce 4 multi...
2024-03-29T16:16:48Z
ACL 2024
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14952--14972. 2024
null
null
null
null
null
null
null
null
2403.20271
Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want
['Weifeng Lin', 'Xinyu Wei', 'Ruichuan An', 'Peng Gao', 'Bocheng Zou', 'Yulin Luo', 'Siyuan Huang', 'Shanghang Zhang', 'Hongsheng Li']
['cs.CV']
In this paper, we present the Draw-and-Understand framework, exploring how to integrate visual prompting understanding capabilities into Multimodal Large Language Models (MLLMs). Visual prompts allow users to interact through multi-modal instructions, enhancing the models' interactivity and fine-grained image comprehen...
2024-03-29T16:26:20Z
30 pages, 8 figures, 15 tables
null
null
null
null
null
null
null
null
null
2403.20327
Gecko: Versatile Text Embeddings Distilled from Large Language Models
['Jinhyuk Lee', 'Zhuyun Dai', 'Xiaoqi Ren', 'Blair Chen', 'Daniel Cer', 'Jeremy R. Cole', 'Kai Hui', 'Michael Boratko', 'Rajvi Kapadia', 'Wen Ding', 'Yi Luan', 'Sai Meher Karthik Duddu', 'Gustavo Hernandez Abrego', 'Weiqiang Shi', 'Nithi Gupta', 'Aditya Kusupati', 'Prateek Jain', 'Siddhartha Reddy Jonnalagadda', 'Ming-...
['cs.CL', 'cs.AI']
We present Gecko, a compact and versatile text embedding model. Gecko achieves strong retrieval performance by leveraging a key idea: distilling knowledge from large language models (LLMs) into a retriever. Our two-step distillation process begins with generating diverse, synthetic paired data using an LLM. Next, we fu...
2024-03-29T17:56:40Z
18 pages
null
null
null
null
null
null
null
null
null
2404.00086
DVIS-DAQ: Improving Video Segmentation via Dynamic Anchor Queries
['Yikang Zhou', 'Tao Zhang', 'Shunping Ji', 'Shuicheng Yan', 'Xiangtai Li']
['cs.CV']
Modern video segmentation methods adopt object queries to perform inter-frame association and demonstrate satisfactory performance in tracking continuously appearing objects despite large-scale motion and transient occlusion. However, they all underperform on newly emerging and disappearing objects that are common in t...
2024-03-29T17:58:50Z
Accepted by ECCV-2024
null
null
null
null
null
null
null
null
null
2404.00376
Small Language Models Learn Enhanced Reasoning Skills from Medical Textbooks
['Hyunjae Kim', 'Hyeon Hwang', 'Jiwoo Lee', 'Sihyeon Park', 'Dain Kim', 'Taewhoo Lee', 'Chanwoong Yoon', 'Jiwoong Sohn', 'Donghee Choi', 'Jaewoo Kang']
['cs.CL']
While recent advancements in commercial large language models (LMs) have shown promising results in medical tasks, their closed-source nature poses significant privacy and security concerns, hindering their widespread use in the medical field. Despite efforts to create open-source models, their limited parameters often ...
2024-03-30T14:09:00Z
Added new LLaMA-3-based models and experiments on NEJM case challenges
null
null
null
null
null
null
null
null
null
2404.00474
Linguistic Calibration of Long-Form Generations
['Neil Band', 'Xuechen Li', 'Tengyu Ma', 'Tatsunori Hashimoto']
['cs.LG', 'cs.AI', 'cs.CL', 'stat.ML']
Language models (LMs) may lead their users to make suboptimal downstream decisions when they confidently hallucinate. This issue can be mitigated by having the LM verbally convey the probability that its claims are correct, but existing models cannot produce long-form text with calibrated confidence statements. Through...
2024-03-30T20:47:55Z
ICML 2024. Code available at https://github.com/tatsu-lab/linguistic_calibration
null
null
null
null
null
null
null
null
null
2404.00482
Cross-lingual Named Entity Corpus for Slavic Languages
['Jakub Piskorski', 'Michał Marcińczuk', 'Roman Yangarber']
['cs.CL', 'cs.AI', 'cs.LG']
This paper presents a corpus manually annotated with named entities for six Slavic languages - Bulgarian, Czech, Polish, Slovenian, Russian, and Ukrainian. This work is the result of a series of shared tasks, conducted in 2017-2023 as a part of the Workshops on Slavic Natural Language Processing. The corpus consists of...
2024-03-30T22:20:08Z
Published in LREC-COLING 2024 - The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation
null
null
Cross-lingual Named Entity Corpus for Slavic Languages
['Jakub Piskorski', "Michal Marci'nczuk", 'R. Yangarber']
2024
International Conference on Language Resources and Evaluation
0
57
['Computer Science']
2404.00495
Configurable Safety Tuning of Language Models with Synthetic Preference Data
['Victor Gallego']
['cs.CL', 'cs.AI']
State-of-the-art language model fine-tuning techniques, such as Direct Preference Optimization (DPO), restrict user control by hard-coding predefined behaviors into the model. To address this, we propose a novel method, Configurable Safety Tuning (CST), that augments DPO using synthetic preference data to facilitate fl...
2024-03-30T23:28:05Z
null
null
null
null
null
null
null
null
null
null
2404.00530
Comparing Bad Apples to Good Oranges: Aligning Large Language Models via Joint Preference Optimization
['Hritik Bansal', 'Ashima Suvarna', 'Gantavya Bhatt', 'Nanyun Peng', 'Kai-Wei Chang', 'Aditya Grover']
['cs.CL', 'cs.AI', 'cs.LG']
A common technique for aligning large language models (LLMs) relies on acquiring human preferences by comparing multiple generations conditioned on a fixed context. This method, however, relies solely on pairwise comparisons, where the generations are evaluated within an identical context. While effective to such condi...
2024-03-31T02:05:40Z
22 pages, 16 figures, 7 tables
null
null
Comparing Bad Apples to Good Oranges: Aligning Large Language Models via Joint Preference Optimization
['Hritik Bansal', 'Ashima Suvarna', 'Gantavya Bhatt', 'Nanyun Peng', 'Kai-Wei Chang', 'Aditya Grover']
2024
arXiv.org
11
61
['Computer Science']
2404.00578
M3D: Advancing 3D Medical Image Analysis with Multi-Modal Large Language Models
['Fan Bai', 'Yuxin Du', 'Tiejun Huang', 'Max Q. -H. Meng', 'Bo Zhao']
['cs.CV']
Medical image analysis is essential to clinical diagnosis and treatment, which is increasingly supported by multi-modal large language models (MLLMs). However, previous research has primarily focused on 2D medical images, leaving 3D images under-explored, despite their richer spatial information. This paper aims to adv...
2024-03-31T06:55:12Z
MLLM, 3D medical image analysis
null
null
null
null
null
null
null
null
null
2404.00604
Extensive Self-Contrast Enables Feedback-Free Language Model Alignment
['Xiao Liu', 'Xixuan Song', 'Yuxiao Dong', 'Jie Tang']
['cs.CL', 'cs.AI', 'cs.LG']
Reinforcement learning from human feedback (RLHF) has been a central technique for recent large language model (LLM) alignment. However, its heavy dependence on costly human or LLM-as-Judge preference feedback could stymie its wider applications. In this work, we introduce Self-Contrast, a feedback-free large language ...
2024-03-31T08:30:15Z
null
null
null
null
null
null
null
null
null
null
2404.00610
RQ-RAG: Learning to Refine Queries for Retrieval Augmented Generation
['Chi-Min Chan', 'Chunpu Xu', 'Ruibin Yuan', 'Hongyin Luo', 'Wei Xue', 'Yike Guo', 'Jie Fu']
['cs.CL']
Large Language Models (LLMs) exhibit remarkable capabilities but are prone to generating inaccurate or hallucinatory responses. This limitation stems from their reliance on vast pretraining datasets, making them susceptible to errors in unseen scenarios. To tackle these challenges, Retrieval-Augmented Generation (RAG) ...
2024-03-31T08:58:54Z
null
null
null
null
null
null
null
null
null
null
2404.00685
Scaling Properties of Speech Language Models
['Santiago Cuervo', 'Ricard Marxer']
['eess.AS', 'cs.AI', 'cs.CL', 'cs.NE']
Speech Language Models (SLMs) aim to learn language from raw audio, without textual resources. Despite significant advances, our current models exhibit weak syntax and semantic abilities. However, if the scaling properties of neural language models hold for the speech modality, these abilities will improve as the amoun...
2024-03-31T13:30:12Z
null
null
10.18653/v1/2024.emnlp-main.21
Scaling Properties of Speech Language Models
['Santiago Cuervo', 'R. Marxer']
2024
Conference on Empirical Methods in Natural Language Processing
11
28
['Computer Science', 'Engineering']
2404.00722
DRCT: Saving Image Super-resolution away from Information Bottleneck
['Chih-Chung Hsu', 'Chia-Ming Lee', 'Yi-Shiuan Chou']
['cs.CV', 'cs.AI']
In recent years, Vision Transformer-based approaches for low-level vision tasks have achieved widespread success. Unlike CNN-based models, Transformers are more adept at capturing long-range dependencies, enabling the reconstruction of images utilizing non-local information. In the domain of super-resolution, Swin-tran...
2024-03-31T15:34:45Z
Accepted by CVPRW2024, NTIRE Image Super-resolution (x4)
null
null
DRCT: Saving Image Super-Resolution away from Information Bottleneck
['Chih-Chung Hsu', 'Chia-Ming Lee', 'Yi-Shiuan Chou']
2024
2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
38
64
['Computer Science']
2404.00862
Bailong: Bilingual Transfer Learning based on QLoRA and Zip-tie Embedding
['Lung-Chuan Chen', 'Zong-Ru Li']
['cs.CL', 'cs.AI']
Large language models (LLMs) have demonstrated exceptional performance in various NLP applications. However, the majority of existing open-source LLMs are pre-trained primarily on English data, with only a small portion in other languages. This deficiency in multilingual training data results in suboptimal performance when applie...
2024-04-01T02:04:44Z
null
null
null
null
null
null
null
null
null
null
2404.00878
TryOn-Adapter: Efficient Fine-Grained Clothing Identity Adaptation for High-Fidelity Virtual Try-On
['Jiazheng Xing', 'Chao Xu', 'Yijie Qian', 'Yang Liu', 'Guang Dai', 'Baigui Sun', 'Yong Liu', 'Jingdong Wang']
['cs.CV']
Virtual try-on focuses on adjusting the given clothes to fit a specific person seamlessly while avoiding any distortion of the patterns and textures of the garment. However, the clothing identity uncontrollability and training inefficiency of existing diffusion-based methods, which struggle to maintain the identity eve...
2024-04-01T03:15:41Z
null
null
null
TryOn-Adapter: Efficient Fine-Grained Clothing Identity Adaptation for High-Fidelity Virtual Try-On
['Jiazheng Xing', 'Chao Xu', 'Yijie Qian', 'Yang Liu', 'Guang Dai', 'Baigui Sun', 'Yong Liu', 'Jingdong Wang']
2024
International Journal of Computer Vision
1
58
['Computer Science']
2404.00989
360+x: A Panoptic Multi-modal Scene Understanding Dataset
['Hao Chen', 'Yuqi Hou', 'Chenyuan Qu', 'Irene Testini', 'Xiaohan Hong', 'Jianbo Jiao']
['cs.CV', 'cs.AI', 'cs.MM', 'cs.SD', 'eess.AS']
Human perception of the world is shaped by a multitude of viewpoints and modalities. While many existing datasets focus on scene understanding from a certain perspective (e.g. egocentric or third-person views), our dataset offers a panoptic perspective (i.e. multiple viewpoints with multiple data modalities). Specifica...
2024-04-01T08:34:42Z
CVPR 2024 (Oral Presentation), Project page: https://x360dataset.github.io/
The IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR) 2024
null
null
null
null
null
null
null
null
2404.00995
PosterLlama: Bridging Design Ability of Language Model to Contents-Aware Layout Generation
['Jaejung Seol', 'Seojun Kim', 'Jaejun Yoo']
['cs.CV']
Visual layout plays a critical role in graphic design fields such as advertising, posters, and web UI design. The recent trend towards content-aware layout generation through generative models has shown promise, yet it often overlooks the semantic intricacies of layout design by treating it as a simple numerical optimi...
2024-04-01T08:46:35Z
ECCV 2024
null
null
null
null
null
null
null
null
null
2404.01089
Texture-Preserving Diffusion Models for High-Fidelity Virtual Try-On
['Xu Yang', 'Changxing Ding', 'Zhibin Hong', 'Junhao Huang', 'Jin Tao', 'Xiangmin Xu']
['cs.CV', 'cs.AI']
Image-based virtual try-on is an increasingly important task for online shopping. It aims to synthesize images of a specific person wearing a specified garment. Diffusion model-based approaches have recently become popular, as they are excellent at image synthesis tasks. However, these approaches usually employ additio...
2024-04-01T12:43:22Z
CVPR 2024
null
null
null
null
null
null
null
null
null
2404.01094
HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach
['Maxim Nikolaev', 'Mikhail Kuznetsov', 'Dmitry Vetrov', 'Aibek Alanov']
['cs.CV']
Our paper addresses the complex task of transferring a hairstyle from a reference image to an input photo for virtual hair try-on. This task is challenging due to the need to adapt to various photo poses, the sensitivity of hairstyles, and the lack of objective metrics. The current state of the art hairstyle transfer m...
2024-04-01T12:59:49Z
null
null
null
HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach
['Maxim Nikolaev', 'Mikhail Kuznetsov', 'Dmitry P. Vetrov', 'Aibek Alanov']
2024
Neural Information Processing Systems
6
44
['Computer Science']
2404.01133
CityGaussian: Real-time High-quality Large-Scale Scene Rendering with Gaussians
['Yang Liu', 'He Guan', 'Chuanchen Luo', 'Lue Fan', 'Naiyan Wang', 'Junran Peng', 'Zhaoxiang Zhang']
['cs.CV']
The advancement of real-time 3D scene reconstruction and novel view synthesis has been significantly propelled by 3D Gaussian Splatting (3DGS). However, effectively training large-scale 3DGS and rendering it in real-time across various scales remains challenging. This paper introduces CityGaussian (CityGS), which emplo...
2024-04-01T14:24:40Z
Accepted by ECCV2024; Project Page: https://dekuliutesla.github.io/citygs/
null
null
CityGaussian: Real-time High-quality Large-Scale Scene Rendering with Gaussians
['Yang Liu', 'He Guan', 'Chuanchen Luo', 'Lue Fan', 'Junran Peng', 'Zhaoxiang Zhang']
2024
European Conference on Computer Vision
90
50
['Computer Science']
2404.01197
Getting it Right: Improving Spatial Consistency in Text-to-Image Models
['Agneet Chatterjee', 'Gabriela Ben Melech Stan', 'Estelle Aflalo', 'Sayak Paul', 'Dhruba Ghosh', 'Tejas Gokhale', 'Ludwig Schmidt', 'Hannaneh Hajishirzi', 'Vasudev Lal', 'Chitta Baral', 'Yezhou Yang']
['cs.CV']
One of the key shortcomings in current text-to-image (T2I) models is their inability to consistently generate images which faithfully follow the spatial relationships specified in the text prompt. In this paper, we offer a comprehensive investigation of this limitation, while also developing datasets and methods that s...
2024-04-01T15:55:25Z
Accepted to ECCV 2024. Project Page : https://spright-t2i.github.io/
null
null
null
null
null
null
null
null
null
2404.01258
Direct Preference Optimization of Video Large Multimodal Models from Language Model Reward
['Ruohong Zhang', 'Liangke Gui', 'Zhiqing Sun', 'Yihao Feng', 'Keyang Xu', 'Yuanhan Zhang', 'Di Fu', 'Chunyuan Li', 'Alexander Hauptmann', 'Yonatan Bisk', 'Yiming Yang']
['cs.CV', 'cs.AI']
Preference modeling techniques, such as direct preference optimization (DPO), have proven effective in enhancing the generalization abilities of large language models (LLMs). However, in tasks involving video instruction-following, providing informative feedback, especially for detecting hallucinations in generated respons...
2024-04-01T17:28:16Z
null
null
null
null
null
null
null
null
null
null
2404.01291
Evaluating Text-to-Visual Generation with Image-to-Text Generation
['Zhiqiu Lin', 'Deepak Pathak', 'Baiqi Li', 'Jiayao Li', 'Xide Xia', 'Graham Neubig', 'Pengchuan Zhang', 'Deva Ramanan']
['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG', 'cs.MM']
Despite significant progress in generative AI, comprehensive evaluation remains challenging because of the lack of effective metrics and standardized benchmarks. For instance, the widely-used CLIPScore measures the alignment between a (generated) image and text prompt, but it fails to produce reliable scores for comple...
2024-04-01T17:58:06Z
We open-source our data, model, and code at: https://github.com/linzhiqiu/t2v_metrics ; Project page: https://linzhiqiu.github.io/papers/vqascore
null
null
Evaluating Text-to-Visual Generation with Image-to-Text Generation
['Zhiqiu Lin', 'Deepak Pathak', 'Baiqi Li', 'Jiayao Li', 'Xide Xia', 'Graham Neubig', 'Pengchuan Zhang', 'Deva Ramanan']
2024
European Conference on Computer Vision
171
89
['Computer Science']
2404.01292
Measuring Style Similarity in Diffusion Models
['Gowthami Somepalli', 'Anubhav Gupta', 'Kamal Gupta', 'Shramay Palta', 'Micah Goldblum', 'Jonas Geiping', 'Abhinav Shrivastava', 'Tom Goldstein']
['cs.CV', 'cs.LG']
Generative models are now widely used by graphic designers and artists. Prior works have shown that these models remember and often replicate content from their training data during generation. Hence as their proliferation increases, it has become important to perform a database search to determine whether the properti...
2024-04-01T17:58:30Z
null
null
null
Measuring Style Similarity in Diffusion Models
['Gowthami Somepalli', 'Anubhav Gupta', 'Kamal K. Gupta', 'Shramay Palta', 'Micah Goldblum', 'Jonas Geiping', 'Abhinav Shrivastava', 'Tom Goldstein']
2024
arXiv.org
42
67
['Computer Science']
2404.01294
CosmicMan: A Text-to-Image Foundation Model for Humans
['Shikai Li', 'Jianglin Fu', 'Kaiyuan Liu', 'Wentao Wang', 'Kwan-Yee Lin', 'Wayne Wu']
['cs.CV']
We present CosmicMan, a text-to-image foundation model specialized for generating high-fidelity human images. Unlike current general-purpose foundation models that are stuck in the dilemma of inferior quality and text-image misalignment for humans, CosmicMan enables generating photo-realistic human images with meticulo...
2024-04-01T17:59:05Z
Accepted by CVPR 2024. The supplementary material is included. Project Page: https://cosmicman-cvpr2024.github.io
null
null
null
null
null
null
null
null
null
2404.01300
NeRF-MAE: Masked AutoEncoders for Self-Supervised 3D Representation Learning for Neural Radiance Fields
['Muhammad Zubair Irshad', 'Sergey Zakharov', 'Vitor Guizilini', 'Adrien Gaidon', 'Zsolt Kira', 'Rares Ambrus']
['cs.CV', 'cs.AI', 'cs.LG']
Neural fields excel in computer vision and robotics due to their ability to understand the 3D visual world such as inferring semantics, geometry, and dynamics. Given the capabilities of neural fields in densely representing a 3D scene from 2D images, we ask the question: Can we scale their self-supervised pretraining, ...
2024-04-01T17:59:55Z
Accepted to ECCV 2024. Project Page: https://nerf-mae.github.io/
null
null
null
null
null
null
null
null
null
2404.01331
LLaVA-Gemma: Accelerating Multimodal Foundation Models with a Compact Language Model
['Musashi Hinck', 'Matthew L. Olson', 'David Cobbley', 'Shao-Yen Tseng', 'Vasudev Lal']
['cs.CL', 'cs.AI']
We train a suite of multimodal foundation models (MMFM) using the popular LLaVA framework with the recently released Gemma family of large language models (LLMs). Of particular interest is the 2B parameter Gemma model, which provides opportunities to construct capable small-scale MMFMs. In line with findings from other...
2024-03-29T21:32:50Z
CVPR 2024, MMFM workshop. Authors 1 and 2 contributed equally. Models available at https://huggingface.co/intel/llava-gemma-2b/ and https://huggingface.co/intel/llava-gemma-7b/ Training code at https://github.com/IntelLabs/multimodal_cognitive_ai/tree/main/LLaVA-Gemma
null
null
null
null
null
null
null
null
null
2404.01549
Octopus: On-device language model for function calling of software APIs
['Wei Chen', 'Zhiyuan Li', 'Mingyuan Ma']
['cs.CL', 'cs.SE']
In the rapidly evolving domain of artificial intelligence, Large Language Models (LLMs) play a crucial role due to their advanced text processing and generation abilities. This study introduces a new strategy aimed at harnessing on-device LLMs in invoking software APIs. We meticulously compile a dataset derived from so...
2024-04-02T01:29:28Z
null
null
null
Octopus: On-device language model for function calling of software APIs
['Wei Chen', 'Zhiyuan Li', 'Mingyuan Ma']
2024
North American Chapter of the Association for Computational Linguistics
16
69
['Computer Science']
2404.01647
EDTalk: Efficient Disentanglement for Emotional Talking Head Synthesis
['Shuai Tan', 'Bin Ji', 'Mengxiao Bi', 'Ye Pan']
['cs.CV']
Achieving disentangled control over multiple facial motions and accommodating diverse input modalities greatly enhances the application and entertainment of the talking head generation. This necessitates a deep exploration of the decoupling space for facial features, ensuring that they a) operate independently without ...
2024-04-02T05:32:39Z
22 pages, 15 figures
null
null
null
null
null
null
null
null
null
2404.01657
Release of Pre-Trained Models for the Japanese Language
['Kei Sawada', 'Tianyu Zhao', 'Makoto Shing', 'Kentaro Mitsui', 'Akio Kaga', 'Yukiya Hono', 'Toshiaki Wakatsuki', 'Koh Mitsuda']
['cs.CL', 'cs.AI', 'cs.CV', 'cs.LG', 'eess.AS']
AI democratization aims to create a world in which the average person can utilize AI techniques. To achieve this goal, numerous research institutes have attempted to make their results accessible to the public. In particular, large pre-trained models trained on large-scale data have shown unprecedented potential, and t...
2024-04-02T05:59:43Z
9 pages, 1 figure, 5 tables, accepted for LREC-COLING 2024. Models are publicly available at https://huggingface.co/rinna
null
null
null
null
null
null
null
null
null
2404.01744
Octopus v2: On-device language model for super agent
['Wei Chen', 'Zhiyuan Li']
['cs.CL']
Language models have shown effectiveness in a variety of software applications, particularly in tasks related to automatic workflow. These models possess the crucial ability to call functions, which is essential in creating AI agents. Despite the high performance of large-scale language models in cloud environments, th...
2024-04-02T09:01:32Z
null
null
null
null
null
null
null
null
null
null
2404.01856
Poro 34B and the Blessing of Multilinguality
['Risto Luukkonen', 'Jonathan Burdge', 'Elaine Zosa', 'Aarne Talman', 'Ville Komulainen', 'Väinö Hatanpää', 'Peter Sarlin', 'Sampo Pyysalo']
['cs.CL']
The pretraining of state-of-the-art large language models now requires trillions of words of text, which is orders of magnitude more than available for the vast majority of languages. While including text in more than one language is an obvious way to acquire more pretraining data, multilinguality is often seen as a cu...
2024-04-02T11:34:12Z
null
null
null
null
null
null
null
null
null
null
2404.01911
VLRM: Vision-Language Models act as Reward Models for Image Captioning
['Maksim Dzabraev', 'Alexander Kunitsyn', 'Andrei Ivaniuta']
['cs.CV']
In this work, we present an unsupervised method for enhancing an image captioning model (in our case, BLIP2) using reinforcement learning and vision-language models like CLIP and BLIP2-ITM as reward models. The RL-tuned model is able to generate longer and more comprehensive descriptions. Our model reaches impressive 0...
2024-04-02T12:57:22Z
null
null
null
null
null
null
null
null
null
null
2404.02060
Long-context LLMs Struggle with Long In-context Learning
['Tianle Li', 'Ge Zhang', 'Quy Duc Do', 'Xiang Yue', 'Wenhu Chen']
['cs.CL', 'cs.AI']
Large Language Models (LLMs) have made significant strides in handling long sequences. Some models like Gemini could even be capable of dealing with millions of tokens. However, their performance evaluation has largely been confined to metrics like perplexity and synthetic tasks, which may not fully capture their tr...
2024-04-02T15:59:11Z
null
null
null
null
null
null
null
null
null
null
2404.02078
Advancing LLM Reasoning Generalists with Preference Trees
['Lifan Yuan', 'Ganqu Cui', 'Hanbin Wang', 'Ning Ding', 'Xingyao Wang', 'Jia Deng', 'Boji Shan', 'Huimin Chen', 'Ruobing Xie', 'Yankai Lin', 'Zhenghao Liu', 'Bowen Zhou', 'Hao Peng', 'Zhiyuan Liu', 'Maosong Sun']
['cs.AI', 'cs.CL', 'cs.LG']
We introduce Eurus, a suite of large language models (LLMs) optimized for reasoning. Finetuned from Mistral-7B and CodeLlama-70B, Eurus models achieve state-of-the-art results among open-source models on a diverse set of benchmarks covering mathematics, code generation, and logical reasoning problems. Notably, Eurus-70...
2024-04-02T16:25:30Z
Models and data are available at https://github.com/OpenBMB/Eurus
null
null
null
null
null
null
null
null
null
2404.02132
ViTamin: Designing Scalable Vision Models in the Vision-Language Era
['Jieneng Chen', 'Qihang Yu', 'Xiaohui Shen', 'Alan Yuille', 'Liang-Chieh Chen']
['cs.CV']
Recent breakthroughs in vision-language models (VLMs) start a new page in the vision community. The VLMs provide stronger and more generalizable feature embeddings compared to those from ImageNet-pretrained models, thanks to the training on the large-scale Internet image-text pairs. However, despite the amazing achieve...
2024-04-02T17:40:29Z
CVPR 2024; https://github.com/Beckschen/ViTamin
null
null
ViTamin: Designing Scalable Vision Models in the Vision-Language Era
['Jieneng Chen', 'Qihang Yu', 'Xiaohui Shen', 'Alan L. Yuille', 'Liang-Chieh Chen']
2024
Computer Vision and Pattern Recognition
29
172
['Computer Science']
2404.02258
Mixture-of-Depths: Dynamically allocating compute in transformer-based language models
['David Raposo', 'Sam Ritter', 'Blake Richards', 'Timothy Lillicrap', 'Peter Conway Humphreys', 'Adam Santoro']
['cs.LG', 'cs.CL']
Transformer-based language models spread FLOPs uniformly across input sequences. In this work we demonstrate that transformers can instead learn to dynamically allocate FLOPs (or compute) to specific positions in a sequence, optimising the allocation along the sequence for different layers across the model depth. Our m...
2024-04-02T19:28:11Z
null
null
null
Mixture-of-Depths: Dynamically allocating compute in transformer-based language models
['David Raposo', 'Sam Ritter', 'Blake Richards', 'T. Lillicrap', 'Peter Humphreys', 'Adam Santoro']
2024
arXiv.org
89
19
['Computer Science']
2404.02406
Exploring Backdoor Vulnerabilities of Chat Models
['Yunzhuo Hao', 'Wenkai Yang', 'Yankai Lin']
['cs.CR', 'cs.AI', 'cs.CL']
Recent research has shown that Large Language Models (LLMs) are susceptible to a security threat known as Backdoor Attack. The backdoored model will behave well in normal cases but exhibit malicious behaviours on inputs inserted with a specific backdoor trigger. Current backdoor studies on LLMs predominantly focus o...
2024-04-03T02:16:53Z
Code and data are available at https://github.com/hychaochao/Chat-Models-Backdoor-Attacking
null
null
null
null
null
null
null
null
null
2404.02543
Unbiased Learning to Rank Meets Reality: Lessons from Baidu's Large-Scale Search Dataset
['Philipp Hager', 'Romain Deffayet', 'Jean-Michel Renders', 'Onno Zoeter', 'Maarten de Rijke']
['cs.IR', 'cs.AI']
Unbiased learning-to-rank (ULTR) is a well-established framework for learning from user clicks, which are often biased by the ranker collecting the data. While theoretically justified and extensively tested in simulation, ULTR techniques lack empirical validation, especially on modern search engines. The Baidu-ULTR dat...
2024-04-03T08:00:46Z
null
null
10.1145/3626772.3657892
null
null
null
null
null
null
null