arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2312.00567 | Explanatory Argument Extraction of Correct Answers in Resident Medical
Exams | ['Iakes Goenaga', 'Aitziber Atutxa', 'Koldo Gojenola', 'Maite Oronoz', 'Rodrigo Agerri'] | ['cs.CL'] | Developing the required technology to assist medical experts in their
everyday activities is currently a hot topic in the Artificial Intelligence
research field. Thus, a number of large language models (LLMs) and automated
benchmarks have recently been proposed with the aim of facilitating information
extraction in Evi... | 2023-12-01T13:22:35Z | null | null | null | Explanatory Argument Extraction of Correct Answers in Resident Medical Exams | ['Iakes Goenaga', 'Aitziber Atutxa', 'K. Gojenola', 'M. Oronoz', 'R. Agerri'] | 2023 | Artif. Intell. Medicine | 8 | 60 | ['Computer Science', 'Medicine'] |
2312.00738 | SeaLLMs -- Large Language Models for Southeast Asia | ['Xuan-Phi Nguyen', 'Wenxuan Zhang', 'Xin Li', 'Mahani Aljunied', 'Zhiqiang Hu', 'Chenhui Shen', 'Yew Ken Chia', 'Xingxuan Li', 'Jianyu Wang', 'Qingyu Tan', 'Liying Cheng', 'Guanzheng Chen', 'Yue Deng', 'Sen Yang', 'Chaoqun Liu', 'Hang Zhang', 'Lidong Bing'] | ['cs.CL'] | Despite the remarkable achievements of large language models (LLMs) in
various tasks, there remains a linguistic bias that favors high-resource
languages, such as English, often at the expense of low-resource and regional
languages. To address this imbalance, we introduce SeaLLMs, an innovative
series of language model... | 2023-12-01T17:17:56Z | Technical report, ACL 2024 DEMO TRACK | null | null | null | null | null | null | null | null | null |
2312.00752 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | ['Albert Gu', 'Tri Dao'] | ['cs.LG', 'cs.AI'] | Foundation models, now powering most of the exciting applications in deep
learning, are almost universally based on the Transformer architecture and its
core attention module. Many subquadratic-time architectures such as linear
attention, gated convolution and recurrent models, and structured state space
models (SSMs) ... | 2023-12-01T18:01:34Z | null | null | null | null | null | null | null | null | null | null |
2312.00784 | ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual
Prompts | ['Mu Cai', 'Haotian Liu', 'Dennis Park', 'Siva Karthik Mustikovela', 'Gregory P. Meyer', 'Yuning Chai', 'Yong Jae Lee'] | ['cs.CV', 'cs.AI', 'cs.CL', 'cs.LG'] | While existing large vision-language multimodal models focus on whole image
understanding, there is a prominent gap in achieving region-specific
comprehension. Current approaches that use textual coordinates or spatial
encodings often fail to provide a user-friendly interface for visual prompting.
To address this chall... | 2023-12-01T18:59:56Z | Accepted to CVPR2024. Project page: https://vip-llava.github.io/ | null | null | null | null | null | null | null | null | null |
2312.00785 | Sequential Modeling Enables Scalable Learning for Large Vision Models | ['Yutong Bai', 'Xinyang Geng', 'Karttikeya Mangalam', 'Amir Bar', 'Alan Yuille', 'Trevor Darrell', 'Jitendra Malik', 'Alexei A Efros'] | ['cs.CV'] | We introduce a novel sequential modeling approach which enables learning a
Large Vision Model (LVM) without making use of any linguistic data. To do this,
we define a common format, "visual sentences", in which we can represent raw
images and videos as well as annotated data sources such as semantic
segmentations and d... | 2023-12-01T18:59:57Z | Website: https://yutongbai.com/lvm.html | null | null | null | null | null | null | null | null | null |
2312.00849 | RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from
Fine-grained Correctional Human Feedback | ['Tianyu Yu', 'Yuan Yao', 'Haoye Zhang', 'Taiwen He', 'Yifeng Han', 'Ganqu Cui', 'Jinyi Hu', 'Zhiyuan Liu', 'Hai-Tao Zheng', 'Maosong Sun', 'Tat-Seng Chua'] | ['cs.CL', 'cs.CV'] | Multimodal Large Language Models (MLLMs) have recently demonstrated
impressive capabilities in multimodal understanding, reasoning, and
interaction. However, existing MLLMs prevalently suffer from serious
hallucination problems, generating text that is not factually grounded in
associated images. The problem makes exis... | 2023-12-01T11:36:08Z | Accepted by CVPR 2024 | null | null | RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-Grained Correctional Human Feedback | ['Tianyu Yu', 'Yuan Yao', 'Haoye Zhang', 'Taiwen He', 'Yifeng Han', 'Ganqu Cui', 'Jinyi Hu', 'Zhiyuan Liu', 'Hai-Tao Zheng', 'Maosong Sun', 'Tat-Seng Chua'] | 2023 | Computer Vision and Pattern Recognition | 231 | 64 | ['Computer Science'] |
2312.00858 | DeepCache: Accelerating Diffusion Models for Free | ['Xinyin Ma', 'Gongfan Fang', 'Xinchao Wang'] | ['cs.CV', 'cs.AI'] | Diffusion models have recently gained unprecedented attention in the field of
image synthesis due to their remarkable generative capabilities.
Notwithstanding their prowess, these models often incur substantial
computational costs, primarily attributed to the sequential denoising process
and cumbersome model size. Trad... | 2023-12-01T17:01:06Z | Work in progress. Project Page:
https://horseee.github.io/Diffusion_DeepCache/ | null | null | DeepCache: Accelerating Diffusion Models for Free | ['Xinyin Ma', 'Gongfan Fang', 'Xinchao Wang'] | 2023 | Computer Vision and Pattern Recognition | 152 | 72 | ['Computer Science'] |
2312.00863 | EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment
Anything | ['Yunyang Xiong', 'Bala Varadarajan', 'Lemeng Wu', 'Xiaoyu Xiang', 'Fanyi Xiao', 'Chenchen Zhu', 'Xiaoliang Dai', 'Dilin Wang', 'Fei Sun', 'Forrest Iandola', 'Raghuraman Krishnamoorthi', 'Vikas Chandra'] | ['cs.CV'] | Segment Anything Model (SAM) has emerged as a powerful tool for numerous
vision applications. A key component that drives the impressive performance for
zero-shot transfer and high versatility is a super large Transformer model
trained on the extensive high-quality SA-1B dataset. While beneficial, the huge
computation ... | 2023-12-01T18:31:00Z | null | null | null | null | null | null | null | null | null | null |
2312.00869 | Segment and Caption Anything | ['Xiaoke Huang', 'Jianfeng Wang', 'Yansong Tang', 'Zheng Zhang', 'Han Hu', 'Jiwen Lu', 'Lijuan Wang', 'Zicheng Liu'] | ['cs.CV'] | We propose a method to efficiently equip the Segment Anything Model (SAM)
with the ability to generate regional captions. SAM presents strong
generalizability to segment anything while is short for semantic understanding.
By introducing a lightweight query-based feature mixer, we align the
region-specific features with... | 2023-12-01T19:00:17Z | The project page, along with the associated code, can be accessed via
https://xk-huang.github.io/segment-caption-anything/; Update author
information; Accepted by CVPR 24 | null | null | null | null | null | null | null | null | null |
2312.00886 | Nash Learning from Human Feedback | ['Rémi Munos', 'Michal Valko', 'Daniele Calandriello', 'Mohammad Gheshlaghi Azar', 'Mark Rowland', 'Zhaohan Daniel Guo', 'Yunhao Tang', 'Matthieu Geist', 'Thomas Mesnard', 'Andrea Michi', 'Marco Selvi', 'Sertan Girgin', 'Nikola Momchev', 'Olivier Bachem', 'Daniel J. Mankowitz', 'Doina Precup', 'Bilal Piot'] | ['stat.ML', 'cs.AI', 'cs.GT', 'cs.LG', 'cs.MA'] | Reinforcement learning from human feedback (RLHF) has emerged as the main
paradigm for aligning large language models (LLMs) with human preferences.
Typically, RLHF involves the initial step of learning a reward model from human
feedback, often expressed as preferences between pairs of text generations
produced by a pr... | 2023-12-01T19:26:23Z | null | null | null | null | null | null | null | null | null | null |
2312.01037 | Eliciting Latent Knowledge from Quirky Language Models | ['Alex Mallen', 'Madeline Brumley', 'Julia Kharchenko', 'Nora Belrose'] | ['cs.LG', 'cs.AI', 'cs.CL'] | Eliciting Latent Knowledge (ELK) aims to find patterns in a capable neural
network's activations that robustly track the true state of the world,
especially in hard-to-verify cases where the model's output is untrusted. To
further ELK research, we introduce 12 datasets and a corresponding suite of
"quirky" language mod... | 2023-12-02T05:47:22Z | COLM 2024 | null | null | Eliciting Latent Knowledge from Quirky Language Models | ['Alex Troy Mallen', 'Nora Belrose'] | 2023 | arXiv.org | 33 | 45 | ['Computer Science'] |
2312.01314 | NLEBench+NorGLM: A Comprehensive Empirical Analysis and Benchmark
Dataset for Generative Language Models in Norwegian | ['Peng Liu', 'Lemei Zhang', 'Terje Farup', 'Even W. Lauvrak', 'Jon Espen Ingvaldsen', 'Simen Eide', 'Jon Atle Gulla', 'Zhirong Yang'] | ['cs.CL'] | Norwegian, spoken by only 5 million population, is under-representative
within the most impressive breakthroughs in NLP tasks. To the best of our
knowledge, there has not yet been a comprehensive evaluation of the existing
language models (LMs) on Norwegian generation tasks during the article writing
process. To fill t... | 2023-12-03T08:09:45Z | Accepted at EMNLP 2024 Main Conference. Code available at
https://github.com/Smartmedia-AI/NorGLM/ | null | null | null | null | null | null | null | null | null |
2312.01479 | OpenVoice: Versatile Instant Voice Cloning | ['Zengyi Qin', 'Wenliang Zhao', 'Xumin Yu', 'Xin Sun'] | ['cs.SD', 'cs.LG', 'eess.AS'] | We introduce OpenVoice, a versatile voice cloning approach that requires only
a short audio clip from the reference speaker to replicate their voice and
generate speech in multiple languages. OpenVoice represents a significant
advancement in addressing the following open challenges in the field: 1)
Flexible Voice Style... | 2023-12-03T18:41:54Z | Technical Report | null | null | null | null | null | null | null | null | null |
2312.01552 | The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context
Learning | ['Bill Yuchen Lin', 'Abhilasha Ravichander', 'Ximing Lu', 'Nouha Dziri', 'Melanie Sclar', 'Khyathi Chandu', 'Chandra Bhagavatula', 'Yejin Choi'] | ['cs.CL', 'cs.AI'] | The alignment tuning process of large language models (LLMs) typically
involves instruction learning through supervised fine-tuning (SFT) and
preference tuning via reinforcement learning from human feedback (RLHF). A
recent study, LIMA (Zhou et al. 2023), shows that using merely 1K examples for
SFT can achieve signific... | 2023-12-04T00:46:11Z | 26 pages, 8 figures. Project website:
https://allenai.github.io/re-align/ | null | null | The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning | ['Bill Yuchen Lin', 'Abhilasha Ravichander', 'Ximing Lu', 'Nouha Dziri', 'Melanie Sclar', 'Khyathi Raghavi Chandu', 'Chandra Bhagavatula', 'Yejin Choi'] | 2023 | arXiv.org | 199 | 51 | ['Computer Science'] |
2312.01678 | Jellyfish: A Large Language Model for Data Preprocessing | ['Haochen Zhang', 'Yuyang Dong', 'Chuan Xiao', 'Masafumi Oyamada'] | ['cs.AI', 'cs.CL', 'cs.DB', 'cs.LG'] | This paper explores the utilization of LLMs for data preprocessing (DP), a
crucial step in the data mining pipeline that transforms raw data into a clean
format conducive to easy processing. Whereas the use of LLMs has sparked
interest in devising universal solutions to DP, recent initiatives in this
domain typically r... | 2023-12-04T07:01:54Z | Accepted by EMNLP 2024, a.k.a. "Jellyfish: Instruction-Tuning Local
Large Language Models for Data Preprocessing'' | null | null | null | null | null | null | null | null | null |
2312.01725 | StableVITON: Learning Semantic Correspondence with Latent Diffusion
Model for Virtual Try-On | ['Jeongho Kim', 'Gyojung Gu', 'Minho Park', 'Sunghyun Park', 'Jaegul Choo'] | ['cs.CV'] | Given a clothing image and a person image, an image-based virtual try-on aims
to generate a customized image that appears natural and accurately reflects the
characteristics of the clothing image. In this work, we aim to expand the
applicability of the pre-trained diffusion model so that it can be utilized
independentl... | 2023-12-04T08:27:59Z | 17 pages | null | null | null | null | null | null | null | null | null |
2312.01943 | Instance-guided Cartoon Editing with a Large-scale Dataset | ['Jian Lin', 'Chengze Li', 'Xueting Liu', 'Zhongping Ge'] | ['cs.CV', 'cs.GR', 'I.4.6; I.3.3; I.3.8'] | Cartoon editing, appreciated by both professional illustrators and hobbyists,
allows extensive creative freedom and the development of original narratives
within the cartoon domain. However, the existing literature on cartoon editing
is complex and leans heavily on manual operations, owing to the challenge of
automatic... | 2023-12-04T15:00:15Z | Project page: https://cartoonsegmentation.github.io/ 10 pages, 10
figures | null | null | null | null | null | null | null | null | null |
2312.02051 | TimeChat: A Time-sensitive Multimodal Large Language Model for Long
Video Understanding | ['Shuhuai Ren', 'Linli Yao', 'Shicheng Li', 'Xu Sun', 'Lu Hou'] | ['cs.CV', 'cs.AI', 'cs.CL'] | This work proposes TimeChat, a time-sensitive multimodal large language model
specifically designed for long video understanding. Our model incorporates two
key architectural contributions: (1) a timestamp-aware frame encoder that binds
visual content with the timestamp of each frame, and (2) a sliding video
Q-Former t... | 2023-12-04T17:09:52Z | CVPR 2024 camera-ready version, code is available at
https://github.com/RenShuhuai-Andy/TimeChat | null | null | null | null | null | null | null | null | null |
2312.02120 | Magicoder: Empowering Code Generation with OSS-Instruct | ['Yuxiang Wei', 'Zhe Wang', 'Jiawei Liu', 'Yifeng Ding', 'Lingming Zhang'] | ['cs.CL', 'cs.AI', 'cs.SE'] | We introduce Magicoder, a series of fully open-source (code, weights, and
data) Large Language Models (LLMs) for code that significantly closes the gap
with top code models while having no more than 7B parameters. Magicoder models
are trained on 75K synthetic instruction data using OSS-Instruct, a novel
approach to enl... | 2023-12-04T18:50:35Z | Published at ICML 2024 | null | null | null | null | null | null | null | null | null |
2312.02142 | Object Recognition as Next Token Prediction | ['Kaiyu Yue', 'Bor-Chun Chen', 'Jonas Geiping', 'Hengduo Li', 'Tom Goldstein', 'Ser-Nam Lim'] | ['cs.CV'] | We present an approach to pose object recognition as next token prediction.
The idea is to apply a language decoder that auto-regressively predicts the
text tokens from image embeddings to form labels. To ground this prediction
process in auto-regression, we customize a non-causal attention mask for the
decoder, incorp... | 2023-12-04T18:58:40Z | CVPR 2024 | null | null | null | null | null | null | null | null | null |
2312.02145 | Repurposing Diffusion-Based Image Generators for Monocular Depth
Estimation | ['Bingxin Ke', 'Anton Obukhov', 'Shengyu Huang', 'Nando Metzger', 'Rodrigo Caye Daudt', 'Konrad Schindler'] | ['cs.CV'] | Monocular depth estimation is a fundamental computer vision task. Recovering
3D depth from a single image is geometrically ill-posed and requires scene
understanding, so it is not surprising that the rise of deep learning has led
to a breakthrough. The impressive progress of monocular depth estimators has
mirrored the ... | 2023-12-04T18:59:13Z | CVPR 2024 camera ready | null | null | null | null | null | null | null | null | null |
2312.02151 | Guarding Barlow Twins Against Overfitting with Mixed Samples | ['Wele Gedara Chaminda Bandara', 'Celso M. De Melo', 'Vishal M. Patel'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Self-supervised Learning (SSL) aims to learn transferable feature
representations for downstream applications without relying on labeled data.
The Barlow Twins algorithm, renowned for its widespread adoption and
straightforward implementation compared to its counterparts like contrastive
learning methods, minimizes fea... | 2023-12-04T18:59:36Z | Code and checkpoints are available at:
https://github.com/wgcban/mix-bt.git | null | null | null | null | null | null | null | null | null |
2312.02201 | ImageDream: Image-Prompt Multi-view Diffusion for 3D Generation | ['Peng Wang', 'Yichun Shi'] | ['cs.CV'] | We introduce "ImageDream," an innovative image-prompt, multi-view diffusion
model for 3D object generation. ImageDream stands out for its ability to
produce 3D models of higher quality compared to existing state-of-the-art,
image-conditioned methods. Our approach utilizes a canonical camera
coordination for the objects... | 2023-12-02T20:41:27Z | project page: https://Image-Dream.github.io | null | null | null | null | null | null | null | null | null |
2312.02406 | Efficient Online Data Mixing For Language Model Pre-Training | ['Alon Albalak', 'Liangming Pan', 'Colin Raffel', 'William Yang Wang'] | ['cs.CL', 'cs.LG'] | The data used to pretrain large language models has a decisive impact on a
model's downstream performance, which has led to a large body of work on data
selection methods that aim to automatically determine the most suitable data to
use for pretraining. Existing data selection methods suffer from slow and
computational... | 2023-12-05T00:42:35Z | null | null | null | null | null | null | null | null | null | null |
2312.02428 | FreestyleRet: Retrieving Images from Style-Diversified Queries | ['Hao Li', 'Curise Jia', 'Peng Jin', 'Zesen Cheng', 'Kehan Li', 'Jialu Sui', 'Chang Liu', 'Li Yuan'] | ['cs.CV', 'cs.IR'] | Image Retrieval aims to retrieve corresponding images based on a given query.
In application scenarios, users intend to express their retrieval intent
through various query styles. However, current retrieval tasks predominantly
focus on text-query retrieval exploration, leading to limited retrieval query
options and po... | 2023-12-05T02:07:31Z | 16 pages, 7 figures | null | null | FreestyleRet: Retrieving Images from Style-Diversified Queries | ['Hao Li', 'Curise Jia', 'Peng Jin', 'Zesen Cheng', 'Kehan Li', 'Jialu Sui', 'Chang Liu', 'Li-ming Yuan'] | 2023 | arXiv.org | 8 | 63 | ['Computer Science'] |
2312.02433 | Lenna: Language Enhanced Reasoning Detection Assistant | ['Fei Wei', 'Xinyu Zhang', 'Ailing Zhang', 'Bo Zhang', 'Xiangxiang Chu'] | ['cs.CV'] | With the fast-paced development of multimodal large language models (MLLMs),
we can now converse with AI systems in natural languages to understand images.
However, the reasoning power and world knowledge embedded in the large language
models have been much less investigated and exploited for image perception
tasks. In... | 2023-12-05T02:19:35Z | null | null | null | null | null | null | null | null | null | null |
2312.02436 | MUFFIN: Curating Multi-Faceted Instructions for Improving
Instruction-Following | ['Renze Lou', 'Kai Zhang', 'Jian Xie', 'Yuxuan Sun', 'Janice Ahn', 'Hanzi Xu', 'Yu Su', 'Wenpeng Yin'] | ['cs.CL', 'cs.AI'] | In the realm of large language models (LLMs), enhancing instruction-following
capability often involves curating expansive training data. This is achieved
through two primary schemes: i) Scaling-Inputs: Amplifying (input, output)
pairs per task instruction, aiming for better instruction adherence. ii)
Scaling Input-Fre... | 2023-12-05T02:32:08Z | ICLR 2024. Data, model, and code are available at:
https://renzelou.github.io/Muffin/ | null | null | MUFFIN: Curating Multi-Faceted Instructions for Improving Instruction-Following | ['Renze Lou', 'Kai Zhang', 'Jian Xie', 'Yuxuan Sun', 'Janice Ahn', 'Hanzi Xu', 'Yu Su', 'Wenpeng Yin'] | 2023 | International Conference on Learning Representations | 30 | 73 | ['Computer Science'] |
2312.02439 | Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language
Models with Creative Humor Generation | ['Shanshan Zhong', 'Zhongzhan Huang', 'Shanghua Gao', 'Wushao Wen', 'Liang Lin', 'Marinka Zitnik', 'Pan Zhou'] | ['cs.AI', 'cs.CL', 'cs.CV'] | Chain-of-Thought (CoT) guides large language models (LLMs) to reason
step-by-step, and can motivate their logical reasoning ability. While effective
for logical tasks, CoT is not conducive to creative problem-solving which often
requires out-of-box thoughts and is crucial for innovation advancements. In
this paper, we ... | 2023-12-05T02:41:57Z | Technical report | null | null | Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation | ['Shan Zhong', 'Zhongzhan Huang', 'Shanghua Gao', 'Wushao Wen', 'Liang Lin', 'M. Zitnik', 'Pan Zhou'] | 2023 | Computer Vision and Pattern Recognition | 40 | 102 | ['Computer Science'] |
2312.02598 | Impact of Tokenization on LLaMa Russian Adaptation | ['Mikhail Tikhomirov', 'Daniil Chernyshev'] | ['cs.CL', 'cs.AI'] | Latest instruction-tuned large language models (LLM) show great results on
various tasks, however, they often face performance degradation for non-English
input. There is evidence that the reason lies in inefficient tokenization
caused by low language representation in pre-training data which hinders the
comprehension ... | 2023-12-05T09:16:03Z | null | null | null | null | null | null | null | null | null | null |
2312.02625 | Diffusion Noise Feature: Accurate and Fast Generated Image Detection | ['Yichi Zhang', 'Xiaogang Xu'] | ['cs.CV'] | Generative models have reached an advanced stage where they can produce
remarkably realistic images. However, this remarkable generative capability
also introduces the risk of disseminating false or misleading information.
Notably, existing image detectors for generated images encounter challenges
such as low accuracy ... | 2023-12-05T10:01:11Z | null | null | null | null | null | null | null | null | null | null |
2312.02696 | Analyzing and Improving the Training Dynamics of Diffusion Models | ['Tero Karras', 'Miika Aittala', 'Jaakko Lehtinen', 'Janne Hellsten', 'Timo Aila', 'Samuli Laine'] | ['cs.CV', 'cs.AI', 'cs.LG', 'cs.NE', 'stat.ML'] | Diffusion models currently dominate the field of data-driven image synthesis
with their unparalleled scaling to large datasets. In this paper, we identify
and rectify several causes for uneven and ineffective training in the popular
ADM diffusion model architecture, without altering its high-level structure.
Observing ... | 2023-12-05T11:55:47Z | null | null | null | null | null | null | null | null | null | null |
2312.02724 | RankZephyr: Effective and Robust Zero-Shot Listwise Reranking is a
Breeze! | ['Ronak Pradeep', 'Sahel Sharifymoghaddam', 'Jimmy Lin'] | ['cs.IR'] | In information retrieval, proprietary large language models (LLMs) such as
GPT-4 and open-source counterparts such as LLaMA and Vicuna have played a vital
role in reranking. However, the gap between open-source and closed models
persists, with reliance on proprietary, non-transparent models constraining
reproducibility... | 2023-12-05T12:39:00Z | null | null | null | RankZephyr: Effective and Robust Zero-Shot Listwise Reranking is a Breeze! | ['Ronak Pradeep', 'Sahel Sharifymoghaddam', 'Jimmy J. Lin'] | 2023 | arXiv.org | 86 | 41 | ['Computer Science'] |
2312.02803 | Leveraging Domain Adaptation and Data Augmentation to Improve Qur'anic
IR in English and Arabic | ['Vera Pavlova'] | ['cs.CL', 'cs.AI'] | In this work, we approach the problem of Qur'anic information retrieval (IR)
in Arabic and English. Using the latest state-of-the-art methods in neural IR,
we research what helps to tackle this task more efficiently. Training retrieval
models requires a lot of data, which is difficult to obtain for training
in-domain. ... | 2023-12-05T14:44:08Z | null | null | null | Leveraging Domain Adaptation and Data Augmentation to Improve Qur’anic IR in English and Arabic | ['Vera Pavlova'] | 2023 | ARABICNLP | 2 | 57 | ['Computer Science'] |
2312.03511 | Kandinsky 3.0 Technical Report | ['Vladimir Arkhipkin', 'Andrei Filatov', 'Viacheslav Vasilev', 'Anastasia Maltseva', 'Said Azizov', 'Igor Pavlov', 'Julia Agafonova', 'Andrey Kuznetsov', 'Denis Dimitrov'] | ['cs.CV', 'cs.LG', 'cs.MM'] | We present Kandinsky 3.0, a large-scale text-to-image generation model based
on latent diffusion, continuing the series of text-to-image Kandinsky models
and reflecting our progress to achieve higher quality and realism of image
generation. In this report we describe the architecture of the model, the data
collection p... | 2023-12-06T14:13:38Z | Project page: https://ai-forever.github.io/Kandinsky-3 | null | null | Kandinsky 3.0 Technical Report | ['V.Ya. Arkhipkin', 'Andrei Filatov', 'Viacheslav Vasilev', 'Anastasia Maltseva', 'Said Azizov', 'Igor Pavlov', 'Julia Agafonova', 'Andrey Kuznetsov', 'Denis Dimitrov'] | 2023 | arXiv.org | 13 | 50 | ['Computer Science'] |
2312.03594 | A Task is Worth One Word: Learning with Task Prompts for High-Quality
Versatile Image Inpainting | ['Junhao Zhuang', 'Yanhong Zeng', 'Wenran Liu', 'Chun Yuan', 'Kai Chen'] | ['cs.CV'] | Advancing image inpainting is challenging as it requires filling
user-specified regions for various intents, such as background filling and
object synthesis. Existing approaches focus on either context-aware filling or
object synthesis using text descriptions. However, achieving both tasks
simultaneously is challenging... | 2023-12-06T16:34:46Z | Project page with code: https://powerpaint.github.io/ | null | null | null | null | null | null | null | null | null |
2312.03626 | TokenCompose: Text-to-Image Diffusion with Token-level Supervision | ['Zirui Wang', 'Zhizhou Sha', 'Zheng Ding', 'Yilin Wang', 'Zhuowen Tu'] | ['cs.CV'] | We present TokenCompose, a Latent Diffusion Model for text-to-image
generation that achieves enhanced consistency between user-specified text
prompts and model-generated images. Despite its tremendous success, the
standard denoising process in the Latent Diffusion Model takes text prompts as
conditions only, absent exp... | 2023-12-06T17:13:15Z | CVPR 2024, 21 pages, 17 figures | null | null | null | null | null | null | null | null | null |
2312.03631 | Mitigating Open-Vocabulary Caption Hallucinations | ['Assaf Ben-Kish', 'Moran Yanuka', 'Morris Alper', 'Raja Giryes', 'Hadar Averbuch-Elor'] | ['cs.CV', 'cs.AI'] | While recent years have seen rapid progress in image-conditioned text
generation, image captioning still suffers from the fundamental issue of
hallucinations, namely, the generation of spurious details that cannot be
inferred from the given image. Existing methods largely use closed-vocabulary
object lists to mitigate ... | 2023-12-06T17:28:03Z | Website Link: https://assafbk.github.io/mocha/ | null | 10.18653/v1/2024.findings-acl.657 | null | null | null | null | null | null | null |
2312.03641 | MotionCtrl: A Unified and Flexible Motion Controller for Video
Generation | ['Zhouxia Wang', 'Ziyang Yuan', 'Xintao Wang', 'Tianshui Chen', 'Menghan Xia', 'Ping Luo', 'Ying Shan'] | ['cs.CV', 'cs.AI', 'cs.LG', 'cs.MM'] | Motions in a video primarily consist of camera motion, induced by camera
movement, and object motion, resulting from object movement. Accurate control
of both camera and object motion is essential for video generation. However,
existing works either mainly focus on one type of motion or do not clearly
distinguish betwe... | 2023-12-06T17:49:57Z | SIGGRAPH 2024 Conference Proceedings | null | null | MotionCtrl: A Unified and Flexible Motion Controller for Video Generation | ['Zhouxia Wang', 'Ziyang Yuan', 'Xintao Wang', 'Yaowei Li', 'Tianshui Chen', 'Menghan Xia', 'Ping Luo', 'Ying Shan'] | 2023 | International Conference on Computer Graphics and Interactive Techniques | 230 | 45 | ['Computer Science'] |
2312.03668 | Integrating Pre-Trained Speech and Language Models for End-to-End Speech
Recognition | ['Yukiya Hono', 'Koh Mitsuda', 'Tianyu Zhao', 'Kentaro Mitsui', 'Toshiaki Wakatsuki', 'Kei Sawada'] | ['eess.AS', 'cs.AI', 'cs.CL', 'cs.LG'] | Advances in machine learning have made it possible to perform various text
and speech processing tasks, such as automatic speech recognition (ASR), in an
end-to-end (E2E) manner. E2E approaches utilizing pre-trained models are
gaining attention for conserving training data and resources. However, most of
their applicat... | 2023-12-06T18:34:42Z | 17 pages, 4 figures, 9 tables, accepted for Findings of ACL 2024. The
model is available at https://huggingface.co/rinna/nue-asr | null | null | null | null | null | null | null | null | null |
2312.03732 | A Rank Stabilization Scaling Factor for Fine-Tuning with LoRA | ['Damjan Kalajdzievski'] | ['cs.CL', 'cs.LG', 'I.2.7'] | As large language models (LLMs) have become increasingly compute and memory
intensive, parameter-efficient fine-tuning (PEFT) methods are now a common
strategy to fine-tune LLMs. A popular PEFT method is Low-Rank Adapters (LoRA),
which adds trainable low-rank "adapters" to selected layers. Each adapter
consists of a lo... | 2023-11-28T03:23:20Z | null | null | null | null | null | null | null | null | null | null |
2312.03849 | LEGO: Learning EGOcentric Action Frame Generation via Visual Instruction
Tuning | ['Bolin Lai', 'Xiaoliang Dai', 'Lawrence Chen', 'Guan Pang', 'James M. Rehg', 'Miao Liu'] | ['cs.CV'] | Generating instructional images of human daily actions from an egocentric
viewpoint serves as a key step towards efficient skill transfer. In this paper,
we introduce a novel problem -- egocentric action frame generation. The goal is
to synthesize an image depicting an action in the user's context (i.e., action
frame) ... | 2023-12-06T19:02:40Z | 34 pages | null | null | null | null | null | null | null | null | null |
2312.04005 | KOALA: Empirical Lessons Toward Memory-Efficient and Fast Diffusion
Models for Text-to-Image Synthesis | ['Youngwan Lee', 'Kwanyong Park', 'Yoorhim Cho', 'Yong-Ju Lee', 'Sung Ju Hwang'] | ['cs.CV', 'cs.AI'] | As text-to-image (T2I) synthesis models increase in size, they demand higher
inference costs due to the need for more expensive GPUs with larger memory,
which makes it challenging to reproduce these models in addition to the
restricted access to training datasets. Our study aims to reduce these
inference costs and expl... | 2023-12-07T02:46:18Z | NeurIPS 2024 | null | null | null | null | null | null | null | null | null |
2312.04433 | DreamVideo: Composing Your Dream Videos with Customized Subject and
Motion | ['Yujie Wei', 'Shiwei Zhang', 'Zhiwu Qing', 'Hangjie Yuan', 'Zhiheng Liu', 'Yu Liu', 'Yingya Zhang', 'Jingren Zhou', 'Hongming Shan'] | ['cs.CV'] | Customized generation using diffusion models has made impressive progress in
image generation, but remains unsatisfactory in the challenging video
generation task, as it requires the controllability of both subjects and
motions. To that end, we present DreamVideo, a novel approach to generating
personalized videos from... | 2023-12-07T16:57:26Z | null | null | null | Dream Video: Composing Your Dream Videos with Customized Subject and Motion | ['Yujie Wei', 'Shiwei Zhang', 'Zhiwu Qing', 'Hangjie Yuan', 'Zhiheng Liu', 'Yu Liu', 'Yingya Zhang', 'Jingren Zhou', 'Hongming Shan'] | 2023 | Computer Vision and Pattern Recognition | 98 | 96 | ['Computer Science'] |
2312.04461 | PhotoMaker: Customizing Realistic Human Photos via Stacked ID Embedding | ['Zhen Li', 'Mingdeng Cao', 'Xintao Wang', 'Zhongang Qi', 'Ming-Ming Cheng', 'Ying Shan'] | ['cs.CV', 'cs.AI', 'cs.LG', 'cs.MM'] | Recent advances in text-to-image generation have made remarkable progress in
synthesizing realistic human photos conditioned on given text prompts. However,
existing personalized generation methods cannot simultaneously satisfy the
requirements of high efficiency, promising identity (ID) fidelity, and flexible
text con... | 2023-12-07T17:32:29Z | Tech report; Project page: https://photo-maker.github.io/ | null | null | null | null | null | null | null | null | null |
2312.04469 | On the Learnability of Watermarks for Language Models | ['Chenchen Gu', 'Xiang Lisa Li', 'Percy Liang', 'Tatsunori Hashimoto'] | ['cs.LG', 'cs.CL', 'cs.CR'] | Watermarking of language model outputs enables statistical detection of
model-generated text, which can mitigate harms and misuses of language models.
Existing watermarking strategies operate by altering the decoder of an existing
language model. In this paper, we ask whether language models can directly
learn to gener... | 2023-12-07T17:41:44Z | Accepted at ICLR 2024 | null | null | On the Learnability of Watermarks for Language Models | ['Chenchen Gu', 'Xiang Lisa Li', 'Percy Liang', 'Tatsunori Hashimoto'] | 2023 | International Conference on Learning Representations | 43 | 62 | ['Computer Science'] |
2312.04483 | Hierarchical Spatio-temporal Decoupling for Text-to-Video Generation | ['Zhiwu Qing', 'Shiwei Zhang', 'Jiayu Wang', 'Xiang Wang', 'Yujie Wei', 'Yingya Zhang', 'Changxin Gao', 'Nong Sang'] | ['cs.CV'] | Despite diffusion models having shown powerful abilities to generate
photorealistic images, generating videos that are realistic and diverse still
remains in its infancy. One of the key reasons is that current methods
intertwine spatial content and temporal dynamics together, leading to a notably
increased complexity o... | 2023-12-07T17:59:07Z | Project page: https://higen-t2v.github.io/ | null | null | Hierarchical Spatio-temporal Decoupling for Text-to- Video Generation | ['Zhiwu Qing', 'Shiwei Zhang', 'Jiayu Wang', 'Xiang Wang', 'Yujie Wei', 'Yingya Zhang', 'Changxin Gao', 'Nong Sang'] | 2023 | Computer Vision and Pattern Recognition | 43 | 70 | ['Computer Science'] |
2312.04560 | NeRFiller: Completing Scenes via Generative 3D Inpainting | ['Ethan Weber', 'Aleksander Hołyński', 'Varun Jampani', 'Saurabh Saxena', 'Noah Snavely', 'Abhishek Kar', 'Angjoo Kanazawa'] | ['cs.CV', 'cs.AI', 'cs.GR'] | We propose NeRFiller, an approach that completes missing portions of a 3D
capture via generative 3D inpainting using off-the-shelf 2D visual generative
models. Often parts of a captured 3D scene or object are missing due to mesh
reconstruction failures or a lack of observations (e.g., contact regions, such
as the botto... | 2023-12-07T18:59:41Z | Project page: https://ethanweber.me/nerfiller | null | null | null | null | null | null | null | null | null |
2312.04563 | Visual Geometry Grounded Deep Structure From Motion | ['Jianyuan Wang', 'Nikita Karaev', 'Christian Rupprecht', 'David Novotny'] | ['cs.CV', 'cs.RO'] | Structure-from-motion (SfM) is a long-standing problem in the computer vision
community, which aims to reconstruct the camera poses and 3D structure of a
scene from a set of unconstrained 2D images. Classical frameworks solve this
problem in an incremental manner by detecting and matching keypoints,
registering images,... | 2023-12-07T18:59:52Z | 8 figures. Project page: https://vggsfm.github.io/ | null | null | null | null | null | null | null | null | null |
2312.04565 | MuRF: Multi-Baseline Radiance Fields | ['Haofei Xu', 'Anpei Chen', 'Yuedong Chen', 'Christos Sakaridis', 'Yulun Zhang', 'Marc Pollefeys', 'Andreas Geiger', 'Fisher Yu'] | ['cs.CV'] | We present Multi-Baseline Radiance Fields (MuRF), a general feed-forward
approach to solving sparse view synthesis under multiple different baseline
settings (small and large baselines, and different number of input views). To
render a target novel view, we discretize the 3D space into planes parallel to
the target ima... | 2023-12-07T18:59:56Z | CVPR 2024, Project Page: https://haofeixu.github.io/murf/, Code:
https://github.com/autonomousvision/murf | null | null | null | null | null | null | null | null | null |
2312.04651 | VOODOO 3D: Volumetric Portrait Disentanglement for One-Shot 3D Head
Reenactment | ['Phong Tran', 'Egor Zakharov', 'Long-Nhat Ho', 'Anh Tuan Tran', 'Liwen Hu', 'Hao Li'] | ['cs.CV'] | We present a 3D-aware one-shot head reenactment method based on a fully
volumetric neural disentanglement framework for source appearance and driver
expressions. Our method is real-time and produces high-fidelity and
view-consistent output, suitable for 3D teleconferencing systems based on
holographic displays. Existin... | 2023-12-07T19:19:57Z | null | null | VOODOO 3D: Volumetric Portrait Disentanglement for One-Shot 3D Head Reenactment | ['Phong Tran', 'Egor Zakharov', 'Long-Nhat Ho', 'A. Tran', 'Liwen Hu', 'Hao Li'] | 2023 | Computer Vision and Pattern Recognition | 14 | 107 | ['Computer Science'] |
2312.04724 | Purple Llama CyberSecEval: A Secure Coding Benchmark for Language Models | ['Manish Bhatt', 'Sahana Chennabasappa', 'Cyrus Nikolaidis', 'Shengye Wan', 'Ivan Evtimov', 'Dominik Gabi', 'Daniel Song', 'Faizan Ahmad', 'Cornelius Aschermann', 'Lorenzo Fontana', 'Sasha Frolov', 'Ravi Prakash Giri', 'Dhaval Kapil', 'Yiannis Kozyrakis', 'David LeBlanc', 'James Milazzo', 'Aleksandar Straumann', 'Gabri... | ['cs.CR', 'cs.LG'] | This paper presents CyberSecEval, a comprehensive benchmark developed to help
bolster the cybersecurity of Large Language Models (LLMs) employed as coding
assistants. As what we believe to be the most extensive unified cybersecurity
safety benchmark to date, CyberSecEval provides a thorough evaluation of LLMs
in two cr... | 2023-12-07T22:07:54Z | null | null | Purple Llama CyberSecEval: A Secure Coding Benchmark for Language Models | ['Manish Bhatt', 'Sa-hana Chennabasappa', 'Cyrus Nikolaidis', 'Shengye Wan', 'Ivan Evtimov', 'Dominik Gabi', 'Daniel Song', 'Faizan Ahmad', 'Cornelius Aschermann', 'Lorenzo Fontana', 'Sasha Frolov', 'Ravi Prakash Giri', 'Dhaval Kapil', 'Yiannis Kozyrakis', 'David LeBlanc', 'James Milazzo', 'Aleksandar Straumann', 'Gabr... | 2023 | arXiv.org | 83 | 19 | ['Computer Science'] |
2312.04746 | Quilt-LLaVA: Visual Instruction Tuning by Extracting Localized
Narratives from Open-Source Histopathology Videos | ['Mehmet Saygin Seyfioglu', 'Wisdom O. Ikezogwo', 'Fatemeh Ghezloo', 'Ranjay Krishna', 'Linda Shapiro'] | ['cs.CV', 'cs.AI', 'cs.CL'] | Diagnosis in histopathology requires a global whole slide images (WSIs)
analysis, requiring pathologists to compound evidence from different WSI
patches. The gigapixel scale of WSIs poses a challenge for histopathology
multi-modal models. Training multi-model models for histopathology requires
instruction tuning datase... | 2023-12-07T23:16:37Z | null | null | Quilt-LLaVA: Visual Instruction Tuning by Extracting Localized Narratives from Open-Source Histopathology Videos | ['M. S. Seyfioglu', 'W. Ikezogwo', 'Fatemeh Ghezloo', 'Ranjay Krishna', 'Linda G. Shapiro'] | 2023 | Computer Vision and Pattern Recognition | 44 | 35 | ['Computer Science'] |
2312.04831 | Towards Enhanced Image Inpainting: Mitigating Unwanted Object Insertion
and Preserving Color Consistency | ['Yikai Wang', 'Chenjie Cao', 'Junqiu Yu', 'Ke Fan', 'Xiangyang Xue', 'Yanwei Fu'] | ['cs.CV'] | Recent advances in image inpainting increasingly use generative models to
handle large irregular masks. However, these models can create unrealistic
inpainted images due to two main issues: (1) Unwanted object insertion: Even
with unmasked areas as context, generative models may still generate arbitrary
objects in the ... | 2023-12-08T05:08:06Z | CVPR 2025 Highlight. Project page:
https://yikai-wang.github.io/asuka/ where full-size PDF with more qualitative
results are available | null | null | null | null | null | null | null | null | null |
2312.05187 | Seamless: Multilingual Expressive and Streaming Speech Translation | ['Seamless Communication', 'Loïc Barrault', 'Yu-An Chung', 'Mariano Coria Meglioli', 'David Dale', 'Ning Dong', 'Mark Duppenthaler', 'Paul-Ambroise Duquenne', 'Brian Ellis', 'Hady Elsahar', 'Justin Haaheim', 'John Hoffman', 'Min-Jae Hwang', 'Hirofumi Inaguma', 'Christopher Klaiber', 'Ilia Kulikov', 'Pengwei Li', 'Danie... | ['cs.CL', 'cs.SD', 'eess.AS'] | Large-scale automatic speech translation systems today lack key features that
help machine-mediated communication feel seamless when compared to
human-to-human dialogue. In this work, we introduce a family of models that
enable end-to-end expressive and multilingual translations in a streaming
fashion. First, we contri... | 2023-12-08T17:18:42Z | null | null | null | Seamless: Multilingual Expressive and Streaming Speech Translation | ['Seamless Communication', 'Loïc Barrault', 'Yu-An Chung', 'Mariano Coria Meglioli', 'David Dale', 'Ning Dong', 'M. Duppenthaler', 'Paul-Ambroise Duquenne', 'Brian Ellis', 'Hady ElSahar', 'Justin Haaheim', 'John Hoffman', 'Min-Jae Hwang', 'H. Inaguma', 'Christopher Klaiber', 'Ilia Kulikov', 'Pengwei Li', 'Daniel Licht'... | 2023 | arXiv.org | 158 | 0 | ['Computer Science', 'Engineering'] |
2312.05215 | DeltaZip: Efficient Serving of Multiple Full-Model-Tuned LLMs | ['Xiaozhe Yao', 'Qinghao Hu', 'Ana Klimovic'] | ['cs.DC', 'cs.LG'] | Fine-tuning large language models (LLMs) greatly improves model quality for
downstream tasks. However, serving many fine-tuned LLMs concurrently is
challenging due to the sporadic, bursty, and varying request patterns of
different LLMs. To bridge this gap, we present DeltaZip, an LLM serving system
that efficiently ser... | 2023-12-08T18:07:05Z | EuroSys 2025' | null | null | null | null | null | null | null | null | null |
2312.05239 | SwiftBrush: One-Step Text-to-Image Diffusion Model with Variational
Score Distillation | ['Thuan Hoang Nguyen', 'Anh Tran'] | ['cs.CV'] | Despite their ability to generate high-resolution and diverse images from
text prompts, text-to-image diffusion models often suffer from slow iterative
sampling processes. Model distillation is one of the most effective directions
to accelerate these models. However, previous distillation methods fail to
retain the gen... | 2023-12-08T18:44:09Z | Accepted to CVPR 2024; Github:
https://github.com/VinAIResearch/SwiftBrush | null | null | null | null | null | null | null | null | null |
2312.05278 | Lyrics: Boosting Fine-grained Language-Vision Alignment and
Comprehension via Semantic-aware Visual Objects | ['Junyu Lu', 'Dixiang Zhang', 'Songxin Zhang', 'Zejian Xie', 'Zhuoyang Song', 'Cong Lin', 'Jiaxing Zhang', 'Bingyi Jing', 'Pingjian Zhang'] | ['cs.CL'] | Large Vision Language Models (LVLMs) have demonstrated impressive zero-shot
capabilities in various vision-language dialogue scenarios. However, the
absence of fine-grained visual object detection hinders the model from
understanding the details of images, leading to irreparable visual
hallucinations and factual errors... | 2023-12-08T09:02:45Z | null | null | null | null | null | null | null | null | null | null |
2312.05284 | SlimSAM: 0.1% Data Makes Segment Anything Slim | ['Zigeng Chen', 'Gongfan Fang', 'Xinyin Ma', 'Xinchao Wang'] | ['cs.CV'] | Current approaches for compressing the Segment Anything Model (SAM) yield
commendable results, yet necessitate extensive data to train a new network from
scratch. Employing conventional pruning techniques can remarkably reduce data
requirements but would suffer from a degradation in performance. To address
this challen... | 2023-12-08T12:48:53Z | Accepted by NeurIPS 2024 | null | null | null | null | null | null | null | null | null |
2312.05849 | InteractDiffusion: Interaction Control in Text-to-Image Diffusion Models | ['Jiun Tian Hoe', 'Xudong Jiang', 'Chee Seng Chan', 'Yap-Peng Tan', 'Weipeng Hu'] | ['cs.CV', 'cs.GR', 'cs.MM'] | Large-scale text-to-image (T2I) diffusion models have showcased incredible
capabilities in generating coherent images based on textual descriptions,
enabling vast applications in content generation. While recent advancements
have introduced control over factors such as object localization, posture, and
image contours, ... | 2023-12-10T10:35:16Z | Website: https://jiuntian.github.io/interactdiffusion. Accepted at
CVPR2024 | null | null | InteractDiffusion: Interaction Control in Text-to-Image Diffusion Models | ['Jiun Tian Hoe', 'Xudong Jiang', 'Chee Seng Chan', 'Yap-Peng Tan', 'Weipeng Hu'] | 2023 | Computer Vision and Pattern Recognition | 14 | 41 | ['Computer Science'] |
2312.06109 | Vary: Scaling up the Vision Vocabulary for Large Vision-Language Models | ['Haoran Wei', 'Lingyu Kong', 'Jinyue Chen', 'Liang Zhao', 'Zheng Ge', 'Jinrong Yang', 'Jianjian Sun', 'Chunrui Han', 'Xiangyu Zhang'] | ['cs.CV'] | Modern Large Vision-Language Models (LVLMs) enjoy the same vision vocabulary
-- CLIP, which can cover most common vision tasks. However, for some special
vision task that needs dense and fine-grained vision perception, e.g.,
document-level OCR or chart understanding, especially in non-English scenarios,
the CLIP-style ... | 2023-12-11T04:26:17Z | null | null | null | null | null | null | null | null | null | null |
2312.06281 | EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models | ['Samuel J. Paech'] | ['cs.CL', 'cs.AI', 'I.2.7'] | We introduce EQ-Bench, a novel benchmark designed to evaluate aspects of
emotional intelligence in Large Language Models (LLMs). We assess the ability
of LLMs to understand complex emotions and social interactions by asking them
to predict the intensity of emotional states of characters in a dialogue. The
benchmark is ... | 2023-12-11T10:35:32Z | null | null | null | null | null | null | null | null | null | null |
2312.06462 | Cooperation Does Matter: Exploring Multi-Order Bilateral Relations for
Audio-Visual Segmentation | ['Qi Yang', 'Xing Nie', 'Tong Li', 'Pengfei Gao', 'Ying Guo', 'Cheng Zhen', 'Pengfei Yan', 'Shiming Xiang'] | ['cs.CV', 'cs.AI', 'cs.SD', 'eess.AS'] | Recently, an audio-visual segmentation (AVS) task has been introduced, aiming
to group pixels with sounding objects within a given video. This task
necessitates a first-ever audio-driven pixel-level understanding of the scene,
posing significant challenges. In this paper, we propose an innovative
audio-visual transform... | 2023-12-11T15:51:38Z | CVPR 2024 Highlight. 13 pages, 10 figures | null | null | null | null | null | null | null | null | null |
2312.06505 | Grounded Question-Answering in Long Egocentric Videos | ['Shangzhe Di', 'Weidi Xie'] | ['cs.CV'] | Existing approaches to video understanding, mainly designed for short videos
from a third-person perspective, are limited in their applicability in certain
fields, such as robotics. In this paper, we delve into open-ended
question-answering (QA) in long, egocentric videos, which allows individuals or
robots to inquire ... | 2023-12-11T16:31:55Z | Accepted to CVPR 2024. Project website at https://dszdsz.cn/GroundVQA | null | null | null | null | null | null | null | null | null |
2312.06550 | LLM360: Towards Fully Transparent Open-Source LLMs | ['Zhengzhong Liu', 'Aurick Qiao', 'Willie Neiswanger', 'Hongyi Wang', 'Bowen Tan', 'Tianhua Tao', 'Junbo Li', 'Yuqi Wang', 'Suqi Sun', 'Omkar Pangarkar', 'Richard Fan', 'Yi Gu', 'Victor Miller', 'Yonghao Zhuang', 'Guowei He', 'Haonan Li', 'Fajri Koto', 'Liping Tang', 'Nikhil Ranjan', 'Zhiqiang Shen', 'Xuguang Ren', 'Ro... | ['cs.CL', 'cs.AI', 'cs.LG'] | The recent surge in open-source Large Language Models (LLMs), such as LLaMA,
Falcon, and Mistral, provides diverse options for AI practitioners and
researchers. However, most LLMs have only released partial artifacts, such as
the final model weights or inference code, and technical reports increasingly
limit their scop... | 2023-12-11T17:39:00Z | null | null | null | null | null | null | null | null | null | null |
2312.06575 | EasyVolcap: Accelerating Neural Volumetric Video Research | ['Zhen Xu', 'Tao Xie', 'Sida Peng', 'Haotong Lin', 'Qing Shuai', 'Zhiyuan Yu', 'Guangzhao He', 'Jiaming Sun', 'Hujun Bao', 'Xiaowei Zhou'] | ['cs.CV'] | Volumetric video is a technology that digitally records dynamic events such
as artistic performances, sporting events, and remote conversations. When
acquired, such volumography can be viewed from any viewpoint and timestamp on
flat screens, 3D displays, or VR headsets, enabling immersive viewing
experiences and more f... | 2023-12-11T17:59:46Z | SIGGRAPH Asia 2023 Technical Communications. Source code:
https://github.com/zju3dv/EasyVolcap | null | 10.1145/3610543.3626173 | EasyVolcap: Accelerating Neural Volumetric Video Research | ['Zhen Xu', 'Tao Xie', 'Sida Peng', 'Haotong Lin', 'Qing Shuai', 'Zhiyuan Yu', 'Guangzhao He', 'Jiaming Sun', 'Hujun Bao', 'Xiaowei Zhou'] | 2023 | SIGGRAPH Asia Technical Communications | 3 | 23 | ['Computer Science'] |
2312.06635 | Gated Linear Attention Transformers with Hardware-Efficient Training | ['Songlin Yang', 'Bailin Wang', 'Yikang Shen', 'Rameswar Panda', 'Yoon Kim'] | ['cs.LG', 'cs.CL'] | Transformers with linear attention allow for efficient parallel training but
can simultaneously be formulated as an RNN with 2D (matrix-valued) hidden
states, thus enjoying linear-time inference complexity. However, linear
attention generally underperforms ordinary softmax attention. Moreover, current
implementations o... | 2023-12-11T18:51:59Z | minor update | null | null | null | null | null | null | null | null | null |
2312.06647 | 4M: Massively Multimodal Masked Modeling | ['David Mizrahi', 'Roman Bachmann', 'Oğuzhan Fatih Kar', 'Teresa Yeo', 'Mingfei Gao', 'Afshin Dehghan', 'Amir Zamir'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Current machine learning models for vision are often highly specialized and
limited to a single modality and task. In contrast, recent large language
models exhibit a wide range of capabilities, hinting at a possibility for
similarly versatile models in computer vision. In this paper, we take a step in
this direction a... | 2023-12-11T18:57:35Z | NeurIPS 2023 Spotlight. Project page at https://4m.epfl.ch/ | null | null | 4M: Massively Multimodal Masked Modeling | ['David Mizrahi', 'Roman Bachmann', 'Ouguzhan Fatih Kar', 'Teresa Yeo', 'Mingfei Gao', 'Afshin Dehghan', 'Amir Zamir'] | 2023 | Neural Information Processing Systems | 75 | 133 | ['Computer Science'] |
2312.06648 | Dense X Retrieval: What Retrieval Granularity Should We Use? | ['Tong Chen', 'Hongwei Wang', 'Sihao Chen', 'Wenhao Yu', 'Kaixin Ma', 'Xinran Zhao', 'Hongming Zhang', 'Dong Yu'] | ['cs.CL', 'cs.AI', 'cs.IR'] | Dense retrieval has become a prominent method to obtain relevant context or
world knowledge in open-domain NLP tasks. When we use a learned dense retriever
on a retrieval corpus at inference time, an often-overlooked design choice is
the retrieval unit in which the corpus is indexed, e.g. document, passage, or
sentence... | 2023-12-11T18:57:35Z | null | null | null | null | null | null | null | null | null | null |
2312.06674 | Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations | ['Hakan Inan', 'Kartikeya Upasani', 'Jianfeng Chi', 'Rashi Rungta', 'Krithika Iyer', 'Yuning Mao', 'Michael Tontchev', 'Qing Hu', 'Brian Fuller', 'Davide Testuggine', 'Madian Khabsa'] | ['cs.CL', 'cs.AI'] | We introduce Llama Guard, an LLM-based input-output safeguard model geared
towards Human-AI conversation use cases. Our model incorporates a safety risk
taxonomy, a valuable tool for categorizing a specific set of safety risks found
in LLM prompts (i.e., prompt classification). This taxonomy is also
instrumental in cla... | 2023-12-07T19:40:50Z | null | null | null | null | null | null | null | null | null | null |
2312.06709 | AM-RADIO: Agglomerative Vision Foundation Model -- Reduce All Domains
Into One | ['Mike Ranzinger', 'Greg Heinrich', 'Jan Kautz', 'Pavlo Molchanov'] | ['cs.CV'] | A handful of visual foundation models (VFMs) have recently emerged as the
backbones for numerous downstream tasks. VFMs like CLIP, DINOv2, SAM are
trained with distinct objectives, exhibiting unique characteristics for various
downstream tasks. We find that despite their conceptual differences, these
models can be effe... | 2023-12-10T17:07:29Z | CVPR 2024 Version 3: CVPR Camera Ready, reconfigured full paper,
table 1 is now more comprehensive Version 2: Added more acknowledgements and
updated table 7 with more recent results. Ensured that the link in the
abstract to our code is working properly Version 3: Fix broken hyperlinks | Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR), 2024, pp. 12490-12500 | null | AM-RADIO: Agglomerative Vision Foundation Model Reduce All Domains Into One | ['Michael Ranzinger', 'Greg Heinrich', 'Jan Kautz', 'Pavlo Molchanov'] | 2023 | Computer Vision and Pattern Recognition | 50 | 66 | ['Computer Science'] |
2312.06725 | EpiDiff: Enhancing Multi-View Synthesis via Localized
Epipolar-Constrained Diffusion | ['Zehuan Huang', 'Hao Wen', 'Junting Dong', 'Yaohui Wang', 'Yangguang Li', 'Xinyuan Chen', 'Yan-Pei Cao', 'Ding Liang', 'Yu Qiao', 'Bo Dai', 'Lu Sheng'] | ['cs.CV'] | Generating multiview images from a single view facilitates the rapid
generation of a 3D mesh conditioned on a single image. Recent methods that
introduce 3D global representation into diffusion models have shown the
potential to generate consistent multiviews, but they have reduced generation
speed and face challenges ... | 2023-12-11T05:20:52Z | Project page: https://huanngzh.github.io/EpiDiff/ | null | null | EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion | ['Zehuan Huang', 'Hao Wen', 'Junting Dong', 'Yaohui Wang', 'Yangguang Li', 'Xinyuan Chen', 'Yan-Pei Cao', 'Ding Liang', 'Yu Qiao', 'Bo Dai', 'Lu Sheng'] | 2023 | Computer Vision and Pattern Recognition | 37 | 79 | ['Computer Science'] |
2312.06795 | Model Breadcrumbs: Scaling Multi-Task Model Merging with Sparse Masks | ['MohammadReza Davari', 'Eugene Belilovsky'] | ['cs.LG'] | The rapid development of AI systems has been greatly influenced by the
emergence of foundation models. A common approach for targeted problems
involves fine-tuning these pre-trained foundation models for specific target
tasks, resulting in a rapid spread of models fine-tuned across a diverse array
of tasks. This work f... | 2023-12-11T19:10:55Z | Published in ECCV 2024 | null | null | null | null | null | null | null | null | null |
2312.06886 | Relightful Harmonization: Lighting-aware Portrait Background Replacement | ['Mengwei Ren', 'Wei Xiong', 'Jae Shin Yoon', 'Zhixin Shu', 'Jianming Zhang', 'HyunJoon Jung', 'Guido Gerig', 'He Zhang'] | ['cs.CV'] | Portrait harmonization aims to composite a subject into a new background,
adjusting its lighting and color to ensure harmony with the background scene.
Existing harmonization techniques often only focus on adjusting the global
color and brightness of the foreground and ignore crucial illumination cues
from the backgrou... | 2023-12-11T23:20:31Z | CVPR 2024 camera ready | null | null | Relightful Harmonization: Lighting-Aware Portrait Background Replacement | ['Mengwei Ren', 'Wei Xiong', 'Jae Shin Yoon', 'Zhixin Shu', 'Jianming Zhang', 'Hyunjoon Jung', 'Guido Gerig', 'He Zhang'] | 2023 | Computer Vision and Pattern Recognition | 24 | 82 | ['Computer Science'] |
2312.06947 | MaTe3D: Mask-guided Text-based 3D-aware Portrait Editing | ['Kangneng Zhou', 'Daiheng Gao', 'Xuan Wang', 'Jie Zhang', 'Peng Zhang', 'Xusen Sun', 'Longhao Zhang', 'Shiqi Yang', 'Bang Zhang', 'Liefeng Bo', 'Yaxing Wang', 'Ming-Ming Cheng'] | ['cs.CV'] | 3D-aware portrait editing has a wide range of applications in multiple
fields. However, current approaches are limited due that they can only perform
mask-guided or text-based editing. Even by fusing the two procedures into a
model, the editing quality and stability cannot be ensured. To address this
limitation, we pro... | 2023-12-12T03:04:08Z | 16 pages, 13 figures | null | null | MaTe3D: Mask-guided Text-based 3D-aware Portrait Editing | ['Kangneng Zhou', 'Daiheng Gao', 'Xuan Wang', 'Jie Zhang', 'Peng Zhang', 'Xusen Sun', 'Longhao Zhang', 'Shiqi Yang', 'Bang Zhang', 'Liefeng Bo', 'Yaxing Wang'] | 2023 | arXiv.org | 4 | 67 | ['Computer Science'] |
2312.07374 | Relax Image-Specific Prompt Requirement in SAM: A Single Generic Prompt
for Segmenting Camouflaged Objects | ['Jian Hu', 'Jiayi Lin', 'Weitong Cai', 'Shaogang Gong'] | ['cs.CV'] | Camouflaged object detection (COD) approaches heavily rely on pixel-level
annotated datasets. Weakly-supervised COD (WSCOD) approaches use sparse
annotations like scribbles or points to reduce annotation effort, but this can
lead to decreased accuracy. The Segment Anything Model (SAM) shows remarkable
segmentation abil... | 2023-12-12T15:43:36Z | Accepted by AAAI2024 | null | null | null | null | null | null | null | null | null |
2312.07409 | DiffMorpher: Unleashing the Capability of Diffusion Models for Image
Morphing | ['Kaiwen Zhang', 'Yifan Zhou', 'Xudong Xu', 'Xingang Pan', 'Bo Dai'] | ['cs.CV'] | Diffusion models have achieved remarkable image generation quality surpassing
previous generative models. However, a notable limitation of diffusion models,
in comparison to GANs, is their difficulty in smoothly interpolating between
two image samples, due to their highly unstructured latent space. Such a smooth
interp... | 2023-12-12T16:28:08Z | null | null | null | null | null | null | null | null | null | null |
2312.07488 | LMDrive: Closed-Loop End-to-End Driving with Large Language Models | ['Hao Shao', 'Yuxuan Hu', 'Letian Wang', 'Steven L. Waslander', 'Yu Liu', 'Hongsheng Li'] | ['cs.CV', 'cs.AI', 'cs.RO'] | Despite significant recent progress in the field of autonomous driving,
modern methods still struggle and can incur serious accidents when encountering
long-tail unforeseen events and challenging urban scenarios. On the one hand,
large language models (LLM) have shown impressive reasoning capabilities that
approach "Ar... | 2023-12-12T18:24:15Z | project page: https://hao-shao.com/projects/lmdrive.html | null | null | LMDrive: Closed-Loop End-to-End Driving with Large Language Models | ['Hao Shao', 'Yuxuan Hu', 'Letian Wang', 'Steven L. Waslander', 'Yu Liu', 'Hongsheng Li'] | 2023 | Computer Vision and Pattern Recognition | 138 | 56 | ['Computer Science'] |
2312.07533 | VILA: On Pre-training for Visual Language Models | ['Ji Lin', 'Hongxu Yin', 'Wei Ping', 'Yao Lu', 'Pavlo Molchanov', 'Andrew Tao', 'Huizi Mao', 'Jan Kautz', 'Mohammad Shoeybi', 'Song Han'] | ['cs.CV'] | Visual language models (VLMs) rapidly progressed with the recent success of
large language models. There have been growing efforts on visual instruction
tuning to extend the LLM with visual inputs, but lacks an in-depth study of the
visual language pre-training process, where the model learns to perform joint
modeling ... | 2023-12-12T18:58:18Z | CVPR 2024 | null | null | null | null | null | null | null | null | null |
2312.07539 | HeadArtist: Text-conditioned 3D Head Generation with Self Score
Distillation | ['Hongyu Liu', 'Xuan Wang', 'Ziyu Wan', 'Yujun Shen', 'Yibing Song', 'Jing Liao', 'Qifeng Chen'] | ['cs.CV'] | This work presents HeadArtist for 3D head generation from text descriptions.
With a landmark-guided ControlNet serving as the generative prior, we come up
with an efficient pipeline that optimizes a parameterized 3D head model under
the supervision of the prior distillation itself. We call such a process self
score dis... | 2023-12-12T18:59:25Z | Amazing results are shown in
https://kumapowerliu.github.io/HeadArtist. Accepted by SIGGRAPH 2024 | null | null | null | null | null | null | null | null | null |
2312.07625 | Astrocyte-Enabled Advancements in Spiking Neural Networks for Large
Language Modeling | ['Guobin Shen', 'Dongcheng Zhao', 'Yiting Dong', 'Yang Li', 'Jindong Li', 'Kang Sun', 'Yi Zeng'] | ['cs.NE', 'cs.AI'] | Within the complex neuroarchitecture of the brain, astrocytes play crucial
roles in development, structure, and metabolism. These cells regulate neural
activity through tripartite synapses, directly impacting cognitive processes
such as learning and memory. Despite the growing recognition of astrocytes'
significance, t... | 2023-12-12T06:56:31Z | null | null | null | null | null | null | null | null | null | null |
2312.07931 | Levenshtein Distance Embedding with Poisson Regression for DNA Storage | ['Xiang Wei', 'Alan J. X. Guo', 'Sihan Sun', 'Mengyi Wei', 'Wei Yu'] | ['cs.LG', 'q-bio.QM'] | Efficient computation or approximation of Levenshtein distance, a widely-used
metric for evaluating sequence similarity, has attracted significant attention
with the emergence of DNA storage and other biological applications. Sequence
embedding, which maps Levenshtein distance to a conventional distance between
embeddi... | 2023-12-13T07:20:27Z | null | Proceedings of the AAAI Conference on Artificial Intelligence,
(2024) 38(14), 15796-15804 | 10.1609/aaai.v38i14.29509 | null | null | null | null | null | null | null |
2312.08617 | RTLCoder: Fully Open-Source and Efficient LLM-Assisted RTL Code
Generation Technique | ['Shang Liu', 'Wenji Fang', 'Yao Lu', 'Jing Wang', 'Qijun Zhang', 'Hongce Zhang', 'Zhiyao Xie'] | ['cs.PL', 'cs.AR'] | The automatic generation of RTL code (e.g., Verilog) using natural language
instructions and large language models (LLMs) has attracted significant
research interest recently. However, most existing approaches heavily rely on
commercial LLMs such as ChatGPT, while open-source LLMs tailored for this
specific design gene... | 2023-12-14T02:42:15Z | Accepted by IEEE Transactions on Computer-Aided Design of Integrated
Circuits and Systems | null | null | null | null | null | null | null | null | null |
2312.08914 | CogAgent: A Visual Language Model for GUI Agents | ['Wenyi Hong', 'Weihan Wang', 'Qingsong Lv', 'Jiazheng Xu', 'Wenmeng Yu', 'Junhui Ji', 'Yan Wang', 'Zihan Wang', 'Yuxuan Zhang', 'Juanzi Li', 'Bin Xu', 'Yuxiao Dong', 'Ming Ding', 'Jie Tang'] | ['cs.CV'] | People are spending an enormous amount of time on digital devices through
graphical user interfaces (GUIs), e.g., computer or smartphone screens. Large
language models (LLMs) such as ChatGPT can assist people in tasks like writing
emails, but struggle to understand and interact with GUIs, thus limiting their
potential ... | 2023-12-14T13:20:57Z | CVPR 2024 (Highlight), 27 pages, 19 figures | null | null | CogAgent: A Visual Language Model for GUI Agents | ['Wenyi Hong', 'Weihan Wang', 'Qingsong Lv', 'Jiazheng Xu', 'Wenmeng Yu', 'Junhui Ji', 'Yan Wang', 'Zihan Wang', 'Yuxiao Dong', 'Ming Ding', 'Jie Tang'] | 2023 | Computer Vision and Pattern Recognition | 383 | 42 | ['Computer Science'] |
2312.08935 | Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human
Annotations | ['Peiyi Wang', 'Lei Li', 'Zhihong Shao', 'R. X. Xu', 'Damai Dai', 'Yifei Li', 'Deli Chen', 'Y. Wu', 'Zhifang Sui'] | ['cs.AI', 'cs.CL', 'cs.LG'] | In this paper, we present an innovative process-oriented math process reward
model called \textbf{Math-Shepherd}, which assigns a reward score to each step
of math problem solutions. The training of Math-Shepherd is achieved using
automatically constructed process-wise supervision data, breaking the
bottleneck of heavy... | 2023-12-14T13:41:54Z | Add Step-by-Step reinforcement learning results | null | null | Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations | ['Peiyi Wang', 'Lei Li', 'Zhihong Shao', 'R. Xu', 'Damai Dai', 'Yifei Li', 'Deli Chen', 'Y.Wu', 'Zhifang Sui'] | 2023 | Annual Meeting of the Association for Computational Linguistics | 398 | 49 | ['Computer Science'] |
2312.09031 | iComMa: Inverting 3D Gaussian Splatting for Camera Pose Estimation via
Comparing and Matching | ['Yuan Sun', 'Xuan Wang', 'Yunfan Zhang', 'Jie Zhang', 'Caigui Jiang', 'Yu Guo', 'Fei Wang'] | ['cs.CV'] | We present a method named iComMa to address the 6D camera pose estimation
problem in computer vision. Conventional pose estimation methods typically rely
on the target's CAD model or necessitate specific network training tailored to
particular object classes. Some existing methods have achieved promising
results in mes... | 2023-12-14T15:31:33Z | null | null | null | iComMa: Inverting 3D Gaussian Splatting for Camera Pose Estimation via Comparing and Matching | ['Yuanliang Sun', 'Xuan Wang', 'Yunfan Zhang', 'Jie Zhang', 'Caigui Jiang', 'Yu Guo', 'Fei Wang'] | 2023 | null | 7 | 42 | ['Computer Science']
2312.09109 | VideoLCM: Video Latent Consistency Model | ['Xiang Wang', 'Shiwei Zhang', 'Han Zhang', 'Yu Liu', 'Yingya Zhang', 'Changxin Gao', 'Nong Sang'] | ['cs.CV', 'cs.AI'] | Consistency models have demonstrated powerful capability in efficient image
generation and allowed synthesis within a few sampling steps, alleviating the
high computational cost in diffusion models. However, the consistency model in
the more challenging and resource-consuming video generation is still less
explored. In... | 2023-12-14T16:45:36Z | null | null | null | VideoLCM: Video Latent Consistency Model | ['Xiang Wang', 'Shiwei Zhang', 'Han Zhang', 'Yu Liu', 'Yingya Zhang', 'Changxin Gao', 'Nong Sang'] | 2023 | arXiv.org | 51 | 62 | ['Computer Science']
2312.09128 | Tokenize Anything via Prompting | ['Ting Pan', 'Lulu Tang', 'Xinlong Wang', 'Shiguang Shan'] | ['cs.CV'] | We present a unified, promptable model capable of simultaneously segmenting,
recognizing, and captioning anything. Unlike SAM, we aim to build a versatile
region representation in the wild via visual prompting. To achieve this, we
train a generalizable model with massive segmentation masks, \eg, SA-1B masks,
and semant... | 2023-12-14T17:01:02Z | code, model, and demo:
https://github.com/baaivision/tokenize-anything | null | null | null | null | null | null | null | null | null |
2312.09147 | Triplane Meets Gaussian Splatting: Fast and Generalizable Single-View 3D
Reconstruction with Transformers | ['Zi-Xin Zou', 'Zhipeng Yu', 'Yuan-Chen Guo', 'Yangguang Li', 'Ding Liang', 'Yan-Pei Cao', 'Song-Hai Zhang'] | ['cs.CV'] | Recent advancements in 3D reconstruction from single images have been driven
by the evolution of generative models. Prominent among these are methods based
on Score Distillation Sampling (SDS) and the adaptation of diffusion models in
the 3D domain. Despite their progress, these techniques often face limitations
due to... | 2023-12-14T17:18:34Z | Project Page: https://zouzx.github.io/TriplaneGaussian/ | null | null | null | null | null | null | null | null | null |
2312.09168 | DiffusionLight: Light Probes for Free by Painting a Chrome Ball | ['Pakkapon Phongthawee', 'Worameth Chinchuthakun', 'Nontaphat Sinsunthithet', 'Amit Raj', 'Varun Jampani', 'Pramook Khungurn', 'Supasorn Suwajanakorn'] | ['cs.CV', 'cs.GR', 'cs.LG', 'I.3.3; I.4.8'] | We present a simple yet effective technique to estimate lighting in a single
input image. Current techniques rely heavily on HDR panorama datasets to train
neural networks to regress an input with limited field-of-view to a full
environment map. However, these approaches often struggle with real-world,
uncontrolled set... | 2023-12-14T17:34:53Z | CVPR 2024 Oral. For more information and code, please visit our
website https://diffusionlight.github.io/ | null | null | null | null | null | null | null | null | null |
2312.09508 | IndicIRSuite: Multilingual Dataset and Neural Information Models for
Indian Languages | ['Saiful Haq', 'Ashutosh Sharma', 'Pushpak Bhattacharyya'] | ['cs.IR', 'cs.CL'] | In this paper, we introduce Neural Information Retrieval resources for 11
widely spoken Indian Languages (Assamese, Bengali, Gujarati, Hindi, Kannada,
Malayalam, Marathi, Oriya, Punjabi, Tamil, and Telugu) from two major Indian
language families (Indo-Aryan and Dravidian). These resources include (a)
INDIC-MARCO, a mul... | 2023-12-15T03:19:53Z | null | null | null | null | null | null | null | null | null | null |
2312.09897 | A Novel Dataset for Financial Education Text Simplification in Spanish | ['Nelson Perez-Rojas', 'Saul Calderon-Ramirez', 'Martin Solis-Salazar', 'Mario Romero-Sandoval', 'Monica Arias-Monge', 'Horacio Saggion'] | ['cs.AI'] | Text simplification, crucial in natural language processing, aims to make
texts more comprehensible, particularly for specific groups like visually
impaired Spanish speakers, a less-represented language in this field. In
Spanish, there are few datasets that can be used to create text simplification
systems. Our researc... | 2023-12-15T15:47:08Z | null | null | null | A Novel Dataset for Financial Education Text Simplification in Spanish | ['Nelson Pérez-Rojas', 'Saúl Calderón Ramírez', 'Martín Solís-Salazar', 'Mario Romero-Sandoval', 'Monica Arias-Monge', 'Horacio Saggion'] | 2023 | arXiv.org | 0 | 48 | ['Computer Science']
2312.09993 | LLaMAntino: LLaMA 2 Models for Effective Text Generation in Italian
Language | ['Pierpaolo Basile', 'Elio Musacchio', 'Marco Polignano', 'Lucia Siciliani', 'Giuseppe Fiameni', 'Giovanni Semeraro'] | ['cs.CL'] | Large Language Models represent state-of-the-art linguistic models designed
to equip computers with the ability to comprehend natural language. With its
exceptional capacity to capture complex contextual relationships, the LLaMA
(Large Language Model Meta AI) family represents a novel advancement in the
field of natura... | 2023-12-15T18:06:22Z | null | null | null | null | null | null | null | null | null | null |
2312.10160 | Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in
Chart Captioning | ['Kung-Hsiang Huang', 'Mingyang Zhou', 'Hou Pong Chan', 'Yi R. Fung', 'Zhenhailong Wang', 'Lingyu Zhang', 'Shih-Fu Chang', 'Heng Ji'] | ['cs.CL'] | Recent advancements in large vision-language models (LVLMs) have led to
significant progress in generating natural language descriptions for visual
content and thus enhancing various applications. One issue with these powerful
models is that they sometimes produce texts that are factually inconsistent
with the visual i... | 2023-12-15T19:16:21Z | ACL 2024 Findings | null | null | Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning | ['Kung-Hsiang Huang', 'Mingyang Zhou', 'Hou Pong Chan', 'Y. Fung', 'Zhenhailong Wang', 'Lingyu Zhang', 'Shih-Fu Chang', 'Heng Ji'] | 2023 | Annual Meeting of the Association for Computational Linguistics | 38 | 57 | ['Computer Science']
2312.10171 | Pipeline and Dataset Generation for Automated Fact-checking in Almost
Any Language | ['Jan Drchal', 'Herbert Ullrich', 'Tomáš Mlynář', 'Václav Moravec'] | ['cs.CL', 'I.2.7; I.5.4'] | This article presents a pipeline for automated fact-checking leveraging
publicly available Language Models and data. The objective is to assess the
accuracy of textual claims using evidence from a ground-truth evidence corpus.
The pipeline consists of two main modules -- the evidence retrieval and the
claim veracity ev... | 2023-12-15T19:43:41Z | submitted to NCAA journal for review | null | 10.1007/s00521-024-10113-5 | Pipeline and Dataset Generation for Automated Fact-checking in Almost Any Language | ['Jan Drchal', 'Herbert Ullrich', 'Tomás Mlynár', 'Václav Moravec'] | 2023 | Neural computing & applications (Print) | 3 | 79 | ['Computer Science']
2312.10300 | Shot2Story: A New Benchmark for Comprehensive Understanding of
Multi-shot Videos | ['Mingfei Han', 'Linjie Yang', 'Xiaojun Chang', 'Lina Yao', 'Heng Wang'] | ['cs.CV'] | A short clip of video may contain progression of multiple events and an
interesting story line. A human need to capture both the event in every shot
and associate them together to understand the story behind it. In this work, we
present a new multi-shot video understanding benchmark Shot2Story with detailed
shot-level ... | 2023-12-16T03:17:30Z | ICLR 2025. Extended annotation with 43K multi-shot videos in total.
https://mingfei.info/shot2story for updates and more information | null | null | null | null | null | null | null | null | null |
2312.10307 | MusER: Musical Element-Based Regularization for Generating Symbolic
Music with Emotion | ['Shulei Ji', 'Xinyu Yang'] | ['cs.SD', 'cs.AI', 'cs.MM', 'eess.AS'] | Generating music with emotion is an important task in automatic music
generation, in which emotion is evoked through a variety of musical elements
(such as pitch and duration) that change over time and collaborate with each
other. However, prior research on deep learning-based emotional music
generation has rarely expl... | 2023-12-16T03:50:13Z | Accepted by AAAI 2024 | null | null | MusER: Musical Element-Based Regularization for Generating Symbolic Music with Emotion | ['Shulei Ji', 'Xinyu Yang'] | 2023 | AAAI Conference on Artificial Intelligence | 3 | 29 | ['Computer Science', 'Engineering']
2312.10656 | VidToMe: Video Token Merging for Zero-Shot Video Editing | ['Xirui Li', 'Chao Ma', 'Xiaokang Yang', 'Ming-Hsuan Yang'] | ['cs.CV'] | Diffusion models have made significant advances in generating high-quality
images, but their application to video generation has remained challenging due
to the complexity of temporal motion. Zero-shot video editing offers a solution
by utilizing pre-trained image diffusion models to translate source videos into
new on... | 2023-12-17T09:05:56Z | Project page: https://vidtome-diffusion.github.io | null | null | null | null | null | null | null | null | null |
2312.10665 | Silkie: Preference Distillation for Large Visual Language Models | ['Lei Li', 'Zhihui Xie', 'Mukai Li', 'Shunian Chen', 'Peiyi Wang', 'Liang Chen', 'Yazheng Yang', 'Benyou Wang', 'Lingpeng Kong'] | ['cs.CV', 'cs.CL'] | This paper explores preference distillation for large vision language models
(LVLMs), improving their ability to generate helpful and faithful responses
anchoring the visual context. We first build a vision-language feedback
(VLFeedback) dataset utilizing AI annotation. Specifically, responses are
generated by models s... | 2023-12-17T09:44:27Z | Project page: https://vlf-silkie.github.io | null | null | Silkie: Preference Distillation for Large Visual Language Models | ['Lei Li', 'Zhihui Xie', 'Mukai Li', 'Shunian Chen', 'Peiyi Wang', 'Liang Chen', 'Yazheng Yang', 'Benyou Wang', 'Lingpeng Kong'] | 2023 | arXiv.org | 80 | 42 | ['Computer Science']
2312.10700 | Cross-Domain Robustness of Transformer-based Keyphrase Generation | ['Anna Glazkova', 'Dmitry Morozov'] | ['cs.CL', 'cs.AI', 'cs.LG', '68T50', 'I.2.7; I.7.m; H.3.3'] | Modern models for text generation show state-of-the-art results in many
natural language processing tasks. In this work, we explore the effectiveness
of abstractive text summarization models for keyphrase selection. A list of
keyphrases is an important element of a text in databases and repositories of
electronic docum... | 2023-12-17T12:27:15Z | Presented at the XXV International Conference "Data Analytics and
Management in Data Intensive Domains" (DAMDID/RCDL), October 2023 | Communications in Computer and Information Science, vol 2086, pp.
249--265 | 10.1007/978-3-031-67826-4_19 | null | null | null | null | null | null | null |
2312.10741 | StyleSinger: Style Transfer for Out-of-Domain Singing Voice Synthesis | ['Yu Zhang', 'Rongjie Huang', 'Ruiqi Li', 'JinZheng He', 'Yan Xia', 'Feiyang Chen', 'Xinyu Duan', 'Baoxing Huai', 'Zhou Zhao'] | ['eess.AS', 'cs.CL', 'cs.SD'] | Style transfer for out-of-domain (OOD) singing voice synthesis (SVS) focuses
on generating high-quality singing voices with unseen styles (such as timbre,
emotion, pronunciation, and articulation skills) derived from reference singing
voice samples. However, the endeavor to model the intricate nuances of singing
voice ... | 2023-12-17T15:26:16Z | Accepted by AAAI 2024 | Proceedings of the AAAI Conference on Artificial Intelligence,
38(17), 19597-19605. (2024) | 10.1609/aaai.v38i17.29932 | StyleSinger: Style Transfer for Out-of-Domain Singing Voice Synthesis | ['Yu Zhang', 'Rongjie Huang', 'Ruiqi Li', 'Jinzheng He', 'Yan Xia', 'Feiyang Chen', 'Xinyu Duan', 'Baoxing Huai', 'Zhou Zhao'] | 2023 | AAAI Conference on Artificial Intelligence | 19 | 37 | ['Computer Science', 'Engineering']