| arxiv_id | title | authors | categories | summary | published | comments | journal_ref | doi | ss_title | ss_authors | ss_year | ss_venue | ss_citationCount | ss_referenceCount | ss_fieldsOfStudy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2401.03048 | Latte: Latent Diffusion Transformer for Video Generation | ['Xin Ma', 'Yaohui Wang', 'Xinyuan Chen', 'Gengyun Jia', 'Ziwei Liu', 'Yuan-Fang Li', 'Cunjian Chen', 'Yu Qiao'] | ['cs.CV'] | We propose Latte, a novel Latent Diffusion Transformer for video generation. Latte first extracts spatio-temporal tokens from input videos and then adopts a series of Transformer blocks to model video distribution in the latent space. In order to model a substantial number of tokens extracted from videos, four efficien... | 2024-01-05T19:55:15Z | Accepted by Transactions on Machine Learning Research 2025; Project Page: https://maxin-cn.github.io/latte_project | null | null | null | null | null | null | null | null | null |
| 2401.03065 | CRUXEval: A Benchmark for Code Reasoning, Understanding and Execution | ['Alex Gu', 'Baptiste Rozière', 'Hugh Leather', 'Armando Solar-Lezama', 'Gabriel Synnaeve', 'Sida I. Wang'] | ['cs.SE', 'cs.AI', 'cs.LG'] | We present CRUXEval (Code Reasoning, Understanding, and eXecution Evaluation), a benchmark consisting of 800 Python functions (3-13 lines). Each function comes with an input-output pair, leading to two natural tasks: input prediction and output prediction. First, we propose a generic recipe for generating our execution... | 2024-01-05T20:53:51Z | 71 pages, 29 figures | null | null | null | null | null | null | null | null | null |
| 2401.03078 | StreamVC: Real-Time Low-Latency Voice Conversion | ['Yang Yang', 'Yury Kartynnik', 'Yunpeng Li', 'Jiuqiang Tang', 'Xing Li', 'George Sung', 'Matthias Grundmann'] | ['eess.AS', 'cs.LG', 'cs.SD'] | We present StreamVC, a streaming voice conversion solution that preserves the content and prosody of any source speech while matching the voice timbre from any target speech. Unlike previous approaches, StreamVC produces the resulting waveform at low latency from the input signal even on a mobile platform, making it ap... | 2024-01-05T22:37:26Z | Accepted to ICASSP 2024 | null | null | null | null | null | null | null | null | null |
| 2401.03407 | Bilateral Reference for High-Resolution Dichotomous Image Segmentation | ['Peng Zheng', 'Dehong Gao', 'Deng-Ping Fan', 'Li Liu', 'Jorma Laaksonen', 'Wanli Ouyang', 'Nicu Sebe'] | ['cs.CV'] | We introduce a novel bilateral reference framework (BiRefNet) for high-resolution dichotomous image segmentation (DIS). It comprises two essential components: the localization module (LM) and the reconstruction module (RM) with our proposed bilateral reference (BiRef). The LM aids in object localization using global se... | 2024-01-07T07:56:47Z | Version 6, the final version of the journal with a fixed institute | null | null | null | null | null | null | null | null | null |
| 2401.03462 | Long Context Compression with Activation Beacon | ['Peitian Zhang', 'Zheng Liu', 'Shitao Xiao', 'Ninglu Shao', 'Qiwei Ye', 'Zhicheng Dou'] | ['cs.CL', 'cs.AI'] | Long context compression is a critical research problem due to its significance in reducing the high computational and memory costs associated with LLMs. In this paper, we propose Activation Beacon, a plug-in module for transformer-based LLMs that targets effective, efficient, and flexible compression of long contexts.... | 2024-01-07T11:57:40Z | Newer version of Activation Beacon | null | null | Long Context Compression with Activation Beacon | ['Peitian Zhang', 'Zheng Liu', 'Shitao Xiao', 'Ninglu Shao', 'Qiwei Ye', 'Zhicheng Dou'] | 2024 | International Conference on Learning Representations | 34 | 38 | ['Computer Science'] |
| 2401.03497 | EAT: Self-Supervised Pre-Training with Efficient Audio Transformer | ['Wenxi Chen', 'Yuzhe Liang', 'Ziyang Ma', 'Zhisheng Zheng', 'Xie Chen'] | ['eess.AS', 'cs.AI', 'cs.CL', 'cs.LG', 'cs.SD'] | Audio self-supervised learning (SSL) pre-training, which aims to learn good representations from unlabeled audio, has made remarkable progress. However, the extensive computational demands during pre-training pose a significant barrier to the potential application and optimization of audio SSL models. In this paper, in... | 2024-01-07T14:31:27Z | null | null | null | EAT: Self-Supervised Pre-Training with Efficient Audio Transformer | ['Wenxi Chen', 'Yuzhe Liang', 'Ziyang Ma', 'Zhisheng Zheng', 'Xie Chen'] | 2024 | International Joint Conference on Artificial Intelligence | 22 | 51 | ['Computer Science', 'Engineering'] |
| 2401.03506 | DiarizationLM: Speaker Diarization Post-Processing with Large Language Models | ['Quan Wang', 'Yiling Huang', 'Guanlong Zhao', 'Evan Clark', 'Wei Xia', 'Hank Liao'] | ['eess.AS', 'cs.LG', 'cs.SD'] | In this paper, we introduce DiarizationLM, a framework to leverage large language models (LLM) to post-process the outputs from a speaker diarization system. Various goals can be achieved with the proposed framework, such as improving the readability of the diarized transcript, or reducing the word diarization error ra... | 2024-01-07T14:54:57Z | null | Proc. Interspeech 2024, 3754-3758 (2024) | 10.21437/Interspeech.2024-209 | null | null | null | null | null | null | null |
| 2401.03590 | Building Efficient and Effective OpenQA Systems for Low-Resource Languages | ['Emrah Budur', 'Rıza Özçelik', 'Dilara Soylu', 'Omar Khattab', 'Tunga Güngör', 'Christopher Potts'] | ['cs.CL'] | Question answering (QA) is the task of answering questions posed in natural language with free-form natural language answers extracted from a given passage. In the OpenQA variant, only a question text is given, and the system must retrieve relevant passages from an unstructured knowledge source and use them to provide ... | 2024-01-07T22:11:36Z | null | Knowledge-Based Systems, Vol. 302, p. 112243, 2024 | 10.1016/j.knosys.2024.112243 | Building Efficient and Effective OpenQA Systems for Low-Resource Languages | ['Emrah Budur', 'Riza Ozccelik', 'Dilara Soylu', 'O. Khattab', 'T. Gungor', 'Christopher Potts'] | 2024 | Knowledge-Based Systems | 3 | 108 | ['Computer Science'] |
| 2401.03804 | TeleChat Technical Report | ['Zhongjiang He', 'Zihan Wang', 'Xinzhang Liu', 'Shixuan Liu', 'Yitong Yao', 'Yuyao Huang', 'Xuelong Li', 'Yongxiang Li', 'Zhonghao Che', 'Zhaoxi Zhang', 'Yan Wang', 'Xin Wang', 'Luwen Pu', 'Huinan Xu', 'Ruiyu Fang', 'Yu Zhao', 'Jie Zhang', 'Xiaomeng Huang', 'Zhilong Lu', 'Jiaxin Peng', 'Wenjun Zheng', 'Shiquan Wang', ... | ['cs.CL', 'cs.AI', 'I.2.7'] | In this technical report, we present TeleChat, a collection of large language models (LLMs) with parameters of 3 billion, 7 billion and 12 billion. It includes pretrained language models as well as fine-tuned chat models that is aligned with human preferences. TeleChat is initially pretrained on an extensive corpus con... | 2024-01-08T10:43:19Z | 28 pages, 2 figures | null | null | null | null | null | null | null | null | null |
| 2401.03833 | T-FREX: A Transformer-based Feature Extraction Method from Mobile App Reviews | ['Quim Motger', 'Alessio Miaschi', "Felice Dell'Orletta", 'Xavier Franch', 'Jordi Marco'] | ['cs.SE'] | Mobile app reviews are a large-scale data source for software-related knowledge generation activities, including software maintenance, evolution and feedback analysis. Effective extraction of features (i.e., functionalities or characteristics) from these reviews is key to support analysis on the acceptance of these fea... | 2024-01-08T11:43:03Z | Accepted at IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER 2024). 12 pages (including references), 5 figures, 4 tables | null | 10.1109/SANER60148.2024.00030 | null | null | null | null | null | null | null |
| 2401.03955 | Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series | ['Vijay Ekambaram', 'Arindam Jati', 'Pankaj Dayama', 'Sumanta Mukherjee', 'Nam H. Nguyen', 'Wesley M. Gifford', 'Chandra Reddy', 'Jayant Kalagnanam'] | ['cs.LG', 'cs.AI'] | Large pre-trained models excel in zero/few-shot learning for language and vision tasks but face challenges in multivariate time series (TS) forecasting due to diverse data characteristics. Consequently, recent research efforts have focused on developing pre-trained TS forecasting models. These models, whether built fro... | 2024-01-08T15:21:21Z | Accepted at the 38th Conference on Neural Information Processing Systems (NeurIPS 2024) | null | null | Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series | ['Vijay Ekambaram', 'Arindam Jati', 'Nam H. Nguyen', 'Pankaj Dayama', 'Chandra Reddy', 'Wesley M. Gifford', 'Jayant Kalagnanam'] | 2024 | Neural Information Processing Systems | 35 | 49 | ['Computer Science'] |
| 2401.03991 | Advancing Spatial Reasoning in Large Language Models: An In-Depth Evaluation and Enhancement Using the StepGame Benchmark | ['Fangjun Li', 'David C. Hogg', 'Anthony G. Cohn'] | ['cs.AI', 'cs.CL', 'cs.DB', 'cs.LO'] | Artificial intelligence (AI) has made remarkable progress across various domains, with large language models like ChatGPT gaining substantial attention for their human-like text-generation capabilities. Despite these achievements, spatial reasoning remains a significant challenge for these models. Benchmarks like StepG... | 2024-01-08T16:13:08Z | Camera-Ready version for AAAI 2024 | null | null | Advancing Spatial Reasoning in Large Language Models: An In-Depth Evaluation and Enhancement Using the StepGame Benchmark | ['Fangjun Li', 'David C. Hogg', 'Anthony G. Cohn'] | 2024 | AAAI Conference on Artificial Intelligence | 33 | 25 | ['Computer Science'] |
| 2401.04088 | Mixtral of Experts | ['Albert Q. Jiang', 'Alexandre Sablayrolles', 'Antoine Roux', 'Arthur Mensch', 'Blanche Savary', 'Chris Bamford', 'Devendra Singh Chaplot', 'Diego de las Casas', 'Emma Bou Hanna', 'Florian Bressand', 'Gianna Lengyel', 'Guillaume Bour', 'Guillaume Lample', 'Lélio Renard Lavaud', 'Lucile Saulnier', 'Marie-Anne Lachaux', ... | ['cs.LG', 'cs.CL'] | We introduce Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model. Mixtral has the same architecture as Mistral 7B, with the difference that each layer is composed of 8 feedforward blocks (i.e. experts). For every token, at each layer, a router network selects two experts to process the current state and com... | 2024-01-08T18:47:34Z | See more details at https://mistral.ai/news/mixtral-of-experts/ | null | null | null | null | null | null | null | null | null |
| 2401.04319 | Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs | ['Junjie Wang', 'Dan Yang', 'Binbin Hu', 'Yue Shen', 'Wen Zhang', 'Jinjie Gu'] | ['cs.CL', 'cs.AI'] | In this paper, we explore a new way for user targeting, where non-expert marketers could select their target users solely given demands in natural language form. The key to this issue is how to transform natural languages into practical structured logical languages, i.e., the structured understanding of marketer demand... | 2024-01-09T02:25:23Z | Accepted by KDD 2024 | null | null | Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs | ['Junjie Wang', 'Dan Yang', 'Binbin Hu', 'Yue Shen', 'Wen Zhang', 'Jinjie Gu'] | 2024 | Knowledge Discovery and Data Mining | 2 | 36 | ['Computer Science'] |
| 2401.04464 | PhilEO Bench: Evaluating Geo-Spatial Foundation Models | ['Casper Fibaek', 'Luke Camilleri', 'Andreas Luyts', 'Nikolaos Dionelis', 'Bertrand Le Saux'] | ['cs.CV', 'cs.LG'] | Massive amounts of unlabelled data are captured by Earth Observation (EO) satellites, with the Sentinel-2 constellation generating 1.6 TB of data daily. This makes Remote Sensing a data-rich domain well suited to Machine Learning (ML) solutions. However, a bottleneck in applying ML models to EO is the lack of annotated... | 2024-01-09T09:58:42Z | 6 pages, 5 figures, Submitted to IGARSS 2024 | null | null | null | null | null | null | null | null | null |
| 2401.04478 | TwinBooster: Synergising Large Language Models with Barlow Twins and Gradient Boosting for Enhanced Molecular Property Prediction | ['Maximilian G. Schuh', 'Davide Boldini', 'Stephan A. Sieber'] | ['q-bio.BM', 'cs.AI', 'cs.CL', 'cs.LG'] | The success of drug discovery and development relies on the precise prediction of molecular activities and properties. While in silico molecular property prediction has shown remarkable potential, its use has been limited so far to assays for which large amounts of data are available. In this study, we use a fine-tuned... | 2024-01-09T10:36:20Z | 13(+9) pages(+appendix), 5 figures, 11 tables | J. Chem. Inf. Model. 2024, 64, 12, 4640-4650 | 10.1021/acs.jcim.4c00765 | null | null | null | null | null | null | null |
| 2401.04577 | Masked Audio Generation using a Single Non-Autoregressive Transformer | ['Alon Ziv', 'Itai Gat', 'Gael Le Lan', 'Tal Remez', 'Felix Kreuk', 'Alexandre Défossez', 'Jade Copet', 'Gabriel Synnaeve', 'Yossi Adi'] | ['cs.SD', 'cs.AI', 'cs.LG', 'eess.AS'] | We introduce MAGNeT, a masked generative sequence modeling method that operates directly over several streams of audio tokens. Unlike prior work, MAGNeT is comprised of a single-stage, non-autoregressive transformer. During training, we predict spans of masked tokens obtained from a masking scheduler, while during infe... | 2024-01-09T14:29:39Z | null | null | null | Masked Audio Generation using a Single Non-Autoregressive Transformer | ['Alon Ziv', 'Itai Gat', 'Gaël Le Lan', 'Tal Remez', 'Felix Kreuk', "Alexandre D'efossez", 'Jade Copet', 'Gabriel Synnaeve', 'Yossi Adi'] | 2024 | International Conference on Learning Representations | 40 | 57 | ['Computer Science', 'Engineering'] |
| 2401.04658 | Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models | ['Zhen Qin', 'Weigao Sun', 'Dong Li', 'Xuyang Shen', 'Weixuan Sun', 'Yiran Zhong'] | ['cs.CL', 'cs.AI'] | Linear attention is an efficient attention mechanism that has recently emerged as a promising alternative to conventional softmax attention. With its ability to process tokens in linear computational complexities, linear attention, in theory, can handle sequences of unlimited length without sacrificing speed, i.e., mai... | 2024-01-09T16:27:28Z | Technical Report. Yiran Zhong is the corresponding author. The source code is available at https://github.com/OpenNLPLab/lightning-attention | null | null | Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models | ['Zhen Qin', 'Weigao Sun', 'Dong Li', 'Xuyang Shen', 'Weixuan Sun', 'Yiran Zhong'] | 2024 | arXiv.org | 28 | 50 | ['Computer Science'] |
| 2401.04810 | Translate-Distill: Learning Cross-Language Dense Retrieval by Translation and Distillation | ['Eugene Yang', 'Dawn Lawrie', 'James Mayfield', 'Douglas W. Oard', 'Scott Miller'] | ['cs.IR', 'cs.CL'] | Prior work on English monolingual retrieval has shown that a cross-encoder trained using a large number of relevance judgments for query-document pairs can be used as a teacher to train more efficient, but similarly effective, dual-encoder student models. Applying a similar knowledge distillation approach to training a... | 2024-01-09T20:40:49Z | 17 pages, 1 figure, accepted at ECIR 2024 | null | null | null | null | null | null | null | null | null |
| 2401.05252 | PIXART-δ: Fast and Controllable Image Generation with Latent Consistency Models | ['Junsong Chen', 'Yue Wu', 'Simian Luo', 'Enze Xie', 'Sayak Paul', 'Ping Luo', 'Hang Zhao', 'Zhenguo Li'] | ['cs.CV'] | This technical report introduces PIXART-{\delta}, a text-to-image synthesis framework that integrates the Latent Consistency Model (LCM) and ControlNet into the advanced PIXART-{\alpha} model. PIXART-{\alpha} is recognized for its ability to generate high-quality images of 1024px resolution through a remarkably efficie... | 2024-01-10T16:27:38Z | Technical Report | null | null | null | null | null | null | null | null | null |
| 2401.05447 | Can ChatGPT Compute Trustworthy Sentiment Scores from Bloomberg Market Wraps? | ['Baptiste Lefort', 'Eric Benhamou', 'Jean-Jacques Ohana', 'David Saltiel', 'Beatrice Guez', 'Damien Challet'] | ['q-fin.ST', 'cs.AI'] | We used a dataset of daily Bloomberg Financial Market Summaries from 2010 to 2023, reposted on large financial media, to determine how global news headlines may affect stock market movements using ChatGPT and a two-stage prompt approach. We document a statistically significant positive correlation between the sentiment... | 2024-01-09T10:34:19Z | null | null | null | null | null | null | null | null | null | null |
| 2401.05566 | Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | ['Evan Hubinger', 'Carson Denison', 'Jesse Mu', 'Mike Lambert', 'Meg Tong', 'Monte MacDiarmid', 'Tamera Lanham', 'Daniel M. Ziegler', 'Tim Maxwell', 'Newton Cheng', 'Adam Jermyn', 'Amanda Askell', 'Ansh Radhakrishnan', 'Cem Anil', 'David Duvenaud', 'Deep Ganguli', 'Fazl Barez', 'Jack Clark', 'Kamal Ndousse', 'Kshitij S... | ['cs.CR', 'cs.AI', 'cs.CL', 'cs.LG', 'cs.SE'] | Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safet... | 2024-01-10T22:14:35Z | updated to add missing acknowledgements | null | null | null | null | null | null | null | null | null |
| 2401.05633 | Transforming Image Super-Resolution: A ConvFormer-based Efficient Approach | ['Gang Wu', 'Junjun Jiang', 'Junpeng Jiang', 'Xianming Liu'] | ['cs.CV', 'eess.IV'] | Recent progress in single-image super-resolution (SISR) has achieved remarkable performance, yet the computational costs of these methods remain a challenge for deployment on resource-constrained devices. In particular, transformer-based methods, which leverage self-attention mechanisms, have led to significant breakth... | 2024-01-11T03:08:00Z | Accepted by IEEE TIP | IEEE Transactions on Image Processing 2024 | 10.1109/TIP.2024.3477350 | Transforming Image Super-Resolution: A ConvFormer-Based Efficient Approach | ['Gang Wu', 'Junjun Jiang', 'Junpeng Jiang', 'Xianming Liu'] | 2024 | IEEE Transactions on Image Processing | 11 | 80 | ['Computer Science', 'Medicine', 'Engineering'] |
| 2401.06066 | DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models | ['Damai Dai', 'Chengqi Deng', 'Chenggang Zhao', 'R. X. Xu', 'Huazuo Gao', 'Deli Chen', 'Jiashi Li', 'Wangding Zeng', 'Xingkai Yu', 'Y. Wu', 'Zhenda Xie', 'Y. K. Li', 'Panpan Huang', 'Fuli Luo', 'Chong Ruan', 'Zhifang Sui', 'Wenfeng Liang'] | ['cs.CL'] | In the era of large language models, Mixture-of-Experts (MoE) is a promising architecture for managing computational costs when scaling up model parameters. However, conventional MoE architectures like GShard, which activate the top-$K$ out of $N$ experts, face challenges in ensuring expert specialization, i.e. each ex... | 2024-01-11T17:31:42Z | null | null | null | null | null | null | null | null | null | null |
| 2401.06071 | GroundingGPT:Language Enhanced Multi-modal Grounding Model | ['Zhaowei Li', 'Qi Xu', 'Dong Zhang', 'Hang Song', 'Yiqing Cai', 'Qi Qi', 'Ran Zhou', 'Junting Pan', 'Zefeng Li', 'Van Tu Vu', 'Zhida Huang', 'Tao Wang'] | ['cs.CV', 'cs.CL'] | Multi-modal large language models have demonstrated impressive performance across various tasks in different modalities. However, existing multi-modal models primarily emphasize capturing global information within each modality while neglecting the importance of perceiving local information across modalities. Consequen... | 2024-01-11T17:41:57Z | null | null | null | null | null | null | null | null | null | null |
| 2401.06080 | Secrets of RLHF in Large Language Models Part II: Reward Modeling | ['Binghai Wang', 'Rui Zheng', 'Lu Chen', 'Yan Liu', 'Shihan Dou', 'Caishuang Huang', 'Wei Shen', 'Senjie Jin', 'Enyu Zhou', 'Chenyu Shi', 'Songyang Gao', 'Nuo Xu', 'Yuhao Zhou', 'Xiaoran Fan', 'Zhiheng Xi', 'Jun Zhao', 'Xiao Wang', 'Tao Ji', 'Hang Yan', 'Lixing Shen', 'Zhan Chen', 'Tao Gui', 'Qi Zhang', 'Xipeng Qiu', '... | ['cs.AI'] | Reinforcement Learning from Human Feedback (RLHF) has become a crucial technology for aligning language models with human values and intentions, enabling models to produce more helpful and harmless responses. Reward models are trained as proxies for human preferences to drive reinforcement learning optimization. While ... | 2024-01-11T17:56:59Z | null | null | null | null | null | null | null | null | null | null |
| 2401.06118 | Extreme Compression of Large Language Models via Additive Quantization | ['Vage Egiazarian', 'Andrei Panferov', 'Denis Kuznedelev', 'Elias Frantar', 'Artem Babenko', 'Dan Alistarh'] | ['cs.LG', 'cs.CL'] | The emergence of accurate open large language models (LLMs) has led to a race towards performant quantization techniques which can enable their execution on end-user devices. In this paper, we revisit the problem of "extreme" LLM compression-defined as targeting extremely low bit counts, such as 2 to 3 bits per paramet... | 2024-01-11T18:54:44Z | ICML, 2024 | null | null | null | null | null | null | null | null | null |
| 2401.06121 | TOFU: A Task of Fictitious Unlearning for LLMs | ['Pratyush Maini', 'Zhili Feng', 'Avi Schwarzschild', 'Zachary C. Lipton', 'J. Zico Kolter'] | ['cs.LG', 'cs.CL'] | Large language models trained on massive corpora of data from the web can memorize and reproduce sensitive or private data raising both legal and ethical concerns. Unlearning, or tuning models to forget information present in their training data, provides us with a way to protect private data after training. Although s... | 2024-01-11T18:57:12Z | https://locuslab.github.io/tofu/ | null | null | TOFU: A Task of Fictitious Unlearning for LLMs | ['Pratyush Maini', 'Zhili Feng', 'Avi Schwarzschild', 'Zachary Chase Lipton', 'J. Kolter'] | 2024 | arXiv.org | 193 | 49 | ['Computer Science'] |
| 2401.06183 | End to end Hindi to English speech conversion using Bark, mBART and a finetuned XLSR Wav2Vec2 | ['Aniket Tathe', 'Anand Kamble', 'Suyash Kumbharkar', 'Atharva Bhandare', 'Anirban C. Mitra'] | ['eess.AS', 'cs.AI', 'cs.CL', 'cs.LG'] | Speech has long been a barrier to effective communication and connection, persisting as a challenge in our increasingly interconnected world. This research paper introduces a transformative solution to this persistent obstacle an end-to-end speech conversion framework tailored for Hindi-to-English translation, culminat... | 2024-01-11T04:26:21Z | null | null | null | End to end Hindi to English speech conversion using Bark, mBART and a finetuned XLSR Wav2Vec2 | ['Aniket Tathe', 'Anand Kamble', 'Suyash Kumbharkar', 'Atharva Bhandare', 'Anirban C. Mitra'] | 2024 | arXiv.org | 1 | 17 | ['Computer Science', 'Engineering'] |
| 2401.06197 | Efficient Deformable ConvNets: Rethinking Dynamic and Sparse Operator for Vision Applications | ['Yuwen Xiong', 'Zhiqi Li', 'Yuntao Chen', 'Feng Wang', 'Xizhou Zhu', 'Jiapeng Luo', 'Wenhai Wang', 'Tong Lu', 'Hongsheng Li', 'Yu Qiao', 'Lewei Lu', 'Jie Zhou', 'Jifeng Dai'] | ['cs.CV'] | We introduce Deformable Convolution v4 (DCNv4), a highly efficient and effective operator designed for a broad spectrum of vision applications. DCNv4 addresses the limitations of its predecessor, DCNv3, with two key enhancements: 1. removing softmax normalization in spatial aggregation to enhance its dynamic property a... | 2024-01-11T14:53:24Z | Tech report; Code: https://github.com/OpenGVLab/DCNv4 | null | null | null | null | null | null | null | null | null |
| 2401.06199 | xTrimoPGLM: Unified 100B-Scale Pre-trained Transformer for Deciphering the Language of Protein | ['Bo Chen', 'Xingyi Cheng', 'Pan Li', 'Yangli-ao Geng', 'Jing Gong', 'Shen Li', 'Zhilei Bei', 'Xu Tan', 'Boyan Wang', 'Xin Zeng', 'Chiming Liu', 'Aohan Zeng', 'Yuxiao Dong', 'Jie Tang', 'Le Song'] | ['q-bio.QM', 'cs.AI', 'cs.LG'] | Protein language models have shown remarkable success in learning biological information from protein sequences. However, most existing models are limited by either autoencoding or autoregressive pre-training objectives, which makes them struggle to handle protein understanding and generation tasks concurrently. We pro... | 2024-01-11T15:03:17Z | 100 pages with main text and supplementary contents | null | null | xTrimoPGLM: Unified 100B-Scale Pre-trained Transformer for Deciphering the Language of Protein | ['Bo Chen', 'Xingyi Cheng', 'Yangli-ao Geng', 'Shengyin Li', 'Xin Zeng', 'Bo Wang', 'Jing Gong', 'Chiming Liu', 'Aohan Zeng', 'Yuxiao Dong', 'Jie Tang', 'Leo T. Song'] | 2024 | bioRxiv | 113 | 111 | ['Biology', 'Computer Science'] |
| 2401.06209 | Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs | ['Shengbang Tong', 'Zhuang Liu', 'Yuexiang Zhai', 'Yi Ma', 'Yann LeCun', 'Saining Xie'] | ['cs.CV'] | Is vision good enough for language? Recent advancements in multimodal models primarily stem from the powerful reasoning abilities of large language models (LLMs). However, the visual component typically depends only on the instance-level contrastive language-image pre-training (CLIP). Our research reveals that the visu... | 2024-01-11T18:58:36Z | Project page: https://tsb0601.github.io/mmvp_blog/ | null | null | null | null | null | null | null | null | null |
| 2401.06408 | AboutMe: Using Self-Descriptions in Webpages to Document the Effects of English Pretraining Data Filters | ['Li Lucy', 'Suchin Gururangan', 'Luca Soldaini', 'Emma Strubell', 'David Bamman', 'Lauren F. Klein', 'Jesse Dodge'] | ['cs.CL'] | Large language models' (LLMs) abilities are drawn from their pretraining data, and model development begins with data curation. However, decisions around what data is retained or removed during this initial stage are under-scrutinized. In our work, we ground web text, which is a popular pretraining data source, to its ... | 2024-01-12T07:10:10Z | 28 pages, 13 figures. Association for Computational Linguistics (ACL) 2024 | null | null | AboutMe: Using Self-Descriptions in Webpages to Document the Effects of English Pretraining Data Filters | ['Li Lucy', 'Suchin Gururangan', 'Luca Soldaini', 'Emma Strubell', 'David Bamman', 'Lauren Klein', 'Jesse Dodge'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 17 | 79 | ['Computer Science'] |
| 2401.06416 | Mission: Impossible Language Models | ['Julie Kallini', 'Isabel Papadimitriou', 'Richard Futrell', 'Kyle Mahowald', 'Christopher Potts'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Chomsky and others have very directly claimed that large language models (LLMs) are equally capable of learning languages that are possible and impossible for humans to learn. However, there is very little published experimental evidence to support such a claim. Here, we develop a set of synthetic impossible languages ... | 2024-01-12T07:24:26Z | null | null | null | Mission: Impossible Language Models | ['Julie Kallini', 'Isabel Papadimitriou', 'Richard Futrell', 'Kyle Mahowald', 'Christopher Potts'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 21 | 76 | ['Computer Science'] |
| 2401.06466 | PersianMind: A Cross-Lingual Persian-English Large Language Model | ['Pedram Rostami', 'Ali Salemi', 'Mohammad Javad Dousti'] | ['cs.CL', 'cs.AI'] | Large language models demonstrate remarkable proficiency in various linguistic tasks and have extensive knowledge across various domains. Although they perform best in English, their ability in other languages is notable too. In contrast, open-source models, such as LLaMa, are primarily trained on English datasets, res... | 2024-01-12T09:24:10Z | null | null | null | PersianMind: A Cross-Lingual Persian-English Large Language Model | ['Pedram Rostami', 'Ali Salemi', 'M. Dousti'] | 2024 | arXiv.org | 5 | 49 | ['Computer Science'] |
| 2401.06532 | INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning | ['Yutao Zhu', 'Peitian Zhang', 'Chenghao Zhang', 'Yifei Chen', 'Binyu Xie', 'Zheng Liu', 'Ji-Rong Wen', 'Zhicheng Dou'] | ['cs.CL', 'cs.IR'] | Large language models (LLMs) have demonstrated impressive capabilities in various natural language processing tasks. Despite this, their application to information retrieval (IR) tasks is still challenging due to the infrequent occurrence of many IR-specific concepts in natural language. While prompt-based methods can ... | 2024-01-12T12:10:28Z | Accepted by ACL 2024 main conference. Repo: https://github.com/DaoD/INTERS | null | null | null | null | null | null | null | null | null |
| 2401.06591 | Prometheus-Vision: Vision-Language Model as a Judge for Fine-Grained Evaluation | ['Seongyun Lee', 'Seungone Kim', 'Sue Hyun Park', 'Geewook Kim', 'Minjoon Seo'] | ['cs.CL'] | Assessing long-form responses generated by Vision-Language Models (VLMs) is challenging. It not only requires checking whether the VLM follows the given instruction but also verifying whether the text output is properly grounded on the given image. Inspired by the recent approach of evaluating LMs with LMs, in this wor... | 2024-01-12T14:19:23Z | Work in progress | null | null | null | null | null | null | null | null | null |
| 2401.06706 | Multi-Candidate Speculative Decoding | ['Sen Yang', 'Shujian Huang', 'Xinyu Dai', 'Jiajun Chen'] | ['cs.CL'] | Large language models have shown impressive capabilities across a variety of NLP tasks, yet their generating text autoregressively is time-consuming. One way to speed them up is speculative decoding, which generates candidate segments (a sequence of tokens) from a fast draft model that is then verified in parallel by t... | 2024-01-12T17:15:23Z | null | null | null | null | null | null | null | null | null | null |
| 2401.06761 | APAR: LLMs Can Do Auto-Parallel Auto-Regressive Decoding | ['Mingdao Liu', 'Aohan Zeng', 'Bowen Wang', 'Peng Zhang', 'Jie Tang', 'Yuxiao Dong'] | ['cs.CL'] | The massive adoption of large language models (LLMs) demands efficient deployment strategies. However, the auto-regressive decoding process, which is fundamental to how most LLMs generate text, poses challenges to achieve efficient serving. In this work, we introduce a parallel auto-regressive generation method. By ins... | 2024-01-12T18:50:36Z | 14 pages | null | null | APAR: LLMs Can Do Auto-Parallel Auto-Regressive Decoding | ['Mingdao Liu', 'Aohan Zeng', 'Bowen Wang', 'Peng Zhang', 'Jie Tang', 'Yuxiao Dong'] | 2024 | arXiv.org | 10 | 24 | ['Computer Science'] |
| 2401.06838 | MAPO: Advancing Multilingual Reasoning through Multilingual Alignment-as-Preference Optimization | ['Shuaijie She', 'Wei Zou', 'Shujian Huang', 'Wenhao Zhu', 'Xiang Liu', 'Xiang Geng', 'Jiajun Chen'] | ['cs.CL'] | Though reasoning abilities are considered language-agnostic, existing LLMs exhibit inconsistent reasoning abilities across different languages, e.g., reasoning in the dominant language like English is superior to other languages due to the imbalance of multilingual training data. To enhance reasoning abilities in non-d... | 2024-01-12T18:03:54Z | The project is available at https://github.com/NJUNLP/MAPO | null | null | MAPO: Advancing Multilingual Reasoning through Multilingual Alignment-as-Preference Optimization | ['Shuaijie She', 'Shujian Huang', 'Wei Zou', 'Wenhao Zhu', 'Xiang Liu', 'Xiang Geng', 'Jiajun Chen'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 42 | 20 | ['Computer Science'] |
2401.06910 | InRanker: Distilled Rankers for Zero-shot Information Retrieval | ['Thiago Laitz', 'Konstantinos Papakostas', 'Roberto Lotufo', 'Rodrigo Nogueira'] | ['cs.IR'] | Despite multi-billion parameter neural rankers being common components of
state-of-the-art information retrieval pipelines, they are rarely used in
production due to the enormous amount of compute required for inference. In
this work, we propose a new method for distilling large rankers into their
smaller versions focu... | 2024-01-12T21:52:42Z | null | null | null | null | null | null | null | null | null | null |
2401.07105 | Graph Language Models | ['Moritz Plenz', 'Anette Frank'] | ['cs.CL', 'cs.AI', 'cs.LG', 'I.2.0; I.2.4; I.2.7'] | While Language Models (LMs) are the workhorses of NLP, their interplay with
structured knowledge graphs (KGs) is still actively researched. Current methods
for encoding such graphs typically either (i) linearize them for embedding with
LMs -- which underutilize structural information, or (ii) use Graph Neural
Networks ... | 2024-01-13T16:09:49Z | Accepted at ACL 2024. 9 pages, 10 figures, 9 tables | null | null | null | null | null | null | null | null | null |
2401.07286 | CANDLE: Iterative Conceptualization and Instantiation Distillation from
Large Language Models for Commonsense Reasoning | ['Weiqi Wang', 'Tianqing Fang', 'Chunyang Li', 'Haochen Shi', 'Wenxuan Ding', 'Baixuan Xu', 'Zhaowei Wang', 'Jiaxin Bai', 'Xin Liu', 'Jiayang Cheng', 'Chunkit Chan', 'Yangqiu Song'] | ['cs.CL'] | The sequential process of conceptualization and instantiation is essential to
generalizable commonsense reasoning as it allows the application of existing
knowledge to unfamiliar scenarios. However, existing works tend to undervalue
the step of instantiation and heavily rely on pre-built concept taxonomies and
human an... | 2024-01-14T13:24:30Z | ACL2024 | null | null | CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning | ['Weiqi Wang', 'Tianqing Fang', 'Chunyang Li', 'Haochen Shi', 'Wenxuan Ding', 'Baixuan Xu', 'Zhaowei Wang', 'Jiaxin Bai', 'Xin Liu', 'Cheng Jiayang', 'Chunkit Chan', 'Yangqiu Song'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 32 | 105 | ['Computer Science'] |
2401.07519 | InstantID: Zero-shot Identity-Preserving Generation in Seconds | ['Qixun Wang', 'Xu Bai', 'Haofan Wang', 'Zekui Qin', 'Anthony Chen', 'Huaxia Li', 'Xu Tang', 'Yao Hu'] | ['cs.CV', 'cs.AI'] | There has been significant progress in personalized image synthesis with
methods such as Textual Inversion, DreamBooth, and LoRA. Yet, their real-world
applicability is hindered by high storage demands, lengthy fine-tuning
processes, and the need for multiple reference images. Conversely, existing ID
embedding-based me... | 2024-01-15T07:50:18Z | Technical Report, project page available at
https://instantid.github.io/ | null | null | InstantID: Zero-shot Identity-Preserving Generation in Seconds | ['Qixun Wang', 'Xu Bai', 'Haofan Wang', 'Zekui Qin', 'Anthony Chen'] | 2024 | arXiv.org | 259 | 28 | ['Computer Science'] |
2401.07760 | On the importance of Data Scale in Pretraining Arabic Language Models | ['Abbas Ghaddar', 'Philippe Langlais', 'Mehdi Rezagholizadeh', 'Boxing Chen'] | ['cs.CL'] | Pretraining monolingual language models have been proven to be vital for
performance in Arabic Natural Language Processing (NLP) tasks. In this paper,
we conduct a comprehensive study on the role of data in Arabic Pretrained
Language Models (PLMs). More precisely, we reassess the performance of a suite
of state-of-the-... | 2024-01-15T15:11:15Z | null | null | null | null | null | null | null | null | null | null |
2401.07851 | Unlocking Efficiency in Large Language Model Inference: A Comprehensive
Survey of Speculative Decoding | ['Heming Xia', 'Zhe Yang', 'Qingxiu Dong', 'Peiyi Wang', 'Yongqi Li', 'Tao Ge', 'Tianyu Liu', 'Wenjie Li', 'Zhifang Sui'] | ['cs.CL'] | To mitigate the high inference latency stemming from autoregressive decoding
in Large Language Models (LLMs), Speculative Decoding has emerged as a novel
decoding paradigm for LLM inference. In each decoding step, this method first
drafts several future tokens efficiently and then verifies them in parallel.
Unlike auto... | 2024-01-15T17:26:50Z | ACL 2024 Findings (Long Paper), camera-ready version | null | null | Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding | ['Heming Xia', 'Zhe Yang', 'Qingxiu Dong', 'Peiyi Wang', 'Yongqi Li', 'Tao Ge', 'Tianyu Liu', 'Wenjie Li', 'Zhifang Sui'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 130 | 60 | ['Computer Science'] |
2401.07950 | SciInstruct: a Self-Reflective Instruction Annotated Dataset for
Training Scientific Language Models | ['Dan Zhang', 'Ziniu Hu', 'Sining Zhoubian', 'Zhengxiao Du', 'Kaiyu Yang', 'Zihan Wang', 'Yisong Yue', 'Yuxiao Dong', 'Jie Tang'] | ['cs.CL'] | Large Language Models (LLMs) have shown promise in assisting scientific
discovery. However, such applications are currently limited by LLMs'
deficiencies in understanding intricate scientific concepts, deriving symbolic
equations, and solving advanced numerical calculations. To bridge these gaps,
we introduce SciInstru... | 2024-01-15T20:22:21Z | Accepted to NeurIPS D&B Track 2024 | null | null | null | null | null | null | null | null | null |
2401.08342 | ECAPA2: A Hybrid Neural Network Architecture and Training Strategy for
Robust Speaker Embeddings | ['Jenthe Thienpondt', 'Kris Demuynck'] | ['eess.AS'] | In this paper, we present ECAPA2, a novel hybrid neural network architecture
and training strategy to produce robust speaker embeddings. Most speaker
verification models are based on either the 1D- or 2D-convolutional operation,
often manifested as Time Delay Neural Networks or ResNets, respectively. Hybrid
models are ... | 2024-01-16T13:17:39Z | proceedings of ASRU 2023 | null | null | ECAPA2: A Hybrid Neural Network Architecture and Training Strategy for Robust Speaker Embeddings | ['Jenthe Thienpondt', 'Kris Demuynck'] | 2023 | Automatic Speech Recognition & Understanding | 17 | 27 | ['Computer Science', 'Engineering'] |
2401.08417 | Contrastive Preference Optimization: Pushing the Boundaries of LLM
Performance in Machine Translation | ['Haoran Xu', 'Amr Sharaf', 'Yunmo Chen', 'Weiting Tan', 'Lingfeng Shen', 'Benjamin Van Durme', 'Kenton Murray', 'Young Jin Kim'] | ['cs.CL'] | Moderate-sized large language models (LLMs) -- those with 7B or 13B
parameters -- exhibit promising machine translation (MT) performance. However,
even the top-performing 13B LLM-based translation models, like ALMA, does not
match the performance of state-of-the-art conventional encoder-decoder
translation models or la... | 2024-01-16T15:04:51Z | Accepted at ICML 2024 | null | null | null | null | null | null | null | null | null |
2401.08503 | Real3D-Portrait: One-shot Realistic 3D Talking Portrait Synthesis | ['Zhenhui Ye', 'Tianyun Zhong', 'Yi Ren', 'Jiaqi Yang', 'Weichuang Li', 'Jiawei Huang', 'Ziyue Jiang', 'Jinzheng He', 'Rongjie Huang', 'Jinglin Liu', 'Chen Zhang', 'Xiang Yin', 'Zejun Ma', 'Zhou Zhao'] | ['cs.CV'] | One-shot 3D talking portrait generation aims to reconstruct a 3D avatar from
an unseen image, and then animate it with a reference video or audio to
generate a talking portrait video. The existing methods fail to simultaneously
achieve the goals of accurate 3D avatar reconstruction and stable talking face
animation. Be... | 2024-01-16T17:04:30Z | ICLR 2024 (Spotlight). Project page: https://real3dportrait.github.io | null | null | Real3D-Portrait: One-shot Realistic 3D Talking Portrait Synthesis | ['Zhenhui Ye', 'Tianyun Zhong', 'Yi Ren', 'Jiaqi Yang', 'Weichuang Li', 'Jia-Bin Huang', 'Ziyue Jiang', 'Jinzheng He', 'Rongjie Huang', 'Jinglin Liu', 'Chen Zhang', 'Xiang Yin', 'Zejun Ma', 'Zhou Zhao'] | 2024 | International Conference on Learning Representations | 50 | 56 | ['Computer Science'] |
2401.08508 | EmoLLMs: A Series of Emotional Large Language Models and Annotation
Tools for Comprehensive Affective Analysis | ['Zhiwei Liu', 'Kailai Yang', 'Tianlin Zhang', 'Qianqian Xie', 'Sophia Ananiadou'] | ['cs.CL'] | Sentiment analysis and emotion detection are important research topics in
natural language processing (NLP) and benefit many downstream tasks. With the
widespread application of LLMs, researchers have started exploring the
application of LLMs based on instruction-tuning in the field of sentiment
analysis. However, thes... | 2024-01-16T17:11:11Z | Accepted by KDD 2024 | null | 10.1145/3637528.3671552 | EmoLLMs: A Series of Emotional Large Language Models and Annotation Tools for Comprehensive Affective Analysis | ['Zhiwei Liu', 'Kailai Yang', 'Tianlin Zhang', 'Qianqian Xie', 'Zeping Yu', 'Sophia Ananiadou'] | 2024 | Knowledge Discovery and Data Mining | 52 | 59 | ['Computer Science'] |
2401.08541 | Scalable Pre-training of Large Autoregressive Image Models | ['Alaaeldin El-Nouby', 'Michal Klein', 'Shuangfei Zhai', 'Miguel Angel Bautista', 'Alexander Toshev', 'Vaishaal Shankar', 'Joshua M Susskind', 'Armand Joulin'] | ['cs.CV'] | This paper introduces AIM, a collection of vision models pre-trained with an
autoregressive objective. These models are inspired by their textual
counterparts, i.e., Large Language Models (LLMs), and exhibit similar scaling
properties. Specifically, we highlight two key findings: (1) the performance of
the visual featu... | 2024-01-16T18:03:37Z | https://github.com/apple/ml-aim | null | null | null | null | null | null | null | null | null |
2401.08573 | WAVES: Benchmarking the Robustness of Image Watermarks | ['Bang An', 'Mucong Ding', 'Tahseen Rabbani', 'Aakriti Agrawal', 'Yuancheng Xu', 'Chenghao Deng', 'Sicheng Zhu', 'Abdirisak Mohamed', 'Yuxin Wen', 'Tom Goldstein', 'Furong Huang'] | ['cs.CV', 'cs.CR', 'cs.LG'] | In the burgeoning age of generative AI, watermarks act as identifiers of
provenance and artificial content. We present WAVES (Watermark Analysis Via
Enhanced Stress-testing), a benchmark for assessing image watermark robustness,
overcoming the limitations of current evaluation methods. WAVES integrates
detection and id... | 2024-01-16T18:58:36Z | Accepted by ICML 2024 | null | null | Benchmarking the Robustness of Image Watermarks | ['Bang An', 'Mucong Ding', 'Tahseen Rabbani', 'Aakriti Agrawal', 'Yuancheng Xu', 'Chenghao Deng', 'Sicheng Zhu', 'Abdirisak Mohamed', 'Yuxin Wen', 'Tom Goldstein', 'Furong Huang'] | 2024 | International Conference on Machine Learning | 50 | 57 | ['Computer Science'] |
2401.08815 | Adversarial Supervision Makes Layout-to-Image Diffusion Models Thrive | ['Yumeng Li', 'Margret Keuper', 'Dan Zhang', 'Anna Khoreva'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Despite the recent advances in large-scale diffusion models, little progress
has been made on the layout-to-image (L2I) synthesis task. Current L2I models
either suffer from poor editability via text or weak alignment between the
generated image and the input layout. This limits their usability in practice.
To mitigate... | 2024-01-16T20:31:46Z | Accepted at ICLR 2024. Project page:
https://yumengli007.github.io/ALDM/ and code:
https://github.com/boschresearch/ALDM | null | null | null | null | null | null | null | null | null |
2401.08967 | ReFT: Reasoning with Reinforced Fine-Tuning | ['Trung Quoc Luong', 'Xinbo Zhang', 'Zhanming Jie', 'Peng Sun', 'Xiaoran Jin', 'Hang Li'] | ['cs.CL'] | One way to enhance the reasoning capability of Large Language Models (LLMs)
is to conduct Supervised Fine-Tuning (SFT) using Chain-of-Thought (CoT)
annotations. This approach does not show sufficiently strong generalization
ability, however, because the training only relies on the given CoT data. In
math problem-solvin... | 2024-01-17T04:43:21Z | ACL 2024 main conference; adjust with reviewer comments; 13 pages | null | null | ReFT: Reasoning with Reinforced Fine-Tuning | ['Trung Quoc Luong', 'Xinbo Zhang', 'Zhanming Jie', 'Peng Sun', 'Xiaoran Jin', 'Hang Li'] | 2024 | Annual Meeting of the Association for Computational Linguistics | 132 | 64 | ['Computer Science'] |
2401.09003 | Augmenting Math Word Problems via Iterative Question Composing | ['Haoxiong Liu', 'Yifan Zhang', 'Yifan Luo', 'Andrew Chi-Chih Yao'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Despite the advancements in large language models (LLMs) for mathematical
reasoning, solving competition-level math problems remains a significant
challenge, especially for open-source LLMs without external tools. We introduce
the MMIQC dataset, comprising a mixture of processed web data and synthetic
question-response... | 2024-01-17T06:48:16Z | null | null | null | Augmenting Math Word Problems via Iterative Question Composing | ['Haoxiong Liu', 'Yifan Zhang', 'Yifan Luo', 'A. C. Yao'] | 2024 | AAAI Conference on Artificial Intelligence | 46 | 39 | ['Computer Science'] |
2401.09047 | VideoCrafter2: Overcoming Data Limitations for High-Quality Video
Diffusion Models | ['Haoxin Chen', 'Yong Zhang', 'Xiaodong Cun', 'Menghan Xia', 'Xintao Wang', 'Chao Weng', 'Ying Shan'] | ['cs.CV'] | Text-to-video generation aims to produce a video based on a given prompt.
Recently, several commercial video models have been able to generate plausible
videos with minimal noise, excellent details, and high aesthetic scores.
However, these models rely on large-scale, well-filtered, high-quality videos
that are not acc... | 2024-01-17T08:30:32Z | Homepage: https://ailab-cvc.github.io/videocrafter; Github:
https://github.com/AILab-CVC/VideoCrafter | null | null | null | null | null | null | null | null | null |
2401.09192 | Preparing Lessons for Progressive Training on Language Models | ['Yu Pan', 'Ye Yuan', 'Yichun Yin', 'Jiaxin Shi', 'Zenglin Xu', 'Ming Zhang', 'Lifeng Shang', 'Xin Jiang', 'Qun Liu'] | ['cs.LG', 'cs.AI'] | The rapid progress of Transformers in artificial intelligence has come at the
cost of increased resource consumption and greenhouse gas emissions due to
growing model sizes. Prior work suggests using pretrained small models to
improve training efficiency, but this approach may not be suitable for new
model structures. ... | 2024-01-17T13:04:14Z | null | null | null | null | null | null | null | null | null | null |
2401.09414 | Vlogger: Make Your Dream A Vlog | ['Shaobin Zhuang', 'Kunchang Li', 'Xinyuan Chen', 'Yaohui Wang', 'Ziwei Liu', 'Yu Qiao', 'Yali Wang'] | ['cs.CV', 'cs.AI', 'cs.LG', 'cs.MM'] | In this work, we present Vlogger, a generic AI system for generating a
minute-level video blog (i.e., vlog) of user descriptions. Different from short
videos with a few seconds, vlog often contains a complex storyline with
diversified scenes, which is challenging for most existing video generation
approaches. To break ... | 2024-01-17T18:55:12Z | 16 pages, 8 figures, 11 tables | null | null | Vlogger: Make Your Dream A Vlog | ['Shaobin Zhuang', 'Kunchang Li', 'Xinyuan Chen', 'Yaohui Wang', 'Ziwei Liu', 'Yu Qiao', 'Yali Wang'] | 2024 | Computer Vision and Pattern Recognition | 39 | 91 | ['Computer Science'] |
2401.09417 | Vision Mamba: Efficient Visual Representation Learning with
Bidirectional State Space Model | ['Lianghui Zhu', 'Bencheng Liao', 'Qian Zhang', 'Xinlong Wang', 'Wenyu Liu', 'Xinggang Wang'] | ['cs.CV', 'cs.LG'] | Recently the state space models (SSMs) with efficient hardware-aware designs,
i.e., the Mamba deep learning model, have shown great potential for long
sequence modeling. Meanwhile building efficient and generic vision backbones
purely upon SSMs is an appealing direction. However, representing visual data
is challenging... | 2024-01-17T18:56:18Z | Vision Mamba (Vim) is accepted by ICML 2024. Code is available at
https://github.com/hustvl/Vim | null | null | null | null | null | null | null | null | null |
2401.09603 | Rethinking FID: Towards a Better Evaluation Metric for Image Generation | ['Sadeep Jayasumana', 'Srikumar Ramalingam', 'Andreas Veit', 'Daniel Glasner', 'Ayan Chakrabarti', 'Sanjiv Kumar'] | ['cs.CV'] | As with many machine learning problems, the progress of image generation
methods hinges on good evaluation metrics. One of the most popular is the
Frechet Inception Distance (FID). FID estimates the distance between a
distribution of Inception-v3 features of real images, and those of images
generated by the algorithm. ... | 2023-11-30T19:11:01Z | Code is available at:
https://github.com/google-research/google-research/tree/master/cmmd | null | null | null | null | null | null | null | null | null |
2401.09646 | ClimateGPT: Towards AI Synthesizing Interdisciplinary Research on
Climate Change | ['David Thulke', 'Yingbo Gao', 'Petrus Pelser', 'Rein Brune', 'Rricha Jalota', 'Floris Fok', 'Michael Ramos', 'Ian van Wyk', 'Abdallah Nasir', 'Hayden Goldstein', 'Taylor Tragemann', 'Katie Nguyen', 'Ariana Fowler', 'Andrew Stanco', 'Jon Gabriel', 'Jordan Taylor', 'Dean Moro', 'Evgenii Tsymbalov', 'Juliette de Waal', '... | ['cs.LG', 'cs.AI', 'cs.CL'] | This paper introduces ClimateGPT, a model family of domain-specific large
language models that synthesize interdisciplinary research on climate change.
We trained two 7B models from scratch on a science-oriented dataset of 300B
tokens. For the first model, the 4.2B domain-specific tokens were included
during pre-traini... | 2024-01-17T23:29:46Z | null | null | null | ClimateGPT: Towards AI Synthesizing Interdisciplinary Research on Climate Change | ['David Thulke', 'Yingbo Gao', 'Petrus Pelser', 'Rein Brune', 'Rricha Jalota', 'Floris Fok', 'Michael Ramos', 'Ian van Wyk', 'Abdallah Nasir', 'Hayden Goldstein', 'Taylor Tragemann', 'Katie Nguyen', 'Ariana Fowler', 'Andrew Stanco', 'Jon Gabriel', 'Jordan Taylor', 'Dean Moro', 'Evgenii Tsymbalov', 'Juliette de Waal', '... | 2024 | arXiv.org | 44 | 0 | ['Computer Science'] |
2401.09923 | MAMBA: Multi-level Aggregation via Memory Bank for Video Object
Detection | ['Guanxiong Sun', 'Yang Hua', 'Guosheng Hu', 'Neil Robertson'] | ['cs.CV'] | State-of-the-art video object detection methods maintain a memory structure,
either a sliding window or a memory queue, to enhance the current frame using
attention mechanisms. However, we argue that these memory structures are not
efficient or sufficient because of two implied operations: (1) concatenating
all feature... | 2024-01-18T12:13:06Z | update code url https://github.com/guanxiongsun/vfe.pytorch | In Proceedings of the AAAI Conference on Artificial Intelligence
2021 (Vol. 35, No. 3, pp. 2620-2627) | 10.1609/aaai.v35i3.16365 | MAMBA: Multi-level Aggregation via Memory Bank for Video Object Detection | ['Guanxiong Sun', 'Yang Hua', 'Guosheng Hu', 'N. Robertson'] | 2020 | AAAI Conference on Artificial Intelligence | 60 | 32 | ['Computer Science'] |
2401.10020 | Self-Rewarding Language Models | ['Weizhe Yuan', 'Richard Yuanzhe Pang', 'Kyunghyun Cho', 'Xian Li', 'Sainbayar Sukhbaatar', 'Jing Xu', 'Jason Weston'] | ['cs.CL', 'cs.AI'] | We posit that to achieve superhuman agents, future models require superhuman
feedback in order to provide an adequate training signal. Current approaches
commonly train reward models from human preferences, which may then be
bottlenecked by human performance level, and secondly these separate frozen
reward models canno... | 2024-01-18T14:43:47Z | ICML 2024 | null | null | null | null | null | null | null | null | null |
2401.10040 | Large Language Models for Scientific Information Extraction: An
Empirical Study for Virology | ['Mahsa Shamsabadi', "Jennifer D'Souza", 'Sören Auer'] | ['cs.CL', 'cs.AI', 'cs.DL', 'cs.IT', 'math.IT'] | In this paper, we champion the use of structured and semantic content
representation of discourse-based scholarly communication, inspired by tools
like Wikipedia infoboxes or structured Amazon product descriptions. These
representations provide users with a concise overview, aiding scientists in
navigating the dense ac... | 2024-01-18T15:04:55Z | 8 pages, 6 figures, Accepted as Findings of the ACL: EACL 2024 | null | null | Large Language Models for Scientific Information Extraction: An Empirical Study for Virology | ['Mahsa Shamsabadi', "Jennifer D'Souza", 'S. Auer'] | 2024 | Findings | 8 | 65 | ['Computer Science', 'Mathematics'] |
2401.10110 | SVIPTR: Fast and Efficient Scene Text Recognition with Vision Permutable
Extractor | ['Xianfu Cheng', 'Weixiao Zhou', 'Xiang Li', 'Jian Yang', 'Hang Zhang', 'Tao Sun', 'Wei Zhang', 'Yuying Mai', 'Tongliang Li', 'Xiaoming Chen', 'Zhoujun Li'] | ['cs.CV'] | Scene Text Recognition (STR) is an important and challenging upstream task
for building structured information databases, that involves recognizing text
within images of natural scenes. Although current state-of-the-art (SOTA)
models for STR exhibit high performance, they typically suffer from low
inference efficiency ... | 2024-01-18T16:27:09Z | 10 pages, 4 figures, 6 tables | null | null | null | null | null | null | null | null | null |
2401.10166 | VMamba: Visual State Space Model | ['Yue Liu', 'Yunjie Tian', 'Yuzhong Zhao', 'Hongtian Yu', 'Lingxi Xie', 'Yaowei Wang', 'Qixiang Ye', 'Jianbin Jiao', 'Yunfan Liu'] | ['cs.CV'] | Designing computationally efficient network architectures remains an ongoing
necessity in computer vision. In this paper, we adapt Mamba, a state-space
language model, into VMamba, a vision backbone with linear time complexity. At
the core of VMamba is a stack of Visual State-Space (VSS) blocks with the 2D
Selective Sc... | 2024-01-18T17:55:39Z | 33 pages, 14 figures, 15 tables. NeurIPS 2024 spotlight | null | null | null | null | null | null | null | null | null |
2401.10222 | Supervised Fine-tuning in turn Improves Visual Foundation Models | ['Xiaohu Jiang', 'Yixiao Ge', 'Yuying Ge', 'Dachuan Shi', 'Chun Yuan', 'Ying Shan'] | ['cs.CV', 'cs.AI'] | Image-text training like CLIP has dominated the pretraining of vision
foundation models in recent years. Subsequent efforts have been made to
introduce region-level visual learning into CLIP's pretraining but face
scalability challenges due to the lack of large-scale region-level datasets.
Drawing inspiration from supe... | 2024-01-18T18:58:54Z | 23 pages, 3 figures, Project page:
https://github.com/TencentARC/ViSFT/tree/main | null | null | null | null | null | null | null | null | null |
2401.10224 | The Manga Whisperer: Automatically Generating Transcriptions for Comics | ['Ragav Sachdeva', 'Andrew Zisserman'] | ['cs.CV'] | In the past few decades, Japanese comics, commonly referred to as Manga, have
transcended both cultural and linguistic boundaries to become a true worldwide
sensation. Yet, the inherent reliance on visual cues and illustration within
manga renders it largely inaccessible to individuals with visual impairments.
In this ... | 2024-01-18T18:59:09Z | Accepted at CVPR'24 | null | null | null | null | null | null | null | null | null |
2401.10225 | ChatQA: Surpassing GPT-4 on Conversational QA and RAG | ['Zihan Liu', 'Wei Ping', 'Rajarshi Roy', 'Peng Xu', 'Chankyu Lee', 'Mohammad Shoeybi', 'Bryan Catanzaro'] | ['cs.CL', 'cs.AI', 'cs.IR', 'cs.LG'] | In this work, we introduce ChatQA, a suite of models that outperform GPT-4 on
retrieval-augmented generation (RAG) and conversational question answering
(QA). To enhance generation, we propose a two-stage instruction tuning method
that significantly boosts the performance of RAG. For effective retrieval, we
introduce a... | 2024-01-18T18:59:11Z | Accepted at NeurIPS 2024 | null | null | null | null | null | null | null | null | null |
2401.10226 | Towards Language-Driven Video Inpainting via Multimodal Large Language
Models | ['Jianzong Wu', 'Xiangtai Li', 'Chenyang Si', 'Shangchen Zhou', 'Jingkang Yang', 'Jiangning Zhang', 'Yining Li', 'Kai Chen', 'Yunhai Tong', 'Ziwei Liu', 'Chen Change Loy'] | ['cs.CV'] | We introduce a new task -- language-driven video inpainting, which uses
natural language instructions to guide the inpainting process. This approach
overcomes the limitations of traditional video inpainting methods that depend
on manually labeled binary masks, a process often tedious and labor-intensive.
We present the... | 2024-01-18T18:59:13Z | CVPR-2024. Project Page: https://jianzongwu.github.io/projects/rovi | null | null | Towards Language-Driven Video Inpainting via Multimodal Large Language Models | ['Jianzong Wu', 'Xiangtai Li', 'Chenyang Si', 'Shangchen Zhou', 'Jingkang Yang', 'Jiangning Zhang', 'Yining Li', 'Kai Chen', 'Yunhai Tong', 'Ziwei Liu', 'Chen Change Loy'] | 2024 | Computer Vision and Pattern Recognition | 18 | 72 | ['Computer Science'] |
2401.10407 | Learning High-Quality and General-Purpose Phrase Representations | ['Lihu Chen', 'Gaël Varoquaux', 'Fabian M. Suchanek'] | ['cs.CL'] | Phrase representations play an important role in data science and natural
language processing, benefiting various tasks like Entity Alignment, Record
Linkage, Fuzzy Joins, and Paraphrase Classification. The current
state-of-the-art method involves fine-tuning pre-trained language models for
phrasal embeddings using con... | 2024-01-18T22:32:31Z | Findings of EACL 2024 | null | null | Learning High-Quality and General-Purpose Phrase Representations | ['Lihu Chen', 'G. Varoquaux', 'Fabian M. Suchanek'] | 2024 | Findings | 3 | 56 | ['Computer Science'] |
2401.10460 | Ultra-lightweight Neural Differential DSP Vocoder For High Quality
Speech Synthesis | ['Prabhav Agrawal', 'Thilo Koehler', 'Zhiping Xiu', 'Prashant Serai', 'Qing He'] | ['cs.SD', 'cs.LG', 'eess.AS'] | Neural vocoders model the raw audio waveform and synthesize high-quality
audio, but even the highly efficient ones, like MB-MelGAN and LPCNet, fail to
run real-time on a low-end device like a smartglass. A pure digital signal
processing (DSP) based vocoder can be implemented via lightweight fast Fourier
transforms (FFT... | 2024-01-19T02:51:00Z | Accepted for ICASSP 2024 | null | null | null | null | null | null | null | null | null |
2401.10491 | Knowledge Fusion of Large Language Models | ['Fanqi Wan', 'Xinting Huang', 'Deng Cai', 'Xiaojun Quan', 'Wei Bi', 'Shuming Shi'] | ['cs.CL'] | While training large language models (LLMs) from scratch can generate models
with distinct functionalities and strengths, it comes at significant costs and
may result in redundant capabilities. Alternatively, a cost-effective and
compelling approach is to merge existing pre-trained LLMs into a more potent
model. Howeve... | 2024-01-19T05:02:46Z | Accepted to ICLR 2024 | null | null | Knowledge Fusion of Large Language Models | ['Fanqi Wan', 'Xinting Huang', 'Deng Cai', 'Xiaojun Quan', 'Wei Bi', 'Shuming Shi'] | 2024 | International Conference on Learning Representations | 73 | 62 | ['Computer Science'] |
2401.10580 | PHOENIX: Open-Source Language Adaption for Direct Preference
Optimization | ['Matthias Uhlig', 'Sigurd Schacht', 'Sudarshan Kamath Barkur'] | ['cs.CL'] | Large language models have gained immense importance in recent years and have
demonstrated outstanding results in solving various tasks. However, despite
these achievements, many questions remain unanswered in the context of large
language models. Besides the optimal use of the models for inference and the
alignment of... | 2024-01-19T09:46:08Z | null | null | null | PHOENIX: Open-Source Language Adaption for Direct Preference Optimization | ['Matthias Uhlig', 'Sigurd Schacht', 'Sudarshan Kamath Barkur'] | 2024 | arXiv.org | 1 | 36 | ['Computer Science'] |
2401.10695 | LangBridge: Multilingual Reasoning Without Multilingual Supervision | ['Dongkeun Yoon', 'Joel Jang', 'Sungdong Kim', 'Seungone Kim', 'Sheikh Shafayat', 'Minjoon Seo'] | ['cs.CL'] | We introduce LangBridge, a zero-shot approach to adapt language models for
multilingual reasoning tasks without multilingual supervision. LangBridge
operates by bridging two models, each specialized in different aspects: (1) one
specialized in understanding multiple languages (e.g., mT5 encoder) and (2) one
specialized... | 2024-01-19T14:00:19Z | ACL 2024 Main | null | null | null | null | null | null | null | null | null |
2401.10815 | Exploring scalable medical image encoders beyond text supervision | ['Fernando Pérez-García', 'Harshita Sharma', 'Sam Bond-Taylor', 'Kenza Bouzid', 'Valentina Salvatelli', 'Maximilian Ilse', 'Shruthi Bannur', 'Daniel C. Castro', 'Anton Schwaighofer', 'Matthew P. Lungren', 'Maria Teodora Wetscherek', 'Noel Codella', 'Stephanie L. Hyland', 'Javier Alvarez-Valle', 'Ozan Oktay'] | ['cs.CV'] | Language-supervised pre-training has proven to be a valuable method for
extracting semantically meaningful features from images, serving as a
foundational element in multimodal systems within the computer vision and
medical imaging domains. However, the computed features are limited by the
information contained in the ... | 2024-01-19T17:02:17Z | null | Nature Machine Intelligence (2025) | 10.1038/s42256-024-00965-w | Exploring scalable medical image encoders beyond text supervision | ['Fernando Pérez-García', 'Harshita Sharma', 'Sam Bond-Taylor', 'Kenza Bouzid', 'Valentina Salvatelli', 'Maximilian Ilse', 'Shruthi Bannur', 'Daniel C. Castro', 'Anton Schwaighofer', 'M. Lungren', 'M. Wetscherek', 'Noel Codella', 'Stephanie L. Hyland', 'Javier Alvarez-Valle', 'O. Oktay'] | 2024 | Nat. Mac. Intell. | 35 | 34 | ['Computer Science'] |
2401.10891 | Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data | ['Lihe Yang', 'Bingyi Kang', 'Zilong Huang', 'Xiaogang Xu', 'Jiashi Feng', 'Hengshuang Zhao'] | ['cs.CV'] | This work presents Depth Anything, a highly practical solution for robust
monocular depth estimation. Without pursuing novel technical modules, we aim to
build a simple yet powerful foundation model dealing with any images under any
circumstances. To this end, we scale up the dataset by designing a data engine
to colle... | 2024-01-19T18:59:52Z | Accepted by CVPR 2024. Project page: https://depth-anything.github.io | null | null | null | null | null | null | null | null | null |
2401.11067 | Make-A-Shape: a Ten-Million-scale 3D Shape Model | ['Ka-Hei Hui', 'Aditya Sanghi', 'Arianna Rampini', 'Kamal Rahimi Malekshan', 'Zhengzhe Liu', 'Hooman Shayani', 'Chi-Wing Fu'] | ['cs.CV', 'cs.GR'] | Significant progress has been made in training large generative models for
natural language and images. Yet, the advancement of 3D generative models is
hindered by their substantial resource demands for training, along with
inefficient, non-compact, and less expressive representations. This paper
introduces Make-A-Shap... | 2024-01-20T00:21:58Z | null | null | null | Make-A-Shape: a Ten-Million-scale 3D Shape Model | ['Ka-Hei Hui', 'Aditya Sanghi', 'Arianna Rampini', 'Kamal Rahimi Malekshan', 'Zhengzhe Liu', 'Hooman Shayani', 'Chi-Wing Fu'] | 2024 | International Conference on Machine Learning | 18 | 113 | ['Computer Science'] |
2401.11248 | Drop your Decoder: Pre-training with Bag-of-Word Prediction for Dense
Passage Retrieval | ['Guangyuan Ma', 'Xing Wu', 'Zijia Lin', 'Songlin Hu'] | ['cs.IR', 'cs.CL'] | Masked auto-encoder pre-training has emerged as a prevalent technique for
initializing and enhancing dense retrieval systems. It generally utilizes
additional Transformer decoder blocks to provide sustainable supervision
signals and compress contextual information into dense representations.
However, the underlying rea... | 2024-01-20T15:02:33Z | Accepted by SIGIR24. Our code is available at
https://github.com/ma787639046/bowdpr | null | null | Drop your Decoder: Pre-training with Bag-of-Word Prediction for Dense Passage Retrieval. | ['Guangyuan Ma', 'Xing Wu', 'Zijia Lin', 'Songlin Hu'] | 2024 | Annual International ACM SIGIR Conference on Research and Development in Information Retrieval | 4 | 40 | ['Computer Science'] |
2401.11374 | Language Models as Hierarchy Encoders | ['Yuan He', 'Zhangdie Yuan', 'Jiaoyan Chen', 'Ian Horrocks'] | ['cs.CL', 'cs.AI', 'cs.LG'] | Interpreting hierarchical structures latent in language is a key limitation
of current language models (LMs). While previous research has implicitly
leveraged these hierarchies to enhance LMs, approaches for their explicit
encoding are yet to be explored. To address this, we introduce a novel approach
to re-train trans... | 2024-01-21T02:29:12Z | Accept at NeurIPS 2024 | null | null | null | null | null | null | null | null | null |
2,401.11708 | Mastering Text-to-Image Diffusion: Recaptioning, Planning, and
Generating with Multimodal LLMs | ['Ling Yang', 'Zhaochen Yu', 'Chenlin Meng', 'Minkai Xu', 'Stefano Ermon', 'Bin Cui'] | ['cs.CV', 'cs.AI', 'cs.LG'] | Diffusion models have exhibited exceptional performance in text-to-image
generation and editing. However, existing methods often face challenges when
handling complex text prompts that involve multiple objects with multiple
attributes and relationships. In this paper, we propose a brand new
training-free text-to-image ge... | 2024-01-22T06:16:29Z | ICML 2024. Project:
https://github.com/YangLing0818/RPG-DiffusionMaster | null | null | null | null | null | null | null | null | null |
2,401.11835 | Unveiling the Human-like Similarities of Automatic Facial Expression
Recognition: An Empirical Exploration through Explainable AI | ['F. Xavier Gaya-Morey', 'Silvia Ramis-Guarinos', 'Cristina Manresa-Yee', 'Jose M. Buades-Rubio'] | ['cs.CV'] | Facial expression recognition is vital for human behavior analysis, and deep
learning has enabled models that can outperform humans. However, it is unclear
how closely they mimic human processing. This study aims to explore the
similarity between deep neural networks and human perception by comparing
twelve different n... | 2024-01-22T10:52:02Z | Multimed Tools Appl (2024) | null | 10.1007/s11042-024-20090-5 | Unveiling the Human-like Similarities of Automatic Facial Expression Recognition: An Empirical Exploration through Explainable AI | ['F. X. Gaya-Morey', 'S. Ramis-Guarinos', 'Cristina Manresa-Yee', 'Jose Maria Buades Rubio'] | 2,024 | Multim. Tools Appl. | 3 | 96 | ['Computer Science'] |
2,401.11864 | Distilling Mathematical Reasoning Capabilities into Small Language
Models | ['Xunyu Zhu', 'Jian Li', 'Yong Liu', 'Can Ma', 'Weiping Wang'] | ['cs.CL', 'cs.AI'] | This work addresses the challenge of democratizing advanced Large Language
Models (LLMs) by compressing their mathematical reasoning capabilities into
sub-billion parameter Small Language Models (SLMs) without compromising
performance. We introduce Equation-of-Thought Distillation (EoTD), a novel
technique that encapsu... | 2024-01-22T11:37:18Z | Accepted for publication in Neural Networks | null | null | null | null | null | null | null | null | null |
2,401.11944 | CMMMU: A Chinese Massive Multi-discipline Multimodal Understanding
Benchmark | ['Ge Zhang', 'Xinrun Du', 'Bei Chen', 'Yiming Liang', 'Tongxu Luo', 'Tianyu Zheng', 'Kang Zhu', 'Yuyang Cheng', 'Chunpu Xu', 'Shuyue Guo', 'Haoran Zhang', 'Xingwei Qu', 'Junjie Wang', 'Ruibin Yuan', 'Yizhi Li', 'Zekun Wang', 'Yudong Liu', 'Yu-Hsuan Tsai', 'Fengji Zhang', 'Chenghua Lin', 'Wenhao Huang', 'Jie Fu'] | ['cs.CL', 'cs.AI', 'cs.CV'] | As the capabilities of large multimodal models (LMMs) continue to advance,
evaluating the performance of LMMs emerges as an increasing need. Additionally,
there is an even larger gap in evaluating the advanced knowledge and reasoning
abilities of LMMs in non-English contexts such as Chinese. We introduce CMMMU,
a new C... | 2024-01-22T13:34:34Z | null | null | null | null | null | null | null | null | null | null |
2,401.12168 | SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning
Capabilities | ['Boyuan Chen', 'Zhuo Xu', 'Sean Kirmani', 'Brian Ichter', 'Danny Driess', 'Pete Florence', 'Dorsa Sadigh', 'Leonidas Guibas', 'Fei Xia'] | ['cs.CV', 'cs.CL', 'cs.LG', 'cs.RO'] | Understanding and reasoning about spatial relationships is a fundamental
capability for Visual Question Answering (VQA) and robotics. While Vision
Language Models (VLM) have demonstrated remarkable performance in certain VQA
benchmarks, they still lack capabilities in 3D spatial reasoning, such as
recognizing quantitat... | 2024-01-22T18:01:01Z | null | null | null | SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities | ['Boyuan Chen', 'Zhuo Xu', 'Sean Kirmani', 'Brian Ichter', 'Danny Driess', 'Pete Florence', 'Dorsa Sadigh', 'Leonidas J. Guibas', 'Fei Xia'] | 2,024 | Computer Vision and Pattern Recognition | 270 | 72 | ['Computer Science'] |
2,401.12181 | Universal Neurons in GPT2 Language Models | ['Wes Gurnee', 'Theo Horsley', 'Zifan Carl Guo', 'Tara Rezaei Kheirkhah', 'Qinyi Sun', 'Will Hathaway', 'Neel Nanda', 'Dimitris Bertsimas'] | ['cs.LG', 'cs.AI', 'cs.CL'] | A basic question within the emerging field of mechanistic interpretability is
the degree to which neural networks learn the same underlying mechanisms. In
other words, are neural mechanisms universal across different models? In this
work, we study the universality of individual neurons across GPT2 models
trained from d... | 2024-01-22T18:11:01Z | null | null | null | Universal Neurons in GPT2 Language Models | ['Wes Gurnee', 'Theo Horsley', 'Zifan Carl Guo', 'Tara Rezaei Kheirkhah', 'Qinyi Sun', 'Will Hathaway', 'Neel Nanda', 'Dimitris Bertsimas'] | 2,024 | Trans. Mach. Learn. Res. | 47 | 103 | ['Computer Science'] |
2,401.12208 | A Vision-Language Foundation Model to Enhance Efficiency of Chest X-ray
Interpretation | ['Zhihong Chen', 'Maya Varma', 'Justin Xu', 'Magdalini Paschali', 'Dave Van Veen', 'Andrew Johnston', 'Alaa Youssef', 'Louis Blankemeier', 'Christian Bluethgen', 'Stephan Altmayer', 'Jeya Maria Jose Valanarasu', 'Mohamed Siddig Eltayeb Muneer', 'Eduardo Pontes Reis', 'Joseph Paul Cohen', 'Cameron Olsen', 'Tanishq Mathe... | ['cs.CV', 'cs.CL'] | Over 1.4 billion chest X-rays (CXRs) are performed annually due to their
cost-effectiveness as an initial diagnostic test. This scale of radiological
studies provides a significant opportunity to streamline CXR interpretation and
documentation. While foundation models are a promising solution, the lack of
publicly avai... | 2024-01-22T18:51:07Z | 26 pages, 8 figures | null | null | null | null | null | null | null | null | null |
2,401.12246 | Orion-14B: Open-source Multilingual Large Language Models | ['Du Chen', 'Yi Huang', 'Xiaopu Li', 'Yongqiang Li', 'Yongqiang Liu', 'Haihui Pan', 'Leichao Xu', 'Dacheng Zhang', 'Zhipeng Zhang', 'Kun Han'] | ['cs.CL', 'cs.LG'] | In this study, we introduce Orion-14B, a collection of multilingual large
language models with 14 billion parameters. We utilize a data scheduling
approach to train a foundational model on a diverse corpus of 2.5 trillion
tokens, sourced from texts in English, Chinese, Japanese, Korean, and other
languages. Additionall... | 2024-01-20T12:29:27Z | Authors are alphabetically listed by last names, except the
corresponding author who is listed last | null | null | null | null | null | null | null | null | null |
2,401.12292 | GRATH: Gradual Self-Truthifying for Large Language Models | ['Weixin Chen', 'Dawn Song', 'Bo Li'] | ['cs.CL', 'cs.AI'] | Truthfulness is paramount for large language models (LLMs) as they are
increasingly deployed in real-world applications. However, existing LLMs still
struggle with generating truthful content, as evidenced by their modest
performance on benchmarks like TruthfulQA. To address this issue, we propose
GRAdual self-truTHify... | 2024-01-22T19:00:08Z | null | null | null | GRATH: Gradual Self-Truthifying for Large Language Models | ['Weixin Chen', 'D. Song', 'Bo Li'] | 2,024 | International Conference on Machine Learning | 6 | 46 | ['Computer Science'] |
2,401.12345 | Distributionally Robust Receive Combining | ['Shixiong Wang', 'Wei Dai', 'Geoffrey Ye Li'] | ['eess.SP'] | This article investigates signal estimation in wireless transmission (i.e.,
receive combining) from the perspective of statistical machine learning, where
the transmit signals may be from an integrated sensing and communication
system; that is, 1) signals may be not only discrete constellation points but
also arbitrary... | 2024-01-22T20:20:48Z | null | IEEE Transactions on Signal Processing, June 2025 | 10.1109/TSP.2025.3582082 | null | null | null | null | null | null | null |
2,401.12503 | Small Language Model Meets with Reinforced Vision Vocabulary | ['Haoran Wei', 'Lingyu Kong', 'Jinyue Chen', 'Liang Zhao', 'Zheng Ge', 'En Yu', 'Jianjian Sun', 'Chunrui Han', 'Xiangyu Zhang'] | ['cs.CV'] | Playing Large Vision Language Models (LVLMs) in 2023 is trendy among the AI
community. However, the relatively large number of parameters (more than 7B) of
popular LVLMs makes it difficult to train and deploy on consumer GPUs,
discouraging many researchers with limited resources. Imagine how cool it would
be to experie... | 2024-01-23T05:55:26Z | null | null | null | Small Language Model Meets with Reinforced Vision Vocabulary | ['Haoran Wei', 'Lingyu Kong', 'Jinyue Chen', 'Liang Zhao', 'Zheng Ge', 'En Yu', 'Jian‐Yuan Sun', 'Chunrui Han', 'Xiangyu Zhang'] | 2,024 | arXiv.org | 41 | 58 | ['Computer Science'] |
2,401.13147 | Deep Spatiotemporal Clutter Filtering of Transthoracic Echocardiographic
Images: Leveraging Contextual Attention and Residual Learning | ['Mahdi Tabassian', 'Somayeh Akbari', 'Sandro Queirós', "Jan D'hooge"] | ['eess.IV', 'cs.CV'] | This study presents a deep convolutional autoencoder network for filtering
reverberation clutter from transthoracic echocardiographic (TTE) image
sequences. Given the spatiotemporal nature of this type of clutter, the
filtering network employs 3D convolutional layers to suppress it throughout the
cardiac cycle. The des... | 2024-01-23T23:50:04Z | 19 pages, 14 figures | null | null | Deep Spatiotemporal Clutter Filtering of Transthoracic Echocardiographic Images: Leveraging Contextual Attention and Residual Learning | ['Mahdi Tabassian', 'Somayeh Akbari', "Sandro Queir'os", 'Jan D’hooge'] | 2,024 | null | 0 | 0 | ['Engineering', 'Computer Science'] |
2,401.13223 | TAT-LLM: A Specialized Language Model for Discrete Reasoning over
Tabular and Textual Data | ['Fengbin Zhu', 'Ziyang Liu', 'Fuli Feng', 'Chao Wang', 'Moxin Li', 'Tat-Seng Chua'] | ['cs.CL', 'cs.AI'] | In this work, we address question answering (QA) over a hybrid of tabular and
textual data that are very common content on the Web (e.g. SEC filings), where
discrete reasoning capabilities are often required. Recently, large language
models (LLMs) like GPT-4 have demonstrated strong multi-step reasoning
capabilities. W... | 2024-01-24T04:28:50Z | Accepted by ICAIF 24 | null | null | null | null | null | null | null | null | null |
2,401.13303 | MaLA-500: Massive Language Adaptation of Large Language Models | ['Peiqin Lin', 'Shaoxiong Ji', 'Jörg Tiedemann', 'André F. T. Martins', 'Hinrich Schütze'] | ['cs.CL'] | Large language models (LLMs) have advanced the state of the art in natural
language processing. However, their predominant design for English or a limited
set of languages creates a substantial gap in their effectiveness for
low-resource languages. To bridge this gap, we introduce MaLA-500, a novel
large language model... | 2024-01-24T08:57:39Z | null | null | null | MaLA-500: Massive Language Adaptation of Large Language Models | ['Peiqin Lin', 'Shaoxiong Ji', 'Jörg Tiedemann', 'André F. T. Martins', 'Hinrich Schütze'] | 2,024 | arXiv.org | 18 | 56 | ['Computer Science'] |
2,401.13511 | Tissue Cross-Section and Pen Marking Segmentation in Whole Slide Images | ['Ruben T. Lucassen', 'Willeke A. M. Blokx', 'Mitko Veta'] | ['eess.IV', 'cs.CV', 'cs.LG'] | Tissue segmentation is a routine preprocessing step to reduce the
computational cost of whole slide image (WSI) analysis by excluding background
regions. Traditional image processing techniques are commonly used for tissue
segmentation, but often require manual adjustments to parameter values for
atypical cases, fail t... | 2024-01-24T15:09:12Z | 6 pages, 3 figures | null | null | Tissue cross-section and pen marking segmentation in whole slide images | ['Ruben T. Lucassen', 'W. Blokx', 'M. Veta'] | 2,024 | Medical Imaging | 4 | 13 | ['Computer Science', 'Engineering'] |
2,401.1366 | MambaByte: Token-free Selective State Space Model | ['Junxiong Wang', 'Tushaar Gangavarapu', 'Jing Nathan Yan', 'Alexander M. Rush'] | ['cs.CL', 'cs.LG'] | Token-free language models learn directly from raw bytes and remove the
inductive bias of subword tokenization. Operating on bytes, however, results in
significantly longer sequences. In this setting, standard autoregressive
Transformers scale poorly as the effective memory required grows with sequence
length. The rece... | 2024-01-24T18:53:53Z | Published at COLM 2024 | null | null | MambaByte: Token-free Selective State Space Model | ['Junxiong Wang', 'Tushaar Gangavarapu', 'Jing Nathan Yan', 'Alexander M. Rush'] | 2,024 | arXiv.org | 41 | 67 | ['Computer Science'] |
2,401.13919 | WebVoyager: Building an End-to-End Web Agent with Large Multimodal
Models | ['Hongliang He', 'Wenlin Yao', 'Kaixin Ma', 'Wenhao Yu', 'Yong Dai', 'Hongming Zhang', 'Zhenzhong Lan', 'Dong Yu'] | ['cs.CL', 'cs.AI'] | The rapid advancement of large language models (LLMs) has led to a new era
marked by the development of autonomous applications in real-world scenarios,
which drives innovation in creating advanced web agents. Existing web agents
typically only handle one input modality and are evaluated only in simplified
web simulato... | 2024-01-25T03:33:18Z | Accepted to ACL 2024 (main). Code and data is released at
https://github.com/MinorJerry/WebVoyager | null | null | WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models | ['Hongliang He', 'Wenlin Yao', 'Kaixin Ma', 'Wenhao Yu', 'Yong Dai', 'Hongming Zhang', 'Zhenzhong Lan', 'Dong Yu'] | 2,024 | Annual Meeting of the Association for Computational Linguistics | 151 | 47 | ['Computer Science'] |
2,401.13979 | Routoo: Learning to Route to Large Language Models Effectively | ['Alireza Mohammadshahi', 'Arshad Rafiq Shaikh', 'Majid Yazdani'] | ['cs.CL', 'cs.AI', 'cs.LG'] | LLMs with superior response quality--particularly larger or closed-source
models--often come with higher inference costs, making their deployment
inefficient and costly. Meanwhile, developing foundational LLMs from scratch is
becoming increasingly resource-intensive and impractical for many applications.
To address the... | 2024-01-25T06:45:32Z | null | null | null | null | null | null | null | null | null | null |
2,401.14196 | DeepSeek-Coder: When the Large Language Model Meets Programming -- The
Rise of Code Intelligence | ['Daya Guo', 'Qihao Zhu', 'Dejian Yang', 'Zhenda Xie', 'Kai Dong', 'Wentao Zhang', 'Guanting Chen', 'Xiao Bi', 'Y. Wu', 'Y. K. Li', 'Fuli Luo', 'Yingfei Xiong', 'Wenfeng Liang'] | ['cs.SE', 'cs.CL', 'cs.LG'] | The rapid development of large language models has revolutionized code
intelligence in software development. However, the predominance of
closed-source models has restricted extensive research and development. To
address this, we introduce the DeepSeek-Coder series, a range of open-source
code models with sizes from 1.... | 2024-01-25T14:17:53Z | null | null | null | null | null | null | null | null | null | null |